Synthonex
A local operating system for personal AI agents

Synthonex is the renamed Nexbrain/Neurasite stack: three coordinated codebases that turn a local machine into an agent workspace. The desktop app is the cockpit; the gateway is the single entry point for sessions, channels, memory, scheduling and automation; and the code agent is the execution runtime that owns tools, model providers and sub-agents.
What it does
Desktop cockpit
A Tauri and React app for chat, voice, editor panels, gateway settings, session navigation, filesystem graphing and runtime inspection.
Gateway control plane
A TypeScript service that owns WebSocket sessions, auth, channel routing, cron, heartbeat loops, hooks, TTS, media understanding and persisted transcripts.
Persistent memory substrate
User and project memory live under ~/.neurasite, with brain.db powering transcript recall, typed user facts, review flows and future skill indexing.
Code-agent runtime
A Bun workspace for the agent loop, tool registry, sub-agent manager, provider abstraction, session storage and CLI/server surfaces.
Local and cloud model routing
OpenAI-compatible local providers such as Ollama or LM Studio can sit beside cloud models, with provider switching and model discovery built into the agent.
Autonomous run orchestration
The gateway can claim tracked work, prepare isolated git worktrees, launch worker sessions, supervise retries and publish runtime graph updates back to the cockpit.
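The local-and-cloud routing above can be sketched as a small provider registry. Everything here is illustrative rather than Synthonex's actual interface; the only real details are the default OpenAI-compatible endpoints that Ollama (port 11434) and LM Studio (port 1234) expose.

```typescript
// Hypothetical provider registry sketch. Only the default base URLs for
// Ollama and LM Studio reflect real defaults; all type and function names
// are assumptions, not the agent's actual API.
type Provider = {
  name: string;
  baseURL: string; // OpenAI-compatible /v1 endpoint
  local: boolean;  // local providers typically need no API key
};

const providers: Record<string, Provider> = {
  ollama:   { name: "ollama",   baseURL: "http://localhost:11434/v1",  local: true },
  lmstudio: { name: "lmstudio", baseURL: "http://localhost:1234/v1",   local: true },
  openai:   { name: "openai",   baseURL: "https://api.openai.com/v1",  local: false },
};

// Resolve ids like "ollama/llama3" or "openai/gpt-4o" into a provider + model.
function resolveModel(id: string): { provider: Provider; model: string } {
  const [providerName, ...rest] = id.split("/");
  const provider = providers[providerName];
  if (!provider) throw new Error(`unknown provider: ${providerName}`);
  return { provider, model: rest.join("/") };
}
```

Because local servers speak the same wire protocol as the cloud APIs, switching providers reduces to switching the base URL, which is what makes side-by-side model discovery cheap.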
Why three pieces
Synthonex separates the product by responsibility. The desktop app is the human-facing cockpit. It should render sessions, files, tools, voice, settings and runtime graphs, but it should not be the source of truth for agent state.
The gateway is the control plane. Every client comes through it: desktop, CLI, Telegram, Discord, Slack, WhatsApp, Messenger, cron and heartbeat. It owns sessions, auth, memory loading, channel routing, scheduling, workspaces and run state. The code agent stays focused on execution: model calls, tools, sub-agents, compaction and filesystem work.
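One way to picture the control-plane role is a single inbound router that normalizes every channel into one shape, so sessions, auth and memory loading happen in exactly one place. The types and session policy below are hypothetical sketches, not the gateway's real interfaces.

```typescript
// Hypothetical sketch: each channel adapter normalizes its messages into
// one inbound shape before the gateway routes them to a session.
type Channel = "desktop" | "cli" | "telegram" | "discord" | "slack"
             | "whatsapp" | "messenger" | "cron" | "heartbeat";

interface Inbound {
  channel: Channel;
  userId: string;
  text: string;
}

// Illustrative policy only: one session per (user, channel) pair.
function sessionKey(msg: Inbound): string {
  return `${msg.userId}:${msg.channel}`;
}

// Append the message to its session transcript, creating the session lazily.
function routeInbound(sessions: Map<string, string[]>, msg: Inbound): string {
  const key = sessionKey(msg);
  if (!sessions.has(key)) sessions.set(key, []);
  sessions.get(key)!.push(msg.text);
  return key;
}
```

The point of the sketch is the funnel shape: whether a turn arrives from Telegram or a cron tick, it reaches the code agent through the same session machinery.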
Memory as a local filesystem
The core bet is that personal AI needs durable local context, not only a prompt window. Synthonex stores the user's world under ~/.neurasite: USER.md, MEMORY.md, daily logs, project context, session transcripts and brain.db indexes.
The gateway already has the typed user-model path in place: facts are extracted from turns, stored in per-user SQLite tables, reconciled over time, and reviewed in the desktop cockpit before they replace the legacy USER.md prompt context. Transcript recall, skill curation and the future brain inspector use the same per-user database instead of introducing separate services.
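The extract, store, reconcile, review lifecycle described above could look roughly like the following. The record shape and the reconciliation rule are assumptions for illustration; the actual brain.db schema is not shown in this document.

```typescript
// Hypothetical typed-fact record; field names are illustrative only.
interface UserFact {
  key: string;        // e.g. "timezone", "preferred_editor"
  value: string;
  observedAt: number; // unix ms of the turn the fact was extracted from
  reviewed: boolean;  // approved in the desktop cockpit's review flow
}

// Illustrative reconciliation: keep the newest value per key, and
// drop reviewed status when the value actually changed, so changed
// facts go back through cockpit review before entering prompt context.
function reconcile(existing: UserFact[], incoming: UserFact[]): UserFact[] {
  const byKey = new Map<string, UserFact>();
  for (const f of existing) byKey.set(f.key, f);
  for (const f of incoming) {
    const prev = byKey.get(f.key);
    if (!prev || f.observedAt > prev.observedAt) {
      byKey.set(f.key, {
        ...f,
        reviewed: prev?.value === f.value ? prev.reviewed : false,
      });
    }
  }
  return [...byKey.values()];
}
```

A rule of this shape is what lets typed facts safely replace a free-text USER.md: nothing reaches prompt context until a human has reviewed the current value.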
From chat to autonomous work
Interactive chat stays direct: the user talks through the desktop app or a channel, the gateway loads context, and the code agent streams back events. Autonomous work sits beside that path rather than replacing it.
The orchestration layer is gateway-owned. It can watch task sources such as GitHub, claim eligible work, create an isolated git worktree, launch a worker session, monitor terminal states, retry or block when needed, and publish a runtime graph back to Synthonex. Worker sessions still use the same code-agent tools and sub-agent system, so autonomous work and interactive work share one execution core.
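The claim, worktree, worker, retry loop can be sketched as a small state machine. The states, retry budget and function names below are illustrative assumptions about how such a supervisor could behave, not the gateway's actual orchestration code.

```typescript
// Hypothetical run lifecycle for gateway-owned orchestration.
// "prepared" corresponds to an isolated checkout, e.g. via the real
// command `git worktree add <path> -b <branch>`.
type RunState = "claimed" | "prepared" | "running" | "succeeded" | "blocked";

interface Run {
  id: string;
  state: RunState;
  attempts: number;
}

const MAX_ATTEMPTS = 3; // illustrative retry budget

// Decide what the supervisor does when a worker session terminates.
function onWorkerExit(run: Run, ok: boolean): Run {
  if (ok) return { ...run, state: "succeeded" };
  const attempts = run.attempts + 1;
  return attempts < MAX_ATTEMPTS
    ? { ...run, attempts, state: "prepared" } // retry: relaunch from the worktree
    : { ...run, attempts, state: "blocked" }; // give up; surface in the runtime graph
}
```

Each transition is what the gateway would publish back to the cockpit as a runtime graph update, which is how interactive users can watch autonomous work without joining the worker session.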
Built with
Desktop
Gateway
Agent runtime
Interfaces
Where it stands
Work in progress. The desktop cockpit, gateway control plane and code-agent workspace exist as separate repos under the renamed Synthonex stack. Core paths are implemented for WebSocket sessions, memory files, brain.db, user-model capture, heartbeat, channel adapters, local LLM support and the first version of autonomous run orchestration. Current work is consolidating the renamed product surface, hardening the runtime graph, and moving user-model and skill-curation features from shadow mode into daily use.