v0.9 · pre-release · Open source

Shared memory
for agent teams

Agents on different frameworks can't share what they've learned. SynapseNet is a self-hosted memory server. Any agent, any framework, any language can store, query, and subscribe to shared context.

36.9%
of multi-agent failures from misalignment
72–86%
token duplication across frameworks
4
core operations · store, query, forget, subscribe
The Problem

Agents are working in silos

Multi-agent systems break down because agents can't share what they know. Every team rebuilds the same context from scratch.

36.9%

Failure from misalignment

Over a third of multi-agent system failures stem directly from inter-agent misalignment: agents working from different, contradictory, or stale context about the same environment.

72–86%

Token duplication rate

Agents on different frameworks duplicate 72–86% of their context tokens because there's no shared memory layer. Each agent rebuilds what another already knows. They burn compute and drift apart.

Agent A · LangChain      ✕ no shared memory
Agent B · CrewAI         ✕ context lost
Agent C · AutoGen        ✕ duplicate work
Agent D · Custom

Agents on different frameworks have no shared memory layer. Each operates in isolation, even when working on the same task.

How It Works

Four operations. One shared layer.

SynapseNet is a self-hosted memory server with a simple HTTP API. Any agent, any framework, any language. If it can make HTTP calls, it can read and write to the shared memory layer.

POST /memory/store

Store

Persist a memory item. Content is SHA-256 hashed: same content, same ID. Idempotent and deduplicating by design. Accepts optional tags and TTL.

JSON
// Request
{
  "agent_id": "orchestrator",
  "content":  "Inbox processed at 09:14 UTC",
  "tags":     ["email", "ops"],
  "ttl":      3600
}

// Response
{ "status": "stored", "id": "a1b2c3..." }
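The content addressing can be sketched in a few lines. This is an illustration only; the exact hash input the server uses (raw content bytes versus some canonical form) is an assumption:

```python
import hashlib

def memory_id(content: str) -> str:
    # Sketch: hash the content bytes so identical content always
    # maps to the same ID. Re-storing the same content is a no-op.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

a = memory_id("Inbox processed at 09:14 UTC")
b = memory_id("Inbox processed at 09:14 UTC")
assert a == b  # same content, same ID: idempotent by construction
```

Deduplication falls out for free: two agents storing the same observation produce one memory, not two.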
POST /memory/query

Query

Find memories by agent, tags, or keyword. Filters combine with AND; tags use OR matching. All parameters are optional. Omit all to fetch everything.

JSON
// Request — all filters optional
{
  "agent_id": "orchestrator",
  "tags":     ["email"],
  "keyword":  "inbox"
}

// Response
{
  "results": [{
    "id":        "a1b2c3...",
    "agent_id":  "orchestrator",
    "content":   "Inbox processed...",
    "timestamp": "2026-03-18T09:14:00Z"
  }]
}
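The filter semantics can be sketched as a plain predicate. This is a minimal illustration of the AND-across-fields, OR-within-tags rule, not the server's actual code:

```python
def matches(memory: dict, agent_id=None, tags=None, keyword=None) -> bool:
    # Filters combine with AND; an omitted filter matches everything.
    if agent_id is not None and memory["agent_id"] != agent_id:
        return False
    # Tags use OR matching: any overlap is a hit.
    if tags is not None and not set(tags) & set(memory.get("tags", [])):
        return False
    if keyword is not None and keyword.lower() not in memory["content"].lower():
        return False
    return True

m = {"agent_id": "orchestrator", "content": "Inbox processed", "tags": ["email", "ops"]}
assert matches(m, tags=["email", "deploy"])  # OR within tags
assert not matches(m, agent_id="builder")    # AND across fields
assert matches(m)                            # no filters: fetch everything
```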
DELETE /memory/forget

Forget

Remove a specific memory by ID. Returns 404 if not found. Critical for privacy, compliance, and keeping the shared context clean as tasks complete.

JSON
// Request
{
  "id": "a1b2c3..."
}

// Response
{
  "status": "forgotten",
  "id":     "a1b2c3..."
}

// 404 if memory not found
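The store/forget lifecycle can be modeled with a dict keyed by content hash. A toy model, assuming the server's internals work roughly this way:

```python
import hashlib

class ToyStore:
    """Toy model of store/forget semantics: content-addressed IDs,
    idempotent stores, 404-style misses on unknown IDs."""
    def __init__(self):
        self.items = {}

    def store(self, content: str) -> str:
        mid = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self.items[mid] = content  # re-storing is a no-op
        return mid

    def forget(self, mid: str) -> bool:
        # False models the server's 404 for an unknown ID.
        return self.items.pop(mid, None) is not None

s = ToyStore()
mid = s.store("Inbox processed at 09:14 UTC")
assert s.forget(mid)      # removed
assert not s.forget(mid)  # second forget: not found (404)
```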
GET /memory/subscribe

Subscribe

Server-Sent Events stream. Agents receive real-time pushes when matching memories are stored. Filter by agent or tag. Keepalive every 30s.

SSE stream
# Connect with query params
GET /memory/subscribe?tag=email&agent_id=rowan

# Stream — one event per stored memory
data: {
  "id":       "a1b2c3...",
  "agent_id": "analyst",
  "content":  "Email draft ready",
  "tags":     ["email"]
}

# Keepalive every 30s
: keepalive
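On the wire this is standard SSE framing: `data:` lines carry the payload, lines starting with `:` are keepalive comments, and a blank line terminates an event. A minimal parser sketch, assuming one JSON payload per event:

```python
import json

def parse_sse_lines(lines):
    """Yield decoded memory events from a stream of SSE lines,
    skipping keepalive comments. (Sketch: a final un-terminated
    event is ignored.)"""
    buffer = []
    for line in lines:
        if line.startswith(":"):          # ": keepalive" comment
            continue
        if line.startswith("data:"):
            buffer.append(line[5:].strip())
        elif line == "" and buffer:       # blank line ends one event
            yield json.loads("\n".join(buffer))
            buffer = []

stream = [
    'data: {"id": "a1b2c3", "tags": ["email"]}',
    "",
    ": keepalive",
]
events = list(parse_sse_lines(stream))
assert events[0]["id"] == "a1b2c3"
```

Because the framing is standard, any SSE client library works; the Python SDK's `subscribe()` wraps exactly this loop over a long-lived HTTP connection.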
Architecture
Agent: orchestrator · LangChain     Agent: builder · Custom
Agent: analyst · CrewAI             Agent: researcher · AutoGen
        ↑ ↓
Python SDK · synapsenet_client
HTTP / SSE · Bearer auth
SynapseNet Server · :7700
FastAPI · /store · /query · /forget · /subscribe
In-memory store (shipped v0.1) → SQLite persistence (shipped v0.5)
Open Source

Not a SaaS. You run it.

Mem0, Letta, and AWS AgentCore are all managed products: they hold your agents' memory, and you pay for access. SynapseNet is open-source software you deploy yourself. Your memory stays on your infra.

The server is a FastAPI app. The client is a single Python file. Clone it, run it, point your agents at it. No account, no API key, no vendor in the middle.

The goal is interoperability. A LangChain agent and a CrewAI agent running the same task should share context without custom glue code. SynapseNet is the shared layer that makes that work.

Managed memory tools (require an account)
Mem0 Letta AWS AgentCore Zep MemGPT
Solid tools. But you're a tenant in their system. SynapseNet runs in your environment, under your control.

Shared memory, not siloed memory

Every agent in your team reads and writes the same store. One agent processes email, another picks up where it left off. No duplication, no drift.

Framework-agnostic by design

LangChain, CrewAI, AutoGen, custom. The API is plain HTTP. If your agent can make a POST request, it can join the shared memory layer.
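That claim is checkable with nothing but the standard library. A sketch that builds the store request by hand; `build_store_request` is illustrative, not part of the SDK:

```python
import json
import urllib.request

def build_store_request(host, token, agent_id, content, tags=()):
    # Plain-stdlib sketch: any language that can send this POST
    # can join the shared memory layer. No SDK required.
    body = json.dumps({"agent_id": agent_id, "content": content,
                       "tags": list(tags)}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/memory/store",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_store_request("http://localhost:7700", "synapsenet-mvp-token",
                          "orchestrator", "Inbox processed", tags=["email"])
# urllib.request.urlopen(req) would send it against a running server.
```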

Real-time coordination via SSE

Agents subscribe to memory events and react instantly when another agent stores something relevant. No polling, no message queues to configure.

Four design principles

Open: MIT licensed, source available.
Minimal: 4 operations, nothing more.
Portable: any language, any host.
Composable: layer it under anything.

Quick Start

Running in under 2 minutes

Clone the repo, start the server, store your first memory. No configuration required.

1

Clone & start the server

The start script installs Python dependencies and launches the FastAPI server on port 7700.

2

Install the SDK

Copy synapsenet_client.py into your project. Pip package coming once the API is stable.

3

Store & query memories

Any agent with the bearer token can start sharing memory immediately.

4

Subscribe to live updates

Agents can subscribe to memory events in real time via SSE, enabling reactive, event-driven coordination.

Shell
# Clone and start
git clone https://github.com/RowanBeck/synapsenet
cd synapsenet
./start.sh

# Run integration tests
python tests/test_mvp.py
Python
from synapsenet_client import SynapseNet

# Connect — works from any framework
client = SynapseNet(
    host="http://localhost:7700",
    token="synapsenet-mvp-token",
    agent_id="orchestrator"
)

# Store a memory
client.store(
    "Inbox processed — 3 tasks extracted",
    tags=["email", "ops"]
)

# Query it back — from any agent
results = client.query(tags=["email"])
for r in results:
    print(r["content"])

# Subscribe to live memory events
for event in client.subscribe(tag="ops"):
    handle_memory(event)
cURL — works from any language
# Store a memory via HTTP — no SDK needed
curl -X POST http://localhost:7700/memory/store \
  -H "Authorization: Bearer synapsenet-mvp-token" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "builder",
    "content":  "Deployment completed — v0.6.0 on port 7700",
    "tags":     ["deploy", "infra"]
  }'

# Query from a different agent
curl -X POST http://localhost:7700/memory/query \
  -H "Authorization: Bearer synapsenet-mvp-token" \
  -H "Content-Type: application/json" \
  -d '{"tags": ["deploy"]}'
Roadmap

Where we're headed

SynapseNet is a work in progress. The MVP proves the core loop. Everything beyond is the community's to build.

v0.1 Shipped

MVP: Core memory loop

  • FastAPI server on :7700 with Bearer auth
  • store · query · forget endpoints
  • Python SDK: synapsenet_client.py
  • SHA-256 content addressing (deduplication)
  • TTL support · tag + keyword filtering
  • 6 integration tests passing
v0.5 Shipped

Persistence & security hardening

  • SQLite backend: memories survive restart
  • Auth token via env var, not hardcoded
  • Agent Card extension schema
  • LaunchAgent: server stays running
  • Migration script from agent-comms.json
v0.6 Shipped

Semantic search

  • Ollama nomic-embed-text vector embeddings
  • POST /memory/search: cosine similarity
  • TTL auto-cleanup · thread-safe writes
  • search() method in SDK + agent bridge
v0.7 Shipped

Live coordination

  • SSE subscribe: stream new memories in real time
  • Agent presence with 5-min TTL heartbeat
  • promote_memory.py: seed from MEMORY.md + daily logs
v0.8 Shipped

Channels & access control

  • Named channels: subscribe to a topic, not all memory
  • Per-agent tokens with read/write scope restrictions
  • Memory visibility: private · team · broadcast
  • /agents registry: declare capabilities + subscriptions
  • Pagination on query + search results
v0.9 Shipped

Production hardening

  • Audit log: every write/read/forget stamped with agent + timestamp
  • /metrics: memory count, agent count, throughput, embedding hit rate
  • Conflict detection: flag contradictory writes on the same key
  • Rate limiting per token
  • Docker image + docker-compose
v0.9.0 Current Release

Test suite hardened

  • 69 integration tests passing
  • Full coverage across store, query, forget, subscribe, and search endpoints
v1.0 Next

Earned, not declared

v1.0 ships when it's earned. That means semantic search verified in production, an API stable across real usage, and a stability guarantee in place.

  • Embeddings verified in production workloads
  • API stable across real multi-agent usage
  • Stability guarantee: semver enforced, no breaking changes without a major bump
  • TypeScript SDK: full parity with Python
  • OpenAPI 3.1 spec: generated, authoritative, versioned