Agents on different frameworks can't share what they've learned. SynapseNet is a self-hosted memory server. Any agent, any framework, any language can store, query, and subscribe to shared context.
Multi-agent systems break down because agents can't share what they know. Every team rebuilds the same context from scratch.
Over a third of multi-agent system failures stem directly from inter-agent misalignment: agents working from different, contradictory, or stale context about the same environment.
Agents on different frameworks duplicate 72–86% of their context tokens because there's no shared memory layer. Each agent rebuilds what another already knows. They burn compute and drift apart.
Agents on different frameworks have no shared memory layer. Each operates in isolation, even when working on the same task.
SynapseNet is a self-hosted memory server with a simple HTTP API. Any agent, any framework, any language. If it can make HTTP calls, it can read and write to the shared memory layer.
Persist a memory item. Content is SHA-256 hashed: same content, same ID. Idempotent and deduplicating by design. Accepts optional tags and TTL.
// Request
{
  "agent_id": "orchestrator",
  "content": "Inbox processed at 09:14 UTC",
  "tags": ["email", "ops"],
  "ttl": 3600
}
// Response
{ "status": "stored", "id": "a1b2c3..." }
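Because IDs are a SHA-256 hash of the content, storing the same content twice returns the same ID. A minimal sketch of that derivation (whether the server hashes only the raw content string, or folds in other fields, is an assumption here):

```python
import hashlib

def memory_id(content: str) -> str:
    # Content-addressed ID: identical content hashes to the same ID,
    # which is what makes /memory/store idempotent and deduplicating.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

a = memory_id("Inbox processed at 09:14 UTC")
b = memory_id("Inbox processed at 09:14 UTC")
c = memory_id("Inbox processed at 09:15 UTC")

print(a == b)  # True: same content, same ID
print(a == c)  # False: any change yields a new ID
```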
Find memories by agent, tags, or keyword. Filters combine with AND; tags use OR matching. All parameters are optional. Omit all to fetch everything.
// Request — all filters optional
{
  "agent_id": "orchestrator",
  "tags": ["email"],
  "keyword": "inbox"
}
// Response
{
  "results": [{
    "id": "a1b2c3...",
    "agent_id": "orchestrator",
    "content": "Inbox processed...",
    "timestamp": "2026-03-18T09:14:00Z"
  }]
}
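The filter semantics can be sketched as a predicate. This illustrates the documented behavior (filters combine with AND, tags match with OR, no filters means match everything); it is not the server's actual matching code:

```python
def matches(memory: dict, agent_id=None, tags=None, keyword=None) -> bool:
    # Each supplied filter must pass (AND across filters).
    if agent_id is not None and memory["agent_id"] != agent_id:
        return False
    # Within tags, any overlap counts (OR across tags).
    if tags is not None and not set(tags) & set(memory["tags"]):
        return False
    if keyword is not None and keyword.lower() not in memory["content"].lower():
        return False
    return True  # all filters omitted: everything matches

m = {"agent_id": "orchestrator", "tags": ["email", "ops"], "content": "Inbox processed"}
print(matches(m, tags=["email", "billing"]))           # True: one tag overlaps
print(matches(m, agent_id="analyst", tags=["email"]))  # False: agent filter fails
print(matches(m))                                      # True: no filters at all
```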
Remove a specific memory by ID. Returns 404 if not found. Critical for privacy, compliance, and keeping the shared context clean as tasks complete.
// Request
{
  "id": "a1b2c3..."
}
// Response
{
  "status": "forgotten",
  "id": "a1b2c3..."
}
// 404 if memory not found
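The delete semantics above can be sketched in-process. This assumes a plain dict as the backing store (the server's actual storage is not specified here):

```python
def forget(store: dict, memory_id: str):
    # Mirrors the documented /memory/forget behavior:
    # delete by ID, report 404 when the ID is unknown.
    if memory_id not in store:
        return 404, None
    del store[memory_id]
    return 200, {"status": "forgotten", "id": memory_id}

store = {"a1b2c3": {"content": "Inbox processed at 09:14 UTC"}}
print(forget(store, "a1b2c3"))  # first call succeeds
print(forget(store, "a1b2c3"))  # second call: 404, already gone
```

Deleting twice is safe: the second call simply reports 404 rather than raising.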
Server-Sent Events stream. Agents receive real-time pushes when matching memories are stored. Filter by agent or tag. Keepalive every 30s.
# Connect with query params
GET /memory/subscribe?tag=email&agent_id=rowan
# Stream — one event per stored memory
data: {
  "id": "a1b2c3...",
  "agent_id": "analyst",
  "content": "Email draft ready",
  "tags": ["email"]
}
# Keepalive every 30s
: keepalive
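Any SSE-capable client can consume this stream. A minimal parser sketch for the format above, assuming each event's JSON arrives on `data:` lines, a blank line ends an event, and `:`-prefixed keepalive comments are ignored:

```python
import json

def iter_events(lines):
    # Accumulate `data:` lines, skip keepalive comments,
    # and yield one parsed JSON payload per blank-line-terminated event.
    buffer = []
    for line in lines:
        if line.startswith(":"):           # keepalive comment, ignore
            continue
        if line.startswith("data:"):
            buffer.append(line[5:].strip())
        elif line == "" and buffer:        # blank line closes the event
            yield json.loads("\n".join(buffer))
            buffer = []

raw = [
    'data: {"id": "a1b2c3", "agent_id": "analyst", "tags": ["email"]}',
    "",
    ": keepalive",
]
for event in iter_events(raw):
    print(event["agent_id"])  # analyst
```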
Mem0, Letta, and AWS AgentCore are all managed products. They hold your agent's memory and you pay for access. SynapseNet is open-source software you deploy yourself. Your memory stays on your infra.
The server is a FastAPI app. The client is a single Python file. Clone it, run it, point your agents at it. No account, no API key, no vendor in the middle.
The goal is interoperability. A LangChain agent and a CrewAI agent running the same task should share context without custom glue code. SynapseNet is the shared layer that makes that work.
Every agent in your team reads and writes the same store. One agent processes email, another picks up where it left off. No duplication, no drift.
LangChain, CrewAI, AutoGen, custom. The API is plain HTTP. If your agent can make a POST request, it can join the shared memory layer.
Agents subscribe to memory events and react instantly when another agent stores something relevant. No polling, no message queues to configure.
Open: MIT licensed, source available.
Minimal: 4 operations, nothing more.
Portable: any language, any host.
Composable: layer it under anything.
Clone the repo, start the server, store your first memory. No configuration required.
The start script installs Python dependencies and launches the FastAPI server on port 7700.
Copy synapsenet_client.py into your project. Pip package coming once the API is stable.
Any agent with the bearer token can start sharing memory immediately.
Agents can subscribe to memory events in real time via SSE, enabling reactive, event-driven coordination.
# Clone and start
git clone https://github.com/RowanBeck/synapsenet
cd synapsenet
./start.sh
# Run integration tests
python tests/test_mvp.py
from synapsenet_client import SynapseNet
# Connect — works from any framework
client = SynapseNet(
    host="http://localhost:7700",
    token="synapsenet-mvp-token",
    agent_id="orchestrator"
)

# Store a memory
client.store(
    "Inbox processed — 3 tasks extracted",
    tags=["email", "ops"]
)
# Query it back — from any agent
results = client.query(tags=["email"])
for r in results:
    print(r["content"])
# Subscribe to live memory events
for event in client.subscribe(tag="ops"):
    handle_memory(event)
# Store a memory via HTTP — no SDK needed
curl -X POST http://localhost:7700/memory/store \
  -H "Authorization: Bearer synapsenet-mvp-token" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "builder",
    "content": "Deployment completed — v0.6.0 on port 7700",
    "tags": ["deploy", "infra"]
  }'
# Query from a different agent
curl -X POST http://localhost:7700/memory/query \
  -H "Authorization: Bearer synapsenet-mvp-token" \
  -H "Content-Type: application/json" \
  -d '{"tags": ["deploy"]}'
SynapseNet is a work in progress. The MVP proves the core loop. Everything beyond is the community's to build.
synapsenet_client.py
promote_memory.py: seed from MEMORY.md + daily logs
/agents registry: declare capabilities + subscriptions
/metrics: memory count, agent count, throughput, embedding hit rate
v1.0 ships when it's earned. That means semantic search verified in production, API stable across real usage, and a stability guarantee in place.