AlphaNimble
INITIALIZATION_VECTOR

QUICK start.

The fastest way to deploy MEM-BRAIN is via the Model Context Protocol (MCP). Ensure you have your JWT token from the access portal.

01. SET ENVIRONMENT

TERMINAL // SH
export MEMBRAIN_API_KEY="your_jwt_token_here"

02. LAUNCH MCP SERVER

TERMINAL // UVX
uvx mem-brain-mcp

This launches a local MCP server (streamable HTTP transport) on port 8100 by default.
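Before wiring up any IDE, it can help to confirm the server is actually listening. A minimal pre-flight check (illustrative; it assumes only that the server binds a TCP port, with 8100 as the documented default):

```python
import socket

def mcp_server_listening(host="localhost", port=8100, timeout=1.0):
    """Return True if something is accepting TCP connections on the MCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not mcp_server_listening():
    print("MEM-BRAIN MCP server not reachable on port 8100 -- run `uvx mem-brain-mcp` first.")
```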

IDE_ADAPTERS

NATIVE integration.

Cursor IDE

Add MEM-BRAIN as an MCP server in Cursor to give your AI agent direct access to its long-term memory graph.

~/.cursor/mcp.json
{
  "mcpServers": {
    "mem-brain": {
      "url": "http://localhost:8100/mcp"
    }
  }
}

Claude Code

Add MEM-BRAIN as an MCP server in Claude Code by connecting to your locally running server via URL.

Method 1: CLI Command

TERMINAL // CLAUDE CODE CLI
claude mcp add --scope user --transport http mem-brain http://localhost:8100/mcp

Method 2: Manual Configuration

~/.claude.json
{
  "mcpServers": {
    "mem-brain": {
      "type": "http",
      "url": "http://localhost:8100/mcp"
    }
  }
}
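If you script your setup, the manual edit above amounts to merging one entry into `mcpServers` without clobbering servers you already have. A hedged sketch (the helper name is ours, not part of Claude Code):

```python
import json
from pathlib import Path

def add_mcp_server(config_path, name, url):
    """Merge one MCP server entry into a JSON config, preserving existing entries."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})[name] = {"type": "http", "url": url}
    path.write_text(json.dumps(config, indent=2))
    return config
```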

Make sure your MCP server is running locally on port 8100 before connecting. Use claude mcp list to verify your server is configured. Scope options: user (all projects), project (current project), or local (current directory).

FRAMEWORK_ORCHESTRATION

AGENTIC ecosystem.

Agno (formerly Phidata)

Integrate via the Agno MCPTools class using the streamable-http transport. This gives your Agno agents direct access to the persistent memory graph. Use connect() and close() to manage the connection lifecycle.

PYTHON // AGNO + MCP
import asyncio
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools

async def main():
    # Initialize and connect to the MCP server via URL
    mcp_tools = MCPTools(
        transport="streamable-http",
        url="http://localhost:8100/mcp"
    )
    await mcp_tools.connect()
    
    try:
        # Create agent with MCP tools
        agent = Agent(
            name="Memory-Aware Agent",
            model=Claude(id="claude-sonnet-4-0"),
            tools=[mcp_tools],
            instructions="Use MEM-BRAIN to recall past context and reasoning paths."
        )
        
        # Run the agent
        await agent.aprint_response("What were our last architecture decisions?", stream=True)
    finally:
        # Always close the connection when done
        await mcp_tools.close()

# Alternative: Using async context manager
async def main_with_context():
    async with MCPTools(
        transport="streamable-http",
        url="http://localhost:8100/mcp"
    ) as mcp_tools:
        agent = Agent(
            model=Claude(id="claude-sonnet-4-0"),
            tools=[mcp_tools]
        )
        await agent.aprint_response("Tell me about our previous discussions", stream=True)

# Run with asyncio
asyncio.run(main())

Make sure your MCP server is running locally on port 8100 before connecting. You can also use MCPTools as an async context manager for automatic cleanup. In AgentOS, connection lifecycle is automatically managed.

LangChain / LangGraph

Connect via the langchain-mcp-adapters package to provide any LangChain or LangGraph agent with semantic graph capabilities.

PYTHON // LANGCHAIN MCP
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient({
    "mem-brain": {"url": "http://localhost:8100/mcp", "transport": "streamable_http"}
})
tools = await client.get_tools()  # call from async code
agent = create_react_agent(llm, tools)  # llm: any LangChain chat model

Google Agent Development Kit (ADK)

Integrate MEM-BRAIN as a toolset in Google's Agent Development Kit (ADK) for multi-modal reasoning.

PYTHON // GOOGLE ADK
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams

agent = LlmAgent(
    name="memory_agent",
    model="gemini-2.0-flash",
    instruction="Use MEM-BRAIN to remember and recall user preferences across sessions.",
    tools=[MCPToolset(
        connection_params=StreamableHTTPConnectionParams(url="http://localhost:8100/mcp")
    )],
)

THEORETICAL_FOUNDATION

LOGIC engine.

LLM Guardian

Every incoming memory is analyzed by the Guardian. It extracts context, classifies tags, and validates potential semantic links against existing nodes to prevent hallucinated connections.
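MEM-BRAIN's internals aren't published here, but the Guardian's link-validation step can be pictured as a filter over proposed edges. A purely illustrative sketch (function name, fields, and threshold are ours):

```python
def validate_links(proposed_links, existing_nodes, min_confidence=0.7):
    """Guardian-style filter (illustrative): keep only links whose endpoints
    already exist in the graph and whose confidence clears a threshold,
    rejecting potential hallucinated connections."""
    return [
        link for link in proposed_links
        if link["source"] in existing_nodes
        and link["target"] in existing_nodes
        and link["confidence"] >= min_confidence
    ]
```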

Unified Graph Search

Unlike traditional RAG that only searches nodes, MEM-BRAIN treats relationship edges as first-class searchable entities. It searches the "reasoning" between facts.
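The difference from node-only retrieval can be sketched in a few lines: the query is matched against node text and against edge relationship labels. This is an illustrative toy, not MEM-BRAIN's actual search:

```python
def unified_search(query, nodes, edges):
    """Illustrative unified search: match the query against node text AND
    edge labels, so the 'reasoning' between facts is itself searchable."""
    q = query.lower()
    node_hits = [nid for nid, text in nodes.items() if q in text.lower()]
    edge_hits = [e for e in edges if q in e["label"].lower()]
    return node_hits, edge_hits
```

A query like "postgres" can then surface an edge such as "caused migration to Postgres" even when neither endpoint node mentions the term.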

Memory Evolution

The graph isn't static. Over time, the Evolution Handler prunes weak links and strengthens causal paths, consolidating "Super-Hubs" for faster retrieval of central concepts.
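One evolution pass can be imagined as decay-plus-reinforcement over edge weights, with pruning below a floor. A hedged sketch (all names, factors, and thresholds are ours):

```python
def evolve(edges, decay=0.9, boost=1.1, prune_below=0.1):
    """Illustrative evolution pass: decay every edge weight, boost edges
    marked as causal, and prune whatever falls below the threshold."""
    evolved = []
    for e in edges:
        w = e["weight"] * (boost if e.get("causal") else decay)
        if w >= prune_below:
            evolved.append({**e, "weight": w})
    return evolved
```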

Orbit-1 Context

Retrieval returns not just the matching node, but the immediate semantic neighborhood. This "Orbit-1" context provides agents with immediate cause-and-effect visibility.
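In graph terms, Orbit-1 is a one-hop neighborhood lookup around the matched node. A minimal illustrative sketch (not MEM-BRAIN's actual retrieval code):

```python
def orbit_1(node, edges):
    """Return a node with its one-hop semantic neighborhood, in either
    edge direction, for immediate cause-and-effect context (illustrative)."""
    neighbors = set()
    for e in edges:
        if e["source"] == node:
            neighbors.add(e["target"])
        elif e["target"] == node:
            neighbors.add(e["source"])
    return {"node": node, "orbit": sorted(neighbors)}
```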