Second Brain

A personal AI memory and knowledge system — exposed as an MCP server for Claude Code.

13 Pydantic AI agents backed by Mem0 semantic memory, Supabase/pgvector, and Voyage AI multimodal embeddings. Your AI remembers what you’ve built, what you’ve learned, and how you think — across every session. Supports text, images, PDFs, and video.

License: MIT


The Problem

Most AI sessions start from zero. You re-explain your architecture, re-describe your voice, re-establish your preferences — every time. The AI that helped you build your auth system last week has no idea it exists today.

Second Brain fixes that. It gives Claude Code a persistent memory layer: store decisions, recall patterns, generate content in your voice, score your work, and get coaching on your priorities. Everything persists across sessions via semantic search, not keyword matching.


How It Works

graph TD
    CC["Claude Code"] -->|"MCP tool call"| MCP["mcp_server.py<br/>FastMCP — validates input, enforces timeout"]
    MCP --> COS["chief_of_staff<br/>Routing orchestrator"]

    COS --> RECALL["recall<br/>Semantic memory search"]
    COS --> ASK["ask<br/>Q&A with brain context"]
    COS --> LEARN["learn<br/>Pattern extraction + storage"]
    COS --> CREATE["create<br/>Content generation"]
    COS --> REVIEW["review<br/>Content scoring"]
    COS --> COACH["coach<br/>Daily accountability"]
    COS --> PMO["pmo<br/>Task prioritization"]
    COS --> EMAIL["email_agent<br/>Email composition"]
    COS --> SPEC["specialist<br/>Claude Code / Pydantic AI Q&A"]
    COS --> CLARITY["clarity<br/>Readability analysis"]
    COS --> SYNTH["synthesizer<br/>Feedback consolidation"]
    COS --> TB["template_builder<br/>Template detection"]

    RECALL --> SVC["Service Layer"]
    ASK --> SVC
    LEARN --> SVC
    CREATE --> SVC
    REVIEW --> SVC

    SVC --> MEM0["Mem0<br/>Semantic memory"]
    SVC --> SB["Supabase<br/>PostgreSQL + pgvector"]
    SVC --> VAI["Voyage AI<br/>Embeddings + reranking"]
    SVC --> GR["Graphiti<br/>Knowledge graph (optional)"]

    style CC fill:#2c3e50,color:#fff
    style MCP fill:#8e44ad,color:#fff
    style COS fill:#8e44ad,color:#fff
    style RECALL fill:#4a90d9,color:#fff
    style ASK fill:#4a90d9,color:#fff
    style LEARN fill:#27ae60,color:#fff
    style CREATE fill:#27ae60,color:#fff
    style REVIEW fill:#e67e22,color:#fff
    style COACH fill:#e67e22,color:#fff
    style PMO fill:#e67e22,color:#fff
    style EMAIL fill:#e67e22,color:#fff
    style SPEC fill:#7b68ee,color:#fff
    style CLARITY fill:#7b68ee,color:#fff
    style SYNTH fill:#7b68ee,color:#fff
    style TB fill:#7b68ee,color:#fff
    style SVC fill:#34495e,color:#fff
    style MEM0 fill:#e74c3c,color:#fff
    style SB fill:#e74c3c,color:#fff
    style VAI fill:#e74c3c,color:#fff
    style GR fill:#95a5a6,color:#fff

The 13 Agents

Memory Agents — Store and retrieve knowledge across sessions

graph LR
    YOU["You"] -->|"paste notes, code, decisions"| LEARN["learn agent<br/>extracts patterns + insights"]
    LEARN -->|"stores to"| MEM0[("Mem0<br/>Semantic memory")]
    MEM0 -->|"searched by"| RECALL["recall agent<br/>surfaces relevant knowledge"]
    RECALL -->|"answers"| ASK["ask agent<br/>Q&A with full brain context"]
    ASK --> YOU

    style YOU fill:#2c3e50,color:#fff
    style LEARN fill:#27ae60,color:#fff
    style MEM0 fill:#e74c3c,color:#fff
    style RECALL fill:#4a90d9,color:#fff
    style ASK fill:#4a90d9,color:#fff
| Agent | What It Does |
| --- | --- |
| recall | Semantic search across everything stored in memory — surfaces relevant past decisions, patterns, and notes |
| ask | Answers questions using full brain context — connects your stored knowledge to new questions |
| learn | Extracts patterns and insights from anything you feed it (notes, code, articles) and stores them |
| learn_image | Stores images to Mem0 + generates multimodal Voyage AI embeddings for cross-modal search |
| learn_document | Ingests PDFs, MDX, and TXT documents into semantic memory |
| learn_video | Generates multimodal video embeddings via Voyage AI with text context stored to memory |

Content Agents — Generate and score content in your voice

graph LR
    STORED[("Stored examples<br/>+ voice patterns")] --> CREATE["create agent<br/>generates content"]
    CREATE -->|"draft"| REVIEW["review agent<br/>scores across dimensions"]
    REVIEW -->|"scores + feedback"| CLARITY["clarity agent<br/>readability analysis"]
    CLARITY -->|"issues"| SYNTH["synthesizer agent<br/>consolidates all feedback"]
    SYNTH -->|"unified report"| YOU["You"]

    style STORED fill:#e74c3c,color:#fff
    style CREATE fill:#27ae60,color:#fff
    style REVIEW fill:#e67e22,color:#fff
    style CLARITY fill:#7b68ee,color:#fff
    style SYNTH fill:#7b68ee,color:#fff
    style YOU fill:#2c3e50,color:#fff
| Agent | What It Does |
| --- | --- |
| create | Generates content (posts, docs, emails, code comments) with awareness of your stored voice and style examples |
| review | Scores content across multiple dimensions: clarity, structure, impact, tone — returns dimension-by-dimension scores |
| clarity | Readability analysis — identifies passive voice, jargon, complex sentences, and structural issues |
| synthesizer | Consolidates feedback from multiple sources (review scores, clarity issues, your notes) into a single prioritized action list |
| template_builder | Detects when you’re repeating a pattern and proposes a reusable template |

Operations Agents — Manage priorities and communications

graph LR
    CTX[("Stored context<br/>projects + history")] --> COACH["coach agent<br/>daily accountability"]
    CTX --> PMO["pmo agent<br/>task prioritization"]
    CTX --> EMAIL["email_agent<br/>voice-aware composition"]

    COACH -->|"priority brief"| YOU["You"]
    PMO -->|"ranked task list"| YOU
    EMAIL -->|"drafted email"| YOU

    style CTX fill:#e74c3c,color:#fff
    style COACH fill:#e67e22,color:#fff
    style PMO fill:#e67e22,color:#fff
    style EMAIL fill:#e67e22,color:#fff
    style YOU fill:#2c3e50,color:#fff
| Agent | What It Does |
| --- | --- |
| coach | Daily accountability coaching — surfaces your top priorities, checks progress against goals, prompts reflection |
| pmo | PMO-style task prioritization — manages competing projects, deadlines, and resource constraints |
| email_agent | Composes emails matched to your voice and the relationship context of the recipient |

Specialist Agent

| Agent | What It Does |
| --- | --- |
| specialist | Deep Q&A on Claude Code, Pydantic AI, and the Second Brain system itself — uses stored knowledge of your setup |

Service Layer

Three external systems (plus an optional graph database) do the actual work. Agents call them through a clean service abstraction — swappable at runtime via MEMORY_PROVIDER (mem0, graphiti, or none).

graph TD
    subgraph "Service Layer"
        MS["memory.py<br/>Mem0 wrapper"]
        SS["storage.py<br/>Supabase CRUD + ContentTypeRegistry"]
        ES["embeddings.py<br/>Voyage AI / OpenAI"]
        VS["voyage.py<br/>Voyage AI reranking"]
        GS["graphiti.py<br/>Knowledge graph (optional)"]
        HS["health.py<br/>Metrics + growth milestones"]
    end

    subgraph "External Systems"
        MEM0[("Mem0<br/>Semantic memory store")]
        SB[("Supabase<br/>PostgreSQL + pgvector")]
        VAI[("Voyage AI<br/>voyage-multimodal-3.5")]
        FK[("FalkorDB<br/>Graph database")]
    end

    MS <--> MEM0
    SS <--> SB
    ES <--> VAI
    VS <--> VAI
    GS <--> FK

    style MS fill:#4a90d9,color:#fff
    style SS fill:#4a90d9,color:#fff
    style ES fill:#4a90d9,color:#fff
    style VS fill:#4a90d9,color:#fff
    style GS fill:#95a5a6,color:#fff
    style HS fill:#4a90d9,color:#fff
    style MEM0 fill:#e74c3c,color:#fff
    style SB fill:#e74c3c,color:#fff
    style VAI fill:#e74c3c,color:#fff
    style FK fill:#95a5a6,color:#fff
| Service | Purpose |
| --- | --- |
| memory.py | Wraps Mem0 — add, search, and retrieve semantic memories with embedding-based similarity. Supports multimodal content (images, PDFs, documents) |
| storage.py | Wraps Supabase — CRUD for all structured data + ContentTypeRegistry for content type configs |
| embeddings.py | Generates embeddings via Voyage AI (primary) or OpenAI (fallback) for vector search. Supports multimodal inputs (text + images) via embed_multimodal() |
| voyage.py | Voyage AI reranking + multimodal embeddings — voyage-multimodal-3.5 embeds text, images, and video into a shared 1024-dim vector space |
| graphiti.py | Optional knowledge graph via Graphiti + FalkorDB — entity and relationship extraction |
| graphiti_memory.py | Adapts Graphiti to the MemoryServiceBase interface — drop-in replacement for Mem0 |
| health.py | Brain metrics, growth milestones, and system health checks |
| retry.py | Tenacity retry decorators for transient failures |
| search_result.py | Shared data structures for search results across all retrieval methods |
| abstract.py | Abstract base classes (MemoryServiceBase, etc.) for pluggable service implementations + stub services for testing |

Multi-User Support

Each Second Brain instance is scoped to a single user via the BRAIN_USER_ID environment variable. All reads and writes in storage.py are filtered by this value, so multiple instances can share one Supabase deployment without data leaking between users. Migration 015_user_id_isolation.sql adds a user_id column and performance index to every relevant table, and updates the vector_search RPC to enforce the same boundary. Existing single-user setups work unchanged — the default value is ryan, so no configuration change is required unless you are adding a second user.


Pluggable Memory Providers

The memory layer is defined by an abstract interface (MemoryServiceBase) with three interchangeable backends — switch between them with a single environment variable:

| Provider | MEMORY_PROVIDER= | Backend | Best For |
| --- | --- | --- | --- |
| Mem0 | mem0 (default) | Mem0 cloud API | Production — managed semantic memory with built-in embedding search |
| Graphiti | graphiti | FalkorDB graph database | Knowledge graphs — entity/relationship extraction with graph-native search |
| None | none | In-memory stub | Testing and CI — zero external dependencies, instant startup |

All three providers implement the same 13-method interface. Agents never know which backend is active — they call memory_service.search() and get back a SearchResult regardless. If a provider fails to initialize (e.g., Graphiti packages not installed), it falls back to Mem0 automatically. Search errors return empty results instead of crashing.
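The pattern is a standard abstract-base-plus-factory setup. The sketch below is a cut-down illustration, not the project's actual code: the real interface has 13 methods and falls back to Mem0, while here an unknown provider falls back to the stub.

```python
from abc import ABC, abstractmethod

class MemoryServiceBase(ABC):
    """Cut-down stand-in for the real 13-method interface."""
    @abstractmethod
    def add(self, text: str) -> None: ...
    @abstractmethod
    def search(self, query: str) -> list[dict]: ...

class StubMemory(MemoryServiceBase):
    """The `none` provider: in-memory, zero external dependencies."""
    def __init__(self) -> None:
        self._items: list[dict] = []

    def add(self, text: str) -> None:
        self._items.append({"memory": text})

    def search(self, query: str) -> list[dict]:
        # Substring match stands in for embedding similarity.
        return [m for m in self._items if query.lower() in m["memory"].lower()]

def create_memory_service(provider: str = "none") -> MemoryServiceBase:
    """Hypothetical factory: an unknown (or failed-to-initialize)
    provider falls back to a safe default instead of crashing."""
    registry = {"none": StubMemory}
    return registry.get(provider, StubMemory)()
```

Because agents only ever see MemoryServiceBase, swapping backends is invisible to them — the same design that lets MEMORY_PROVIDER switch implementations at startup.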


Multimodal Support

Second Brain supports storing and searching across multiple content types — not just text.

| Content Type | MCP Tool | Memory Storage | Vector Embedding |
| --- | --- | --- | --- |
| Images (JPEG, PNG, WebP, GIF) | learn_image | Mem0 image_url block | Voyage multimodal embedding |
| Documents (PDF, MDX, TXT) | learn_document | Mem0 pdf_url / mdx_url block | Text extraction + embedding |
| Video | learn_video | Text context to Mem0 | Voyage multimodal embedding |
| Cross-modal search | multimodal_vector_search | | Combined text + image query vectors |

All multimodal embeddings use voyage-multimodal-3.5 (1024 dimensions) — the same vector space as text embeddings. This means images, documents, and video are searchable alongside text memories using the same pgvector infrastructure. No database migration needed.
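Ranking in a shared vector space reduces to cosine similarity, which is what pgvector computes under the hood. A minimal pure-Python sketch (the real system does this inside the vector_search RPC over 1024-dim vectors):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two vectors in the shared embedding space.
    Because text, image, and video embeddings live in one space,
    a single function ranks all modalities together."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec: list[float], rows: list[tuple], k: int = 3) -> list:
    """rows: (id, vector) pairs; returns ids ranked by similarity."""
    ranked = sorted(rows, key=lambda r: cosine_similarity(query_vec, r[1]), reverse=True)
    return [r[0] for r in ranked[:k]]
```

This is why no database migration is needed: an image vector and a text vector are interchangeable rows to the same pgvector index.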

The Graphiti memory provider falls back to text-only mode for multimodal content — non-text blocks are skipped with a debug log.


Data Flow

Learn → Store → Recall

sequenceDiagram
    participant You
    participant MCP as mcp_server.py
    participant Learn as learn agent
    participant Mem0
    participant Voyage as Voyage AI
    participant Recall as recall agent

    You->>MCP: "Learn this pattern: [content]"
    MCP->>Learn: run(input, deps=BrainDeps)
    Learn->>Voyage: embed(content)
    Voyage-->>Learn: vector
    Learn->>Mem0: add(content, vector, metadata)
    Mem0-->>Learn: stored ✓
    Learn-->>MCP: InsightResult
    MCP-->>You: "Stored: [summary of what was learned]"

    Note over You,Recall: Later session...

    You->>MCP: "Recall what I know about Supabase RLS"
    MCP->>Recall: run(query, deps=BrainDeps)
    Recall->>Voyage: embed(query)
    Voyage-->>Recall: query vector
    Recall->>Mem0: search(vector, top_k=10)
    Mem0-->>Recall: ranked memories
    Recall-->>MCP: RecallResult
    MCP-->>You: relevant memories + context
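The sequence above can be condensed into a toy end-to-end version. Everything here is illustrative: word-overlap scoring stands in for the real embed-then-vector-search path, and the class name is made up.

```python
class TinyBrain:
    """Toy learn -> store -> recall loop mirroring the diagram."""

    def __init__(self) -> None:
        self._memories: list[dict] = []

    def learn(self, content: str, **metadata) -> str:
        # Real system: embed via Voyage, then Mem0.add(content, vector, metadata)
        self._memories.append({"content": content, **metadata})
        return f"Stored: {content[:40]}"

    def recall(self, query: str, top_k: int = 10) -> list[dict]:
        # Real system: embed the query, then Mem0.search(vector, top_k)
        q = set(query.lower().split())
        scored = [(len(q & set(m["content"].lower().split())), m) for m in self._memories]
        ranked = [m for score, m in sorted(scored, key=lambda s: -s[0]) if score > 0]
        return ranked[:top_k]
```

The important structural point survives the simplification: learn and recall share one store, so knowledge written in one session is queryable in the next.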

Error Handling

graph TD
    MCP["MCP Layer<br/>mcp_server.py"] -->|"catches"| VE["ValueError<br/>→ return plain string"]
    MCP -->|"catches"| TE["TimeoutError<br/>→ return timeout message"]

    AGENT["Agent Tools<br/>@agent.tool"] -->|"catches"| EX["Exception<br/>→ tool_error('name', e)"]

    OUTPUT["Output Validation<br/>@agent.output_validator"] -->|"raises"| MR["ModelRetry(message)<br/>→ agent retries with guidance"]

    SVC["Service Layer"] -->|"logs + returns"| FB["empty fallback<br/>[] or {}"]

    style MCP fill:#8e44ad,color:#fff
    style AGENT fill:#4a90d9,color:#fff
    style OUTPUT fill:#e67e22,color:#fff
    style SVC fill:#27ae60,color:#fff
    style VE fill:#e74c3c,color:#fff
    style TE fill:#e74c3c,color:#fff
    style EX fill:#e74c3c,color:#fff
    style MR fill:#f39c12,color:#fff
    style FB fill:#95a5a6,color:#fff
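Two of the layers in the diagram are easy to show concretely. The shapes below are hypothetical sketches of tool_error and the service-layer empty-fallback policy, not the project's exact signatures:

```python
import logging

logger = logging.getLogger("second_brain")

def tool_error(tool_name: str, exc: Exception) -> str:
    """Agent tools return a readable string instead of raising, so
    one failing tool call never aborts the whole agent run."""
    logger.warning("tool %s failed: %s", tool_name, exc)
    return f"Error in {tool_name}: {exc}"

def safe_search(search_fn, query: str) -> list:
    """Service-layer policy: log and return an empty fallback
    rather than propagating the exception to the agent."""
    try:
        return search_fn(query)
    except Exception as exc:
        logger.warning("search failed, returning []: %s", exc)
        return []
```

The trade-off is deliberate: a degraded answer ("no memories found") is more useful mid-conversation than a stack trace.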

Tech Stack

| Component | Technology |
| --- | --- |
| Language | Python 3.11+ |
| Agent framework | Pydantic AI (pydantic-ai[anthropic]) |
| MCP server | FastMCP |
| Semantic memory | Mem0 (mem0ai) |
| Database | Supabase (PostgreSQL + pgvector) |
| Embeddings | Voyage AI voyage-multimodal-3.5 (primary, text + images + video), OpenAI (text fallback) |
| Image processing | Pillow (PIL) — decodes base64 images for Voyage multimodal embeddings |
| Knowledge graph | Graphiti + FalkorDB (optional, GRAPHITI_ENABLED=false) |
| CLI | Click (brain entrypoint) |
| Retries | Tenacity |
| Config | Pydantic Settings (loads .env via BrainConfig) |
| Testing | pytest + pytest-asyncio (asyncio_mode = "auto") |

Setup

1. Python Version

Requires Python 3.11–3.13. Python 3.14+ is not supported (voyageai requires <3.14).

If you have multiple Python versions installed, create a venv with the correct one:

# Windows (PowerShell)
py -3.13 -m venv .venv
.venv\Scripts\Activate.ps1
 
# macOS / Linux
python3.13 -m venv .venv
source .venv/bin/activate

2. Environment

cd backend
cp .env.example .env

Edit .env:

MEM0_API_KEY=...            # Required — semantic memory store
SUPABASE_URL=...            # Required — structured storage + vector search
SUPABASE_KEY=...            # Required — Supabase service role key
OPENAI_API_KEY=...          # Required — Mem0 internal embeddings (text-embedding-3-small)
VOYAGE_API_KEY=...          # Optional — primary embeddings + reranking (falls back to OpenAI)
GRAPH_PROVIDER=mem0         # mem0 (default), graphiti, or none
BRAIN_USER_ID=ryan          # Optional — isolates data per user (default: ryan)

LLM Backend (choose one)

The agents need an LLM to think. You have three options:

| Option | Env Vars | Cost | Quality |
| --- | --- | --- | --- |
| Anthropic API | ANTHROPIC_API_KEY=sk-ant-... | Pay per token | Best |
| Claude Subscription | USE_SUBSCRIPTION=true | Included in Claude Pro/Max | Best (same models) |
| Ollama Cloud | OLLAMA_BASE_URL=https://... + OLLAMA_API_KEY=... + OLLAMA_MODEL=... | Varies | Good |

Claude Subscription setup (recommended if you have Claude Pro/Max):

  1. Install Claude CLI: npm install -g @anthropic-ai/claude-code
  2. Authenticate: run claude and complete the login flow
  3. Set USE_SUBSCRIPTION=true in .env
  4. No ANTHROPIC_API_KEY needed — the system reads your OAuth token from the credential store automatically

The subscription auth works with any MCP client (Claude Code, Cursor, Windsurf, etc.) — the OAuth token is stored on your machine, not tied to the editor.

Ollama Cloud setup (for non-Anthropic models):

OLLAMA_BASE_URL=https://your-ollama-endpoint.com
OLLAMA_API_KEY=your-api-key
OLLAMA_MODEL=gpt-oss:120b-cloud

Any OpenAI-compatible API endpoint works here (Ollama, Together AI, OpenRouter, etc.).
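"OpenAI-compatible" means the backend accepts the standard chat-completions request shape. A minimal sketch of the payload (the URL and model name are placeholders, and the real request would be sent with an HTTP client plus an Authorization header):

```python
def build_chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build the request any OpenAI-compatible backend accepts at
    POST {base_url}/v1/chat/completions."""
    return {
        "url": f"{base_url.rstrip('/')}/v1/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Because Ollama, Together AI, and OpenRouter all speak this shape, switching providers is just a change of OLLAMA_BASE_URL and OLLAMA_MODEL.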

3. Install

cd backend
pip install -e ".[dev]"

Optional extras:

pip install -e ".[dev,graphiti]"      # + Graphiti knowledge graph
pip install -e ".[dev,subscription]"  # + Claude Agent SDK (subscription auth)
pip install -e ".[dev,ollama]"        # + Ollama local model support

4. Database Migrations

Apply migrations in order via the Supabase dashboard or CLI. The 15 core migrations are in backend/supabase/migrations/, numbered 001 through 015 (the optional vault-ingestion feature adds 016_vault_ingestion.sql, covered below).

001_initial_schema.sql            — Core tables
002_examples_knowledge.sql        — Examples and knowledge tables
003_pattern_constraints.sql       — Pattern uniqueness constraints
004_content_types.sql             — Content type registry
005_growth_tracking_tables.sql    — Growth and milestone tracking
006_rls_policies.sql              — Row Level Security policies
007_foreign_keys_indexes.sql      — Foreign keys and indexes
008_data_constraints.sql          — Data validation constraints
009_reinforce_pattern_rpc.sql     — Pattern reinforcement RPC
010_vector_search_rpc.sql         — pgvector similarity search RPC
011_voyage_dimensions.sql         — Voyage AI embedding dimensions
012_projects_lifecycle.sql        — Project lifecycle tables
013_quality_trending.sql          — Quality score trending
014_content_type_instructions.sql — Content type prompt instructions
015_user_id_isolation.sql         — Multi-user data isolation (user_id column + indexes)
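"In order" matters because later migrations depend on earlier schema (e.g. 015 alters tables created in 001). If you script the application yourself, sort by the numeric prefix — a small illustrative helper:

```python
def migration_order(filenames: list[str]) -> list[str]:
    """Sort migration files by numeric prefix so 002 never runs
    before 001, even if the directory listing is shuffled.
    Parsing the prefix as int also tolerates unpadded names."""
    return sorted(filenames, key=lambda name: int(name.split("_", 1)[0]))
```

With zero-padded names like these, a plain lexicographic sort gives the same order; the integer key is just a guard against a future unpadded prefix.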

5. Start the MCP Server

Local (stdio — default):

cd backend
python -m second_brain.mcp_server

Docker (HTTP transport):

cd backend
docker compose up -d

The container starts with MCP_TRANSPORT=http on port 8000, includes a /health endpoint, and restarts automatically on failure.

All 13 agents are now available as MCP tools inside Claude Code.


Docker

Build & Run

cd backend
docker build -t second-brain-mcp .
docker compose up -d

The multi-stage Dockerfile uses python:3.11-slim, runs as a non-root user, and includes a health check that probes /health every 30 seconds.

Transport Configuration

The server supports three transport modes, configured via the MCP_TRANSPORT environment variable:

| Transport | MCP_TRANSPORT= | Use Case |
| --- | --- | --- |
| stdio | stdio (default) | Local development — Claude Code spawns as subprocess |
| HTTP | http | Docker / network — single /mcp endpoint, stateless |
| SSE | sse | Legacy — Server-Sent Events (deprecated by MCP spec) |

Additional env vars for HTTP/SSE mode:

MCP_HOST=0.0.0.0   # Bind address (default: 0.0.0.0)
MCP_PORT=8000       # Port (default: 8000, range: 1024-65535)
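Validating the documented port range up front avoids a confusing bind error later. A small sketch of the check (function name is hypothetical):

```python
def validate_mcp_port(raw: str) -> int:
    """Parse MCP_PORT and enforce the documented 1024-65535 range
    before the server tries to bind."""
    port = int(raw)  # raises ValueError on non-numeric input too
    if not 1024 <= port <= 65535:
        raise ValueError(f"MCP_PORT out of range (1024-65535): {port}")
    return port
```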

Health Check

When running in HTTP/SSE mode, a health endpoint is available:

curl http://localhost:8000/health
# {"status": "healthy", "service": "second-brain"}

MCP Integration

Local (stdio)

Add to your Claude Code MCP config (.mcp.json or claude_desktop_config.json):

{
  "mcpServers": {
    "second-brain": {
      "command": "python",
      "args": ["-m", "second_brain.mcp_server"],
      "cwd": "/path/to/repo/backend"
    }
  }
}

Docker (HTTP) — Claude Code

claude mcp add --transport http second-brain http://localhost:8000/mcp

Or add to .mcp.json:

{
  "mcpServers": {
    "second-brain": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}

Docker (HTTP) — Claude Desktop

Claude Desktop requires the mcp-remote proxy to connect to HTTP MCP servers:

{
  "mcpServers": {
    "second-brain": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8000/mcp"]
    }
  }
}

Once connected, you can call any agent from Claude Code:

Use the second brain to recall everything I know about Supabase RLS.

Learn this pattern from my code: [paste code]

Create a LinkedIn post in my voice about shipping this feature.

Review this draft and score it across all dimensions.

Coach me — what should I be focused on today?

You can also manage projects and knowledge directly:

List all my active projects.

Update project "auth-system" — mark it as shipped.

Search my stored experiences for anything related to Supabase migrations.

Search patterns — find everything I've learned about rate limiting.

Ingest this example into my brain: [paste code or content]

Add an artifact to project "second-brain" — link to this PR.

Ingest this knowledge entry: [paste article, doc, or note]

Multimodal Content

Store images, documents, and video alongside text memories:

Learn this image — it's my app's architecture diagram: [image URL]

Learn this PDF — it's the Supabase RLS guide: [PDF URL]

Learn this video — it's a demo of the new onboarding flow: [video URL]

Search across all my stored content (text + images) for "authentication flow".

Vault Ingestion

Bulk-ingest an Obsidian/knowledge vault into Mem0 (semantic + graph) and Supabase.

Prerequisites

  1. Apply migration 016_vault_ingestion.sql to your Supabase instance (SQL Editor)
  2. Set VAULT_PATH in .env to your vault directory
  3. Ensure MEM0_API_KEY, OPENAI_API_KEY, SUPABASE_URL, SUPABASE_KEY are set

Multi-User Classification

Files are automatically classified by directory structure:

| Vault Path | Assigned user_id |
| --- | --- |
| content/cc-content-system/uttam-gpt/... | uttam |
| content/cc-content-system/robert-gpt/... | robert |
| clients/... | brainforge (shared) |
| Everything else (functional dirs) | brainforge (shared) |
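The directory rules above reduce to a small classifier. This is a sketch of the logic, not the ingestion pipeline's actual code; the directory names come straight from the table:

```python
def classify_vault_path(rel_path: str) -> str:
    """Map a vault-relative path to its user_id per the table above.
    Windows backslashes are normalized so both path styles work."""
    parts = rel_path.replace("\\", "/").split("/")
    if parts[:2] == ["content", "cc-content-system"] and len(parts) > 2:
        if parts[2] == "uttam-gpt":
            return "uttam"
        if parts[2] == "robert-gpt":
            return "robert"
    return "brainforge"  # clients/ and all functional dirs share this id
```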

Commands

# Preview what will be ingested (no writes)
brain ingest --dry-run
 
# Run full ingestion
brain ingest
 
# Check ingestion status
brain ingest-status

Configuration

# .env
VAULT_PATH=C:\path\to\your\vault
VAULT_INGESTION_BATCH_SIZE=20    # Files per batch (default: 20)
VAULT_INGESTION_CONCURRENCY=5    # Concurrent workers (default: 5)
GRAPH_PROVIDER=mem0              # Enables graph relationships during ingestion
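The batch-size setting amounts to chunking the file list before handing each chunk to concurrent workers. A stdlib sketch of that chunking step (the worker pool itself is omitted):

```python
from itertools import islice
from typing import Iterator

def batched(files: list[str], batch_size: int = 20) -> Iterator[list[str]]:
    """Yield files in fixed-size batches, mirroring
    VAULT_INGESTION_BATCH_SIZE; each batch is then processed by
    up to VAULT_INGESTION_CONCURRENCY workers."""
    it = iter(files)
    while chunk := list(islice(it, batch_size)):
        yield chunk
```

Batching bounds memory use on large vaults and gives ingest-status a natural unit of progress to report.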

CLI

Direct access without the MCP layer:

brain --help         # Show all commands
brain health         # Check brain health and growth milestones
brain migrate        # Run data migration
brain ingest         # Ingest vault into memory
brain ingest-status  # Check ingestion progress

Code Structure

backend/
├── src/second_brain/
│   ├── mcp_server.py          # Public surface: @server.tool() functions
│   ├── service_mcp.py         # Supplemental service routing
│   ├── deps.py                # BrainDeps dataclass + create_deps() factory
│   ├── config.py              # BrainConfig (Pydantic Settings, loads .env)
│   ├── schemas.py             # All Pydantic output models (no internal imports)
│   ├── models.py              # AI model selection logic
│   ├── models_sdk.py          # Claude SDK model support
│   ├── auth.py                # Authentication helpers
│   ├── migrate.py             # Data migration utilities
│   ├── cli.py                 # Click CLI ("brain" command)
│   ├── agents/
│   │   ├── chief_of_staff.py  # Routing orchestrator
│   │   ├── recall.py
│   │   ├── ask.py
│   │   ├── learn.py
│   │   ├── create.py
│   │   ├── review.py
│   │   ├── coach.py
│   │   ├── pmo.py
│   │   ├── email_agent.py
│   │   ├── specialist.py
│   │   ├── clarity.py
│   │   ├── synthesizer.py
│   │   ├── template_builder.py
│   │   └── utils.py           # Shared: tool_error(), run_pipeline(), format_*()
│   └── services/
│       ├── memory.py          # Mem0 semantic memory wrapper
│       ├── storage.py         # Supabase CRUD + ContentTypeRegistry
│       ├── embeddings.py      # Voyage AI / OpenAI embedding generation
│       ├── voyage.py          # Voyage AI reranking
│       ├── graphiti.py        # Knowledge graph (optional)
│       ├── graphiti_memory.py # Graphiti-backed MemoryServiceBase adapter
│       ├── health.py          # Brain metrics + growth milestones
│       ├── retry.py           # Tenacity retry helpers
│       ├── search_result.py   # Search result data structures
│       └── abstract.py        # ABCs + stub services (MemoryServiceBase, etc.)
├── supabase/migrations/       # 15 SQL migrations (001–015)
├── tests/                     # ~926 tests (one file per module)
├── scripts/                   # Utility scripts
├── Dockerfile                 # Multi-stage production image
├── docker-compose.yml         # Local dev compose (HTTP transport)
├── .dockerignore              # Docker build context exclusions
├── .env.example               # Documented env var template
└── pyproject.toml             # Dependencies + pytest config

Tests

cd backend
pytest                              # All tests (~926)
pytest tests/test_agents.py         # Single file
pytest -k "test_recall"             # Filter by name
pytest -x                           # Stop on first failure
pytest -v                           # Verbose output

One test file per source module. Async tests run without @pytest.mark.asyncio markers, because asyncio_mode = "auto" is set in pyproject.toml.


By the Numbers

| Component | Count |
| --- | --- |
| Pydantic AI agents | 13 |
| MCP tools | 42 |
| Service layer modules | 9 |
| Database migrations | 15 |
| Test files | 20 |
| Tests | ~926 |
| Python version | 3.11+ |

License

MIT