Brainforge Setup Guide

Complete walkthrough for setting up the backend, configuring services, and running vault ingestion.


Prerequisites

  • Python 3.11+
  • Git
  • A Supabase project
  • API keys for the services you want to use (see sections below)

1. Install Dependencies

cd backend
python -m venv .venv
.venv\Scripts\pip install -e ".[dev]"

Windows note: Always use .venv\Scripts\python and .venv\Scripts\pip — not the system python.


2. Configure .env

Copy the example and fill in your values:

copy .env.example .env

Required

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key
BRAIN_DATA_PATH=C:\path\to\your\markdown\data

LLM — pick one

Option A: Anthropic API key

ANTHROPIC_API_KEY=sk-ant-...

Option B: Claude subscription (Pro/Max)

USE_SUBSCRIPTION=true
# Requires claude CLI installed and authenticated:
#   npm install -g @anthropic-ai/claude-code
#   claude   (complete login flow)

Option C: Ollama (local or cloud)

OLLAMA_BASE_URL=http://localhost:11434   # local
# or for Ollama Cloud:
OLLAMA_BASE_URL=https://ollama.com
OLLAMA_MODEL=gpt-oss:120b-cloud         # include -cloud suffix for cloud models
OLLAMA_API_KEY=your-key                 # from ollama.com/settings/keys

If no LLM is configured, the system falls back to Ollama automatically.
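The selection order can be pictured as a simple precedence check over the environment variables above. This is an illustrative sketch, not the actual `BrainConfig` logic — the function name `pick_llm_provider` and the exact precedence are assumptions:

```python
def pick_llm_provider(env: dict[str, str]) -> str:
    """Illustrative sketch of LLM provider selection by env vars.

    Assumed precedence: explicit Anthropic key, then Claude subscription,
    then Ollama (also the default when nothing is configured).
    """
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("USE_SUBSCRIPTION", "").lower() == "true":
        return "claude-subscription"
    # Ollama is used both when configured and as the automatic fallback
    return "ollama"
```

For example, an empty environment yields `"ollama"`, while setting only `ANTHROPIC_API_KEY` yields `"anthropic"`.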

Memory — pick one

Option A: Mem0 Cloud (recommended)

MEM0_API_KEY=m0-...

Option B: Local Mem0 (no key needed)

# Leave MEM0_API_KEY unset — uses local Qdrant automatically

Embeddings — pick one

Option A: Voyage AI (recommended — significantly better search quality than OpenAI embeddings)

VOYAGE_API_KEY=pa-...

Option B: OpenAI embeddings

OPENAI_API_KEY=sk-proj-...

3. Apply Database Migrations

Run all 16 migrations against your Supabase project:

.venv\Scripts\python -m second_brain.migrate

This creates all tables, indexes, RLS policies, and vector search RPCs.


4. Verify the Setup

.venv\Scripts\python -c "from second_brain.config import BrainConfig; c = BrainConfig(); print('Config OK')"

No errors = config loaded correctly.


5. Vault Ingestion

Vault ingestion pulls your Obsidian/markdown vault into Mem0 + Supabase for semantic search.

Configure vault path

VAULT_PATH=C:\path\to\your\vault

Transcript summarization model

By default, transcripts are summarized using your primary LLM (Claude). To use Ollama Cloud instead (saves Claude credits):

VAULT_INGESTION_USE_OLLAMA=true
OLLAMA_BASE_URL=https://ollama.com
OLLAMA_MODEL=gpt-oss:120b-cloud
OLLAMA_API_KEY=your-key

Regular markdown files (notes, patterns, examples) are never sent to an LLM — they go directly to Mem0 + Supabase. Only transcripts need a model for summarization.

Run ingestion

# Preview what would be ingested (no writes)
.venv\Scripts\python -m second_brain.cli ingest --dry-run
 
# Preview transcripts only
.venv\Scripts\python -m second_brain.cli ingest --dry-run --transcripts-only
 
# Ingest everything
.venv\Scripts\python -m second_brain.cli ingest
 
# Ingest transcripts only
.venv\Scripts\python -m second_brain.cli ingest --transcripts-only
 
# Filter to a specific client
.venv\Scripts\python -m second_brain.cli ingest --transcripts-only --client eden
 
# Ingest from a specific vault path (overrides .env)
.venv\Scripts\python -m second_brain.cli ingest C:\path\to\vault

Vault folder structure

The ingester classifies files by their folder path:

| Path pattern | user_id | category |
| --- | --- | --- |
| sales/content/cc-content-system/{name}-gpt/memory/patterns/ | {name} | patterns |
| sales/content/cc-content-system/{name}-gpt/memory/examples/linkedin/ | {name} | examples (linkedin) |
| clients/{client}/transcripts/ | brainforge | transcript |
| clients/{client}/meeting-notes/ | brainforge | meeting_notes |
| gtm/, engineering/, company/ | brainforge | gtm / engineering / company |

Files in .claude, .codex, brain-health, experiences, projects are skipped automatically.


6. Start the MCP Server

# stdio mode (default — for Claude Code / Cursor)
.venv\Scripts\python -m second_brain.mcp_server
 
# or via the shell script
bash scripts/start_mcp.sh

See docs/mcp-usage-guide.md for connecting clients (Claude Code, Cursor, Windsurf).


7. Docker (optional)

docker-compose up --build

Set MCP_TRANSPORT=http and MCP_PORT=8000 in .env for HTTP mode.


Troubleshooting

ModuleNotFoundError — you’re using system Python, not the venv:

.venv\Scripts\python ...   # always prefix with this

Ollama returns markdown instead of JSON — the model ignored the format schema. This is handled automatically with retry + JSON extraction logic in the summarizer.
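The extraction step amounts to stripping any markdown fence and then parsing the outermost JSON object. A minimal sketch of that idea (the summarizer's actual implementation may differ):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull a JSON object out of a model response that may be wrapped
    in a ```json fence or surrounded by prose."""
    # Prefer the contents of a ``` or ```json fence if one is present
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = fence.group(1) if fence else text
    # Fall back to the outermost {...} span
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(candidate[start:end + 1])
```

For example, both a fenced response and one embedded in prose parse to the same object.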

No vault path configured — set VAULT_PATH in .env or pass it as an argument:

.venv\Scripts\python -m second_brain.cli ingest C:\path\to\vault

Supabase migration errors — check that your SUPABASE_KEY is the service role key (not the anon key). The service role key is required to apply migrations.