# Brainforge Setup Guide
Complete walkthrough for setting up the backend, configuring services, and running vault ingestion.
## Prerequisites
- Python 3.11+
- Git
- A Supabase project
- API keys for the services you want to use (see sections below)
## 1. Install Dependencies
```
cd backend
python -m venv .venv
.venv\Scripts\pip install -e ".[dev]"
```

**Windows note:** Always use `.venv\Scripts\python` and `.venv\Scripts\pip`, not the system `python`.
## 2. Configure `.env`
Copy the example and fill in your values:

```
copy .env.example .env
```

### Required
```
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key
BRAIN_DATA_PATH=C:\path\to\your\markdown\data
```

### LLM — pick one
**Option A: Anthropic API key**

```
ANTHROPIC_API_KEY=sk-ant-...
```

**Option B: Claude subscription (Pro/Max)**

```
USE_SUBSCRIPTION=true
# Requires claude CLI installed and authenticated:
# npm install -g @anthropic-ai/claude-code
# claude (complete login flow)
```

**Option C: Ollama (local or cloud)**

```
OLLAMA_BASE_URL=http://localhost:11434  # local
# or for Ollama Cloud:
OLLAMA_BASE_URL=https://ollama.com
OLLAMA_MODEL=gpt-oss:120b-cloud  # include -cloud suffix for cloud models
OLLAMA_API_KEY=your-key  # from ollama.com/settings/keys
```

If no LLM is configured, the system falls back to Ollama automatically.
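The provider precedence above can be sketched as a small selection function. This is a hypothetical helper for illustration only (`pick_llm_provider` is not a real backend function; the actual logic lives in the backend's config):

```python
import os

def pick_llm_provider(env: dict) -> str:
    """Illustrative LLM provider selection mirroring the options above.

    Hypothetical helper, not the backend's actual code.
    """
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"            # Option A
    if env.get("USE_SUBSCRIPTION", "").lower() == "true":
        return "claude-subscription"  # Option B
    # Option C, and also the automatic fallback when nothing is configured
    return "ollama"

print(pick_llm_provider(dict(os.environ)))
```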
### Memory — pick one

**Option A: Mem0 Cloud (recommended)**

```
MEM0_API_KEY=m0-...
```

**Option B: Local Mem0 (no key needed)**

```
# Leave MEM0_API_KEY unset — uses local Qdrant automatically
```

### Embeddings (optional but recommended)
Voyage AI gives significantly better search quality than OpenAI embeddings.

```
VOYAGE_API_KEY=pa-...
```

Or use OpenAI embeddings:

```
OPENAI_API_KEY=sk-proj-...
```

## 3. Apply Database Migrations
Run all 16 migrations against your Supabase project:

```
.venv\Scripts\python -m second_brain.migrate
```

This creates all tables, indexes, RLS policies, and vector search RPCs.
## 4. Verify the Setup

```
.venv\Scripts\python -c "from second_brain.config import BrainConfig; c = BrainConfig(); print('Config OK')"
```

If this prints without errors, the config loaded correctly.
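For a slightly stronger check, a short script can confirm the required variables from step 2 are present. A minimal sketch (it only checks the three names listed under "Required"; `missing_vars` is an illustrative helper, not part of the backend):

```python
import os

REQUIRED = ("SUPABASE_URL", "SUPABASE_KEY", "BRAIN_DATA_PATH")

def missing_vars(env: dict) -> list:
    """Return the required .env variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

missing = missing_vars(dict(os.environ))
print("Missing: " + ", ".join(missing) if missing else "Config OK")
```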
## 5. Vault Ingestion
Vault ingestion pulls your Obsidian/markdown vault into Mem0 + Supabase for semantic search.
### Configure vault path

```
VAULT_PATH=C:\path\to\your\vault
```

### Transcript summarization model
By default, transcripts are summarized using your primary LLM (Claude). To use Ollama Cloud instead (saves Claude credits):

```
VAULT_INGESTION_USE_OLLAMA=true
OLLAMA_BASE_URL=https://ollama.com
OLLAMA_MODEL=gpt-oss:120b-cloud
OLLAMA_API_KEY=your-key
```

Regular markdown files (notes, patterns, examples) are never sent to an LLM — they go directly to Mem0 + Supabase. Only transcripts need a model for summarization.
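That routing rule can be sketched as follows (`needs_llm_summary` is a hypothetical function for illustration; the real ingester decides this internally):

```python
from pathlib import PurePosixPath

def needs_llm_summary(vault_path: str) -> bool:
    """True only for files under a transcripts/ folder; everything else
    is ingested into Mem0 + Supabase without touching an LLM."""
    return "transcripts" in PurePosixPath(vault_path).parts

for f in ("clients/eden/transcripts/call-01.md", "gtm/positioning.md"):
    print(f, "->", "LLM summarizer" if needs_llm_summary(f) else "Mem0 + Supabase")
```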
### Run ingestion
```
# Preview what would be ingested (no writes)
.venv\Scripts\python -m second_brain.cli ingest --dry-run

# Preview transcripts only
.venv\Scripts\python -m second_brain.cli ingest --dry-run --transcripts-only

# Ingest everything
.venv\Scripts\python -m second_brain.cli ingest

# Ingest transcripts only
.venv\Scripts\python -m second_brain.cli ingest --transcripts-only

# Filter to a specific client
.venv\Scripts\python -m second_brain.cli ingest --transcripts-only --client eden

# Ingest from a specific vault path (overrides .env)
.venv\Scripts\python -m second_brain.cli ingest C:\path\to\vault
```

### Vault folder structure
The ingester classifies files by their folder path:

| Path pattern | user_id | category |
|---|---|---|
| `sales/content/cc-content-system/{name}-gpt/memory/patterns/` | `{name}` | patterns |
| `sales/content/cc-content-system/{name}-gpt/memory/examples/linkedin/` | `{name}` | examples (linkedin) |
| `clients/{client}/transcripts/` | brainforge | transcript |
| `clients/{client}/meeting-notes/` | brainforge | meeting_notes |
| `gtm/`, `engineering/`, `company/` | brainforge | gtm / engineering / company |
Files in `.claude`, `.codex`, `brain-health`, `experiences`, and `projects` are skipped automatically.
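The table and skip list can be read as a single classification function. A minimal sketch of that mapping (illustrative only; the ingester's actual rules may differ in detail):

```python
import re
from pathlib import PurePosixPath

SKIP_DIRS = {".claude", ".codex", "brain-health", "experiences", "projects"}

def classify(path: str):
    """Classify a vault file per the folder table above (sketch).

    Returns (user_id, category), or None for skipped/unmatched files.
    """
    parts = PurePosixPath(path).parts
    if SKIP_DIRS.intersection(parts):
        return None
    m = re.match(
        r"sales/content/cc-content-system/(.+)-gpt/memory/"
        r"(patterns|examples/linkedin)/", path)
    if m:
        name, sub = m.groups()
        return (name, "patterns" if sub == "patterns" else "examples (linkedin)")
    m = re.match(r"clients/[^/]+/(transcripts|meeting-notes)/", path)
    if m:
        return ("brainforge",
                "transcript" if m.group(1) == "transcripts" else "meeting_notes")
    if parts and parts[0] in {"gtm", "engineering", "company"}:
        return ("brainforge", parts[0])
    return None
```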
## 6. Start the MCP Server
```
# stdio mode (default — for Claude Code / Cursor)
.venv\Scripts\python -m second_brain.mcp_server

# or via the shell script
bash scripts/start_mcp.sh
```

See docs/mcp-usage-guide.md for connecting clients (Claude Code, Cursor, Windsurf).
## 7. Docker (optional)
```
docker-compose up --build
```

Set `MCP_TRANSPORT=http` and `MCP_PORT=8000` in `.env` for HTTP mode.
## Troubleshooting
**ModuleNotFoundError** — you’re using the system Python, not the venv:

```
.venv\Scripts\python ...  # always prefix with this
```

**Ollama returns markdown instead of JSON** — the model ignored the format schema. This is handled automatically by retry + JSON extraction logic in the summarizer.
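A minimal version of that extraction step looks like this (a sketch, not the summarizer's actual code): strip an optional markdown fence, then parse.

```python
import json
import re

FENCE = "`" * 3  # a markdown code fence, built up to avoid nesting fences here

def extract_json(reply: str):
    """Parse JSON from an LLM reply, tolerating a fenced json block."""
    m = re.search(FENCE + r"(?:json)?\s*(.*?)" + FENCE, reply, re.DOTALL)
    return json.loads(m.group(1) if m else reply)

print(extract_json("Here you go:\n" + FENCE + 'json\n{"summary": "ok"}\n' + FENCE))
```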
**No vault path configured** — set `VAULT_PATH` in `.env` or pass the path as an argument:

```
.venv\Scripts\python -m second_brain.cli ingest C:\path\to\vault
```

**Supabase migration errors** — check that your `SUPABASE_KEY` is the service role key (not the anon key). The service role key is required to apply migrations.