OpenCode CLI — Brainforge (Azure + Google)

Internal cheat sheet for OpenCode in brainforge-platform. The repo-root opencode.jsonc carries MCP + instructions + compaction only, so the TUI/Desktop Connect provider flow stays stock. Put models and provider blocks in ~/.config/opencode/opencode.json: start from opencode-user-config.azure-eastus-legacy.example.jsonc or the minimal opencode-user-config.example.json (East US brainforge-openai, azure-eastus/*), or connect other providers in the app. Azure inventory: azure-models-for-devs.md.

Date: 2026-04-21


Prerequisites

  • OpenCode CLI installed (opencode on PATH).
  • Work from brainforge-platform git root (or a subdirectory under it) so OpenCode loads project opencode.jsonc (see config locations).
  • Optional global overrides: ~/.config/opencode/opencode.json merges with the project file; do not duplicate mcp.supabase there if you hit OAuth Unrecognized client_id (see Supabase notes below).

Full permissions (auto-allow all tools)

For unrestricted tool use without approval prompts, configure in ~/.config/opencode/opencode.json:

{
  "approvalPolicy": "auto",
  "mcp": { ... }
}

Or via CLI:

opencode config set approvalPolicy auto

Verify:

cat ~/.config/opencode/opencode.json | grep approvalPolicy

Config layers (why CLI ≠ Desktop sometimes)

OpenCode does not read only this repo. In practice you have:

| Layer | Role |
| --- | --- |
| Repo opencode.jsonc | MCP list, instructions, compaction, watcher — no provider / model |
| User ~/.config/opencode/opencode.json | Your machine-wide overrides (Compound Engineering and other installers may write here) |
| Auth ~/.local/share/opencode/auth.json | Provider keys the app/CLI stored after Settings → Providers or opencode auth login |
| Shell (CLI only) | export / direnv / .env.local — Desktop launched from the Dock usually does not see these |

Ground truth: from the repo root run opencode debug config and confirm model, provider, and mcp match what you expect. If Desktop and CLI disagree, compare whether the CLI had env vars in the shell.
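The layer check above can be sketched as a small script that reports which config files exist on this machine (paths follow the table; XDG_CONFIG_HOME / XDG_DATA_HOME overrides are not handled in this sketch):

```shell
#!/bin/sh
# Report which OpenCode config layers are present on disk.
report_layer() {
  # $1 = label, $2 = path; prints "<label>: present|missing (<path>)"
  if [ -f "$2" ]; then
    printf '%s: present (%s)\n' "$1" "$2"
  else
    printf '%s: missing (%s)\n' "$1" "$2"
  fi
}

report_layer "repo" "./opencode.jsonc"
report_layer "user" "$HOME/.config/opencode/opencode.json"
report_layer "auth" "$HOME/.local/share/opencode/auth.json"
```

Run it from the repo root; a "missing" auth line usually means no provider has been connected yet on this machine.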

Uninstall / reinstall (official CLI — prefer this)

OpenCode ships a first-class uninstaller; prefer it to manually rm -rf-ing ~/.opencode, ~/.local/share/opencode, or ~/.config/opencode (hand deletes can miss pieces or fight open file handles under worktree/).

opencode uninstall --help        # flags
opencode uninstall --dry-run     # show what would be removed
# Typical full reset (non-interactive): removes CLI + related files; omit --keep-* for a clean wipe
opencode uninstall --force

Useful flags (see opencode uninstall --help for your installed version):

  • --dry-run — print what would be removed, change nothing.
  • -c / --keep-config — keep ~/.config/opencode/ (default without this flag is to remove config when doing a full uninstall — confirm in --help on your version).
  • -d / --keep-data — keep session data / snapshots under ~/.local/share/opencode/ when you only want to refresh the binary.
  • -f / --force — skip confirmation prompts (automation-friendly).

After opencode uninstall, reinstall the CLI with curl -fsSL https://opencode.ai/install | bash (or your usual installer). Desktop may still be a separate artifact (/Applications/OpenCode.app, Homebrew opencode-desktop, or an older Beta app) — remove or reinstall that channel explicitly if the GUI still looks stale.

Searchable team write-up (when hand deletes fail or you want the full rationale): docs/solutions/workflow-issues/opencode-official-uninstall-and-reinstall-2026-04-21.md.

Clean slate (when everything feels stale)

  1. Quit OpenCode Desktop (Cmd+Q) and stop any long-running opencode TUI sessions.

  2. Backup then trim user noise (adjust paths if XDG_CONFIG_HOME is set):

    ts=$(date +%Y%m%d%H%M%S)
    cp ~/.config/opencode/opencode.json ~/.config/opencode/opencode.json.bak.$ts 2>/dev/null || true
    cp ~/.local/share/opencode/auth.json ~/.local/share/opencode/auth.json.bak.$ts 2>/dev/null || true
  3. Option A — minimal user file: replace ~/.config/opencode/opencode.json with {} if you want only stock providers; then merge opencode-user-config.azure-eastus-legacy.example.jsonc (or opencode-user-config.example.json) when you need Brainforge Azure again. Re-add Zen/Google in the app after.

  4. Option B — fix auth only: edit auth.json (from backup) to remove duplicate Azure slots (e.g. stray azure-cognitive-services vs azure-eastus) so one key maps to the provider you use.

  5. Re-auth MCP: opencode mcp auth linear, opencode mcp auth supabase, opencode mcp auth exa, etc., from a shell with the repo as cwd.

  6. Reopen Desktop with brainforge-platform as the workspace folder (the directory that contains opencode.jsonc, not a subfolder); pick a model after your user opencode.json defines providers (or use /connect for Zen/OpenAI/etc.).
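Steps 2–3 (backup, then minimal user file) can be sketched as a small helper, assuming default XDG paths:

```shell
#!/bin/sh
# reset_user_config <path>: back up the file (if present) with a timestamp,
# then replace it with {} so only stock providers remain.
reset_user_config() {
  cfg=$1
  ts=$(date +%Y%m%d%H%M%S)
  mkdir -p "$(dirname "$cfg")"
  [ -f "$cfg" ] && cp "$cfg" "$cfg.bak.$ts"
  printf '{}\n' > "$cfg"
  echo "reset $cfg"
}

# Typical invocation (uncomment to run against the real user file):
# reset_user_config "$HOME/.config/opencode/opencode.json"
```

Merge the Brainforge Azure example file back in afterwards when you need azure-eastus/* again.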

MCPs show as disconnected in OpenCode Desktop

  • Wrong workspace: Local MCP hubspot uses ${workspaceFolder}/scripts/.... Opening only apps/platform (or any subfolder) breaks that path — open the repo root.
  • OAuth not finished: Run opencode mcp auth linear, opencode mcp auth supabase, etc. from a terminal whose cwd is the repo root; complete the browser flow. Desktop reads the same ~/.local/share/opencode/ token files as the CLI.
  • Verify: opencode mcp list and opencode debug config (check merged mcp). If CLI lists servers but the Desktop UI does not, fully quit the app (Cmd+Q), update OpenCode, and reopen the repo root.
  • Full playbook: opencode-desktop-azure-setup.md.

Chats live under ~/.local/share/opencode/ (e.g. SQLite); the steps above do not wipe history unless you delete those files on purpose.


OpenCode Zen + Google (optional providers)

OpenCode Zen is pay-as-you-go curated routing (models show as opencode/<id>, e.g. opencode/gpt-5.4, opencode/gpt-5.4-mini). After you buy credits: sign in at opencode.ai/auth, copy your Zen API key, then in OpenCode Desktop use /connect → OpenCode Zen (or the app’s Providers flow) and paste the key — same idea as opencode auth login in the CLI for other providers.

Google Gemini: CLI example: opencode auth login → choose Google → API key (as in your terminal log). Desktop: Settings → Providers → Google.

The model picker lists whatever OpenCode discovers from merged config and connected providers. With no provider block in the repo file, Connect a provider should show the normal catalog again; Brainforge Azure is optional via user config (example file above) or the app’s Azure flow.


Compound Engineering plugin (OpenCode + Codex)

Every’s Compound Engineering plugin supplies shared workflows (/ce:brainstorm, /ce:plan, /ce:work, /ce:review, etc.). Cursor loads it via marketplace settings (this repo: .cursor/settings.json). OpenCode and Codex use Every’s converter CLI; it writes user-level files (commands/skills, and for OpenCode may deep-merge opencode.json MCP config — the tool backs up existing files).

One-shot install (OpenCode + Codex on this machine):

bunx @every-env/compound-plugin install compound-engineering --to opencode --also codex

OpenCode only: --to opencode
Codex only: --to codex
All detected tools: --to all

After install, run /ce-setup inside OpenCode or Codex once to check dependencies (gh, jq, etc.). To refresh when the upstream plugin changes, re-run the same bunx command (or use their sync flow from ~/.claude/ per the upstream README).


Model ID format

OpenCode uses provider/deployment-or-model-id:

opencode models
opencode models azure-eastus
opencode models google

Pick a line from the list, or use -m / --model:

opencode -m azure-eastus/gpt-5.4
opencode run -m azure-eastus/gpt-5.4 "Summarize AGENTS.md in three bullets"
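The provider/model split is simply the text before and after the first slash; a tiny helper (hypothetical, not part of the CLI) makes that explicit:

```shell
#!/bin/sh
# split_model_id <provider/model>: print provider and model on separate lines.
# Splits on the first slash only, so model ids keep any further slashes intact.
split_model_id() {
  id=$1
  printf 'provider=%s\n' "${id%%/*}"
  printf 'model=%s\n' "${id#*/}"
}

split_model_id "azure-eastus/gpt-5.4"
```

Useful when scripting around opencode run to validate that an id names a provider you actually have configured.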

Azure OpenAI — East US (azure-eastus)

Merge the provider / model keys you need into ~/.config/opencode/opencode.json (copy from opencode-user-config.azure-eastus-legacy.example.jsonc or opencode-user-config.example.json and extend models per azure-models-for-devs.md for whatever is deployed on brainforge-openai). opencode models azure-eastus only works after that provider exists in merged config.

| Provider ID | Resource | Env var | Endpoint note |
| --- | --- | --- | --- |
| azure-eastus | brainforge-openai | AZURE_OPENAI_EASTUS_API_KEY | Example uses baseURL: https://brainforge-openai.openai.azure.com/openai (no /v1 suffix) + resourceName + apiKey. |

Policy (OpenCode CLI + Desktop): Use azure-eastus/* + AZURE_OPENAI_EASTUS_API_KEY + resourceName (or AZURE_RESOURCE_NAME=brainforge-openai). Optional helper: scripts/opencode-cli-legacy-eastus.sh. Inventory: azure-models-for-devs.md.
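A hedged sketch of the user-config shape (field names follow the example files named above; the schema may differ between OpenCode versions, so copy from opencode-user-config.azure-eastus-legacy.example.jsonc rather than trusting this verbatim):

```jsonc
// ~/.config/opencode/opencode.json — merged over the repo file.
{
  "provider": {
    "azure-eastus": {
      "options": {
        // No /v1 suffix, per the endpoint note above.
        "baseURL": "https://brainforge-openai.openai.azure.com/openai",
        "resourceName": "brainforge-openai",
        "apiKey": "{env:AZURE_OPENAI_EASTUS_API_KEY}"
      },
      // Only list deployments that actually exist on brainforge-openai.
      "models": {
        "gpt-5.4": {},
        "gpt-5.4-mini": {}
      }
    }
  }
}
```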

GPT-5.4 chat (East US)

cd /path/to/brainforge-platform
export AZURE_OPENAI_EASTUS_API_KEY="$(az cognitiveservices account keys list -g brainforge -n brainforge-openai --query key1 -o tsv)"
opencode -m azure-eastus/gpt-5.4

GPT-5.4 mini (usually lower latency than full 5.4):

opencode -m azure-eastus/gpt-5.4-mini

Other useful Azure model IDs (after you merge models on brainforge-openai)

Only list deployments that exist on brainforge-openai (see azure-models-for-devs.md — e.g. gpt-5.1, gpt-5-mini, gpt-4o). Example -m values use the same provider prefix:

| Use case | Example -m |
| --- | --- |
| Strong general chat | azure-eastus/gpt-5.1, azure-eastus/gpt-5-mini |
| Vision chat (when merged) | azure-eastus/gpt-5.4, azure-eastus/gpt-5.4-mini |
| Legacy fast vision | azure-eastus/gpt-4o |

Google (Gemini)

The repo opencode.jsonc does not define chat provider rows (Google, Azure, Zen, etc.). Connect them in the app or merge into ~/.config/opencode/opencode.json. After Google is connected, google/gemini-* appears in opencode models google. Store API keys in OpenCode’s auth store (~/.local/share/opencode/auth.json), not in the vault file.

One-shot CLI with Gemini

After Google is connected once:

cd /path/to/brainforge-platform
opencode -m google/gemini-2.5-flash

List what your install exposes:

opencode models google

Examples that commonly appear: google/gemini-2.0-flash, google/gemini-2.5-flash, google/gemini-2.5-flash-lite, google/gemini-1.5-pro, etc. Names change with OpenCode / models.dev updates; always prefer opencode models google.

Google Cloud Vertex AI (alternative)

OpenCode also documents Vertex AI with GOOGLE_CLOUD_PROJECT, optional VERTEX_LOCATION, and GOOGLE_APPLICATION_CREDENTIALS or gcloud auth application-default login. See OpenCode providers — Google Vertex AI.
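A minimal environment sketch for the Vertex route, assuming a hypothetical project id (substitute your own and treat the OpenCode provider docs as authoritative for variable names):

```shell
# Hypothetical values — replace with your GCP project and preferred region.
export GOOGLE_CLOUD_PROJECT="my-gcp-project"
export VERTEX_LOCATION="us-east5"   # optional, per the note above

# Either point at a service-account key file...
# export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/vertex-sa.json"
# ...or use Application Default Credentials:
# gcloud auth application-default login

echo "Vertex project: $GOOGLE_CLOUD_PROJECT ($VERTEX_LOCATION)"
```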

brainforge-google MCP is not the chat model

The brainforge-google entry in opencode.jsonc is a remote MCP (Google Workspace tools), not the Gemini chat provider. Chat still uses google/... model IDs after you connect Google as a provider.

Figma MCP — OpenCode CLI auth will 403 (client allowlist)

Figma’s remote MCP URL is https://mcp.figma.com/mcp, but OAuth is only enabled for clients listed in the Figma MCP Catalog (e.g. VS Code, Cursor, Claude Code, Codex). See Figma’s remote server installation note at the top of that page.

OpenCode is not on the MCP catalog list, so opencode mcp auth figma with only repo opencode.jsonc (no static OAuth client) often fails at dynamic registration (HTTP 403 / Forbidden). That is separate from the manual registration flow below.

Use Figma MCP from Cursor (.cursor/mcp.json in this repo) or Codex when you do not want OpenCode-specific setup. For OpenCode, follow the community steps that match Figma’s register API.

OpenCode + Figma MCP: follow OpenCode #988 (comment)

Brainforge verified this sequence (2026-04-16): use the curl body exactly as in the comment (including client_name: Claude Code (figma) and no X-Figma-Token header). Then merge client_id / client_secret into ~/.config/opencode/opencode.json only, strip figma from ~/.local/share/opencode/mcp-auth.json, and run opencode mcp auth figma. Do not commit client_secret to git.

  1. Register (verbatim from the comment):

    curl -sS -X POST "https://api.figma.com/v1/oauth/mcp/register" \
      -H "Content-Type: application/json" \
      -d '{
        "client_name": "Claude Code (figma)",
        "redirect_uris": ["http://127.0.0.1:19876/mcp/oauth/callback"],
        "grant_types": ["authorization_code", "refresh_token"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "none"
      }'

    Expect HTTP 200 and JSON with client_id and client_secret. If you get 403 here, the thread also discusses X-Figma-Token with a Figma PAT; team PAT storage: 1Password → Brainforge Platform Team → Figma Brainforge Platform Token (notesPlain field).

  2. Merge into ~/.config/opencode/opencode.json under mcp.figma: enabled, type: "remote", url: "https://mcp.figma.com/mcp", and oauth.clientId / oauth.clientSecret (camelCase in config; map from client_id / client_secret in the response).

  3. Remove the figma key from ~/.local/share/opencode/mcp-auth.json, then run opencode mcp auth figma and finish the browser flow.
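Step 2 as a config sketch (camelCase oauth keys as described above; treat the exact shape as an assumption against your OpenCode version, and keep client_secret out of git):

```jsonc
// ~/.config/opencode/opencode.json (user file only — never the repo file)
{
  "mcp": {
    "figma": {
      "enabled": true,
      "type": "remote",
      "url": "https://mcp.figma.com/mcp",
      "oauth": {
        // Map from client_id / client_secret in the register response.
        "clientId": "<client_id from step 1>",
        "clientSecret": "<client_secret from step 1>"
      }
    }
  }
}
```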

If anything still fails, use Cursor Figma MCP for design work; Figma or OpenCode may change behavior without notice.


Common CLI commands

| Goal | Command |
| --- | --- |
| TUI (default) | opencode or opencode /path/to/project |
| Headless message | opencode run -m azure-eastus/gpt-5.4 "your prompt" |
| List models | opencode models or opencode models azure-eastus |
| Providers / keys | opencode providers (alias: opencode auth) |
| MCP | opencode mcp list, opencode mcp auth linear, opencode mcp auth supabase, opencode mcp auth exa |
| Repo helpers (monorepo root) | npm run opencode:mcp:list, npm run opencode:mcp:auth-supabase |
| Inspect merged config | opencode debug config |
| Overnight agent loop (gnhf) | npm run tools:gnhf -- "your objective" (skill: .agents/skills/keep-running/SKILL.md; use --agent opencode to drive the local OpenCode agent instead of Claude/Codex) |

gnhf (keep-running) vs opencode run

opencode run is one headless session with tools. gnhf is an external orchestrator: many iterations, each commits (or hard-resets) in git, notes.md carries memory between iterations, optional caps (--max-iterations, --max-tokens, --stop-when), and --worktree for parallel objectives. Configure defaults in ~/.gnhf/config.yml; pass --agent opencode when you want gnhf to start opencode serve for that loop (see upstream README “Agents”). OpenCode Desktop/TUI in this repo also receives the keep-running skill text via opencode.jsonc instructions.


Context compaction

Repo root opencode.jsonc sets compaction explicitly: auto and prune stay at defaults (true), and reserved is raised to 32000 tokens so compaction has more headroom on long Codex-scale sessions (MCP + large tool output) before the window overflows mid-summary.
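As a sketch, the repo-level block looks like this (key names assumed from the description above — confirm against the actual opencode.jsonc):

```jsonc
// opencode.jsonc (repo root) — compaction portion only
{
  "compaction": {
    "auto": true,      // default, kept explicit
    "prune": true,     // default, kept explicit
    "reserved": 32000  // extra headroom for long MCP-heavy sessions
  }
}
```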

Human/agent habit layer: .opencode/context-handoff.md is listed under instructions so long runs restate goal, branch/PR, files touched, and blockers after compaction or before handoff.

If behavior still looks wrong, confirm nothing in ~/.config/opencode/opencode.json overrides compaction, then re-check with opencode debug config.


Legacy East US wrapper (oclegacy)

Repo script scripts/opencode-cli-legacy-eastus.sh cds to the monorepo root, sets AZURE_OPENAI_EASTUS_API_KEY (from the env if already set, otherwise az cognitiveservices account keys list … brainforge-openai), and runs opencode -m azure-eastus/gpt-5.4 by default. You still need an azure-eastus provider block in ~/.config/opencode/opencode.json (merge from a copy of the old repo provider if needed — see git history or ask Platform).

/path/to/brainforge-platform/scripts/opencode-cli-legacy-eastus.sh

Override default chat model: OPENCODE_LEGACY_MODEL=azure-eastus/gpt-5.4-mini ./scripts/opencode-cli-legacy-eastus.sh

Subcommands pass through: ./scripts/opencode-cli-legacy-eastus.sh run -m azure-eastus/gpt-5.4 "ping". If the first flag is -m, nothing is injected (your model wins).

Session history: By default (OPENCODE_LEGACY_ISOLATE=0 or unset) the script does not override XDG paths, so you use the same ~/.local/share/opencode/ store as a normal opencode launch (your existing threads stay visible).

Concurrent runs: Set OPENCODE_LEGACY_ISOLATE=1. Then the script sets XDG_CONFIG_HOME / XDG_DATA_HOME / XDG_STATE_HOME / XDG_CACHE_HOME under ~/.opencode-legacy-isolated/<session>/ so each worker has its own SQLite DB (avoids lock / disk-full style failures when many CLIs share one file). Session id is $OPENCODE_LEGACY_SESSION_ID if set, otherwise tty-$$ (or noTTY-$$ without a tty). On first use it copies ~/.local/share/opencode/auth.json into the isolated tree if present (one-time seed). Chats created while isolated live only under ~/.opencode-legacy-isolated/ until you merge or copy them manually; they were never deleted from global storage—just written elsewhere when isolation was on.
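The isolation scheme can be sketched as follows (path layout per the description above; the real logic lives in scripts/opencode-cli-legacy-eastus.sh):

```shell
#!/bin/sh
# isolate_xdg <base> <session>: point all XDG dirs into an isolated tree and
# seed auth.json from the global store on first use. Prints the data dir.
isolate_xdg() {
  base=$1 session=$2
  root="$base/$session"
  export XDG_CONFIG_HOME="$root/config"
  export XDG_DATA_HOME="$root/data"
  export XDG_STATE_HOME="$root/state"
  export XDG_CACHE_HOME="$root/cache"
  mkdir -p "$XDG_DATA_HOME/opencode"
  # One-time seed: copy the global auth file if the isolated copy is absent.
  if [ -f "$HOME/.local/share/opencode/auth.json" ] && \
     [ ! -f "$XDG_DATA_HOME/opencode/auth.json" ]; then
    cp "$HOME/.local/share/opencode/auth.json" "$XDG_DATA_HOME/opencode/"
  fi
  printf '%s\n' "$XDG_DATA_HOME"
}

# Session id fallback mirrors the script: explicit id, else tty-<pid>.
session="${OPENCODE_LEGACY_SESSION_ID:-tty-$$}"
isolate_xdg "$HOME/.opencode-legacy-isolated" "$session"
```

Each worker gets its own SQLite DB under its own data dir, which is what avoids the shared-file lock failures mentioned above.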

Shortcut:

ln -sf "/path/to/brainforge-platform/scripts/opencode-cli-legacy-eastus.sh" ~/bin/oc-legacy
# alias oclegacy="$HOME/bin/oc-legacy"

Parallel smoke test (needs opencode on PATH and Azure key or az): npm run test:opencode-legacy-isolation (runs scripts/opencode-cli-legacy-eastus-selftest.sh).


OpenCode Desktop (stock app)

Use the official OpenCode Desktop app and open brainforge-platform as the workspace folder. This repo does not ship a custom launcher; credentials belong in Settings → Providers (and merged ~/.config/opencode/opencode.json) — macOS open -a OpenCode … does not load your shell .env.

“Invalid subscription key” or wrong API endpoint

The repo file does not choose a default model anymore. Whatever you merged under provider in ~/.config/opencode/opencode.json (or connected in Settings → Providers) must use AZURE_OPENAI_EASTUS_API_KEY for azure-eastus/* on brainforge-openai. Playbook: opencode-desktop-azure-setup.md.

Terminal opencode run can resolve {env:…} placeholders against repo .env / .env.local values when your shell exports those variables; Desktop does not unless keys are in Providers or user opencode.json.

Providers shows Connected but chat still fails

Stale or duplicate Azure entries in ~/.local/share/opencode/auth.json (e.g. azure-cognitive-services vs azure-eastus) can make the UI look fine while the wrong key is used. Use the clean slate steps above: backup, edit or reset auth.json, quit the app fully, reopen.

opencode debug config shows the merged model / provider / mcp — resolve conflicts in user config (the repo file no longer carries provider).


Authenticated browser workflow

If browser work keeps dropping you into a fresh login flow, do not rely on Chrome’s default profile. Modern Chrome blocks reliable remote-debugging reuse against the default data directory, and control-chrome often lands on a separate automation browser anyway.

Use the repo helper to start a dedicated non-default Chrome profile:

npm run opencode:chrome-profile
# or
./scripts/opencode-agent-chrome.sh

Defaults:

  • Profile dir: $HOME/ChromeProfiles/brainforge-agent
  • Remote debugging port: 9222

Optional overrides:

BRAINFORGE_AGENT_CHROME_PROFILE_DIR="$HOME/ChromeProfiles/work-auth" \
BRAINFORGE_AGENT_CHROME_DEBUG_PORT=9333 \
./scripts/opencode-agent-chrome.sh

Recommended workflow:

  1. Launch the dedicated profile once.
  2. Sign into the sites you use repeatedly there (Platform, Google, Slack, Linear, etc.).
  3. Keep reusing that same browser window/profile for authenticated work.
  4. Prefer usecomputer when the agent session exposes it, because it can drive the already-open signed-in browser window directly.
  5. Treat control-chrome as a fallback for flows that do not depend on your existing login state, unless you explicitly wire a CDP attach flow to the dedicated debug port.

Do I need to restart OpenCode?

  • No restart is required just to use the dedicated Chrome profile helper above. It is an external browser launcher.
  • Yes, a restart is usually required if you changed OpenCode Desktop / CLI config, MCP config, or plugin/tool exposure and want a new session to see those changes.
  • This repo currently exposes control-chrome in opencode.jsonc, but it does not add usecomputer to an already-running OpenCode session. Relaunching OpenCode without changing runtime tool exposure will not magically add a new tool.

Known Pitfalls

  • Know which provider your session uses. There is no repo-level default model anymore — it comes from user config, the picker, or -m. azure-eastus/* needs AZURE_OPENAI_EASTUS_API_KEY. Use opencode debug config when in doubt.
  • Remember config layering. OpenCode merges repo config with user-level config under ~/.config/opencode/ and other local files. If behavior looks wrong, inspect the active provider/model first with opencode models and your current session picker before assuming repo config is being ignored.
  • Do not recurse blindly through ~/.local/share/opencode. Broad grep/glob over logs, tool-output, or snapshots can hit size limits. Prefer targeted sqlite3 ~/.local/share/opencode/opencode.db, scoped file reads, or narrower path filters.
  • Re-read before patching. Several local apply_patch failures came from stale patch context. Read the file again immediately before preparing a patch if other agents or generators may have touched it.
  • Validate workdir before shelling. Some local bash failures came from stale worktree paths. Check that the directory still exists before running commands against it.
  • Preflight browser sessions. Some control-chrome_* failures were just disconnected sessions. Run control-chrome_list_pages or create a fresh page before longer browser flows.
  • Question tool schemas are strict. If you build prompts or helpers around the question tool, every option needs a label; malformed options fail fast.
  • Duplicate skill warnings are often local mirror noise. Repo docs already keep Cursor indexing on canonical paths, but OpenCode can still warn if user-level mirrors like ~/.agents/skills/.cursor-mirror/ or duplicated submodule trees are present. Treat repo .cursor/skills/ and .agents/skills/ as the canonical sources for this workspace and prune extra local mirrors if startup logs get noisy.