# OpenCode: “silent” assistant (blank TUI) and empty exports
Last updated: 2026-04-21
Scope: OpenCode CLI / Desktop 1.14.x, Brainforge brainforge-platform + Orca worktrees.
Pairs with: opencode-cli-brainforge.md.
## Symptom
- The TUI shows your messages, but assistant replies look empty (or you keep sending “are you there?” style pings).
- `opencode export <sessionId>` shows assistant rows with no `type: "text"` parts — often only `step-start`/`step-finish` — and `info.tokens` with input/output = 0, `finish: "other"`.
That pattern means the stored session holds no user-visible assistant text, so the UI has nothing durable to render for that turn; do not write it off as “only a rendering glitch” until you prove the text exists somewhere else.
## Fast triage (do this before swapping models)
1) Export and inspect part types
```sh
opencode export "$SESSION_ID" > /tmp/oc-session.json
python3 - <<'PY'
import json
d = json.load(open("/tmp/oc-session.json"))
for i, m in enumerate(d.get("messages", [])):
    info = m.get("info", {})
    types = [p.get("type") for p in m.get("parts", []) if isinstance(p, dict)]
    print(i, info.get("role"), "model=", info.get("modelID"), "finish=", info.get("finish"),
          "tokens=", info.get("tokens"), "parts=", types)
PY
```

- Healthy: assistant messages include `text` (and often tool parts); `tokens` should move off all zeros for real completions.
- Broken: assistant has only `step-*` parts and zero tokens → treat as pipeline / provider / permissions / run mode until logs say otherwise.
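The healthy/broken split above can be turned into a mechanical check over the same export JSON. A sketch assuming the `messages` → `info` / `parts` shape used in step 1; `classify_assistant_turns` is our helper name, not an OpenCode API:

```python
import json

def classify_assistant_turns(export: dict) -> list:
    """Flag assistant turns matching the broken pattern: no `text` part, zero tokens."""
    out = []
    for m in export.get("messages", []):
        info = m.get("info", {})
        if info.get("role") != "assistant":
            continue
        types = [p.get("type") for p in m.get("parts", []) if isinstance(p, dict)]
        tokens = info.get("tokens") or {}
        broken = "text" not in types and not tokens.get("input") and not tokens.get("output")
        out.append({"model": info.get("modelID"), "parts": types, "broken": broken})
    return out

# Run it against the file exported in step 1:
# turns = classify_assistant_turns(json.load(open("/tmp/oc-session.json")))
```

Any entry with `broken` truthy is a turn the TUI has nothing durable to render for.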
2) Correlate logs (lines can be huge)
```sh
grep -n "$SESSION_ID" ~/.local/share/opencode/log/*.log | tail -40
# Then open the hit file at the printed line numbers, truncating each line:
sed -n '880,930p' ~/.local/share/opencode/log/2026-04-21T180244.log | awk '{print substr($0,1,400)}'
```

Look for `service=llm` (model id, `small=`, `agent=`) immediately before `session.prompt … exiting loop`. Note `ERROR` lines in the same minute (MCP noise is common; focus on llm / session / provider).
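If juggling `sed` line ranges gets tedious, the same truncated-context view can be produced in one pass. A local helper, not an OpenCode tool; it only assumes the log is plain text with very long lines:

```python
from pathlib import Path

def grep_truncated(log_path, needle, width=400, context=2):
    """Yield numbered, truncated lines around each match (OpenCode log lines can be huge)."""
    lines = Path(log_path).read_text(errors="replace").splitlines()
    hits = [i for i, line in enumerate(lines) if needle in line]
    shown = set()
    for i in hits:
        for j in range(max(0, i - context), min(len(lines), i + context + 1)):
            if j not in shown:
                shown.add(j)
                yield f"{j + 1}: {lines[j][:width]}"

# log = Path("~/.local/share/opencode/log/2026-04-21T180244.log").expanduser()
# for row in grep_truncated(log, "session.prompt"):
#     print(row)
```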
3) Split “cwd / symlink” vs “model / provider”
| Variable | What to compare |
|---|---|
| Working tree | Session directory under `Documents/.../brainforge-platform` vs `.../orca/workspaces/brainforge-platform/...` (`opencode session list --format json`). |
| Model | Same prompt with `opencode/gpt-5-nano` vs `opencode/minimax-m2.5-free` (or your paid Zen / Azure chat model) — change one thing, re-run steps 1–2. |
Upstream class of issues when `pwd` ≠ `pwd -P`: OpenCode #16528.
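To check whether a given session directory falls into that `pwd` vs `pwd -P` class, it is enough to compare the path with its fully resolved form; plain stdlib, no OpenCode assumptions:

```python
import os

def traverses_symlink(path: str) -> bool:
    """True when `path` differs from its physical form, i.e. `pwd` != `pwd -P` there."""
    return os.path.realpath(path) != os.path.abspath(path)

# Test the tree the session reports as its directory, e.g. the Orca worktree:
# traverses_symlink("/Users/you/.../orca/workspaces/brainforge-platform")
```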
## Headless `opencode run` gotcha (automation / agents)
We observed opencode run creating a session whose permission array includes question → deny (and plan_enter / plan_exit → deny) alongside an assistant turn that has step parts only and zero tokens. That is consistent with a non-interactive agent path that cannot ask and may exit immediately without streaming user-visible text.
Do not treat a hung or empty opencode run in CI as proof the TUI is broken — validate in the interactive TUI and use opencode export on that session id.
When you truly need headless runs, prefer explicit flags/docs for your OpenCode version (e.g. --dangerously-skip-permissions) and still verify export contains text parts.
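That “still verify export contains text parts” step can be a hard gate in automation. A sketch reusing the export shape from the triage section; the function names and exit convention are ours:

```python
import json
import subprocess
import sys

def has_assistant_text(export: dict) -> bool:
    """True if any assistant message in the export carries a user-visible `text` part."""
    for m in export.get("messages", []):
        if m.get("info", {}).get("role") != "assistant":
            continue
        if any(isinstance(p, dict) and p.get("type") == "text" for p in m.get("parts", [])):
            return True
    return False

def gate(session_id: str) -> None:
    """Exit non-zero so CI fails loudly instead of trusting a silent headless run."""
    raw = subprocess.run(["opencode", "export", session_id],
                         capture_output=True, text=True, check=True).stdout
    if not has_assistant_text(json.loads(raw)):
        sys.exit(f"session {session_id}: no assistant text parts; treat run as failed")
```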
## Repro we closed on-machine (2026-04-21)
Session `ses_24eb270f3ffe3rPRW68F25pJSJ` (directory = monorepo `Documents/.../brainforge-platform`), user message `test`:

- `opencode export`: assistant `minimax-m2.5-free`, parts `['step-finish']` only, `tokens` input/output 0, `finish` `other` — matches an empty Build bubble in the TUI.
- Log (`~/.local/share/opencode/log/2026-04-21T182900.log`): `service=llm … modelID=minimax-m2.5-free … agent=build`, then `session.prompt` … `exiting loop` ~650 ms later; no LLM `ERROR` in that slice (only unrelated MCP `-32601 Method not found` noise at session start).
User config at time of repro: ~/.config/opencode/opencode.json had model, small_model, agent.build.model, and agent.plan.model all set to opencode/minimax-m2.5-free.
Mitigation tried (same day): backup ~/.config/opencode/opencode.json.bak.minimax-blank-*, then set agent.build.model to opencode/gpt-5-nano so the Build agent stops using minimax-m2.5-free. Restart OpenCode after editing user config.
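The backup-then-edit mitigation can be scripted. A sketch assuming the user-config layout from the repro (`agent.build.model` inside `~/.config/opencode/opencode.json`) and a backup naming modeled on the `*.bak.minimax-blank-*` pattern above:

```python
import json
import shutil
import time
from pathlib import Path

def repoint_build_model(config_path: Path, new_model: str = "opencode/gpt-5-nano") -> Path:
    """Back up the user config, then point agent.build.model at `new_model`."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup = config_path.with_name(config_path.name + f".bak.minimax-blank-{stamp}")
    shutil.copy2(config_path, backup)
    cfg = json.loads(config_path.read_text())
    cfg.setdefault("agent", {}).setdefault("build", {})["model"] = new_model
    config_path.write_text(json.dumps(cfg, indent=2) + "\n")
    return backup

# repoint_build_model(Path("~/.config/opencode/opencode.json").expanduser())
# Restart OpenCode afterwards so the edited user config is picked up.
```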
Follow-up repro (still 2026-04-21): session ses_24eacc618ffebjtC5ux7B38aZh after the switch — TUI shows Build · GPT-5 Nano · ~1.3s but opencode export still has assistant gpt-5-nano with step-start / step-finish only, tokens all zero, finish other. Logs show @ai-sdk/openai for the Zen gpt-5-nano path and the same immediate session.prompt … exiting loop pattern. So this is not explained by “only MiniMax free is broken”; the empty persisted turn reproduces across both model ids tested here.
Next knobs (outside this doc’s scope): Zen account/credits/support, OpenCode DEBUG logging while reproducing, opencode upgrade, or routing Build through a non-Zen provider you know works (e.g. azure-eastus/... after AZURE_OPENAI_EASTUS_API_KEY is set in the shell and provider is merged in user config).
If primary chat (non-build) is still blank, also move top-level model / small_model off minimax-m2.5-free using the same one-knob discipline.
## What we still treat as an open chain
Confirmed: empty UI lines up with empty persisted assistant text parts + zero token counts in export for the analyzed sessions, and for minimax-m2.5-free + agent=build we have a local repro + log window (above).
Not fully pinned for all models: whether minimax-m2.5-free is failing account-wide, rate-limited, or regressed server-side — gpt-5-nano (or paid opencode/minimax-m2.5) is the practical comparison. opencode run headless remained a poor signal in automation (hung / no stdout) even after the config tweak; keep validating in the interactive TUI.
## Related
- `opencode-cli-brainforge.md` — config layers, `opencode debug config`, model `provider/model` layout.
- Session persistence location: `~/.local/share/opencode/` (see cheat sheet “Clean slate”).
- Official reset path (compound): `docs/solutions/workflow-issues/opencode-official-uninstall-and-reinstall-2026-04-21.md`.