Learning Moment: Test Run Feedback (Plan-Level Unit Economics)

Source: Test run of /discover → /plan_analysis → /review on the Plan-Level Unit Economics & Margin Accuracy outline.
Date: 2026-01-20


What We Learned

Input (00_input_proposal / prepare step)

  • Dataset cross-reference is required. The tables in the Data Plan must be cross-referenced against the data_platform/ documentation and schema in the tryeden (analytics) repo. The check itself is the essential addition, not a regurgitation of the data plan. Everything (data platform, schema) should live under the same tryeden folder.
  • New command: /prepare_analysis_input. Converts team outline (PDF, Doc, Notion) to structured markdown and adds a Dataset cross-reference (baseline) section: for each dataset, table/source | In data_platform/schema? (Y/N/Unknown) | Notes.
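A minimal sketch of what the Dataset cross-reference (baseline) step could do (hypothetical names throughout: `cross_reference`, `known_tables`, and the example datasets are illustrative; the real check runs against data_platform/schema in the tryeden repo):

```python
# Hypothetical sketch of the "Dataset cross-reference (baseline)" check.
# known_tables would be loaded from data_platform/schema in the tryeden repo.

def cross_reference(datasets, known_tables):
    """For each (name, source) dataset in the plan, mark whether it exists
    in the schema. Returns rows of (table/source, Y/N/Unknown, notes)."""
    rows = []
    for name, source in datasets:
        if source is None:
            status, note = "Unknown", "No source given in the outline"
        elif name in known_tables:
            status, note = "Y", ""
        else:
            status, note = "N", "Not found in data_platform/schema"
        rows.append((name, status, note))
    return rows

def to_markdown(rows):
    """Render the rows as the markdown table appended to 00_input_proposal."""
    lines = ["| Table/source | In data_platform/schema? | Notes |",
             "| --- | --- | --- |"]
    lines += [f"| {t} | {s} | {n} |" for t, s, n in rows]
    return "\n".join(lines)

# Example with made-up dataset names:
rows = cross_reference(
    [("orders", "warehouse"), ("plan_margins", "warehouse"), ("ga_sessions", None)],
    known_tables={"orders"},
)
print(to_markdown(rows))
```

The point of the sketch is only the shape of the output table; the matching logic (exact name lookup) is deliberately naive.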

/discover

  • If business context is missing, ask questions. Do not fill it in; no inventing or assuming.
  • Reasoning before summaries. Same as other parts of the analysis: explain reasoning before giving a summary or conclusion.
  • Metric definitions: Must reference data_platform/ Metrics tab (source of truth). All metrics should be compliant; flag any not defined there or that deviate.
  • Data landscape: Use only what is in the input. Do not import context from other requests, tickets, or conversations (e.g. GA, earned media, Kristina). A hallucination occurred: “Josh/CSO’s view of GA not accurate” came from a different request (Kristina/earned media); the Plan-Level Unit Economics PDF does not mention GA. Root cause: persistent or chat context from another conversation. Fix: explicit “Scope to the input only” and “Do not import context from other requests” rules in /discover and .cursorrules.

/review

  • No changes. The CRITICAL/HIGH/MEDIUM/LOW severities and Validation statuses (Passed/Failed/Unable) work well; the user said the review output is fine. A separate scoring framework beyond that is not required.

What to Do Differently

Slash commands

  • /prepare_analysis_input — Added. Converts to markdown + Dataset cross-reference (baseline) against data_platform/schema.
  • /discover — If context missing: ask, don’t fill. Reasoning before summaries. Metric definitions: cross-reference data_platform/ Metrics tab. Data landscape: only from input; do not import from other requests. Important Guidelines: “Do not fill in missing business context”; “Do not import context from other analyses, tickets, or conversations.”
  • .cursorrules — /prepare_analysis_input; /discover note “Scope to the input only”; Important Guidelines: “When reviewing a proposal: use ONLY the input; do not bring in context from other conversations or requests.”
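As an illustration, the .cursorrules additions could look roughly like this (the quoted guideline strings come from the bullets above; the section layout is a sketch, not the actual file):

```
## /prepare_analysis_input
Convert the team outline to structured markdown and append a
"Dataset cross-reference (baseline)" section checked against data_platform/schema.

## /discover
- Scope to the input only.
- Do not fill in missing business context; ask questions instead.
- Do not import context from other analyses, tickets, or conversations.

## Important Guidelines
- When reviewing a proposal: use ONLY the input; do not bring in context
  from other conversations or requests.
```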

Test protocol

  • Step 0 — New: Run /prepare_analysis_input (convert to markdown + dataset cross-reference). Output: 00_input_proposal with a Dataset cross-reference (baseline) section at the end.
  • Step 1 (/discover) — Rules: ask don’t fill; reasoning before summaries; metrics from data_platform Metrics tab; data landscape only from input; no cross-contamination.
  • Section 6 (Iterating) — Cross-analysis contamination: ensure context scoped to input; check .cursorrules and slash command.
  • This Run’s Summary — Note on GA hallucination and the updates made.

Test run artifacts

  • 00_input_proposal — Added “Dataset cross-reference (baseline)” placeholder; /prepare_analysis_input fills it.
  • 01_discover_output — Removed the GA / “Josh/CSO’s view of GA not accurate” line (replaced with proposal-only framing).

What to Remember

Reusable

  • Input = format + baseline check. Converting to markdown is for LLM usability; the dataset cross-reference to data_platform/schema is the mandatory check.
  • Discover: ask, don’t fill; reasoning first; metrics from platform; data only from input. Stops fill-in and cross-request contamination.

Pitfalls

  • Cross-analysis contamination: A prior conversation (e.g. Kristina/GA) can leak into /discover or /review if we don’t explicitly scope to the input. Mitigation: “Use ONLY the input” in the command and .cursorrules.

Root Cause: GA in Discover

“GA not accurate” and “Josh/CSO” appeared in the Data Landscape section of 01_discover_output. They came from the Kristina/earned media request (Josh said GA isn’t accurate for her PR coverage); the Plan-Level Unit Economics proposal never mentions GA. Likely causes: the same Cursor chat or workspace context, or model state carried over from the prior request. Fix: strict “Scope to the input only” and “Do not import context from other requests” rules in /discover and .cursorrules so the model does not reach into other conversations.
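Beyond prompt rules, a cheap post-hoc guardrail is possible. The sketch below (hypothetical, not part of the commands above) scans a discover output for capitalized terms and acronyms that never appear in the input proposal, which would have flagged “GA” and “Josh” here:

```python
import re

# Hypothetical contamination check: flag acronyms/capitalized names in the
# discover output that never appear in the input proposal (e.g. "GA").

def foreign_terms(input_text, output_text):
    """Return terms present in the output but absent from the input."""
    def terms(text):
        # Acronyms and capitalized words; deliberately crude.
        return set(re.findall(r"\b[A-Z][A-Za-z]+\b", text))
    return sorted(terms(output_text) - terms(input_text))

proposal = "Plan-Level Unit Economics and margin accuracy by plan tier."
discover = "Data landscape: GA numbers per Josh are not accurate."
print(foreign_terms(proposal, discover))  # → ['Data', 'GA', 'Josh']
```

False positives (like “Data” above) are expected; the list is a review aid, not a hard gate.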


Slash Commands to Update (done)

  • /prepare_analysis_input — created
  • /discover — ask don’t fill; reasoning before summaries; metrics from data_platform Metrics tab; data landscape only from input; do not import from other requests
  • .cursorrules — /prepare_analysis_input; scope-to-input rule for review/discover