GTM Demo — ESG Auto OEM (OpenWork work platform skin)

Status: Draft (Notion mirror — paste into GTM Engineering project)
Created: 2026-04-12
Updated: 2026-04-12
Author: [name]
GTM owner: [name]
Engineering owner: [name]
Client / story: Auto OEM ESG intelligence (design partner / pitch demo — not a signed SOW)

Notion fields (suggested)

| Field | Value |
| --- | --- |
| Project type | GTM demo / thin vertical slice |
| Platform | Brainforge work platform (OpenWork base) |
| Timebox | 3–5 engineering days (iterate fast) |
| Risk tier | Low (mock data, no production PDF pipeline) |

Quick links

  • OpenClaw for Enterprises offer (positioning): knowledge/sales/services/ai/ai-infrastructure/openclaw-for-enterprises/offer.md
  • Stakeholder one-pager: knowledge/sales/services/ai/ai-infrastructure/openclaw-for-enterprises/sales-assets/copilot-vs-agent-first-one-pager.md
  • Agentic mockup coaching (prompt + agent sequence): knowledge/standards/04-prompts/coaching/agentic-mockup-coaching-playbook.md
  • OpenWork setup / runtime (engineering): knowledge/standards/03-knowledge/engineering/setup/openwork-local-build-setup.md, openwork-hosted-runtime-contract.md
  • Delivery orchestrator (routing pattern reference): .cursor/skills/delivery-orchestrator/SKILL.md

1. Context & problem statement (source: original blurb)

Buyer story (verbatim intent):

It’s ESG for an auto company. They need to ingest their own ESG annual report and 6 other OEMs in for comparison. The PDF is very flat so it’s hard to get structured data from PDFs they grab from public websites of all the OEMs — they have to build something. No need to build the actual backend to dissect the PDFs — it’s a mock up — but the thoughts behind the mockup are: PDF is flat → structure the data for further analysis → mapping is not standard (same concepts, different terminology) → map to standard terms → dashboards and reports on ESG KPIs → chat board for users to ask questions and ask the system to perform tasks.

Current state

  • Stakeholder thinking is still “screens + copilot” first.
  • Brainforge / Vicinity direction is agent-first + embedded execution in surfaces people already use.

Problem statement

  1. Flat PDFs do not arrive analysis-ready; the demo must show structure + governance without pretending OCR/ETL is solved.
  2. Cross-OEM comparison fails without a canonical ESG vocabulary and a visible mapping layer.
  3. Dashboards and reports only land if the demo has a credible data contract (even if synthetic).
  4. “Chat” only sells if it can trigger a bounded task and show status/outcome (not generic Q&A only).

Goal (this demo)

Lightly skin the Brainforge work platform (built on OpenWork) so the demo ships a guided workflow (plan → steps → outputs) that improves the one-shot: the presenter runs one vertical slice end-to-end in under 3 minutes, with clear human vs agent ownership.

Success is decision clarity, not extraction accuracy.


2. Connection to broader GTM goals

GTM objectives

  • Show the step-change from copilot to agent-first operating layer (see one-pager in Quick links).
  • Prove internal velocity: how fast GTM Engineering can iterate on a credible demo skin.

Delivery standards (lightweight)

  • Thin vertical slice; no scope creep into real PDF parsing or full multi-tenant productization.
  • Explicit mock labeling on all synthetic metrics and mappings.

3. Delivery orchestrator routing (demo request → workstream)

Use this table the same way the delivery orchestrator maps intent → skill; here, intent maps to a demo workstream (rows can map 1:1 to Linear labels later).

| User / stakeholder says | Intent | Primary workstream | Output artifact |
| --- | --- | --- | --- |
| “We need OEM comparison” | Benchmark story | Data contract + slice | Canonical KPI list (8–12) + 3 companies in mock data |
| “PDFs are flat” | Ingestion realism | Ingestion UX (mock) | Queue + status + “extracted row” preview (no real parser) |
| “Terminology differs” | Mapping | Mapping workbench | Source term → canonical term → confidence + rationale |
| “Dashboards and reports” | Insight surfaces | Dashboard + report | One comparison view + one exec summary view |
| “Chat to perform tasks” | Agent execution | Task panel | 3 canned actions with run / result / trace stub |
| “One shot demo” | Narrative QA | Demo script | 3-minute talk track + click path |

Multi-stream sequencing (Comprehensive Overview style) — run in this order when scoping the build:

  1. Narrative lock (3-minute story) — prevents orphan screens
  2. Data contract (mock entities) — prevents pretty UI with no logic
  3. Mapping + KPI slice — proves cross-OEM comparability
  4. UI skin on OpenWork — workflow chrome, not net-new product
  5. Chat / task hooks — bounded automation demo
  6. QA pass — missing states, labels, “mock” disclosures
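The intent → workstream routing above can be sketched as a simple lookup. All identifiers below are illustrative slugs, not a real delivery-orchestrator API:

```python
# Hypothetical intent -> workstream routing, mirroring the table above.
# Keys and values are made-up slugs for illustration only.
ROUTES = {
    "benchmark_story": "data_contract_slice",
    "ingestion_realism": "ingestion_ux_mock",
    "mapping": "mapping_workbench",
    "insight_surfaces": "dashboard_report",
    "agent_execution": "task_panel",
    "narrative_qa": "demo_script",
}

def route(intent: str) -> str:
    """Return the primary demo workstream for a classified stakeholder intent."""
    if intent not in ROUTES:
        raise ValueError(f"unrecognized intent: {intent!r}")
    return ROUTES[intent]
```

If the team later clones this into Linear, each value can become a label, keeping the mapping 1:1.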

4. Sub-agent workflow (for content / PM — optional Cursor lane)

If the team uses sub-agents or separate threads to generate demo copy and structure, use this sequence (mirrors agentic-mockup-coaching-playbook.md):

| Step | Agent role | Input | Output |
| --- | --- | --- | --- |
| 1 | Intake classifier | Original blurb + timebox | Pilot mock vs build-later call |
| 2 | Scope definer | Vertical slice rules | Screen list + click path |
| 3 | Data contract | Screen list | Entities + example JSON rows |
| 4 | Terminology mapper | Canonical KPI set | Mapping table + low-confidence examples |
| 5 | UI generator | Schema + path | OpenWork layout / copy deck |
| 6 | Narrative QA | Full stack | Talk track + fixes |

Engineering can ignore agents 1–6 if they already have designs; this table keeps GTM and design in parity with delivery-orchestration habits.


5. Work phases (engineering — checkboxes)

Phase 0 — OpenWork skin (0.5–1 day)

  • Fork or branch OpenWork / work-platform baseline used for Brainforge demos
  • Apply light theming: logo, product name, ESG-adjacent nav labels (no full redesign)
  • Seed one demo workspace with preloaded mock JSON (anchor OEM + 2 peers only for v1)

Phase 1 — Mock ingestion + structure (1 day)

  • Ingestion queue UI: document list, status (queued / extracted (mock) / needs review)
  • Extraction review UI: table of ExtractedMetric rows (synthetic), confidence, link to “source page” (mock)
  • Copy deck: disclose mock extraction in UI microcopy
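One way to shape the synthetic ExtractedMetric rows. Only the `ExtractedMetric` name comes from the phase above; every field name and value here is an assumption for the mock:

```python
from dataclasses import dataclass

@dataclass
class ExtractedMetric:
    """One synthetic row in the extraction review table (all values mock)."""
    company: str          # OEM name or anonymized code, e.g. "OEM A"
    source_phrase: str    # phrase as it appears in the flat PDF
    value: float
    unit: str
    confidence: float     # 0.0-1.0, synthetic confidence score
    source_page: int      # target of the mock "source page" link
    status: str = "extracted (mock)"   # mirrors the ingestion queue statuses

# Example seed row (synthetic figures, not real OEM data)
row = ExtractedMetric(
    company="OEM A",
    source_phrase="Scope 1 GHG emissions",
    value=1_240_000.0,
    unit="tCO2e",
    confidence=0.82,
    source_page=47,
)
```

Keeping `status` as a plain string keeps the seed JSON trivially hand-editable during demo prep.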

Phase 2 — Mapping + KPI canonicalization (1 day)

  • Mapping workbench: source phrase → canonical ESG term → confidence → analyst action (approve / edit)
  • Canonical KPI set locked to 8–12 metrics for the demo
  • Optional: “low confidence” filter for chat task #2 (see Phase 4)
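A mapping workbench record might carry the fields listed in the first bullet; the sketch below (field names and the 0.70 threshold are assumptions) also shows the low-confidence filter that feeds chat task #2 in Phase 4:

```python
# Synthetic mapping records: source phrase -> canonical ESG term.
mappings = [
    {"source_term": "Scope 1 GHG emissions", "canonical": "ghg_scope1",
     "confidence": 0.93, "rationale": "exact taxonomy match", "action": "approve"},
    {"source_term": "Direct carbon output", "canonical": "ghg_scope1",
     "confidence": 0.61, "rationale": "synonym inferred from context", "action": "edit"},
]

LOW_CONFIDENCE = 0.70  # assumed demo threshold, not a product default

def low_confidence(records):
    """Rows below the threshold -- the 'low confidence' filter view."""
    return [r for r in records if r["confidence"] < LOW_CONFIDENCE]
```

The same filter backing both the workbench view and the chat task keeps the two surfaces telling one story.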

Phase 3 — Dashboard + report (1 day)

  • Comparison dashboard: time window control, 3 companies, canonical KPIs only
  • Report / exec summary view: generated narrative + chart thumbnails (static acceptable)
  • Footnote component: “Definitions per internal taxonomy v0 (demo)”
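The comparison dashboard reduces to one lookup over canonical KPIs. A minimal sketch, with entirely synthetic values and assumed KPI slugs:

```python
# Synthetic canonical-KPI store: (company, kpi) -> value.
kpis = {
    ("OEM A", "ghg_scope1"): 1_240_000.0,   # tCO2e, mock
    ("OEM B", "ghg_scope1"): 980_000.0,
    ("OEM C", "ghg_scope1"): 1_510_000.0,
}

def compare(kpi: str, companies: list[str]) -> dict[str, float]:
    """Pull one canonical KPI across the demo's three companies."""
    return {c: kpis[(c, kpi)] for c in companies}

view = compare("ghg_scope1", ["OEM A", "OEM B", "OEM C"])
```

Because the dashboard only ever reads canonical terms, the mapping workbench stays the single place where terminology differences are resolved.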

Phase 4 — Chat / task board (0.5–1 day)

  • Chat or task panel with exactly three starter actions:
    1. Compare Company A vs B on a named KPI family (e.g. emissions intensity)
    2. Show low-confidence mapped metrics
    3. Draft executive summary from current dashboard selection
  • Each action: queued → completed with stub trace (run_id, steps, tool names as strings)
  • No open-ended internet browsing in v1 unless explicitly scoped and safe
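Each canned action can return a stub trace shaped like the bullet above (run_id, steps, tool names as strings). The tool names and dict layout here are placeholders; no real tools run:

```python
import uuid

def run_canned_action(action: str) -> dict:
    """Return a queued -> completed stub trace; nothing is actually executed."""
    return {
        "run_id": str(uuid.uuid4()),
        "action": action,
        "status": "completed",   # demo lifecycle ends in a visible outcome
        "steps": [               # tool names are illustrative strings only
            {"tool": "load_dashboard_selection", "status": "completed"},
            {"tool": "compare_kpis", "status": "completed"},
            {"tool": "render_summary", "status": "completed"},
        ],
    }

trace = run_canned_action("compare_emissions_intensity")
```

Showing the steps list in the UI is what makes the panel read as task execution rather than chat bubbles.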

Phase 5 — Demo hardening (0.5 day)

  • 3-minute talk track in repo or Notion page linked from demo home
  • Reset script: one command or seed button to restore golden demo state
  • Run-through with one cold reader; fix confusing labels only
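The reset-to-golden step could be a single function along these lines; the directory layout is a placeholder, not the real repo structure:

```python
import shutil
from pathlib import Path

def reset_to_golden(golden: Path, live: Path) -> int:
    """Wipe the live demo workspace and restore it from the golden seed pack.

    Returns the number of JSON files restored, as a quick sanity check
    for the presenter before going on stage.
    """
    if live.exists():
        shutil.rmtree(live)
    shutil.copytree(golden, live)
    return sum(1 for _ in live.rglob("*.json"))
```

Wiring this to a "reset" button in the demo shell satisfies the "one command or seed button" requirement either way.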

6. Functional requirements (must-have)

| ID | Requirement | Notes |
| --- | --- | --- |
| FR-1 | Show 1 anchor + 2 peer OEMs in v1 (not all 7) | Expand later; story clarity first |
| FR-2 | Represent flat PDF → structured rows without a real PDF backend | Status + plausible fields |
| FR-3 | Terminology mapping is a first-class screen | Core differentiator |
| FR-4 | Dashboard on canonical KPIs | Align filters to demo script |
| FR-5 | Report view (exec or compliance tone — pick one) | PDF export optional |
| FR-6 | Chat / tasks with three bounded actions | Must show task lifecycle, not only chat bubbles |
| FR-7 | Workflow that improves the one-shot | Guided steps / plan panel so the presenter is not improvising from a blank prompt |

7. Non-goals (explicit)

  • Production PDF parsing, OCR, or web scraping of OEM sites
  • Full ontology management, workflow builder UX for end users, or multi-tenant admin
  • Model training or benchmark claims on accuracy

8. Success metrics (demo-grade)

| Metric | Before | Target | Owner |
| --- | --- | --- | --- |
| End-to-end demo time | N/A | ≤ 3 minutes cold run | GTM eng |
| Screens w/o data story | Many | 0 (each screen ties to entities) | PM / eng |
| Chat actions implemented | 0 | 3 bounded tasks | Eng |
| “Mock” disclosure | Missing | Visible on extraction + mapping | Eng / GTM |
| Reset-to-golden | Manual | ≤ 2 minutes | Eng |

9. Risks & mitigations

| Risk | Mitigation | Owner |
| --- | --- | --- |
| Scope creep to “real ingestion” | Timebox + FR table; escalate to product later | GTM lead |
| Chat feels generic | Enforce 3 canned tasks only in v1 | Eng |
| OEM count expands to 7 in UI | Lock v1 to 3 companies in seed data | Eng |
| OpenWork skin takes too long | Light chrome only; no new design system | Eng |

10. Open questions

  1. Anchor OEM name + two peers — real names vs anonymized (OEM A/B/C)?
  2. Which 8–12 canonical KPIs for the auto ESG slice? (GTM + subject advisor pick once.)
  3. Report tone — exec summary vs compliance checklist? (Pick one for v1.)
  4. Hosting — labs OpenWork URL vs local-only for first iteration?

11. Next steps (kickoff checklist)

  • Confirm 3-company slice + KPI list (owner: [name]) — Due: [date]
  • Confirm OpenWork baseline commit / deploy target (owner: [name]) — Due: [date]
  • Create golden mock JSON pack (owner: [name]) — Due: [date]
  • Dry run 3-minute script; log punch-list (owner: [name]) — Due: [date]

12. Linear execution (optional — clone for GTM board)

Initiative: GTM — ESG Auto OEM OpenWork Demo

Project A — Demo shell

  • Milestone: OpenWork skin + routing
  • Issues: theme, nav, seed data loader, reset script

Project B — ESG vertical slice

  • Milestone: Ingest → map → compare → report → task
  • Issues: one issue per FR-1…FR-7 where sensible

Labels (suggested): gtm, demo, workflow-automation or copilots-agents (per your taxonomy), openwork


13. Sign-offs (internal demo)

  • GTM lead — story and stakeholder framing — [ ]
  • Engineering lead — feasibility and timebox — [ ]

Last updated: 2026-04-12