Context Graph Approach in brainforge-vault

Purpose: How we model “how work actually gets done” in brainforge-vault, not just “what exists.” This helps assess PR quality: does it improve our process knowledge and make agents smarter over time?

Related: PRD-design-ready-copy-agent.md, DESIGN_READY_COPY_TAXONOMY.md


What is our context graph?

Our context graph is the dynamic, process-aware layer built on top of static docs in brainforge-vault. It captures:

  • Entities (knowledge graph): campaigns, service lines, clients, agents, templates, assets, people
  • Actions (traces): who did what, in which files/tools, in what order, with what outcomes
  • Process patterns (aggregated traces): “how campaign launches actually happen,” “which archetypes get used by service type,” “where things stall before gate”

Unlike static playbooks (which say “what should happen”), the context graph learns from actual execution traces to answer: “How do we actually do this? What paths work? Where do we deviate and why?”
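
To make the three layers concrete, here is a minimal sketch of the record shapes involved. brainforge-vault has no agreed schema yet, so the class and field names below are illustrative, not a spec:

```python
from dataclasses import dataclass, field

# Illustrative shapes for the three layers; not an existing brainforge-vault
# schema, just a sketch of what each layer records.

@dataclass
class Entity:
    id: str                   # e.g. "design-ready-copy-agent"
    kind: str                 # "agent" | "archetype" | "campaign" | "asset" | "person"

@dataclass
class Relationship:
    source: str               # entity id
    relation: str             # e.g. "uses_archetype", "produced"
    target: str               # entity id

@dataclass
class Trace:
    run_id: str
    inputs: dict = field(default_factory=dict)
    decisions: list = field(default_factory=list)
    outputs: dict = field(default_factory=dict)
    outcome: str = ""         # process patterns emerge by aggregating many traces
```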


How our context graph evolves

1. Each deployment adds new entities and relationships

When you ship a new agent, template, or process:

  • New entities: Agent name, archetype, campaign type, asset type
  • New relationships: “Design-ready copy agent uses Service 2-pager archetype for single-service campaigns”
  • New process steps: “Campaign brief → pick archetype → generate copy → designer handoff”

Example: The design-ready copy agent PR added:

  • Entity: design-ready-copy-agent
  • Entity: service_2pager archetype (and 4 others: Sprint, Seasonal, Strategy guide, Partner kit)
  • Relationship: insurance-broker-lead-intake campaign → service_2pager archetype → insurance-broker-lead-intake-2pager.md
  • Process step: “brief → taxonomy lookup → fill sections → output markdown”
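
Expressed as graph records, that PR boils down to a handful of triples. A sketch, with hypothetical relation names and the entity ids from the example above:

```python
# Entities and relationships the PR added, as (source, relation, target)
# triples. Relation names are made up for illustration.
entities = [
    ("design-ready-copy-agent", "agent"),
    ("service_2pager", "archetype"),
    ("insurance-broker-lead-intake", "campaign"),
]
edges = [
    ("insurance-broker-lead-intake", "uses_archetype", "service_2pager"),
    ("design-ready-copy-agent", "applies", "service_2pager"),
    ("design-ready-copy-agent", "produced", "insurance-broker-lead-intake-2pager.md"),
]
```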

2. Each agent run becomes a trace

Every time someone uses an agent or follows a process:

  • Inputs: Campaign brief path, chosen archetype, case study paths
  • Decisions: Why this archetype vs that one, which sections included/excluded
  • Outputs: Generated file path, manual edits, whether it was used
  • Outcomes: Did designer use it? Did campaign hit gate? Did it convert?

Example trace (design-ready copy agent):

```
Run ID: design-ready-copy-2026-02-04-insurance-broker
Campaign: insurance-broker-lead-intake
Archetype: service_2pager
Input: gtm/campaign-launch/campaigns/insurance-broker-lead-intake.md
Output: gtm/marketing-assets/design-ready-copy/insurance-broker-lead-intake-2pager.md
Decisions: Single service → Service 2-pager (not Sprint, not Seasonal)
Outcome: Used by designer (Hannah), no major edits
```
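
To make traces like this machine-readable from day one, one JSON line per run is enough. A sketch, assuming a hypothetical per-agent traces.jsonl file (the format is not settled):

```python
import json

# Hypothetical JSON-lines trace record mirroring the run above; one line per
# run keeps later aggregation trivial. The traces.jsonl path is an assumption.
trace = {
    "run_id": "design-ready-copy-2026-02-04-insurance-broker",
    "campaign": "insurance-broker-lead-intake",
    "archetype": "service_2pager",
    "input": "gtm/campaign-launch/campaigns/insurance-broker-lead-intake.md",
    "output": "gtm/marketing-assets/design-ready-copy/insurance-broker-lead-intake-2pager.md",
    "decisions": ["Single service -> Service 2-pager (not Sprint, not Seasonal)"],
    "outcome": "Used by designer, no major edits",
}

with open("gtm/agents/design-ready-copy-agent/traces.jsonl", "a") as f:
    f.write(json.dumps(trace) + "\n")
```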

3. Traces aggregate into process patterns

Over time, traces reveal:

  • High-value processes: Campaign launches that hit Beta/Market Ready gates
  • Common paths: “Insurance-like campaigns usually use Service 2-pager; dbt campaigns use Sprint”
  • Deviations: “Campaigns tagged ‘sprint’ sometimes end up as Service 2-pager despite brief saying ‘one-pager’”
  • Blind spots: Steps we’re missing (e.g. no case study matching → placeholder used)

Example pattern (from multiple campaign traces):

  • Pattern: “Campaign launch → design-ready copy”
  • Common path: Brief → pick archetype → generate → designer handoff → gate decision
  • Deviation: Some campaigns skip design-ready copy (use existing template)
  • Why: Time pressure; designer has existing asset
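
Surfacing these patterns can start as a one-screen script. A sketch, assuming traces are logged as JSON lines as in the earlier example (the path is hypothetical):

```python
import json
from collections import Counter

# Count (campaign, archetype) pairs across all runs to surface common paths
# and deviations. Assumes one JSON trace per line, as sketched earlier.
pairs = Counter()
with open("gtm/agents/design-ready-copy-agent/traces.jsonl") as f:
    for line in f:
        t = json.loads(line)
        pairs[(t["campaign"], t["archetype"])] += 1

for (campaign, archetype), n in pairs.most_common():
    print(f"{campaign} -> {archetype}: {n} run(s)")
```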

4. Agent decisions improve from traces

Agents learn from traces:

  • Archetype selection: “Campaigns with ‘sprint’ in title → Sprint archetype 80% of the time”
  • Section inclusion: “Designers always delete section X from 2-pagers → make it optional”
  • Case study matching: “No insurance case studies yet → use demo proof nugget”

Example: After 10 campaign runs, the design-ready copy agent could learn:

  • “Single service + 100M segment → Service 2-pager (not Sprint)”
  • “Time-bound offer + ‘holiday’ → Seasonal archetype”
  • “Designers consistently remove ‘Trusted by’ when no case study → hide that section if empty”
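
A sketch of what trace-informed archetype selection could look like: prefer whatever historically won for similar briefs, and fall back to a static default. Matching brief-title words against past campaign ids is a deliberately crude, assumed heuristic:

```python
from collections import Counter

def pick_archetype(brief_title: str, traces: list) -> str:
    """Prefer the archetype that past runs chose for similar campaigns."""
    words = brief_title.lower().split()
    votes = Counter(
        t["archetype"]
        for t in traces                      # trace dicts as sketched earlier
        if any(w in t["campaign"] for w in words)
    )
    if votes:
        return votes.most_common(1)[0][0]    # e.g. "sprint" briefs -> Sprint
    return "service_2pager"                  # static default when no history matches
```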

How to contribute to the context graph

When creating a PR

Ask: “How does this PR improve our process knowledge?”

  1. Does it add new entities or relationships?

    • New agent? New archetype? New campaign type? Document it.
    • Example: “Adds design-ready-copy-agent entity; links campaigns → archetypes → assets”
  2. Does it capture a new process step or decision point?

    • New workflow? New gate? New tool integration? Document the sequence.
    • Example: “Adds process step: campaign brief → archetype selection → copy generation”
  3. Does it learn from past traces?

    • Did you look at how campaigns actually ran? Did you fix a blind spot?
    • Example: “Removes section X from taxonomy because designers always delete it”
  4. Does it enable future trace capture?

    • Can we log runs? Can we measure outcomes? Can we correlate entities?
    • Example: “Adds run log format so we can track archetype selection over time”

When deploying a new agent or process

Use the deployment-win Slack template and include:

  • Process this replaces or creates (the “how” shift)
  • PRD highlights (the decision points that become traceable)
  • Where it lives (so traces can reference entity IDs)

Then, log the first run as a trace (see “Lightweight trace capture” below).


Lightweight trace capture (today)

We don’t have full Glean-style observability yet. Start with:

1. Frontmatter/metadata in key files

Add to campaign briefs, agent PRDs, generated assets:

```yaml
---
campaign_id: insurance-broker-lead-intake
service_line: insurance-workflow-automation
archetype: service_2pager
agent: design-ready-copy-agent
run_id: design-ready-copy-2026-02-04-insurance-broker
---
```
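
Once key files carry this frontmatter, correlating entities is a small script away. A sketch, assuming PyYAML is available; the glob path matches the output directory used in the example trace:

```python
import pathlib
import yaml  # assumes PyYAML is installed

def read_frontmatter(path: pathlib.Path) -> dict:
    """Return the YAML frontmatter of a markdown file, or {} if none."""
    text = path.read_text()
    if not text.startswith("---"):
        return {}
    parts = text.split("---", 2)
    if len(parts) < 3:
        return {}
    return yaml.safe_load(parts[1]) or {}

# Join generated assets back to runs via run_id / campaign_id / archetype.
for md in pathlib.Path("gtm/marketing-assets/design-ready-copy").glob("*.md"):
    meta = read_frontmatter(md)
    if meta.get("run_id"):
        print(meta["run_id"], meta.get("campaign_id"), meta.get("archetype"))
```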

2. Run log (markdown table)

Create gtm/agents/RUN_LOG.md or per-agent logs:

| Run ID | Campaign | Archetype | Input | Output | Decisions | Outcome | Date |
| --- | --- | --- | --- | --- | --- | --- | --- |
| design-ready-copy-2026-02-04-insurance-broker | insurance-broker-lead-intake | service_2pager | brief.md | 2pager.md | Single service → 2-pager | Used, no edits | 2026-02-04 |
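
Appending a run can be a one-liner in whatever wrapper invokes the agent. A sketch using the example row; in practice the agent run would supply the values:

```python
# Append one run to gtm/agents/RUN_LOG.md in the table format above.
fields = [
    "design-ready-copy-2026-02-04-insurance-broker",  # Run ID
    "insurance-broker-lead-intake",                   # Campaign
    "service_2pager",                                 # Archetype
    "brief.md",                                       # Input
    "2pager.md",                                      # Output
    "Single service → 2-pager",                       # Decisions
    "Used, no edits",                                 # Outcome
    "2026-02-04",                                     # Date
]
with open("gtm/agents/RUN_LOG.md", "a", encoding="utf-8") as f:
    f.write("| " + " | ".join(fields) + " |\n")
```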

3. Process pattern docs

When you see a pattern emerge, document it:

  • gtm/agents/PATTERNS.md: “Campaigns with X → archetype Y”
  • gtm/campaign-launch/PROCESS_PATTERNS.md: “Common paths from brief → gate”

PR quality assessment: context graph lens

When reviewing a PR, ask:

  • Does it add entities/relationships? (knowledge graph layer)
  • Does it capture process steps? (trace layer)
  • Does it learn from past traces? (pattern layer)
  • Does it enable future trace capture? (observability layer)

Not just: “Does it compile?” or “Does it follow the template?”

But also: “Does it make our system smarter about how work actually gets done?”


Future: full context graph

As we scale:

  • Deep connectors: Integrate with HeyReach, HubSpot, Notion, Slack to capture actual tool usage
  • Personal graphs: Per-person timelines of work (privacy-preserving)
  • Aggregated patterns: Anonymized, high-value process patterns across the team
  • Agent learning: Agents that suggest next steps based on traces, not just static rules

For now, start with explicit entity IDs, run logs, and process pattern docs so we can answer: “How do we actually launch campaigns? What works? Where do we deviate?”