PR Quality: Context Graph Checklist

Purpose: Assess PR quality not just by “does it compile?” but by “does it improve our process knowledge and make agents smarter over time?”
Related: CONTEXT_GRAPH_APPROACH.md


Context graph lens for PR review

When reviewing a PR, ask these questions:

1. Knowledge graph layer (entities & relationships)

  • Does it add new entities? (agents, archetypes, campaign types, asset types, templates)
  • Does it define relationships? (campaign → archetype → asset, agent → input → output)
  • Are entities traceable? (IDs, frontmatter, or clear naming so we can link them)

Example: Design-ready copy agent PR added:

  • ✅ Entity: design-ready-copy-agent
  • ✅ Entity: service_2pager archetype
  • ✅ Relationship: campaign → archetype → asset
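The entities and relationships above could be recorded in a simple linkable form. A minimal sketch, assuming a hypothetical convention of ID-bearing entity records and directed edges (only the names `design-ready-copy-agent` and `service_2pager` come from the example; the structure itself is invented here):

```python
# Hypothetical knowledge-graph layer: entities carry stable IDs so they
# can be linked; relationships are directed edges between them.
entities = [
    {"id": "design-ready-copy-agent", "type": "agent"},
    {"id": "service_2pager", "type": "archetype"},
]

relationships = [
    # campaign → archetype → asset, expressed as (subject, predicate, object)
    ("campaign", "uses_archetype", "service_2pager"),
    ("service_2pager", "produces", "asset"),
]

def traceable(entity):
    """An entity is traceable if it carries a stable ID we can link against."""
    return bool(entity.get("id"))

# Every entity added by a PR should pass this check.
assert all(traceable(e) for e in entities)
```

The point of the ID requirement is that later layers (traces, patterns, observability) can only correlate what they can name.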

2. Trace layer (process steps & decisions)

  • Does it capture a new process step? (workflow sequence, decision points, gates)
  • Are decisions traceable? (why one archetype over another, which sections were included or excluded)
  • Can we log runs? (inputs, outputs, outcomes)

Example: Design-ready copy agent PR added:

  • ✅ Process step: brief → pick archetype → fill sections → output
  • ✅ Decision point: archetype selection (single service → Service 2-pager)
  • ✅ Run loggable: campaign ID, archetype, input/output paths
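The "run loggable" bullet can be made concrete with a single log-entry shape. A sketch, assuming hypothetical field names that mirror the bullet (campaign ID, archetype, input/output paths) plus a timestamp:

```python
# Hypothetical trace-layer run log: one record per agent invocation.
from datetime import datetime, timezone

def log_run(campaign_id, archetype, input_path, output_path):
    """Build a run-log record capturing inputs, outputs, and when it ran."""
    return {
        "campaign_id": campaign_id,
        "archetype": archetype,
        "input_path": input_path,
        "output_path": output_path,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Example invocation; IDs and paths are illustrative.
run = log_run(
    "camp-001", "service_2pager",
    "briefs/camp-001.md", "out/camp-001-2pager.md",
)
```

Appending one such record per run is enough for the pattern layer to start learning from actual usage.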

3. Pattern layer (learning from traces)

  • Does it learn from past traces? (fixes blind spots, removes unused sections, adjusts based on actual usage)
  • Does it document patterns? (common paths, deviations, why paths differ)

Example: Design-ready copy agent PR:

  • ✅ Taxonomy learned from 7 PDF examples (not invented from scratch)
  • ✅ Anti-pattern section learned from “overblown” example (what NOT to do)
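"Learning from traces" can be as simple as counting which template sections actually show up in outputs. A sketch, assuming hypothetical section names and a run-log field `sections_used` (neither is from the real archetype):

```python
# Hypothetical pattern-layer analysis: find archetype sections that
# never appear in real runs — candidates for removal from the template.
from collections import Counter

ARCHETYPE_SECTIONS = {"headline", "benefits", "proof", "cta", "pricing_table"}

past_runs = [
    {"sections_used": {"headline", "benefits", "cta"}},
    {"sections_used": {"headline", "proof", "cta"}},
]

usage = Counter()
for past_run in past_runs:
    usage.update(past_run["sections_used"])

# Sections defined in the archetype but never used in any trace.
unused = ARCHETYPE_SECTIONS - set(usage)
```

With enough logged runs, the same counts also surface common paths and deviations worth documenting.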

4. Observability layer (enabling future traces)

  • Does it enable trace capture? (run logs, metadata, IDs)
  • Can we measure outcomes? (did the designer use it? did the campaign pass its gate?)
  • Can we correlate entities? (campaign → agent → asset → outcome)

Example: Design-ready copy agent PR:

  • ✅ Run log format suggested in CONTEXT_GRAPH_APPROACH.md
  • ✅ Frontmatter/metadata format documented
  • ✅ First run trace logged (in deployment Slack message)
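Correlating campaign → agent → asset → outcome is a join on a shared ID. A sketch, assuming hypothetical record shapes where run logs and outcome records both carry a `campaign_id`:

```python
# Hypothetical observability-layer join: attach outcome measurements
# (did the designer use it? did it pass the gate?) to the run that
# produced the asset, keyed by campaign ID.
runs = [
    {"campaign_id": "camp-001", "agent": "design-ready-copy-agent",
     "asset": "out/camp-001-2pager.md"},
]
outcomes = [
    {"campaign_id": "camp-001", "designer_used": True, "gate_passed": True},
]

outcomes_by_campaign = {o["campaign_id"]: o for o in outcomes}

correlated = [
    {**run, **outcomes_by_campaign.get(run["campaign_id"], {})}
    for run in runs
]
```

Once runs and outcomes join cleanly, "did this agent make campaigns better?" becomes a query instead of a guess.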

PR description template

When creating a PR, include a “Context graph evolution” section:

## Context graph evolution
 
**New entities:**
- [List entities added: agents, archetypes, campaigns, assets, etc.]
 
**New relationships:**
- [List relationships: campaign → archetype → asset, etc.]
 
**New process steps:**
- [List workflow steps: brief → pick archetype → generate → handoff]
 
**Enables trace capture:**
- [How can we log runs? How can we measure outcomes?]

Quality gates

A PR should improve at least one of these layers:

  • Knowledge graph: Adds entities/relationships (makes system more structured)
  • Trace layer: Captures process steps (makes workflows observable)
  • Pattern layer: Learns from traces (makes agents smarter)
  • Observability: Enables trace capture (makes future learning possible)

Not just: “Follows template” or “Doesn’t break existing code”

But also: “Makes our system smarter about how work actually gets done”