Slack Message: Automated Feedback Loop System

Deployment: Automated Feedback Loop for Agents

📎 Demo: [Add demo link when recorded]

What this is
A system that closes the loop from agent runs → feedback → pattern learning → agent improvements. After you run any agent, you get prompted for structured feedback (2-3 min); the system then auto-generates run logs, analyzes them for patterns using a “thinking to summary” approach, and shows the impact of learnings. First test: Ticket Creation Agent (Eden Wikipedia data request).

PRD highlights
β€’ Feedback-driven learning: Every agent run prompts for structured feedback (outcome, quality, what worked/didn’t, completeness). Feedback is auto-logged and analyzed for patterns.
β€’ Pattern extraction with confidence levels: Patterns move from LOW (1-2 examples) → MEDIUM (3-4 examples) → HIGH (5+ examples). When patterns reach MEDIUM confidence, PRs are suggested to update agent PRDs.
β€’ Context graph evolution: Each deployment adds new entities (agents, patterns, run logs), relationships (campaign → agent → output), and process steps (traceable workflows). This helps assess PR quality: does it improve process knowledge, not just code?
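The confidence ladder above is simple enough to sketch in code. This is a hypothetical illustration of the thresholds described (LOW at 1-2 examples, MEDIUM at 3-4, HIGH at 5+), not the actual implementation; the function names are mine.

```python
def pattern_confidence(example_count: int) -> str:
    """Map the number of supporting examples to a confidence level.

    Thresholds follow the PRD: LOW (1-2), MEDIUM (3-4), HIGH (5+).
    """
    if example_count >= 5:
        return "HIGH"
    if example_count >= 3:
        return "MEDIUM"
    return "LOW"


def should_suggest_pr(example_count: int) -> bool:
    """A PR to update the agent PRD is suggested at MEDIUM confidence or above."""
    return pattern_confidence(example_count) in ("MEDIUM", "HIGH")
```

So a pattern seen twice stays LOW and triggers nothing; the third supporting example promotes it to MEDIUM and a PR gets suggested.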

Process this replaces or creates
β€’ Replaces: Manual, ad-hoc agent improvements based on gut feel. No systematic way to learn from agent runs or track what actually works.
β€’ Creates: Structured feedback loop: Run agent → Prompt feedback → Auto-log → Analyze patterns → Suggest improvements → Update agent PRD → Agent gets smarter.
β€’ Creates: Pattern library (PATTERNS.md) that tracks learned behaviors (e.g., “Ticket titles should not include ‘Linear Ticket:’ prefix”, “All tickets need success criteria and point assignment”).
β€’ Creates: Run log (RUN_LOG.md) that captures traces: which agents ran, what inputs/outputs, what decisions were made, what outcomes occurred. This becomes the foundation for pattern analysis.
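To make the RUN_LOG.md trace concrete, here is one way an entry could be modeled as structured data. The field names are assumptions drawn from the description above (inputs, outputs, decisions, outcomes), not the real schema, and the sample values echo the first test run.

```python
from dataclasses import dataclass


@dataclass
class RunLogEntry:
    """One trace in the run log; field names are illustrative."""
    run_id: str        # unique identifier for this run
    agent: str         # which agent ran
    inputs: dict       # what the agent was given
    outputs: dict      # what the agent produced
    decisions: list    # choices the agent made along the way
    outcome: str       # what happened as a result
    quality_score: int # 1-10 rating from structured feedback


entry = RunLogEntry(
    run_id="ticket-creation-2026-02-05-eden-wikipedia",
    agent="ticket-creation-agent",
    inputs={"request": "Eden Wikipedia data request"},
    outputs={"tickets_created": 1},
    decisions=["included data source references"],
    outcome="success",
    quality_score=8,
)
```

Because every run carries the same fields, pattern analysis becomes a query over run logs rather than a re-read of chat history.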

Where it lives
In brainforge-vault (PR ready):
β€’ System spec: gtm/agents/AGENT_FEEDBACK_LOOP.md
β€’ Context graph approach: gtm/agents/CONTEXT_GRAPH_APPROACH.md
β€’ Process guide: gtm/agents/FEEDBACK_LOOP_PROCESS.md
β€’ Pattern library: gtm/agents/PATTERNS.md
β€’ Run log: gtm/agents/RUN_LOG.md
β€’ PR quality checklist: gtm/agents/PR_CONTEXT_GRAPH_CHECKLIST.md
β€’ Feedback prompts: gtm/agents/feedback-prompts/
β€’ First test run: gtm/agents/feedback-sessions/ticket-creation-2026-02-05-eden-wikipedia.md

How this evolves our context graph
β€’ New entities: automated-feedback-loop-system, ticket-creation-agent, pattern-library, run-log, context-graph-approach.
β€’ New relationships: Agent runs → feedback sessions → pattern analysis → PR suggestions → agent improvements (traceable via run logs and pattern confidence levels).
β€’ New process step: Run agent → prompt feedback → auto-log → analyze patterns (thinking → summary) → show impact → suggest PRs when patterns reach MEDIUM confidence.
β€’ Enables trace capture: Every agent run is logged with metadata (run ID, inputs, outputs, decisions, outcomes, quality scores). Patterns are extracted and tracked with confidence levels. This creates a learning system where agents improve over time based on actual usage patterns, not assumptions.
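The promotion step in the loop above (patterns accumulating examples across feedback sessions until they hit MEDIUM confidence and get a suggested PR) can be sketched as a running tally. This is a minimal illustration under my own naming, not the system's code.

```python
from collections import Counter

# Running count of supporting examples per pattern, across all feedback sessions.
pattern_examples = Counter()


def record_feedback(patterns_seen):
    """Tally patterns from one feedback session.

    Returns the patterns that just crossed into MEDIUM confidence
    (3+ examples) on this session, i.e. the ones to suggest PRs for.
    """
    newly_medium = []
    for pattern in patterns_seen:
        pattern_examples[pattern] += 1
        if pattern_examples[pattern] == 3:  # exactly 3: first time at MEDIUM
            newly_medium.append(pattern)
    return newly_medium
```

Two runs flagging “title-format” change nothing; the third run returns it, which is the trigger point for a suggested PRD update.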

First test results (Ticket Creation Agent):
β€’ Time saved: up to ~14.5 minutes per run (0.5 min agent vs 10-15 min manual)
β€’ Quality: 8/10 (good, with 3 clear improvements identified)
β€’ Patterns identified: 4 patterns (3 fixes: title format, success criteria, point assignment; 1 reinforcement: data source references)
β€’ Impact: 3 PRs ready to create when patterns reach MEDIUM confidence (after 2-3 more runs)


Inviting critical feedback: What’s broken, missing, or annoying? What would make you actually use this (or use it more)? Reply in thread or DM me.