Refire Workflow — Linear-Only Plan

Created: 2026-04-06
Owner: L&D + Platform
Initiative: Feedback loop and change management (M3.2 — Refire)
Supersedes: Supabase-backed capture described in refire-workflow-implementation-plan.md. Linear is the system of record; no refire_signals table or silent execute_sql inserts.


Overview

Refire is the moment a team member corrects, rejects, or amends AI output from a Cursor skill or workflow—or changes the skill/workflow artifact because it was wrong. This plan routes those signals only through Linear: L&D receives and triages every qualifying incident; Platform receives build or product changes when L&D escalates.

Restaurant metaphor (unchanged):

| Metaphor | Meaning |
|---|---|
| Kitchen | Brainforge |
| Guest | Team member using a skill/workflow |
| Dish | AI-generated output |
| Refire | Edit, correction, or rejection of that output |
| Front of house | L&D triage |
| Kitchen | Platform implementation |

Goals

  1. Capture meaningful refires with minimal friction (agent-assisted, no separate database).
  2. Triage in one visible queue owned by L&D first.
  3. Route to Platform when the resolution requires repo, automation, or product changes.
  4. Measure volume and patterns using Linear (labels, project, views, API/export)—not SQL over Supabase.

Architecture (Linear as database)

Per incident (inner loop)
  User refires skill output OR clearly flags skill/workflow wrong
       ↓
  Cursor rule / skill step: classify signal + Hattie level
       ↓
  Linear MCP: create or update L&D issue (standard template + Refire fields)
       ↓
  Agent continues the conversation (optionally silent to user)

Ongoing (outer loop)
  L&D: Triage → Investigating → Resolved (training/docs)
                    ↘ Routed to Platform (linked or new issue)
  Platform: implements skill/rule/code changes; links PR in Linear

Design choice: Linear issues are the durable log. Comments hold follow-ups; labels and project power reporting. Optional: periodic refire-analysis skill uses list_issues / get_issue (Linear MCP) instead of SQL.


Detection signals

After a turn that used a skill or workflow (per agent skill list or explicit user invocation), treat the next user message as a candidate refire when it matches:

  • Explicit: “wrong”, “incorrect”, “fix this”, “redo”, “not quite”, “that’s not right”, “bad output”
  • Amendment: “change X to Y”, “instead use”, “update this to”, “should have been”
  • Implicit: contradicts prior output; user pastes prior output with edits; narrow correction that clearly targets the last artifact

Skill/workflow file edits: When the user or agent edits .cursor/skills/**/SKILL.md, workflow-related .cursor/rules/*.mdc, or closely tied references—and the intent is “the skill was wrong”—create or update a Refire issue (signal type: skill_edit) with paths in the description.

Noise control: Do not open a ticket for every vague follow-up. Prefer:

  • explicit refire language, or
  • high-confidence amendment tied to the immediately previous skill output, or
  • documented skill file change with refire intent

If uncertain, comment on an open Refire for that skill/week instead of spawning a duplicate (see deduplication).


Hattie classification (unchanged semantics)

Classify each captured refire for routing hints:

| Level | Meaning | Typical fix |
|---|---|---|
| task | Wrong content in the output | Prompt/grounding/examples in skill |
| process | Wrong approach or missing step order | Skill steps, tool order, checks |
| self-regulation | Repeated systematic gap | Checklist, guardrails, “always do X” |

Store as a label or a line under Notes / Constraints (see template below).


Linear setup

Configure once in Linear (exact names from workspace):

| Element | Purpose |
|---|---|
| Team | L&D (primary intake) |
| Project | e.g. “Refire — skills & workflows” |
| Default state | Triage (or equivalent) |
| Labels (suggested) | refire, source:cursor-skill, source:skill-file-edit, hattie:task, hattie:process, hattie:self-regulation, routed:platform (set when escalated) |

Platform escalation: Create a linked issue on the Platform team (or move per team policy), copy Refire-specific fields, set routed:platform, and reference the L&D issue ID in Context.


Issue title and description

Title rules

  • Start with a verb: Investigate:, Fix:, Update:
  • Include skill/workflow slug when known, e.g. Investigate: meeting-prep refire — wrong ticket counts

Description structure

Use Brainforge’s required Linear sections (see knowledge/standards/04-prompts/tickets/linear-ticket-generation-from-transcript.md): Context, Goal, Scope (In / Out), Acceptance Criteria, Notes / Constraints, Open Questions.

Append a Refire block (headings verbatim for machine/human consistency):

Refire — Skill / workflow

  • Name: (e.g. meeting-prep)
  • Paths: (e.g. .cursor/skills/meeting-prep/SKILL.md)

Refire — Signal

  • Type: correction | amendment | redo | clarification | rejection | skill_edit
  • Hattie level: task | process | self-regulation

Refire — What happened

  • Original output (short): one sentence or redacted snippet
  • Correction / request: what the user wanted instead (quote if safe)

Refire — Context

  • Optional: client, Linear ticket, Slack thread, session note (no secrets; minimize PII)

L&D triage

  • Hypothesis: (filled by L&D)
  • Resolution path: training only | doc update | escalate to Platform
  • Platform issue: (link when created)
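Because the Refire headings must stay byte-identical across issues, they lend themselves to a small builder; the function and argument names below are illustrative, not a prescribed API:

```python
def build_refire_block(name: str, paths: list[str], signal_type: str,
                       hattie: str, original: str, correction: str,
                       context: str = "") -> str:
    """Assemble the Refire block appended to the standard Linear description.

    Headings mirror the plan verbatim so both humans and a later
    refire-analysis skill can parse them reliably.
    """
    return "\n".join([
        "Refire — Skill / workflow",
        f"- Name: {name}",
        f"- Paths: {', '.join(paths)}",
        "",
        "Refire — Signal",
        f"- Type: {signal_type}",
        f"- Hattie level: {hattie}",
        "",
        "Refire — What happened",
        f"- Original output (short): {original}",
        f"- Correction / request: {correction}",
        "",
        "Refire — Context",
        f"- {context or 'n/a'}",  # redact: no secrets, minimize PII
    ])
```

The L&D triage section is intentionally omitted from the builder: those fields are filled by humans after intake, not by the agent.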

Deduplication and updates

Before save_issue:

  1. Search Linear in the Refire project for open issues with the same skill name and similar correction (keywords from user message).
  2. If match: add a comment with the new signal summary and date; optionally bump priority if user stressed urgency.
  3. If no match: create a new issue.

Follow knowledge/standards/04-prompts/tickets/linear-ticket-generation-from-transcript.md: prefer updating over duplicating.
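The three steps above can be sketched as one upsert; `linear_search`, `linear_comment`, and `linear_create` are stand-ins for whatever the Linear MCP tools actually expose in your setup:

```python
# Hedged sketch of the dedup policy; the three callables are placeholders
# for the real Linear MCP tools, injected here to keep the sketch testable.
def upsert_refire(skill: str, summary: str, description: str,
                  linear_search, linear_comment, linear_create) -> str:
    """Comment on a matching open Refire issue if one exists, else create one."""
    matches = linear_search(project="Refire — skills & workflows",
                            state="open", query=skill)
    for issue in matches:
        if skill in issue["title"]:  # same skill: likely the same theme
            linear_comment(issue["id"], f"New signal: {summary}")
            return issue["id"]
    return linear_create(title=f"Investigate: {skill} refire — {summary}",
                         description=description, labels=["refire"])
```

Keyword similarity on the correction text (step 1) is left to the agent; a naive string match on the skill name alone already prevents most same-week duplicates.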


Cursor implementation (repo deliverables)

| Artifact | Role |
|---|---|
| .cursor/rules/refire-feedback.mdc (or merge into existing rules) | After skill/workflow output, scan next message; classify; call Linear MCP to create/update issue; do not block the user’s task |
| Optional: .cursor/skills/refire-analysis/SKILL.md | Weekly/on-demand: list_issues filtered by refire label + project; summarize top skills, suggest Platform candidates |
| knowledge/people/learning-development/refire-log/README.md (optional) | Human-facing: what Refire is, which Linear project, label meanings, how L&D triages |

Not in scope for this plan: Supabase migrations, refire_signals DDL, or execute_sql for capture.
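One possible shape for refire-feedback.mdc, assuming Cursor’s standard rule frontmatter; treat the body as a starting point, not final wording:

```
---
description: Capture refire signals after skill/workflow output
alwaysApply: true
---

After any turn that used a skill or workflow, inspect the next user message:

1. Classify it against the detection signals (explicit / amendment / implicit)
   and assign a Hattie level (task / process / self-regulation).
2. If high-confidence, call the Linear MCP to create or update an L&D issue
   per the dedup policy; otherwise comment on an open Refire or do nothing.
3. Never block or delay the user's actual task to do this.
```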


L&D → Platform handoff

  1. L&D validates the refire, adds L&D triage notes.
  2. If Platform work is needed: create Platform issue (or sub-issue), link back, set routed:platform.
  3. Platform ships change; links PR; L&D may close L&D issue or mark Resolved with pointer to PR.
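The escalation step can be sketched as follows; `create_issue` and `update_issue` are placeholders for the real Linear MCP tools, and the field names on `lnd_issue` are illustrative:

```python
# Hedged sketch of step 2 of the handoff: create the linked Platform issue,
# copy the Refire-specific fields, and mark the L&D issue as routed.
def escalate_to_platform(lnd_issue: dict, create_issue, update_issue) -> str:
    """Create a linked Platform issue and return its ID."""
    platform_id = create_issue(
        team="Platform",
        title=lnd_issue["title"].replace("Investigate:", "Fix:"),
        description=(f"Context: escalated from {lnd_issue['identifier']}\n\n"
                     + lnd_issue["refire_block"]),  # copy Refire fields over
        labels=["refire", "routed:platform"],
    )
    # Link back and flag the originating L&D issue.
    update_issue(lnd_issue["id"], add_labels=["routed:platform"],
                 platform_link=platform_id)
    return platform_id
```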

Metrics and OKR alignment

Roadmap OKR (example): active delivery team uses Refire (signals captured).

Proxies without Supabase:

  • Count of issues with label refire created per week (Linear view or API).
  • Issues per skill (parse from title or a dedicated custom field if you add one later).
  • Absorption / Doordash pairing: compare Refire issue count or comment activity for a skill before vs after a Doordash changelog entry (manual or scripted from Linear API).
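The weekly count can be pulled with a thin script against Linear’s GraphQL API; the query shape follows Linear’s public schema, but verify the label filter (and pagination, for more than 250 issues) against your workspace before relying on the numbers:

```python
import collections
import datetime
import json
import urllib.request

LINEAR_URL = "https://api.linear.app/graphql"  # Linear's public GraphQL endpoint
QUERY = """
query {
  issues(filter: { labels: { name: { eq: "refire" } } }, first: 250) {
    nodes { identifier createdAt }
  }
}
"""

def week_bucket(created_at: str) -> str:
    """ISO week key (e.g. 2026-W15) from a Linear createdAt timestamp."""
    year, week, _ = datetime.date.fromisoformat(created_at[:10]).isocalendar()
    return f"{year}-W{week:02d}"

def refires_per_week(api_key: str) -> dict[str, int]:
    """Count refire-labeled issues per ISO week via the Linear API."""
    req = urllib.request.Request(
        LINEAR_URL,
        data=json.dumps({"query": QUERY}).encode(),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        nodes = json.load(resp)["data"]["issues"]["nodes"]
    return dict(collections.Counter(week_bucket(n["createdAt"]) for n in nodes))
```

Per-skill counts would extend this by parsing the skill slug out of issue titles, which is why the title convention above matters for reporting.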

Phased rollout

  1. MVP: Linear project + labels + rule that creates L&D issues on high-confidence refires + dedup policy + README pointer to this plan.
  2. V2: refire-analysis skill; tighter title conventions for reporting.
  3. V3: Optional automation for skill-file edits (agent checklist + optional git hook narrative in docs only if adopted).

Risks and mitigations

| Risk | Mitigation |
|---|---|
| Board noise | Strict create thresholds; comment-on-duplicate; weekly merge hygiene |
| PII in tickets | Redact in Refire — Context; use internal-safe summaries |
| False positives | Require tie to previous skill turn or explicit refire language |
| Reporting vs SQL | Standardize labels + project; use Linear views or a thin API script |