Lesson 1: The 6 Delivery Standards and Their AI Workflows

Course: Quickstart | Module: 02 — Delivery Standards × AI
Time estimate: ~35 minutes
Prerequisite: Module 1 complete — Cursor connected to the repo


Pre-work (before reading)

Take 2 minutes. For each standard below, write one sentence about what you currently do (not what you think you should do):

  • How do you prepare for client meetings?
  • How do you handle a task that falls outside what you know?
  • How do you track and communicate delivery progress?
  • How do you write client updates or internal messages?
  • How do you share what you’ve learned with the team?
  • How do you get better at things that are hard for you?

You’ll return to these answers in Lesson 2.


I Do — The 6 Delivery Standards and their AI workflows

Each standard is stated as Brainforge defines it, then unpacked into the observable behaviors that separate strong practice from weak, and paired with the AI workflow that serves it most directly.


Standard 1: Presence

What it means: You show up prepared. You know the context before the meeting, not during it. You’re not the person who asks “can you remind me what we discussed last time?”

Observable behaviors at high mastery:

  • Runs a meeting prep skill before every client call
  • References specific prior discussion points during the meeting
  • Asks follow-up questions based on earlier commitments, not generic questions
  • Notices when the conversation is drifting from the plan and redirects

Observable behaviors at low mastery:

  • Reads the meeting invite as the only prep
  • Asks clients to re-explain context that’s already in the vault (the vault = knowledge/ in the Brainforge platform repo — where transcripts, meeting notes, and client context live)
  • Generic participation — the meeting would have gone the same way without them

AI workflow that serves Presence:
The meeting-prep skill. Before any client call, run @meeting-prep [client name] in Cursor. The skill pulls recent transcripts, open Linear tickets, and Slack context into a brief you can read in two minutes, and it runs in about 90 seconds. It eliminates the “what did we discuss last time?” problem permanently.

Worked example:

You have an Eden Health standup in 20 minutes. Instead of skimming a few Slack messages, you open Cursor and type: Run the meeting prep skill for Eden Health. Cursor reads the last 3 Granola transcripts, the current Linear board, and surfaces the two open blockers. You walk into the call knowing exactly what to reference.


Standard 2: Ownership

What it means: You don’t wait to be told what to do. You identify what needs doing, do it, and communicate that you’ve done it. You treat blockers as yours to remove, not yours to escalate.

Observable behaviors at high mastery:

  • Creates Linear tickets for gaps they spot, not just tasks they’re assigned
  • When stuck, tries 2–3 things before asking for help — and brings the attempts to the conversation
  • Updates Linear status without being asked
  • Communicates proactively when something will be late, before it is late

Observable behaviors at low mastery:

  • Waits for task assignment before starting work
  • Escalates blockers on first contact without attempting resolution
  • Linear board doesn’t reflect actual work state

AI workflow that serves Ownership:
The idea-to-tickets skill and the Linear MCP agent. When you identify a gap or a new piece of work — even if you’re not sure whether it’s in scope — use Cursor to draft a ticket. The skill structures your rough idea into a properly formatted Linear issue. You own the work before it’s formalized.

Worked example:

During a client call, you notice no one has set up Snowflake access for the new analyst. It’s not on the Linear board. Instead of flagging it in Slack and waiting, you open Cursor: Create a Linear ticket for setting up Snowflake access for [Client] — new analyst starting Monday. The ticket is drafted, structured, and saved. You’ve owned the gap before the meeting even ends.


Standard 3: Delivery Excellence

What it means: The work you produce is accurate, complete, and on time. You don’t ship things that haven’t been checked. You notice quality issues before the client does.

Observable behaviors at high mastery:

  • Reviews own output against a standard before sharing (not just “does this look right?”)
  • Catches errors before they reach the client
  • Delivery rhythm is predictable — clients know what to expect and when
  • When quality slips, self-identifies it and corrects proactively

Observable behaviors at low mastery:

  • Shares first drafts as final output
  • Quality checks are client-dependent (they catch it, not you)
  • Delivery is reactive — responds to “where is X?” rather than proactively sharing X

AI workflow that serves Delivery Excellence:
The ep-audit skill and sow-vs-delivered-audit skill. Before sending a weekly update or heading into a client review, run a quick EP audit to check your Linear board state. Know before your client does what’s done, what’s late, and what’s blocked.

Worked example:

It’s Thursday. Your weekly client update goes out Friday. Instead of writing it from memory, you run: Run EP audit for [Client]. Cursor checks the Linear board, surfaces two tickets that slipped to overdue this week, and flags a blocker you haven’t communicated yet. You update the tickets and include the blocker proactively in Friday’s update. Client sees transparency; they don’t see chaos.


Standard 4: Communication

What it means: Your written and verbal communication is clear, appropriately toned, and serves the recipient — not just the sender. You adapt your register to the audience.

Observable behaviors at high mastery:

  • Messages are purpose-clear (the recipient knows exactly what’s needed)
  • Tone matches the relationship stage and the content sensitivity
  • Length matches the need — not padded, not truncated
  • No “just following up” emails — every message moves something forward

Observable behaviors at low mastery:

  • Messages require follow-up questions to interpret
  • Tone mismatches (too casual with senior stakeholders, too formal with peers)
  • Walls of text where bullet points would serve
  • Vague calls to action or no call to action

AI workflow that serves Communication:
The humanizer skill and client-touchpoint-drafter. When drafting a sensitive client message, a stakeholder update, or anything where tone matters — write your draft, then run the humanizer. Catches AI patterns, removes filler, tightens structure. For client check-ins, use the touchpoint drafter to generate a context-aware first draft from the vault.

Worked example:

You need to tell a client their report will be delayed by two days. You write a draft. Before sending, you paste it into Cursor: Humanize this — it’s for a client who is time-sensitive and we have a strong relationship. Cursor strips the filler, tightens the explanation, and adjusts the tone from apologetic to confident-and-transparent. The message lands better.


Standard 5: Collaboration

What it means: You make it easier for others to work with you. You share context, flag blockers early, and contribute to team knowledge rather than hoarding it.

Observable behaviors at high mastery:

  • Saves meeting notes, transcripts, and decisions to the vault — not just in their own head
  • Tags relevant people in Linear when their input is needed
  • Shares wins and learnings in ai-wins and retros, not just privately
  • When asking for help, brings context (“here’s what I’ve tried, here’s where I’m stuck”)

Observable behaviors at low mastery:

  • Knowledge stays in their browser tabs, not the vault
  • People have to ask them for context they should have shared proactively
  • Collaboration is reactive — responds to asks, doesn’t initiate knowledge-sharing

AI workflow that serves Collaboration:
The sync-granola-to-vault skill and vault-writing habits. After every client meeting, run the Granola sync. 30 seconds. Every transcript is stored in knowledge/clients/{client}/transcripts/. The team can prep for the next meeting without asking you what happened on the last call.

Worked example:

Monday meeting with LMNT done. You had a dense conversation about data pipeline priorities. Instead of keeping your notes in a Google Doc no one else knows exists, you run: Sync Granola to vault. The meeting transcript is in the vault in 30 seconds. On Thursday, when a colleague needs to jump in for you, they can prep themselves. You’ve collaborated by not hoarding.
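If you want to see the path convention concretely, here is a minimal sketch of where a synced transcript lands. Only the base path knowledge/clients/{client}/transcripts/ comes from this lesson; the vault root, client folder name, and file name below are hypothetical.

```python
# Hypothetical sketch: building the vault location for one meeting transcript.
# Only knowledge/clients/{client}/transcripts/ is named in the lesson; the
# client slug and file name here are made up for illustration.
from pathlib import Path

def transcript_path(vault_root: str, client: str, filename: str) -> Path:
    """Return the vault location for a single synced transcript."""
    return (Path(vault_root) / "knowledge" / "clients"
            / client / "transcripts" / filename)

p = transcript_path(".", "eden-health", "2024-06-03-standup.md")
# p -> knowledge/clients/eden-health/transcripts/2024-06-03-standup.md
```

The point of the convention is that the location is predictable: a colleague never has to ask you where the notes went.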


Standard 6: Continuous Improvement

What it means: You actively get better. You reflect on what’s not working, seek input, and change your approach based on evidence — not just intuition.

Observable behaviors at high mastery:

  • Reviews past output before starting similar work (learns from own history)
  • Uses post-project retros as learning inputs, not just process boxes
  • When AI produces a bad output, diagnoses why and adjusts the prompt — doesn’t just retry
  • Contributes one skill, rule, or improvement to the platform per quarter

Observable behaviors at low mastery:

  • Makes the same mistakes on similar projects
  • Skips retros or goes through the motions
  • Blames tools when outputs are poor (“Cursor doesn’t work for this”)
  • Passive consumer of the platform, never contributor

AI workflow that serves Continuous Improvement:
The skill-creator skill and feedback sessions. When you run an agent and the output isn’t what you needed, write a feedback session (knowledge/gtm/agents/feedback-sessions/). When you identify a repeated task that has no skill yet, draft a new one. Your contributions compound across the team.

Worked example:

You run the client-touchpoint-drafter skill for a check-in with a particularly complex client relationship. The output is technically correct but misses the relationship nuance. Instead of just re-prompting, you open a feedback session file: What worked, what didn’t, what I’d change in the prompt. Next time you run it, you adjust. Three months later, the skill is better for everyone because of your input.


AI system primitives — a diagnostic model

Time: ~8 minutes

Before you move to Module 3, where you’ll deepen how you use these tools, it helps to have a shared vocabulary for why they sometimes work perfectly and sometimes produce nothing useful. The answer is almost always the same: one of six core primitives is missing.

Activation — before reading the framework:

Think of the last time Cursor gave you output that was useless or wrong. In one sentence: what do you think went wrong?

Hold that answer. By the end of this section, you’ll be able to name the primitive that was missing — and that means you’ll be able to fix it next time rather than just re-prompting and hoping.


Every great AI workflow depends on six primitives. They are not engineering concepts — they are diagnostic vocabulary for anyone who uses AI tools at work. When a workflow feels clunky, or Cursor produces something irrelevant, or a skill returns empty results, the problem is almost always a gap in one of these six areas.

| Primitive | What it means in daily Cursor use | Symptom when it’s missing |
| --- | --- | --- |
| Context | Cursor has what it needs to understand your specific situation — the client, the relationship history, the current state | Output is generic; ignores your actual client, project, or context |
| Specification | “Done” is defined clearly enough that you could verify whether the output meets it | Output is vague; you can’t tell if it’s right or wrong |
| Verification | There’s a way to check that the output is accurate before you use it | You’re guessing whether to trust what Cursor produced |
| Execution | Cursor can actually perform the action — it has access to the right tools, MCPs, or files | Cursor describes what to do instead of doing it; or a skill call returns nothing |
| Observation | You can see what happened during the run and why | You can’t debug when something goes wrong; the failure is opaque |
| Safety | There are guardrails before irreversible actions — wrong path, wrong client, wrong ticket | One bad run causes real damage that takes time to undo |

Using this as a diagnostic checklist:

When a Cursor workflow fails, work through the six primitives in order:

  1. Did Cursor have the right context? (If not: add vault path, paste transcript, name the client explicitly)
  2. Was “done” defined? (If not: add constraints and output format to the prompt)
  3. Is there a way to verify the output? (If not: add a check step or ask Cursor to explain its reasoning)
  4. Could Cursor actually execute the action? (If not: check MCP connection, skill configuration, or file path)
  5. Can you see what happened? (If not: ask Cursor to summarize what it did and what it received as input)
  6. Were there guardrails on irreversible actions? (If not: add “save a draft first” or “confirm before updating”)
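For readers who think in code, the six-step checklist above can be sketched as data plus a loop. This is a hypothetical illustration, not a Brainforge tool; the questions and fixes come straight from the numbered list.

```python
# Hypothetical sketch: the six-primitive diagnostic as a checklist you walk
# in order. The questions and fixes are the lesson's list expressed as data.
PRIMITIVES = [
    ("Context", "Did Cursor have the right context?",
     "Add the vault path, paste the transcript, name the client explicitly."),
    ("Specification", "Was 'done' defined?",
     "Add constraints and an output format to the prompt."),
    ("Verification", "Is there a way to verify the output?",
     "Add a check step or ask Cursor to explain its reasoning."),
    ("Execution", "Could Cursor actually execute the action?",
     "Check the MCP connection, skill configuration, or file path."),
    ("Observation", "Can you see what happened?",
     "Ask Cursor to summarize what it did and what it received as input."),
    ("Safety", "Were there guardrails on irreversible actions?",
     "Add 'save a draft first' or 'confirm before updating'."),
]

def diagnose(answers):
    """Return the first missing primitive and its fix, or None if all pass.

    `answers` maps primitive name -> bool (True means that check passed).
    """
    for name, question, fix in PRIMITIVES:
        if not answers.get(name, False):
            return name, fix
    return None

# Example: output was generic because the client was never named.
# diagnose({"Context": False}) points you at the Context fix first.
```

The ordering matters: a missing primitive early in the list (no context) usually explains the symptoms you would otherwise misattribute to a later one.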

Connection to Module 3: Each of the tools you’ll learn in Module 3 — prompt habits, skills, agents, MCPs, model selection — serves one or more of these primitives. When a tool stops working, this model is your first diagnostic step.


← Back to Module 2 Overview | Next → Lesson 2: Self-assessment and habit commitment