agents.md — Brainforge Cursor Agent Operating Guide

This file defines how Brainforge Cursor agents (1) navigate Brainforge repositories and (2) interface with users (follow-ups, confirmations, outputs). It is the default operating manual. Task-specific steering should live in additional markdown “prompt” files inside standards/.


0) Core Principle

Standards first, then context.

  1. Extract the right standard prompt/workflow from standards/ based on what the user is asking for.
  2. Load the needed context from:
    • Client repo (client-specific context and deliverables), and/or
    • knowledge/ (internal/raw/interim documentation).
  3. Execute using the selected standards prompt, then write outputs to the correct destination repo.

1) Repository Map (Source of Truth by Role)

A) standards/ (agent operating system)

Purpose: The definitive standards for how agents operate and how work should be produced.

This directory holds:

  • Standard prompts (email writing, summaries, agendas, PRDs, SOPs, SOWs, ticket generation, etc.)
  • Standard guides (tone, structure, quality bar, formatting rules)
  • Standard workflows/processes (meeting → agenda → summary → action items → tickets)
  • Templates and reusable patterns (client-agnostic)

The Playbook (standards/) is always consulted first to select the correct prompt/workflow before anything is written.

Hard rule: No client-specific details live here.


B) knowledge/ (internal/raw/interim documentation)

Purpose: Internal-only material and “what happened” records that clients/non-technical stakeholders typically do not see.

The Vault (knowledge/) holds (examples):

  • Meeting transcripts (internal or cross-team)
  • Review notes, QA notes, retrospectives
  • Interim drafts and internal summaries
  • Internal meetings not tied to a specific client
  • Decision breadcrumbs (context, rationale, change history)

Write policy: Prefer date-stamped, append-friendly notes. Avoid rewriting history without an addendum.


C) Client repositories (client-specific source of truth)

Purpose: Everything specific to a named client engagement.

Client repos hold (examples):

  • Discovery documents
  • Prior meeting agendas and meeting notes (client-facing)
  • Gantt charts, schedules, project plans
  • Contracts, SOWs, client deliverables
  • Live summaries and client-specific outputs
  • Client-specific policies, requirements, constraints

Hard rule: Never copy one client’s details into another client repo or into Playbook.


2) Standard Agent Execution Flow (Always Follow)

Step 1 — Classify the user’s request

Determine the task type (examples):

  • Write an email
  • Write a summary
  • Turn transcript into agenda
  • Generate/update tickets
  • Write a PRD
  • Write a SOW
  • Write an SOP
  • Produce a client deliverable (discovery doc, meeting notes, etc.)
  • Prep for a meeting → use the Meeting Prep skill (.cursor/skills/meeting-prep/SKILL.md); see also standards/04-prompts/meetings/meeting-prep.md
  • Update Data Platform Documentation (client sheet) → use data-platform-doc (.cursor/skills/data-platform-doc/SKILL.md) in update mode, or the data-platform-doc-update alias; see rule .cursor/rules/data-platform-doc-update.mdc. Update mode fills and refreshes existing tabs only; it does not create missing standard tabs. If the workbook is missing standard tabs, use kickoff (copy from the canonical template) or audit (read-only) to list gaps—see knowledge/delivery/05-tools-and-skills/data-platform-updates.md for which mode to use.
  • Client Slack updates (weekly kick-off, daily touchpoint, end-of-week) → Follow standards/02-writing/Communications/slack-client-updates-guide.md; use the weekly-kick-off-update and end-of-week-update commands and the client-touchpoint-drafter skill as appropriate.

Step 2 — Retrieve the correct Playbook prompt/workflow

Always search in standards/ first, typically under:

  • standards/04-prompts/
  • standards/02-writing/
  • standards/03-knowledge/
  • standards/01-onboarding/

The selected Playbook prompt is the agent’s “method.”

Step 3 — Determine required context (and where it comes from)

Context must come from the right place:

  • Client-specific context → client repo
    (discovery docs, agendas, Gantt charts, contracts, prior deliverables)

  • Internal/raw/interim context → knowledge/
    (transcripts, review notes, internal meeting material, internal summaries)

If context is missing, ask the minimum follow-ups needed (see Section 5).

Step 4 — Execute the Playbook prompt using the gathered context

Apply the Playbook standards for:

  • tone and structure
  • formatting constraints
  • quality bar (clarity, correctness, minimal assumptions)
  • for prose deliverables (SOW, PRD, email, summary, etc.): apply /humanizer or humanizer patterns to remove AI-generated writing artifacts before finalizing

Step 5 — Write outputs to the correct destination repo

  • Reusable prompt/workflow/template → Playbook (client-agnostic only)
  • Internal transcript/review/meeting record → Vault
  • Client deliverable → Client repo

Step 6 — Confirm before high-impact actions

Before actions like ticket creation, large edits, file moves, or client-facing publishing, perform a confirmation gate (see Section 6).


3) Where Things Are Found vs. Where They Go

“Found in…”

  • How to do the task (prompt/workflow/format): standards/
  • Internal evidence/raw notes/transcripts/reviews: knowledge/
  • Client-specific evidence/deliverables/contracts/plans: client repo

“Goes to…”

  • Standard prompt/workflow/template update: standards/
  • Internal transcript/review note/internal meeting summary: knowledge/
  • Client-facing doc / client-specific artifact: client repo
  • Plans: use knowledge/plans/ (see knowledge/plans/README.md for details):
    • operational (daily/weekly) → knowledge/plans/operational/
    • strategic (company, GTM) → knowledge/plans/strategic/
    • project plans → co-located with the project (knowledge/engineering/{project}/plans/, knowledge/clients/{client}/plans/)
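The "Goes to…" routing above can be sketched as a small helper. This is an illustrative sketch only: the artifact labels ("standard", "internal", "client") and the clients/{client}/ path are hypothetical, not an official taxonomy.

```python
def destination_repo(artifact, client=None):
    """Route an artifact to its destination repo per the "Goes to..." rules.

    Artifact labels and the clients/{client}/ path are illustrative only.
    """
    if artifact == "standard":      # reusable prompt/workflow/template
        return "standards/"
    if artifact == "internal":      # transcript, review note, internal meeting summary
        return "knowledge/"
    if artifact == "client":        # client-facing doc or client-specific artifact
        if client is None:
            raise ValueError("client-facing output requires a named client repo")
        return f"clients/{client}/"
    raise ValueError(f"unknown artifact type: {artifact}")
```

The hard rule from Section 1 falls out naturally: a client artifact can only land in its own named repo, never in standards/ or another client's repo.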

Quick reference by deliverable type

For a full breakdown by deliverable type (meeting transcripts, SOWs, service-line artifacts, code/config, review notes, interim deliverables, etc.) with exact path examples, see:

KNOWLEDGE_AND_STANDARDS_GUIDE.md - the single source of truth for “where does this go?”


4) Playbook Prompt Organization (Prompts Folder as the Router)

  • standards/04-prompts/
    • tickets/ (create/update/groom tickets)
    • prd/ (PRDs, specs, requirements)
    • sow/ (sales SOWs, scopes, assumptions, pricing narrative)
    • sop/ (SOPs, internal processes, runbooks)
    • email/ (email drafting/editing rules and templates)
    • meetings/ (agenda creation, summaries, action items, follow-ups)
    • review/ (reviewing generated text and documents)

Prompt selection rule

When the user asks for a deliverable, the agent should:

  1. Identify the deliverable type (tickets / PRD / SOW / SOP / email / agenda / summary).
  2. Navigate to the corresponding Playbook prompt folder.
  3. Use the most specific prompt available (task- and audience-specific).
  4. If multiple prompts apply, choose the one that matches:
    • audience (internal vs client-facing)
    • input type (transcript vs notes vs policy docs)
    • output format constraints (markdown, single-block, schema, etc.)
  5. Review the deliverable and assess whether it meets review guidelines. For prose, apply the humanizer skill (.cursor/skills/humanizer/) to remove AI patterns.
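The tie-break in step 4 can be pictured as choosing the candidate prompt that matches the most request attributes. The metadata fields and filenames below are hypothetical; a real index would come from each prompt file's stated use cases and inputs (see the "prompt index" expectation below).

```python
def select_prompt(prompts, audience, input_type, output_format):
    """Pick the prompt matching the most of: audience, input type, output format.

    `prompts` is a list of dicts with hypothetical metadata fields.
    """
    def score(p):
        return sum([
            p.get("audience") == audience,
            p.get("input_type") == input_type,
            p.get("output_format") == output_format,
        ])
    return max(prompts, key=score)

# Hypothetical candidates for "turn this client transcript into an agenda":
candidates = [
    {"path": "meetings/agenda-generic.md", "audience": "internal"},
    {"path": "meetings/agenda-from-transcript.md",
     "audience": "client", "input_type": "transcript", "output_format": "markdown"},
]
best = select_prompt(candidates, "client", "transcript", "markdown")
```

The more specific prompt wins because it matches all three attributes, which mirrors rule 3 ("use the most specific prompt available").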

Minimal “prompt index” expectation

Each prompt markdown should clearly state:

  • Use case(s) it covers
  • Expected inputs (and where they usually come from: Vault vs client repo)
  • Output format (and where it should be written)
  • Any confirmation gates (e.g., ticket creation)

5) Agent ↔ User Interface Standards

First response pattern (default)

  1. Restate the request (1–2 sentences).
  2. State the Playbook prompt/workflow you will use (by folder/path).
  3. State which context sources you will consult (Vault, client repo, both).
  4. Provide a short execution plan (3–6 bullets).
  5. Ask only the minimal follow-ups required to proceed.

Follow-up questions (ask only what’s necessary)

Ask follow-ups only when missing info risks:

  • using the wrong prompt
  • pulling context from the wrong repo
  • writing to the wrong destination
  • producing a client-facing artifact with internal-only content

Typical minimal questions (in order):

  1. Which client repo (if client-specific)?
  2. Is the output client-facing or internal?
  3. What are the inputs (transcript link/file, meeting notes, prior docs) and where are they stored?
  4. Any format constraints (length, markdown style, single code block, etc.)?

6) Confirmation Gates (Must Confirm Before Acting)

The agent must confirm before:

  • creating/updating external tickets (e.g., Linear)
  • making large multi-file edits
  • moving/renaming folders or files
  • producing or publishing client-facing deliverables from internal Vault material
  • changing Playbook standards (prompts/workflows/templates)

Confirmation format (compact):

“I will use Playbook prompt: <path>.
I will pull context from: <vault paths> and/or <client repo paths>.
I will write output to: <destination path>.
Proceed?”
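The gate message can be rendered mechanically. A minimal sketch; the function name and the paths in the usage example are hypothetical placeholders:

```python
def confirmation_gate(prompt_path, context_paths, destination):
    """Render the compact confirmation message before a high-impact action."""
    return (
        f"I will use Playbook prompt: {prompt_path}.\n"
        f"I will pull context from: {', '.join(context_paths)}.\n"
        f"I will write output to: {destination}.\n"
        "Proceed?"
    )

# Hypothetical example: confirming ticket creation from internal review notes.
msg = confirmation_gate(
    "standards/04-prompts/tickets/create-tickets.md",
    ["knowledge/reviews/2024-qa-notes.md"],
    "Linear (new tickets)",
)
```

Keeping the message to exactly these three facts (method, sources, destination) gives the user everything needed to approve or redirect in one reply.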


7) 1Password and Secrets (Always Use CLI)

When credentials, env vars, or secrets from 1Password are needed:

  • Always use the 1Password CLI (op) to retrieve values. Run the relevant op commands in the terminal (e.g. op item list, op read "op://<vault>/<item>/<field>", op item get "Item Name" --vault "Vault Name").
  • Do not ask the user to look up values in the 1Password app or in the 1Password web interface. The agent must use the CLI.
  • If the user is not signed in, prompt them to run op signin once; then continue using the CLI for lookups.

Reference:

  • Setup and usage: standards/03-knowledge/engineering/setup/1password-cli-setup.md
  • Vault and examples: standards/03-knowledge/engineering/setup/README.md
  • Common vault for team credentials: Brainforge AI Team
    • List items: op item list --vault "Brainforge AI Team"
    • Get item: op item get "Item Name" --vault "Brainforge AI Team" (use exact item title, e.g. “platform env”)

8) Common User Requests → Playbook Prompt → Context → Destination

A) “Generate tickets”

  • Prompt: standards/04-prompts/tickets/...
  • Context: Vault transcripts/review notes + client repo requirements (if client-specific)
  • Destination: ticket system (confirm first) + optionally a client repo log/note

B) “Write a PRD”

  • Prompt: standards/04-prompts/prd/...
  • Context: client repo discovery docs + prior agendas/notes; Vault internal notes if relevant
  • Destination: client repo (or internal repo if PRD is internal-only)

C) “Write a sales SOW”

  • Prompt: standards/04-prompts/sow/...
  • Context: client repo discovery + scope constraints; Vault internal review notes as needed
  • Destination: client repo (or sales/internal location as defined by the engagement)

D) “Write an SOP”

  • Prompt: standards/04-prompts/sop/...
  • Context: Vault review notes + existing Playbook standards
  • Destination: Playbook (if reusable) or Vault (if internal/interim)

E) “Turn a transcript into a meeting agenda”

  • Prompt: standards/04-prompts/meetings/agenda-from-transcript...
  • Context: Vault transcript (primary) + client repo prior agendas (if client-specific)
  • Destination: client repo agendas folder (or internal if non-client meeting)

F) “Write an email”

  • Prompt: standards/04-prompts/email/...
  • Context: client repo or Vault depending on topic; avoid leaking internal notes into client-facing emails
  • Destination: draft text output (and optionally saved in the appropriate repo if requested)
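The request → prompt → destination mapping above can also be held as a simple lookup table that an agent consults before Step 2 of the execution flow. Entries are abbreviated from the examples in this section; destinations marked "confirm first" require the Section 6 gate:

```python
# Condensed routing table for Section 8; context sources omitted for brevity.
ROUTES = {
    "tickets": {"prompt": "standards/04-prompts/tickets/",
                "destination": "ticket system (confirm first)"},
    "prd":     {"prompt": "standards/04-prompts/prd/",
                "destination": "client repo"},
    "sow":     {"prompt": "standards/04-prompts/sow/",
                "destination": "client repo"},
    "sop":     {"prompt": "standards/04-prompts/sop/",
                "destination": "standards/ or knowledge/"},
    "agenda":  {"prompt": "standards/04-prompts/meetings/",
                "destination": "client repo agendas folder"},
    "email":   {"prompt": "standards/04-prompts/email/",
                "destination": "draft text output"},
}
```

A table like this keeps prompt selection deterministic; ambiguous requests fall back to the follow-up questions in Section 5.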

9) Context Add-ons (Optional Steering Files)

This file sets defaults. Additional steering should live in Playbook, typically as:

  • standards/prompts/... (task execution)
  • standards/workflows/... (multi-step sequences)
  • standards/guides/... (tone/format/constraints)

When a user says “follow instructions in X.md,” treat that file as higher priority for that task, as long as it does not violate repo separation rules.


10) Definition of Done (DoD)

A task is done when:

  • the agent selected the correct Playbook prompt/workflow
  • the agent used context from the correct repo(s)
  • the output is written to the correct destination
  • confirmations were obtained for high-impact actions
  • the final output is copy-paste ready and meets formatting constraints

📚 Essential Reading for All Agents:

  • Cursor Agent Best Practices - Official Cursor best practices that all Brainforge agents should follow. Covers:

    • Planning before coding (Plan Mode)
    • Context management strategies
    • Code review workflows
    • Running agents in parallel
    • Debug mode for tricky bugs
    • And more essential patterns
  • How to Use Cursor - Setup and walkthrough for using Cursor at Brainforge, including mode selection and multi-repo workspaces

  • Cursor Skills - Available skills in standards/.cursor/skills/, including the humanizer skill for removing AI writing patterns from prose

All Brainforge Cursor agents should be familiar with this operating guide and the essential reading listed above.