PRD Prompt Guide

This guide provides comprehensive prompts for Cursor agents to write refined, world-class Product Requirements Documents.


When to Use This Guide

Use these prompts when:

  • Starting a new PRD from discovery notes, transcripts, or stakeholder interviews
  • Refining an existing draft PRD for clarity and completeness
  • Adding specific sections (technical architecture, data model, etc.)

What makes a world-class PRD:

  • Specific pain points with quantified impact (time, money, effort)
  • Clear staged milestones (POC → MVP → V1) with concrete deliverables
  • Measurable success criteria
  • Testable functional requirements
  • Assumptions and open questions explicitly flagged
  • No invented details—only facts from source materials

Pre-Flight Checklist

Before generating a PRD, gather these inputs:

| Input | Source | Required |
|-------|--------|----------|
| Discovery notes or meeting transcripts | Client repo or knowledge/ | Yes |
| Stakeholder interviews or pain points | Meeting notes | Yes |
| Existing context (SOW, prior docs) | Client repo | If available |
| Technical constraints or preferences | SME input | If available |
| Target audience for the PRD | User clarification | Yes |

If inputs are missing: Ask for them before proceeding. Do not invent stakeholder names, metrics, or technical details.


PRD Generation Prompt

Use this prompt to generate a PRD from discovery materials:

You are a technical product writer creating a PRD for Brainforge.

ROLE:
- Write with precision and clarity
- Balance product thinking with engineering feasibility
- Focus on what we know, flag what we don't

INPUTS PROVIDED:
- Discovery notes, transcripts, or stakeholder interviews
- Any existing context (SOW, prior docs, client materials)
- Target audience for this PRD

TEMPLATE STRUCTURE (follow this order):
1. TL;DR (3 sentences max: what, why, expected outcome)
2. Summary (what this PRD is about, why it exists)
3. Background and Context (problem context, what led here)
4. Problem and Value (specific pain, cost of inaction, stakeholder value table)
5. Goals and Non-Goals (what we will and won't do)
6. Staged Milestones (POC, MVP, V1 with scope and success criteria)
7. Users and Use Cases (primary users, user flow, key use cases)
8. Functional Requirements (inputs, processing, outputs, workflow logic)
9. Technical Approach (architecture, components, data model, APIs)
10. Assumptions (numbered, with risk if wrong)
11. Open Questions (with owner, needed-by date, status)
12. Tradeoffs (decisions with alternatives considered)
13. Success Metrics (measurable targets with measurement method)
14. Dependencies (technical, external, team)
15. Risks and Mitigations (likelihood, impact, mitigation strategy)
16. Timeline Summary (phase, scope, duration, status)
17. References (links to source materials)

QUALITY STANDARDS:

Problem Statements:
- Quantify pain: "Brokers spend 2-7 days gathering information per submission"
- Name specific inefficiencies: "Manual pre-filling of 9+ page supplementary forms"
- State cost of inaction: "Revenue per head constrained; independents fail due to service capacity"

Stakeholder Value:
- Use tables with specific benefits per role
- Example:
  | Stakeholder | Value |
  |-------------|-------|
  | Brokers | Cut submission time from 1 week to ≤1 day |
  | CSRs | Reduce manual data entry by 50%+ |

Staged Milestones:
- POC: Validate core technical assumption (minimal deliverables, clear success gate)
- MVP: End-to-end functionality for limited scope (numbered features, explicit exclusions)
- V1: Production scale (additions to MVP, production-grade metrics)
- Mark timelines as "[TBD, needs technical review]" if not confirmed

User Flows:
- Number each step: "1. Create new submission → 2. Upload documents → 3. System generates profile"
- Identify the primary user explicitly

Technical Approach:
- Include architecture diagram (ASCII or description)
- List components with technology choices and notes
- Provide simplified data model (SQL or schema description)
- Document key API endpoints if relevant

Success Metrics:
- Every metric must be measurable
- Include target value AND measurement method
- Example: "Time from intake → ready-to-submit | ≤1 day | Timestamp comparison"

Open Questions:
- Assign an owner to every question
- Set a "needed by" date or phase
- Track status (Open/Resolved)

CONSTRAINTS:
- Do NOT invent stakeholder names, metrics, or technical details not in source materials
- Do NOT over-specify timelines without technical input
- Do NOT omit Non-Goals (critical for scope control)
- Flag uncertainty explicitly: "Assumption: X. Risk if wrong: Y."

OUTPUT:
A complete PRD following the template structure, ready for stakeholder review.
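Completeness against the 17-section template can be checked mechanically before handing a draft to reviewers. A minimal sketch in Python, assuming the draft is a markdown file that marks each template section with a `## ` heading (the heading convention and the `check_sections` helper are illustrative assumptions, not part of the template):

```python
import re

# The 17 template sections, in the order the generation prompt specifies.
TEMPLATE_SECTIONS = [
    "TL;DR", "Summary", "Background and Context", "Problem and Value",
    "Goals and Non-Goals", "Staged Milestones", "Users and Use Cases",
    "Functional Requirements", "Technical Approach", "Assumptions",
    "Open Questions", "Tradeoffs", "Success Metrics", "Dependencies",
    "Risks and Mitigations", "Timeline Summary", "References",
]

def check_sections(prd_text: str) -> list[str]:
    """Return the template sections missing from a markdown PRD draft."""
    headings = [m.group(1).strip()
                for m in re.finditer(r"^##\s+(.+)$", prd_text, re.MULTILINE)]
    return [s for s in TEMPLATE_SECTIONS if s not in headings]

draft = "## TL;DR\n...\n## Summary\n...\n"
print(check_sections(draft))  # lists the 15 sections the draft still lacks
```

This catches dropped sections early; it does not judge section quality, which still requires the refinement prompt below.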

PRD Refinement Prompt

Use this prompt to improve an existing draft:

You are refining an existing PRD draft for clarity, completeness, and accuracy.

REVIEW CHECKLIST:

1. Hallucination Guard
   - Are all stakeholder names from source materials?
   - Are all metrics based on real data or explicitly flagged as estimates?
   - Are technical choices grounded in discovery or marked as recommendations?

2. Specificity Check
   - Replace vague language with concrete details:
     - "Reduce time" → "Reduce from X days to Y days"
     - "Improve efficiency" → "Cut manual data entry by 50%"
     - "Multiple forms" → "9+ page supplementary forms"
   - Quantify wherever possible

3. Testable Requirements
   - Can each functional requirement be verified as done/not done?
   - Are acceptance criteria explicit rather than implied?

4. Assumption Coverage
   - Is every assumption numbered?
   - Does each assumption state "Risk if wrong"?
   - Are critical assumptions near the top?

5. Tradeoffs Documentation
   - Are key decisions documented with alternatives?
   - Does each tradeoff show pros, cons, and effort?
   - Is the chosen option clearly marked with rationale?

6. Scope Clarity
   - Are Non-Goals explicit and complete?
   - Is MVP scope distinct from V1 scope?
   - Are "Not in MVP" items listed?

7. Actionability
   - Do open questions have owners?
   - Are timelines marked TBD if not technically reviewed?
   - Are dependencies categorized (Technical, External, Team)?

REFINEMENT ACTIONS:
- Strengthen weak sections (add specificity, quantify impact)
- Flag invented or assumed details for verification
- Ensure all sections from template are present
- Verify internal consistency (goals match success metrics match scope)
- Add missing tables where appropriate (stakeholder value, risks, dependencies)

OUTPUT:
Refined PRD with tracked changes or clear notes on what was modified.
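The Specificity Check lends itself to a rough automated first pass. A hedged sketch in Python: it flags sentences that use improvement language without any number attached. The vague-word list is illustrative and will need tuning per project; a human reviewer still makes the final call.

```python
import re

# Words that often signal an unquantified claim ("Reduce time",
# "Improve efficiency", "Multiple forms"). Illustrative, not exhaustive.
VAGUE = re.compile(r"\b(improve|reduce|faster|better|multiple|several)\b", re.I)
HAS_NUMBER = re.compile(r"\d")

def vague_sentences(text: str) -> list[str]:
    """Return sentences that make a claim but carry no quantity."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if VAGUE.search(s) and not HAS_NUMBER.search(s)]

sample = "Reduce time per submission. Cut entry from 5 days to 1 day."
print(vague_sentences(sample))  # ['Reduce time per submission.']
```

Flagged sentences are candidates for the "from X to Y" rewrites shown in the checklist above, not automatic failures.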

Section-Specific Prompts

Add Technical Architecture

Add a Technical Approach section to this PRD.

Include:
1. Architecture Overview (ASCII diagram or clear description)
2. Key Components table:
   | Component | Technology | Notes |
   |-----------|------------|-------|
3. Data Model (simplified SQL schema or entity descriptions)
4. API Endpoints table (if relevant):
   | Method | Endpoint | Purpose |
   |--------|----------|---------|

Base technical choices on:
- Discovery notes and stakeholder preferences
- Existing tech stack mentioned in context
- Standard Brainforge patterns (Supabase, Next.js, etc.)

Mark uncertain choices as "[Recommended, pending technical review]"

Add Success Metrics

Add a Success Metrics section to this PRD.

For each metric include:
| Metric | Target | How Measured |
|--------|--------|--------------|

Categories to cover:
- Adoption (usage, activation)
- Efficiency (time saved, automation rate)
- Quality (accuracy, error rates, user satisfaction)
- Business (revenue impact, cost reduction)

Rules:
- Every metric must have a measurable target
- Every metric must have a measurement method
- Avoid vanity metrics—focus on outcomes that matter to stakeholders

Add Staged Milestones

Add Staged Milestones to this PRD with POC, MVP, and V1 phases.

For each phase include:
- Goal (one sentence)
- Scope (checkbox list of deliverables)
- Success Criteria (measurable gate to proceed)
- Timeline ([TBD, needs technical review] unless confirmed)

POC should:
- Validate core technical assumption
- Be minimal (1-3 deliverables)
- Have a clear "proceed/pivot" gate

MVP should:
- Deliver end-to-end functionality for limited scope
- List explicit exclusions ("Not in MVP")
- Target 1-3 pilot users or use cases

V1 should:
- Scale to full production use
- List additions beyond MVP
- Include production-grade success criteria

Quality Checklist

Before finalizing, verify:

| Check | Question |
|-------|----------|
| TL;DR | Does it fit in 3 sentences? |
| Problem | Is pain quantified (time, money, effort)? |
| Goals | Are non-goals explicit? |
| Milestones | Are timelines marked TBD if not reviewed? |
| Requirements | Can each be verified as done/not done? |
| Metrics | Is every success criterion measurable? |
| Questions | Does every open question have an owner? |
| Assumptions | Does each state risk if wrong? |
| Data Model | Does it support the stated requirements? |
| References | Are source materials linked? |
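Some of these checks can be automated. For example, "does every open question have an owner?" can be sketched as follows, assuming the Open Questions section is a markdown table with columns Question | Owner | Needed By | Status (the column order is an assumption about your template, and `unowned_questions` is a hypothetical helper):

```python
def unowned_questions(table_rows: list[str]) -> list[str]:
    """Return questions whose Owner cell is empty or a placeholder."""
    flagged = []
    for row in table_rows:
        # Split a markdown table row "| a | b | ... |" into cells.
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if len(cells) >= 2 and cells[1] in ("", "TBD", "?"):
            flagged.append(cells[0])
    return flagged

rows = [
    "| Confirm API rate limits | | MVP | Open |",
    "| Pick auth provider | Jane | POC | Resolved |",
]
print(unowned_questions(rows))  # ['Confirm API rate limits']
```

A check like this belongs in review tooling, not in the PRD itself; the checklist table above remains the source of truth.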

Common Pitfalls

| Pitfall | Why It's Bad | Fix |
|---------|--------------|-----|
| Inventing stakeholder names | Erodes trust, causes confusion | Only use names from source materials |
| Over-specifying timelines | Creates false commitments | Mark "[TBD, needs technical review]" |
| Missing non-goals | Scope creep, misaligned expectations | Always include explicit non-goals |
| Vague success criteria | Can't measure success | Quantify: "from X to Y" |
| Skipping tradeoffs | Decisions look arbitrary | Document alternatives considered |
| Copy-paste from templates | Generic, not tailored | Customize every section to the project |

Usage Examples

Generate a Full PRD

Generate a PRD from this discovery transcript using the PRD Generation Prompt.

Context:
- Client: [Name]
- Discovery source: [transcript/notes path]
- Audience: [technical team / stakeholders / both]

Refine an Existing Draft

Refine this PRD draft using the PRD Refinement Prompt.

Focus areas:
- [ ] Strengthen problem quantification
- [ ] Add missing non-goals
- [ ] Ensure all open questions have owners

Add a Specific Section

Add the Technical Approach section to this PRD.

Constraints:
- Use [Supabase/existing stack] as the database
- Follow [client's] existing architecture patterns
- Mark uncertain choices as recommendations

References

  • Template: standards/02-writing/PRDs/prd-template.md
  • Example: standards/02-writing/PRDs/examples/prd-contextual-ai.md
  • Review Guide: standards/04-prompts/review/review-prompts.md
  • Checklist: standards/02-writing/PRDs/prd-checklist.md