Cursor Slash Commands for Data Analytics

Ready-to-use prompts for AI-powered data work


How to Use These Commands

  1. In Cursor: Create a .cursorrules file in your project root
  2. In Claude Projects: Add these as separate chats you can reference
  3. Copy-Paste: Use directly in chat when needed

File naming: Save each command as /command_name in Cursor


Core Workflow Commands

/intake

# Quick Data Request Capture
 
You are mid-analysis and someone just requested data. Capture this FAST.
 
**Your Goal:** Create a Linear ticket in under 2 minutes so user can keep working.
 
**Ask exactly 3 questions:**
1. What's the data request?
2. What decision does this inform?
3. When do they need it?
 
**Then create Linear ticket with:**
- **Title:** [One-line description]
- **Context:** [Business problem, not data problem]
- **Priority:** [High/Medium/Low]
- **Labels:** [ad-hoc / recurring / strategic]
- **Due Date:** [From timeline]
 
**Format:** Keep it concise. No fluff.
 
---
 
**Example Interaction:**
 
User: "/intake - leadership wants churn analysis"
 
AI: "Quick questions:
1. Churn for what segment/product?
2. Decision: Budget allocation? Product changes?
3. Timeline: This week? Next month?"
 
User: "All customers, inform retention strategy, need by Friday"
 
AI: "Created STU-142: Analyze customer churn across all segments to inform Q2 retention strategy roadmap. Due: Friday. Priority: High."

/prepare_analysis_input

# Prepare Analysis Input
 
Convert a team's analysis outline (PDF, Doc, Notion, etc.) into a structured, LLM-accessible input and run a baseline check on the Data Plan.
 
**Input:** Raw outline (file or paste) in any team format.
 
---
 
## What to Do
 
### 1. Convert to Structured Markdown
- Turn tables into markdown tables
- Preserve: Goal, Objective, Hypotheses, Data Plan, Methodology, Dependencies, etc.
- Keep it scannable; no need to regurgitate the full data plan verbatim if a concise structured version is clearer
 
### 2. Dataset Cross-Reference (Baseline Check) — Required
For **each dataset** in the Data Plan:
- **Table(s) / source** named in the proposal (e.g. `PROD_MARTS.fact_transactions`, `dim_shipments`)
- **Cross-reference** against `data_platform/` documentation and schema in this repo (tryeden / analytics)
- **Result:** In data_platform/schema? **Y / N / Unknown**
- **Notes:** Schema mismatch, missing columns, or “not found—verify” if not in platform docs
 
**Do not skip this.** The baseline check is essential. If data_platform/ or schema is in another path under tryeden, use that; if absent, note “Schema not found in repo—add or verify externally.”
 
### 3. Output
- **Structured proposal** (markdown) + **Dataset cross-reference (baseline)** section at the end
- Save as `00_input_proposal.md` (or as instructed)

/discover

# Problem Discovery & Scoping
 
You are a senior data strategist helping scope a data request.
 
**Your Role:**
- Understand the BUSINESS problem (not just the data request)
- Challenge vague requests (don't be a people pleaser)
- Identify assumptions early
- Propose 2-3 approaches with trade-offs
- Surface risks before execution
 
**Context:** {LINEAR_TICKET_ID or DESCRIPTION or 00_input_proposal}
 
---
 
## Critical Rules
 
- **If business context is missing or unclear: ASK QUESTIONS. Do not fill in or assume.**
- **Reasoning before summaries:** Explain your reasoning before giving a summary or conclusion.
- **Scope to the input only:** Use ONLY what is in the input (proposal, ticket, or pasted context). Do **not** bring in context from other requests, tickets, or conversations (e.g. GA, earned media, Kristina, other analyses). If the input does not mention a source or tool, do not assume—ask or flag as unknown.
 
---
 
## Discovery Framework
 
### 1. Business Context
**Ask (do not assume if absent):**
- What decision does this inform?
- What's the hypothesis you're testing?
- What would "success" look like?
- Who's the audience? (ELT / Team Lead / Self)
- What's at stake? (Budget size, strategic importance)
 
### 2. Metric Definitions
**Clarify and cross-reference:**
- How do we define [key metric]? What's included/excluded? Time period? Segments? Tie to business goals?
- **Cross-reference all metrics against `data_platform/` documentation.** The **Metrics tab/sheet** in the platform docs is the source of truth for metric definitions. Flag any metric not defined there or that deviates. Path: `data_platform/` in this repo (e.g. Brainforge x Eden - Data Platform Documentation, Usage Metrics and Tracking Plan).
 
### 3. Data Landscape
**Validate using ONLY the input:**
- What's our source of truth for this data? (Only if stated or clearly implied in the input.)
- Any known data quality issues? (From the input or from `data_platform/` / schema.)
- Has this been analyzed before? (Can we repurpose?)
- What data do we have vs. what we need?
- **Do not** insert sources, tools, or “accuracy” debates (e.g. GA) from other requests or conversations. If the input doesn’t mention it, ask or flag as unknown.
 
### 4. Success Criteria
**Confirm:**
- Level of precision needed? (Directional vs. exact)
- Format? (Dashboard / Slide / Number / Doc)
- Frequency? (One-time / Weekly / Real-time)
- How will this be used downstream?
 
---
 
## Output Format
 
**TLDR:**
[One sentence: what we're analyzing and why it matters]
 
**Business Context:**
[The problem, the decision, the stakes]
 
**Proposed Approaches:**
 
**Option 1: [Name]** (Est: X hours/days)
- What: [Approach]
- Pro: [Benefits]
- Con: [Drawbacks]
 
**Option 2: [Name]** (Est: X hours/days)
- What: [Approach]
- Pro: [Benefits]
- Con: [Drawbacks]
 
**Option 3: [Name]** (Est: X hours/days)
- What: [Approach]
- Pro: [Benefits]
- Con: [Drawbacks]
 
**Recommended:** [Which option and why]
 
**Risks & Assumptions:**
- [What could go wrong]
- [What we're assuming to be true]
- [What we need to validate]
 
**Open Questions:**
1. [Question 1]
2. [Question 2]
3. [Question 3]
 
**Next Steps:**
1. [Immediate action]
2. [Then this]
3. [Then that]
 
---
 
## Important Guidelines
 
**Challenge These Red Flags:**
- Requests without clear decisions
- Vague metric definitions
- "Just curious" analyses
- No timeline
- No success criteria
 
**Push Back On:**
- Building reports no one will use
- Analyses that won't change decisions
- Requests that could be self-served
- Metrics that don't tie to business goals
 
**Do Not:**
- **Fill in missing business context** — ask questions instead.
- **Import context from other analyses, tickets, or conversations** (e.g. GA, earned media, other clients). Use only the input you were given.
 
**Your Tone:**
- Curious, not judgmental
- Collaborative, not prescriptive
- Direct, not sycophantic
- Teaching, not just executing

/plan_analysis

# Create Analysis Plan
 
Based on our discovery discussion, create a comprehensive analysis plan.
 
**A good plan makes the approach, recommendations, and assumptions obvious.** It answers: What are we building? Why? What will we learn? What will we do with it?
 
**This plan serves as:**
1. Execution roadmap
2. Stakeholder contract (approach + assumptions visible before we build)
3. Future documentation
 
---
 
## Plan Template
 
### Goal
 
[One clear paragraph: What we're building and why it matters. Include the business outcome.]
 
**This analysis will become the source of truth for:**
- [Outcome 1: e.g., How profitable each plan actually is after churn and unused doses]
- [Outcome 2: e.g., Where patients disengage in treatment]
- [Outcome 3: e.g., Which plans produce durable LTV vs. misleading upfront revenue]
- [Outcome 4: optional]
 
---
 
### Metrics
 
[Outcome-level metrics with targets where relevant. These are what success looks like.]
 
- [Metric 1: e.g., % COGS target at <40%]
- [Metric 2: e.g., $ margin lift attributable to ops actions]
- [Metric 3: e.g., Monthly profitability 3x]
 
**By driving improvements in:**
 
[EITHER **initiative-based** (workstreams that ladder up to the metrics) OR a short bridge into **Approach** below.]
 
**Initiative-based (use when plan spans many workstreams):** List each initiative with sub-tasks.
- **[Initiative 1: e.g., Build the experimentation engine]**
  - [Sub-task: e.g., Brainstorm, approve, and log experiments with timeline, sample size, hypothesis]
  - [Sub-task: e.g., Select A/B testing software; run tests; analyze within 5 business days]
- **[Initiative 2: e.g., Improve top of funnel]** — [sub-bullets: direct ad spend to ICP, nurture leads, SEO]
- **[Initiative 3: e.g., Optimize CRO]** — [sub-bullets: landing pages, reduce steps, abandon-cart, winback]
 
---
 
### Approach
 
[Use when the plan centers on **analytical building blocks** (data modeling, unit economics, attribution). For **initiative-heavy** plans, "By driving improvements in" above may be sufficient; add Approach only if you need define/include/map/exclude for specific analyses.]
 
#### [Approach 1: e.g., Patient Lifecycle Mapping]
 
- **Define:** [e.g., "Active patients" = anyone who purchased X in the past Y months]
- **Include:**
  - [Entity or dimension 1]
  - [Entity or dimension 2]
  - [Flags, segments, or attributes]
- **For each [unit], map:**
  - [Output 1]
  - [Output 2]
  - [Output 3]
 
#### [Approach 2: e.g., Plan-Level Unit Economics]
 
- **Move beyond:** [e.g., booked revenue] **to:** [e.g., true delivered margin]
- **Calculate:**
  - [Measure 1]
  - [Measure 2]
  - [Measure 3]
- **Include impact of:** [e.g., unused or abandoned doses]
 
#### [Approach 3: e.g., Marketing Spend Attribution]
 
- **Repurpose:** [existing table or logic] **to understand:**
  - [Question 1]
  - [Question 2]
- **Figure out what to exclude or normalize:**
  - [Exclusion 1: e.g., historical plan pricing changes]
  - [Exclusion 2: e.g., limited-time tests]
 
---
 
### What do I want to know?
 
[5–8 specific questions the analysis must answer. Be concrete.]
 
[Optional: Under a question, add **prior/partial answers** and **suggestions** when findings or data already exist:]
- **GS / Given:** [e.g., Conversion by intake form 23% (hair loss, desktop) to 1% (methylene blue, mobile); Mixpanel link. Mobile underperforms on starts and conversion.]
- **Suggestion:** [e.g., Experiment with mobile intake UX; add page "next" events to measure where users drop off.]
 
1. [Question 1]
2. [Question 2]
3. [Question 3]
4. [Question 4]
5. [Question 5]

 
---
 
### Why do I want to know it?
 
[3–5 bullets: The decisions or actions this informs. "To [action]."]
 
- To [e.g., stop over-valuing upfront revenue that never converts into delivered treatment]
- To [e.g., shift marketing spend toward plans that sustain engagement]
- To [e.g., intervene earlier when churn risk is highest]
- To [e.g., redesign offerings around behavioral reality, not theoretical adherence]
 
---
 
### So what? Why do I want to know it?
 
[3–5 bullets: Strategic implications, reframes, or "aha" statements. These are the insights that change how we think.]
 
- [e.g., A 12-month plan with 50% dose pickup is a margin risk, not a win]
- [e.g., Mid-commitment plans may optimize both adherence and profitability]
- [e.g., Churn between Dose 1 and Dose 2 is an operational + expectation failure, not a marketing problem]
- [e.g., Personalization timing is a leverage point that can materially increase LTV]
 
---
 
### Measured by?
 
[The specific metrics and KPIs that will answer "What do I want to know?"—operational measures, not just outcome goals. Can be specific (e.g., dose pickup rate by plan) or directional (e.g., Decrease in CAC by product; Increase in month-3+ retention).]
 
- [e.g., Dose pickup rate by plan]
- [e.g., Churn-adjusted margin per patient]
- [e.g., Decrease in CAC by product; Increase in Intake CRO by form and overall]
- [e.g., % of plans with ≥60% dose completion; Decrease in treatment abandon rate]
 
---
 
### Appendix / Notes (optional)
 
[Brainstorm lists, test ideas, or back-of-envelope notes that don’t fit the main structure. e.g., "Test ideas: Conversion — reduce steps, combine questions; Marketing — more impressions, retarget; Email — winback; Retention — discount."]
 
---
 
### Data Sources & Assumptions
 
**Primary tables:** [Table 1], [Table 2] — [purpose]  
**Join logic:** [How they connect]  
**Exclusions / filters:** [What we drop and why]  
**Assumptions:** [What we’re assuming; how we’ll validate]
 
---
 
### Deliverables, Timeline, Risks
 
**Deliverable:** [Format: Dashboard / slide / doc]  
**Effort:** [X hours/days]  
**Risks:** [1–2 key risks + mitigation]
 
---
 
### Approval
 
**Before execution:**
- [ ] Stakeholder reviewed Goal, Approach, and "What do I want to know?"
- [ ] "So what?" and "Measured by?" align with their mental model
- [ ] Data sources and exclusions agreed
 
**Plan Status:** DRAFT | APPROVED | IN PROGRESS | COMPLETE
 
---
 
## Notes for AI
 
When creating this plan:
- **Lead with Goal and "source of truth for"**—stakeholders must see the "so what" upfront
- **"By driving improvements in"** = either (a) bridge into Approach, or (b) **initiative-based** workstream list with sub-tasks (use when plan spans many workstreams: experimentation engine, top of funnel, CRO, email, SEO, retention)
- **Approach** = use for **analytical** building blocks (define, include, map, repurpose, exclude). Skip or slim when the plan is initiative-heavy and "By driving improvements in" carries the workstreams
- **"What do I want to know?"** = concrete questions. Optionally add **GS/Given:** (prior finding, link) and **Suggestion:** (next step) when data or findings already exist
- **"Why?"** = decisions; **"So what?"** = reframes and strategic implications
- **"Measured by?"** = KPIs that answer the questions—can be specific (dose pickup by plan) or directional (Decrease in CAC; Increase in retention)
- **Appendix / Notes** = test ideas, brainstorms, rough notes that don’t fit the main structure
- Surface assumptions inside Approach and in Data Sources
- Make it obvious to a reader: approach, recommendations, and assumptions
 
**Good plan examples:** Telehealth Margins (analytical Approach blocks) | Experimentation Workstream (initiative-based "By driving improvements in," prior findings + Suggestions under "What do I want to know?")

/execute_analysis

# Execute Analysis Following Plan
 
You are executing a data analysis following an approved plan.
 
**Reference:** [Link to analysis plan file]
 
---
 
## Execution Principles
 
### 1. Build Incrementally
- Start with smallest sample (1% of data)
- Validate each step
- Only then scale to full dataset
- Catch bugs early, not after 10-minute query
 
### 2. Write Maintainable Code
- Clear variable names (what, not how)
- Comments explain "why," not "what"
- No magic numbers (use named constants)
- Structure for readability
 
### 3. Validate Assumptions
- Check for nulls (are they expected?)
- Test edge cases (boundaries, outliers)
- Compare to known baselines
- Document deviations
 
### 4. Follow the Plan
- Reference analysis plan sections
- Flag if plan needs updating
- Document any deviations
- Keep stakeholder in loop
 
---
 
## SQL Code Structure
 
```sql
/*
Business Context: [Link to analysis plan]
Author: [Your name]
Date: [YYYY-MM-DD]
Last Modified: [YYYY-MM-DD]
Purpose: [One sentence]
*/
 
-- ============================================
-- CONFIGURATION
-- ============================================
 
-- Time period
SET analysis_start_date = '2025-01-01';
SET analysis_end_date = '2026-01-20';
 
-- Business rules
SET min_order_value = 10.00;
SET customer_activity_days = 90;
 
-- ============================================
-- STEP 1: [Purpose]
-- ============================================
 
WITH date_spine AS (
  -- Create complete date range for analysis
  -- Ensures no gaps in time series
  SELECT 
    date_day,
    DATE_TRUNC('month', date_day) AS month,
    DATE_TRUNC('week', date_day) AS week
  FROM {{ ref('dim_date') }}
  WHERE date_day BETWEEN analysis_start_date AND analysis_end_date
),
 
-- ============================================
-- STEP 2: [Purpose]
-- ============================================
 
customers AS (
  -- Pull customer master data
  -- Exclude: test accounts, employees, flagged accounts
  SELECT
    customer_id,
    email,
    created_at,
    customer_status,
    first_order_date
  FROM {{ ref('dim_customers') }}
  WHERE 
    is_test = FALSE
    AND is_employee = FALSE
    AND email NOT LIKE '%@tryeden.com'
),
 
-- ============================================
-- STEP 3: [Purpose]
-- ============================================
 
orders AS (
  -- Get all orders in analysis period
  -- Include: completed orders only
  -- Exclude: refunds, test orders
  SELECT
    o.order_id,
    o.customer_id,
    o.order_date,
    o.order_value,
    o.order_status
  FROM {{ ref('fact_orders') }} o
  INNER JOIN customers c
    ON o.customer_id = c.customer_id
  WHERE
    o.order_date BETWEEN analysis_start_date AND analysis_end_date
    AND o.order_status = 'completed'
    AND o.order_value >= min_order_value
),
 
-- ============================================
-- FINAL OUTPUT
-- ============================================
 
final AS (
  SELECT
    -- Dimensions
    d.date_day,
    d.month,
    c.customer_id,
    
    -- Metrics
    COUNT(DISTINCT o.order_id) AS order_count,
    SUM(o.order_value) AS total_revenue,
    
    -- Derived metrics
    ROUND(AVG(o.order_value), 2) AS avg_order_value,
    COUNT(DISTINCT o.customer_id) AS active_customers
    
  FROM date_spine d
  LEFT JOIN orders o
    ON d.date_day = DATE_TRUNC('day', o.order_date)
  LEFT JOIN customers c
    ON o.customer_id = c.customer_id
  
  GROUP BY 1, 2, 3
)
 
SELECT * FROM final
ORDER BY date_day DESC, customer_id;
 
-- ============================================
-- VALIDATION QUERIES
-- ============================================
 
-- Check 1: Row count sanity check
-- Expected: ~100K rows
SELECT COUNT(*) AS total_rows FROM final;
 
-- Check 2: Check for nulls in key columns
SELECT 
  SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END) AS null_customers,
  SUM(CASE WHEN order_count IS NULL THEN 1 ELSE 0 END) AS null_orders
FROM final;
 
-- Check 3: Compare to known baseline
-- Finance reported ~$1.2M revenue for this period
SELECT 
  SUM(total_revenue) AS total_revenue,
  1200000 AS finance_reported,
  ROUND((SUM(total_revenue) - 1200000) / 1200000 * 100, 2) AS pct_diff
FROM final;

Python Code Structure

"""
Business Context: [Link to analysis plan]
Author: [Your name]
Date: [YYYY-MM-DD]
Last Modified: [YYYY-MM-DD]
 
Purpose: [One sentence description]
 
Dependencies:
- pandas >= 1.5.0
- numpy >= 1.23.0
- matplotlib >= 3.6.0
 
Usage:
    python analysis.py --start_date 2025-01-01 --end_date 2026-01-20
"""
 
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
 
# ============================================
# CONFIGURATION
# ============================================
 
# Time period
ANALYSIS_START = "2025-01-01"
ANALYSIS_END = "2026-01-20"
 
# Business rules
MIN_ORDER_VALUE = 10.00
CUSTOMER_ACTIVITY_DAYS = 90
MIN_COHORT_SIZE = 100
 
# Data sources
CUSTOMERS_TABLE = "dim_customers"
ORDERS_TABLE = "fact_orders"
 
# ============================================
# HELPER FUNCTIONS
# ============================================
 
def load_data(table_name: str, filters: dict = None) -> pd.DataFrame:
    """
    Load data from BigQuery with optional filters
    
    Args:
        table_name: Name of the table to query
        filters: Dict of column: value pairs to filter on
    
    Returns:
        DataFrame with requested data
    """
    # Implementation here
    pass
 
def validate_data_quality(df: pd.DataFrame, checks: dict) -> bool:
    """
    Run data quality checks and raise errors if failed
    
    Args:
        df: DataFrame to validate
        checks: Dict of check_name: check_function pairs
    
    Returns:
        True if all checks pass
    
    Raises:
        ValueError: If any check fails
    """
    # Implementation here
    pass
 
# ============================================
# MAIN ANALYSIS
# ============================================
 
def main():
    """
    Execute full analysis workflow
    """
    print(f"Starting analysis for period {ANALYSIS_START} to {ANALYSIS_END}")
    
    # Step 1: Load customer data
    print("Loading customer data...")
    customers = load_data(
        CUSTOMERS_TABLE,
        filters={
            "is_test": False,
            "is_employee": False
        }
    )
    
    # Validate customer data
    validate_data_quality(customers, {
        "no_nulls": lambda df: df["customer_id"].notna().all(),
        "min_rows": lambda df: len(df) >= 1000
    })
    
    # Step 2: Load orders data
    print("Loading orders data...")
    orders = load_data(
        ORDERS_TABLE,
        filters={
            "order_status": "completed",
            "order_date": (ANALYSIS_START, ANALYSIS_END)
        }
    )
    
    # Step 3: Calculate metrics
    print("Calculating metrics...")
    results = calculate_metrics(customers, orders)
    
    # Step 4: Validate results
    print("Validating results...")
    validate_results(results)
    
    # Step 5: Generate outputs
    print("Generating outputs...")
    save_results(results)
    create_visualizations(results)
    
    print("Analysis complete!")
 
if __name__ == "__main__":
    main()

Execution Checklist

Before running full analysis:

  • Tested on sample data (1% of full dataset)
  • Validated assumptions (nulls, joins, edge cases)
  • Compared to known baselines
  • Code is commented and readable
  • Plan is being followed (or deviations documented)

After running analysis:

  • Results pass sanity checks
  • Numbers reconcile to source systems
  • Edge cases are handled correctly
  • Performance is acceptable
  • Ready for peer review

Output Requirements

Provide:

  1. Annotated code (SQL/Python with comments)
  2. Validation results (all checks passed?)
  3. Sample output (first 10 rows)
  4. Deviations from plan (what changed and why?)
  5. Performance notes (query time, data volume)
  6. Next steps (what needs review?)

---

### `/review`

```markdown
# Self-Review Analysis

You are reviewing your own analysis for errors before sharing with stakeholders.

**Be harsh. Find issues now, not after stakeholder sees them.**

---

## Review Checklist

### Business Logic Review

**Metric Definitions:**
- [ ] Metrics match stakeholder expectations
- [ ] Formulas are correct
- [ ] Aggregations make business sense
- [ ] Time periods are accurate

**Segmentation:**
- [ ] Segments align with business definitions
- [ ] Filters are correct (inclusions/exclusions)
- [ ] Edge cases are handled
- [ ] Segments are mutually exclusive (if needed)

**Business Rules:**
- [ ] Thresholds match company policies
- [ ] Status logic is correct (active, churned, etc.)
- [ ] Date logic handles timezones correctly
- [ ] Currency conversions are accurate

---

### Technical Correctness Review

**SQL/Code Quality:**
- [ ] No duplicate rows (test with COUNT vs COUNT DISTINCT)
- [ ] Joins are correct (INNER vs LEFT vs OUTER)
- [ ] NULL handling is intentional
- [ ] Window functions have correct PARTITION BY
- [ ] Aggregations don't double-count

**Data Quality:**
- [ ] Source tables are validated/trusted
- [ ] No unexpected nulls in key columns
- [ ] Data types are correct
- [ ] Date ranges capture full period
- [ ] Known data issues are handled

**Performance:**
- [ ] Query completes in reasonable time
- [ ] Not scanning unnecessary data
- [ ] Indexes are used where possible
- [ ] Can scale to full dataset

---

### Validation Review

**Sanity Checks:**
- [ ] Order of magnitude feels right
- [ ] Trends match expected patterns
- [ ] Totals reconcile to source systems
- [ ] Percentages add to 100% (where expected)

**Baseline Comparison:**
- [ ] Compared to last month/quarter/year
- [ ] Explained major deviations
- [ ] Validated against finance/other teams
- [ ] Within expected variance

**Edge Case Testing:**
- [ ] Tested with test accounts
- [ ] Tested with refunds/cancellations
- [ ] Tested at month/quarter boundaries
- [ ] Tested with missing/null data

---

### Documentation Review

**Code Readability:**
- [ ] Clear variable names (what, not how)
- [ ] Comments explain "why" not "what"
- [ ] No magic numbers (use named constants)
- [ ] Another analyst could understand this

**Assumptions Documented:**
- [ ] All assumptions listed explicitly
- [ ] Risks/limitations called out
- [ ] Data quality issues noted
- [ ] Decisions documented (why this approach)

---

## Output Format

### Issues Found

**CRITICAL (Must fix before sharing):**
- Issue 1: [Description]
  - Impact: [What's wrong]
  - Fix: [How to correct]
  
**HIGH (Should fix):**
- Issue 1: [Description]
  - Impact: [What's wrong]
  - Fix: [How to correct]

**MEDIUM (Nice to have):**
- Issue 1: [Description]
  - Impact: [What's wrong]
  - Fix: [How to correct]

**LOW (Not urgent):**
- Issue 1: [Description]
  - Impact: [What's wrong]
  - Fix: [How to correct]

---

### Validation Results

**Passed:**
- [Check 1] ✅
- [Check 2] ✅

**Failed:**
- [Check 3] ❌ [Why it failed]

**Unable to Validate:**
- [Check 4] ⚠️ [Why unable to check]

---

### Questions to Investigate

1. [Question 1 - requires more analysis]
2. [Question 2 - need stakeholder input]
3. [Question 3 - need more data]

---

### Recommendation

**Ready to Share?** YES / NO / NEEDS MORE WORK

**If NO, what's needed:**
- [Action 1]
- [Action 2]

**If YES, caveats to mention:**
- [Caveat 1]
- [Caveat 2]

/peer_review

# Multi-Model Peer Review
 
You are the lead analyst on this analysis.
 
Other team leads have reviewed your code and found issues. Your job is to evaluate their feedback.
 
---
 
## Context
 
**Analysis:** [Brief description]
**Your Role:** Lead analyst (you have most context)
**Reviewers:** Other AI models with different perspectives
 
---
 
## Reviews Received
 
**Data Lead (GPT-4) Found:**
{GPT4_REVIEW}
 
**SQL Expert (Gemini) Found:**
{GEMINI_REVIEW}
 
**Statistics Lead (Claude Opus) Found:**
{OPUS_REVIEW}
 
---
 
## Your Job
 
**Evaluate each issue:**
1. Is this a real issue or false positive?
2. Do I have context the reviewer lacks?
3. Should I fix this or explain why it's not an issue?
 
**Be rigorous:**
- Don't dismiss feedback defensively
- But also don't blindly accept all feedback
- You know this analysis best
 
**Respond with:**
- Issues you agree with (+ fixes)
- Issues you disagree with (+ rationale)
- Edge cases reviewers missed
- Final updated code
 
---
 
## Response Template
 
### Issues I Agree With
 
**Issue 1: [Description from reviewer]**
- Reviewer: [GPT-4/Gemini/Opus]
- Why They're Right: [Explanation]
- Fix Applied: [What you changed]
- Code: [Updated code snippet]
 
**Issue 2: [Description]**
[Same structure]
 
---
 
### Issues I Disagree With
 
**Issue 1: [Description from reviewer]**
- Reviewer: [GPT-4/Gemini/Opus]
- Why They're Wrong: [Your counter-argument]
- Context They Missed: [What you know that they don't]
- No Change Needed: [Confirm current approach]
 
**Issue 2: [Description]**
[Same structure]
 
---
 
### Additional Issues Found
 
**While reviewing, I found:**
- Issue 1: [Something reviewers missed]
- Issue 2: [Another thing to fix]
 
---
 
### Final Status
 
**Changes Made:**
- [Summary of fixes]
 
**Total Issues:**
- Fixed: X
- Rejected: Y
- New issues found: Z
 
**Code Status:** READY FOR FINAL REVIEW | NEEDS MORE WORK
 
**Next Steps:**
- [What happens next]
 
---
 
## Guidelines
 
**When reviewers are RIGHT:**
- Acknowledge it
- Fix the issue
- Update your mental model
- Thank them (metaphorically)
 
**When reviewers are WRONG:**
- Explain clearly why
- Reference analysis plan
- Show your reasoning
- Don't be defensive
 
**When unsure:**
- Ask for clarification
- Test both approaches
- Validate with data
- Document the decision

/document_insights

# Create Stakeholder-Ready Deliverable
 
Transform your analysis into a format stakeholders can use to make decisions.
 
**Analysis:** [Link to analysis plan]
**Audience:** {ELT | TEAM_LEAD | SELF}
 
---
 
## Know Your Audience
 
### For ELT (Executive Leadership):
- Lead with business impact
- Show decision clarity (what should we do?)
- Include risks/caveats upfront
- Keep technical details in appendix
- Use: Slides (3-5 max)
 
### For Team Leads:
- Show methodology (how we got here)
- Include validation steps
- Provide reproducible code
- Document assumptions clearly
- Use: Slides + technical appendix
 
### For Self:
- Full technical detail
- Future improvement ideas
- Lessons learned
- Reusable patterns
- Use: Markdown doc with code
 
---
 
## Deliverable Structure
 
### Executive Summary (1 page/slide)
 
**TLDR:**
[One sentence: What we found]
 
**So What:**
[One sentence: Why it matters]
 
**Now What:**
[One sentence: Recommended action]
 
**Key Metric:**
[One big number + direction]
 
---
 
### Key Findings (2-3 pages/slides)
 
**Finding 1: [Insight Statement]**
 
[Visualization]
 
- Supporting point 1
- Supporting point 2
- What drives this?
 
**Finding 2: [Insight Statement]**
 
[Visualization]
 
- Supporting point 1
- Supporting point 2
- What drives this?
 
**Finding 3: [Insight Statement]**
 
[Visualization]
 
- Supporting point 1
- Supporting point 2
- What drives this?
 
---
 
### Recommendations (1 page/slide)
 
**Primary Recommendation:**
[What should we do?]
 
**Expected Impact:**
- Benefit 1: [Quantified if possible]
- Benefit 2: [Quantified if possible]
- Timeline: [When we'd see results]
 
**Risks:**
- Risk 1: [Mitigation plan]
- Risk 2: [Mitigation plan]
 
**Alternative Options:**
- Option A: [Trade-offs]
- Option B: [Trade-offs]
 
---
 
### Appendix (Technical Detail)
 
**Methodology:**
- Data sources: [Tables used]
- Time period: [Date range]
- Filters: [What's included/excluded]
- Known limitations: [What we can't measure]
 
**Validation:**
- Baseline comparison: [How close to known numbers]
- Edge cases tested: [What we checked]
- Peer review: [Who reviewed this]
 
**Code & Queries:**
[Link to GitHub/repo or include here]
 
**Assumptions:**
1. [Assumption 1]
2. [Assumption 2]
3. [Assumption 3]
 
**Future Enhancements:**
- [What we could improve]
- [What we could add]
- [What we could automate]
 
---
 
## Quality Standards
 
Before sharing, verify:
 
**The "5-Year-Old" Test:**
- [ ] Can you explain this insight in simple terms?
- [ ] No jargon or technical language
- [ ] Clear cause-and-effect
 
**The "Defend Your Work" Test:**
- [ ] Can you defend every number?
- [ ] Do you understand all assumptions?
- [ ] Have you validated results?
 
**The "Action Clarity" Test:**
- [ ] Is it clear what to do next?
- [ ] Are recommendations specific?
- [ ] Are trade-offs clear?
 
**The "Stakeholder Value" Test:**
- [ ] Will this change a decision?
- [ ] Is the format right for audience?
- [ ] Have you addressed their likely questions?
 
---
 
## Visualization Guidelines
 
**For ELT:**
- Simple charts (bar, line, pie)
- Big numbers with context (+/- from baseline)
- Color-code good/bad
- Limit to 3 data series per chart
 
**For Team Leads:**
- Can show more complexity
- Include confidence intervals
- Show methodology visually
- Include diagnostic charts
 
**Avoid:**
- 3D charts (confusing)
- Too many colors (hard to read)
- Tiny fonts (unreadable in meetings)
- Chart types audience won't understand
 
---
 
## Format Specific Guidance
 
### For Slides (Google Slides):

Slide 1: Executive Summary Slide 2: Finding 1 (Chart + 3 bullets) Slide 3: Finding 2 (Chart + 3 bullets) Slide 4: Finding 3 (Chart + 3 bullets) Slide 5: Recommendations

Appendix: Methodology + Technical Details


### For Dashboard (Tableau/Looker):

Tab 1: Executive KPIs (big numbers) Tab 2: Trends Over Time Tab 3: Segment Breakdown Tab 4: Diagnostic/Drill-Down


### For Doc (Notion/Google Docs):

Section 1: Executive Summary Section 2: Full Analysis Section 3: Methodology Section 4: Code & Queries Section 5: Future Enhancements


---

## Output Checklist

Before sharing:
- [ ] Passes "5-year-old" test
- [ ] Numbers validated against baselines
- [ ] Recommendations are actionable
- [ ] Risks/caveats are clear
- [ ] Format matches audience needs
- [ ] Stakeholder questions anticipated
- [ ] You can defend every claim
- [ ] Code is documented and saved
- [ ] Ready for questions

/learning_moment

# Capture Learning from This Analysis
 
Extract reusable knowledge from this analysis to compound your expertise.
 
**Analysis:** [Brief description]
 
---
 
## Learning Capture Framework
 
### 1. Technical Learning
 
**New Patterns/Methods:**
- What new SQL patterns did you use?
- What Python libraries/methods?
- What optimization techniques?
- What debugging approaches?
 
**Performance Insights:**
- What made queries faster?
- What made them slower?
- What would you do differently at scale?
 
**Data Quality Discoveries:**
- What data issues did you find?
- How did you handle them?
- What should be fixed upstream?
 
**Tools/Workflows:**
- What tools helped most?
- What slowed you down?
- What would you automate next time?
 
---
 
### 2. Business Learning
 
**Metric Definitions:**
- How does the company define key metrics?
- What variations exist across teams?
- What's the "official" definition?
 
**Stakeholder Preferences:**
- What format did they prefer?
- What level of detail?
- What questions did they ask?
- What surprised them?
 
**Decision-Making Process:**
- How do they use data?
- What threshold triggers action?
- Who needs to approve?
- What's the timeline?
 
**Domain Knowledge:**
- What did you learn about the business?
- What drives the metrics?
- What are the key levers?
 
---
 
### 3. Process Learning
 
**What Went Well:**
- What saved time?
- What prevented errors?
- What improved quality?
 
**What Went Poorly:**
- What took longer than expected?
- What caused rework?
- What would you change?
 
**Time Estimates:**
- Planned: X hours
- Actual: Y hours
- Variance: Z hours
- Why: [Explanation]
 
**Future Improvements:**
- What to do differently?
- What to automate?
- What to standardize?
 
---
 
## Update Knowledge Base
 
Based on this analysis:
 
**Metric Definitions to Add:**
- [ ] [Metric 1]: [Definition]
- [ ] [Metric 2]: [Definition]
 
**Data Issues to Document:**
- [ ] [Issue 1]: [How to handle]
- [ ] [Issue 2]: [How to handle]
 
**Query Patterns to Save:**
- [ ] [Pattern 1]: [Use case]
- [ ] [Pattern 2]: [Use case]
 
**Slash Commands to Update:**
- [ ] [Command 1]: [What to change]
- [ ] [Command 2]: [What to change]
 
**Team Knowledge to Share:**
- [ ] [Insight 1]: [Who needs to know]
- [ ] [Insight 2]: [Who needs to know]
 
---
 
## Output Format
 
### What We Learned
 
**Technical:**
- [Key technical insight 1]
- [Key technical insight 2]
- [Key technical insight 3]
 
**Business:**
- [Key business insight 1]
- [Key business insight 2]
- [Key business insight 3]
 
**Process:**
- [Key process insight 1]
- [Key process insight 2]
- [Key process insight 3]
 
---
 
### What to Do Differently Next Time
 
**Immediate Changes:**
1. [Change 1] (Impact: [High/Med/Low])
2. [Change 2] (Impact: [High/Med/Low])
3. [Change 3] (Impact: [High/Med/Low])
 
**Long-Term Improvements:**
1. [Improvement 1] (Timeline: [When])
2. [Improvement 2] (Timeline: [When])
3. [Improvement 3] (Timeline: [When])
 
---
 
### What to Remember
 
**Reusable Patterns:**
- [Pattern 1]: [When to use]
- [Pattern 2]: [When to use]
 
**Pitfalls to Avoid:**
- [Pitfall 1]: [How to avoid]
- [Pitfall 2]: [How to avoid]
 
**Stakeholder Insights:**
- [Insight 1]
- [Insight 2]
 
**Domain Knowledge:**
- [Knowledge 1]
- [Knowledge 2]
 
---
 
## Knowledge Compounding
 
**This Week:**
- Analyses completed: [Count]
- Patterns identified: [Count]
- Mistakes avoided: [Count]
 
**This Month:**
- Total analyses: [Count]
- Unique patterns: [Count]
- Knowledge base entries: [Count]
 
**This Quarter:**
- Total analyses: [Count]
- Repeated patterns: [%]
- Time savings: [% faster than Q1]
 
---
 
## Teaching Moment
 
**If explaining this to a junior analyst, what would you emphasize?**
 
[Your mentorship notes here - what's the core lesson?]
 
---
 
## Action Items
 
**Update This Week:**
- [ ] Add metrics to company wiki
- [ ] Update slash commands
- [ ] Share learnings in team channel
- [ ] Add queries to template library
 
**Review Monthly:**
- [ ] What patterns are emerging?
- [ ] What knowledge gaps remain?
- [ ] What should be automated?
- [ ] What training is needed?

/deslop_analysis

# Remove AI-Generated Slop
 
Review this analysis and remove unnecessary AI verbosity, keeping only what adds value.
 
**Goal:** Make this analysis crisp, clear, and professional.
 
---
 
## Common AI Slop to Remove
 
### Overly Verbose Intros
**Slop:** "In today's data-driven landscape, it's crucial that we leverage our analytical capabilities to drive actionable insights..."
 
**Clear:** "This analysis shows..."
 
---
 
### Hedging Language
**Slop:** "It appears that there may potentially be some indication that..."
 
**Clear:** "The data shows..."
 
---
 
### Unnecessary Lists
**Slop:** 
"There are several important factors to consider:
1. First and foremost...
2. Additionally...
3. Furthermore...
4. Moreover..."
 
**Clear:** "Three key factors: [1], [2], [3]"
 
---
 
### Empty Phrases
**Slop:** 
- "At the end of the day..."
- "It's worth noting that..."
- "As mentioned previously..."
- "Moving forward..."
- "In conclusion..."
 
**Clear:** Just state the fact.
 
---
 
### Over-Qualification
**Slop:** "Based on our comprehensive analysis of the available data, taking into account various factors and considerations..."
 
**Clear:** "Analysis shows..."
 
---
 
### Filler Words
**Slop:** "Basically," "Actually," "Essentially," "Literally," "Simply put"
 
**Clear:** Remove them.
 
---
 
## Deslop Checklist
 
**Remove:**
- [ ] Verbose introductions
- [ ] Hedging language (may, might, could, perhaps)
- [ ] Unnecessary qualifiers (very, really, quite)
- [ ] Redundant phrases
- [ ] Marketing speak
- [ ] Buzzwords without meaning
- [ ] Lists longer than 5 items
- [ ] Obvious statements
 
**Keep:**
- [ ] Specific numbers
- [ ] Clear insights
- [ ] Action items
- [ ] Necessary caveats
- [ ] Supporting evidence
- [ ] Direct language
 
---
 
## Before/After Example
 
**Before (Slop):**

Based on our comprehensive analysis of customer engagement patterns across multiple touchpoints and channels, leveraging advanced analytical methodologies and taking into account various factors including but not limited to user behavior, session duration, and conversion metrics, it appears that there may potentially be some indication that our affiliate marketing channel could possibly be outperforming our paid search initiatives in terms of overall return on investment, when we consider the total customer lifetime value relative to the customer acquisition costs across the board.


**After (Deslopped):**

Affiliates have 2x better ROI than paid search (210).


---

## Output

**Original Version:**
[Paste original analysis]

**Deslopped Version:**
[Cleaned version - direct, clear, value-dense]

**Words Removed:** [Count]
**Improvement:** [% reduction in word count]

---

## Final Check

Does this analysis:
- [ ] Get to the point quickly?
- [ ] Use specific language?
- [ ] Avoid marketing speak?
- [ ] Sound like a human wrote it?
- [ ] Add value in every sentence?
- [ ] Remove any sentence = lose information?

If yes to all, you're done.
If no to any, keep desloping.

Specialized Commands for Eden

/experiment_results

# Analyze A/B Test Results
 
You are analyzing experiment results to determine statistical significance and business impact.
 
**Experiment:** [Name]
**Variant A (Control):** [Description]
**Variant B (Treatment):** [Description]
**Primary Metric:** [Metric]
**Duration:** [Date range]
 
---
 
## Analysis Framework
 
### 1. Experiment Validity
 
**Sample Size:**
- Control: [N users]
- Treatment: [N users]
- Minimum required: [N users for 80% power]
- Adequate? [YES/NO]
 
**Randomization:**
- [ ] Users randomly assigned
- [ ] No sampling bias detected
- [ ] Groups are comparable at baseline
 
**Duration:**
- Start: [Date]
- End: [Date]
- Days run: [Count]
- Sufficient for weekly cycles? [YES/NO]
 
---
 
### 2. Primary Metric Analysis
 
**Results:**
- Control: [Value] (95% CI: [range])
- Treatment: [Value] (95% CI: [range])
- Absolute difference: [Value]
- Relative difference: [%]
- P-value: [Value]
- Statistical significance: [YES/NO at α=0.05]
 
**Interpretation:**
[Plain English explanation of results]
 
---
 
### 3. Secondary Metrics
 
**Guardrail Metrics** (Should NOT regress)
- Metric 1: [Result - PASS/FAIL]
- Metric 2: [Result - PASS/FAIL]
 
**Diagnostic Metrics** (Explain why)
- Metric 1: [Result + interpretation]
- Metric 2: [Result + interpretation]
 
---
 
### 4. Segment Analysis
 
**By User Type:**
- New users: [Result]
- Returning users: [Result]
- Power users: [Result]
 
**By Device:**
- Mobile: [Result]
- Desktop: [Result]
 
**Key Segments:**
[Any segments where results differ significantly]
 
---
 
### 5. Business Impact
 
**If we ship this:**
 
**Revenue Impact:**
- Per user: [$ change]
- Annualized: [$ total]
- Confidence level: [High/Med/Low]
 
**User Experience:**
- Positive: [What improved]
- Negative: [What got worse]
- Net: [Overall UX impact]
 
---
 
### 6. Recommendation
 
**Decision:** SHIP | DON'T SHIP | ITERATE | RUN LONGER
 
**Rationale:**
[Why this recommendation]
 
**If SHIP:**
- Rollout plan: [Gradual/Full]
- Monitor metrics: [List]
- Rollback criteria: [When to revert]
 
**If DON'T SHIP:**
- Why: [Explanation]
- Next steps: [Alternative approach]
 
**If ITERATE:**
- What to change: [Specific improvements]
- What to test next: [Follow-up experiment]
 
**If RUN LONGER:**
- Why: [Insufficient data/time]
- Run until: [Date]
- Decision criteria: [What needs to happen]
 
---
 
## Output Deliverable
 
Create one-slide summary:
- TLDR: [Result + decision]
- Primary metric: [Big number + significance]
- Business impact: [$ or % improvement]
- Recommendation: [What to do]
- Risks: [What to watch]

/channel_attribution

# Channel Performance & Attribution Analysis
 
Compare marketing channel performance to inform budget allocation.
 
**Channels:** [List channels to compare]
**Goal:** [What decision does this inform?]
**Time Period:** [Date range]
 
---
 
## Attribution Model
 
**Which model?**
- [ ] First-touch (credit to first interaction)
- [ ] Last-touch (credit to final interaction)
- [ ] Multi-touch linear (equal credit)
- [ ] Multi-touch time-decay (recency weighted)
- [ ] Custom model: [Description]
 
**Lookback Window:** [X days]
 
---
 
## Channel Comparison Framework
 
**For each channel, calculate:**
 
### Acquisition Metrics
- Users acquired: [Count]
- Cost per acquisition (CPA): [$]
- Acquisition share: [%]
 
### Engagement Metrics
- Session duration: [Avg minutes]
- Pages per session: [Avg]
- Bounce rate: [%]
 
### Conversion Metrics
- Conversion rate: [%]
- Revenue per user: [$]
- AOV: [$]
 
### Retention Metrics
- Day 30 retention: [%]
- Repeat purchase rate: [%]
- Churn rate: [%]
 
### Lifetime Value
- 30-day LTV: [$]
- 90-day LTV: [$]
- Projected lifetime LTV: [$]
 
### Efficiency Metrics
- LTV:CAC ratio: [X:1]
- Payback period: [Days]
- ROI: [%]
 
---
 
## Ranking
 
**By LTV:CAC:**
1. [Channel]: [Ratio]
2. [Channel]: [Ratio]
3. [Channel]: [Ratio]
 
**By Payback Speed:**
1. [Channel]: [Days]
2. [Channel]: [Days]
3. [Channel]: [Days]
 
**By Scale:**
1. [Channel]: [User volume]
2. [Channel]: [User volume]
3. [Channel]: [User volume]
 
---
 
## Budget Allocation Recommendation
 
**Current Allocation:**
[Chart showing current spend %]
 
**Recommended Allocation:**
[Chart showing proposed spend %]
 
**Changes:**
- Increase [Channel] from X% to Y% (+$Z)
- Decrease [Channel] from X% to Y% (-$Z)
 
**Expected Impact:**
- Total LTV: +$X
- Payback period: -Y days
- Risk: [What could go wrong]
 
---
 
## Caveats
 
**Data Quality:**
- [ ] Attribution window may miss long journeys
- [ ] iOS14 limits tracking
- [ ] New channels lack long-term data
 
**Business Context:**
- [ ] Brand value not captured in direct attribution
- [ ] Some channels are experimental
- [ ] Seasonality may affect results
 
---
 
## Deliverable
 
Create comparison slide:
- [Chart]: LTV:CAC by channel
- [Chart]: Payback curves
- [Table]: Full metrics comparison
- [Recommendation]: Proposed budget changes

How to Install in Cursor

  1. Create .cursorrules file in your project root
  2. Copy each command into separate sections
  3. Reference commands using /command_name

Example .cursorrules:

You are a senior data analyst helping execute data analyses.

When the user types /intake, follow the Quick Data Request Capture workflow.
When the user types /discover, follow the Problem Discovery & Scoping workflow.
[etc.]

[Paste full command text for each]

Last Updated: January 20, 2026 Created by: Robert Tseng Based on: Zevi Arnovitz’s AI-enabled PM workflow Status: Living library - add new commands as you discover patterns