Agentic Data Workflows

Purpose: Transform traditional data consulting into AI-powered data product development

Based on: Zevi Arnovitz’s AI-enabled PM workflow (from Lenny’s Podcast)

Status: Ready to test ✅


📚 What’s in This Folder

1. framework.md (1,579 lines)

The complete philosophy and methodology

Read this to understand:

  • Why traditional “AI writes SQL” doesn’t work at scale
  • How structured planning prevents bad AI outputs
  • The 7 core principles translated from PM work to data work
  • Complete workflow: Request → Discovery → Plan → Execute → Review → Document → Learn
  • How to compound knowledge over time

Key sections:

  • Planning over eager execution
  • Slash commands as compound knowledge
  • Learning faster > analyzing faster
  • Multi-model peer review
  • Slop is a people problem
  • 10x learner mindset
  • AI makes juniors more valuable

When to read: Before implementing anything (30-45 min read)


2. slash_commands.md (800+ lines)

Ready-to-use prompts for Cursor/Claude

Contains 8 core commands:

  1. /intake - Quick request capture (2 min)
  2. /discover - Problem exploration (20-30 min)
  3. /plan_analysis - Analysis blueprint (20 min)
  4. /execute_analysis - Code generation with best practices
  5. /review - Self-review checklist
  6. /peer_review - Multi-model reconciliation
  7. /document_insights - Stakeholder deliverables
  8. /learning_moment - Knowledge capture

Plus three Eden-specific commands:

  • /experiment_results - A/B test analysis
  • /channel_attribution - Marketing performance
  • /deslop_analysis - Remove AI verbosity

How to use:

  • Copy-paste into Cursor chat when needed
  • Or create .cursorrules file for persistent commands
  • Update commands when you find mistakes (compound learning)

When to use: During every analysis
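
If you take the .cursorrules route instead of copy-pasting, the file sits at the project root and is read in every chat. A minimal sketch (the command wording here is illustrative; the full definitions live in slash_commands.md):

```text
# .cursorrules (project root)

When I type /discover, do not write any code. Ask clarifying
questions about the business problem: the decision being made,
metric definitions, time period, segments, and known data caveats.
Wait for my answers before proposing an approach.

When I type /plan_analysis, produce an analysis blueprint only:
data sources, joins, metric definitions, validation checks, and
expected output format. Do not write SQL until I approve the plan.
```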


3. quickstart.md (400+ lines)

Get started TODAY - not someday

Hour-by-hour implementation:

  • Hour 1: Setup (ChatGPT “Data CTO” + Cursor commands)
  • Hour 2: First analysis (complete workflow)
  • Hour 3-4: Second analysis (should be 2x faster)

Includes:

  • Step-by-step setup instructions
  • Real example (affiliate vs paid search CVR)
  • Common first-day issues + solutions
  • Success metrics to track
  • Troubleshooting guide

When to read: Right before starting (15 min), then follow along


🚀 Quick Start (5 Minutes)

Option 1: Immediate Use (No Setup)

In any AI chat (Cursor, Claude, ChatGPT):

  1. Open slash_commands.md
  2. Copy the /discover command text
  3. Paste into your chat
  4. Add your analysis context
  5. AI guides you through problem exploration

Example:

[Paste /discover command]

Context: Mitesh wants to compare affiliate vs paid search 
conversion rates to decide Q2 budget allocation.

Option 2: Full Setup (30 min)

Follow quickstart.md Hour 1:

  1. Create ChatGPT “Data CTO” project
  2. Install slash commands in Cursor
  3. Set up multi-model review workflow
  4. Test with sample analysis

Then: Run first real analysis (Hour 2 of quickstart)
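
For step 1, a starting point for the “Data CTO” project instructions might look like this (wording is illustrative, not taken from quickstart.md; adapt it to your stack):

```text
You are my Data CTO. Your job is to slow me down before I write code.
For every analysis request:
1. Ask clarifying questions first: the decision being made, the exact
   metric definition, time period, and segments. Do not write SQL yet.
2. Once I answer, propose an analysis plan (sources, joins, metrics,
   validation checks) and wait for my approval.
3. Review my results critically: challenge joins, metric mismatches,
   and missing baselines before I share anything with stakeholders.
```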


💡 The Core Insight

Traditional AI Use (Doesn’t Scale):

❌ User: "Analyze customer churn"
❌ AI: *immediately writes 500-line SQL with assumptions*
❌ Result: Wrong metric, bad joins, 3 hours of rework

Agentic Workflow (Scalable):

✅ User: /discover - churn analysis
✅ AI: "Let me understand the problem first: [10 questions]"
✅ User: [Answers, clarifies, validates]
✅ AI: /plan_analysis - creates blueprint
✅ User: Reviews with stakeholder, confirms approach
✅ AI: /execute_analysis - builds correct solution
✅ User: /peer_review with 3 models
✅ AI: /document_insights for stakeholder
✅ User: /learning_moment - captures knowledge
✅ Result: Aligned, validated, maintainable, learned from

Key difference: planning before execution, validation at every step, and knowledge that compounds over time


🎯 Success Metrics

Week 1:

  • ChatGPT “Data CTO” project created
  • 5+ analyses completed using workflow
  • Slash commands refined (updated 3+ times)
  • Knowledge base started

Week 4:

  • 20+ analyses completed
  • 2x faster than old workflow
  • Team asking “how are you shipping so fast?”
  • Multi-model review preventing bugs

Week 12:

  • 50+ analyses completed
  • 5x faster than old workflow
  • Handling senior-level strategic work
  • Teaching others the workflow

🔧 How This Workflow Works

The 7-Step Process:

1. /intake            → Capture request (don't lose flow)
2. /discover          → Understand the business problem (not just the data request)
3. /plan_analysis     → Create blueprint (align before building)
4. /execute_analysis  → Build with best practices
5. /review            → Self-review + multi-model /peer_review
6. /document_insights → Stakeholder-ready deliverable
7. /learning_moment   → Extract reusable knowledge

Time investment:

  • First analysis: ~3 hours
  • After 10 analyses: ~1 hour
  • After 50 analyses: ~30 min
  • Quality: Consistently high (because validated at every step)

📊 Real Example: Affiliate CVR Analysis

Traditional approach (2 hours, high rework risk):

  1. Write SQL based on vague request (30 min)
  2. Realize wrong metric (30 min wasted)
  3. Rewrite query (30 min)
  4. Find join bug after sharing with stakeholder (embarrassing)
  5. Fix and reshare (30 min)
  6. Total: 2+ hours, lost stakeholder trust

Agentic workflow (90 min, no rework):

  1. /intake - Capture request (2 min)
  2. /discover - Clarify metric definition, time period, segments (15 min)
  3. Stakeholder check-in - Confirm approach (5 min)
  4. /plan_analysis - Create blueprint (10 min)
  5. /execute_analysis - Build query (20 min)
  6. /review + /peer_review - Catch bugs before sharing (20 min)
  7. /document_insights - Create 3-slide deck (10 min)
  8. /learning_moment - Capture for next time (5 min)
  9. Total: 90 min, zero rework, stakeholder trust increased

🛠️ Implementation Paths

Path 1: Gradual (Recommended)

Week 1: Use ChatGPT “Data CTO” for all analyses

  • Get comfortable with discovery phase
  • Learn to frame problems correctly
  • Build intuition for good analysis plans

Week 2-3: Move to Cursor with slash commands

  • Start using /discover and /plan_analysis commands
  • Keep execution simple (basic SQL)
  • Focus on process, not speed

Week 4+: Full agentic workflow

  • Add multi-model review
  • Use all commands
  • Measure speed improvements
  • Start teaching others

Path 2: Immediate (For Urgent Work)

Today:

  1. Read quickstart.md (15 min)
  2. Copy /discover command (2 min)
  3. Use on next analysis (30 min discovery)
  4. Compare to your normal approach

This Week:

  • Use /discover on every analysis
  • Add other commands as needed
  • Refine based on what breaks

🧠 The Learning Multiplier

Traditional analyst:

  • 1 analysis/week → 1 learning/week → 52 learnings/year

AI-enabled analyst (with /learning_moment):

  • 5 analyses/week → 5 learnings/week → 260 learnings/year
  • Each learning updates slash commands
  • Mistakes never repeat
  • Knowledge compounds exponentially
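
The arithmetic above is simple enough to sanity-check. A toy calculation, using the document's illustrative weekly figures (not measured data):

```python
# Toy model of the learning multiplier described above.
# Inputs are the document's illustrative figures, not measurements.

WEEKS_PER_YEAR = 52

def learnings_per_year(analyses_per_week: int, learnings_per_analysis: int = 1) -> int:
    """Learnings captured in a year at a steady weekly analysis rate."""
    return analyses_per_week * learnings_per_analysis * WEEKS_PER_YEAR

traditional = learnings_per_year(analyses_per_week=1)  # 52
ai_enabled = learnings_per_year(analyses_per_week=5)   # 260

print(traditional, ai_enabled, ai_enabled / traditional)
```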

Result after 6 months:

  • 5x more analyses completed
  • 5x deeper expertise
  • Battle-tested workflow
  • Reusable for all future clients

⚠️ Common Mistakes to Avoid

1. Skipping Discovery

  • “I know what they want” → You don’t, and 30 min upfront saves 3 hours of rework

2. Trusting AI Blindly

  • Always validate results against baselines
  • Use multi-model peer review
  • YOU own the output

3. Not Documenting Learnings

  • Every analysis is a learning opportunity
  • /learning_moment captures it
  • Update slash commands to compound knowledge

4. Over-Automating

  • Keep manual checkpoints
  • Stay close to the data
  • Automation should enable, not replace thinking

📖 Reading Order

If you’re new:

  1. This README (you’re here) ✅
  2. quickstart.md → Hour 1 Setup
  3. quickstart.md → Hour 2 First Analysis
  4. Reference slash_commands.md as needed
  5. Read framework.md for deep understanding

If you’re experienced with AI:

  1. slash_commands.md → Copy commands
  2. quickstart.md → Hour 2 Example
  3. Start using on real work
  4. Refine based on results

If you’re skeptical:

  1. quickstart.md → Just read Hour 2 example
  2. Try /discover on one analysis
  3. Compare to your normal approach
  4. Decide if you want to continue

🔄 Feedback Loop

After every analysis, ask:

  • What went well? (Keep doing)
  • What went poorly? (Change process)
  • What did I learn? (Update knowledge base)
  • What would I do differently? (Update slash commands)

Weekly review:

  • How many analyses completed?
  • How much time saved?
  • What patterns are emerging?
  • What should be automated?

Monthly review:

  • Am I faster than last month?
  • Is quality improving?
  • Am I learning new skills?
  • Should I adjust the workflow?

🎯 Next Steps

Right Now:

  1. Open quickstart.md
  2. Read Hour 1 (setup)
  3. Create your “Data CTO” in ChatGPT
  4. Pick a real analysis to test on

Today:

  1. Complete Hour 1 setup
  2. Run one analysis using full workflow
  3. Document what you learned

This Week:

  1. Complete 5-10 analyses
  2. Refine slash commands
  3. Build Eden-specific templates
  4. Share learnings with team

This Month:

  1. Master the workflow (should be automatic)
  2. Measure business impact
  3. Train others
  4. Package for other clients

💬 Questions?

“Is this overkill for small analyses?”

  • Learn the full workflow first
  • After 20+ analyses, you’ll know what to skip
  • Shortcuts come after mastery

“How do I know if it’s working?”

  • Track time per analysis, rework rate, and stakeholder satisfaction
  • Expect roughly a 2x speed improvement by Week 4
  • Expect roughly 5x by Month 3 (in line with the Success Metrics above)
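
A small log of per-analysis durations is enough to compute these numbers. A minimal sketch (the sample entries are made up for illustration):

```python
# Minimal tracker for the metrics above: time per analysis, rework rate,
# and speed vs. your first analysis. Sample entries are illustrative.

from dataclasses import dataclass

@dataclass
class AnalysisRecord:
    name: str
    minutes: float
    rework: bool  # did the deliverable need fixing after sharing?

log = [
    AnalysisRecord("affiliate CVR", 180, rework=True),  # first attempt
    AnalysisRecord("churn cohorts", 120, rework=False),
    AnalysisRecord("channel mix", 90, rework=False),
]

avg_minutes = sum(r.minutes for r in log) / len(log)
rework_rate = sum(r.rework for r in log) / len(log)

# Rough speed improvement, using your first analysis as the baseline.
improvement = (log[0].minutes - log[-1].minutes) / log[0].minutes

print(f"avg: {avg_minutes:.0f} min, rework: {rework_rate:.0%}, "
      f"faster than baseline: {improvement:.0%}")
```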

“What if AI makes a mistake?”

  • That’s your mistake, not AI’s (you own the output)
  • Use multi-model peer review
  • Always validate against baselines
  • Ask: “What in my prompt caused this?”
  • Update slash command to prevent it

“Can I customize this?”

  • Absolutely! These are templates
  • Update based on your workflow
  • Add Eden-specific commands
  • Share improvements with team

📞 Getting Help

If stuck:

  1. Ask your “Data CTO” (ChatGPT project)
  2. Use /learning_moment to understand why
  3. Check quickstart.md common issues
  4. Test on simpler example first

If AI gives bad output:

  1. Not AI’s fault (it’s your prompt)
  2. Use /deslop_analysis to clean
  3. Be more specific in instructions
  4. Give more context

Last Updated: January 20, 2026
Status: Ready for production use
Next Review: After 10 analyses completed