Agentic Data Workflows
Purpose: Transform traditional data consulting into AI-powered data product development
Based on: Zevi Arnovitz’s AI-enabled PM workflow (from Lenny’s Podcast)
Status: Ready to test ✅
📚 What’s in This Folder
1. framework.md (1,579 lines)
The complete philosophy and methodology
Read this to understand:
- Why traditional “AI writes SQL” doesn’t work at scale
- How structured planning prevents bad AI outputs
- The 7 core principles translated from PM work to data work
- Complete workflow: Request → Discovery → Plan → Execute → Review → Document → Learn
- How to compound knowledge over time
Key sections:
- Planning over eager execution
- Slash commands as compound knowledge
- Learning faster > analyzing faster
- Multi-model peer review
- Slop is a people problem
- 10x learner mindset
- AI makes juniors more valuable
When to read: Before implementing anything (30-45 min read)
2. slash_commands.md (800+ lines)
Ready-to-use prompts for Cursor/Claude
Contains 8 core commands:
- `/intake` - Quick request capture (2 min)
- `/discover` - Problem exploration (20-30 min)
- `/plan_analysis` - Analysis blueprint (20 min)
- `/execute_analysis` - Code generation with best practices
- `/review` - Self-review checklist
- `/peer_review` - Multi-model reconciliation
- `/document_insights` - Stakeholder deliverables
- `/learning_moment` - Knowledge capture
Plus Eden-specific:
- `/experiment_results` - A/B test analysis
- `/channel_attribution` - Marketing performance
- `/deslop_analysis` - Remove AI verbosity
How to use:
- Copy-paste into Cursor chat when needed
- Or create a `.cursorrules` file for persistent commands
- Update commands when you find mistakes (compound learning)
When to use: During every analysis
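For the persistent option, a minimal `.cursorrules` sketch might look like the following. The rule text here is illustrative, not copied from slash_commands.md; adapt it to the actual command prompts in that file:

```
# .cursorrules (sketch — adapt to your own commands)
When the user types /discover, do not write any code yet.
Instead, ask clarifying questions about:
- the business decision this analysis supports
- the exact metric definition (numerator, denominator, time window)
- segments, filters, and comparison groups
Only proceed to /plan_analysis once the user confirms the answers.
```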
3. quickstart.md (400+ lines)
Get started TODAY - not someday
Hour-by-hour implementation:
- Hour 1: Setup (ChatGPT “Data CTO” + Cursor commands)
- Hour 2: First analysis (complete workflow)
- Hour 3-4: Second analysis (should be 2x faster)
Includes:
- Step-by-step setup instructions
- Real example (affiliate vs paid search CVR)
- Common first-day issues + solutions
- Success metrics to track
- Troubleshooting guide
When to read: Right before starting (15 min), then follow along
🚀 Quick Start (5 Minutes)
Option 1: Immediate Use (No Setup)
In any AI chat (Cursor, Claude, ChatGPT):
- Open `slash_commands.md`
- Copy the `/discover` command text
- Paste into your chat
- Add your analysis context
- AI guides you through problem exploration
Example:
[Paste /discover command]
Context: Mitesh wants to compare affiliate vs paid search
conversion rates to decide Q2 budget allocation.
Option 2: Full Setup (30 min)
Follow quickstart.md Hour 1:
- Create ChatGPT “Data CTO” project
- Install slash commands in Cursor
- Set up multi-model review workflow
- Test with sample analysis
Then: Run first real analysis (Hour 2 of quickstart)
💡 The Core Insight
Traditional AI Use (Doesn’t Scale):
❌ User: "Analyze customer churn"
❌ AI: *immediately writes 500-line SQL with assumptions*
❌ Result: Wrong metric, bad joins, 3 hours of rework
Agentic Workflow (Scalable):
✅ User: /discover - churn analysis
✅ AI: "Let me understand the problem first: [10 questions]"
✅ User: [Answers, clarifies, validates]
✅ AI: /plan_analysis - creates blueprint
✅ User: Reviews with stakeholder, confirms approach
✅ AI: /execute_analysis - builds correct solution
✅ User: /peer_review with 3 models
✅ AI: /document_insights for stakeholder
✅ User: /learning_moment - captures knowledge
✅ Result: Aligned, validated, maintainable, learned from
Key difference: Planning before execution, validation at every step, learning compounded
🎯 Success Metrics
Week 1:
- ChatGPT “Data CTO” project created
- 5+ analyses completed using workflow
- Slash commands refined (updated 3+ times)
- Knowledge base started
Week 4:
- 20+ analyses completed
- 2x faster than old workflow
- Team asking “how are you shipping so fast?”
- Multi-model review preventing bugs
Week 12:
- 50+ analyses completed
- 5x faster than old workflow
- Handling senior-level strategic work
- Teaching others the workflow
🔧 How This Workflow Works
The 7-Step Process:
1. /intake → Capture request (don't lose flow)
2. /discover → Understand the business problem (not just the data request)
3. /plan_analysis → Create blueprint (align before building)
4. /execute_analysis → Build with best practices
5. /review → Self-review + multi-model peer review
6. /document_insights → Stakeholder-ready deliverable
7. /learning_moment → Extract reusable knowledge
Time investment:
- First analysis: ~3 hours
- After 10 analyses: ~1 hour
- After 50 analyses: ~30 min
- Quality: Consistently high (because validated at every step)
📊 Real Example: Affiliate CVR Analysis
Traditional approach (2 hours, high rework risk):
- Write SQL based on vague request (30 min)
- Realize wrong metric (30 min wasted)
- Rewrite query (30 min)
- Find join bug after sharing with stakeholder (embarrassing)
- Fix and reshare (30 min)
- Total: 2+ hours, lost stakeholder trust
Agentic workflow (90 min, no rework):
- `/intake` - Capture request (2 min)
- `/discover` - Clarify metric definition, time period, segments (15 min)
- Stakeholder check-in - Confirm approach (5 min)
- `/plan_analysis` - Create blueprint (10 min)
- `/execute_analysis` - Build query (20 min)
- `/review` + `/peer_review` - Catch bugs before sharing (20 min)
- `/document_insights` - Create 3-slide deck (10 min)
- `/learning_moment` - Capture for next time (5 min)
- Total: 90 min, zero rework, stakeholder trust increased
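For the execution step, a minimal sketch of what the CVR comparison could look like in Python. The conversion counts below are hypothetical placeholders (in practice they would come from your warehouse query), and the two-proportion z-test is one reasonable choice of significance test, not the method prescribed by this workflow:

```python
from math import sqrt, erfc

def compare_cvr(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates with a two-sided two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided normal tail
    return p_a, p_b, z, p_value

# Hypothetical counts: affiliate vs paid search
p_aff, p_sem, z, p = compare_cvr(150, 5_000, 200, 8_000)
print(f"affiliate CVR={p_aff:.2%}, paid search CVR={p_sem:.2%}, z={z:.2f}, p={p:.3f}")
```

Even a sketch like this benefits from the `/peer_review` step: a second model can catch issues such as mismatched denominators before the numbers reach a stakeholder.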
🛠️ Implementation Paths
Path 1: Gradual (Recommended for Learning)
Week 1: Use ChatGPT “Data CTO” for all analyses
- Get comfortable with discovery phase
- Learn to frame problems correctly
- Build intuition for good analysis plans
Week 2-3: Move to Cursor with slash commands
- Start using `/discover` and `/plan_analysis` commands
- Keep execution simple (basic SQL)
- Focus on process, not speed
Week 4+: Full agentic workflow
- Add multi-model review
- Use all commands
- Measure speed improvements
- Start teaching others
Path 2: Immediate (For Urgent Work)
Today:
- Read quickstart.md (15 min)
- Copy the `/discover` command (2 min)
- Use it on your next analysis (30 min discovery)
- Compare to your normal approach
This Week:
- Use `/discover` on every analysis
- Add other commands as needed
- Refine based on what breaks
🧠 The Learning Multiplier
Traditional analyst:
- 1 analysis/week → 1 learning/week → 52 learnings/year
AI-enabled analyst (with /learning_moment):
- 5 analyses/week → 5 learnings/week → 260 learnings/year
- Each learning updates slash commands
- Mistakes never repeat
- Knowledge compounds exponentially
Result after 6 months:
- 5x more analyses completed
- 5x deeper expertise
- Battle-tested workflow
- Reusable for all future clients
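The arithmetic behind the multiplier is just throughput over a year, assuming one captured learning per analysis:

```python
WEEKS_PER_YEAR = 52

def learnings_per_year(analyses_per_week, learnings_per_analysis=1):
    """Yearly learnings, assuming each analysis ends with /learning_moment."""
    return analyses_per_week * learnings_per_analysis * WEEKS_PER_YEAR

traditional = learnings_per_year(1)   # 52
ai_enabled = learnings_per_year(5)    # 260
print(ai_enabled / traditional)       # 5.0
```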
⚠️ Common Mistakes to Avoid
1. Skipping Discovery
- “I know what they want” → You don’t, and 30 min upfront saves 3 hours rework
2. Trusting AI Blindly
- Always validate results against baselines
- Use multi-model peer review
- YOU own the output
3. Not Documenting Learnings
- Every analysis is a learning opportunity
- `/learning_moment` captures it
- Update slash commands to compound knowledge
4. Over-Automating
- Keep manual checkpoints
- Stay close to the data
- Automation should enable, not replace thinking
📖 Reading Order
If you’re new:
- This README (you’re here) ✅
- quickstart.md → Hour 1 Setup
- quickstart.md → Hour 2 First Analysis
- Reference slash_commands.md as needed
- Read framework.md for deep understanding
If you’re experienced with AI:
- slash_commands.md → Copy commands
- quickstart.md → Hour 2 Example
- Start using on real work
- Refine based on results
If you’re skeptical:
- quickstart.md → Just read Hour 2 example
- Try `/discover` on one analysis
- Compare to your normal approach
- Decide if you want to continue
🔄 Feedback Loop
After every analysis, ask:
- What went well? (Keep doing)
- What went poorly? (Change process)
- What did I learn? (Update knowledge base)
- What would I do differently? (Update slash commands)
Weekly review:
- How many analyses completed?
- How much time saved?
- What patterns emerging?
- What should be automated?
Monthly review:
- Am I faster than last month?
- Is quality improving?
- Am I learning new skills?
- Should I adjust the workflow?
🎯 Next Steps
Right Now:
- Open `quickstart.md`
- Read Hour 1 (setup)
- Create your “Data CTO” in ChatGPT
- Pick a real analysis to test on
Today:
- Complete Hour 1 setup
- Run one analysis using full workflow
- Document what you learned
This Week:
- Complete 5-10 analyses
- Refine slash commands
- Build Eden-specific templates
- Share learnings with team
This Month:
- Master the workflow (should be automatic)
- Measure business impact
- Train others
- Package for other clients
💬 Questions?
“Is this overkill for small analyses?”
- Learn the full workflow first
- After 20+ analyses, you’ll know what to skip
- Shortcuts come after mastery
“How do I know if it’s working?”
- Track: Time per analysis, rework rate, stakeholder satisfaction
- Should see 50% speed improvement by Week 4
- Should see 3x improvement by Month 3
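One lightweight way to track this is a running log of minutes per analysis; the durations below are illustrative, not measured results:

```python
from statistics import median

# Minutes per analysis, logged by week (illustrative numbers)
week_1 = [180, 160, 170]
week_4 = [90, 80, 100]

speedup = median(week_1) / median(week_4)
print(f"median week 1: {median(week_1)} min")
print(f"median week 4: {median(week_4)} min")
print(f"speedup: {speedup:.1f}x")  # roughly halving the time shows up as ~2x
```

Logging rework (analyses that had to be redone after sharing) alongside duration gives the rework rate from the same data.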
“What if AI makes a mistake?”
- That’s your mistake, not AI’s (you own the output)
- Use multi-model peer review
- Always validate against baselines
- Ask: “What in my prompt caused this?”
- Update slash command to prevent it
“Can I customize this?”
- Absolutely! These are templates
- Update based on your workflow
- Add Eden-specific commands
- Share improvements with team
📞 Getting Help
If stuck:
- Ask your “Data CTO” (ChatGPT project)
- Use `/learning_moment` to understand why
- Check quickstart.md common issues
- Test on simpler example first
If AI gives bad output:
- Not AI’s fault (it’s your prompt)
- Use `/deslop_analysis` to clean it up
- Be more specific in instructions
- Give more context
Last Updated: January 20, 2026
Status: Ready for production use
Next Review: After 10 analyses completed