CTA Data Operations Q1 2026 — Statement of Work (Milestone-Based)
Date: December 22, 2025
Version: 2.0
Client: Consumer Technology Association (CTA)
Author: Brainforge AI
1. Overview
This Statement of Work defines Brainforge’s continued engagement to scale and expand CTA’s modern DataOps platform throughout Q1 2026. Building on the foundation established in 2025 (dbt staging models, Snowflake infrastructure, and initial marts), this SOW focuses on expanding data coverage, building business-ready analytics datasets, and enabling self-service analytics across the organization.
New in v2.0: This SOW uses milestone-based delivery with story point estimation and sprint-based pricing, replacing the previous hourly estimate model.
2. Objectives
- Expand Data Coverage: Complete staging models for all priority data sources and establish pipelines for new data sources
- Build Business-Ready Marts: Create dimensional models and fact tables that enable self-service analytics for key business entities
- Enable Team Productivity: Onboard CTA team members (Kyle, Kai, and others) to dbt and Snowflake, enabling them to build and maintain models independently
- Improve Data Accessibility: Establish role-based access control, documentation, and training to make data discoverable and usable
- Support CES 2026: Deliver analytics-ready datasets and reporting capabilities for CES event operations and post-event analysis
- Establish Operational Excellence: Complete CI/CD pipelines, testing frameworks, and monitoring to ensure reliable data operations
3. Scope of Work
3.1 In-Scope
Data Source Expansion
- Complete Remembers staging models for all remaining modules (exhibit, speaker, award, app, etc.)
- Build staging models for Salesforce Marketing Cloud (SFMC) data
- Build staging models for Salesforce CRM data (post-CES, Feb+)
- Ingest historical S3 archive data (~300-400 tables from legacy SQL Server for entity resolution)
- Establish data pipelines for new sources as identified (e.g., Polytomic event data, Formstack data, scanner data)
- Create monthly backup automation for Remembers data (drop and clone process)
- Migrate webhooks database data to proper Snowflake structures
- Katherine’s SFTP Workflow Migration: Convert daily CES invite Python/Postgres workflow to dbt (8-10 flat files → transformations → 5 views → FTP upload to Marketing Cloud)
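The monthly "drop and clone" backup noted above could be sketched with Snowflake zero-copy cloning driven by a scheduled task. This is an illustrative sketch only; the database, warehouse, and task names are placeholders, not CTA's actual objects:

```sql
-- Illustrative sketch of a monthly "drop and clone" backup for Remembers data.
-- REMEMBERS, REMEMBERS_BACKUP, and ETL_WH are placeholder names.
CREATE TASK IF NOT EXISTS remembers_monthly_backup
  WAREHOUSE = ETL_WH
  SCHEDULE = 'USING CRON 0 2 1 * * UTC'  -- 02:00 UTC on the 1st of each month
AS
BEGIN
  DROP DATABASE IF EXISTS REMEMBERS_BACKUP;
  CREATE DATABASE REMEMBERS_BACKUP CLONE REMEMBERS;  -- zero-copy clone
END;

ALTER TASK remembers_monthly_backup RESUME;  -- tasks are created suspended
```

Because cloning is zero-copy, the backup consumes additional storage only as the source and clone diverge.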
Marts Layer Development
- Build dimensional models (dim_member, dim_organization, dim_event, dim_country, etc.)
- Create fact tables (fct_registrations, fct_purchases, fct_email_engagement, etc.)
- Develop business reports:
- Active membership report (replaces full-day manual Excel process for membership team)
- Member engagement report
- Event performance reports
- Build CES-specific analytics datasets:
- Registration funnels (track 29+ min avg registration time)
- Badge scan analytics (lead retrieval + foot traffic)
- Session scanner data (session attendance patterns)
- Exhibitor ROI analysis
- Entity Resolution Implementation: Build DataOps ID as canonical identifier across systems, starting with CES vendor integration
- Create intermediate models for complex business logic and reusable transformations
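One possible shape for seeding the DataOps ID described above is a deterministic match on a normalized key, with fuzzier matching layered on later. This is a sketch under stated assumptions only; table and column names (`all_person_records`, `source_system`) are hypothetical, and email is used as the match key purely for illustration:

```sql
-- Illustrative only: seed a canonical DataOps ID by deterministic email match.
-- Table and column names are placeholders, not CTA's actual schema.
CREATE OR REPLACE TABLE dataops_id_map AS
SELECT
    LOWER(TRIM(email))                AS match_key,       -- normalized match key
    UUID_STRING()                     AS dataops_id,      -- canonical identifier
    ARRAY_AGG(DISTINCT source_system) AS source_systems   -- provenance per entity
FROM all_person_records
GROUP BY 1;
```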
Infrastructure & Operations
- ETL Platform Finalization: Complete Polytomic evaluation and setup, or establish alternative approach for P0 data sources (SFMC, Merits FTP, etc.)
- Orchestration Implementation: Evaluate and implement Snowflake native task orchestration (preferred) vs GitHub Actions for dbt execution
- Complete GitHub Actions CI/CD pipeline for dbt (automated testing on PRs)
- Set up Snowflake CLI integration for automated repo refresh on merges
- Implement comprehensive dbt testing framework (data quality tests, schema tests, custom tests)
- Establish Snowflake role-based access control (RBAC) with functional roles
- Set up Snowflake warehouses for different workload types (ETL, Transform, BI/Reporting)
- Create monitoring and alerting for pipeline health
- Document Snowflake grants and access patterns
- FTP Integration: Establish S3 → Snowflake → dbt → FTP workflow for Marketing Cloud data delivery
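The PR-triggered dbt testing item above could look roughly like the following GitHub Actions workflow. The workflow name, `ci` target, and secret names are illustrative assumptions, not the repository's actual configuration:

```yaml
# Illustrative sketch only — workflow, target, and secret names are placeholders.
name: dbt-ci
on:
  pull_request:

jobs:
  dbt-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake
      - run: dbt deps
      # Run and test all models against the CI target
      - run: dbt build --target ci
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```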
Team Enablement
- Onboard Kyle to dbt and Snowflake (training, access setup, workflow documentation)
- Onboard Kai (new BI analyst) to Snowflake and reporting workflows
- Record dbt/Snowflake onboarding sessions for future team members and “citizen data engineers”
- Cursor Demo: Provide training on Cursor for Katherine and Jay (both expressed interest)
- Create team training materials and documentation
- Establish code review process and best practices
- Enable team members to build marts from intermediate models
- Support “citizen data engineers” (Anna P/K/R, Chris Deathloff, Quinn, JC) with safe environment to learn
Documentation & Knowledge Management
- Complete schema.yml documentation for all models
- Create data dictionary and business glossary
- Document naming conventions and project structure
- Establish data lineage documentation
- Create runbooks for common operations
- Integrate Snowflake catalog with Glean (exploration and implementation)
3.2 Out-of-Scope
- Okta authentication optimization (separate discovery workstream - identified as causing 80% of customer support requests)
- Shopify digital asset store evaluation (separate discovery workstream - authentication loop and download failures)
- CES Registration Security Fix (DataOps ID vendor integration for lead retrieval - Katherine managing separately)
- Full BI tool implementation (deferred to Q2 - Power BI available for interim use)
- Long-term managed services beyond Q1 2026
- Custom application development outside of data workflows
- Data source migrations or system replacements
- Marketing content creation or business strategy
- Email deliverability fixes (DMARC/DKIM/SPF configuration)
4. Milestone-Based Delivery
Milestone 1: Data Foundation Complete
Target Date: End of Sprint 2 (January 31, 2026)
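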
Story Points: 55
Deliverables:
- ✅ All Remembers staging models complete (accounting, app, award, crm, crm_v2, exhibit, purchase, shopping, speaker)
- ✅ SFMC staging models with documentation (sends, opens, clicks, bounces, unsubscribes, jobs, lists)
- ✅ Historical S3 archive ingestion strategy defined and initial load started (~300-400 tables)
- ✅ Monthly backup automation script operational
- ✅ Webhooks data migrated to proper Snowflake structures
- ✅ ETL platform decision finalized (Polytomic or alternative)
Acceptance Criteria:
- All staging models pass dbt tests (unique, not_null, relationships)
- Documentation complete in schema.yml for all staging models
- Monthly backup automation runs successfully without manual intervention
- Data quality reports show <5% error rate across staging models
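The `unique`, `not_null`, and `relationships` tests named above are declared in dbt's `schema.yml`. A minimal fragment might look like the following; the model and column names are illustrative, not CTA's actual models:

```yaml
# Illustrative schema.yml fragment — model and column names are placeholders.
version: 2
models:
  - name: stg_remembers__members
    description: "One row per Remembers member record."
    columns:
      - name: member_id
        description: "Primary key."
        tests:
          - unique
          - not_null
      - name: organization_id
        tests:
          - relationships:
              to: ref('stg_remembers__organizations')
              field: organization_id
```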
Milestone 2: CES Analytics Ready
Target Date: End of Sprint 3 (February 14, 2026)
Target Date Note: Post-CES (CES 2026 is January 7-10)
Story Points: 65
Deliverables:
- ✅ Katherine’s SFTP workflow migrated to dbt (daily CES invite process automation)
- ✅ Salesforce CRM staging models operational (post-CES, Feb+)
- ✅ CES-specific data pipelines: Merits FTP, session scanners, lead retrieval data
- ✅ Entity resolution framework: DataOps ID implementation started
- ✅ Initial dimensional models: dim_member, dim_organization, dim_event, dim_country
- ✅ CES analytics datasets operational:
- Registration funnels (track 29+ min registration time)
- Badge scan analytics (lead retrieval + foot traffic)
- Session scanner data (attendance patterns)
- Exhibitor ROI analysis
Acceptance Criteria:
- Katherine’s SFTP workflow runs daily without manual intervention
- CES analytics datasets available for post-event analysis by Feb 10
- Entity resolution matches 80%+ of records across historical data
- Dimensional models pass all dbt tests and have complete documentation
Milestone 3: Marts & Reports Production-Ready
Target Date: End of Sprint 5 (March 14, 2026)
Story Points: 70
Deliverables:
- ✅ Fact tables operational: fct_registrations, fct_purchases, fct_email_engagement, fct_lead_scans, fct_session_attendance
- ✅ Business reports delivered:
- Active membership report (6-column spreadsheet replacement - “P Negative-1” priority)
- Member engagement report
- Event performance reports
- ✅ Intermediate models for complex transformations and reusable business logic
- ✅ Comprehensive dbt testing framework (data quality tests, schema tests, custom tests)
- ✅ GitHub Actions CI/CD pipeline operational (automated testing on PRs)
- ✅ Snowflake CLI integration for automated repo refresh on merges
Acceptance Criteria:
- Active membership report replaces manual Excel process (validated by membership team)
- All fact tables have 100% test coverage
- CI/CD pipeline shows 95%+ success rate on PR merges
- Business reports match legacy report outputs (within 2% variance for validation)
Milestone 4: Infrastructure & Automation Complete
Target Date: End of Sprint 6 (March 28, 2026)
Story Points: 50
Deliverables:
- ✅ Orchestration implementation complete (Snowflake native tasks or GitHub Actions)
- ✅ Snowflake RBAC implementation with functional roles (data analyst, data engineer, read-only, etc.)
- ✅ Snowflake warehouse configuration optimized (ETL, Transform, BI/Reporting workloads)
- ✅ FTP integration operational for Marketing Cloud data delivery workflow
- ✅ Monitoring and alerting setup for pipeline health
- ✅ Grants documentation and access patterns documented
- ✅ Historical S3 archive ingestion complete (~300-400 tables)
Acceptance Criteria:
- All dbt models run on automated schedule without manual triggers
- RBAC tested and validated by Jay Heavner (IT)
- Monitoring alerts trigger successfully for pipeline failures (tested)
- FTP integration delivers files to Marketing Cloud daily
Milestone 5: Team Enablement & Knowledge Transfer
Target Date: End of Sprint 6 (March 28, 2026)
Story Points: 35
Deliverables:
- ✅ Kyle onboarding complete (access, training, first models built independently)
- ✅ Kai onboarding complete (Snowflake access, reporting workflows)
- ✅ Cursor training delivered for Katherine and Jay
- ✅ Recorded onboarding sessions for future team members and “citizen data engineers”
- ✅ Training materials and documentation (dbt, Snowflake, best practices)
- ✅ Code review process documentation
- ✅ Best practices guide for building marts and maintaining models
Acceptance Criteria:
- Kyle builds and deploys at least 2 models independently
- Kai can run and modify reports in Snowflake without assistance
- Training recordings available in shared drive
- Code review process documented and used for at least 5 PRs
Milestone 6: Documentation & Self-Service Analytics
Target Date: End of Q1 (March 31, 2026)
Story Points: 30
Deliverables:
- ✅ Complete schema.yml documentation for all models
- ✅ Data dictionary and business glossary published
- ✅ Naming conventions and project structure documentation
- ✅ Data lineage documentation
- ✅ Operational runbooks (common operations, troubleshooting, maintenance)
- ✅ Glean integration assessment and implementation plan (if feasible)
Acceptance Criteria:
- 100% of dbt models have schema.yml documentation
- Data dictionary accessible to all CTA team members
- Runbooks successfully used by Kyle/Kai to resolve at least 2 issues independently
- Glean integration assessment delivered with go/no-go recommendation
5. Requirements & Inputs
Access & Permissions
- Continued access to Snowflake, dbt Cloud, and GitHub repository
- Access to new data sources as they are identified (Polytomic, Formstack, etc.)
- Administrative permissions for Snowflake RBAC setup
- Access to Glean for integration exploration
Documentation
- Data source documentation and API specifications
- Business requirements for marts and reports
- Existing data dictionaries and business glossaries
- Historical data quality issues and known data problems
Stakeholder Availability
- Katherine Bayless (Data Operations): Bi-weekly strategic alignment, ad-hoc questions
- Kyle (Data Analyst): Weekly onboarding sessions, model review, requirements gathering
- Kai (Data Analyst): Training sessions, requirements for member engagement reports
- Jay Heavner (IT): RBAC setup coordination, infrastructure approvals
- Other CTA team members: Ad-hoc requirements gathering, testing, feedback
Data & Systems
- Access to Remembers data (via Snowflake Share) - ✅ Active
- Access to SFMC data (API key available in AWS Secrets Manager)
- Access to Salesforce CRM data (post-CES, Feb+, currently in “do not disturb” mode)
- Historical S3 archive access (CTA-DataOps-Archive bucket with ~300-400 tables from legacy SQL Server)
- Historical CES scan data (S3 bucket access - session scanner files “too big to import manually”)
- Formstack/webhooks data for migration - ✅ Working (webhook → S3 → Snowflake)
- Merits registration data (FTP access, flat files only - no APIs)
- EventPoint data (good APIs available, post-CES priority)
- Polytomic connector availability (or alternative ETL approach if Polytomic doesn’t proceed)
6. Project Timeline
Q1 2026 Sprint Schedule (6 Sprints × 2 Weeks)
| Sprint | Dates | Focus | Milestones |
|---|---|---|---|
| Sprint 1 | Jan 6-17 | Foundation | Remembers staging, SFMC staging kickoff |
| Sprint 2 | Jan 20-31 | Data expansion | M1: Data Foundation Complete |
| Sprint 3 | Feb 3-14 | CES analytics | M2: CES Analytics Ready |
| Sprint 4 | Feb 17-28 | Marts development | Fact tables, business reports |
| Sprint 5 | Mar 3-14 | Reports & CI/CD | M3: Marts & Reports Production-Ready |
| Sprint 6 | Mar 17-28 | Infrastructure & training | M4, M5, M6: Complete |
Total Duration: 12 weeks (Q1 2026)
7. Assumptions
- CTA stakeholders (Katherine, Kyle, Kai, Jay) are available for scheduled meetings and ad-hoc questions
- Data sources remain accessible and stable throughout Q1
- Remembers contract and data sharing agreement continues as expected
- New data sources (Polytomic, Formstack) can be accessed within the Q1 timeline
- CTA team members can dedicate time for training and onboarding
- Business requirements for marts and reports can be gathered within Q1
- No major organizational changes that would impact data operations
- Snowflake capacity and performance remain adequate for growing data volumes
- GitHub repository access and permissions remain stable
8. Risks
| Risk | Impact | Mitigation |
|---|---|---|
| Data source access delays | Medium | Identify access requirements early; establish backup plans for critical sources; prioritize sources with existing access; Salesforce CRM blocked until post-CES (Feb+) |
| Polytomic evaluation delays | Medium | Maintain alternative approach (AWS Glue, manual pipelines) if Polytomic doesn’t proceed; have backup ETL plan ready |
| Team onboarding slower than expected | Medium | Start onboarding early; provide recorded sessions; create self-service documentation; schedule regular check-ins; Katherine’s “calculated neglect” approach |
| Business requirements unclear | Medium | Schedule regular requirements gathering sessions; start with high-priority use cases (“P Negative-1” = Remembers); iterate based on feedback |
| CES timeline constraints | High | No major changes before CES (Jan 2026); Katherine’s SFTP workflow must keep running; post-CES window for changes (Feb+) |
| Data quality issues discovered | Medium | Build comprehensive testing framework; document known issues (e.g., Remembers dashboard showing 1,658 vs 1,100 actual members); create data quality reports; prioritize critical fixes |
| Entity resolution complexity | Medium | Start with historical data profiling; build DataOps ID as canonical identifier; test with one vendor before expanding |
| Infrastructure capacity constraints | Low | Monitor Snowflake usage; optimize queries; scale warehouses as needed; plan for growth; Katherine wants to be “poster child for leveraging all Snowflake features” |
| Scope creep from other workstreams | Medium | Maintain clear boundaries with Okta/Shopify discovery work; Katherine’s security fix separate; prioritize Q1 deliverables; document dependencies |
| Finance scrutiny on costs | Medium | Demonstrate ROI early; leverage frugal approach (“squeeze value out of tools we have”); consolidate into Snowflake where possible; use AWS Marketplace for procurement ease |
9. Communication Plan
Regular Meetings:
- Weekly Technical Working Session (60 min): Kyle + Brainforge team (dbt development, model review)
- Bi-weekly Strategic Alignment (30 min): Katherine Bayless + Uttam Kumaran
- Monthly Infrastructure Review (30 min): Jay Heavner + Brainforge team (RBAC, infrastructure)
- Sprint Planning (60 min, every 2 weeks): Review previous sprint, plan next sprint
- Sprint Retrospective (30 min, every 2 weeks): Lessons learned, process improvements
- Ad-hoc sessions: As needed for requirements gathering, training, and issue resolution
Async Communication:
- Dedicated Slack channel for daily questions and updates
- GitHub repository for all code, documentation, and issues
- Shared workspace (Google Drive or equivalent) for documentation and deliverables
- End-of-sprint status email summarizing progress, blockers, and upcoming milestones
Escalation Path:
- Technical blockers escalated to Ashwini Sharma or Uttam Kumaran
- Business requirements questions escalated to Katherine Bayless
- Infrastructure/access issues escalated to Jay Heavner
- Timeline risks communicated within 24 hours of identification
10. Pricing
Pricing for this SOW uses sprint-based delivery with story point estimation, replacing the previous hourly rate structure.
Pricing Model
Sprint-Based Pricing:
- Price per Sprint: $18,000 per 2-week sprint
- Total Sprints: 6 sprints (Q1 2026)
- Total Q1 Engagement: $108,000
Story Point Distribution:
| Milestone | Story Points | Sprints | Cost |
|---|---|---|---|
| M1: Data Foundation Complete | 55 | Sprint 1-2 | $36,000 |
| M2: CES Analytics Ready | 65 | Sprint 3 | $18,000 |
| M3: Marts & Reports Production-Ready | 70 | Sprint 4-5 | $36,000 |
| M4: Infrastructure & Automation | 50 | Sprint 6 | $9,000 |
| M5: Team Enablement | 35 | Sprint 6 | $6,000 |
| M6: Documentation & Self-Service | 30 | Sprint 6 | $3,000 |
| Total | 305 points | 6 sprints | $108,000 |
What’s Included per Sprint:
- Development work toward milestone deliverables
- Code review and quality assurance
- Testing and validation
- Documentation (inline and schema.yml)
- Sprint planning, retrospective, and status updates
- Ad-hoc stakeholder meetings and support
- Slack/email communication and support
Billing Structure:
- Invoiced at the end of each sprint (every 2 weeks)
- Payment due within Net 30 days
- Each invoice includes:
- Sprint summary (deliverables completed, story points delivered)
- Progress toward milestones
- Upcoming sprint plan
- Any blockers or risks
Velocity Tracking:
- Target velocity: ~50 story points per sprint
- Velocity will be tracked sprint-over-sprint to improve planning
- If velocity falls below 80% of target, root cause analysis and mitigation plan provided
Pricing Rationale:
- Based on 2-person team (Ashwini Sharma + Uttam Kumaran oversight) at ~80 hours per sprint combined
- Averages to ~$225/hour blended rate (consistent with existing Brainforge CTA Agreement dated November 12, 2025)
- Sprint-based pricing provides predictability and aligns with milestone delivery
- Story points allow flexibility for complexity without hourly tracking overhead
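The arithmetic behind the sprint pricing can be sanity-checked in a few lines; all figures below are taken directly from the tables and rationale above:

```python
# Sanity check of the pricing figures quoted in Section 10.
PRICE_PER_SPRINT = 18_000
SPRINTS = 6
HOURS_PER_SPRINT = 80  # 2-person team, combined hours per sprint

total = PRICE_PER_SPRINT * SPRINTS
blended_rate = PRICE_PER_SPRINT / HOURS_PER_SPRINT
story_points = {"M1": 55, "M2": 65, "M3": 70, "M4": 50, "M5": 35, "M6": 30}

print(total)                       # 108000 — matches the quoted Q1 total
print(blended_rate)                # 225.0  — matches the ~$225/hour blended rate
print(sum(story_points.values()))  # 305    — matches the story point total
```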
11. Payment Terms
- Payment Schedule: End of each sprint (bi-weekly)
- Payment Terms: Net 30
- Invoice Timing: Invoices submitted within 3 business days of sprint completion
- Payment Method: As per existing Brainforge CTA Agreement dated November 12, 2025
Milestone-Based Bonuses (Optional):
- If CTA wishes to incentivize early completion or exceptional quality, milestone bonuses can be discussed
- Proposed structure: 10% bonus for milestones delivered ahead of schedule with full acceptance criteria met
12. Open Questions
These questions will be addressed during Q1 execution:
- Polytomic Integration: Has Polytomic responded with evaluation results? If not proceeding, what is the backup ETL approach for SFMC, Merits FTP, EventPoint?
- Orchestration Decision: Snowflake native task orchestration vs GitHub Actions? Need to finalize for Katherine’s SFTP workflow.
- Historical S3 Data Priority: Which of the ~300-400 tables should be ingested first for entity resolution? What is the timeline?
- CES Scanner Data: Session scanner files “too big to import manually” - what is file size/format? When will data be available?
- BI Tool Selection: Deferred to Q2 per Katherine. Power BI available for interim use. When to revisit Sigma evaluation?
- Glean Integration: What are the technical requirements for Snowflake catalog integration? Priority post-Q1 based on Dec discussions.
- DataOps ID Scope: Is Katherine’s vendor integration proceeding? How does this affect broader entity resolution work?
- Citizen Data Engineers: Which additional team members (Anna P/K/R, Chris Deathloff, Quinn, JC, Tom Moschello) should be onboarded in Q1?
- CES Post-Event Analysis: What are the specific reporting needs for post-CES analysis (Feb-Mar)? Lead gen, foot traffic, exhibitor ROI?
- Salesforce CRM Timeline: When post-CES (Feb? March?) will CRM data access be available? What are integration priorities?
13. Sign-Off
By signing below, both parties acknowledge understanding and agreement with the scope, deliverables, timeline, and approach outlined in this Statement of Work.
Client (CTA):
Name: ___________________________
Title: ___________________________
Date: ___________________________
Signature: _______________________
Brainforge AI:
Name: Uttam Kumaran
Title: Managing Lead
Date: December 22, 2025
Signature: _______________________
Appendix A: Key Stakeholders
| Name | Role | Involvement |
|---|---|---|
| Katherine Bayless | Senior Director, Data Engineering | Strategic sponsor, bi-weekly alignment, executive liaison, decision maker, “P Negative-1” prioritization |
| Kyle | Data Analyst (Market Research → Data) | Primary model builder, weekly working sessions, requirements gathering, R/Python skills, eager to learn dbt |
| Kai | Business Intelligence Analyst (New) | Member engagement reports, training sessions, requirements gathering, strong governance background |
| Jay Heavner | VP of IT | 20+ years tenure, infrastructure approvals, RBAC coordination, access management, Okta/systems owner |
| Ashwini Sharma | Data Engineer (Brainforge) | Primary technical lead, dbt development, infrastructure setup, orchestration implementation |
| Samuel Roberts | Full-Stack Engineer (Brainforge) | Okta/Shopify discovery support, integration work |
| Uttam Kumaran | Managing Lead (Brainforge) | Client POC, strategic alignment, project oversight |
Additional Stakeholders (Ad-hoc):
- Anna P, Anna K, Anna R (Membership): Requirements for active member report
- Chris Deathloff (Market Research): Analytics needs, data literacy champion
- Quinn, JC (Business Intelligence): Survey analysis, reporting needs
- Tom Moschello (ExpoCAD): Show floor data, data quality
Appendix B: Story Point Estimation Guide
Story Point Scale (Fibonacci):
- 1 point: Trivial (< 2 hours) - Simple config change, minor documentation update
- 2 points: Small (2-4 hours) - Single staging model, simple test addition
- 3 points: Medium-Small (4-8 hours) - Complex staging model with multiple sources
- 5 points: Medium (1-2 days) - Dimensional model with business logic, RBAC setup
- 8 points: Large (2-3 days) - Complex fact table with multiple joins, CI/CD pipeline setup
- 13 points: Extra Large (1 week) - Entity resolution framework, full orchestration implementation
- 21 points: Epic (2+ weeks) - Historical S3 archive ingestion (300-400 tables), Katherine’s SFTP workflow migration
Estimation Principles:
- Points reflect complexity, uncertainty, and effort (not just time)
- Team velocity will be calibrated after Sprint 1
- Complex work (high uncertainty) gets higher points even if time estimate is lower
- Story points are relative to each other, not absolute time
Appendix C: Success Metrics
Brainforge will track these metrics during Q1 to measure progress and success:
Sprint Velocity Metrics:
- Story points completed per sprint (target: ~50 points/sprint)
- Sprint completion rate (target: 100% of planned story points)
- Milestone delivery on-time percentage (target: 100%)
Data Coverage Metrics:
- Number of staging models completed
- Number of data sources integrated
- Percentage of Remembers modules with staging models
- Data freshness (time from source to staging)
Marts Development Metrics:
- Number of dimensional models delivered
- Number of fact tables delivered
- Number of business reports operational
- Model test coverage percentage (target: 100%)
Team Enablement Metrics:
- Number of team members onboarded
- Number of models built by CTA team members
- Time to first model (for new team members)
- Code review participation rate
Operational Excellence Metrics:
- CI/CD pipeline success rate (target: 95%+)
- Average time for PR review and merge
- Number of data quality issues caught by tests
- Pipeline uptime percentage (target: 99%+)
Business Impact Metrics:
- Number of dashboards/reports using marts data
- Number of active Snowflake users
- Number of questions answered by data team
- CES reporting capabilities delivered
Appendix D: Milestone Acceptance Process
Acceptance Process for Each Milestone:
1. Completion Notification
- Brainforge notifies Katherine Bayless when milestone deliverables are complete
- Notification includes summary of deliverables, tests passed, documentation links
2. Stakeholder Review (3-5 business days)
- CTA team (Katherine, Kyle, Kai, Jay as relevant) reviews deliverables
- Testing and validation of key functionality
- Review of documentation and code quality
3. Acceptance or Feedback
- Accepted: Milestone marked complete, sprint invoice processed
- Feedback Required: Minor issues documented, resolved within 2-3 business days
- Rejected: Major issues requiring rework (rare, escalated to leadership)
4. Sign-Off
- Katherine Bayless provides written sign-off (email or Slack) for each milestone
- Sign-off triggers invoice for that sprint
- Any punch list items documented and prioritized for next sprint
Acceptance Criteria Reminders:
- All acceptance criteria listed under each milestone must be met
- Tests must pass (dbt test, custom data quality tests)
- Documentation must be complete (schema.yml, README, runbooks)
- Stakeholder validation (Katherine, Kyle, Kai, Jay as relevant)
Appendix E: Comparison to v1.0 (Hourly Model)
Key Changes from v1.0 to v2.0:
| Aspect | v1.0 (Hourly) | v2.0 (Sprint-Based) |
|---|---|---|
| Pricing Model | Hourly estimates (410-530 hours) | Fixed sprint price ($18k per 2-week sprint) |
| Estimation | Hour ranges per phase | Story points per milestone |
| Billing Frequency | Monthly | Bi-weekly (end of sprint) |
| Flexibility | Scope changes require hour re-estimation | Story points and velocity adjust naturally |
| Predictability | Hour tracking required, variance possible | Fixed sprint cost, predictable total |
| Overhead | Time tracking and hourly reporting | Focus on deliverables and outcomes |
Total Cost Comparison:
- v1.0: 410-530 hours × $225/hour = $92,250-$119,250
- v2.0: 6 sprints × $18,000/sprint = $108,000 (fixed)
Benefits of v2.0 Approach:
- Fixed cost per sprint provides budget predictability
- Story points allow better complexity estimation without time tracking overhead
- Milestone-based delivery aligns with business value
- Sprint cadence enables faster feedback and course correction
- Reduces administrative overhead for both teams