Tech-Enabled Agency Platform: Forecasting, Creative Automation & Engineering Velocity
Prepared by: BrainforgeAI (Uttam Kumaran, Sam Roberts)
Date: November 17, 2025
Audience: Zac Fromson, Bobby Palmieri
Executive Summary
Lilo Social will shift from paying $7,000 per brand for fragmented SaaS tools to owning a unified, AI-enabled tech stack that scales across 60+ client accounts without per-seat economics. By automating manual forecasting workflows (currently spreadsheet-based), implementing cohort-driven revenue prediction, and accelerating creative brief generation, the agency gains 15–20 hours per week of operator capacity while delivering faster, data-informed campaign decisions to clients.
This positions Lilo Social to compete with larger tech-enabled agencies, defend margins against rising tool costs, and win enterprise brands that demand real-time performance visibility and operational transparency.
Near-term outcomes:
- Eliminate Orca Analytics and similar per-brand SaaS costs (currently $300–600/brand/month)
- Reduce creative research cycles from hours to minutes
- Enable non-technical team members to ship features via cursor/vibe-code workflows
- Own codebase infrastructure and reduce vendor lock-in
We will operate under these constraints:
- Build for internal velocity, not external productization: Focus on agency operator workflows, not selling software to other agencies
- Maximize AI-assisted development: Leverage cursor, vibe-code, and rapid prototyping to ship weekly instead of monthly
- Reduce vendor sprawl: Consolidate onto Lilo-owned AWS infrastructure with full transparency and cost control
- Maintain BFCM stability: No production disruptions during peak season; parallel development tracks
Technical Discovery Questions (For Today’s Meeting)
From Sam Roberts, Lead Architect:
Architecture & Ownership
- Platform Demo: Walk through existing Stitch platform in action (auth flows, API integrations, report generation)
- Ownership Clarity: Stitch code shows copyright Lilo Social LLC, but BRNZ-platform is copyright 2025 Brainz. Are we decoupling completely from BRNZ, or maintaining integration?
- MCP Servers: Confirm that MCP servers (Klaviyo, Shopify, Meta, Google Ads) are forked/tweaked versions with custom auth layers
- Hosting & Deployment: Current GitLab setup, AWS configuration, deployment architecture, CI/CD pipelines
- Access Gaps: What’s blocking (e.g., Google MCP setup requiring a homepage URL, API keys, vendor handoff dependencies)?
Code Quality & Extensibility
- Repository Structure: How modular is the codebase? Can we easily add features via cursor without deep architecture knowledge?
- Tech Stack: Angular frontend, Python MCPs, Node.js backend? MongoDB + RabbitMQ messaging? Confirm dependencies
- Documentation: Current state of API docs, data flow diagrams, deployment runbooks
- User Management: Why is auth still tied to vendor system? What’s required to decouple?
Integration Realities
- Data Quality: Historical data completeness for cohort modeling (Shopify orders, Klaviyo campaigns, Meta/Google spend)
- API Limits: Known rate limits or pagination issues with Shopify, Klaviyo, Meta, Google
- Foreplay API: Access confirmed? Rate limits? Bulk ad retrieval feasibility?
Pilot Playbook (8-Week Timeline)
Phase 1: Platform Stabilization & Code Ownership (Weeks 1–2)
By Driving Actions…
- Achieve full code ownership and infrastructure independence
- Enable non-engineers to contribute features via cursor workflows
- Eliminate vendor hosting dependency and reduce monthly burn
What do we want to deliver?
- Technical Audit & Documentation: Complete codebase review, architecture diagram, dependency map, API endpoint catalog
- AWS Migration: Transfer hosting from vendor AWS to Lilo-owned instance with IAM roles, secrets management, and cost monitoring
- User Management Decoupling: Replace vendor auth system with self-contained authentication (consider Auth0, Clerk, or custom JWT)
- Bug Fixes: Resolve broken connectors (Meta, Google, Klaviyo token issues, Slack update failures)
- Codebase Refactor: Modularize repo structure to enable cursor-driven feature development by non-engineers
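To make the user-management decoupling concrete, here is a minimal sketch of self-contained signed session tokens using only the Python standard library. This illustrates the concept only; the production build would more likely adopt Auth0, Clerk, or PyJWT as listed above, and the secret shown is a placeholder to be loaded from AWS Secrets Manager.

```python
# Sketch: self-contained signed tokens (stdlib only) to illustrate
# replacing vendor-hosted auth. SECRET is a placeholder; in production,
# load it from AWS Secrets Manager and rotate regularly.
import base64, hashlib, hmac, json, time

SECRET = b"rotate-me-via-aws-secrets-manager"  # placeholder value

def issue_token(user_id: str, role: str, ttl_s: int = 3600) -> str:
    payload = json.dumps({"sub": user_id, "role": role,
                          "exp": int(time.time()) + ttl_s}).encode()
    body = base64.urlsafe_b64encode(payload)
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims
```

The point of the sketch: once token issuance and verification live in Lilo's own codebase, the vendor dashboard is no longer in the critical path for user management.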
Why do we want this?
- Reduce monthly vendor hosting fees (currently invoiced retroactively)
- Enable weekly feature velocity instead of monthly vendor cycles
- Eliminate blocking dependencies on external team for simple changes
Measured by:
- Lilo team deploys one feature update independently via cursor by the end of phase 1
- AWS infrastructure fully migrated with 100% uptime during transition
- User authentication no longer dependent on vendor dashboard
- All critical bugs resolved and connectors operational
Deliverables:
- Architecture Decision Records (ADRs) documenting tech stack choices
- Repository README with cursor workflow guide for non-engineers
- AWS infrastructure runbook with deployment instructions
- Fixed authentication system with user management UI
Phase 2: Revenue Forecasting Engine (Weeks 3–6)
Context
Lilo Social currently runs 12-month revenue forecasts manually in Google Sheets, requiring significant analyst time per brand. Tools like Orca Analytics provide automation but create unsustainable per-brand subscription costs at scale (typically $300–600/brand/month). By building a custom forecasting engine that ingests live data from Shopify, Meta, Google Ads, and Klaviyo, Lilo can automate cohort-based revenue predictions, understand seasonal patterns, and adjust goals mid-year without spreadsheet gymnastics.
The goal: Replicate Orca’s core functionality while maintaining full control over assumptions, formulas, and integration depth.
By Driving Actions…
- Automate 12-month revenue forecasting with live data ingestion
- Predict returning customer revenue using cohort-based models
- Enable daily pacing alerts for variance detection (forecast vs. actuals)
What do we want to deliver?
P0: 12-Month Forecast Builder
- Input fields (user-editable, blue highlighted):
  - Total revenue target (annual or remaining year)
  - Customer acquisition cost (CAC) by channel
  - Ad spend budget by month
  - Seasonality method: Equal split OR % based on prior year data
  - New customer acquisition targets by month
- Output views: Monthly → Quarterly → Annual rollup
- Editable mid-year: Adjust any assumption and regenerate forecast in real-time
- Export: CSV/Excel download for client presentations
P0: Returning Customer Revenue Forecasting (Critical Component)
This is the primary technical challenge; Lilo needs the strongest support here.
- Python cohort analysis engine based on Shopify order history:
  - Ingest 2–3 years of customer order data by cohort (first purchase date)
  - Calculate retention rates by elapsed months (e.g., Month 0, 1, 2…24+)
  - Predict future returning customer counts per cohort using historical decay curves
  - Handle subscription brands vs. one-time purchase brands differently
- Separate new vs. returning revenue in all forecasts
- Account for seasonal spikes (November/December holiday cohort behavior)
- Build on existing Python code from prior agency (provided as reference)
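The core of the cohort engine is straightforward to sketch: group customers by first-purchase month, then measure what fraction of each cohort reorders in each elapsed month. Column/tuple names below are assumptions; the real build would ingest from the Shopify Orders API and extend these curves into forward projections.

```python
# Sketch: cohort retention curves from Shopify-style order rows. The decay
# curves produced here are what the engine would extrapolate to predict
# returning-customer counts. Input shape is an assumption for illustration.
from collections import defaultdict
from datetime import date

def month_index(first: date, d: date) -> int:
    """Elapsed whole months between a cohort's first purchase and an order."""
    return (d.year - first.year) * 12 + (d.month - first.month)

def retention_curves(orders):
    """orders: list of (customer_id, order_date) tuples."""
    first_purchase = {}
    for cust, d in sorted(orders, key=lambda o: o[1]):
        first_purchase.setdefault(cust, d)
    cohort_members = defaultdict(set)   # cohort month -> customer ids
    active = defaultdict(set)           # (cohort, elapsed months) -> ids
    for cust, d in orders:
        f = first_purchase[cust]
        cohort = (f.year, f.month)
        cohort_members[cohort].add(cust)
        active[(cohort, month_index(f, d))].add(cust)
    curves = {}
    for cohort, members in cohort_members.items():
        curves[cohort] = {m: len(ids) / len(members)
                          for (c, m), ids in active.items() if c == cohort}
    return curves
```

Subscription brands, seasonal spikes, and young cohorts would each need adjustments on top of this baseline, per the edge cases in the acceptance criteria.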
P1: Daily Pacing Dashboard (Multi-Tab Interface)
Pulls from the 12-month forecast and shows day-by-day tracking for the current month.
Tab 1: Business Overview
- Top summary: “Through yesterday, did $26K, $5K behind”
- Metrics tracked: Total revenue, new customer revenue, existing customer revenue, ad spend, MER
- Visual graphs showing pacing trend lines (forecast vs. actual)
- Day-by-day breakdown table showing variance % and $ for each metric
Tab 2: Meta Channel
- Daily ad spend, purchases, CAC target vs. actual
- Pacing indicators: “Yesterday spent $442 more, 6% ahead”
Tab 3: Google Channel
- Same structure as Meta tab
- Separate tracking for Google Ads performance
Editable Future Days:
- For days yet to come: Adjust planned spend (e.g., “Moving forward, want to spend $800/day”)
- Auto-recalculate how changes impact forecast for rest of month
Alerts:
- Slack notifications for >10% variance events on key metrics
- Flag channels pacing significantly ahead or behind
P1: Data Ingestion Pipeline
- Automated daily pulls from Shopify (orders, revenue), Meta (spend, ROAS), Google Ads (spend, conversions), Klaviyo (email revenue)
- Incremental updates to avoid re-processing full history
- Error handling and data quality checks
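Incremental updates typically hinge on a stored high-water mark per source, so each daily run fetches only rows updated since the last sync. A sketch, where `fetch_since` stands in for the real per-source client (hypothetical; e.g., Shopify's `updated_at_min` filter):

```python
# Sketch: incremental daily pull using a persisted high-water mark, so runs
# avoid re-processing full history. State file path and row shape are
# illustrative assumptions.
import json
from pathlib import Path

STATE = Path("sync_state.json")

def load_cursor(source: str) -> str:
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    return state.get(source, "1970-01-01T00:00:00Z")

def save_cursor(source: str, cursor: str) -> None:
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    state[source] = cursor
    STATE.write_text(json.dumps(state))

def sync(source: str, fetch_since) -> list[dict]:
    cursor = load_cursor(source)
    rows = fetch_since(cursor)  # e.g., Shopify orders updated after cursor
    if rows:
        save_cursor(source, max(r["updated_at"] for r in rows))
    return rows
```

Error handling and data-quality checks (row counts, revenue reconciliation against the Shopify dashboard) would wrap each `sync` call.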
Why do we want this?
- Eliminate per-brand forecasting tool subscriptions (avoiding recurring costs that scale linearly with brand count)
- Reduce manual forecast prep time by 80%+
- Enable real-time decision-making: “Should we increase Meta spend this week?”
- Surface underperforming channels before end-of-month surprises
Measured by:
- Forecast accuracy within 10% of actuals on 3 pilot brands by Week 6
- Daily pacing dashboard live with <24hr data latency
- Lilo team adjusts forecast assumptions and regenerates in <2 minutes
- Cohort model correctly predicts returning customer revenue for November holiday cohort
Deliverables:
- 12-month forecast builder UI (modeled on the shared ChatGPT prototype)
- Python cohort model script (based on provided Colab notebook)
- Daily pacing dashboard with channel-level variance tracking
- Slack alert integration for pacing notifications
- Data ingestion pipeline with error logging
Acceptance Criteria:
- Data Trust: Shopify revenue matches dashboard actuals within 2%
- Cohort Model Accuracy:
  - Returning customer predictions within 20% mean absolute error (based on historical cohort backtest)
  - Model correctly differentiates subscription vs. one-time purchase brand behaviors
  - Handles edge cases: new brands (<12mo history), seasonal brands (holiday spikes)
- Forecast Flexibility: Team can switch between “equal split” and “historical %” seasonality methods and see instant recalc
- Daily Tracking: Yesterday’s actuals visible by 9am ET; variance calculations accurate
- Usability: Non-technical team member can adjust forecast assumptions without engineering support
- Performance: Dashboard loads in <3 seconds with 12 months of data
Phase 3: Creative Brief Automation (Weeks 5–7, Parallel Track)
Context
Lilo Social’s creative team currently spends 3–5 hours per brief manually searching ad libraries (Foreplay, Meta Ad Library), curating 8–10 inspiration examples, and writing context for designers. For an agency managing 60 brands with weekly creative refreshes, this creates a throughput bottleneck. By integrating Foreplay’s API and using AI to generate brief copy, Lilo can reduce brief creation time from hours to minutes while maintaining creative quality.
By Driving Actions…
- Automate ad inspiration curation from Foreplay API
- Generate contextual brief copy using AI (OpenAI or Anthropic Claude)
- Enable one-click PDF export for designer handoff
What do we want to deliver?
P0: Foreplay API Integration
- Connect to Foreplay API with authentication
- Filter ads by: Industry, format (image/video/carousel), date range, performance signals
- Display grid view of filtered ads with thumbnails and metadata
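Because Foreplay's API details are still to be confirmed in discovery (see Risks), the integration should sit behind a thin adapter layer so the provider can be swapped for Motion, MagicBrief, or SavedAds without touching the brief builder. A sketch; the Foreplay endpoint and field names are assumptions pending API docs:

```python
# Sketch: provider-adapter layer for ad inspiration sources. The Ad fields
# and the Foreplay call shape are assumptions to be confirmed in discovery.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Ad:
    id: str
    brand: str
    format: str         # "image" | "video" | "carousel"
    thumbnail_url: str

class AdProvider(Protocol):
    def search(self, industry: str, format: str, limit: int = 50) -> list[Ad]: ...

class ForeplayProvider:
    def __init__(self, api_key: str):
        self.api_key = api_key

    def search(self, industry: str, format: str, limit: int = 50) -> list[Ad]:
        # Hypothetical call shape; confirm the real endpoint, params, and
        # rate limits during the discovery call.
        raise NotImplementedError("pending Foreplay API access")

def curate(provider: AdProvider, industry: str, format: str, n: int = 10) -> list[Ad]:
    """Pull a filtered batch and keep the top n for the brief."""
    return provider.search(industry, format, limit=50)[:n]
```

The brief builder depends only on `AdProvider`, which is the mitigation named in the risk table if Foreplay becomes unreliable.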
P0: Brief Builder UI
- Select 8–10 ads from Foreplay search results
- Add text annotations below each ad (editable fields)
- AI-generated copy suggestions: “Write a brief description explaining why this ad is effective”
- Override/edit AI-generated copy as needed
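The AI copy step reduces to prompt assembly plus a model call. A sketch with the LLM client abstracted behind a `complete` callable so either OpenAI or Anthropic can be plugged in; the prompt wording below is illustrative, not final:

```python
# Sketch: prompt assembly for AI-generated brief annotations. The `complete`
# parameter wraps whichever LLM client is chosen (OpenAI or Claude).

BRIEF_PROMPT = """You are a creative strategist at a DTC agency.
In 2-3 sentences, explain why this ad is effective and what the designer
should borrow from it.

Brand: {brand}
Ad format: {format}
Hook / headline: {hook}
Campaign goal: {goal}"""

def build_prompt(ad: dict, goal: str) -> str:
    return BRIEF_PROMPT.format(brand=ad["brand"], format=ad["format"],
                               hook=ad.get("hook", "n/a"), goal=goal)

def annotate(ad: dict, goal: str, complete) -> str:
    """Generate an editable annotation; `complete` is the LLM client wrapper."""
    return complete(build_prompt(ad, goal))
```

Keeping generation behind one function also makes the "override/edit" flow trivial: the UI simply pre-fills a text field with the returned draft.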
P0: PDF Export
- One-click “Download Brief” button
- PDF format includes: Brand name, brief title, date, selected ads with annotations
- Placeholder for brand guidelines (logo, color palette)
- Professional layout suitable for designer handoff
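One practical path to the PDF is rendering the brief as HTML first and converting with a library such as WeasyPrint or a headless browser (the converter choice is an open decision). A sketch of the HTML stage; layout fields mirror the spec above and styling is placeholder:

```python
# Sketch: render the brief as HTML ahead of PDF conversion. Fields match
# the export spec (brand, title, date, ads with annotations); the brand
# guidelines block is a placeholder as specified.
import html

def render_brief_html(brand: str, title: str, date: str, ads: list[dict]) -> str:
    blocks = "".join(
        f"<figure><img src='{html.escape(a['thumbnail_url'])}' width='300'>"
        f"<figcaption>{html.escape(a['annotation'])}</figcaption></figure>"
        for a in ads)
    return (f"<html><body><h1>{html.escape(brand)}: {html.escape(title)}</h1>"
            f"<p>{html.escape(date)}</p>"
            "<!-- placeholder: brand logo / color palette -->"
            f"{blocks}</body></html>")
```

An HTML intermediate also makes the P1 brief templates cheap: each template is just a different copy structure feeding the same renderer.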
P1: Brief Templates
- Save common brief formats (e.g., “BFCM Sale,” “Product Launch,” “Retention Campaign”)
- Pre-populate copy structure based on template type
Why do we want this?
- Reduce creative brief prep from 3–5 hours to <15 minutes
- Scale brief output: 10 briefs/week → 50 briefs/week without hiring
- Standardize brief quality across team members
- Free creative strategists to focus on high-value work (strategy, not curation)
Measured by:
- Generate 3 production-ready briefs during pilot week
- Team subjectively rates AI copy quality as “usable with minor edits”
- PDF format matches existing brief standards (designer approval)
- <5 minute end-to-end time from idea to PDF download
Deliverables:
- Foreplay API integration with search/filter UI
- AI-powered brief copy generation (GPT-4 or Claude)
- PDF rendering engine with brand template support
- Brief builder UI (based on the ChatGPT Canvas prototype)
Acceptance Criteria:
- Foreplay API Reliability: Successfully retrieve 100+ ads per search
- AI Copy Quality: 70%+ of generated descriptions require zero edits (team subjective assessment)
- PDF Output: Designers confirm format is production-ready without manual reformatting
- Speed: Brief creation time <15 minutes from blank slate to PDF export
Success Metrics
Platform Ownership & Velocity
- Code Independence: Lilo team ships 1 feature/week via cursor without BrainforgeAI support by Month 2
- Vendor Cost Reduction: Eliminate redundant hosting fees and reduce dependency on per-brand tool subscriptions
- Deployment Speed: Feature idea → production in <7 days (vs. current 30–60 day cycles)
Forecasting Impact
- Tool Cost Savings: Eliminate per-brand forecasting tool costs (Orca and similar tools at $300–600/brand/month)
- Analyst Time Savings: 15+ hrs/week freed from manual spreadsheet updates
- Forecast Accuracy: <20% error margin on revenue predictions across pilot brands
- Decision Velocity: Real-time pacing alerts reduce “discovery-to-action” lag from weeks to hours
Creative Automation Impact
- Brief Throughput: 3–5x increase in briefs produced per week
- Time Savings: 80%+ reduction in brief prep time (3–5 hrs → <15 min)
- Quality Maintenance: Designer satisfaction with brief format remains >80%
Overall Business Impact
- Client Retention: Faster turnaround on campaign requests improves NPS
- New Business: Tech stack becomes competitive differentiator in pitches
- Margin Defense: Avoid per-brand SaaS sprawl as agency scales to 100+ clients
Risks & Mitigations
| Risks | Mitigations |
|---|---|
| BFCM peak season instability | All production changes frozen Nov 20–Dec 5; development happens in staging only. Deploy forecasting post-peak. |
| Historical data quality gaps for cohort modeling | Validate 3 pilot brands’ Shopify data in Week 1. Use industry benchmarks as fallback if incomplete. Start with 6-month lookback, expand to 12-month as data improves. |
| Vendor handoff delays (code access, API keys, AWS credentials) | Begin with parallel AWS setup and new feature development (forecasting, creative tools). Backfill existing platform integration once vendor unblocks. |
| Foreplay API rate limits or deprecation | Confirm rate limits in discovery call. Design adapter layer to swap providers (Motion, MagicBrief, SavedAds) if Foreplay becomes unreliable. |
| Scope creep from “just one more feature” requests | Fixed 8-week sprint with locked deliverables. Maintain backlog for Phase 4 (post-pilot) prioritization. Weekly check-ins enforce scope discipline. |
| Team adoption resistance (“Why change what works?”) | Deliver Slack-native alerts and familiar UI patterns (Orca-like dashboards). Weekly demos with team feedback loops. Emphasize time savings, not complexity. |
| AWS migration causes downtime during transition | Blue-green deployment strategy. Maintain vendor hosting as fallback during cutover. Migrate off-peak hours with rollback plan. |
Operating Principles
- Weekly sprint velocity: Monday kickoff, Wednesday demo, Friday ship. No month-long “PM cycles.”
- Vibe-code friendly: Repository structure enables cursor-assisted development by non-engineers. Lilo team members should ship features themselves.
- No tool sprawl: Maximize existing stack (Shopify, Klaviyo, Meta, Google, Slack, AWS). Avoid new vendor contracts unless high ROI.
- Transparent pricing: Fixed monthly cost, no surprise invoices. 14-day cancellation policy.
Meetings & Cadence
- Kickoff (Week 1): Access checklist, repo handoff, AWS credentials, API keys
- Daily Internal Standups (BrainforgeAI team): Progress sync, blocker resolution
- Weekly Client Standups (1 hour): Demo working features, adjust priorities, gather feedback via Slack + Loom
- Bi-Weekly Tech Review: Sam Roberts + Lilo engineering lead (if applicable) to review architecture decisions
- Monthly Executive Review: KPIs, budget burn, next-phase roadmap
Team & Pricing
Brainforge typically staffs a 3-role pod:
- Strategist (Uttam): Main client POC, aligns outputs to operator objectives, builds roadmap for new impact areas
- Lead Architect (Sam): Codebase ownership, infrastructure design, enables cursor-friendly development workflows
- Engineer (TBD): Feature implementation, data pipeline development, bug fixes
Fixed/Retainer Models
TBD on these phase costs until after today’s meeting.
| Service | Fixed Fee / Monthly |
|---|---|
| Phase 1 Only (Stabilization) | TBD |
| Phases 1–3 (Full Build) | TBD |
| Retain & Iterate (Post-Launch) | TBD |
Overage Rate: TBD (will confirm after discussion; applies to Retain & Iterate only)
Hourly Rates (if preferred over fixed model)
| Labor Category | Level | Hourly Rate |
|---|---|---|
| Managing Lead / Strategist | Executive | $250/hour |
| Senior Engineer / Architect | Senior | $200/hour |
| Technical Project Manager | Mid-Level | $150/hour |
| Engineer | Mid-Level | $150/hour |
High-Value Levers (Configurable Add-Ons)
- Cursor Enablement Workshop: Train 2–3 Lilo team members to ship features independently ($1,500 value, included in Phases 1–3)
- Discount: 10% off monthly retain if commit to 6-month post-launch engagement
Billing & Payment Terms
- Minimum Billing Unit: 1 hour, billed in 0.25-hour increments thereafter
- Email/Phone Response (15 mins or less): Not Billed
- Invoicing: Bi-weekly or Monthly (Net 15 or Net 30 terms)
- Currency: All rates are in USD
- 14-Day Cancellation Policy for ongoing retainers
Case Studies
- DTC Brand: Implemented real-time, full-funnel visibility with 100% accurate LTV/CAC benchmarks
- DTC Brand: Eliminated 800+ redundant dashboards, saving 40+ hrs/month
- CPG Brand: Centralized fragmented feedback into one dashboard in <30 days
- Home Services Provider: Implemented AI observability, reducing issue resolution to <60 minutes
Appendix
The Brainforge Approach
Today’s agency operators face relentless pressure to scale faster, defend margins, and compete with larger, tech-enabled competitors. While many invest in SaaS tools, fragmented systems create data silos, manual workflows, and unsustainable per-seat costs.
At Brainforge, we recognize that buying more software isn’t the answer. Our approach focuses on building owned infrastructure that consolidates workflows, eliminates vendor lock-in, and enables rapid feature iteration through AI-assisted development.
Instead of adding another tool to your stack, we help you build a platform that grows with your agency—one that your team can extend, customize, and ship features on weekly cycles, not vendor roadmaps.
The ultimate goal: Move from vendor-dependent workflows to owned, operator-controlled systems that scale without marginal cost increases.
Stack May Include
- Data Warehouse: PostgreSQL (AWS RDS), BigQuery, or Snowflake
- ETL & Orchestration: Airbyte, Fivetran, dbt
- Business Intelligence: Metabase, Superset, Retool
- LLM/Chatbot/MCP Server: Anthropic Claude, OpenAI, custom MCP layers
Next Steps:
- Sign mutual NDA (Lilo to provide)
- Grant access: Git repo, AWS credentials, API keys (Shopify, Klaviyo, Meta, Google, Foreplay)
- Schedule Phase 1 kickoff: Week of Nov 25 or Dec 2 (post-BFCM)
- Review Sam’s technical discovery questions during today’s platform demo