Meeting Agenda: Lilo Social Project Kickoff

Date: Monday, December 2, 2025
Time: 1 hour
Attendees: Zac Fromson (Lilo Social), Uttam Kumaran (Brainforge), Surf (Senior Application Architect, Brainforge)


Meeting Objectives

  1. Officially kick off the Lilo Social tech platform project
  2. Deep dive into current platform architecture with Surf’s technical perspective
  3. Validate infrastructure and hosting decisions (AWS vs. alternatives)
  4. Assess team technical depth and identify key technical stakeholders
  5. Confirm access requirements and handoff timeline from previous vendor
  6. Align on Phase 1 priorities and immediate next steps

Demos/Walkthroughs Requested

  • Complete platform walkthrough for Surf’s benefit

    • Auth flows and user management (current vendor dependency)
    • Agent chat interface (Klaviyo, Shopify, Meta, Google MCPs)
    • Daily Slack report generation and visualizations
    • Current brand hub structure (60-brand setup)
    • What’s working well vs. what’s broken
  • Architecture deep dive

    • Frontend tech stack (Angular web app)
    • Backend services and API structure
    • Data layer (MongoDB database + RabbitMQ message queue)
    • MCP server architecture (Python services)
    • Deployment pipeline and CI/CD process
  • Repository structure tour

    • Walk through the 10 separate repos
    • Understand why current structure exists
    • Dependencies between repos
    • Build and deployment process for each
  • Current hosting setup

    • AWS architecture (services being used)
    • Why currently on vendor’s AWS account
    • Security and access control setup
    • Monitoring and logging infrastructure

Questions to Ask

Infrastructure & Hosting Strategy

  1. Are you committed to AWS or open to alternatives like Railway for hosting?

    • What’s driving the preference for AWS? (familiarity, enterprise requirements, existing setup?)
    • Have you evaluated Railway, Render, or other modern hosting platforms?
    • Cost considerations for AWS vs. alternatives
    • What’s your comfort level with DevOps and infrastructure management?
  2. Current AWS setup clarity:

    • Which AWS services are currently being used? (EC2, ECS, Lambda, RDS, etc.)
    • What’s the monthly AWS bill currently?
    • Are there any enterprise requirements driving AWS? (compliance, SLAs, etc.)
    • What level of AWS expertise exists on your team?
  3. Infrastructure ownership goals:

    • Why is moving to your own AWS account a priority?
    • Timeline for getting off vendor’s AWS account?
    • Who will manage infrastructure ongoing? (Lilo team or Brainforge support?)
    • Backup and disaster recovery requirements

Team Technical Depth & Roles

  1. Who on the Lilo team is most technical, and what is the team’s overall technical depth?

    • Who has coding experience? (You mentioned Bobby has been using Replit)
    • What languages/frameworks is your team comfortable with?
    • Who’s been handling UI/UX design work internally?
    • Who will be the day-to-day technical contact post-handoff?
  2. Current technical workflows:

    • How much are you and Bobby currently building with Cursor and vibe coding?
    • What’s your experience with git, version control, deployment?
    • Have you worked with Docker, CI/CD pipelines before?
    • What’s your team’s capacity for learning new tools/frameworks?
  3. Team structure for this project:

    • Who from Lilo will be involved day-to-day?
    • Decision-making authority for technical choices?
    • How much time can you dedicate to weekly standups, reviews, testing?
    • Any other developers you’re considering hiring or contracting?

Platform Architecture Understanding

  1. Can the team walk through the platform again so Surf fully understands the current architecture (frontend, backend, services) before we propose changes such as a monorepo?

    • What are the 10 separate repos and their purposes?
    • How do frontend, backend, and MCP servers communicate?
    • API architecture and endpoint structure
    • Data flow from integrations → storage → display
  2. MCP server architecture:

    • Are these forks of existing open-source MCPs?
    • Custom auth layers and API key management approach
    • How are they deployed separately vs. integrated?
    • Token management and refresh logic
  3. Current limitations and pain points:

    • What’s breaking most often? (Meta tokens, Klaviyo rate limits, etc.)
    • What takes the longest to fix when something breaks?
    • Where do you spend most time on manual workarounds?
    • What would you change about the current architecture if starting over?
  4. User management blocker details:

    • How exactly is auth tied to vendor’s system?
    • What’s required to decouple? (OAuth migration, JWT implementation, etc.)
    • Do you need to migrate existing users, or can you start fresh?
    • What user roles and permissions do you need?
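
To make item 4 concrete: the decoupling question largely comes down to who issues and verifies session tokens. Below is a minimal, standard-library sketch of self-issued HMAC-signed tokens in the JWT pattern mentioned above. Everything here is illustrative (the secret, claims, and function names are invented, not the platform’s actual auth code); a real implementation would use a maintained library such as PyJWT, with the secret pulled from a vault rather than hardcoded.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"replace-with-a-real-secret"  # illustrative only; load from a vault in practice


def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(user_id: str, role: str, ttl_seconds: int = 3600) -> str:
    """Issue a compact HMAC-SHA256-signed token (JWT-style, HS256)."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": user_id,
        "role": role,
        "exp": int(time.time()) + ttl_seconds,
    }).encode())
    signature = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"


def verify_token(token: str):
    """Return the claims dict if the signature is valid and unexpired, else None."""
    try:
        header, payload, signature = token.split(".")
    except ValueError:
        return None
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims["exp"] < time.time():
        return None
    return claims
```

Once tokens are self-issued like this, the vendor dependency reduces to a one-time user migration question rather than a runtime coupling.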

Code Quality & Extensibility

  1. Repository access and code review:

    • Can we get git repo access today? (GitLab, GitHub, or both?)
    • Code documentation state (README files, inline comments, API docs)
    • Test coverage (unit tests, integration tests, E2E tests)
    • Code quality tools in use (linters, formatters, type checking)
  2. Modularity and Cursor-friendliness:

    • How easy is it to add new features today?
    • Where is technical debt concentrated?
    • What parts of the codebase are most fragile?
    • Any “don’t touch this” areas we should know about?
  3. Dependencies and tech stack confirmation:

    • Frontend: Angular version? Any component libraries?
    • Backend: Node.js + Express? Any other frameworks?
    • Database: MongoDB setup (Atlas, self-hosted?)
    • MCP Servers: Python version? FastAPI, Flask, or custom?
    • Message Queue: RabbitMQ purpose and configuration
    • Package management: npm, pip, Docker, other?

Integration & Data Quality

  1. API integrations status:

    • Which connectors are fully working?
    • Which are broken or partially working?
    • Google MCP status (“homepage setup” was mentioned as needed; what does that mean?)
    • Any API rate limiting issues you’ve encountered?
  2. Historical data for forecasting:

    • Shopify order history: how far back per brand? (need 24+ months ideally)
    • Customer cohort structure: existing exports showing repeat purchase patterns?
    • Klaviyo campaign history depth?
    • Meta/Google spend data completeness?
    • Any known data quality issues (refunds, duplicates, canceled orders)?
  3. Data storage and access:

    • Where is historical data currently stored?
    • MongoDB structure and schema
    • Data retention policies
    • Backup and export processes
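
To ground the cohort questions above, a minimal sketch (Python, matching the existing MCP services) of the repeat-purchase calculation we would expect to run on Shopify order exports. The sample rows, field choices, and function names are invented for illustration; the real calculation would read from the actual export schema.

```python
from collections import defaultdict
from datetime import date

# Hypothetical rows from a Shopify order export: (customer_id, order_date).
orders = [
    ("c1", date(2024, 1, 5)), ("c1", date(2024, 2, 9)),
    ("c2", date(2024, 1, 20)),
    ("c3", date(2024, 2, 2)), ("c3", date(2024, 4, 11)),
]


def month_key(d: date) -> str:
    """Cohort label: the year-month of a customer's first order."""
    return f"{d.year}-{d.month:02d}"


def repeat_rate_by_cohort(orders):
    """Group customers by first-order month; report the share who ordered again."""
    first_order = {}
    counts = defaultdict(int)
    for customer, d in sorted(orders, key=lambda row: row[1]):
        counts[customer] += 1
        first_order.setdefault(customer, month_key(d))
    cohorts = defaultdict(lambda: [0, 0])  # cohort -> [customers, repeaters]
    for customer, cohort in first_order.items():
        cohorts[cohort][0] += 1
        if counts[customer] > 1:
            cohorts[cohort][1] += 1
    return {c: repeaters / total for c, (total, repeaters) in cohorts.items()}
```

With the sample data, the January 2024 cohort has two customers of whom one repeated (rate 0.5). Known data-quality issues (refunds, duplicates, canceled orders) would need filtering before a calculation like this is trustworthy.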

Vendor Handoff Logistics

  1. Current vendor relationship:

    • Contract status and termination timeline?
    • What handoff documentation will they provide?
    • Any ongoing commitments or invoicing?
    • Their responsiveness for knowledge transfer?
    • Do they have access that needs to be revoked?
  2. Access requirements checklist:

    • Git repositories (all 10 repos)
    • AWS console access (IAM roles needed)
    • API keys (Shopify, Klaviyo, Meta, Google Ads, Foreplay, Anthropic)
    • MongoDB connection strings
    • Environment variables and secrets documentation
    • Domain and DNS management access
    • CI/CD pipeline credentials
  3. Intellectual property clarity:

    • Stitch code shows copyright Lilo Social LLC; confirm full ownership
    • BRNZ-platform shows copyright Brainz; what’s the relationship?
    • Any licensed components or vendor-retained IP?
    • Open source dependencies and license compliance

Project Priorities & Timeline

  1. Phase 1 priorities confirmation:

    • Platform stabilization vs. new feature development balance
    • User management decoupling urgency
    • Bug fixes prioritization
    • AWS migration timeline preference
  2. Feature prioritization:

    • Forecasting tool urgency (originally targeting January)
    • Creative brief automation (Foreplay integration) timeline
    • Any other features in backlog we should know about?
    • Quick wins you’d like to see in first 2 weeks?
  3. Success metrics for Phase 1:

    • What does “done” look like for platform stabilization?
    • How will we know AWS migration was successful?
    • User management acceptance criteria
    • Definition of “working well” for bug fixes
  4. Communication and workflow:

    • Weekly standup timing preference
    • Slack workspace setup
    • How you want to track progress (Asana, Linear, GitHub issues?)
    • Loom vs. live meetings preference
    • Emergency contact procedures

Future Vision & Scalability

  1. Long-term platform vision:

    • Where do you see this platform in 6 months? 12 months?
    • What’s the ultimate feature set you’re building toward?
    • Plans to scale beyond 60 brands?
    • Any plans to white-label or sell to other agencies?
  2. Team growth plans:

    • Are you planning to hire internal developers?
    • What capabilities do you want to build in-house?
    • Long-term reliance on Brainforge vs. building internal team?
    • Training and knowledge transfer priorities
  3. Technical debt tolerance:

    • How important is code elegance vs. shipping fast?
    • When should we push back on quick fixes vs. proper solutions?
    • Technical debt you’re comfortable carrying vs. must-fix

Key Discussion Topics

Monorepo Strategy

  • Surf’s perspective on consolidating 10 repos into 1
  • Trade-offs: simplicity vs. deployment flexibility
  • Migration path and rollout strategy
  • Impact on Cursor-driven development
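
As a discussion aid, one possible shape for a consolidated repo. This is purely illustrative; the actual directory names and boundaries would come from the existing 10 repos once we have seen them.

```
lilo-platform/
├── apps/
│   ├── web/        # Angular frontend
│   └── api/        # Node.js/Express backend
├── services/
│   └── mcp/        # Python MCP servers (Klaviyo, Shopify, Meta, Google)
├── packages/       # shared types and utilities
└── infra/          # deployment config, CI/CD
```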

Hosting Decision (AWS vs. Railway)

  • Cost comparison for your scale
  • DevOps complexity trade-offs
  • Migration effort required
  • Long-term maintainability

Architecture Modernization

  • Opportunities to simplify current setup
  • Serverless vs. container-based approach
  • Database optimization opportunities
  • Caching and performance improvements

Team Enablement Strategy

  • Cursor-driven development training plan
  • Repository structure for non-engineer contributions
  • Documentation standards we’ll implement
  • Code review process for Lilo team submissions

Resources Mentioned/Requested

To be filled out during/after meeting

Access & Credentials

  • Git repository access (all repos)
  • AWS console IAM user creation
  • API keys consolidated document
  • MongoDB connection strings
  • Environment variables list
  • CI/CD pipeline access

Documentation

  • Current architecture diagrams (if exist)
  • API endpoint documentation
  • Deployment runbooks
  • Existing technical documentation or Notion pages

Code & Data

  • Python cohort analysis notebook/script (mentioned from previous agency)
  • Sample Shopify data exports for forecasting validation
  • Orca Analytics spreadsheet (forecasting model to replicate)
  • Existing UI mockups or Figma files

Vendor

  • Vendor handoff schedule
  • Outstanding invoices or commitments list
  • Access revocation checklist

Action Items

To be filled out during/after meeting

Lilo Social (Zac/Bobby)

  • Provide git repository access to Surf and Uttam
  • Create AWS IAM user with appropriate permissions
  • Compile API keys and credentials in secure doc
  • Share existing architecture/technical documentation
  • Identify 2-3 pilot brands for forecasting validation
  • Confirm BFCM blackout period for production changes
  • Schedule vendor handoff call if needed

Brainforge (Uttam/Surf)

  • Complete codebase audit and document findings
  • Create architecture decision record (ADR) for hosting choice
  • Draft detailed Phase 1 week-by-week plan
  • Prepare AWS vs. Railway cost/complexity comparison
  • Set up Brainforge Slack workspace for project
  • Schedule follow-up technical deep dive (if needed)

Joint

  • Weekly standup scheduling (propose Thursday 2PM PT based on LMNT pattern?)
  • Establish communication protocols (Slack, Loom, emergency contact)
  • Define Phase 1 success criteria document

Notes

Space for additional notes during the meeting

Key Insights

Technical Discoveries

Concerns / Red Flags

Quick Win Opportunities

Decisions Made


Next Steps After Kickoff

  1. This Week (Dec 2-6):

    • Complete codebase audit
    • Document current architecture
    • Begin AWS migration planning
    • Identify critical bugs to fix
  2. Week of Dec 9:

    • Start platform stabilization work
    • User management decoupling
    • Repository consolidation planning
    • First weekly standup
  3. Week of Dec 16:

    • AWS migration execution
    • Bug fixes implementation
    • Cursor-friendly repo structure implementation
    • Team enablement documentation
  4. By End of December:

    • Platform fully stabilized
    • Lilo team deploying first Cursor-driven feature
    • Ready to begin forecasting tool development

Meeting Success Criteria

By the end of today’s call:

  • ✅ Surf has complete understanding of current platform architecture
  • ✅ Decision made on AWS vs. Railway (or path to decision)
  • ✅ Clear picture of Lilo team’s technical capabilities
  • ✅ Access requirements documented and timeline set
  • ✅ Phase 1 priorities confirmed and understood
  • ✅ Communication protocols established
  • ✅ Immediate next steps with owners assigned
  • ✅ No major surprises that would change timeline or scope

Brainforge Team Talking Points

Our Approach Reminder:

  • Weekly sprint velocity: Monday kickoff, Wednesday demo, Friday ship
  • We want you shipping features via Cursor yourself
  • Transparent pricing: Fixed monthly, 14-day cancellation
  • You own everything: code, infrastructure, data

What We Bring:

  • Surf: Senior application architect with platform modernization expertise
  • Experience with agency tools and automation (we’ve built similar forecasting/creative tools)
  • Understanding of your economics (can’t afford $7k/brand SaaS sprawl)
  • We build our own internal tools the same way

Questions We Want to Answer Today:

  • Is the current architecture salvageable or should we start fresh in places?
  • Can we move faster with Railway vs. AWS for your use case?
  • Where can we get quick wins in first 2 weeks?
  • What’s the biggest technical blocker preventing velocity?