Research plan: Roadway-style product on Brainforge primitives

Date: 2026-04-06
Type: Research-only plan (no implementation in this document)
Companion memo: roadway-style-growth-marketing-on-brainforge-primitives.md


Scope: Pure research. No routes, no nav changes, no new services in this document—only how it could be assembled from what we already have plus common OSS and ad-platform APIs.

Sales context (from internal ask)

Agencies pay roughly $30–50k/mo for tools positioned like Roadway ("measure, monitor, and manage performance marketing with AI"). The question is what we could credibly compose from internal apps/primitives and OSS, and what must come from Meta/Google (or warehouse) integrations.

External narrative research (optional)

  • last30days (brainforge-platform/.cursor/skills/last30days/SKILL.md): multi-source recency research for buyer language, pain, and objections—not for architecture.
  • Suggested prompts: AI tools for Meta/Google ads agencies; performance marketing automation complaints; agency reporting stack fatigue.
  • Repo discovery: Cursor semantic search + rg under apps/platform/; optional brainforge-platform/.cursor/skills/research-codebase-opportunities/SKILL.md for broader tooling audits.

Internal primitives (The Forge / platform)

These are existing composition points—a Roadway-like surface could be conceptually built by combining them; no commitment to build.

| Capability | Primitive / location | Relevance to "measure / monitor / manage" |
| --- | --- | --- |
| Workspace shell, nav | apps/platform/src/components/Sidebar.tsx, (main) routes | Host a dedicated "Growth" or "Demos" area without a new product shell. |
| KPI + list + filters layout | apps/platform/src/components/DashboardLayout.tsx, MetricCard | Campaign summary row + filtered entity list (meetings today; could be campaigns/ad sets with different data). |
| Client-scoped dashboard + chat | apps/platform/src/components/ClientDashboardTemplate.tsx | Pattern: CopilotKit + N8nCopilotKitChat, quickPrompts, webhook to /api/brainforge/client/.../chat; same pattern for narrating metrics if n8n (or backend) receives structured context. |
| Department template | apps/platform/src/components/DepartmentDashboardTemplate.tsx | Same Copilot + layout pattern for internal org views. |
| Standalone AI demo page | apps/platform/src/app/(main)/demo/copilotkit/page.tsx | Shows full-page CopilotKit + n8n runtime; useful reference for chat-first growth flows. |
| Main dashboard Copilot | apps/platform/src/app/(main)/dashboard/page.tsx | CopilotKit + n8n at scale in a real page. |
| Agent pages | apps/platform/src/app/(main)/agent/[agentId]/page.tsx | Mastra + CopilotKit pattern for specialized agents (e.g. "Campaign analyst" as a first-class agent). |
| Dense tables | @mui/x-data-grid in apps/platform/package.json | Campaign / ad set grids, bulk actions UI. |
| Search across entities | apps/platform/src/app/api/brainforge/search/global/route.ts, Turbopuffer indexing | Research note: today indexes meetings, Slack, deals, not ads; extending the index to "campaign entities" would be a product decision + pipeline. |
| Analytics / product events | apps/platform/src/lib/analytics/posthogClient.ts | Funnel or usage around any future growth module. |
| AI runtime plumbing | CopilotKit routes, apps/platform/src/lib/n8n-adapter.ts, apps/platform/src/mastra/ | Orchestration for NL → tools → formatted answers; Langfuse appears in dependencies for observability. |
| Data / ETL (org-wide) | apps/dagster-pipelines/ (monorepo subtree) | Research option: scheduled ingestion of warehouse-exported ad metrics (Snowflake/BigQuery tables) if clients already centralize ads data; avoids duplicating Fivetran in some deals. |
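To make the "KPI + list" reuse concrete, here is a minimal sketch of the entity grain and summary aggregation a DashboardLayout/MetricCard composition would need. All type and field names here are hypothetical, chosen for illustration; they do not exist in the repo.

```typescript
// Hypothetical campaign-day fact row: one row per ad set per day,
// matching the "ad set day" grain referenced in Phase 1 below.
interface CampaignDayFact {
  tenantId: string;
  campaignId: string;
  adSetId: string;
  date: string; // ISO date, e.g. "2026-04-01"
  spend: number;
  revenue: number;
  conversions: number;
}

// Summary shape a MetricCard-style KPI row could render.
interface KpiSummary {
  spend: number;
  revenue: number;
  roas: number; // revenue / spend
  cpa: number; // spend / conversions
}

function summarize(rows: CampaignDayFact[]): KpiSummary {
  const spend = rows.reduce((s, r) => s + r.spend, 0);
  const revenue = rows.reduce((s, r) => s + r.revenue, 0);
  const conversions = rows.reduce((s, r) => s + r.conversions, 0);
  return {
    spend,
    revenue,
    roas: spend > 0 ? revenue / spend : 0,
    cpa: conversions > 0 ? spend / conversions : 0,
  };
}
```

The same summary object could feed MetricCard props or the structured context handed to a chat webhook.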

Internal services (conceptual wiring)

  • n8n: Already integrated with CopilotKit streaming (apps/platform/src/lib/n8n-adapter.ts); workflows could (in a full build) call HTTP nodes to read-only metrics APIs or Supabase.
  • Supabase: Multiple project patterns in repo; natural place for tenant-scoped metric snapshots, alert state, and audit logs if we store ads-derived rows ourselves.
  • HubSpot / Deals: Indexed for search; relevant if the sales motion ties campaigns to accounts/deals (research: CRM linkage for agency pitches).
  • Slack: Platform rules reference Slack MCP/Supabase for comms—monitoring layer could mirror alerts to Slack (pattern already familiar internally).
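If Supabase ends up holding alert state, something like the following record and transition guard would keep status changes auditable. This is a sketch under assumptions: the table, field names, and rule names are invented for illustration, not an existing schema.

```typescript
// Hypothetical tenant-scoped alert-state record for a Supabase table.
type AlertStatus = "open" | "acknowledged" | "resolved";

interface AlertState {
  tenantId: string;
  entityId: string; // campaign or ad set id
  rule: string; // e.g. "spend_wow_spike"
  status: AlertStatus;
  firedAt: string; // ISO timestamp
}

// Only allow the transitions an audit log can explain.
const allowed: Record<AlertStatus, AlertStatus[]> = {
  open: ["acknowledged", "resolved"],
  acknowledged: ["resolved"],
  resolved: [],
};

function transition(alert: AlertState, next: AlertStatus): AlertState {
  if (!allowed[alert.status].includes(next)) {
    throw new Error(`illegal transition ${alert.status} -> ${next}`);
  }
  return { ...alert, status: next };
}
```

A mirror of each state change could then be posted to Slack, matching the comms pattern already used internally.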

Open source and common external building blocks

Not prescriptive—options to evaluate per client.

| Layer | OSS / vendor examples | Notes |
| --- | --- | --- |
| Ad source APIs | Meta Marketing API, Google Ads API, LinkedIn Marketing (as needed) | OAuth, app review, rate limits; often the long pole for "real" measure. |
| ELT to warehouse | Airbyte, Meltano, dlt, custom Python | Many agencies already have Fivetran/Supermetrics; Brainforge might consume the warehouse instead of owning extraction. |
| Transform / metrics | dbt, SQL in Dagster assets | Unified ROAS, CPA, MER definitions; tests on freshness and row counts. |
| Time-series / OLAP | ClickHouse (OSS or cloud), BigQuery, Snowflake | Store daily campaign facts; optional cube semantic layer. |
| Semantic / NL-to-SQL | Cube.dev, Lightdash, Looker (vendor), or thin custom layer + LLM | Roadway-like "ask your data" often sits on curated metrics, not raw tables. |
| Charts in React | @mui/x-charts, Apache ECharts, Nivo, Tremor | Forge today has no chart library pinned; any of these is a research add for rich trends. |
| Alerting | Grafana Alerting, Prometheus (if metrics exported), custom rules in app + queue | "Monitor" phase: threshold + anomaly (simple: z-score or YoY/WoW in SQL). |
| Job scheduling | Dagster (in monorepo), Temporal (OSS), Celery | Sync jobs, alert evaluation, backfills. |
| LLM app patterns | CopilotKit (already), Mastra (already), optional tool frameworks | Manage with human-in-the-loop: suggest pause/scale, require approval before any Marketing API write. |
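The "simple z-score" mentioned in the Alerting row is small enough to sketch directly. This is illustrative only; window size and cutoff are assumptions to tune per client, and in practice the same check could live in SQL.

```typescript
// Z-score of today's value against a trailing window of daily values
// (e.g. spend per ad set per day).
function zScore(history: number[], today: number): number {
  const n = history.length;
  const mean = history.reduce((s, x) => s + x, 0) / n;
  const variance = history.reduce((s, x) => s + (x - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance);
  return std === 0 ? 0 : (today - mean) / std;
}

// Flag a value as anomalous when it sits more than `cutoff` standard
// deviations from the trailing mean.
function isAnomalous(history: number[], today: number, cutoff = 3): boolean {
  return Math.abs(zScore(history, today)) > cutoff;
}
```

The same rule expressed over a warehouse table (AVG, STDDEV, window functions) is usually where "noise vs signal" tuning happens in Phase 2.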

Composable architecture options (research)

Option A — Workspace-first, minimal backend

  • UI composed from DashboardLayout + Data Grid + CopilotKit.
  • Data: fixtures or small Supabase tables seeded from CSV exports.
  • Fastest story for “what it could feel like”; weakest on “live measure.”
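A minimal sketch of what Option A's fixture layer could look like, assuming CSV-shaped seed data; the column names and values are invented for illustration.

```typescript
// Option A: seed the grid from a CSV-shaped fixture instead of live APIs.
interface CampaignFixture {
  campaign: string;
  channel: "meta" | "google";
  spend: number;
  revenue: number;
}

const fixtures: CampaignFixture[] = [
  { campaign: "Spring promo", channel: "meta", spend: 1200, revenue: 4800 },
  { campaign: "Brand search", channel: "google", spend: 800, revenue: 2400 },
];

// Same filtered-entity-list pattern the existing dashboard uses for meetings.
function byChannel(
  rows: CampaignFixture[],
  channel: CampaignFixture["channel"],
): CampaignFixture[] {
  return rows.filter((r) => r.channel === channel);
}
```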

Option B — Warehouse-first (fits data-heavy agencies)

  • Dagster (or client’s existing scheduler) loads Snowflake/BigQuery facts.
  • Forge reads via API routes or PostgREST/Supabase views; Copilot/n8n receives aggregated JSON in context.
  • Strong measure credibility if definitions are aligned with client finance.
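For Option B, the key interface is the small aggregated JSON blob the Copilot/n8n webhook would receive as context. A sketch, assuming warehouse rows have already been read through an API route or Supabase view; all field names are hypothetical.

```typescript
// Shape warehouse rows into an aggregated context object for the chat layer.
interface WarehouseRow {
  campaign: string;
  date: string;
  spend: number;
  revenue: number;
}

function toChatContext(rows: WarehouseRow[]) {
  const byCampaign = new Map<string, { spend: number; revenue: number }>();
  for (const r of rows) {
    const agg = byCampaign.get(r.campaign) ?? { spend: 0, revenue: 0 };
    agg.spend += r.spend;
    agg.revenue += r.revenue;
    byCampaign.set(r.campaign, agg);
  }
  return {
    generatedAt: new Date().toISOString(),
    campaigns: [...byCampaign.entries()].map(([campaign, a]) => ({
      campaign,
      spend: a.spend,
      revenue: a.revenue,
      roas: a.spend > 0 ? a.revenue / a.spend : 0,
    })),
  };
}
```

Passing pre-aggregated, curated numbers (rather than raw tables) is also what keeps the narration layer aligned with client finance definitions.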

Option C — Full pipeline ownership

  • OAuth to ad platforms + ELT + dbt + alerting + approved writes.
  • Highest scope for 2–3 months; overlaps commercial tools (Roadway, etc.) most directly.

```mermaid
flowchart TB
  subgraph optB [Warehouse-first research pattern]
    WH[(Warehouse facts)]
    DAG[Dagster or client ETL]
    API[Forge API layer]
    UI[DashboardLayout plus Grid plus CopilotKit]
    WH --> DAG
    DAG --> API
    API --> UI
  end
  subgraph optC [Full stack pattern]
    Ads[Meta Google APIs]
    ELT[Airbyte or custom]
    WH2[(Warehouse)]
    AI[LLM plus rules plus approvals]
    Ads --> ELT --> WH2 --> API2[Forge API]
    API2 --> UI2[Forge UI]
    AI --> API2
  end
```
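Option C's "approved writes" are the riskiest piece, so here is a minimal sketch of the human-in-the-loop gate: nothing reaches an ad-platform write API without an explicit approval. The class and field names are illustrative, not an existing service.

```typescript
// Proposed mutation awaiting human approval before any Marketing API write.
interface ProposedAction {
  id: string;
  kind: "pause" | "scale_budget";
  entityId: string; // campaign or ad set id
  approved: boolean;
}

class ApprovalQueue {
  private pending = new Map<string, ProposedAction>();

  // LLM/rules layer proposes; nothing is approved by default.
  propose(action: Omit<ProposedAction, "approved">): ProposedAction {
    const queued = { ...action, approved: false };
    this.pending.set(action.id, queued);
    return queued;
  }

  // Human reviewer approves a specific action by id.
  approve(id: string): ProposedAction {
    const action = this.pending.get(id);
    if (!action) throw new Error(`unknown action ${id}`);
    action.approved = true;
    return action;
  }

  // Only approved actions may be handed to an ad-platform client.
  executable(): ProposedAction[] {
    return [...this.pending.values()].filter((a) => a.approved);
  }
}
```

An audit log of propose/approve/execute events (Supabase, per the services section above) would complete the rollback story.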

Phased research model (engagement-shaped, not a build plan)

| Phase | Research focus | Typical building blocks |
| --- | --- | --- |
| 0 | Tenancy, OAuth, legal (platform TOS), rate limits | Meta/Google dev apps; scope of "read-only" vs "write" |
| 1 | Measure — entity model, grain (ad set day), KPI dictionary | Warehouse + dbt docs; API contract for UI |
| 2 | Monitor — alert taxonomy, noise vs signal, Slack/email | Rule engine location (app vs Grafana vs n8n) |
| 3 | Manage — which actions are automated, audit, rollback | Human approval queue; Marketing API mutation safety |
| 4 | Agency RBAC, multi-brand, client-facing sub-accounts | Supabase RLS patterns; product copy |

Explicit risks to research with Legal/Security: ad account access, PII in forms/leads, storing creative assets, automated spend changes.

Research deliverables (for Sales / Solutions, not engineering tickets)

  1. Internal primitive map (memo + pointer to exact files).
  2. OSS/vendor shortlist per layer, with guidance on when to use each (warehouse vs direct API).
  3. 2–3 architecture options with tradeoffs (time to credibility, ongoing ops burden).
  4. Phase dependency list suitable for SOW scoping without implying we are building yet.

Out of scope for this research memo

  • Any code changes, env setup, or n8n workflow creation.
  • Commitments on Meta/Google app approval timelines.
  • Pricing or resourcing estimates (can be a follow-on).