Date: 2026-04-06
Type: Internal research (no implementation commitment)
Audience: Sales, Solutions, Engineering (scoping)
Related ask: Demo / 2–3 month custom-build positioning vs tools like Roadway (measure, monitor, manage paid marketing with AI).
1. Market and buyer narrative (proxy research)
Note: This section uses web search and public positioning statements as a proxy for multi-source “last 30 days” research. For cited social/reddit-style intel, run the in-repo last30days skill (brainforge-platform/.cursor/skills/last30days/SKILL.md) with prompts such as: “AI tools for Meta Google ads agencies,” “performance marketing automation complaints,” “agency reporting stack fatigue.”
How buyers describe the category
Measure: Cross-channel reporting, attribution, incrementality, “single view” of spend and outcomes.
Monitor: Alerts on CPA/ROAS drift, budget pacing, creative fatigue, anomaly detection.
Manage: Recommendations and/or autonomous actions (with varying degrees of human approval)—often framed as “AI coworkers” or “agents.”
Competitive / adjacent landscape (illustrative, not exhaustive)
| Segment | Examples (public positioning) | Implication for Brainforge |
| --- | --- | --- |
| Full-stack AI ads | Roadway (measure/monitor/manage + AI coworkers), Onmo (multi-channel orchestration), Alpomi (agency multi-account monitoring), XPathLabs / groas-style “agent” narratives | Buyers expect unified UI + AI; differentiation often claims ROAS/uplift. Credibility requires data depth, not chat alone. |
| Attribution / MMM | Rockerbox, Measured, mix-modeling vendors | Strong on measure; may partner rather than replace a workspace copilot. |
| Lower-cost AI ad tools | Synter, GetHookd, Scalable (varied breadth) | Price pressure below enterprise; enterprise agencies still pay for trust, SSO, audit, multi-tenancy. |
Objections to anticipate
“We already have Supermetrics / Fivetran + Looker.” → Position Brainforge as decision layer + workflow + AI on top of their warehouse, not a duplicate ELT (see Option B architecture).
“We won’t give you write access to ad accounts.” → Read-only phase + human-approved mutations; clear audit trail.
“Black-box AI changed our bids.” → Human-in-the-loop, explainable deltas (rule + LLM summary), rollback story.
“Meta/Google app review takes forever.” → Scope a warehouse-first MVP to de-risk; treat OAuth apps as a phase gate.
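The read-only-first and audit-trail answers above imply a concrete shape for approval-gated writes. The sketch below is illustrative only (the class, field names, and status flow are assumptions, not a platform contract): every proposed mutation carries its old and new values plus a rationale, so a reviewer can approve it and the audit trail alone is enough to roll it back.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedChange:
    """Hypothetical approval-gated change record for an ad-account mutation."""
    entity_id: str            # e.g. an ad set identifier
    field_name: str           # e.g. "daily_budget"
    old_value: float
    new_value: float
    rationale: str            # rule hit + LLM summary, kept for explainability
    status: str = "pending"   # pending -> approved -> applied -> rolled_back
    history: list = field(default_factory=list)

    def _log(self, event: str) -> None:
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

    def approve(self, reviewer: str) -> None:
        assert self.status == "pending"
        self.status = "approved"
        self._log(f"approved by {reviewer}")

    def mark_applied(self) -> None:
        assert self.status == "approved"
        self.status = "applied"
        self._log("applied")

    def rollback(self) -> float:
        """Return the value to restore; only applied changes can roll back."""
        assert self.status == "applied"
        self.status = "rolled_back"
        self._log("rolled back")
        return self.old_value
```

In production this record would live in a Postgres/Supabase table rather than memory, but the state machine and the requirement that `old_value` is captured before any write are the point.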
Sources (web, retrieved 2026-04-06)
2. Internal primitive map
These are existing composition points in the monorepo. A Roadway-like experience would compose them; none of this implies shipped product.
2.1 Workspace UI and layout
| Capability | Location | Notes |
| --- | --- | --- |
| Nav / demo grouping | `apps/platform/src/components/Sidebar.tsx` | “Demos,” clients, agents, tools—pattern for surfacing a growth module. |
| KPI row + filters + list | `apps/platform/src/components/DashboardLayout.tsx`, `MetricCard` | Same layout used for meetings; swap data source for campaign entities. |
| Client hub shell | `apps/platform/src/components/ClientDashboardTemplate.tsx` | Client-scoped dashboard + chat rail. |
| Department shell | `apps/platform/src/components/DepartmentDashboardTemplate.tsx` | Internal org variant of same pattern. |
| Route transition / Copilot wrapper | `apps/platform/src/components/RouteTransition.tsx` | App-wide CopilotKit usage pattern. |
2.2 AI chat and orchestration
| Capability | Location | Notes |
| --- | --- | --- |
| n8n + CopilotKit runtime | `apps/platform/src/app/api/copilotkit-n8n/route.ts`, `apps/platform/src/lib/n8n-adapter.ts` | `N8nWebhookAgent`, streaming adapter for workflow-backed chat. |
| Standalone Copilot demos | `apps/platform/src/app/(main)/demo/copilotkit/page.tsx`, `apps/platform/src/app/api/ABCcopilotkit/route.ts` | Full-page chat tied to alternate runtime. |
| Main dashboard AI | `apps/platform/src/app/(main)/dashboard/page.tsx` | Production-scale CopilotKit + n8n on a core page. |
| Mastra agents | `apps/platform/src/mastra/` (`index.ts`, `azure.ts`, tools under `mastra/tools/`) | Specialized agents, Azure OpenAI config. |
| Agent detail UI | `apps/platform/src/app/(main)/agent/[agentId]/page.tsx` | CopilotKit + agent-specific UX. |
| Reusable chat components | `apps/platform/src/components/N8nCopilotKitChat.tsx`, `CopilotKitChat.tsx`, `ModularChatAgent.tsx` | Quick prompts, webhooks, layout. |
| CopilotKit (direct) | `apps/platform/src/app/api/copilotkit/route.ts` | Alternative runtime path. |
2.3 Search, deals, analytics
| Capability | Location | Notes |
| --- | --- | --- |
| Global search (Turbopuffer) | `apps/platform/src/app/api/brainforge/search/global/route.ts`, `apps/platform/src/lib/turboPuffer/` | Today: meetings, Slack, deals—ads entities would be a net-new index if desired. |
| Product analytics | `apps/platform/src/lib/analytics/posthogClient.ts` | Usage / funnel instrumentation for any new module. |
| HubSpot deals indexing | `apps/platform/src/lib/turboPuffer/indexingUtils.ts` (e.g. `indexHubSpotDeals`) | CRM ↔ campaign linkage is a product choice for agency accounts. |
2.4 Data / ETL (organization-level)
| Capability | Location | Notes |
| --- | --- | --- |
| Dagster monorepo | `apps/dagster-pipelines/` | Legacy ETL; schedules default off; migration toward platform-native jobs per AGENTS.md. Candidate for scheduled pulls or warehouse transforms if Brainforge operates the pipeline. |
| Standards | `knowledge/standards/03-knowledge/engineering/setup/dagster-*.md` | Operational setup for Dagster work. |
2.5 Internal services (conceptual, not exhaustive)
n8n: Workflow-backed responses, HTTP to internal APIs or third parties (read-only first recommended).
Supabase: Tenant data, RLS, audit tables—fits alert state, approval queue, and metric snapshots if stored in Postgres.
Slack: Familiar delivery channel for monitor alerts (patterns exist across platform rules and skills).
Azure OpenAI: Primary LLM path via Mastra/CopilotKit configuration (mastra/azure.ts).
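Since Slack is the likely delivery channel for monitor alerts, the small sketch below shows one plausible payload shape for a Slack incoming webhook (simple `text` form). The alert dict keys (`metric`, `entity`, `delta_pct`, and so on) are assumptions for illustration, not an existing contract in the platform.

```python
def format_slack_alert(alert: dict) -> dict:
    """Render a monitor alert as a Slack incoming-webhook payload."""
    direction = "up" if alert["delta_pct"] >= 0 else "down"
    text = (
        f":warning: {alert['metric']} for {alert['entity']} "
        f"{direction} {abs(alert['delta_pct']):.1f}% WoW "
        f"({alert['previous']:.2f} -> {alert['current']:.2f}); "
        f"threshold +/-{alert['threshold_pct']:.0f}%"
    )
    # Slack incoming webhooks accept a JSON body with a top-level "text" key;
    # richer Block Kit layouts can replace this later without changing callers.
    return {"text": text}
```

The same formatter could back both Slack and email channels, keeping the rules engine ignorant of delivery details.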
3. OSS and external API map
Evaluate per client; no single stack is mandatory.
| Layer | Options | When to prefer |
| --- | --- | --- |
| Ad sources | Meta Marketing API, Google Ads API, LinkedIn Marketing API | Direct measure from source; requires OAuth, permissions, rate limits, sometimes app review. |
| ELT | Airbyte, Meltano, dlt, custom Python | Client lacks warehouse feeds; Brainforge or partner operates replication. |
| Transform | dbt, SQL assets in Dagster | Need governed KPI definitions (ROAS, CPA, MER, blended vs. platform-reported). |
| Warehouse / OLAP | Snowflake, BigQuery, ClickHouse, Databricks | Warehouse-first Option B; agencies often already have one. |
| Semantic / metrics API | Cube.dev, Lightdash, Looker, or thin metrics JSON from your API | NL-to-SQL is safer over curated metrics, not raw fact tables. |
| Charts in React | @mui/x-charts, ECharts, Nivo, Tremor | Forge does not pin a chart library today; pick one if trend lines are required. |
| Alerting | Grafana Alerting, custom scheduled jobs (Dagster/cron) + Slack webhook | Monitor layer; start with simple WoW/threshold rules before ML anomalies. |
| Job orchestration | Dagster (in repo), Temporal, Celery | Syncs, backfills, alert evaluation. |
| LLM observability | Langfuse (already in platform dependencies) | Trace prompts/tool calls for manage features. |
| Approval / workflow | Build in Supabase + UI, or extend n8n | Human-in-the-loop before Marketing API writes. |
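The Transform row calls for governed KPI definitions. As a sketch of what the semantic layer (dbt model or thin metrics API) would own, the function below computes ROAS, CPA, and MER from daily rows; the field names are illustrative, and "blended" here means finance-sourced revenue as opposed to platform-reported.

```python
def kpis(rows: list) -> dict:
    """Governed KPI definitions over daily spend/outcome rows (illustrative)."""
    spend = sum(r["spend"] for r in rows)
    conversions = sum(r["conversions"] for r in rows)
    platform_rev = sum(r["platform_revenue"] for r in rows)  # ad platform's attribution
    blended_rev = sum(r["blended_revenue"] for r in rows)    # finance-sourced, channel-attributed
    total_rev = sum(r["total_revenue"] for r in rows)        # all revenue, all channels
    return {
        "spend": spend,
        "cpa": spend / conversions if conversions else None,
        "roas_platform": platform_rev / spend if spend else None,
        "roas_blended": blended_rev / spend if spend else None,
        # MER (marketing efficiency ratio): total revenue over total ad spend
        "mer": total_rev / spend if spend else None,
    }
```

Agreeing these definitions with the client's finance team is the Phase 1 work; the code is trivial once the dictionary is settled.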
4. Composable architecture options (tradeoffs)
Option A — Workspace-first, minimal backend
Compose: DashboardLayout + @mui/x-data-grid + CopilotKit / N8nCopilotKitChat.
Data: Fixtures, CSV seed, or small Supabase tables.
Pros: Fastest story and UX prototype; low infra.
Cons: Weakest credible measure; not a substitute for live pipelines in technical diligence.
Option B — Warehouse-first (fits many agencies)
Compose: Client (or Brainforge Dagster) loads Snowflake/BigQuery facts → API or Supabase views → Forge UI + copilot context JSON.
Pros: Aligns with “we already ETL”; Brainforge focuses on semantic layer + UX + AI; faster time to real numbers if tables exist.
Cons: Depends on client data quality and agreed metric definitions; transformation ownership must be explicit in the SOW.
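One plausible shape for the "copilot context JSON" in Option B: a compact, curated snapshot that the UI hands to CopilotKit so the LLM reasons over governed metrics rather than raw warehouse tables. The keys and the alert cap below are assumptions for illustration, not an existing schema.

```python
import json

def copilot_context(client: str, period: str, metrics: dict, alerts: list) -> str:
    """Serialize a curated metric snapshot for the chat runtime (sketch)."""
    context = {
        "client": client,
        "period": period,
        "metrics": metrics,          # output of the semantic layer, not raw facts
        "open_alerts": alerts[:5],   # cap payload size for the prompt window
    }
    # Stable key order keeps prompts cache-friendly and diffs readable.
    return json.dumps(context, sort_keys=True)
```

Keeping the LLM on this curated surface is what makes the "decision layer on top of your warehouse" positioning defensible against the NL-to-SQL risks noted in §3.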
Option C — Full pipeline ownership
Compose: OAuth to ad platforms + ELT + dbt + alerting + approval-gated writes + Forge UI.
Pros: Closest parity to Roadway-class “all-in-one”; clearest 2–3 month consulting shape.
Cons: Highest security, compliance, and ops burden; Meta/Google approval and rate-limit risk.
Diagram (logical)
```mermaid
flowchart TB
  subgraph optB ["Warehouse-first"]
    DAG[Dagster or client ELT]
    WH[Warehouse facts]
    API[Forge API]
    UI[DashboardLayout Grid CopilotKit]
    DAG --> WH --> API --> UI
  end
  subgraph optC ["Full-stack"]
    Ads[Meta Google APIs]
    ELT[Airbyte or custom]
    WH2[Warehouse]
    AI[LLM rules approvals]
    Ads --> ELT --> WH2 --> API2[Forge API]
    API2 --> UI2[Forge UI]
    AI --> API2
  end
```
5. Phased research model (SOW-shaped)
Use this for scoping conversations, not as a delivery promise.
| Phase | Research / scoping focus | Typical building blocks | Risks / dependencies |
| --- | --- | --- | --- |
| 0 | Tenancy, OAuth apps, legal/TOS, read vs write | Developer apps, sandbox ad accounts, data processing agreements | App review latency; agency’s client granting access |
| 1 | Measure — grain (e.g. ad set × day), KPI dictionary, freshness SLAs | dbt docs, API contract, sample dashboards | Definition drift vs finance; platform attribution mismatch |
| 2 | Monitor — alert taxonomy, noise vs signal, channels | Rules engine placement; Slack/email; optional Grafana | Alert fatigue; false positives erode trust |
| 3 | Manage — which mutations are allowed, audit, rollback | Approval queue; Marketing API write wrappers; Langfuse traces | Irreversible spend changes; need kill switch |
| 4 | Agency RBAC, multi-brand, client-facing views | Supabase RLS; SSO; product permissions model | Multi-tenant data isolation audits |
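For Phase 2, "start with simple WoW/threshold rules before ML anomalies" can be as small as the function below. It is a minimal sketch: the 15% default and the cold-start handling are illustrative choices, not a committed spec.

```python
def wow_drift(current: float, previous: float, threshold_pct: float = 15.0):
    """Return the week-over-week drift percent if it breaches the threshold,
    else None. A None return means: do not alert."""
    if previous == 0:
        # No baseline; a separate cold-start rule should own this case
        # rather than dividing by zero or spamming alerts.
        return None
    delta_pct = (current - previous) / previous * 100.0
    return delta_pct if abs(delta_pct) >= threshold_pct else None
```

Evaluating this per metric per entity on a Dagster/cron schedule, and routing non-None results to Slack, is a credible first monitor layer; anomaly models can replace the rule later behind the same interface.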
Legal / security topics to flag early: lead/form PII, creative asset storage, automated bidding/spend changes, sub-processor list for LLM providers.
6. Deliverables checklist (this document)
| # | Deliverable | Section |
| --- | --- | --- |
| 1 | Internal primitive map with file paths | §2 |
| 2 | OSS/vendor shortlist by layer | §3 |
| 3 | 2–3 architecture options + tradeoffs | §4 |
| 4 | Phase dependency list for SOW scoping | §5 |
| 5 | Market/buyer bullets + objection stubs | §1 |
7. Optional next steps (not part of this research)
Run last30days for cited social/web synthesis (see note in §1).
Pick a default architecture option per ICP (e.g. agencies with Snowflake → Option B).
Produce a one-page Sales talk track and Roadway parity checklist as separate collateral if requested.
Canonical in-repo plan shell: roadway-style-growth-marketing-research-plan.md
Cursor skill for this workflow: .cursor/skills/composable-product-research-on-primitives/SKILL.md
Repo root AGENTS: AGENTS.md, apps/platform/AGENTS.md, apps/dagster-pipelines/AGENTS.md.