Service Definition System
Purpose: Canonical home for all Brainforge service offerings — how we pitch them, deliver them, plan them, and spin them up in Linear.
Notion: Three databases — Offers, SOP, Demos.
Last Updated: 2026-04-08
Three-Line Service Hierarchy
Brainforge has three service lines. Every offering belongs to exactly one line and one subservice within it.
Service Line (Initiative) → Subservice (grouping) → Offering (Project) → Phase (Milestone) → Deliverable (Epic) → Ticket (Issue)
| Service Line | Subservices |
|---|---|
| AI | Workflow Automation · Knowledge Engineering · Copilots & Agents |
| Data | Data Platform · Data Modeling · Reverse ETL |
| Strategy & Analytics | Data Strategy · Measurement & KPIs · Reporting & Insights |
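For scripts and audits, the hierarchy above can be expressed as a small mapping. This is a sketch that mirrors the table; the variable and function names are illustrative, not an existing module:

```python
# Canonical service-line → subservice taxonomy (mirrors the table above).
SERVICE_LINES = {
    "AI": ["Workflow Automation", "Knowledge Engineering", "Copilots & Agents"],
    "Data": ["Data Platform", "Data Modeling", "Reverse ETL"],
    "Strategy & Analytics": ["Data Strategy", "Measurement & KPIs", "Reporting & Insights"],
}

def line_for_subservice(subservice: str) -> str:
    """Return the service line that owns a subservice (every subservice has exactly one)."""
    for line, subservices in SERVICE_LINES.items():
        if subservice in subservices:
            return line
    raise KeyError(f"unknown subservice: {subservice}")
```

Because every offering belongs to exactly one subservice, a lookup like this can catch misfiled folders or labels early.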
Linear issue labels (canonical):
`workflow-automation`, `knowledge-engineering`, `copilots-agents`, `data-platform`, `data-modeling`, `reverse-etl`, `data-strategy`, `measurement-kpis`, `reporting-insights` — see `linear-cleanup-taxonomy.md` §4. Agent audits may also reference `.cursor/skills/linear-service-label-audit/references/taxonomy.md` (legacy slugs such as `data-infrastructure` until migration). Vault folders below may still use legacy directory names (`ai-infrastructure/`, `data-infrastructure/`, `analytics-bi/`, `metrics-kpis/`); that is not the same as the Linear label string.
Folder Structure
knowledge/sales/services/
├── README.md ← this file
├── templates/
│ ├── offer-template.md ← pitch template
│ ├── sop-template.md ← delivery SOP template
│ ├── demo-template.md ← demo template
│ ├── implementation-plan-template.md ← week-by-week phases + milestones
│ ├── sow-project-plan-template.md ← **moved** to `knowledge/delivery/03-project-lifecycle/` (stub remains here)
│ └── linear-template.md ← starter epics + tickets ("clone this")
├── ai/
│ ├── workflow-automation/
│ │ ├── insurance-lead-processor/
│ │ └── ai-growth-sprint-workshop/
│ ├── knowledge-engineering/
│ │ ├── custom-context-graph/
│ │ └── cursor-workshop/
│ └── ai-infrastructure/ ← legacy slug for Copilots & Agents until folder migration
│ └── custom-deployment/
├── data/
│ ├── assessment-audit/ ← legacy audit bucket; fold audits into subservices over time
│ │ ├── dbt-audit/
│ │ ├── digital-ads-visibility-audit/
│ │ ├── marketing-data-audit/
│ │ ├── product-analytics-audit/
│ │ └── snowflake-audit/
│ ├── data-infrastructure/ ← legacy slug for Data Platform until folder migration
│ │ ├── dataops-program/
│ │ ├── full-data-platform/
│ │ └── data-warehouse-dbt/
│ ├── analytics-bi/ ← legacy slug for Data Modeling until folder migration
│ │ └── product-analytics-platform/
│ ├── reverse-etl/ ← outbound syncs / activation from warehouse (often retainer)
│ └── activation-attribution/ ← legacy offering bucket; no longer a core Data subservice
│ └── edge-to-activation/ ← offer, sop, demo, implementation-plan, linear-template (+ link to technical playbook in `knowledge/standards/03-knowledge/`)
└── strategy-analytics/
├── data-strategy/
├── metrics-kpis/ ← legacy slug for Measurement & KPIs until folder migration
├── reporting-insights/
└── technical-due-diligence/ ← legacy offering bucket; no longer a core Strategy & Analytics subservice
Current Data Naming Decisions
These notes capture the reasoning behind the current Data taxonomy so future cleanup work does not need to reconstruct the logic from chat history.
Decision 1: Audits are not a standalone Data subservice
Assessment & Audit is useful as a delivery motion, but it is too cross-cutting to be the long-term subservice name. Audits can happen within every Data subservice:
- Data Platform Audit
- Data Modeling Audit
The taxonomy should treat audits as a phase, engagement type, or offering variant rather than a permanent peer to build-oriented subservices.
Decision 2: Use Data Platform instead of Data Engineering
Data Engineering describes an internal capability or discipline more than a client-facing subservice. It is broad, but it points more to how Brainforge works than to what a client is buying.
Data Platform is the preferred name because it better captures the outcome area:
- ingestion and data movement
- warehousing and storage foundations
- dbt and transformation foundations
- orchestration and scheduling
- observability, reliability, and platform operations
Decision 3: Use Data Platform instead of Data Infrastructure
Data Infrastructure is directionally close, but it reads narrower and more plumbing-oriented than the actual scope. The team uses this bucket for more than raw infra work, so Data Platform is a better umbrella.
In practice:
- `Data Engineering` = internal capability language
- `Data Infrastructure` = narrower technical framing
- `Data Platform` = preferred subservice name
Decision 4: Use Data Modeling instead of Analytics & BI
Analytics & BI mixes two different kinds of work:
- implementation work on marts, semantic layers, and transformation logic
- reporting, dashboards, and stakeholder-facing analysis
Those are related, but they do not belong under the same service line forever. For the Data line, the clearer subservice name is Data Modeling because it keeps the build layer inside Data and leaves room for reporting and insights to sit under Strategy & Analytics.
Decision 5: Remove Activation & Attribution from core Data subservices
Activation & Attribution is real work Brainforge does, but it does not feel like a foundational Data subservice on the same level as Data Platform and Data Modeling.
For now:
- it should not be treated as a core Data subservice
- it may later live under Strategy & Analytics
- or it may remain an offering family / cross-line package rather than a permanent subservice
Linear labels vs vault folder slugs
Linear subservice labels use the canonical kebab-case slugs above (`data-platform`, `data-modeling`, …). Legacy issue labels `data-infrastructure`, `analytics-bi`, `metrics-kpis`, and `ai-infrastructure` should be relabeled when touched. Vault paths may keep older folder names until a folder migration; do not change folder paths just to match Linear — update standards when folders move.
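The legacy-to-canonical relabeling can be captured as a lookup table. A minimal sketch, with the mapping taken from this document and the function name illustrative:

```python
# Legacy Linear issue-label slugs → canonical slugs (per this document's taxonomy).
LEGACY_TO_CANONICAL = {
    "data-infrastructure": "data-platform",
    "analytics-bi": "data-modeling",
    "metrics-kpis": "measurement-kpis",
    "ai-infrastructure": "copilots-agents",
}

def canonical_label(slug: str) -> str:
    """Map a (possibly legacy) subservice label to its canonical slug.

    Canonical slugs pass through unchanged; only legacy slugs are rewritten.
    """
    return LEGACY_TO_CANONICAL.get(slug, slug)
```

An audit pass over issues can call this on each subservice label and flag any ticket where the stored slug differs from the canonical one.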
Decision 6: Reverse ETL as a Data subservice (proposal)
Status: Proposal from delivery (Service Lead input); needs Head of Delivery confirmation before the taxonomy is treated as locked.
What it covers: Ongoing work to move governed data from the warehouse to downstream systems (reverse ETL tools, native syncs, operational and marketing destinations—e.g. Census, Hightouch, Polytomic patterns). Clients often see this as retainer / maintenance (“keep our syncs healthy, add destinations, fix breaks”) rather than a standalone named project.
How it relates to other buckets:
- Data Platform — Still the umbrella for the warehouse, ingestion, transforms, and platform operations. Reverse ETL is the outbound slice from that platform.
- Activation / Edge-type engagements — May include initial reverse ETL setup; this subservice is the natural home for sustaining and iterative reverse ETL tickets on long-running retainers.
Folder / Linear: `knowledge/sales/services/data/reverse-etl/` · Linear subservice label slug `reverse-etl` (create in the workspace when running label cleanup).
Proposed Data Offers
These are the current Data offers proposed by Head of Delivery. The goal is to hand ownership of refinement, packaging, and artifact completion to the relevant Service Leads over time rather than keep offer definition centralized.
Ownership model
- Current status: proposed by Head of Delivery
- Next owner: Service Leads validate naming, merge overlaps, retire weak offers, and complete artifacts
- Artifact expectation: each kept offer should eventually have `offer.md`, `sop.md`, `implementation-plan.md`, and `linear-template.md`
Data Platform
| Proposed offer | Status | Notes |
|---|---|---|
| Full Data Platform | Proposed | End-to-end platform build or rebuild across ingestion, warehouse, transformation, orchestration, and operations. |
| Data Warehouse + dbt | Proposed | Warehouse and transformation foundation for clients who need the core stack in place quickly. |
| Data Platform Transition & Management | Proposed | Take over, stabilize, and run an existing platform while improving it over time. |
| DataOps Program | Proposed | Reliability, testing, observability, deployment workflows, governance, and operating discipline. |
| Source Integration & Ingestion Build | Proposed | Connect source systems and establish dependable ingestion into the platform. |
Data Modeling
| Proposed offer | Status | Notes |
|---|---|---|
| Omni Zero-to-One | Proposed | Net-new modeling and semantic-layer implementation around Omni. |
| Product Analytics Platform | Proposed | Build product and event-data models into trustworthy analytics-ready datasets. |
| Business Data Model / Mart Build | Proposed | Create marts and business-ready models for teams like finance, marketing, and ops. |
| Semantic Layer Implementation | Proposed | Define governed metrics, entities, dimensions, and self-serve access patterns. |
| Modeling Modernization | Proposed | Refactor and standardize legacy models, marts, and transformation logic. |
Reverse ETL
| Proposed offer | Status | Notes |
|---|---|---|
| Reverse ETL Ongoing / Retainer | Proposed | Sustain and extend warehouse→destination syncs (models exposed, new destinations, scheduling, monitoring, break-fix); typical T&M or retainer. Full offer / SOP / implementation-plan / linear-template artifacts TBD when packaged. |
Current Strategy & Analytics Naming Decisions
These notes capture the reasoning behind the current Strategy & Analytics taxonomy so future cleanup work does not need to reconstruct the logic from chat history.
Decision 1: Keep Data Strategy
Data Strategy remains the right umbrella for roadmap, architecture direction, tool selection, sequencing, and broader advisory work. It is distinct from platform implementation because it focuses on what should be built, in what order, and why.
Decision 2: Use Measurement & KPIs instead of Metrics & KPIs
Metrics & KPIs is close, but Measurement & KPIs is a better umbrella because it covers:
- metric definitions and dictionaries
- KPI frameworks
- measurement plans
- attribution logic
- instrumentation and measurement design decisions
This makes the subservice less of a static scorecard label and more of a real advisory and design practice.
Decision 3: Keep Reporting & Insights
Reporting & Insights is the correct home for dashboards, recurring reporting, stakeholder-facing analysis, and decision support. This is where the old reporting-heavy part of Analytics & BI should live after the Data split.
Decision 4: Remove Technical Due Diligence from core subservices
Technical Due Diligence is real work Brainforge does, but it behaves more like an offering or engagement type than a permanent peer subservice. It should remain available in the catalog, but not as one of the core Strategy & Analytics buckets.
The canonical Linear label for this subservice is `measurement-kpis` (legacy issue label: `metrics-kpis`). The vault folder may remain `metrics-kpis/` until renamed.
Proposed Strategy & Analytics Offers
These are the current Strategy & Analytics offers proposed by Head of Delivery. The goal is to hand ownership of refinement, packaging, and artifact completion to the relevant Service Leads over time rather than keep offer definition centralized.
Ownership model
- Current status: proposed by Head of Delivery
- Next owner: Service Leads validate naming, merge overlaps, retire weak offers, and complete artifacts
- Artifact expectation: each kept offer should eventually have `offer.md`, `sop.md`, `implementation-plan.md`, and `linear-template.md`
Data Strategy
| Proposed offer | Status | Notes |
|---|---|---|
| Data Strategy Sprint | Proposed | Short advisory engagement to align goals, architecture direction, and next-step priorities. |
| Data Roadmap | Proposed | Sequenced roadmap for platform, modeling, measurement, and reporting investments. |
| Architecture Advisory | Proposed | Ongoing or scoped guidance on technical direction and operating tradeoffs. |
| Tool Selection | Proposed | Structured evaluation and recommendation across warehouses, BI, reverse ETL, and related tooling. |
Measurement & KPIs
| Proposed offer | Status | Notes |
|---|---|---|
| Metrics Definition & Dictionary | Proposed | Standardize business logic, definitions, and ownership for core metrics. |
| KPI Framework | Proposed | Establish executive and team KPI structure aligned to business goals. |
| Measurement Plan | Proposed | Define how key outcomes will be measured across tools, teams, and workflows. |
| Attribution Design | Proposed | Design attribution logic and measurement methodology for channel and funnel analysis. |
Reporting & Insights
| Proposed offer | Status | Notes |
|---|---|---|
| Executive Dashboard Design | Proposed | Create stakeholder-facing dashboard structure and decision-ready reporting views. |
| Recurring Insights Retainer | Proposed | Ongoing reporting, analysis, and recommendation cadence for leadership teams. |
| Business Review Reporting | Proposed | Monthly or quarterly reporting packs for structured business reviews. |
| Decision Support Analysis | Proposed | Ad hoc or scoped analytical work to answer high-value business questions. |
Non-core but retained in catalog
| Offer family | Status | Notes |
|---|---|---|
| Technical Due Diligence | Proposed offering | Keep available as a strategy offer / engagement type without treating it as a permanent subservice. |
Current AI Naming Decisions
These notes capture the reasoning behind the current AI taxonomy so future cleanup work does not need to reconstruct the logic from chat history.
Decision 1: Keep Workflow Automation
Workflow Automation is clearly a real service area for Brainforge today. It is already visible in delivery planning and client-facing assets, and it matches a large share of near-term AI value creation: triggered workflows, approvals, routing, handoffs, and human-in-the-loop execution.
Decision 2: Keep Knowledge Engineering
Knowledge Engineering remains a distinct and important bucket because much of Brainforge’s AI work depends on context quality, retrieval design, structured grounding, and reusable knowledge systems. This includes internal knowledge hubs, context graphs, chat-over-docs/data experiences, and agent memory design.
Decision 3: Use Copilots & Agents instead of AI Infrastructure
AI Infrastructure is too enabling-layer-focused for how Brainforge actually packages the work and for where the platform is heading. In practice, the team repeatedly talks about and builds:
- copilots
- meeting agents
- lead research agents
- voice agents
- embedded AI assistants
Those are closer to the client-visible product and delivery surface than the underlying runtime or deployment plumbing. Copilots & Agents is the better umbrella for the future-facing AI bucket.
The canonical Linear label for this subservice is `copilots-agents` (legacy issue label: `ai-infrastructure`). The vault folder may remain `ai-infrastructure/` until renamed.
Proposed AI Offers
These are the current AI offers proposed by Head of Delivery. The goal is to hand ownership of refinement, packaging, and artifact completion to the relevant Service Leads over time rather than keep offer definition centralized.
Ownership model
- Current status: proposed by Head of Delivery
- Next owner: Service Leads validate naming, merge overlaps, retire weak offers, and complete artifacts
- Artifact expectation: each kept offer should eventually have `offer.md`, `sop.md`, `implementation-plan.md`, and `linear-template.md`
Workflow Automation
| Proposed offer | Status | Notes |
|---|---|---|
| AI Growth Sprint Workshop | Active (TOFU) | Lighter-weight front-door workshop for “not now” AI buyers; outputs prioritized use cases plus a 30-60-90 action plan that routes qualified teams into deeper implementation work. |
| Insurance Lead Processor | Proposed | Automate intake, enrichment, and submission-ready lead workflows in insurance and similar ops-heavy environments. |
| Intake Optimizer | Proposed | Reduce manual triage and improve routing, summarization, and intake throughput. |
| Workflow Builder | Proposed | Design and deploy custom AI-assisted workflows across Slack, forms, CRM, and operations tools. |
Knowledge Engineering
| Proposed offer | Status | Notes |
|---|---|---|
| Custom Context Graph | Proposed | Build a structured grounding layer so agents and copilots operate with the right business context. |
| Brainforge OS Setup | Proposed | Stand up an internal AI knowledge and operating environment for teams using Brainforge-style tooling. |
| MCP Development | Proposed | Build or configure MCP-based integrations that give AI systems controlled access to tools and data. |
| Internal Knowledge Hub | Proposed | Deliver a chat-over-knowledge experience for SOPs, docs, tickets, and operational memory. |
Copilots & Agents
| Proposed offer | Status | Notes |
|---|---|---|
| AI Copilot Integration Sprint | Proposed | Ship a focused copilot or assistant around one high-value workflow or user job. |
| Embedded Assistant Build | Proposed | Add an operator-facing AI assistant into an internal product, workflow, or application. |
| Voice Agent Implementation | Proposed | Design and deploy practical voice-agent workflows where they are a fit. |
| Custom Deployment & Hosting | Proposed | Productionize and host AI copilots and agents with the required integrations and runtime support. |
Four Artifacts Per Offering
Each {offering-slug}/ folder should contain all four artifacts to be a complete, composable package:
| Artifact | Purpose | Audience |
|---|---|---|
| `offer.md` | Pitch: scope, pricing, proof points | Sales, CSO |
| `sop.md` | How delivery runs it: phases, DRIs, handoffs, risks | Delivery |
| `implementation-plan.md` | Week-by-week phases, milestones, dependencies (offering-level). Per-client SOW ↔ plan: `sow-project-plan-template.md`. | CSO + Delivery |
| `linear-template.md` | Starter epics + tickets by phase (“clone this”) | Delivery Lead |
Use the templates in `templates/` to create new artifacts. An offering is considered complete when all four files exist.
Per client (not inside `{offering-slug}/`): After SOW sign-off and before cutting tickets, duplicate `knowledge/delivery/03-project-lifecycle/sow-project-plan-template.md` (e.g. into `knowledge/clients/{client}/resources/` or Notion). It aligns initiatives, projects, client-visible milestones, technical approach, and sign-offs.
Naming Conventions
| Layer | Convention | Example |
|---|---|---|
| Service Line | Short, title case | AI / Data / Strategy & Analytics |
| Subservice | 2–3 word descriptor, title case | Workflow Automation / Data Platform |
| Offering | Outcome-first noun phrase, title case | Product Analytics Platform / dbt Audit |
| Phase | Phase {N} — {Name} | Phase 0 — Audit / Phase 1 — Pilot |
| Folder slug | kebab-case | product-analytics-platform / dbt-audit |
| SOW file | sow-{line-slug}-{offering-slug}-{client}.md | sow-data-dbt-audit-cta.md |
| Linear canonical project | {Line} — {Offering} (Canonical) | Data — dbt Audit (Canonical) |
| Delivery model tag (in offer.md) | lowercase | fixed-scope / T&M / retainer / hybrid |
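The slug and SOW-file conventions above lend themselves to a lint check. A sketch under the stated conventions; the regexes are one interpretation of "kebab-case" and `sow-{line-slug}-{offering-slug}-{client}.md`, not an existing tool:

```python
import re

# kebab-case folder slug, e.g. "product-analytics-platform"
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
# SOW filename, e.g. "sow-data-dbt-audit-cta.md"
SOW_RE = re.compile(r"^sow-[a-z0-9]+(-[a-z0-9]+)+\.md$")

def is_valid_slug(slug: str) -> bool:
    """True for kebab-case folder slugs per the naming conventions table."""
    return bool(SLUG_RE.match(slug))

def is_valid_sow_filename(name: str) -> bool:
    """True for files following sow-{line-slug}-{offering-slug}-{client}.md."""
    return bool(SOW_RE.match(name))
```

Running checks like these over `knowledge/sales/services/` during cleanup would surface folders and SOW files that drifted from the conventions.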
Offering Variants (Verticals)
Vertical-specific variants (Insurance, Health, E-commerce) are metadata, not a hierarchy level. Add a variants/ subfolder or a ## Variants section in offer.md for industry-specific pitch or scope adjustments. The canonical folder structure stays the same across all verticals.
Adding a new subservice
A subservice is a long-lived grouping under exactly one service line (navigation + taxonomy), not a one-off client project. New subservices need Head of Delivery alignment so Operating, Linear labels, and GTM stay consistent.
- Name it — 2–3 words, title case (e.g. Measurement & KPIs). Pick a folder slug: kebab-case, stable (e.g. `metrics-kpis`). Subservices have no Linear object; the optional Linear label for issues often reuses the same kebab slug.
- Update this file — Add the subservice to the Three-Line Service Hierarchy table above; add a row under Folder Structure (`{line}/{subservice-slug}/`); add a short Naming decision subsection if the choice is non-obvious (same style as the Data / Strategy & Analytics sections).
- Create the folder — `knowledge/sales/services/{line}/{subservice-slug}/` (optional `README.md` describing scope and how offerings nest).
- Move or add offerings — Existing work that was under a legacy bucket (e.g. `assessment-audit/`, `activation-attribution/`) can stay until reparented; when you add offers, follow Adding a New Offering inside the new subservice folder.
- Linear + agent taxonomy — Update `.cursor/skills/linear-service-label-audit/references/taxonomy.md` (subservice slug table). Add the label in the Linear workspace when running cleanup; see `knowledge/standards/04-prompts/tickets/linear-cleanup-taxonomy.md` §4.
- Delivery ops hub — Update Subservices in `knowledge/delivery/service-lines/{ai|data|strategy-analytics}/README.md` for the affected line.
- Operating — When allocations use subservice names, update Operating metadata with Ops/HoD once the name is canonical.
Adding a New Offering
- Identify the correct `{service-line}/{subservice}/` folder
- Create `{offering-slug}/` inside it
- Copy all four templates from `templates/` and fill them in
- Add a row to the Linear canonical project for the service line
- Update `standards/02-writing/PLAYBOOK_INDEX.md` if a matching playbook exists
How This Connects to the Rest of GTM
| You need… | Look here |
|---|---|
| Pricing, rate card, quick quote | gtm/pricing/ — RATE_CARD, SERVICE_CATALOG, PRICING_CALCULATOR |
| Deep service definition (agent memory) | gtm/agents/memory/services.md |
| Sales assets (decks, one-pagers) | gtm/agents/service-assets/ |
| Demo transcripts / campaign demos | gtm/campaign-launch/demos/ |
| SOWs and deal structure | gtm/sales/sow-framework/ |
| Playbook for running the engagement | standards/03-knowledge/{domain}/ |
Maintained by: Delivery + GTM
Review: At the end of each engagement retro (DRI updates the offering’s SOP and linear-template); when pricing or scope changes (DRI updates `offer.md`).