Brainforge Services Library

Purpose: Centralized catalog of all Brainforge services for agent reference and content creation
Status: Active
Last Updated: February 5, 2026


📋 Services Overview

This document catalogs all services currently offered by Brainforge. Use this for:

  • Content creation (service-focused posts, proposals, etc.)
  • Agent knowledge base
  • Service discovery and recommendations
  • Consistent service naming across all materials

1. Service

Data Audit

dbt Audit

Tech Anchors: [Snowflake, dbt, Fivetran, Omni/Rill, M365, Slack]

Related Scope Modules: [relation to modules DB]

Playbook Link: [ ]
Case Link: [ ]
Last Reviewed: [Date]


Who it’s for:

  • Role(s):

    • CTO with a data engineering background
      • The person most likely to bite is a data/analytics engineer wearing a hat that lets them make the decision.
    • Non-technical stakeholder: a very frustrated business stakeholder
      • Repeats things they’ve heard in meetings
      • Has asked for data before, but the request is still running
      • Waits forever to get an answer from the data
    • Someone who works with the company’s data daily
    • As we get into larger enterprise clients, it’s more likely they already have dbt set up
      • We are dbt partners, so if we break into a sector on the sales side, there’s always an opportunity to call dbt and say, “Hey, we’re running a dbt audit service.”
      • People at big companies often put dbt into job descriptions when hiring
      • Many prospects may not have dbt at all; the audit is a way to say “you need this tool.” This is one of several types of audits we’ll be running
        • Discussed with the service leads: these are all services we deliver that we haven’t mapped out yet, e.g., a Snowflake Audit
  • Industry lens: ?? [e-commerce | CPG | B2B SaaS]

  • Preconditions:

    • dbt in place → the audit can then scale to a general audit of the infrastructure, and we can then help them act on it
    • Expectation gap: dbt was heavily hyped 3-5 years ago and a lot of people expected it to change everything; the audit is how we help them get closer to those expectations

Business problems:

  • [Problem 1 in buyer words]
  • [Problem 2]
  • [Problem 3]

Outcomes in 30 days (tie to §1):

  • Time saved: [target + measurement method, e.g., "↓ research time from 45m → 10m per lead via time study"]
  • Revenue increase: [target + method, e.g., "↑ qualified meetings +15% from lead research SLAs"]
  • Profit increase: [target + method, e.g., "↓ cost per ticket by X with deflection"]
  • Employee efficiency: [target + method, e.g., "↑ cases per agent from A → B via handle-time logs"]

Measurement notes:


Scope & Configuration

Default bundle (included):

  • Depends on what the client needs:
    • Audit - if you just need a roadmap you can present, it can be done in 3-4 weeks
    • Implementation - hard to scope; for some clients it could be a month, for others three months (e.g., revenue and sales data → inventory data)
    • Priority data mart - where the data is fetched from (revenue, sales, and product data marts; buyer sentiment: “Our revenue has been in shambles for the last two years.”)
  • [Module A] — [what it delivers]
  • [Module B] — [what it delivers]
  • [Module C] — [what it delivers]

Optional add-ons:

  • [Module D] — [when to use]
  • [Module E] — [when to use]

Prereqs:

  • Data & access: Git provider, dbt
  • [sources, warehouse, SSO, APIs]
  • Environment: [client cloud vs Brainforge, environments needed]
  • SMEs: Demi, Awaish, etc.
    • For a lot of our clients, we’re the first ones to write dbt.

Deliverables:

  • Artifacts: [tracking plan / schemas / prompts / eval sets]
  • Automations/Services: [agents, workflows, integrations]
  • Dashboards/Reports: [KPI board, adoption report]
  • Runbooks: [ops, rollback, admin console]
  • Handoffs: [training, office hours, video walk-throughs]

Success metrics:

  • Business KPIs: [2–3 tied to outcomes]
    • Runtime → [target]
    • Documentation → coverage on dbt sources
  • Execution KPIs: [latency, match rate, test coverage, incident MTTR]
  • Adoption KPIs: [weekly active users, task coverage, deflection rate]
    • Adoption is harder to see with a one-person data team
    • Signal to watch: the analyst much prefers to use the new setup
    • Ease of using the new infrastructure is hard to quantify directly

Timeline & Plan:

Phase 0, Audit (optional): [1 week, outputs]

Phase 1, Pilot: [2–4 weeks] milestones, gates, demo dates

Phase 2, Scale: [4–8 weeks] rollout plan, ops model


Commercials:

  • Model: [Fixed pilot | T&M | Milestones | Outcome-based]
  • Anchor price/range: [Y pilot], inclusions/exclusions
  • Assumptions: [what pricing assumes, e.g., data in Snowflake]

Risks & mitigations:

  • Access to GitHub, dbt, and the git provider is delayed
    • → [Mitigation]
  • Concurrent data requests during the audit → would need another team working on those in parallel
  • Sheer volume of the client’s infrastructure → some clients have very large data stacks and large models → early recognition is key
    • You need to be able to quickly recognize that a data stack has, say, 4,000 models, each potentially ~1,000 lines (a previous company failed precisely because it couldn’t do this)
    • Recognizing and communicating this early is the mitigation; getting a rough model count is a good start
  • [Risk] → [Mitigation]
  • [Risk] → [Mitigation]
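The "recognize stack size early" mitigation can be sketched as a one-off scoping check. This is a hypothetical helper (names and structure are ours, not a dbt tool): it counts `.sql` model files under a dbt project's `models/` directory before committing to an audit timeline.

```typescript
// Hypothetical scoping helper: count .sql model files in a dbt project's
// models/ directory (recursively) before quoting an audit. Pure file
// walking; dbt itself is never invoked.
import { readdirSync } from "node:fs";
import { join } from "node:path";

function countSqlModels(dir: string): number {
  let count = 0;
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      count += countSqlModels(full); // recurse into staging/, marts/, etc.
    } else if (entry.name.endsWith(".sql")) {
      count++;
    }
  }
  return count;
}
```

Run against the client's `models/` folder on day one; a count in the thousands changes the audit scope conversation immediately.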

Proof:

  • Case snippet: [2–3 lines narrative, no unverified numbers]
  • Demo links: [UI clip], [KPI board], [eval report]
  • References (if allowed): [contact or anonymized quote]

Delivery notes (internal):

  • Squad & roles, RACI, ceremonies, tooling, Definition of Done.
  • Compliance flags: [PII, SOC 2, HIPAA].
  • Change log / version: [v1.0], date.

Other audit and migration services (to be mapped):

  • Snowflake Audit
  • Tooling Consolidation
  • BI Migration
  • Reporting Source Migration (pipelines and models)
  • AI-Assisted Analysis (Amber)
    • Competitive Analysis

Edge-to-Activation

One-liner: We help you get 95%+ reporting accuracy by capturing signals before client/browser tracking can fail.

Tech Anchors: [Cloudflare Workers, Edge Computing, GA4, Segment, Snowflake, dbt, Omni/Rill, M365, Slack]

Related Scope Modules: [relation to modules DB]

Playbook Link: [ ]
Case Link: [Eden case - 17% attribution recovery]
Last Reviewed: [Date]

Domain / Service Line: Data Platform & Analytics


Who it’s for:

  • Role(s):

    • Head of Marketing
    • CMO
    • Head of Growth
  • Industry lens: E-commerce, CPG, B2B SaaS (digital acquisition–driven businesses)

  • Preconditions:

    • Meaningful paid media or multi-channel acquisition spend
    • Revenue tracked in accounting/CRM but inconsistently reflected in marketing tools
    • Increasing pressure around attribution accuracy, CAC, and reporting confidence

Business problems:

  • “We don’t trust our numbers, but we’re still being judged on them.”
  • 15–30% of customers and conversions are invisible to client-side tracking, creating blind spots in attribution and decision-making.
  • Marketing, finance, and product operate from different versions of the truth, leading to internal friction and conservative growth decisions.
  • Feeding algorithms junk data → a 17% margin of error can mean a 30% variance in the information your campaign algorithms receive
  • Stopping effective campaigns because of data inaccuracy. When you miss out on conversions, successful campaigns look like failures. Without edge layer data, you end up pausing campaigns you should’ve 10x’d.
  • Wasted dollars on remarketing. You end up spending money to send ads to people that have already seen them. Personalization is lost.

ICP Pain (buyer language):

Have a hard time:

  • Scaling ad spend without fear
  • Killing underperforming channels confidently
  • Defending budget requests
  • Explaining performance to leadership

Outcomes in 30 days (tie to §1):

  • Time saved:

    ↓ time spent reconciling GA4, Segment, ad platforms, and accounting data via automated Edge vs. client-layer comparison reports.

  • Revenue clarity:

    Identification and attribution of previously unattributed customers (e.g., Eden recovered attribution visibility for ~17% of customers).

  • Profit protection:

    Reduced overpayment to partners and channels driven by inaccurate attribution (Eden corrected six-figure misallocations).

  • Employee efficiency:

    GTM and analytics teams shift from manual reconciliation and debate to execution and optimization.

Measurement notes:

Discrepancy reports comparing client-layer vs. Edge-layer data; attribution variance tracked week-over-week.
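The measurement note above can be sketched as a small metric. This is an illustrative sketch (field and function names are ours, not a delivered artifact): signal loss is the share of edge-observed sessions the client layer misses, tracked week over week.

```typescript
// Illustrative discrepancy metric for client-layer vs. edge-layer reports.
interface WeeklyCounts {
  week: string;           // ISO week label, e.g. "2026-W05"
  edgeSessions: number;   // sessions observed at the edge
  clientSessions: number; // sessions reported by GA4/Segment
}

// Signal loss: fraction of edge-observed sessions invisible client-side.
function signalLoss(c: WeeklyCounts): number {
  if (c.edgeSessions === 0) return 0;
  return (c.edgeSessions - c.clientSessions) / c.edgeSessions;
}

// Week-over-week change in signal loss, for the variance trend line.
function wowVariance(prev: WeeklyCounts, curr: WeeklyCounts): number {
  return signalLoss(curr) - signalLoss(prev);
}
```

A rising `wowVariance` is an early warning that client-side tracking is degrading (new ad-blocker releases, consent changes, script breakage).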

GTM Outcomes:

  • Lower CAC w/out increasing spend by capturing existing conversions
  • More confident decisions scaling campaigns by fixing attribution issues
  • Greater unity between departments by creating a single source of truth

Successes:

  • 95% reporting accuracy with Eden
  • Eden recovered ~17% of customers with no prior attribution visibility ($6.2M MRR company)
  • Corrected six-figure misallocations to partners

Scope & Configuration

Default bundle (included):

  • Signal Recovery (Edge Layer) — Capture high-fidelity visit and conversion signals before browser/client tracking can fail.
  • Identification & Session Mapping — Accurate session counts and source validation independent of cookies.
  • Analytics Bridging — Connect Edge data to GA4, Segment, and other client tools for attribution clarity.

Service Areas:

  • Edge-Level Signal Capture
    • What we do (technical): Deploy an Edge Worker to capture HTTP request metadata (headers, URLs, timestamps, user agents) before client-side execution.
    • Business outcome: Establishes a high-fidelity baseline of real traffic and conversions unaffected by ad blockers, browser privacy controls, or script failures.
  • Edge Identification & Sessionization
    • What we do (technical): Generate Edge-level session logic independent of browser cookies.
    • Business outcome: Produces more accurate visit and session counts and exposes gaps in client-side tracking.
  • Client ID Bridging
    • What we do (technical): Map Edge sessions to existing analytics and CDP identifiers when available (collect client-layer identifiers: _ga, Segment anonymous ID, Mixpanel distinct ID, etc.).
    • Business outcome: Reliably connects traffic to on-site behavior and conversions, improving attribution accuracy without replacing existing tools.
  • Activation & Tool Orchestration
    • What we do (technical): Orchestration (GTM, Tealium): plug our Edge tables into your existing stack.
    • Business outcome: Creates a trusted source of truth that supports confident marketing decisions, scaling, and optimization.
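The edge-level capture step above can be sketched in a few lines. This is a minimal sketch, not our production schema; the field names and the `captureEdgeSignal` helper are illustrative. In a Cloudflare Worker, this logic would run inside the `fetch` handler before the request is passed through to the origin.

```typescript
// Illustrative edge-hit record built from raw request metadata, before
// any client-side JavaScript has a chance to run (or fail).
interface EdgeHit {
  ts: string;        // capture timestamp (ISO 8601)
  url: string;       // full request URL, including UTM/click-ID params
  referer: string;   // Referer header, "" if absent
  userAgent: string; // User-Agent header, "" if absent
}

// Headers are assumed pre-normalized to lowercase keys.
function captureEdgeSignal(
  url: string,
  headers: Record<string, string>,
  now: Date
): EdgeHit {
  return {
    ts: now.toISOString(),
    url,
    referer: headers["referer"] ?? "",
    userAgent: headers["user-agent"] ?? "",
  };
}
```

In a Worker, the handler would build the hit, write it to a queue or analytics sink, and return `fetch(request)` unchanged, so the page itself is unaffected.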

Optional add-ons:

  • Compliance Controls (PII Cleaning) — When operating in regulated or privacy-sensitive environments. Remove PII from requests, controlling compliance very closely.
  • Bot Filtering & Traffic Quality — When paid traffic quality or AI training data integrity is a concern. Identify & filter bots before they load the website.

Services:

  1. Signal Recovery
  2. Identification & Session Mapping
  3. Analytics Bridging
  4. Compliance Controls
  5. Bot Filtering
  6. Edge-Side Personalization
  7. Client-Layer Tooling
  8. Insight Enablement

Levels of Implementation:

  • Level 1 → Validation layer: Raw data - no business logic. People from these campaigns loaded these pages X number of times. This could be the demo.
  • Level 2 → Confirmation of total number of people that arrived by which source. People arriving to the website + people completing conversions. Ability to attribute specific actions. We can demonstrate the gap in trackability for different traffic sources.
  • Level 3 → Personalization: see into the client layer. We start sharing identifiers that enable GTM, GA4, Segment, Amplitude, and Northbeam; link the edge layer to Segment data; cross-reference the edge with everything else in the data warehouse (transactions and accounting data).
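As a concrete flavor of the Level 3 identifier bridging, here is a minimal sketch of extracting the GA client ID from the `_ga` cookie (the client ID is the last two dot-separated fields of a value like `GA1.2.1194554163.1600000000`); the function name is ours:

```typescript
// Extract the GA client ID from a `_ga` cookie value so an edge session
// can be joined to GA4 data. Returns null for malformed values.
function gaClientId(gaCookie: string): string | null {
  const parts = gaCookie.split(".");
  if (parts.length < 4) return null;
  return parts.slice(-2).join("."); // e.g. "1194554163.1600000000"
}
```

The same pattern applies to other client-layer identifiers (Segment anonymous ID, Mixpanel distinct ID), each with its own cookie format.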

Prereqs:

  • Data & access:

    Access to analytics tools (GA4, Segment, etc.), ad platforms, and accounting or transaction data.

  • Environment:

    Client cloud environment with Edge deployment (e.g., Cloudflare Workers) or equivalent.

  • SMEs:

    Marketing owner (attribution), analytics/engineering contact, and finance stakeholder for validation.


Deliverables:

  • Artifacts:

    Attribution validation report, discrepancy analysis, tracking/measurement plan.

  • Automations / Services:

    Edge data pipelines, identifier mapping logic, client-layer integrations.

  • Dashboards / Reports:

    Client vs. Edge comparison views, attribution gap reports, GTM KPI summaries.

  • Runbooks:

    Data validation process, escalation paths, and change management guidelines.

  • Handoffs:

    Walkthrough of findings, enablement session for GTM and analytics teams.


Success metrics:

  • Business KPIs:
    • Attribution coverage (% of revenue with known source)
    • CAC accuracy / variance reduction
    • Marketing-influenced revenue confidence
  • Execution KPIs:
    • Match rate between Edge and client-layer identifiers
    • Latency and reliability of Edge capture
  • Adoption KPIs:
    • Usage of discrepancy reports in GTM reviews
    • Reduction in internal attribution disputes
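The match-rate execution KPI above has a simple definition worth pinning down. A minimal sketch (names are illustrative): the share of edge sessions successfully joined to a client-layer identifier.

```typescript
// Match rate: fraction of edge session IDs that were bridged to a
// client-layer identifier. An empty edge set yields 0 by convention.
function matchRate(
  edgeSessionIds: Set<string>,
  bridgedSessionIds: Set<string>
): number {
  if (edgeSessionIds.size === 0) return 0;
  let matched = 0;
  for (const id of edgeSessionIds) {
    if (bridgedSessionIds.has(id)) matched++;
  }
  return matched / edgeSessionIds.size;
}
```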

Timeline & Plan:

  • Phase 0 — Audit (optional):

    1 week

    Deploy Edge in parallel, generate discrepancy report, quantify signal loss.

  • Phase 1 — Pilot:

    2–4 weeks

    Validate sources, connect client identifiers, demonstrate attribution recovery.

  • Phase 2 — Scale:

    4–8 weeks

    Roll out across properties/channels, integrate into reporting and decision workflows.


Commercials:

  • Model:

    Fixed-price audit → scoped pilot → ongoing engagement

  • Anchor price/range:

    Audit as low-cost wedge; pilot priced based on traffic volume and integrations.

  • Assumptions:

    Existing analytics stack in place; warehouse optional but preferred.


Risks & mitigations:

  • Risk: Client expects Edge to replace product analytics

    Mitigation: Position clearly as attribution and validation layer, not behavioral analytics.

  • Risk: Legal/privacy concerns

    Mitigation: Edge-side PII stripping and consent-aware configuration.

  • Risk: Misinterpretation of recovered signal

    Mitigation: Guided readout and documented assumptions in reports.

  • Risk: Volume of infrastructure → some have really large data and large models

    Mitigation: Early recognition and communication. Ability to quickly recognize data stack scale (e.g., 4,000 models). Get range of how many models early in process.


Proof:

  • Case snippet:

    Eden: roughly $250,000 misallocated over the course of a year due to inaccurate attribution data. They had 26,000 orders with 4,100 new customers, and 17% (697 customers) were unattributed.

  • Demo links:

    Edge vs. client-layer discrepancy report (demo), attribution gap visualization.

  • References:

    Available upon request (or anonymized).


Delivery notes (internal):

  • Squad: GTM lead, data/analytics engineer, platform owner
  • Compliance flags: PII handling, consent management
  • Version: v1.0 — [Date]

Key Technical Notes:

  • Edge is an attribution tool, not a product analytics tool
  • Edge capture yields the most accurate raw data available at the moment of the request
  • Client layer typically misses 15-17% (Eden example - probably more for other products)
  • We work with anything in the client layer
  • We create a system where we can cross reference the edge with all other things in the data warehouse (Segment anonymous ID linked, transactions and accounting data via Bask webhook-fed database)

Insurance Workflow Automation (Contextual AI)

One-liner: Transform cold leads into structured risk profiles and submission-ready drafts in minutes, not hours.

Tech Anchors: [Contextual AI, Google Drive, Box, SharePoint, Content Management Systems]

Related Scope Modules: [relation to modules DB]

Playbook Link: [ ]
Case Link: [ ]
Last Reviewed: [Date]

Domain / Service Line: AI-Powered Workflow Automation


Who it’s for:

  • Role(s):

    • Primary Buyer: VP/Director of Operations, COO (budget authority $10k+/month)
    • Secondary Buyer: Head of Underwriting, Managing Partner/Principal
    • Users: Insurance brokers evaluating early-stage leads, brokerage operations teams, underwriting teams
  • Industry lens: Commercial insurance brokerage, specialty insurance (cyber, E&O, D&O, professional liability), risk management

  • Company Characteristics (ICP):

    • Revenue: $100M annual revenue (ideal: $50M)
    • Size: 10-200 employees (ideal: 20-100 employees)
    • Lead Volume: 50+ leads/month (minimum), 75-150 leads/month (ideal), 100-300 leads/month (excellent)
    • Type: Commercial insurance focus (B2B), specialty or general commercial brokerages
    • Stage: Growth stage, scaling operations, competitive pressure
  • Preconditions:

    • Brokerage receiving 50+ cold leads or early inquiries per month
    • Currently spending 2+ hours per lead on manual extraction and risk profile creation
    • Need to evaluate lead quality and risk profile quickly
    • Working with partial documentation (certificates, policies, transcripts)
    • Desire to move from zero-to-one in lead qualification faster
    • Budget authority for $10k/month recurring spend
    • Documents available in digital format (PDFs, Word docs, emails)
    • Technology-forward culture, willing to adopt automation
  • Qualification Criteria:

    • Must-Have: 50+ leads/month, budget authority ($10k/month), commercial insurance focus, 2+ hours per lead currently, digital documents
    • Nice-to-Have: 100+ leads/month, multiple brokers/team members, existing CMS integration, growth stage
    • Not a Fit: <30 leads/month, personal lines only, no budget authority, completely paper-based
  • Target Segments (by priority):

    1. Tier 1: Mid-market commercial brokerages (20-100 employees, $50M revenue, 50-150 leads/month)
    2. Tier 2: Specialty insurance brokerages (10-50 employees, $30M revenue, 30-80 leads/month)
    3. Tier 3: Growing regional brokerages (50-200 employees, $100M revenue, 100-300 leads/month)

See Also: insurance-workflow-icp.md for detailed ICP profile, qualification criteria, ROI calculations, and decision maker profiles.


Business problems:

  • “I rarely get clean workflows or documentation when we have an intake.”
  • Cold leads arrive with incomplete information, making it difficult to quickly assess if a lead is worth following
  • Manual extraction of risk categories, coverage details, and exposure points from multiple documents is time-consuming and error-prone
  • Drafting submission emails and risk profiles requires synthesizing information across policies, certificates, and conversation transcripts
  • Without structured risk profiles, brokers struggle to identify key risks, exposures, and gaps in coverage
  • Missing information isn’t clearly flagged, leading to incomplete assessments and wasted time on unqualified leads

ICP Pain (buyer language):

Have a hard time:

  • Quickly determining if a cold lead is worth pursuing
  • Extracting structured insights from messy, incomplete documentation
  • Creating credible submission drafts without spending hours on manual work
  • Identifying coverage gaps and risk exposures when information is scattered
  • Moving from initial inquiry to market-ready submission efficiently

Outcomes in 30 days (tie to §1):

  • Time saved:

    ↓ time spent manually extracting risk information and drafting submission emails from hours to minutes per lead via automated document analysis and email generation.

  • Revenue clarity:

    Faster lead qualification enables brokers to focus on qualified opportunities, reducing time spent on unqualified leads and accelerating pipeline velocity.

  • Profit protection:

    Reduced time-to-submission means faster time-to-revenue for qualified leads, while early identification of unqualified leads prevents wasted effort.

  • Employee efficiency:

    Brokers shift from manual document review and email drafting to strategic relationship building and risk assessment, with AI handling the structured extraction and initial drafting.

Measurement notes:

Time-to-risk-profile creation tracked per lead; citation accuracy verified against source documents; email draft quality measured by broker edit time.

GTM Outcomes:

  • Faster lead qualification (zero-to-one in minutes vs. hours)
  • More consistent risk assessment with explicit source citations
  • Higher quality submissions with structured risk profiles
  • Ability to work with partial information without losing insights

Successes:

  • Transforms cold leads with partial documentation into structured risk profiles
  • Extracts and cites specific information from policies, certificates, and transcripts
  • Generates submission-ready email drafts grounded in actual documentation
  • Identifies gaps and missing information explicitly

Scope & Configuration

Default bundle (included):

  • Lead Coverage & Risk Profile Creation — Extract key risks, exposures, and coverage details from uploaded documents with explicit source citations.
  • Document Analysis & Extraction — Process policies, certificates of insurance, service agreements, and conversation transcripts to surface structured insights.
  • Gap Identification — Flag missing information and document sources clearly, enabling informed decisions about lead qualification.
  • Email Draft Generation — Create submission-ready email drafts grounded in actual documentation, following broker-specific templates and preferences.

Service Areas:

  • Risk Profile Extraction
    • What we do (technical): Analyze uploaded documents (policies, certificates, transcripts) to extract major risk categories, occupational security data risks, client-related risks, growth considerations, and potential concerns, with explicit source citations.
    • Business outcome: Creates structured risk profiles that help brokers quickly assess lead quality and identify key exposures without manual document review.
  • Citation & Source Tracking
    • What we do (technical): Every insight includes specific citations showing exactly where information comes from (page numbers, document sections, transcript timestamps).
    • Business outcome: Enables brokers to verify information accuracy and understand data provenance, building trust in AI-generated insights.
  • Gap Analysis
    • What we do (technical): Identify missing information and explicitly call out which documents are needed but not provided, while still surfacing available insights.
    • Business outcome: Helps brokers make informed decisions about lead qualification even with incomplete documentation.
  • Email Draft Generation
    • What we do (technical): Generate submission-ready email drafts based on extracted risk profiles, following broker-specific templates and communication styles stored in the system.
    • Business outcome: Saves hours of manual drafting while ensuring emails are grounded in actual documentation and broker preferences.
  • Multi-Document Synthesis
    • What we do (technical): Cross-reference information across multiple document types (policies, certificates, transcripts, service agreements) to create comprehensive risk assessments.
    • Business outcome: Provides a holistic view of the client risk profile even when information is scattered across documents.
  • Underwriting Profile Creation
    • What we do (technical): Generate detailed underwriting risk and coverage profile tables with quotes, coverage limits, special notes, considerations, and recommendations.
    • Business outcome: Creates market-ready submission artifacts that wholesalers can use immediately, accelerating time-to-market.
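The citation-tracked output described above can be pictured as a data shape. This is an illustrative structure, not the Contextual AI schema; all type and field names are ours. The point is that every insight carries provenance and gaps are explicit.

```typescript
// Illustrative (hypothetical) shape for a citation-tracked risk profile.
interface Citation {
  document: string; // e.g. "cyber-policy.pdf"
  location: string; // page, section, or transcript timestamp
}

interface RiskInsight {
  category: string; // e.g. "Occupational security data risk"
  finding: string;
  citations: Citation[]; // an insight with no citations should fail review
}

interface RiskProfile {
  lead: string;
  insights: RiskInsight[];
  missingDocuments: string[]; // explicit gap list
}

// Broker-facing validity check: no insight without a source.
function allInsightsCited(p: RiskProfile): boolean {
  return p.insights.every((i) => i.citations.length > 0);
}
```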

Optional add-ons:

  • Custom Template Integration — When brokers have specific email templates, submission formats, or risk assessment frameworks they want the system to follow.
  • CMS Integration — When brokers want to connect Contextual AI to Google Drive, Box, SharePoint, or other content management systems for automated document ingestion.
  • Workflow Automation — When brokers want to automate the entire zero-to-one process from lead intake to submission-ready artifacts.

Levels of Implementation:

  • Level 1 → Basic risk profile extraction: Extract key risk categories and exposures from uploaded documents with citations. This could be the demo.
  • Level 2 → Comprehensive analysis: Full risk profile plus gap identification plus email draft generation. Ability to work with partial information and explicitly flag what’s missing.
  • Level 3 → Full workflow automation: Multi-document synthesis, underwriting profile creation, submission-ready artifacts, CMS integration, and broker-specific template learning.
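The gap-identification behavior in Level 2 reduces to a simple comparison. A minimal sketch, assuming an illustrative required-document list (not a real underwriting checklist):

```typescript
// Hypothetical required-document list for a submission.
const REQUIRED_DOCS = ["policy", "certificate_of_insurance", "service_agreement"] as const;

// Return the document types a lead still needs before submission.
function findDocumentGaps(
  provided: string[],
  required: readonly string[] = REQUIRED_DOCS
): string[] {
  const have = new Set(provided);
  return required.filter((d) => !have.has(d));
}
```

The returned list maps directly to the "missing documents" callout surfaced to the broker alongside whatever insights were extractable.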

Prereqs:

  • Data & access:

    Access to lead documentation (policies, certificates, transcripts, service agreements) either via upload or CMS integration (Google Drive, Box, SharePoint).

  • Environment:

    Contextual AI platform access and configuration for broker-specific templates and preferences.

  • SMEs:

    Insurance broker or operations lead to provide templates, preferences, and validation of extracted insights.


Deliverables:

  • Artifacts:

    Structured risk profiles, coverage analysis reports, gap identification documents, submission email drafts, underwriting profile tables.

  • Automations / Services:

    Document analysis workflows, email generation templates, citation tracking system, CMS integration (if applicable).

  • Dashboards / Reports:

    Lead qualification dashboard, risk profile summaries, gap analysis reports.

  • Runbooks:

    Document upload process, template customization guide, citation verification process.

  • Handoffs:

    Training on Contextual AI platform, template customization session, workflow optimization guidance.


Success metrics:

  • Business KPIs:
    • Time-to-risk-profile creation (target: minutes vs. hours)
    • Lead qualification accuracy improvement
    • Time-to-submission reduction
  • Execution KPIs:
    • Citation accuracy rate
    • Email draft quality (measured by broker edit time)
    • Gap identification accuracy
  • Adoption KPIs:
    • Number of leads processed per broker
    • Reduction in manual document review time
    • Broker satisfaction with generated artifacts

Timeline & Plan:

  • Phase 0 — Setup:

    1 week

    Configure Contextual AI platform, upload broker templates and preferences, establish document ingestion process.

  • Phase 1 — Pilot:

    2–3 weeks

    Process 5-10 sample leads, validate risk profile accuracy, refine email templates, optimize citation tracking.

  • Phase 2 — Scale:

    2–4 weeks

    Roll out to full brokerage team, integrate CMS (if applicable), establish ongoing optimization process.


Commercials:

  • Model:

    Hybrid pricing: Fixed-price setup + monthly subscription + optional per-lead processing.

  • Anchor price/range:

    ACV Requirement: Minimum $10k/month average

    Pricing Structure:

    • Setup/Implementation Fee: $10k (one-time)
    • Monthly Subscription: $8k/month (base platform access)
    • Per-Lead Processing: $100 per lead (optional, for high volume)
    • Total: ~$10k/month average

    Example Scenarios (3-month engagements):

    • Base: $8k setup + $7k/month × 3 = $29k
    • Hybrid: $8k setup + $8k/month × 3 = $32k
    • High Setup: $10k setup + $6.5k/month × 3 = $29.5k
  • ROI Justification:

    Time Savings: 2-3 hours saved per lead × $125/hour broker rate = $250-375 value per lead

    Volume Threshold: 50+ leads/month minimum to justify $10k/month

    • 50 leads × $150/lead = $7,500/month value (break-even)
    • 75 leads × $150/lead = $11,250/month value ($1,250/month positive ROI)
    • 100 leads × $150/lead = $15,000/month value ($5,000/month positive ROI)

    Revenue Impact: Faster submissions = 10-20% higher close rates = additional revenue lift

    Combined ROI: Time savings + revenue impact typically yields ~$25,000/month total value vs. $10k/month cost

  • Assumptions:

    • Brokerage processes 50+ leads/month (minimum volume threshold)
    • Currently spending 2+ hours per lead on manual extraction
    • Has budget authority for $10k/month recurring spend
    • Brokerage has existing lead intake process
    • Documents available in digital format (or can be digitized)
    • Commercial insurance focus (not personal lines)

Risks & mitigations:

  • Risk: Inaccurate risk extraction leading to poor lead qualification decisions

    Mitigation: Explicit citation system allows brokers to verify all insights; human-in-the-loop validation required before final decisions.

  • Risk: Missing critical information in partial documentation scenarios

    Mitigation: Gap identification explicitly flags missing information; system surfaces available insights while clearly noting limitations.

  • Risk: Email drafts don’t match broker communication style

    Mitigation: Template customization and preference learning; brokers can edit all generated content; system learns from feedback.

  • Risk: Integration complexity with existing CMS or document systems

    Mitigation: Phased integration approach; start with manual upload, then add CMS integration as optional add-on.


Proof:

  • Case snippet:

    Demo shows transformation of cold lead with partial documentation (cyber policy, certificate of insurance, service agreement, demo transcript) into structured risk profile with explicit citations and submission-ready email draft. System successfully identified major risk categories, occupational security data risks, client-related risks, growth considerations, and potential concerns while flagging missing information.

  • Demo links:

    Contextual AI platform demo showing document upload, risk profile extraction, citation tracking, and email generation.

  • References:

    Available upon request (or anonymized).


Delivery notes (internal):

  • Squad: AI/automation lead, insurance domain SME (Ian), operations lead
  • Compliance flags: PII handling in insurance documents, data privacy for client information
  • Version: v1.0 — [Date]

Key Technical Notes:

  • Contextual AI platform enables document analysis with explicit citation tracking
  • Works with partial information - doesn’t require complete documentation to surface insights
  • Can process multiple document types: policies, certificates, transcripts, service agreements
  • Email generation learns broker-specific templates and communication styles
  • CMS integration available for Google Drive, Box, SharePoint, and other content management systems
  • Zero-to-one focus: transforms cold leads into structured, actionable risk profiles quickly

2. AI

Training

  • Cursor Training
  • Claude Code Training
  • n8n Training
  • Replit
  • Mastra
  • Eval Training

Agency/Service Work OS (Business Consciousness) - Brainforge Platform

  • Recording Tool Setup
  • Transcript Ingestion and Processing
  • Case Study
  • Marketing Files

Integrations

  • Google Chat Integration
  • Slack
  • Voice

Content

  • Slides
  • Images
  • Videos

Knowledge Engineering and Context Preparation

AI Hardening

Note: Define the features first; AI building is then the means of delivering them.

MCP

  • Custom MCP development
  • MCP deployment and hosting

📝 Usage Notes

For Content Creation:

  • Reference service names exactly as listed above
  • Use full service names in proposals and marketing materials
  • When creating service-focused posts, use the exact service name

For Agents:

  • This is the source of truth for all Brainforge services
  • Use this to understand what services are available
  • Reference when recommending services to prospects

Future Additions:

  • Pricing information (when available)
  • Timeline/duration (when available)
  • Deliverables (when available)
  • Target customer profiles (when available)
  • Service descriptions (when available)

See Also:


Maintained By: GTM Team
Review Frequency: As services are added/updated