Campaign Brief: dbt Onboarding Accelerator
Service line: dbt Audit Service
Gate date: 2026-02-17 (2-week decision)
Brief owner: Luke Scorziell
Last updated: 2026-02-04
Use this brief to run the campaign and to score it against the Campaign Launch Checklist at the gate. Keep the checklist section updated so go/no-go is obvious.
1. Launch checklist progress (gate at 2 weeks)
Check off items as you go; at the gate date, use this block for the go/no-go call.
Beta Test (must all be ✅ to advance)
- v1.0 delivery plan created (link: ________)
- 3 SOWs created and sent
- 1 signed contract secured (or strong pipeline)
- Delivery started
- 1 milestone reached
Gate criteria for this campaign: As long as we have SOWs sent and calls booked by Feb 17, continue. If not, kill.
Market Ready (commit when we go)
- Service posted on website
- Lead gen underway (this campaign)
- Supporting content calendar in execution (see Content below)
- Partners updated
- 5 prospects brought to a meeting
SOW & meeting tracking
| # | Company / contact | SOW sent | Signed | Meeting booked | Notes |
|---|---|---|---|---|---|
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |
Target: 3 SOWs sent, 1 signed (or strong pipeline), 5 total prospects to a meeting (can be same or different from SOW list).
2. Positioning & hypothesis
Value Proposition:
“Best in the world at prepping dbt codebases for scale—solving poor documentation, long run cycles, and knowledge loss during staff changes for data leaders at growing companies.”
- Problem: Poor documentation, long run cycles, and knowledge loss when staff change. Repeated code and lack of modularity, business context left undocumented, and disorganized testing that causes failures to be ignored.
- Customer (title / segment): Data leaders (Data Manager, Head of Data, VP Analytics, VP Data) at companies $20M+ in revenue who are actively hiring for analytics engineer roles.
- Solution (service name / approach): dbt Onboarding Accelerator — A comprehensive 3-4 week audit of your dbt codebase with a roadmap toward efficiency, confident data, and faster new hire ramp-up.
One-line test hypothesis:
“Data leaders at $20M+ companies hiring analytics engineers will pay $10k for a dbt audit that helps them transition their new hire into greater success.”
One-pager / narrative: dbt Audit Service One-Pager
Pricing: $10,000 for 3-4 week audit + roadmap
3. Campaign overview
What we’re testing:
Testing whether the “hiring trigger” (companies actively recruiting analytics engineers) is a strong buying signal for dbt audit services. Success = 3 SOWs sent, calls booked, pipeline building by Feb 17.
Target audience:
- Titles: Data Manager, Head of Data, VP Analytics, VP Data, Director of Analytics, VP Engineering (data function)
- Segment: Companies $20M+ revenue, 200-500 employees, actively hiring for analytics engineer roles that mention dbt in job descriptions
- Industries: CPG, e-commerce, healthcare, software, tech-enabled services
Strategy:
Quality over quantity. Manual outreach via LinkedIn Sales Navigator. Check for mutual connections first (5+ mutuals → warm intro path with Robert approval; <5 mutuals → cold LinkedIn sequence). Content from Uttam builds credibility on dbt expertise and hiring/onboarding pain points, making outreach more resonant.
4. Execution: Outreach & leadgen
Execution stack
| Layer | Tool | Role |
|---|---|---|
| Lead identification | LinkedIn.com + Sales Navigator | Find companies hiring for analytics engineer roles mentioning dbt; filter by revenue, size, industry. |
| Account list | Sales Navigator | Account list: “1 - hiring analytics engineering roles dbt <1000 fte” (on Robert’s Sales Nav) |
| Lead list | Sales Navigator | Lead list: “4 - dbt companies hiring analysts” — Manager+ titles at target accounts |
| Outreach | Manual (LinkedIn) | Rico runs manual connection requests and messages; no automation at this stage. |
Rico is the outreach coordinator. This brief is the source of truth for messaging and sequencing.
Target account list (max 30)
Current list (14 accounts):
| Company | Category | Headcount | Annual Revenue | Revenue type | Sources |
|---|---|---|---|---|---|
| Astro Pak | | 200-500 | $20M+ | LinkedIn job search | |
| Charlie Health | Healthcare | 200-500 | $20M+ | LinkedIn job search | |
| Findhelp | Tech/Social Impact | 200-500 | $20M+ | LinkedIn job search | |
| Gatekeeper Systems | Software | 200-500 | $20M+ | LinkedIn job search | |
| Headway | Healthcare | 200-500 | $20M+ | LinkedIn job search | |
| Healthcare IT Leaders | Healthcare Tech | 200-500 | $20M+ | LinkedIn job search | |
| Hopper | Travel Tech | 200-500 | $50-100M | LinkedIn job search | |
| Hover | Software | 200-500 | $50-100M | LinkedIn job search | |
| Maven Clinic | Healthcare | 200-500 | $20M+ | LinkedIn job search | |
| Nourish | Healthcare | 200-500 | $20M+ | LinkedIn job search | |
| Outdoorsy | Travel/Rental | 200-500 | $20M+ | LinkedIn job search | |
| Parachute Health | Healthcare | 200-500 | $20M+ | LinkedIn job search | |
| Rent the Runway | E-commerce | 200-500 | $20M+ | LinkedIn job search | |
| Upside | Tech/Marketplace | 200-500 | $20M+ | LinkedIn job search |
ICP criteria (for adding accounts beyond this list):
- Above $20M in revenue
- Industry we’ve worked (CPG, e-commerce, health, software, etc.)
- 200-500 employees
- Hiring for analytics engineer
- Job description mentions dbt
Process for adding accounts (from Luke’s instructions to Rico):
- Use LinkedIn.com → Jobs
- AI search query: “analytics engineer using dbt at companies under 1000 fte”
- Open SalesNav in separate window
- Add companies to account list (not leads list) as you find them
- Include note about role they’re hiring for and dbt mention
ICP titles
- Data Manager
- Head of Data
- VP Analytics
- VP Data
- VP of Data & Analytics
- Director of Analytics
- Director of Data Engineering
- Head of Analytics Engineering
Automated vs human-in-the-loop
| What | Who | Notes |
|---|---|---|
| Mutual check | Rico (manual) | Check Sales Navigator for mutual connections before outreach. |
| Cold path (<5 mutuals) | Rico (manual) | Send connection request with note → wait for accept → send first message → follow-ups per sequence below. |
| Intro ask (5+ mutuals) | Rico → Robert (human) | Slack GTM channel tagging Robert & Luke for intro approval. Not auto-sent. Send all mutual intro requests in one message/thread. |
| Founder intro | Robert (human) | After mutual agrees, Robert sends intro email. |
| Meeting booking | Rico + Luke | Rico coordinates; Luke joins calls. |
Order of operations & rules
Step 1: Identify lead
- Use Sales Navigator lead list “4 - dbt companies hiring analysts”
- Apply filters: Manager+ titles, at target accounts
- Good examples: VP of Data, Head of Data, Manager Data Analytics
Step 2: Mutual connection check
- Check Sales Navigator for mutual connections
- If 5+ mutuals → Go to Step 3 (warm path)
- If <5 mutuals → Go to Step 4 (cold path)
Step 3: Warm intro path (5+ mutuals)
- Send all mutual intro requests in one message or thread (not one at a time)
- Slack GTM channel tagging Robert & Luke with: company name, contact name/title, mutual connection(s), why this person/company
- Wait for Robert’s approval
- Robert reaches out to mutual
- If mutual agrees, Robert sends founder intro email (using Uttam’s name)
- Rico follows up to book meeting
Step 4: Cold path (<5 mutuals)
- Send connection request with note (use template below)
- Wait for connection acceptance (24-72 hours)
- If accepted → send first message (Day 2-3)
- If no reply → send follow-up 1 (Day 7)
- If no reply → send follow-up 2 (Day 14)
- If no reply → mark as no-response; move to next lead
Step 5: Meeting booked
- Log in SOW tracking table above
- Prepare for discovery call (use dbt audit discovery framework)
Sequence definition (for manual execution)
| Step | Channel | Action | Message template | Delay |
|---|---|---|---|---|
| 1 | LinkedIn | Send connection request | Cold connection request | — |
| 2 | LinkedIn | If connected → send message | Cold first message | 1-2 days |
| 3 | LinkedIn | If no reply → send follow-up | Cold follow-up 1 | 5 days |
| 4 | LinkedIn | If no reply → send follow-up | Cold follow-up 2 | 7 days |
| — | Slack | Warm path (5+ mutuals) | Mutual intro request | — |
| — | Email | After mutual agrees → founder intro | Founder intro (Robert sends) | — |
Message library
Cold connection request
Use when: <5 mutuals, sending connection request
Hi [First Name],
Saw [Company] is hiring for an Analytics Engineer. We help data teams prepare dbt codebases for scale, especially during team transitions.
Would be great to connect!
Cold first message
Use when: Connection accepted, sending first message (Day 2-3)
Hi [First Name],
Thanks for connecting. I noticed [Company] is bringing on a new Analytics Engineer.
In our work with [industry] teams, we’ve found that the biggest onboarding bottleneck for data hires is usually undocumented dbt logic and an unclear testing strategy. What takes 3 months to ramp up could take <4 weeks with the right foundations.
We just finished a dbt audit sprint for a [similar company] where we:
- Condensed 300+ lines of repeated code into reusable modules
- Documented business logic that only existed in one person’s head
- Created a testing framework so failures actually mean something
Not sure if this is on your radar as you scale the team, but happy to share what we learned. Up for a chat?
Best, Luke
Cold follow-up 1
Use when: No reply to first message (Day 7)
Hi [First Name],
Quick follow-up on my message last week.
One pattern we see often: teams ignore dbt test failures because there are too many false positives. But that means real data quality issues slip through.
In a recent audit, we found 40+ tests that weren’t catching anything useful—and 5 critical gaps that had no tests at all.
If you’re curious about what we’d find in your setup, I can send over our audit framework. Takes 15 minutes to walk through.
Interested?
—Luke
Cold follow-up 2
Use when: No reply to follow-up 1 (Day 14)
Hi [First Name],
Last note from me.
When you’re onboarding a new Analytics Engineer, one of the first things they’ll ask is “How do I understand what’s already built?”
If the answer is “Read through the codebase and ask around,” that’s a 12-week ramp-up. If the answer is “Here’s the documentation and testing strategy,” that’s one week.
We’re helping companies get to the second one. Happy to share how!
—Luke
Mutual intro request (to Robert & Luke, via Slack GTM channel)
Use when: 5+ mutuals, requesting Robert’s approval for warm intro
Send all mutual intro requests in one message or thread, not one at a time
Company: [Company Name]
Contact: [First Name Last Name], [Title]
Mutual connection: [Mutual’s Name] (our relationship: [how we know them])
Why this company:
[Company] is hiring for an Analytics Engineer and fits our ICP ($20M+ revenue, [industry], 200-500 employees). Job description mentions dbt. [Title] is likely the hiring manager or budget holder.
Angle:
Team is scaling and probably feeling the pain of onboarding into an undocumented dbt codebase. Our audit would help them set up the new hire for success.
Founder intro email (Robert sends after mutual agrees)
Use when: Mutual agrees to make intro
Subject: Quick intro — dbt audit for [Company]
Hi [Mutual’s First Name],
Hope you’re doing well. I wanted to intro you to Uttam, who leads our data engineering practice.
[First Name’s Company] is hiring for an Analytics Engineer, and I thought Uttam’s work might be timely. We help data teams prepare their dbt codebases for scale—documentation, testing, modularity—especially during team transitions.
We just wrapped an audit for a [similar company] where we found hundreds of lines of repeated code, undocumented logic, and testing gaps that were causing onboarding headaches.
Uttam can share what we’ve learned. Worth a quick conversation?
Best,
Robert
[Uttam, meet [First Name]. [First Name], meet Uttam.]
Outreach checklist (per prospect)
- Confirm ICP fit (revenue, size, industry, hiring for analytics engineer, dbt mentioned)
- Check mutual count on Sales Navigator
- If 5+ mutuals → Slack GTM channel for intro approval
- If <5 mutuals → Send cold connection request
- Follow sequence per timeline above
- Log outcome in SOW tracking table
5. Execution: Content (supporting calendar)
Goal
Build credibility around dbt expertise and make the hiring/onboarding pain point visible. Content makes cold outreach warmer—prospects may have already seen posts from Uttam on this exact problem. Use content in outreach: “Just posted about this—saw you’re hiring and thought it might resonate.”
Content Strategy
6 posts over 2-3 weeks, following a Problem → Solution → Service arc:
| # | Pillar | Topic / hook | Format | CTA Type |
|---|---|---|---|---|
| 1 | Problem | Hidden cost of dbt debt during onboarding | First-person observation | DM (Tier 4) |
| 2 | Problem | Why new analytics engineers take 3 months to ramp | Diagnostic list | One-pager download (Tier 2) |
| 3 | Solution | How top teams document dbt for knowledge transfer | Principle-based | dbt Health Quiz (Tier 1) |
| 4 | Solution | dbt testing strategy that prevents issues | Problem → Fix pattern | One-pager download (Tier 2) |
| 5 | Service | What we audit in a dbt code review | Framework list | Book call (Tier 5) |
| 6 | Service | dbt audit → roadmap (case study) | Process + case study | Book call (Tier 5) |
Content lives in:
Uttam’s LinkedIn. Ryan creates content outlines; posts drafted in Uttam’s voice using patterns from ../../content/cc-content-system/uttam-gpt/memory/.
Post 1 — Hidden cost of dbt debt during onboarding
Pillar: Problem
Topic: Hidden cost of dbt debt during onboarding
Format: First-person observation hook + diagnostic pattern
Post
The real cost of dbt debt doesn’t show up until someone new joins.
The person who built it has learned to work around the broken test that’s been failing for years. They know which models not to touch, know that “dbt is still running” is the answer to half the questions. To them it’s background noise. To a new hire it’s a minefield.
Here’s what actually happens when you onboard into a messy dbt codebase:
Tests have been failing so long everyone ignores them. The team says “that’s just how it is.” The new person doesn’t know which failures are real and which are legacy noise, they either waste time fixing non-issues or learn to ignore tests too, and the next real issue slips through.
Nothing is documented. The logic lives in someone’s head, and often that person has left. The new analytics engineer is reading 600 or 800 lines of code to figure out what a model does and why it was built that way. There’s no “what” or “why,” only “what’s there.”
One person was doing everything. We’ve seen it over and over: one data person building what the business needs every day, they don’t have the bandwidth to stop, audit, and refactor. So the mess compounds. When you finally hire help, the new person inherits a system that was never designed for two people to understand.
Add it all up, and onboarding becomes a multi-month archaeology project instead of a productive ramp. The hidden cost isn’t the salary, it’s the months where both people are stuck in “how does this even work” instead of “how do we make it better.”
This isn’t a talent problem, it’s an infrastructure debt problem that shows up at the worst time.
If this sounds like the codebase you’re handing to your next hire, you’re not alone. We’ve audited stacks where the only way to understand the system was to trace it line by line. DM me if you want to talk through what an audit would surface before the new person starts.
CTA Strategy
Tier 4 (Direct Message): Only DM CTA in the entire campaign
Why: Problem post, opens conversation about pain point recognition
Post 2 — Why new analytics engineers take 3 months to ramp
Pillar: Problem
Topic: Why new analytics engineers take 3 months to ramp
Format: Diagnostic list with numbered points
Post
New analytics engineers don’t take 3 months to ramp because they’re slow. They take 3 months because the codebase was never built for anyone else to read.
We’ve seen it repeatedly. You hire someone sharp, they can write SQL, they understand the business. But they land in a dbt project where models are 600-line monoliths, there’s no staging → intermediate → mart clarity, and the only documentation is “we’ll fix that later.” Every question leads to “go read the code” or “ask so-and-so.” So-and-so is the only one who knows why that test has been red for two years and why it’s “fine.”
Here’s what actually stretches the ramp:
- No modularity. When something breaks, the new person is debugging “line 585” in a file that does ten things. There’s no way to isolate the problem. In a modular setup you trace to a 100-line slice and fix it; in a monolith you’re guessing.
- No clear DAG. Staging feeds intermediate feeds mart is the idea. We’ve seen mart models feeding back into intermediate, cycles in the graph. The new person has to reverse-engineer the flow before they can safely change anything.
- Assumptions live in people’s heads. What’s the grain of this table? Why do we exclude those rows? The logic isn’t written down, so every change is a risk. The new engineer either blocks on the incumbent for every decision or makes a change and breaks something nobody knew depended on it.
The result is predictable: months of “where does this come from?” and “why was it built this way?” before they can own a single improvement. It’s not a capability gap, it’s a design gap. The system was built to ship, not to transfer knowledge.
This isn’t really a hiring problem, it’s a “we never made the codebase readable” problem.
If your team is about to bring on a second data or analytics engineer and the first one is the only one who can navigate the dbt project, it’s worth asking what an audit would find. We do audits that map the real structure, document assumptions, and produce a roadmap so the new person has something to lean on.
Link to our one-pager in the comments.
CTA Strategy
Tier 2 (Lead Magnet): Download one-pager
What to set up: Comment with direct link to https://files.brainforge.ai/sales/services/brainforge_dbt_audit_service.pdf
Comment text: “Here’s our dbt audit one-pager: https://files.brainforge.ai/sales/services/brainforge_dbt_audit_service.pdf”
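For the content team’s reference, the layering problem Post 2 describes (staging → intermediate → mart, no cycles, small debuggable files) looks roughly like this in dbt. Model and column names here are illustrative, not from any client project:

```sql
-- models/intermediate/int_orders__joined.sql (illustrative names)
-- Each layer only selects from the layer below via ref(), which keeps
-- the DAG acyclic and each file a small slice the next hire can debug.

select
    o.order_id,
    o.customer_id,
    c.customer_segment,
    o.order_total
from {{ ref('stg_shop__orders') }} as o
left join {{ ref('stg_crm__customers') }} as c
    on o.customer_id = c.customer_id
```

A mart model would then ref this intermediate model, never the other way around.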
Post 3 — How top teams document dbt for knowledge transfer
Pillar: Solution
Topic: How top teams document dbt for knowledge transfer
Format: Principle-based teaching
Post
Your dbt project is fragmented by design. Models here, logic there, and “why we did it” in someone’s head. When that person leaves or you hire a second analytics engineer, the project doesn’t explain itself.
The fix isn’t more Confluence pages, it’s making the documentation live where the code lives, and making it answer the questions the next person will actually ask.
Here’s how teams that transfer knowledge well do it:
Document the logic of what you’re doing. Not just column names, the business rule. Why this join, why this filter. So when someone opens the model in six months they don’t have to infer intent from 200 lines of SQL.
State your assumptions explicitly. Grain of the table, what’s in scope and what’s out, what would break if the source changed. That’s part of the documentation too. People who look at it later need to know what the model assumes about the data.
Use naming and structure that reflect the data and the flow. Model names that reflect sources and what the model does, staging → intermediate → marts. When the next person sees a filename they get a signal, when they see the DAG they see the story.
Define sources properly. Not hard-coded table names, sources in sources.yaml so you can run freshness checks and so the next person knows where raw data lives and how fresh it’s expected to be. Teams that skip this lose the ability to monitor and to onboard cleanly.
Add it all up, and the codebase stops being a black box. New hires and new stakeholders can read the project and get to “what is what and why it was done” without a tribal-knowledge tour. That’s how top teams document dbt: not as an afterthought, as the thing that makes the system maintainable.
If your dbt project would leave the next owner guessing, we can help. Our audits always include documentation and assumption clarity.
Not sure where your dbt stands? Take our 2-minute health check: https://dbtaudit.lovable.app/
CTA Strategy
Tier 1 (Tracked Link): dbt Health Quiz
What it does: 7-question assessment (runtime, buffer, data spikes, incremental tables, failure handling, source addition) that returns a risk score
Why this post: Solution post about documentation - natural fit for “check your current state” CTA
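The documentation and source-definition practices Post 3 describes map to standard dbt YAML. A minimal sketch, with illustrative source and model names (the structure is real dbt config; the names are assumptions):

```yaml
# models/staging/stripe/_stripe__sources.yml (illustrative names)
version: 2

sources:
  - name: stripe                 # where raw data lives in the warehouse
    schema: raw_stripe
    loaded_at_field: _loaded_at
    freshness:                   # lets `dbt source freshness` flag stale loads
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: payments

models:
  - name: stg_stripe__payments
    description: >
      One row per payment attempt (grain: payment_id). Excludes $0
      authorizations because finance treats them as card checks, not
      revenue. Documents the "why," not just the columns.
    columns:
      - name: payment_id
        description: Primary key; unique per attempt, not per order.
```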
Post 4 — dbt testing strategy that prevents issues
Pillar: Solution
Topic: dbt testing strategy that prevents issues
Format: Problem → Fix pattern
Post
dbt tests aren’t broken because you have the wrong tests. They’re broken because too many teams let failures become background noise.
We’ve seen it over and over: a test has been failing for years, everyone knows, and the team has learned to overlook it. So the test stays, but nobody acts on it. New people don’t know if it’s real or legacy. When a real issue shows up, it’s buried in the same red. The testing layer stops protecting the business and starts adding confusion.
Here’s what actually prevents issues:
Tests that mean something. Uniqueness, not-null, relationships, custom logic that matches your business rules. If a test fails, there’s a clear action, not “we’ve always had that failure.”
Source freshness. Define your sources and set freshness expectations. You find out when yesterday’s data didn’t land before someone asks why the dashboard is empty. Without it you’re debugging at 9 a.m. instead of catching it at 6.
No permanent “known failure” list. If a test is wrong, fix or remove it. If it’s right, fix the data or the model. Letting tests fail forever is the same as having no tests, it trains the team to ignore red.
The goal isn’t more tests, it’s tests that catch real problems and that the team actually trusts. When tests catch issues before they hit the dashboard, and when new hires can tell real failures from noise, you have a testing strategy that prevents issues instead of creating them.
This isn’t a tooling problem, it’s a discipline problem. Tests only work when the team commits to acting on what they find.
If your dbt project has tests that everyone ignores, we can help. Our audits always include test design and source freshness, and we help teams get to a state where red means “fix this” instead of “ignore this.”
Link to our one-pager in the comments.
CTA Strategy
Tier 2 (Lead Magnet): Download one-pager
What to set up: Comment with direct link to https://files.brainforge.ai/sales/services/brainforge_dbt_audit_service.pdf
Comment text: “Here’s our dbt audit one-pager: https://files.brainforge.ai/sales/services/brainforge_dbt_audit_service.pdf”
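The “tests that mean something” principle in Post 4 corresponds to dbt’s built-in generic tests. A minimal sketch with illustrative model and column names:

```yaml
# models/marts/_finance__models.yml (illustrative names)
version: 2

models:
  - name: fct_orders
    columns:
      - name: order_id
        tests:
          - unique      # a failure means duplicate loads: clear action, not noise
          - not_null
      - name: customer_id
        tests:
          - relationships:          # every order must map to a known customer
              to: ref('dim_customers')
              field: customer_id
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
```

Each test here has an unambiguous remediation when it goes red, which is the point of the post.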
Post 5 — What we audit in a dbt code review
Pillar: Service
Topic: What we audit in a dbt code review
Format: Framework list (what we check)
Post
When we do a dbt audit, we’re not just looking for “bad code.” We’re looking for the same things that make the difference between a codebase one person can barely hold in their head and one a team can own and improve.
Here’s what we actually check:
DAG integrity. Staging → intermediate → mart, no cycles. We’ve seen mart models feeding back into intermediate. That breaks the idea of an acyclic graph and makes every change risky. We map the real flow and call out violations.
DRY. Don’t repeat yourself. The same calculation in five places means five places to update and five ways to drift. We find repeated logic and flag where it should live once.
Modularity. Models around 100 lines, not 600. Long files are where “something’s wrong in line 585” comes from. We look for monoliths that should be split by business logic so the next person can debug and change safely.
Documentation and assumptions. Is the logic documented? Are assumptions (grain, scope, dependencies) stated? Without that, the only way to understand the system is to read every line. We note where docs are missing or vague.
Testing and source freshness. Do tests exist and do they catch real issues? Are sources defined so you can run freshness checks? We’ve seen stacks with no source definitions and no way to know if yesterday’s data made it in. We assess what’s there and what’s missing.
Naming and materialization. Do names reflect sources and purpose? For large tables, is the project using incremental materialization where it makes sense, or rebuilding everything every run and burning time and compute? We call out optimization and naming improvements.
We package this into a clear report and a prioritized roadmap. So you get “here’s what we found” and “here’s what we’d do first,” not a pile of notes.
If you’re about to hire a second analytics engineer, or your dbt runs are taking half the night and you don’t know where to start, an audit is the fastest way to get a plan.
Book a 30-min audit preview: https://scheduler.default.com/18126/member/30656bde-d1fb-4eca-8936-685859fd6f30
CTA Strategy
Tier 5 (Meeting Booked): Direct booking link
Why: Service post showing value → natural for “see what we’d find” call
Link: Uttam’s booking link with “How did you hear about us?” attribution
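The “incremental materialization where it makes sense” item in Post 5 refers to dbt’s incremental models. A minimal sketch, with illustrative model and column names:

```sql
-- models/marts/fct_events.sql (illustrative names)
{{ config(
    materialized='incremental',
    unique_key='event_id'
) }}

select
    event_id,
    user_id,
    event_type,
    occurred_at
from {{ ref('stg_app__events') }}

{% if is_incremental() %}
  -- On incremental runs, only process rows newer than what's already built,
  -- instead of rebuilding the whole table every run.
  where occurred_at > (select max(occurred_at) from {{ this }})
{% endif %}
```

For large event tables this is the difference between a run measured in minutes and one measured in hours.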
Post 6 — dbt audit → roadmap (case study)
Pillar: Service
Topic: dbt audit → roadmap (case study)
Format: Process reveal + case studies
Post
Here’s what we actually do when we run a dbt audit and turn it into a roadmap.
We don’t rip out the existing stack and start over. The team is still shipping, we come in parallel. We get access to the repo and the warehouse, we map how the system really works, and we produce an audit report and a prioritized roadmap: “Here are the problems, here’s what we’d fix first, second, third, here’s where you’re losing time and where the next person will get stuck.”
Then the client chooses. Some only want the roadmap, they use it to hire or to plan the next quarter. Others want us to implement. In that case we scope by what hurts most: “Give us your most pressing data mart, give us 6 weeks, and we’ll tackle it.” Revenue, sales, marketing, inventory—they pick, we deliver against the roadmap we already built.
We’ve done this with teams that had one analytics engineer and a codebase that had grown under “we need things to happen” pressure.
Magic Spoon was one. We audited, found bottlenecks and redundant logic, and identified where runtimes could come down (in that case from hours to something much closer to what the business needed).
Another client, Urban Stems, had a solo data owner who couldn’t stop daily work to refactor. We audited, built a parallel cleaner infrastructure alongside the existing one, and then migrated them over. So they kept running while we fixed the foundation.
The pattern is the same: audit first, roadmap second, then either they run with the roadmap or we implement the highest-priority slice. No big-bang rewrite, no “throw everything away.” Just a clear picture of what’s wrong and a path to fix it in order.
If you’re sitting on a dbt project that feels clunky, or you’re about to bring on a second person and don’t want them to spend three months in the weeds, this is the motion.
Book a call: https://scheduler.default.com/18126/member/30656bde-d1fb-4eca-8936-685859fd6f30
CTA Strategy
Tier 5 (Meeting Booked): Direct booking link
Why: Case study post with social proof → strong conversion moment for booking
Link: Uttam’s booking link
6. Roles & owner
| Action | Owner |
|---|---|
| Brief owner / gate decision | Luke Scorziell |
| Outreach execution (coordinator) | Rico |
| Intro approval (5+ mutuals) | Robert |
| Founder intro send | Robert (using Uttam’s name) |
| Content outlines | Ryan |
| Content drafting & publish | Uttam (LinkedIn) |
7. Gate decision (fill at gate date)
Gate date: 2026-02-17
- Go — Beta criteria met (SOWs sent, calls booked); proceeding to full rollout (website, leadgen, content, partners, 5 to meeting).
- Conditional go — Partial traction (some SOWs sent or calls booked); Market Ready actions committed, with a follow-up date set.
- No-go — No SOWs sent or calls booked; iterate or pause. Next test date: _______________.
Gate criteria reminder: As long as we have SOWs sent and calls booked by Feb 17, continue. If not, kill.
Notes: