Batch Content Strategy and Outline

Purpose: Plan and outline batches of content for any campaign. Use this to decide pillar mix, post structure, CTA mix, and per-post outline shape before drafting. Generalizes what works in the dbt content sequence.
Last updated: 2026-02-04


1. Batch content strategy (how to plan a batch)

Pillar mix

Default arc (same as dbt): Problem (2) → Solution (2) → Service (2).

  • Problem: Surface pain the audience recognizes; no solution yet (or light tease).
  • Solution: How top teams do it / how we think about it; teaching or methodology.
  • Service: What we do, what we deliver, proof (framework list or case study).

Deviate when the campaign demands it (e.g. event-led: 1 problem, 1 solution, 2 service + 1 event; or lead-magnet-led: more solution posts with Tier 2 CTAs).

Post count

  • Default: 6-post sequence.
  • Shorter: 4-post test when validating a new service or channel.
  • Longer: 8-post when you have more solution/teaching content or a longer runway.

Format variety (by pillar)

| Pillar | Suggested format | Notes |
| --- | --- | --- |
| Problem | Diagnostic List | Contrarian hook, numbered diagnosis, consequence, reframe, soft CTA |
| Solution | Silo-to-Signal or Problem → Common Fix → Better Fix | Methodology/process or positioning/differentiation |
| Service | B2B Framework (list) or Process Reveal + case study | “What we check” or “here’s what we do + cases” |

Use the post structure quick reference below to pick the exact pattern name and open it in the CC content system.

CTA mix

Spread CTAs across tiers 1–5 so you’re not only measuring DMs. See CTA_FRAMEWORK.md for full definitions.

CTA mix checklist (per batch):

  • At least one post driving Tier 1 (tracked link click).
  • At least one post driving Tier 2 (lead magnet download).
  • At least one post driving Tier 5 (meeting booked).
  • Tier 3 (event signup) if you have an event or webinar.
  • Tier 4 (DM) sparingly; don’t default every post to “DM us.”

Timing / batching

  • Decide posts per week (e.g. 2–3) so the batch runs in a defined window.
  • Use the 2-week loop in CTA_FRAMEWORK.md: track by period, fill the tracking table by tier, read the signal, adjust the next batch.

2. Post structure (from CC content system — Robert GPT)

Source of truth for how each post is built: the CC content system. For Robert GPT: Format Index and linkedin-patterns.

Workflow: Decide post type → pick format from Format Index → open the example file and structure pattern named there → draft in that structure. Output in CAMPAIGN_POST_TEMPLATE.md (First Draft: Post + Carousel; Outline: Facts, Implications, CTA).

Pillar → pattern quick reference

When filling the batch table (section 3), put the pattern name from the table below in the Format column. Then open that pattern in linkedin-patterns.md and follow its steps.

| Pillar | Suggested pattern | Example / when to use |
| --- | --- | --- |
| Problem | Diagnostic List Format | Operational pain, “why X keeps happening,” channel/stack diagnosis. ~340 words. |
| Solution | Silo-to-Signal Structure | Methodology, process, “here’s how we solve it.” ~420 words. |
| Solution | Problem → Common Fix → Better Fix | Positioning, “common fix fails, better fix at source.” ~340 words. |
| Service | B2B Framework / teaching list | “What we check” (numbered list) → deliverable → CTA. |
| Service | Process Reveal + case study | “Here’s what we do” → process steps → case examples → reframe → CTA. |

Content structure types (choose by shape)

Use this when you want to pick a structure first (e.g. “we need a listicle” or “numbered list”). Then use the pattern name in the Format Index / linkedin-patterns to draft.

| Content structure | What it looks like | Pattern name(s) to use |
| --- | --- | --- |
| Listicle | “Here are N reasons / N things…” — numbered items, each with a short explanation. | Diagnostic List Format, Industry Opinion List |
| Numbered list | 1, 2, 3… — clear numbered points (diagnosis, steps, or “what we check”). | Diagnostic List Format, B2B Framework / teaching list |
| How-to / methodology | “Here’s how we solve it” — process or approach, often with a bridge from problem to solution. | Silo-to-Signal Structure |
| Problem → solution (comparison) | “Common fix fails; here’s the better fix” — positioning or differentiation. | Problem → Common Fix → Better Fix |
| Reframe / “who told you X?” | Challenge a false trade-off or assumption; flip the sequence. | Sequence Flip Narrative |
| Case study / process reveal | “Here’s what we do” + concrete examples or client outcomes. | Process Reveal + case study |
| Bullet list (short) | 3–5 bullets, punchy; good for wins, takeaways, or quick tips. | Weekly Wins Format |
| Event / short CTA | Very short; energy + “if you’re here, DM me” or time-bound CTA. | Quick Event Format |
| Story / reflection | Personal arc: vulnerability or milestone → lesson → wins or forward look. | Vulnerability-to-Wins Arc, Story-Driven Reflection |
| Announcement + hook | News (partnership, launch) + contrarian or insight hook. | Contrarian Announcement Structure |
| Lead magnet | Value buildup → comment keyword or link to gated asset. | Lead Magnet - Download/Freebie |

When filling the batch table (section 3), you can use either the pattern name (e.g. Diagnostic List Format) or the content structure (e.g. listicle) — then look up the pattern in linkedin-patterns and follow its steps.


Content structure template: Problem → Fix

Use this when the post is solution or positioning: you name a problem, then present the fix (or contrast a common fix that fails with a better fix). Good for differentiation, “how we think about it,” or teaching a better approach.

When to use: Solution pillar; positioning/differentiation; “common approach fails, here’s what actually works.”

Two variants:

| Variant | Use when |
| --- | --- |
| Simple Problem → Fix | One clear problem, one clear fix. No need to dismiss other approaches. |
| Problem → Common Fix → Better Fix | You’re positioning: the usual fix doesn’t work; your fix (or the right fix) does. ~340 words. |

Simple Problem → Fix (structure)

  1. Hook — State the problem or the mistaken belief (e.g. “Most teams think X. It’s actually Y.” or “[Thing] isn’t broken, but [real cause] is.”)
  2. Consequence — One or two sentences: what happens because of the problem.
  3. Problem (short) — What’s actually going wrong; 1–3 concrete symptoms if needed.
  4. Bridge — “That’s the problem. Here’s how we fix it.” or “Here’s what actually works.”
  5. Fix — The solution: 3–5 points (bullets or short paragraphs). Be specific (what to do, not vague advice).
  6. Outcome — One line: what changes when you apply the fix.
  7. CTA — Match to tier (e.g. lead magnet for solution posts = Tier 2; link = Tier 1).

Problem → Common Fix → Better Fix (structure)

Full pattern: Robert GPT linkedin-patterns — “Problem → Common Fix → Better Fix”.

  1. Thesis front-load hook — “[Thing] isn’t broken, but [real cause] is.”
  2. Consequence — “Every [result] because [reason].”
  3. Symptoms — Two visceral examples (not five).
  4. Common fixes grouped — How people usually try to fix it (e.g. “switching tools… Others chase better models”). Group by category.
  5. Why they fail — “None of it works, because…”
  6. Problem mechanics — Brief: how the underlying issue actually works (e.g. how data gets lost).
  7. Better fix intro — “[Our fix / the right fix] fixes this at the source.”
  8. Differentiation — One clear line on why this fix is different (e.g. “before the damage happens” = upstream).
  9. What doesn’t change — Reassure (e.g. “Your tools stay. We’re not replacing anything.”).
  10. Cost of inaction — “Every day you [action] is a day you’re [consequence].”
  11. CTA — Direct; match to tier (Tier 1 link or Tier 5 meeting for service/positioning).

Notes: ~340 words. Group failed fixes by category; don’t list every one. Lead with confident positioning, not defensive hedging. No “not a silver bullet” language.


Outline shape (copy per post)

  • FACTS / EVIDENCE: [Problem: what’s going wrong, why common fix fails if applicable] [Fix: what actually works, why it’s different]
  • IMPLICATIONS: [What changes when they apply the fix; cost of inaction if relevant]
  • CTA: [One line + tier]

Content structure template: Framework

Use this when the post teaches a framework or lists what you check/deliver: stages, principles, audit dimensions, or a repeatable model. Good for service positioning (“here’s how we think about it”) and credibility without hard selling.

When to use: Service pillar; teaching a maturity model, audit checklist, or stage-based approach; “what we look at” / “what we deliver.”

Two variants:

| Variant | Use when |
| --- | --- |
| List framework (“What we check”) | You’re explaining the dimensions/criteria of your service (e.g. audit checklist, assessment criteria). One hook, then numbered list with 1–2 sentences each. |
| Stage-based framework | You’re teaching a maturity or stage model (e.g. by revenue, team size). Each stage has a threshold, list of items/tools, and why it matters. Ends with philosophical close + invitation. |

List framework — “What we check” (structure)

Example: dbt audit post — “What we audit in a dbt code review.”

  1. Hook — Reframe what you’re really looking for (e.g. “We’re not just looking for bad code. We’re looking for what makes a codebase ownable.”).
  2. Lead-in — “Here’s what we actually check:” or “Here’s the framework we use:”
  3. Numbered list — 4–7 items. Each item: Bold label. One or two sentences (what it is, why it matters, what you find). Be specific.
  4. Deliverable — One or two sentences: what they get (e.g. “We package this into a clear report and a prioritized roadmap. So you get ‘here’s what we found’ and ‘here’s what we’d do first.’”).
  5. CTA — Match to tier (e.g. Tier 5 for “book a call to see what we’d look at”; Tier 2 for “comment KEYWORD for the checklist”).

Outline shape (copy per post):

  • FACTS / EVIDENCE: [List the framework dimensions/criteria with 1-line each; add what you deliver.]
  • IMPLICATIONS: [What they get from it; when it’s useful (e.g. before hiring, when runtimes are a problem).]
  • CTA: [One line + tier]

Stage-based framework (structure)

Example: Robert GPT b2b-services-stage-framework.md — marketing measurement by revenue stage.

  1. Framework promise — “Here’s a framework for [X] based on [maturity dimension]:” or “Here’s how we think about [X] at different stages:”
  2. Stage 1 — Name + threshold (e.g. revenue under $5M). Bullet list of tools/methods. 2–3 sentences: why these at this stage, what’s unique.
  3. Stage 2 — Name + threshold. Bullet list. Why these + new complexity introduced.
  4. Stage 3 — Name + threshold. Bullet list. Why these + organizational/cross-functional dimension.
  5. Stage 4 — Name + threshold. Bullet list. Why these + strategic/operational focus.
  6. Philosophical close — Universal principle (e.g. “Every stage brings new challenges. Every tool is just a lens. The key is knowing which lens belongs where.”). Reframe what success means.
  7. Invitation — “Does this align with anyone else’s [experience]?” or “Curious how others approach [X]?”

Notes: Use clear thresholds (revenue, team size, etc.). Name specific tools where it adds credibility. Collaborative tone; you’re teaching, not pitching. ~400 words for 4 stages.

Outline shape (copy per post):

  • FACTS / EVIDENCE: [Stage names + thresholds; 1-line summary of what defines each stage and key tools/approaches.]
  • IMPLICATIONS: [What matters most isn’t the list, it’s (principle/outcome).]
  • CTA: [Invitation to align or discuss; tier as needed.]

Other formats (e.g. Quick Event Format, Contrarian Announcement, Vulnerability-to-Wins) are in the Format Index; use when the post type matches (event, partnership, personal reflection).

If the campaign uses Uttam GPT: Use that system’s Format Index and patterns under sales/content/cc-content-system/uttam-gpt/ the same way (choose format, then follow that pattern).


3. Batch outline template (fill per campaign)

Fill this table at the start of each campaign. One row per post.

| # | Pillar | Topic / hook | Content structure / type | Format (from Format Index) |
| --- | --- | --- | --- | --- |
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
| 6 | | | | |

Workflow reminder: Structure selection (Format Index + patterns) → Draft → Alignment → CTA. See CAMPAIGN_POST_TEMPLATE.md and CTA_FRAMEWORK.md.

Optional: Add a line for source evidence (e.g. “Demilade–Luke call, Uttam patterns, Magic Spoon / Urban Stems case details”) so drafters know where to pull facts.


Example 6-post outline with context (dbt campaign)

This is a filled outline using the dbt content sequence so you can see what a full batch looks like with real context. Use it as a template: copy the structure and swap in your own campaign (topics, facts, CTAs).

Campaign: dbt audit & analytics engineering
Audience: Analytics engineers, data leads, heads of data
Source evidence: Demilade–Luke dbt audit service call (2026-01-20), Uttam consulting patterns, Urban Stems / Magic Spoon case details

Context for this outline (from Luke–Demilade–Uttam call)
Use this when drafting the 6 posts; pull facts and phrasing into Facts/Evidence and implications.

  • What a dbt audit is and why you need it: dbt = SQL on steroids; version control and software-engineering practices applied to data. Why audit: “It gets really messy.” Built with “we need things to happen, we need to see the numbers” — over time it’s hard to know what is what and why things were done; runs get long. Many teams want overnight runs so by morning BI has yesterday’s numbers. If ingestion ends at 2:30 AM and dbt takes 4 hours, you’re close to when people (e.g. marketing by 5–6 AM) log in; if something breaks and you restart, it can take forever. Knowing bottlenecks and shortening dbt runs (e.g. halve the time) = more leeway for business hours and room to add sources or increase refresh cadence.
  • Principles we check (audit scope): DAG (directed acyclic graph): staging → intermediate → mart; no cycles (e.g. mart feeding back to intermediate). DRY (don’t repeat yourself). Modularity: ~100 lines, not 600; otherwise “something’s wrong in line 585” — hard to debug; modular bits let you trace to a slice. Naming: reflect data sources and what the model does. Documentation: logic of what you’re doing; assumptions clearly stated (grain, scope, what would break if source changed). Testing: good tests for data quality; sources in sources.yaml (not hard-coded table names) so you can run source freshness (see the sketch after this list). Materialization: incremental where it makes sense vs rebuilding everything — saves runtime and compute.
  • Who buys / who we work with: Champion = technical (analytics engineer, data engineer, CTO with data background) or frustrated business stakeholder who’s heard “dbt is still running,” runtimes are long, or answers about data take forever. Day-to-day contact = analytics engineer. Precondition: they use dbt; “things feel clunky” or they’re not getting the promise of dbt — we help them get closer.
  • Process and deliverables: Audit = get access (Git, dbt), map how the system really works, produce audit report + prioritized roadmap. Timeline: ~3–4 weeks for “just a roadmap.” Then client chooses: roadmap only (hire or plan next quarter) or implementation. Implementation scoped by data mart (revenue, sales, marketing, inventory): “Give us your most pressing data mart, 6 weeks, we’ll tackle it.” No big-bang rewrite; team keeps shipping; we can build parallel infrastructure then migrate.
  • Urban Stems (case study): One person built everything over years; not built on good principles. Emily (new person) took over, “lost,” “cluster of rubbish,” hard to make sense; patching added to the mess. One data person can’t stop daily work to audit and refactor. We built parallel infrastructure, then migrated; they kept running. Outcomes: modular code (~100 lines), clear naming (e.g. lotted vs unlotted goods, hub-and-spoke vs direct to fulfillment), easier to debug; Emily preferred the new setup and kept using it after we left. Role: one-person team — data analyst / analytics engineer doing everything.
  • Magic Spoon (case study): Not crippled; “everything was fine,” wanted to see if there was gain. Findings: long-running models (some 40+ min, some 25+ min); full run ~2.5 hours. Potential to get runs down to ~1h 40 or 1h 30 (knock off an hour). No sources in sources.yaml — hard-coded table names; no source freshness. No tests, no documentation; logic only by reading code, hard for a new hire to ramp. Three pain points: efficiency (runtime), documentation, sources implementation. Solution (in progress): change materialization (e.g. incremental) to drive efficiency.
  • Outcomes / KPIs: Runtime is the most measurable. Documentation as deliverable. Adoption (e.g. Emily using new infrastructure) — real but harder to quantify.
  • Risks (for project): Access (Git, dbt) not coming quickly. Competing data requests; not staffed to multitask. Volume (e.g. 4,000 models) — early recognition, communicate early; scope by number of models and complexity (200 models × 8k lines vs 600 × 100 lines).
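
The sources and testing principles above map to concrete dbt config. Here is a minimal sketch of declared sources with freshness checks and basic column tests; the source, schema, table, and column names (shop, raw_shop, orders, order_id) are hypothetical, not pulled from the calls:

```yaml
# models/staging/sources.yml -- minimal sketch; all names are hypothetical
version: 2

sources:
  - name: shop                  # referenced in SQL as {{ source('shop', 'orders') }}
    schema: raw_shop            # instead of hard-coding raw_shop.orders in models
    loaded_at_field: _loaded_at
    freshness:                  # enables `dbt source freshness`
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: orders
        columns:
          - name: order_id
            tests:              # basic data-quality tests; red should mean "fix this"
              - unique
              - not_null
```

With sources declared this way, `dbt source freshness` can flag a late ingestion (e.g. the 2:30 AM window above) before stakeholders see stale numbers.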

Batch table (dbt)

| # | Pillar | Topic / hook | Content structure / type | Format used | CTA tier | Example CTA |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Problem | Hidden cost of dbt debt during onboarding | Listicle | Diagnostic List | 1 or 4 | “See what we look for when we audit for onboarding risk: [tracked link].” |
| 2 | Problem | Why new analytics engineers take 3 months to ramp | Numbered list | Diagnostic List | 4 | “If your team is about to bring on a second engineer… DM us if you want to see what an audit would find.” |
| 3 | Solution | How top teams document dbt for knowledge transfer | How-to / methodology | Silo-to-Signal / teaching | 2 | “Comment DOCS and I’ll send the one-pager.” |
| 4 | Solution | dbt testing strategy that prevents issues | Problem → solution (comparison) | Problem → Common Fix → Better Fix | 1 or 2 | “See how we design tests so red means ‘fix this’: [tracked link].” Or: “Comment TESTS and I’ll send the guide.” |
| 5 | Service | What we audit in a dbt code review | Numbered list (B2B framework) | B2B Framework (list) | 5 | “Book a 30-min ‘what we’d look at’ call: [booking link].” |
| 6 | Service | dbt audit → roadmap (case study) | Case study / process reveal | Process Reveal + case study | 3 or 5 | “Register for the webinar: [event link].” Or: “Book a call and we’ll walk you through what an audit looks like: [booking link].” |

Post 1 — Hidden cost of dbt debt during onboarding

(Uttam voice. See Uttam GPT style-voice and dbt examples.)

Post (first draft):

Most teams think onboarding fails because they hired the wrong person.

It’s actually the system you’re about to hand them, silently setting them up to fail.

We keep seeing the same things:

  • Tests have been failing for years. Now they’re just noise. New hires can’t tell what’s real
  • Docs don’t exist. People are reading 600–800 lines of code to learn what it does, what it depends on, and why
  • One person shipped the whole dbt project under “we need this shipped” pressure, so when you finally hire a second person, the system was never built for two people to understand

Add it up and onboarding turns into a multi-month reverse engineering exercise.

That’s not a talent problem. It’s infrastructure debt that shows up at the worst time.

Both people get stuck in “how does this even work” instead of “how do we make it better.”

If this sounds like the codebase you’re handing to your next hire, DM me. Happy to talk through what I look for: tests you can trust again, missing ownership, undocumented assumptions, brittle dependencies, and quick wins.


  • Content structure / type: Listicle (Diagnostic List Format)
  • FACTS / EVIDENCE: Onboarding is often blamed on the hire; the real cause is the system handed to them. Tests failing for years become noise; new hires can’t tell what’s real. No docs; people read 600–800 lines of code to learn what it does, what it depends on, and why. One person shipped the whole dbt project under “we need this shipped” pressure, so the system was never built for two people to understand. Add it up → multi-month reverse engineering exercise. Reframe: not a talent problem, it’s infrastructure debt that shows up at the worst time.
  • IMPLICATIONS: Both people get stuck in “how does this even work” instead of “how do we make it better.”
  • CTA: Tier 4 — “If this sounds like the codebase you’re handing to your next hire, DM me. Happy to talk through what I look for: tests you can trust again, missing ownership, undocumented assumptions, brittle dependencies, and quick wins.”

Post 2 — Why new analytics engineers take 3 months to ramp

  • Content structure / type: Numbered list (Diagnostic List Format)
  • FACTS / EVIDENCE: Monolithic models (600+ lines) make debugging hard; modular ~100-line units allow tracing to a slice. DAG violations (mart → intermediate) force reverse-engineering. Undocumented assumptions (grain, exclusions) force dependency on the incumbent or risky changes.
  • IMPLICATIONS: Ramp stretches to months of discovery; design was optimized for shipping, not knowledge transfer.
  • CTA: Tier 4 — “DM us if you want to see what an audit would find (map structure, document assumptions, roadmap for the new person).”

Post 3 — How top teams document dbt for knowledge transfer

  • Content structure / type: How-to / methodology (Silo-to-Signal Structure)
  • FACTS / EVIDENCE: Documentation should live with the code and answer “what is what and why it was done.” Explicit assumptions (grain, scope, sensitivity to source changes) are part of documentation. Naming and DAG structure (staging → intermediate → mart) give clear signals. Sources in sources.yaml enable freshness and onboarding.
  • IMPLICATIONS: Codebase becomes readable without tribal knowledge; new hires get “what and why” without a full tour.
  • CTA: Tier 2 — “Comment DOCS and I’ll send the one-pager on how we document logic and assumptions in audits.”

Post 4 — dbt testing strategy that prevents issues

  • Content structure / type: Problem → solution (comparison) (Problem → Common Fix → Better Fix)
  • FACTS / EVIDENCE: Teams leave tests failing for years (“known” failures); new hires can’t tell real issues from legacy noise. Source freshness (sources.yaml + expectations) catches missing data before stakeholders notice. Tests only help when the team acts on failures; permanent “known failure” lists undermine trust.
  • IMPLICATIONS: Testing layer either protects the business or adds confusion; discipline (fix or remove) matters more than volume.
  • CTA: Tier 1 — “See how we design tests so red means ‘fix this’: [tracked link].” Or Tier 2 — “Comment TESTS and I’ll send the guide.”

Post 5 — What we audit in a dbt code review

  • Content structure / type: Numbered list / B2B framework (B2B Framework / teaching list)
  • FACTS / EVIDENCE: DAG integrity (staging → intermediate → mart, no cycles). DRY, modularity (~100 lines vs 600+ monoliths), documentation and assumptions, testing and source freshness, naming and materialization. We deliver a report and prioritized roadmap.
  • IMPLICATIONS: Audit = “here’s what we found” + “here’s what we’d do first”; useful before hiring a second engineer or when runtimes are a problem.
  • CTA: Tier 5 — “Book a 30-min ‘what we’d look at’ call: [booking link]. You leave with a short list of where we’d start.”

Post 6 — dbt audit → roadmap (case study)

  • Content structure / type: Case study / process reveal (Process Reveal + case study)
  • FACTS / EVIDENCE: Process: audit in parallel (team keeps shipping) → report + prioritized roadmap → client chooses roadmap-only or implementation (“most pressing data mart, 6 weeks”). Magic Spoon: found bottlenecks and redundant logic → runtimes moving toward what the business needed. Urban Stems: parallel cleaner infrastructure, then migrate; no stop-the-world refactor.
  • IMPLICATIONS: No big-bang rewrite; clear picture and path to fix in order. Works when the team can’t pause to refactor or when bringing on a second person.
  • CTA: Tier 3 — “Register for the webinar on audit → roadmap → implementation: [event link].” Or Tier 5 — “Book a call and we’ll walk you through what an audit looks like and what you’d get back: [booking link].”

Why this CTA spread (dbt): Tier 1 (posts 1, 4) for link clicks; Tier 2 (posts 3, 4 option) for lead magnet; Tier 3 (post 6 option) if you have a webinar; Tier 4 (post 2) for one DM invitation; Tier 5 (posts 5, 6) for meetings. Full drafts, carousels, and non-DM CTA tables: dbt content sequence.

Before drafting any campaign: Run through the pre-batch setup checklist.


4. Per-post outline shape (reusable)

Use this block for every post in a batch. Same shape as in CAMPAIGN_POST_TEMPLATE.md and the dbt sequence.

POST N — [Topic one-liner]

  • Content structure / type: [e.g. Listicle, Numbered list, How-to / methodology, Problem → solution (comparison), Case study / process reveal — see content structure types table in section 2]

FACTS / EVIDENCE:

  • [Bullet — source-backed]
  • [Bullet]
  • [Bullet]
  • [Bullet] (3–5 total)

IMPLICATIONS:

  • [So what for the reader/business]
  • [So what] (2–3 total)

CTA:

  • [One line] — Tier [1–5]

Non-DM CTAs (suggested):

| Tier | CTA | What to set up |
| --- | --- | --- |
| 1 | [e.g. “See what we look for: [tracked link].”] | Tracked link (UTM or short); log clicks. |
| 2 | [e.g. “Comment CHECKLIST and I’ll send it.”] | Asset + form or comment-keyword delivery; log downloads. |
| 5 | [e.g. “Book a 30-min preview: [booking link].”] | Calendly/Cal.com + “How did you hear?”; log meetings. |

This keeps every post aligned to the CTA framework and forces “what to set up” so tracking is possible.


5. Pre-batch setup checklist

Set these up before drafting so each batch is measurable.

| Tier | What to set up | Where to log |
| --- | --- | --- |
| 1 — Tracked links | UTM or short link per post (or per campaign); see the example below. | Clicks in shortener/analytics or tracking table. |
| 2 — Lead magnet | Asset (one-pager, checklist, PDF) + delivery (comment keyword + manual send, or gated form). | Form submissions or “asset sent” log. |
| 3 — Event | Event page + registration (if using events). | Signups per event. |
| 4 — DMs | No setup. | Weekly sheet or Slack count of real conversations. |
| 5 — Meetings | Booking link (e.g. Calendly/Cal.com) + “How did you hear about us?” or campaign-specific link. | Meetings booked where source = this campaign. |
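
For Tier 1, a tracked link is usually just the destination URL plus standard UTM query parameters (utm_source, utm_medium, utm_campaign, utm_content). A hypothetical example; the domain, path, and values are placeholders:

```
https://example.com/dbt-audit-checklist?utm_source=linkedin&utm_medium=social&utm_campaign=dbt_batch_1&utm_content=post_1
```

Giving each post its own utm_content value lets you read clicks per post straight from analytics into the tracking table.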

Details: CTA_FRAMEWORK.md — “What to set up for each engagement type.”