Insurance Content Plan - Week of February 17-21, 2026

Purpose: Weekly content plan for insurance broker campaign across Uttam’s account
Campaign: Insurance Broker Lead Intake Automation
Target: $100M revenue brokerages
Status: Ready for drafting
Last updated: 2026-02-11


📋 Weekly Overview

Strategy: Focus on niche pain-point content (Posts 7-10) targeting specific, documented pain points from research. These posts amplify the custom outreach to the named accounts (Shepherd, Scott, Starkweather, Houchens, BMB).

Posting Account: Uttam (primary for insurance vertical)
Frequency: 3-4 posts this week (can extend into following week)
Goal: Drive engagement from ICP personas, support outreach conversations


📅 Content Calendar

| Day | Post # | Hook | Target Audience | Format | Status |
|---|---|---|---|---|---|
| Tuesday, Feb 18 | POST 7 | Why your LLM pilot failed (and what works) | Shepherd, AI skeptics | Thought Leadership | Outline ready |
| Wednesday, Feb 19 | POST 8 | The 20-hour report costing $75K/year | Scott, surety firms | Cost Quantification | Outline ready |
| Thursday, Feb 20 | POST 9 | Hired Architect but team lives in spreadsheets | Starkweather, transformation firms | Organizational Paradox | Outline ready |
| Friday, Feb 21 | POST 10 | Hiring “Intake Coordinators” = treating symptom | Hiring managers, all firms | Symptom vs Root Cause | Outline ready |

Alternative: If 4 posts in one week is too aggressive, spread Posts 9-10 into the following week (Feb 24-25).


🎯 Strategic Context

Why These Posts Matter

  1. Named Account Alignment: Each post targets specific firms from the Top 5 named accounts
  2. Conversation Starters: Can be used in DMs (“I wrote this about your exact situation”)
  3. Credibility Building: Shows deep understanding of segment-specific pain
  4. Multi-Threading: Hiring manager posts (Post 10) create secondary entry point

How to Use in Outreach

Direct Reference in DMs:

  • “Saw you’re at Shepherd. I wrote this about the LLM failure you documented: [link to Post 7]”
  • “Your VP mentioned 20-hour WIP reports. This is exactly what I’m talking about: [link to Post 8]”
  • “Your SVP’s quote about spreadsheets stuck with me. Wrote about it here: [link to Post 9]”

Tagging Strategy (use sparingly):

  • Only tag if genuinely relevant and non-spammy
  • Focus on building relationships first, tagging second
  • Consider commenting instead of tagging

📝 POST 7 - Why Your LLM Pilot Failed (and What Actually Works)

Date: Tuesday, February 18, 2026
Account: Uttam
Target: Shepherd Insurance, firms that tested and rejected AI
Campaign Brief Reference: Post 7 (lines 464-482)


Content Outline

Pillar: Problem (Niche)
Robert GPT Format: Thought Leadership - Problem Diagnosis (Acknowledge Failed Attempt)
Structure: Problem → Why It Failed → What Works Instead
Example Reference: 2026-01-edge-layer-does-doesnt.md (what works vs what doesn’t)


Hook (First 2-3 Lines)

“Your team tested AI for underwriting intake.

It failed. Non-deterministic outputs. Hallucinations. Unreliable table extraction.

You were right to reject it.”


Core Narrative (Body)

Problem Recognition:

  • Mid-market brokerages ($100M revenue) tested LLMs and RAG for intake automation
  • Many rejected them due to “non-deterministic” outputs, hallucinations, unreliable extraction
  • Real example: One brokerage publicly documented that LLMs “do not produce a reliable output every time”
  • The hype cycle promised automation but delivered unreliable black boxes

Why It Failed (The Shift):

  • This wasn’t your team’s fault
  • General LLMs are trained on everything: Wikipedia, Reddit, novels
  • They’re designed to be creative, not precise
  • Insurance requires 100% accuracy on structured data (schedules of vehicles, locations, values)
  • A model that “guesses” or “fills in the blank” = liability risk

What Works Instead (The Solution):

  • The problem isn’t AI itself — it’s general-purpose AI applied to precision tasks
  • Task-specific automation: trained only on your intake workflow
  • Every extraction cited (page, section, timestamp) — no hallucinations
  • Deterministic outputs: same input = same output, every time
  • Auditable: you can verify every data point back to source

The Reframe:

  • You rejected generic AI for the right reason: reliability
  • “Close enough” doesn’t work when you’re liable for errors
  • The next generation of insurance automation isn’t general — it’s surgical

CTA

“If you tested AI and it failed, you rejected it for the right reason.

Want to see the task-specific alternative that works? DM me.”


Key Evidence / Data Points

  • Shepherd Insurance case: “Non-deterministic,” “unreliable,” “hype has not lived up to expectations”
  • Insurance requires 100% accuracy on table extraction (schedules)
  • General LLMs trained on billions of general documents vs task-specific models
  • Liability risk: model guessing = broker liability

Writing Guidance

Tone: Empathetic, diagnostic, non-defensive
Voice: “I know you tried this and it failed. Here’s why — and what’s different now.”
Avoid: Being overly technical, blaming the prospect, overselling the solution
Do: Acknowledge their legitimate concerns, explain root cause clearly, show what changed

Key Phrases:

  • “You were right to reject it”
  • “This wasn’t your team’s fault”
  • “The problem isn’t AI — it’s general-purpose AI”
  • “Task-specific, not general”
  • “Cited, auditable, deterministic”

Potential Engagement

Who Will Resonate:

  • Ops leaders who tested and rejected AI
  • Risk managers concerned about liability
  • CIOs/CTOs who got burned by vendor promises
  • Anyone who heard “AI will solve everything” and found it didn’t

Expected Comments:

  • “This is exactly what happened to us”
  • “We tried X vendor and had these exact issues”
  • “What makes yours different?”
  • “How do you ensure accuracy?”

Response Strategy:

  • Validate their experience
  • Offer to show specific workflow/demo
  • Reference insurance-specific requirements
  • Move to DM for detailed conversation

📝 POST 8 - The 20-Hour Report Costing $75,000 a Year

Date: Wednesday, February 19, 2026
Account: Uttam
Target: Scott Insurance, surety-focused brokerages
Campaign Brief Reference: Post 8 (lines 485-503)


Content Outline

Pillar: Problem (Niche)
Robert GPT Format: Thought Leadership - Cost Quantification
Structure: Quantified Pain Hook + Hidden Cost Reveal
Example Reference: Use direct, data-driven hook


Hook (First 2-3 Lines)

“Work-In-Progress reports for surety clients.

Some brokerages spend 20 hours per report.

That’s $75,000 a year on one manual task.”


Core Narrative (Body)

The Task:

  • WIP reports for surety clients: reconcile % completion, costs incurred, billings for every active project
  • Required by carriers to assess risk and bond capacity
  • Financial data arrives in QuickBooks exports, Excel schedules, PDFs
  • Analysts manually map it to the surety’s required format

The Math (The Hidden Cost):

  • At some brokerages: 20 hours per WIP report
  • 50 surety clients × 20 hours each = 1,000 hours/year
  • At **$75/hr** fully-loaded, that’s **$75,000/year**
  • This is half an FTE’s annual capacity spent on report generation
  • Not analysis. Not client service. Just data entry.
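
For the drafter’s reference (not part of the post copy), the math above can be sanity-checked with a quick sketch. The figures are the assumptions stated in this outline — 20 hours per report, 50 surety clients, $75/hr fully-loaded — not measured data:

```python
# Back-of-envelope cost model for the WIP-report math above.
# All inputs are the assumptions stated in this post outline.
hours_per_report = 20   # documented manual effort per WIP report
surety_clients = 50     # active surety clients needing reports
hourly_rate = 75        # fully-loaded analyst cost, $/hr

annual_hours = hours_per_report * surety_clients  # hours spent per year
annual_cost = annual_hours * hourly_rate          # annual dollar cost
fte_share = annual_hours / 2080                   # share of a 2,080-hour work year

print(f"{annual_hours} hrs/yr = ${annual_cost:,} (~{fte_share:.0%} of an FTE)")
# → 1000 hrs/yr = $75,000 (~48% of an FTE)
```

This also confirms the “half an FTE” claim: 1,000 hours is roughly 48% of a standard 2,080-hour work year.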

The Scaling Problem:

  • In a hard market (rising remarketing volume), the bottleneck compounds
  • Firms hire “Production Underwriting Assistants” to handle volume
  • You’re scaling cost instead of solving the problem
  • Manual processes can’t scale at the speed clients expect

The Real Cost:

  • Opportunity cost: What could your team do with 1,000 hours back?
  • Speed cost: 20 hours per report = slow turnaround = lost deals
  • Error cost: Manual data entry = transcription errors = rework
  • Competitive cost: Competitors who automate handle 2-3× volume

CTA

“If WIP reports are eating your team’s time, let’s talk.

I’ll show you the 20-minute version. DM me.”


Key Evidence / Data Points

  • Scott Insurance VP quote: “highly manual process” (20 hours documented)
  • Math: 50 clients × 20 hrs = 1,000 hrs; at $75/hr fully-loaded = $75,000/year
  • Half an FTE’s annual capacity on one task
  • Surety market context: hard market = more volume

Writing Guidance

Tone: Data-driven, matter-of-fact, empathetic
Voice: “Let me show you the math you’re already living.”
Avoid: Being preachy, overselling, dismissing their current process
Do: Make the pain visible, quantify the opportunity cost, show the alternative

Key Phrases:

  • “20 hours per report”
  • “$75,000 a year on one task”
  • “Half an FTE’s annual capacity”
  • “Not analysis, not client service — just data entry”
  • “Manual doesn’t scale”

Potential Engagement

Who Will Resonate:

  • Heads of Surety
  • CFOs (financial data spreading is their domain)
  • Operations leaders dealing with surety bottlenecks
  • Anyone hiring “Production Underwriting Assistants”

Expected Comments:

  • “This is us exactly”
  • “How do you get it down to 20 minutes?”
  • “What about accuracy?”
  • “We use [X system], can you integrate?”

Response Strategy:

  • Validate the pain
  • Offer demo with sample contractor WIP
  • Explain financial data spreading automation
  • Show cited outputs (auditable)

📝 POST 9 - You Hired an Enterprise Architect but Your Team Still Lives in Spreadsheets

Date: Thursday, February 20, 2026
Account: Uttam
Target: Starkweather & Shepley, firms attempting digital transformation
Campaign Brief Reference: Post 9 (lines 506-525)


Content Outline

Pillar: Problem (Niche)
Robert GPT Format: Thought Leadership - Organizational Paradox
Structure: Paradox Hook (hiring Architect + hiring Assistants) + Root Cause
Example Reference: Diagnostic structure


Hook (First 2-3 Lines)

“Your brokerage hired an Enterprise Architect to build a modern data foundation.

At the same time, you’re hiring armies of Underwriting Assistants to ‘deal with all these spreadsheets and PDFs.’

That’s the $100M scaling paradox.”


Core Narrative (Body)

The Paradox:

  • Mid-market brokerages hire “Enterprise Architects” and “CTOs” to modernize
  • Simultaneously, they’re hiring “Underwriting Assistants,” “Processing Specialists,” “Intake Coordinators”
  • Real quote from SVP at a $78M brokerage: “If I could change one thing, it would be not having to deal with all these spreadsheets and PDFs”
  • The Architect is designing a data warehouse
  • The team is drowning in static files
  • The assistants are human middleware, moving data from Excel to systems

The Root Cause:

  • The Architect can’t build the foundation without structured inputs
  • The assistants can’t structure the inputs fast enough
  • You have the strategy (modern data layer) but not the ingestion layer
  • You’re trying to build a skyscraper on quicksand

The Scaling Paradox (The “Missing Middle”):

  • Large enough to need architecture ($100M revenue)
  • Not large enough to have automated ingestion (can’t afford $10M+ tech budgets)
  • Result: Hire human middleware to bridge the gap
  • Architect designs. Assistants execute. Neither can move fast enough.

The Real Fix:

  • Not downstream (better AMS, better reporting)
  • Upstream: automate the spreadsheet-to-structure step
  • Ingestion layer: turn static files into structured data
  • Feed your Architect’s data warehouse with actual structured inputs
  • Then the Architect can build; the assistants can focus on exceptions

CTA

“If you’re building a data foundation but your team is still in Excel, let’s talk.

I’ll show you the ingestion layer. DM me.”


Key Evidence / Data Points

  • Starkweather & Shepley SVP quote: “not having to deal with all these spreadsheets and PDFs”
  • Hiring pattern: Enterprise Architect + multiple Underwriting Assistants simultaneously
  • $100M segment: too large for manual, too small for enterprise platforms
  • Architect needs structured inputs; assistants can’t structure fast enough

Writing Guidance

Tone: Diagnostic, empathetic, strategic
Voice: “I see the bind you’re in. Here’s the missing piece.”
Avoid: Being critical of their hiring decisions, oversimplifying the problem
Do: Name the paradox clearly, show you understand the bind, offer the specific solution

Key Phrases:

  • “The scaling paradox”
  • “Human middleware”
  • “Architect designs, assistants execute, neither can move fast enough”
  • “Not downstream — upstream”
  • “Ingestion layer”

Potential Engagement

Who Will Resonate:

  • SVP Operations / Heads of Operations
  • Enterprise Architects / CTOs (they feel the pain of no structured inputs)
  • Hiring managers for assistants
  • Leadership dealing with digital transformation challenges

Expected Comments:

  • “This is exactly our situation”
  • “We’ve been trying to solve this for years”
  • “How do you structure unstructured data?”
  • “What’s the ingestion layer look like?”

Response Strategy:

  • Validate the strategic vision (modern data foundation)
  • Name the missing piece (ingestion)
  • Offer demo showing spreadsheet → structured data workflow
  • Position as the bridge between strategy and execution

📝 POST 10 - If You’re Hiring “Intake Coordinators,” You’re Treating a Symptom

Date: Friday, February 21, 2026
Account: Uttam
Target: Hiring managers, ops leaders at all target firms
Campaign Brief Reference: Post 10 (lines 527-545)


Content Outline

Pillar: Problem (Niche)
Robert GPT Format: Thought Leadership - Symptom vs Root Cause
Structure: Symptom (hiring) → Root Cause (process) → Fix (automation)
Example Reference: Problem → Common Fix → Better Fix pattern


Hook (First 2-3 Lines)

“Open LinkedIn. Search ‘Intake Coordinator’ at mid-market insurance brokerages.

Dozens of open roles.

They exist because the software doesn’t talk to the software.”


Core Narrative (Body)

The Symptom (What You See):

  • Search “Intake Coordinator,” “Processing Specialist,” “Underwriting Assistant” on LinkedIn
  • Dozens of open roles at $100M brokerages
  • Job descriptions: monitor email inboxes, download attachments, enter data into systems, “gather information by carrier”
  • These roles are human middleware
  • They exist to move data between systems that don’t integrate

The Cost:

  • Every “Intake Coordinator” you hire = $70K/year
  • That’s $70K/year spent confirming that your intake process isn’t automated
  • In a hard market (higher remarketing volume), you hire more Coordinators
  • You’re scaling cost, not throughput
  • Your competitors who automate intake handle 2-3× the volume with the same team size

The Root Cause:

  • Unstructured data (PDFs, spreadsheets, emails)
  • + Rigid systems (AMS, carrier portals)
  • + No ingestion layer
  • = Need for human middleware

The Real Fix:

  • Automate the ingestion layer
  • Turn unstructured data into structured data at the source
  • Your systems can then talk to each other
  • Coordinators can focus on exceptions, not repetitive data entry
  • Or: don’t hire the Coordinator at all, redeploy that $70K to revenue-generating roles

The Question:

  • Every time you post a “Coordinator” job, ask: “Why does this role exist?”
  • If the answer is “to move data between systems,” you’re treating a symptom
  • Fix the process, not the headcount

CTA

“If you’re hiring Coordinators, let’s talk about why.

I’ll show you the automation that eliminates the role. DM me.”


Key Evidence / Data Points

  • LinkedIn search: dozens of “Intake Coordinator” roles at mid-market brokerages
  • Cost: $70K/year per role
  • Role description: monitor inbox, download attachments, enter data
  • Human middleware = symptom of broken process
  • Competitors automating = 2-3× throughput advantage

Writing Guidance

Tone: Diagnostic, non-judgmental, clear
Voice: “I’m not criticizing your hiring. I’m showing you the root cause.”
Avoid: Being preachy about automation, dismissing the people in these roles
Do: Respect the people doing the work, focus on the process failure, show the opportunity

Key Phrases:

  • “Human middleware”
  • “Treating a symptom”
  • “Why does this role exist?”
  • “Scaling cost, not throughput”
  • “Fix the process, not the headcount”

Important: Frame this as a process problem, not a people problem. The Coordinators are doing necessary work because the process is broken. Automation isn’t about eliminating jobs — it’s about eliminating unnecessary manual work so people can focus on high-value tasks.


Potential Engagement

Who Will Resonate:

  • Hiring managers (they feel the pain of constantly recruiting for these roles)
  • Operations leaders (they see the cost of scaling headcount)
  • CFOs (they track the cost per acquisition, cost per placement)
  • Anyone frustrated by the “hiring treadmill”

Expected Comments:

  • “We’ve been hiring these roles non-stop”
  • “What do we do with the existing Coordinators?”
  • “Is automation really cheaper than hiring?”
  • “We tried automation and it didn’t work”

Response Strategy:

  • Validate their current situation (they need these roles given current process)
  • Show the root cause (unstructured → structured gap)
  • Address the “what about existing employees” question head-on: redeploy to higher-value work
  • Reference Post 7 if they mention failed automation (task-specific vs general)

🎨 Design & Format Guidelines

Visual Elements (Optional Carousels)

Post 7 (LLM Failure):

  • Slide 1: Hook + “You were right to reject it”
  • Slide 2: Why general LLMs failed (creative vs precise)
  • Slide 3: What works instead (task-specific, cited, deterministic)
  • Slide 4: CTA

Post 8 (20-Hour Report):

  • Slide 1: Hook + “$75,000 a year”
  • Slide 2: The math breakdown (visual)
  • Slide 3: The opportunity cost
  • Slide 4: CTA

Post 9 (Architect + Assistants):

  • Slide 1: Hook + The paradox
  • Slide 2: Root cause diagram (Architect → needs structured data ← Assistants can’t structure fast enough)
  • Slide 3: The missing piece (ingestion layer)
  • Slide 4: CTA

Post 10 (Hiring Coordinators):

  • Slide 1: Hook + “Human middleware”
  • Slide 2: The symptom (hiring) vs root cause (process)
  • Slide 3: What automation looks like
  • Slide 4: CTA

Formatting Best Practices

Line Breaks: Use generously for readability
Bold: Key phrases and numbers
Bullets: For lists and breakdowns
Emojis: Sparingly (1-2 max) — professional tone
Length: 200-350 words per post
Hashtags: 3-5 max (#InsuranceTech #BrokerTech #InsuranceAutomation #DataAutomation)


📊 Success Metrics

Engagement Targets

Post-Level Metrics:

  • Impressions: 2,000-5,000 per post (Uttam’s account)
  • Engagement Rate: 3-5% (likes, comments, shares)
  • Comments: 5-10 per post (quality over quantity)
  • Profile Views: 50-100 new views per week

Audience Quality Metrics:

  • Comments from ICP personas (Ops leaders, Surety heads, CTOs, CFOs)
  • Connection requests from target segment ($100M brokerages)
  • DMs from prospects asking questions
  • Named account engagement (Shepherd, Scott, Starkweather, etc.)

Campaign Metrics

Tied to Insurance Campaign Gates (see Campaign Brief):

  • 3 SOWs sent
  • 1 signed contract
  • 5 meetings booked

Content’s Role:

  • Support outreach conversations
  • Build credibility and expertise positioning
  • Create conversation starters for DMs
  • Generate inbound interest

🔄 Workflow

Monday (Feb 17) - Planning & Setup

  • Review this content plan
  • Confirm posting dates with Uttam
  • Identify any adjustments needed
  • Set up Notion entries for each post

Tuesday-Friday (Feb 18-21) - Execution

  • Draft Post 7 (Tuesday morning)
  • Review and approve Post 7 (Tuesday afternoon)
  • Publish Post 7 (Tuesday 10am EST)
  • Draft Post 8 (Wednesday morning)
  • Review and approve Post 8 (Wednesday afternoon)
  • Publish Post 8 (Wednesday 10am EST)
  • Draft Post 9 (Thursday morning)
  • Review and approve Post 9 (Thursday afternoon)
  • Publish Post 9 (Thursday 10am EST)
  • Draft Post 10 (Friday morning)
  • Review and approve Post 10 (Friday afternoon)
  • Publish Post 10 (Friday 11am EST)

Throughout Week - Engagement

  • Monitor comments on each post (respond within 2-4 hours)
  • Track DMs from post engagement
  • Log conversations in HubSpot
  • Identify prospects for custom outreach
  • Use posts as conversation starters in ongoing DM threads

Friday EOD (Feb 21) - Review

  • Review engagement metrics across all 4 posts
  • Document key comments and insights
  • Identify which pain points resonated most
  • Plan follow-up content for next week
  • Update campaign tracking

Campaign Materials

Demo Materials

Content System


✅ Pre-Publishing Checklist

For each post before publishing:

  • Hook is compelling (first 2-3 lines grab attention)
  • Pain point is specific and documented (from research)
  • Evidence/data is accurate (quotes, numbers, sources)
  • Tone is empathetic and diagnostic (not preachy)
  • CTA is clear and low-friction (DM me)
  • Length is 200-350 words
  • Formatting is clean (line breaks, bold, bullets)
  • Hashtags are relevant (3-5 max)
  • Linked to campaign tracking in HubSpot
  • Ready for engagement monitoring

🎯 Post-Week Actions

If Content Performs Well

  • Repurpose into email sequences
  • Create variations for remaining 10 accounts
  • Expand into case studies
  • Turn into lead magnets or short guides
  • Amplify through partner channels

If Content Underperforms

  • Review engagement data (which posts/topics fell flat?)
  • Gather feedback from GTM team
  • A/B test different hooks or structures
  • Adjust messaging based on comments
  • Try different posting times/days

Status: Ready for execution
Owner: Luke (GTM Lead) + Uttam (posting account)
Next Review: February 21, 2026 (EOD)


Questions or Adjustments?

If you need to:

  • Adjust posting schedule — Spread posts across 2 weeks if needed
  • Change format — Add/remove carousels, adjust length
  • Shift focus — Prioritize certain pain points over others
  • Add posts — We can create Posts 11-12 for additional angles
  • Integrate with outreach — Coordinate post timing with HeyReach sequence

All outlines are ready. Just say the word and we’ll draft full posts.