Q1 Leadership × Creative + Engagement Lead Review
Prepared by: Robert Tseng (CEO)
For: Hannah Kwon (Interim Brand Lead / Creative + Engagement Lead)
Date: March 2026
Purpose: Align expectations to reality, course-correct the creative + engagement lane toward measurable engagement momentum, and set a 2-week performance improvement plan for Q2 execution.
1. Adjusted Role, Alignment, and Leveling
What we hired for (from the JD)
This role owns how people engage with Brainforge by turning approved positioning/offers into:
- Compelling creative
- Clear calls to action (CTAs)
- Obvious next steps (Viewer → Engaged → Curious → Participant → Lead)
Core mandate
- Move leads forward with momentum (not just awareness)
Non-negotiable tenets
- Creative exists to invite action; every asset must create intrigue and present a specific CTA
- Early impact matters (assets should drive Tier 1/Tier 2 engagement behavior within the first 30–60 days of the engagement)
- Documentation and repeatability are required (patterns, templates, checklists, engagement observability standards)
- Bridge content → demand: creative must ladder into real conversations that Sales/GTM can act on
What you own end-to-end (part-time scope)
- Defined workstreams you are allocated (creative/engagement design and execution)
- Quality/usability of deliverables (decision-ready and audience-appropriate)
- Documentation + handoff for what you delivered and how it should iterate
- Contribution to repeatability (templates, standards, and workflows that reduce friction)
What you do NOT own
- GTM strategy ownership / narrative or positioning definition
- Partnerships ecosystem development
- CRM, sales ops, or follow-up execution
- Metrics implementation for tracking infrastructure (your responsibility is to define what must be measurable and ensure the signals are observable via Web/AI/Sales partners)
JD source: Creative + Engagement Lead
What actually happened (drift)
Observed performance signals (evidence-based)
- High volume of creative + sales asset execution via tracked work
- In the Nov 2025–Feb 2026 window, Hannah created 67 tickets in scope (of 179 total issues assigned across the three-person team). She self-assigned 44 and delegated 23 (a 34% delegation rate).
- The ticket titles include engagement-adjacent work such as: decks, case studies/exports, LinkedIn assets (carousels + promo graphics), event planning/landing page + RSVP flow, and email/follow-ups.
Evidence source: Marketing Linear Ticket - NOV 2025 - FEB 2026
- Process friction: approved "word-for-word" copy is often unavailable
- In onboarding/workflow discovery, Hannah flagged that the most time-consuming and difficult task is getting content/copy for sales collateral, and described spending 1–2 hours prompting AI to tweak copy when exact approved copy wasn't ready.
Evidence source: Hannah Kwon (workflow discovery)
- Observability gaps still show up in instrumentation readiness
- The “Tracking Engagement” page is explicit that tracking is WIP for key surfaces (e.g. “Lovable apps”, “Landing pages”, and “Default scheduler links”), indicating the engagement tier system is not yet fully operational end-to-end.
Evidence source: ✏️ Tracking Engagement
- Hannah produced strong engagement systems thinking, but outcomes reporting is still missing
- Hannah authored a clear LinkedIn activation framework with weekly workflow and measurable success metrics (e.g. new ICP-aligned followers, engagement rate, and target account engagement).
Evidence source: LinkedIn Engagement Amplification Strategy
The drift (expectations vs what the evidence implies)
- The JD expects engagement momentum with observable signals and a system that prevents copy/CTA rework cycles.
- The evidence shows good execution throughput, but also process friction (word-for-word copy availability) and incomplete observability readiness (tracking surfaces still WIP).
- Net: the lane can unintentionally become “creative production + coordination” rather than “creative system + measurable engagement momentum,” even if the output quality is solid.
Re-leveled role: Creative + Engagement Lead (CTA Standards & Engagement Observability)
New title (recommended refinement, not a demotion):
Creative + Engagement Lead (CTA Standards & Engagement Observability)
What you own (tightened to match the drift)
- CTA + engagement-tier standards applied to every shipped asset (single clear CTA ladder; Viewer → Engaged → Curious/Participant path)
- Engagement observability readiness for shipped assets (at minimum: the tier signal mapping and the instrumented surface/link)
- Creative handoff artifacts that remove copy ambiguity (approved copy packages, CTAs, and “ready-to-ship” acceptance criteria)
- Asset-to-signal traceability: for every top asset, you can answer “which creative caused which Tier 1/Tier 2 behavior”
What you do NOT own (guardrails)
- Implementing analytics infrastructure end-to-end (you define; Web/AI implements)
- CRM follow-up workflows and nurturing execution
- GTM narrative/positioning ownership (you translate approved inputs into engagement-ready execution)
Immediate success metrics (next 2 weeks)
- 100% of the 3 highest-priority assets shipped this sprint include: (1) explicit CTA ladder, (2) engagement-tier mapping, (3) tracking readiness checklist
- Copy friction decreases: each shipped asset uses an approved “word-for-word copy package” (no 1–2 hour AI re-prompt cycles for missing approved copy)
- Tracking readiness is confirmed (no “WIP by assumption” for the chosen surfaces; at least the top 1–2 surfaces for the sprint are fully instrumented or explicitly blocked with an owner + ETA)
Counterpart role needed
To make observability real (not aspirational), you need a named counterpart for instrumentation execution:
- Engagement Analytics Implementation Owner (Web/AI): ensures PostHog/event mapping and default scheduler links exist for the surfaces you’re using.
Optional additional support (only if the gate is met):
- Creative Ops Coordinator to reduce the operational burden of queuing requests and ensuring copy packages are consistently prepared for designers.
2. Performance Improvement Plan (2-Week Sprint + Next Quarter)
Week 1: System basics (lock the fundamentals)
Copy Source-of-Truth workflow
- Create a simple “approved copy package” format: positioning excerpt + offer + final CTA copy + any supporting rationale.
- Acceptance criterion: designers receive copy that requires no AI re-prompting to reach the approved text.
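As an illustration only (the field names below are hypothetical, not an agreed schema), the approved copy package could be captured as a small structured record so a brief can be checked for completeness before handoff:

```python
from dataclasses import dataclass

@dataclass
class CopyPackage:
    """Hypothetical 'approved copy package' record; field names are illustrative."""
    asset_name: str
    positioning_excerpt: str  # approved positioning language, verbatim
    offer: str                # the specific offer this asset presents
    final_cta_copy: str       # word-for-word CTA text, no AI re-prompting needed
    rationale: str = ""       # optional supporting rationale for the copy choices

    def is_ready_to_ship(self) -> bool:
        # Acceptance criterion: designers receive final, approved copy only.
        return all([self.positioning_excerpt.strip(),
                    self.offer.strip(),
                    self.final_cta_copy.strip()])

# Invented example values, for illustration only.
pkg = CopyPackage("Q2 partner deck", "Brainforge positioning excerpt...",
                  "Free teardown session", "Book your teardown")
print(pkg.is_ready_to_ship())  # True only when all required copy fields are filled
```

A brief missing any required field would fail the check, which operationalizes the "no AI re-prompting" criterion above.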
CTA Ladder + Engagement Tier mapping per asset
- For every asset shipped in Week 1, attach a one-page mapping:
- Intended viewer + job-to-be-done
- Which Tier 2 engagement behavior this asset should trigger
- What Tier 1 action it should lead to next
- The next-step CTA used
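The one-page mapping above could likewise be expressed as a minimal record (keys and example values are illustrative, not a mandated format):

```python
# Hypothetical one-page CTA/tier mapping for a single shipped asset.
asset_mapping = {
    "asset": "LinkedIn carousel - weekly wins",  # example name, not a real ticket
    "intended_viewer": "ICP-aligned data leads",
    "job_to_be_done": "See proof the team ships fast",
    "tier2_behavior": "Comment or share with context",
    "tier1_action": "Book an intro call",
    "next_step_cta": "Get the template",
}

# Completeness check mirrors the Week 1 requirement: no field may be left blank.
missing = [k for k, v in asset_mapping.items() if not str(v).strip()]
print("ready" if not missing else f"incomplete: {missing}")
```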
Observability readiness checklist (minimum viable instrumentation)
- Choose the top 3 assets for the sprint.
- For each, confirm at least one measurable surface is instrumented and linked in the “Tracking Engagement” system.
- If a surface cannot be instrumented, document: block reason + owner + ETA (no silent WIP).
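A minimal sketch of the "no silent WIP" rule: every surface is either confirmed instrumented or explicitly blocked with an owner and ETA (statuses, surface names, and dict keys below are illustrative assumptions):

```python
def readiness_status(surface: dict) -> str:
    """Classify a tracking surface per the checklist: confirmed, blocked, or silent WIP."""
    if surface.get("instrumented"):
        return "confirmed"
    if surface.get("block_reason") and surface.get("owner") and surface.get("eta"):
        return "blocked"    # acceptable: documented with owner + ETA
    return "silent WIP"     # not acceptable: must be resolved before ship

# Invented example surfaces, echoing the WIP items named in "Tracking Engagement".
surfaces = [
    {"name": "landing page", "instrumented": True},
    {"name": "scheduler link", "instrumented": False,
     "block_reason": "default links missing", "owner": "Web/AI", "eta": "Fri"},
    {"name": "Lovable app", "instrumented": False},
]
for s in surfaces:
    print(s["name"], "->", readiness_status(s))
```

Only the third surface would fail the gate, and it fails loudly rather than shipping as assumed-WIP.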
Week 2: Q2 roadmap deliverable (written + grounded)
Week 2 deliverable is a written “Creative + Engagement Roadmap for Q2” with:
- A Q1 recap (what you shipped, what you learned, and which standards caused momentum vs dead content)
- A conversion/outcome report template filled with the evidence you have
- Two experiments per week with clear hypotheses and kill/continue criteria
Part A: Q1 recap — Bets, outputs, and results
Asset categories you shipped (from evidence)
- LinkedIn engagement assets
- LinkedIn carousels + promo graphics + weekly engagement components (from ticket titles and your LinkedIn activation strategy)
- Sales collateral & pitch materials
- Decks, case study exports, 1-pagers/whitepaper content tickets
- Event engagement surfaces
- Event planning + landing page / RSVP flow work
- Partner/channel activation collateral
- Partnership deck and related engagement assets
- Nurture-adjacent execution
- “Emails/Follow-ups” tickets show ongoing engagement sequencing work
Where momentum stalled (themes from evidence)
- Copy availability caused rework loops (word-for-word gaps)
- Tracking surfaces were still WIP, reducing the ability to confidently report “which creative caused which action”
Results needed (you must fill in actual numbers)
- For each asset cohort: views/impressions, Tier 2 behaviors (replies/comments/shares with context), Tier 1 actions (signups/downloads/meetings/messages), and the next-step conversion rate.
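Next-step conversion here is simply Tier 1 actions divided by Tier 2 behaviors for a cohort; a tiny helper, with invented numbers for illustration:

```python
def next_step_conversion(tier2_behaviors: int, tier1_actions: int) -> float:
    """Share of Tier 2 engagers who took a Tier 1 action (0.0 when no Tier 2 signal)."""
    return tier1_actions / tier2_behaviors if tier2_behaviors else 0.0

# Example cohort: 40 replies/comments/shares led to 6 signups/meetings.
print(f"{next_step_conversion(40, 6):.1%}")  # 15.0%
```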
Part B: Cohort conversion report (template)
Fill this table with real numbers (even if partial). If a number is missing, add “blocked” + owner + ETA.
| Asset Cohort | Surface | Shipped Date | Tier 2 engagement (evidence) | Tier 1 action (evidence) | Conversion / outcome | What follow-up worked | What follow-up failed |
|---|---|---|---|---|---|---|---|
| LinkedIn weekly activation (ICP influencer loop) | | | | | | | |
| Engagement-led asset (top sales collateral) | Website / landing page | | | | | | |
| Event engagement surface (landing + RSVP) | Webflow/LP | | | | | | |
Part C: Q2 experimentation roadmap (2 experiments/week)
Each experiment must include: hypothesis, target audience/cohort, channel/tactic, expected signal within 1 week, and kill/continue criteria.
Examples you should adapt (pulled from your engagement philosophy):
- CTA ladder experiment: change CTA specificity on the same asset format (e.g., “Join the next session” vs “Get the template”), compare Tier 1 action rate.
- Format experiment: carousel “Friday wins” vs short video recap (same positioning, different format) and measure Tier 2 behaviors.
- Influencer loop experiment: modify commenting depth/structure (story + lesson + explicit CTA mention) and measure ICP engagement rate.
- Offer-path experiment: gating vs ungated variant for a single lead magnet and measure conversion to the next engagement step.
Rules:
- Avoid “endless list-building” without measurable signals.
- Every experiment must ship with a tracking plan (no “we’ll figure out analytics later”).
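The required experiment fields could be held to with a simple record; this is a sketch only, and every value below (including the tracking tooling named) is an invented example, not a committed plan:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """Hypothetical per-experiment record; fields mirror the required template above."""
    hypothesis: str
    cohort: str
    channel: str
    expected_signal_1wk: str
    kill_criterion: str
    continue_criterion: str
    tracking_plan: str  # rule: no experiment ships without one

exp = Experiment(
    hypothesis="A more specific CTA lifts Tier 1 actions on the same format",
    cohort="ICP-aligned data leads on LinkedIn",
    channel="carousel",
    expected_signal_1wk=">= 5 Tier 1 actions",
    kill_criterion="< 2 Tier 1 actions after 1 week",
    continue_criterion=">= 5 Tier 1 actions after 1 week",
    tracking_plan="UTM-tagged link + event mapping",  # assumed tooling, per counterpart role
)
print(bool(exp.tracking_plan.strip()))  # a blank tracking plan would fail the ship rule
```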
Part D: Handoff discipline (creative-to-implementation)
Once a creative brief is approved, the handoff is considered complete only when:
- CTA ladder and engagement-tier mapping are explicitly written
- Copy is word-for-word approved in the “copy package”
- Tracking surfaces are confirmed or blocked with owner + ETA
“Basics competency” checklist (end of Week 2 gate)
- Copy friction is reduced via approved word-for-word copy packages
- 3 sprint assets shipped with CTA ladder + engagement-tier mapping
- Observability readiness confirmed for sprint surfaces (or documented with owner + ETA)
- Q1 recap is honest: what shipped, what didn’t, and why (no narrative guesses)
- Cohort conversion table includes real evidence or explicit blocks
- Q2 roadmap includes 2 experiments/week with kill/continue criteria
- Handoff criteria are defined and applied to at least 1 cohort source
Next quarter OKR changes (for this lane)
Because OKRs for this role were not provided as an explicit numeric set in the sources we pulled, the next-quarter objectives below are derived directly from the JD's success criteria and the engagement framework you authored.
Keep (derived success criteria; tighten to be measurable)
- Engagement lift must be reported as Tier 2 → Tier 1 conversion, not just content volume
- Every shipped asset must include clear CTAs and traceable next steps
- Documentation/repeatability must show up in templates and checklists, not only internal notes
Re-scope (new measurable KRs)
- Observability readiness: >= 90% of shipped sprint assets have confirmed tracking surfaces for the intended Tier signals
- CTA clarity: >= 95% of shipped assets pass the "single CTA ladder + engagement-tier mapping" acceptance criteria
- Copy system: reduce AI re-prompt cycles for approved copy by setting a "no-approved-copy = no-brief" rule
3. CEO Resourcing & Leverage
What Robert commits to (process leverage)
- Provide faster “approved copy packages” for any asset in the sprint (so copy ambiguity cannot create rework)
- Enforce “tracking readiness before ship” for the chosen sprint assets (reduce WIP-only assumptions)
- Pair on CTA review for Week 1 assets to lock clarity early and prevent downstream dead content
Resourcing leverage (team/budget)
- Ensure the Web/AI Engagement Analytics Implementation Owner is available to close the top instrumentation gaps identified in “Tracking Engagement”
- If Week 1 gate is met, consider light operational support (Coordinator) to reduce queue/ops load while you focus on engagement standards and measurable momentum
What Hannah must do (homework before Q2 kickoff meeting)
- Artifact 1: 3 sprint “copy packages” (word-for-word approved copy + CTA ladder)
- Artifact 2: Tracking readiness checklist for those 3 assets (confirmed or blocked + owner + ETA)
- Artifact 3: Q1 cohort recap + the filled-in table template with whatever evidence exists today (even if partial)
Timeline
- Luke-style cadence, adapted for Hannah:
- Homework due: before the end-of-week leads/exec check-in
- Sprint begins: immediately after approval
- End of Week 1 check-in: 1 week from sprint start
- End of Week 2 gate review: 2 weeks from sprint start
Non-negotiables
- Engagement observability is required (we cannot manage what we cannot see)
- Copy ambiguity cannot block creative execution: either copy is approved, or the brief is not “ready-to-ship”
- Dead content gets killed fast; standards must increase while output volume is protected by tighter acceptance criteria
This document is a working agreement between Robert (CEO) and Hannah (Creative + Engagement Lead). It will be revisited at the end-of-month decision point.