SOP: Edge-to-Activation
One-liner: Playbook to run Edge-to-Activation from audit through pilot to scale — signal recovery, identifier bridging, and attribution validation.
Last updated: 2026-04-08
Change log: v1.7 — 2026-04-08 (fixed links to knowledge/standards/; added implementation plan and linear template to Links). v1.3 — 2026-03-10 (addressed PR feedback: complete QA & Testing Procedure, defined thresholds and escalation path, added missing artifacts, clarified warehouses and implementation levels, reframed Phase 1 gate, added data governance section, added Cursor skill fallback)
Domain / Service line: Data Platform & Analytics / Attribution & Signal Recovery
Tech anchors: Cloudflare Workers (or equivalent edge), GA4, Segment, Snowflake, dbt, Omni/Rill, M365, Slack
How to use this doc
This document is the delivery playbook for Edge-to-Activation. Use it for staffing, phase checklists, gates, and SOW copy. What’s in [brackets] varies by engagement; the rest is mandatory for each run. Edge is an attribution and validation layer — not a product analytics replacement; position that clearly with the client.
Offering package (same folder): Implementation plan (timeline, milestones, SOW paste block) · Linear template (issues/milestones to clone) · Offer · Demo
Technical reality (how it actually works)
Edge Layer Tracking: We use a CDN (Cloudflare, Fastly, or equivalent) to capture traffic before the client-side pixel fires. That gives a fuller, high-fidelity stream of visits and conversions. This is an additional data stream — it does not eliminate the need for client-side pixels or server-to-server tracking. Many clients (and their partners: affiliates, ad platforms, etc.) still operate in a pixel-first world; those systems consume pixel data. We add Edge so they have a more accurate baseline and can reconcile; we don’t replace.
Integration path: Edge data cannot be sent directly into downstream systems (ad platforms, CDPs, BI tools). We stream Edge events into the data warehouse first, then:
- Use reverse ETL (or a server connection) to push Edge-derived data into activation tools, or
- Rely on a CDP (e.g. Segment) to receive from the warehouse and forward to destinations.
So the flow is: Edge (CDN) → warehouse → reverse ETL or CDP → downstream. Scope and timeline must account for warehouse + reverse ETL or CDP; “direct Edge into system” is not an option.
Client-side pixel verification / identity stitching: Edge can do identity and session stitching more accurately than client-side alone, but because affiliates and many platforms still depend on pixel data, we need verification and coexistence with the client-side pixel. That means extra integration work: mapping Edge sessions to client-side identifiers, testing that both streams align where expected, and documenting where they diverge (the “gap”). Set client expectation that this coexistence and verification step is part of the work.
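The bridging step above can be sketched as a simple join. This is a minimal illustration, not the implementation: the field names (`session_id`, `fp_cookie`, `ga4_client_id`) are hypothetical, and the real join keys depend on the client's stack.

```python
# Sketch: map Edge sessions to client-side identifiers via a shared
# first-party cookie, splitting results into matched sessions and the
# "gap" (Edge-only sessions where the client-side pixel never fired).
# Field names are illustrative placeholders.

def bridge_sessions(edge_sessions, pixel_hits):
    """edge_sessions: [{'session_id', 'fp_cookie'}]; pixel_hits: [{'fp_cookie', 'ga4_client_id'}]."""
    pixel_by_cookie = {h["fp_cookie"]: h["ga4_client_id"] for h in pixel_hits}
    matched, gap = [], []
    for s in edge_sessions:
        cid = pixel_by_cookie.get(s["fp_cookie"])
        if cid is not None:
            matched.append({**s, "ga4_client_id": cid})
        else:
            gap.append(s)  # Edge saw it; the client-side pixel did not fire
    return matched, gap

edge = [{"session_id": "e1", "fp_cookie": "c1"},
        {"session_id": "e2", "fp_cookie": "c2"}]
pixel = [{"fp_cookie": "c1", "ga4_client_id": "G-111"}]
matched, gap = bridge_sessions(edge, pixel)
```

In practice this join runs in the warehouse (dbt model or view), and the documented "gap" rows feed the discrepancy report.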
Scope & configuration
Default bundle (included):
- Edge Layer Tracking (CDN) — Deploy capture at the CDN (Cloudflare, Fastly) so we see traffic before the client-side pixel fires; high-fidelity baseline of visits and conversions. Pixels and server-to-server tracking remain in place; Edge is an additional stream.
- Identification & Session Mapping — Edge-level session and identity stitching (often more accurate than client-side); requires verification and coexistence with client-side pixels where affiliates/platforms depend on pixel data.
- Warehouse → Activation — Stream Edge data into the client’s data warehouse; then connect to downstream tools via reverse ETL or CDP (e.g. Segment). We do not pipe Edge directly into systems.
Optional add-ons:
- Compliance Controls (PII Cleaning) — When regulated or privacy-sensitive; Edge-side PII stripping and consent-aware configuration.
- Bot Filtering & Traffic Quality — When paid traffic quality or AI training data integrity is a concern.
Implementation levels:
| Level | Phase | Deliverables | Typical Effort | Typical Cost |
|---|---|---|---|---|
| Level 1 (Validation) | Phase 0 | Cloudflare Worker implementation (sessions + conversions), warehouse schema configuration (at least partial), Edge events streaming to warehouse, demo-ready discrepancy report (“Edge sees X vs client tool sees Y”), attribution gap by source | 1 week | $8K |
| Level 2 (Attribution) | Phase 1 | Source attribution + conversion attribution + gap by source, Edge → warehouse → reverse ETL/CDP, testing framework | 4–6 weeks | $25K |
| Level 3 (Personalization) | Phase 2 | Edge linked to Segment/warehouse, cross-reference with transactions, full channel rollout, ongoing maintenance | Ongoing retainer | $20K/month |
Most engagements include Level 1 (Phase 0) + Level 2 (Phase 1). Level 3 (Phase 2) is optional retainer for full rollout and ongoing maintenance.
Prereqs
- Data & access: Access to analytics tools (GA4, Segment, etc.), ad platforms, and accounting or transaction data. Data warehouse required for activation (Edge → warehouse → reverse ETL or CDP); not optional if we’re connecting Edge to downstream tools.
- Environment: Client (or approved) CDN/Edge — Cloudflare, Fastly, or equivalent — for Edge Layer capture. Client-side pixels and any server-to-server tracking remain; we add Edge in parallel.
- SMEs: Marketing owner (attribution), analytics/engineering contact, finance stakeholder for validation. Engineering must be available for warehouse, reverse ETL or CDP, and pixel-coexistence work.
Data Governance & Privacy
Default Stance: Edge-to-Activation captures request metadata, cookies, query parameters, and session identifiers. We do not capture PII (names, emails, addresses) unless explicitly required and consented.
PII Handling
- Default: No PII captured at Edge layer
- If PII required: Use Compliance Controls add-on (PII stripping and consent-aware configuration)
- Storage: Edge events stored in warehouse; PII fields excluded or hashed if present
- Access: Warehouse access restricted to service accounts; no direct PII queries
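The "excluded or hashed" rule can be sketched as a sanitization pass applied before events are written to the warehouse. The PII field list below is illustrative; define the real list per engagement.

```python
import hashlib

# Sketch: drop (default) or one-way hash PII fields before an Edge event
# is stored. The field list is a placeholder, not a complete PII taxonomy.
PII_FIELDS = {"email", "name", "address"}

def sanitize_event(event, hash_instead_of_drop=False):
    clean = {}
    for key, value in event.items():
        if key in PII_FIELDS:
            if hash_instead_of_drop and value is not None:
                # Hash so joins still work without exposing raw PII.
                clean[key] = hashlib.sha256(str(value).encode()).hexdigest()
            # else: field is excluded entirely (the default stance)
        else:
            clean[key] = value
    return clean

event = {"session_id": "s1", "email": "a@b.com", "path": "/checkout"}
dropped = sanitize_event(event)
hashed = sanitize_event(event, hash_instead_of_drop=True)
```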
Consent Signals
- Default: Edge capture does not require consent (request metadata only)
- If consent required: Integrate consent management platform (CMP) signals into Edge capture logic
- Documentation: Document consent handling approach in SOW and runbooks
Data Retention
- Default: Edge events retained per client’s warehouse retention policy
- Recommendation: 13 months minimum for attribution analysis
- Archival: Older data can be archived; document archival policy
DPIA Triggers
Data Protection Impact Assessment (DPIA) required if:
- Processing special category data (health, financial, etc.)
- Large-scale processing (>1M events/day)
- Automated decision-making based on Edge data
- Cross-border data transfers (EU → US, etc.)
Process: Work with client’s legal/compliance team to complete DPIA before Phase 1 if triggers present.
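The trigger list above can be encoded as a simple intake check run during scoping. A sketch, assuming illustrative inputs; it does not replace the legal/compliance review.

```python
# Sketch: flag whether a DPIA is needed before Phase 1, per the triggers
# listed in this SOP. Inputs are engagement facts gathered during scoping.
def dpia_required(special_category_data, events_per_day,
                  automated_decisions, cross_border_transfer):
    triggers = []
    if special_category_data:
        triggers.append("special category data")
    if events_per_day > 1_000_000:
        triggers.append("large-scale processing (>1M events/day)")
    if automated_decisions:
        triggers.append("automated decision-making")
    if cross_border_transfer:
        triggers.append("cross-border transfers")
    return triggers  # non-empty list => complete DPIA before Phase 1

triggers = dpia_required(False, 2_500_000, False, True)
```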
Compliance Add-on
When Compliance Controls add-on is included:
- Edge-side PII stripping implemented
- Consent-aware configuration enabled
- DPIA completed (if required)
- Data retention policy documented
Linear Project Setup
When to instantiate: After SOW is signed and scope confirmed, before Phase 0 starts (or before Phase 1 if Phase 0 is not included).
Note: In practice, Phase 0 and Phase 1 are often rolled up together, going straight to reverse ETL enablement. If combining phases, create tickets for both Phase 0 (Worker + warehouse schema) and Phase 1 (advanced identifiers + reverse ETL/CDP) in a single engagement.
Preferred Method: Use the Edge-to-Activation Tickets Cursor skill (.cursor/skills/edge-to-activation-tickets/SKILL.md) to create all tickets with a single command. Say “Create Edge-to-Activation tickets for {client}” in Cursor chat. The skill will prompt for required information and create all tickets automatically.
Manual Method: If you prefer to create tickets manually or the Cursor skill is unavailable, follow the structure below. See the Implementation Playbook for detailed MCP examples, or use the ticket templates in the Cursor skill fallback section below.
Required Information Before Creating Tickets
- Client name and Linear team assignment
- SOW signed and phase scope confirmed (Phase 0, Phase 1, Phase 2)
- Key stakeholders identified (GTM lead, data engineer, platform owner)
- Access requirements status (Cloudflare with Worker permissions, BigQuery service account, analytics tools)
- Timeline confirmed (target dates for each phase)
Ticket Structure
Create multiple full tickets per phase (no subtickets/subtasks). Each ticket is a complete Linear issue.
Phase 0 Ticket (if included)
- Title: “Phase 0: Signal Recovery Audit - {Client}”
- Labels: [“Phase 0”, “Edge-to-Activation”]
- Description: Implement Cloudflare Worker to capture sessions and conversions, configure warehouse schema (at least partial), stream to warehouse, produce discrepancy report. Use Phase 0 checklist above as acceptance criteria.
- State: “ToDo” or “Ready for Work”
- Priority: 2 (High)
- Note: Phase 0 involves almost full Worker implementation + warehouse schema configuration (data needs to stream somewhere). Captures sessions and conversions to enable discrepancy comparison. Advanced identifiers added in Phase 1. In practice, Phase 0+1 are often combined.
Phase 1 Tickets (multiple full tickets)
Create separate tickets for major components:
- “Phase 1: Enhance Cloudflare Workers - {Client}” — Add advanced identifiers and client-side tool parameters (Worker already implemented in Phase 0)
- “Phase 1: Enhance {Warehouse} Schema - {Client}” — Add advanced identifier columns and client-side tool parameters (basic schema already configured in Phase 0) (replace {Warehouse} with actual warehouse: BigQuery, Snowflake, Redshift, Databricks)
- “Phase 1: Implement Testing Framework - {Client}” — Dev → single-page → subdirectory → full rollout
- “Phase 1: Documentation and Handoff - {Client}” — Runbooks, reports, client enablement
Each ticket:
- Labels: [“Phase 1”, “Edge-to-Activation”]
- blockedBy: Previous Phase 1 ticket or Phase 0 ticket
- State: “Backlog” (except first ticket may be “ToDo” if Phase 0 not included)
Phase 2 Ticket (if retainer included)
- Title: “Phase 2: Rollout & Maintenance - {Client}”
- Labels: [“Phase 2”, “Edge-to-Activation”, “Retainer”]
- Description: Reference retainer scope and ongoing work from Phase 2 section above
- State: “Backlog” (starts after Phase 1)
- Note: Phase 2 is ongoing; ticket may be updated rather than closed
Label and State Recommendations
- Phase Labels: Use [“Phase 0”], [“Phase 1”], [“Phase 2”] for filtering and organization
- Service Label: [“Edge-to-Activation”] on all tickets
- AI vs Human: Add ai-assignable or human-only per standards/03-knowledge/engineering/setup/linear-labels-ai-human.md
- State: First ticket in “ToDo” or “Ready for Work”; others in “Backlog” until dependencies clear
- Dependencies: Use blockedBy to link tickets within and across phases
Ticket Format
Follow the required Linear ticket format from standards/04-prompts/tickets/linear-ticket-generation-from-transcript.md:
- Title starts with phase prefix (“Phase 1: Setup…”)
- Description includes: Context, Goal, Scope (In/Out), Acceptance Criteria, Notes/Constraints, Open Questions
Cursor Skill Fallback
If the Cursor skill is unavailable, use these ticket templates to create tickets manually in Linear:
Phase 0 Ticket Template:
Title: Phase 0: Signal Recovery Audit - {Client}
Description:
## Context
New Edge-to-Activation engagement for {Client}. SOW signed, scope confirmed. Phase 0 focuses on signal recovery audit by implementing Cloudflare Worker to capture sessions and conversions.
## Goal
Implement Cloudflare Worker to capture sessions and conversions (thank you page visits), stream to warehouse, and produce discrepancy report showing "Edge sees X vs client tool sees Y" comparison.
## Scope
**In scope:** Cloudflare Worker implementation (almost full implementation), Worker deployment, warehouse schema configuration (at least partial - tables and schema to receive Edge events), Edge events streaming to warehouse, capture sessions and conversions, discrepancy report, attribution gap by source, scope alignment for Phase 1.
**Out of scope:** Advanced identifiers and client-side tool parameters (GA4 client ID, Segment anonymous ID - these come in Phase 1), reverse ETL/CDP configuration (though Phase 0+1 are often combined), full rollout.
## Acceptance Criteria
- Cloudflare Worker implemented and deployed (captures sessions and conversions)
- Worker runs before client-side pixel fires
- Warehouse schema configured (at least partially) - events table, sessions table, basic identifier columns
- Edge events streaming to warehouse successfully
- Baseline sessions and conversions captured from Edge
- Discrepancy report generated (client vs. Edge comparison)
- Attribution gap quantified by source/channel
- Client sign-off on findings and pilot scope
## Notes / Constraints
Edge is additional data stream; pixels remain. Phase 0 implements almost full Worker + warehouse schema (data needs to stream somewhere). Only missing advanced identifiers/parameters that connect to client-side tools (added in Phase 1). In practice, Phase 0+1 are often combined, going straight to reverse ETL enablement. Duration: 1 week (or combined with Phase 1).
## Open Questions
[Add any open questions]
Phase 1 Ticket Templates (create 4 tickets):
- Cloudflare Workers Enhancement:
Title: Phase 1: Enhance Cloudflare Workers - {Client}
Description:
## Context
Phase 1 pilot implementation for {Client}. Cloudflare Worker already implemented in Phase 0 (captures sessions and conversions).
## Goal
Add advanced identifiers and client-side tool parameters to existing Worker (GA4 client ID, Segment anonymous ID, etc.) and enhance for pilot channels.
## Scope
**In scope:** Add advanced identifiers/parameters connected to client-side tools, enhance Worker for pilot channels, verify Worker performance, cost validation.
**Out of scope:** Initial Worker implementation (done in Phase 0), BigQuery/Snowflake schema (separate ticket), testing framework, documentation.
## Acceptance Criteria
- Advanced identifiers added (GA4 client ID, Segment anonymous ID, etc.)
- Worker enhanced for pilot channels
- No performance impact on page load
- Cost validated (~$50/month for 100M requests)
## Notes / Constraints
Worker already implemented in Phase 0. Phase 1 adds identifiers and parameters that connect to client-side tools. Fire workers on all requests by default. See implementation playbook for technical details.
## Open Questions
[Add any open questions]
- Warehouse Schema Enhancement (replace {Warehouse} with BigQuery, Snowflake, etc.):
Title: Phase 1: Enhance {Warehouse} Schema - {Client}
Description:
## Context
Phase 1 pilot. Cloudflare Workers deployed. Warehouse schema partially configured in Phase 0 (basic tables and schema).
## Goal
Enhance warehouse schema with advanced identifier columns and parameters connected to client-side tools (GA4 client ID, Segment anonymous ID, etc.).
## Scope
**In scope:** Add advanced identifier columns for bridging, enhance schema for client-side tool parameters, identifier mapping logic, event insertion testing.
**Out of scope:** Initial warehouse setup (done in Phase 0), reverse ETL/CDP configuration, testing framework, documentation.
## Acceptance Criteria
- Advanced identifier columns added (GA4 client ID, Segment anonymous ID, etc.)
- Schema enhanced for client-side tool parameters
- Identifier bridging columns configured
- Events successfully inserting with new identifiers
- Identifier mapping logic implemented
## Notes / Constraints
Basic warehouse schema already configured in Phase 0. Phase 1 adds advanced identifiers and parameters. Two-table pattern: events (raw) + sessions (aggregated). See implementation playbook for schema patterns.
## Open Questions
[Add any open questions]
- Testing Framework:
Title: Phase 1: Implement Testing Framework - {Client}
Description:
## Context
Phase 1 pilot. Cloudflare Workers and {Warehouse} configured.
## Goal
Implement phased testing: dev → single-page → subdirectory → full rollout.
## Scope
**In scope:** Dev testing, single-page test, subdirectory test, full rollout, validation.
**Out of scope:** Cloudflare Workers setup, warehouse configuration, documentation.
## Acceptance Criteria
- Dev environment tested and validated
- Single-page test completed
- Subdirectory test completed
- Full rollout completed
- No performance issues or errors
- Client sign-off on testing results
## Notes / Constraints
Never compress testing phases. See SOP testing strategy section.
## Open Questions
[Add any open questions]
- Documentation and Handoff:
Title: Phase 1: Documentation and Handoff - {Client}
Description:
## Context
Phase 1 pilot implementation complete.
## Goal
Create runbooks, discrepancy reports, and enable client team.
## Scope
**In scope:** Attribution validation report, discrepancy analysis, runbooks, client walkthrough, enablement.
**Out of scope:** Phase 2 rollout.
## Acceptance Criteria
- Attribution validation report delivered
- Discrepancy analysis completed
- Runbooks created and reviewed
- Client walkthrough completed
- Team enabled on Edge data usage
- Client sign-off on Phase 1
## Notes / Constraints
Documentation should reference SOP and implementation playbook.
## Open Questions
[Add any open questions]
Phase 2 Ticket Template (if retainer included):
Title: Phase 2: Rollout & Maintenance - {Client}
Description:
## Context
Phase 1 pilot complete. Phase 2 retainer for full rollout and ongoing maintenance.
## Goal
Roll out Edge across all remaining channels and provide ongoing maintenance.
## Scope
**In scope:** Rollout to all channels, ongoing maintenance, reporting integration, runbooks, support.
**Out of scope:** Phase 1 pilot work.
## Acceptance Criteria
- Channel rollout backlog agreed
- Per-channel rollout on sustained cadence
- Documentation updated as requirements change
- Client has clear point of contact and escalation path
## Notes / Constraints
Phase 2 is retainer-based ($20K/month, minimum 3-6 months). Ongoing delivery, not one-time.
## Open Questions
[Add any open questions]
Label and State Guidance:
- Labels: [“Phase 0”], [“Phase 1”], [“Phase 2”], [“Edge-to-Activation”]
- State: First ticket “ToDo”, others “Backlog”
- blockedBy: Link dependencies (Phase 1 tickets blockedBy Phase 0, etc.)
Detailed MCP Usage
For detailed Linear MCP examples and workflow, see the Implementation Playbook.
➡️ Phase 0 / Pre- — Audit (optional)
Implement the Cloudflare Worker to capture sessions and conversions, configure the warehouse schema (at least partially), then quantify signal loss; this is the low-cost wedge into the engagement.
Duration: 1 week
Outcome: Discrepancy report (client-layer vs. Edge-layer), scope agreed, signal loss quantified. Client understands Edge is additive (pixels and s2s stay).
What Phase 0 includes:
- Almost full Cloudflare Worker implementation — Worker deployed and capturing sessions and conversions (thank you page visits)
- Warehouse schema configuration (at least partial) — Tables and schema set up to receive Edge events (data needs to stream somewhere)
- Basic discrepancy comparison — “Edge sees this many vs client tool sees this many” visibility comparison
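The capture logic the Worker implements can be summarized in a few lines. Shown here as a language-agnostic sketch in Python (the actual Worker runs at the CDN in its own runtime); the conversion paths and field names are illustrative.

```python
# Sketch of Phase 0 capture logic: every request produces a session event,
# and thank-you-page visits additionally produce a conversion event.
# Paths and field names are illustrative placeholders.
CONVERSION_PATHS = {"/thank-you", "/order-confirmed"}

def classify_request(path, session_cookie):
    events = [{"type": "session", "session_id": session_cookie, "path": path}]
    if path in CONVERSION_PATHS:
        events.append({"type": "conversion", "session_id": session_cookie,
                       "path": path})
    return events  # stream to the warehouse, never directly to downstream tools

events = classify_request("/thank-you", "s-123")
```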
What Phase 0 excludes (added in Phase 1):
- Advanced identifiers and parameters connected to client-side tools (GA4 client ID, Segment anonymous ID, etc.)
- Reverse ETL/CDP configuration (though in practice, Phase 0+1 are often combined and go straight to reverse ETL enablement)
- Full identifier bridging and mapping
Common practice: Phase 0 and Phase 1 are often rolled up together, going straight to reverse ETL enablement. If combining phases, include Phase 1 scope (advanced identifiers, reverse ETL/CDP) in the same engagement.
Checklist:
- Cloudflare Worker implemented and deployed — captures sessions and conversions (thank you page visits)
- Worker runs before client-side pixel fires (Edge capture at CDN layer)
- Warehouse schema configured (at least partially) — Tables and schema set up to receive Edge events (events table, sessions table, basic identifier columns)
- Edge events streaming to warehouse successfully
- No removal of pixels or server-to-server; Edge is an additional data stream
- Capture baseline: sessions and conversions from Edge
- Generate discrepancy report (client vs. Edge comparison) — “Edge sees X vs client tool sees Y”
- Quantify attribution gap by source/channel
- Gate: Client sign-off on discrepancy findings and scope for pilot (including warehouse + reverse ETL or CDP path) before Phase 1
DRIs (if applicable): DRI: Data/analytics engineer for Cloudflare Worker implementation, warehouse setup, and pipelines; DRI: Platform owner for Worker deployment; DRI: GTM lead for client readout and scope alignment.
Deliverables:
Attribution Validation Report (template structure):
- Executive summary: Edge vs client-side comparison, key findings
- Methodology: How Edge capture works, comparison approach
- Findings: Traffic volume comparison, conversion comparison, gap by source/channel
- Recommendations: Next steps, pilot scope proposal
- Appendix: Technical details, data freshness, identifier mapping approach
Discrepancy Analysis Format:
- Side-by-side comparison table: Client-side metrics vs Edge metrics
- Gap analysis: Where Edge sees more (recovered signal) vs where client-side sees more (expected)
- By source/channel breakdown: Which channels have largest gaps
- Sample recovered conversions: Examples of conversions Edge attributed that client-side missed
- Visualizations: Charts showing comparison and gaps
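The gap-by-source breakdown above reduces to a per-source comparison of conversion counts in the two streams. A minimal sketch; the numbers are illustrative, not benchmarks.

```python
# Sketch: compute attribution gap by source from per-source conversion
# counts in the Edge and client-side streams. Positive gap_pct means
# recovered signal (Edge saw conversions the client-side tool missed).
def gap_by_source(edge_counts, client_counts):
    rows = []
    for source in sorted(set(edge_counts) | set(client_counts)):
        e, c = edge_counts.get(source, 0), client_counts.get(source, 0)
        gap_pct = round(100 * (e - c) / e, 1) if e else 0.0
        rows.append({"source": source, "edge": e, "client": c,
                     "gap_pct": gap_pct})
    return rows

rows = gap_by_source({"paid_search": 200, "affiliate": 80},
                     {"paid_search": 150, "affiliate": 80})
```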
➡️ Phase 1 / Live — Pilot
Tangible pilot on the most important channels only, with a clear end-to-end testing framework and validation — so there’s enough confidence to move to Phase 2. Do not try to solve full affiliate / all-channel rollout in pilot.
Duration: 4–6 weeks (or combined with Phase 0 for faster delivery)
Common practice: Phase 0 and Phase 1 are often rolled up together, going straight to reverse ETL enablement. If combining phases, include both Phase 0 (Worker + warehouse schema) and Phase 1 (advanced identifiers + reverse ETL/CDP) scope in a single engagement. This is the typical delivery model for faster time-to-value.
Outcome: Edge → warehouse → reverse ETL or CDP live on key channels (e.g. paid, direct, 1–2 priority partners); advanced identifiers and client-side tool parameters added (GA4 client ID, Segment anonymous ID, etc.); end-to-end testing framework and validation in place; discrepancy view and success criteria met; client confident to commit to Phase 2 retainer for full rollout and maintenance.
What Phase 1 adds beyond Phase 0:
- Advanced identifiers and parameters connected to client-side tools (GA4 client ID, Segment anonymous ID, etc.)
- Full identifier bridging and mapping logic
- Reverse ETL/CDP configuration for activation
- End-to-end testing framework
- Pixel verification and coexistence documentation
Scope boundary: Pilot = most important channels only. Affiliate and “all other channels” = Phase 2 retainer. Edge for affiliate in e‑commerce is high-maintenance (publishers and requirements change); we prove the approach in pilot, then Phase 2 handles rollout and ongoing maintenance.
Before go-live, ensure:
- Edge (CDN) capture stable and low-latency for pilot channels (Cloudflare Worker already implemented in Phase 0; verify it’s working correctly)
- Edge events streaming into data warehouse (no direct push to downstream systems)
- Reverse ETL or CDP (e.g. Segment) configured for pilot scope
- Advanced identifiers added — GA4 client ID, Segment anonymous ID, and other client-side tool parameters connected
- Client-side pixel verification for pilot channels only; identity/session stitching validated where relevant
- End-to-end testing framework documented: what we test, how we validate, pass/fail criteria
- Discrepancy dashboard or report template ready for client
- Success and check metrics defined with thresholds:
- Match rate: ≥85% (where both Edge and client-side fire)
- Attribution coverage: ≥95% (revenue with known source)
- Edge capture rate: ≥95%
- Warehouse insertion success: ≥99%
- Sync success rate: ≥95%
- QA & testing per procedure (see QA & Testing Procedure below) — all validation checks passed
- Gate: Client validation of pilot results; Phase 2 proposal delivered and decision date scheduled (rollout + maintenance)
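The thresholds above make the go-live gate mechanical. A sketch of the check, using the thresholds defined in this SOP; measured values are illustrative.

```python
# Sketch: gate check against the pre-go-live thresholds defined above.
THRESHOLDS = {
    "match_rate": 0.85,
    "attribution_coverage": 0.95,
    "edge_capture_rate": 0.95,
    "warehouse_insertion_success": 0.99,
    "sync_success_rate": 0.95,
}

def golive_check(measured):
    failures = {k: v for k, v in measured.items()
                if k in THRESHOLDS and v < THRESHOLDS[k]}
    return failures  # empty dict => all gates passed

failures = golive_check({"match_rate": 0.87, "attribution_coverage": 0.93,
                         "edge_capture_rate": 0.97,
                         "warehouse_insertion_success": 0.995,
                         "sync_success_rate": 0.96})
```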
Daily / weekly must-haves:
- Monitor Edge vs. client match rate and capture reliability for pilot channels
- Weekly or biweekly discrepancy view for client (treatment vs. baseline)
- Escalation path followed if Edge capture, warehouse load, or reverse ETL/CDP sync degrades (see Escalation Path section)
DRIs (if applicable): DRI: Data/analytics engineer for pipelines, warehouse, reverse ETL/CDP, and pilot-channel verification; DRI: Platform owner for Edge infra; DRI: GTM lead for client comms and readouts.
Deliverables:
Identifier Mapping Runbook:
- Purpose: Map Edge session identifiers to client-side identifiers (GA4 client ID, Segment anonymous ID, etc.)
- Process: Extract identifiers from Edge events and client-side pixel data, create mapping table in warehouse
- Validation: Verify mapping accuracy (≥85% match rate where both fire)
- Maintenance: Update mapping as identifiers change
Client vs. Edge Comparison View Spec:
- Dashboard or report showing side-by-side comparison
- Metrics: Traffic volume, conversions, attribution by source
- Time period: Daily, weekly, monthly views
- Filters: By source, channel, date range
- Gap highlighting: Visual indicators for significant discrepancies
Warehouse → Reverse ETL/CDP Runbook:
- Purpose: Configure reverse ETL or CDP to receive Edge data from warehouse
- Process: Set up sync job, configure destination mappings, test data flow
- Validation: Verify events reach destination, check format and schema
- Monitoring: Set up alerts for sync failures
- Troubleshooting: Common issues and resolution steps
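The validation step in this runbook can be sketched as a count reconciliation per sync window, using the 95% sync-success target from this SOP; the inputs are illustrative.

```python
# Sketch: validate a reverse ETL/CDP sync by comparing warehouse row
# counts with destination receipts for one sync window.
def validate_sync(warehouse_rows, destination_rows, min_rate=0.95):
    if warehouse_rows == 0:
        return {"ok": True, "rate": 1.0}  # nothing to sync this window
    rate = destination_rows / warehouse_rows
    return {"ok": rate >= min_rate, "rate": round(rate, 3)}

result = validate_sync(warehouse_rows=10_000, destination_rows=9_620)
```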
End-to-End Testing Framework:
- Test cases: Edge capture → warehouse → reverse ETL/CDP → destination
- Validation steps: Check each stage of data flow
- Pass/fail criteria: Thresholds for each validation step (see QA & Testing Procedure)
- Sign-off: Client approval before go-live
➡️ Phase 2 / Post- — Rollout & maintenance (retainer)
Roll out Edge across all other channels (including affiliate); ongoing maintenance as publishers and requirements change. Retainer only — typically $20,000/month — minimum commitment 3–6 months.
Duration: Ongoing (monthly retainer)
Outcome: Edge → warehouse → reverse ETL/CDP rolled out across all channels; affiliate and other high-maintenance channels covered; ongoing updates as publishers and requirements change; reporting integrated; runbooks and support in place. No “one and done” — Phase 2 is sustained delivery.
Why retainer: Edge deployment for affiliate in e‑commerce is a moving target (publishers change, requirements shift). Fixed project fee doesn’t fit; retainer aligns us with reality and keeps the engagement sustainable.
Retainer scope (typical $20K/mo):
- Rollout Edge capture and warehouse → reverse ETL/CDP to all remaining channels (beyond pilot)
- Ongoing maintenance: publisher changes, requirement updates, pixel verification and coexistence for new or changing partners
- Integrate Edge-derived data into client reporting (dashboards, GTM reviews)
- Data validation, discrepancy monitoring, escalation path
- Runbooks, change management guidelines, and support (Slack/email, defined response SLA)
- Monthly or biweekly check-in and prioritization (new channels, issues, enhancements)
Checklist (ongoing, not one-time):
- Channel rollout backlog agreed with client (priority order)
- Per-channel rollout and verification (affiliate, additional paid, etc.) on a sustained cadence
- Documentation updated as publishers/requirements change
- QA and validation per QA & Testing Procedure for each new channel or material change
- Client has clear point of contact and escalation path
QA & Testing Procedure
Purpose: Ensure consistent QA and testing across all Edge-to-Activation engagements. Use this procedure for Phase 0 and Phase 1 validation.
We need a dedicated playbook for QA and testing so every run is consistent and we don’t ship broken or misinterpreted data. Until the full playbook exists, use the checklist below as a starting point and expand in a separate doc (link here when created).
Playbook should cover:
- Edge capture validation — CDN capture firing before pixel; event volume and schema; latency and drop checks.
- Warehouse load — Edge events landing in warehouse; freshness; dedupe and key checks.
- Reverse ETL / CDP sync — Data leaving warehouse for Segment (or other CDP) or reverse ETL; destination receipt and format; error handling.
- Client-side vs. Edge reconciliation — Where we expect alignment (e.g. same user, same session); where we expect gap (e.g. blocked pixels); tests and thresholds.
- Pixel verification & coexistence — When client-side pixel and Edge both fire: mapping Edge session to pixel IDs; tests for affiliate or platform flows that depend on pixel data; documentation of “pixel required” vs “Edge sufficient” by use case.
- Pre-go-live checklist — Sign-off criteria before we call pilot or scale “done” (e.g. discrepancy report validated, match rate above X%, no critical sync failures).
Interim checklist (use until full playbook exists):
- Edge events present in warehouse and within expected latency
- Reverse ETL or CDP receiving Edge-derived data; no persistent failures
- Discrepancy report run and reviewed (client vs. Edge); gaps explained
- Where affiliates/platforms need pixel: verification that pixel + Edge coexistence is documented and tested
- Client sign-off on pilot/scale criteria before handoff
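The first interim check ("within expected latency") can be sketched as a freshness test on the latest Edge event in the warehouse. The 30-minute window is an assumed default, to be set per engagement.

```python
from datetime import datetime, timedelta, timezone

# Sketch: confirm recent Edge events landed in the warehouse within an
# expected latency window. max_lag is an illustrative default.
def freshness_ok(latest_event_ts, now, max_lag=timedelta(minutes=30)):
    return (now - latest_event_ts) <= max_lag

now = datetime(2026, 4, 8, 12, 0, tzinfo=timezone.utc)
fresh = freshness_ok(now - timedelta(minutes=10), now)
stale = freshness_ok(now - timedelta(hours=2), now)
```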
Owner / next step: [Assign DRI for “Edge-to-Activation QA & Testing Playbook” — build out full runbook and link from this section.]
Deliverables (standard)
- Phase 0 & 1: Attribution validation report, discrepancy analysis, tracking/measurement plan; pixel verification / coexistence doc for pilot channels; end-to-end testing framework (test cases, validation steps, sign-off criteria). Edge → warehouse pipeline; reverse ETL or CDP configuration; identifier mapping and bridging logic. Client vs. Edge comparison views, attribution gap reports. Runbooks; QA & Testing Procedure (see section above). Walkthrough and enablement for GTM/analytics.
- Phase 2 (retainer): Ongoing rollout to all channels; maintenance and updates (publishers, requirements); reporting integration; runbooks and support; monthly/biweekly cadence and SLA.
Escalation Path
Purpose: Define escalation procedures when Edge capture, warehouse load, or reverse ETL/CDP sync degrades or fails.
Severity Levels
| Severity | Definition | Response Time | Channel | DRI |
|---|---|---|---|---|
| Critical | Edge capture down, warehouse load failing, data loss, client-facing outage | <1 hour | Slack edge-activation-alerts + phone | Platform owner (Edge), Data engineer (warehouse) |
| High | Match rate <80%, sync failures >5%, client-reported data quality issues | <4 hours | Slack + email | Data engineer + GTM lead |
| Medium | Performance degradation, minor sync delays, non-critical errors | <1 business day | Slack | Data engineer |
| Low | Documentation updates, enhancement requests, non-urgent questions | Next business day | Slack or email | GTM lead |
Escalation Process
- Initial Detection: Monitor detects issue or client reports problem
- Severity Assessment: DRI assesses severity level
- Immediate Response: DRI responds per SLA for severity level
- Escalation Chain (if not resolved within SLA):
- Critical/High: Escalate to Platform owner → CSO
- Medium: Escalate to Data engineer lead → Platform owner
- Low: Escalate to GTM lead → CSO
Monitoring and Alerts
Automated Monitoring:
- Edge capture rate (alert if <95%)
- Warehouse insertion success rate (alert if <99%)
- Sync success rate (alert if <95%)
- Match rate (alert if <80% where both streams fire)
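The four automated checks above map directly onto the severity table; a minimal sketch of the evaluation logic, assuming an illustrative `PipelineMetrics` shape (not an existing API):

```typescript
// Metrics as fractions (0..1). Names are illustrative.
interface PipelineMetrics {
  edgeCaptureRate: number;     // share of requests captured at the Edge
  warehouseInsertRate: number; // share of events loaded into the warehouse
  syncSuccessRate: number;     // reverse ETL / CDP sync success
  matchRate: number;           // Edge-to-client identifier match, where both streams fire
}

interface Alert {
  metric: keyof PipelineMetrics;
  severity: "Critical" | "High";
  value: number;
  threshold: number;
}

function evaluateAlerts(m: PipelineMetrics): Alert[] {
  const alerts: Alert[] = [];
  // Edge capture fully down is Critical per the severity table;
  // degraded capture (<95%) is treated as High here -- a judgment call.
  if (m.edgeCaptureRate === 0) {
    alerts.push({ metric: "edgeCaptureRate", severity: "Critical", value: 0, threshold: 0.95 });
  } else if (m.edgeCaptureRate < 0.95) {
    alerts.push({ metric: "edgeCaptureRate", severity: "High", value: m.edgeCaptureRate, threshold: 0.95 });
  }
  // Warehouse load failing is Critical per the severity table.
  if (m.warehouseInsertRate < 0.99) {
    alerts.push({ metric: "warehouseInsertRate", severity: "Critical", value: m.warehouseInsertRate, threshold: 0.99 });
  }
  // Sync failures >5% and match rate <80% are High per the severity table.
  if (m.syncSuccessRate < 0.95) {
    alerts.push({ metric: "syncSuccessRate", severity: "High", value: m.syncSuccessRate, threshold: 0.95 });
  }
  if (m.matchRate < 0.8) {
    alerts.push({ metric: "matchRate", severity: "High", value: m.matchRate, threshold: 0.8 });
  }
  return alerts;
}
```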
Alert Channels:
- Critical/High: Slack edge-activation-alerts channel
- Medium/Low: Slack DM to DRI
Manual Checks:
- Daily: Edge capture and warehouse load (during Phase 1)
- Weekly: Full reconciliation check
- Biweekly: Client discrepancy report review
Success metrics
- Execution KPIs: Match rate between Edge and client-layer identifiers; latency and reliability of Edge capture.
- Adoption KPIs: Usage of discrepancy reports in GTM reviews; reduction in internal attribution disputes.
- Business KPIs (brief): Attribution coverage (% revenue with known source); CAC accuracy / variance reduction; marketing-influenced revenue confidence. (Full outcome language in Offer.)
Risks & mitigations
| Risk | Mitigation |
|---|---|
| Client expects Edge to replace product analytics | Position clearly as attribution and validation layer, not behavioral analytics. |
| Client expects to eliminate pixels or server-to-server | Set expectation: Edge is an additional stream; pixels and s2s often stay (affiliates, platforms consume pixel data). We add; we don’t replace. |
| Affiliate / platform pixel dependency | Extra integration work: pixel verification and coexistence; map Edge to pixel IDs where needed; document “pixel required” vs “Edge sufficient” by use case. Scope and timeline should include this. |
| Client expects Edge data to flow directly into downstream systems | Not possible: the flow is Edge → warehouse → reverse ETL or CDP. Scope and architecture must include the warehouse and an activation path. |
| Legal/privacy concerns | Edge-side PII stripping and consent-aware configuration; Compliance add-on when needed. |
| Misinterpretation of recovered signal | Guided readout and documented assumptions in reports. |
| QA / testing gaps | Build out Edge-to-Activation QA & Testing playbook (see section above); use interim checklist until playbook is done. |
| Large infrastructure / scale (e.g. a data stack with 4,000 models) | Recognize and communicate scale early; get a sense of the client's data stack size at the start of the process and reflect it in scope and timeline. |
Delivery notes (internal)
- Squad & roles: GTM lead, data/analytics engineer, platform owner.
- RACI / ceremonies: Attribution scope and sign-off = marketing owner; technical delivery = data engineer; client comms and readouts = GTM lead.
- Tooling: CDN/Edge (Cloudflare, Fastly), data warehouse, reverse ETL or CDP (e.g. Segment), analytics and ad platform access. Warehouse required for activation path.
- Definition of done: Discrepancy report (Phase 0) or pilot validated and scaled (Phase 2); runbooks and handoff complete.
- Compliance flags: PII handling, consent management; document in SOW when Compliance add-on is used.
- Change log / version: see the change log at the top of this doc (current: v1.7 — 2026-04-08).
Case Study: Amble (MinuteMD)
Client: Amble (brand under MinuteMD)
Engagement: Edge-to-Activation Phase 0 + Phase 1
Timeline: 2-4 weeks (target)
Tech Stack: Cloudflare Workers → BigQuery
Context
- High-coverage attribution before page load
- Edge-layer tracking via Cloudflare Workers
- Data to BigQuery for modeling
- Timeline target: 2-4 weeks to rollout
Technical Decisions
- Cloudflare Workers on All Requests:
- Volume: ~100M requests/month (25M in 7 days)
- Decision: Fire workers on all requests (not just page loads)
- Rationale: Filtering too complex/dangerous; cost is minimal (~$50/month)
- Learnings: Filtering requires URL rewriting and can break existing routing rules
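The decision above (fire on every request, skip filtering) can be sketched as a Worker helper that builds a capture event for all traffic and never touches routing. A minimal illustration, not the production Worker; `RequestLike`, the event fields, and the collector call are assumptions:

```typescript
// Structural type so the sketch stands alone; in a real Worker this is the
// standard Request from the Workers runtime.
interface RequestLike {
  url: string;
  headers: { get(name: string): string | null };
}

interface EdgeEvent {
  url: string;
  referer: string | null;
  userAgent: string | null;
  capturedAt: string; // ISO timestamp
}

// Build a capture event for *every* request -- no URL filtering or rewriting.
function buildEdgeEvent(req: RequestLike): EdgeEvent {
  return {
    url: req.url,
    referer: req.headers.get("referer"),
    userAgent: req.headers.get("user-agent"),
    capturedAt: new Date().toISOString(),
  };
}

// In the Worker's fetch handler the flow is (sketch):
//   const event = buildEdgeEvent(request);
//   ctx.waitUntil(sendToCollector(event)); // non-blocking; capture errors never break routing
//   return fetch(request);                 // pass the request through untouched
```

Keeping capture out of the response path is what makes "fire on all requests" safe: the origin response is returned unchanged and event delivery happens via `waitUntil` after the response is sent.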
- Code Modularization:
- Spent 1-2 days modularizing Eden-specific code
- Created client-agnostic core modules
- Configuration-driven client-specific logic
- Enables faster onboarding for future clients
- BigQuery Schema:
- Two-table pattern (events + sessions)
- Identifier columns for GA4, Segment bridging
- Partitioned by date for performance
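The two-table pattern can be sketched as row types with the GA4/Segment bridging columns, plus the events-to-session rollup (in practice a dbt model). Column names are illustrative, not the actual Amble schema; in BigQuery both tables would be date-partitioned (e.g. `PARTITION BY DATE(event_ts)`):

```typescript
// Raw Edge events table (one row per captured event).
interface EventRow {
  event_ts: string;                    // ISO timestamp; partition column via DATE(event_ts)
  session_id: string;
  event_type: "pageview" | "conversion";
  url: string;
  ga4_client_id: string | null;        // identifier column for GA4 bridging
  segment_anonymous_id: string | null; // identifier column for Segment bridging
}

// Sessions table (one row per session, rolled up from events).
interface SessionRow {
  session_date: string; // partition column
  session_id: string;
  first_url: string;
  event_count: number;
  converted: boolean;
  ga4_client_id: string | null;
  segment_anonymous_id: string | null;
}

// Roll a session's events up into one session row.
function toSessionRow(events: EventRow[]): SessionRow {
  if (events.length === 0) throw new Error("empty session");
  const sorted = [...events].sort((a, b) => a.event_ts.localeCompare(b.event_ts));
  const first = sorted[0];
  return {
    session_date: first.event_ts.slice(0, 10),
    session_id: first.session_id,
    first_url: first.url,
    event_count: sorted.length,
    converted: sorted.some((e) => e.event_type === "conversion"),
    // Take the first non-null identifier seen in the session.
    ga4_client_id: sorted.map((e) => e.ga4_client_id).find((id) => id !== null) ?? null,
    segment_anonymous_id: sorted.map((e) => e.segment_anonymous_id).find((id) => id !== null) ?? null,
  };
}
```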
Testing Approach
Phased Strategy:
- Dev Environment: Local testing and validation
- Single-Page Test: One production page (Thursday-Friday)
- Subdirectory Test: One product group (Monday-Wednesday)
- Full Rollout: Entire site (Monday launch)
Timeline:
- Week 1: Setup and configuration
- Week 2: Dev testing
- Week 3: Production testing (single-page + subdirectory)
- Week 4: Full rollout
Key Principle: Never compress testing phases, even if other work is ahead of schedule.
Challenges and Solutions
Challenge: Cloudflare access without Worker creation permissions
Solution: Requested proper permissions upfront; verified before starting work
Challenge: Code was Eden-specific, not reusable
Solution: Spent time modularizing before Amble work; created reusable core
Challenge: High traffic volume (100M requests/month)
Solution: Fired workers on all requests; cost still minimal (~$50/month)
Challenge: Filtering workers to only page loads
Solution: Decided against filtering; too dangerous given existing routing rules
Lessons Learned
- Modularization pays off: Spending 1-2 days modularizing code saves time on future clients
- Testing phases are non-negotiable: Phased testing reduces risk and builds confidence
- Filtering is dangerous: Unless client has Cloudflare specialist, avoid filtering workers
- Cost is rarely an issue: Cloudflare Workers are very cheap even at high volume
- Access verification critical: Verify Worker creation permissions, not just Cloudflare access
- Timeline visibility helps: Gantt chart with phases helps client understand progress
Outcomes
- Successful Edge capture deployment
- BigQuery tables created and receiving data
- Testing approach validated
- Code modularized for future reuse
- Timeline met (2-4 week target)
Reference: See Edge-to-Activation Implementation Playbook for detailed technical guidance and additional case study details.
SOW copy block (pasteable)
Scope: Phase 0: Brainforge will implement and deploy a Cloudflare Worker to capture sessions and conversions (thank you page visits) at the Edge layer, configure warehouse schema (at least partially), stream events to the warehouse, and produce a discrepancy report showing an “Edge sees X vs client tool sees Y” comparison and the gap by source; scope alignment for pilot. Note: In practice, Phase 0 and Phase 1 are often combined, going straight to reverse ETL enablement. Phase 1: A tangible pilot on the most important channels (as agreed); Edge → warehouse → reverse ETL or CDP; end-to-end testing framework and validation; pilot validation and success criteria. Phase 2 (retainer): Rollout across all other channels (including affiliate); ongoing maintenance as publishers and requirements change; reporting integration; runbooks and support. Edge is an additional data stream; pixels and server-to-server are not removed. Optional add-ons: Compliance Controls (PII cleaning), Bot Filtering.
Outcomes: Client gains visibility into attribution gaps and recovered conversion signal; pilot proves approach on key channels with clear testing and validation; Phase 2 retainer delivers full rollout and sustained maintenance (publishers, requirements, new channels).
Inclusions: Phase 0: Cloudflare Worker implementation, warehouse schema configuration (at least partial), Edge events streaming, discrepancy report, scope for pilot. Phase 1: Advanced identifier mapping and bridging; Edge → warehouse → reverse ETL/CDP on pilot channels; testing framework and validation; discrepancy view. Note: Phase 0+1 are often combined, going straight to reverse ETL enablement. Phase 2 (retainer): Rollout to all channels; ongoing maintenance; runbooks; support. Optional add-ons: Compliance, Bot Filtering.
Exclusions: Replacement of existing product analytics or behavioral analytics tools; full marketing mix modeling. Phase 2 is retainer-based — fixed-scope “full rollout” project not offered; ongoing maintenance required for affiliate and moving targets.
Client responsibilities: Provisioning of CDN/Edge and data warehouse access; access to analytics tools, ad platforms, and transaction data; collaboration on pixel verification and coexistence; designated Marketing, Analytics, and (if applicable) Finance stakeholders; for Phase 2 retainer, minimum commitment (e.g. 3–6 months) and timely input on prioritization.
Acceptance criteria: Phase 0: Discrepancy report delivered and reviewed; Client sign-off on findings and pilot scope. Phase 1: Pilot complete on agreed channels; testing framework and validation in place; Client sign-off and agreement to Phase 2 retainer. Phase 2: Per retainer SOW — rollout and maintenance delivered per agreed cadence and SLA.
Links
- Implementation Playbook (technical): edge-to-activation-implementation-playbook.md — Detailed technical and project management guidance
- Implementation plan (timeline / SOW): implementation-plan.md
- Linear template (clone this): linear-template.md
- Offer (pitch): offer.md
- Demo(s): demo.md
- Pricing / SOW: pricing, sow-framework