Edge-to-Activation Implementation Playbook
Technical and project management guidance for implementing Edge-to-Activation services, based on real client engagements.
Last updated: 2026-04-08
Overview
| Field | Value |
|---|---|
| Type | Task + checklist (technical implementation and phased rollout) |
| Domain | Data — activation & attribution |
| Created | 2026-04-02 |
| Typical duration | Phase 0 ~1 week; Phase 1 ~4–6 weeks; Phase 2 retainer ongoing (see implementation plan) |
| When to use | New Edge engagement, onboarding engineers, planning tests/timeline, architecture decisions |
| Success looks like | Edge events reliably in the warehouse with identifier bridging; phased tests passed without skipped gates; client sign-off; runbooks and monitoring agreed |
Related documents:
- SOP: Edge-to-Activation — Delivery playbook with phases, governance, and SOW copy
- Implementation plan (offering) — Week-by-week phases for SOW/kickoff
- Linear template (offering) — Issues/milestones to clone in Linear
- Offer: Edge-to-Activation — Sales and pitch materials
Registry: Listed in PLAYBOOK_INDEX.md.
Tags: #playbook #edge-to-activation #data #cdn #attribution
Introduction
Edge-to-Activation is a service that captures traffic and conversion signals at the CDN/Edge layer (before client-side pixels fire) to recover attribution data lost to ad blockers, privacy controls, and script failures. This playbook documents how to implement it based on successful client engagements.
When to Use This Playbook
- Starting a new Edge-to-Activation engagement
- Onboarding team members to Edge-to-Activation work
- Planning timeline and testing strategy
- Understanding technical architecture decisions
Key Principle
Edge is an additional data stream — we don’t replace client-side pixels or server-to-server tracking. Many partners and platforms still depend on pixel data. Edge adds a fuller picture and plugs into the warehouse, then flows to downstream tools via reverse ETL or CDP.
Architecture Overview
```mermaid
flowchart LR
  User[User Browser] -->|Request| CDN[CDN/Edge<br/>Cloudflare/Fastly]
  CDN -->|Edge Capture| Worker[Cloudflare Worker<br/>Captures before page load]
  Worker -->|Stream Events| Warehouse[Data Warehouse<br/>BigQuery/Snowflake]
  Warehouse -->|Reverse ETL/CDP| Activation[Activation Tools<br/>Segment/GA4/Ad Platforms]
  User -->|Page Load| Browser[Browser<br/>Client-side pixels]
  Browser -->|Pixel Events| Analytics[Analytics Tools<br/>GA4/Segment]
  Warehouse -.->|Reconciliation| Analytics
  style Worker fill:#e1f5ff
  style Warehouse fill:#fff4e1
  style Activation fill:#e8f5e9
```
Data Flow:
- Edge Capture: CDN Worker captures request metadata, cookies, query parameters before page loads
- Warehouse: Edge events stream to data warehouse (BigQuery, Snowflake, etc.)
- Activation: Warehouse → reverse ETL or CDP (e.g. Segment) → downstream tools
- Reconciliation: Edge data can be compared with client-side pixel data for gap analysis
Critical Constraint: Edge data cannot be pushed directly into downstream systems. It must flow through the warehouse first, then via reverse ETL or CDP.
When the client is not on Cloudflare (Fastly and equivalents)
This playbook’s reference implementation is Cloudflare Workers (Wrangler, routes, secrets). Many engagements will use the same patterns on another edge platform.
What stays the same
- Capture before origin response; treat Edge as an additional stream to the warehouse; no direct push to ad platforms/CDPs — same as Architecture Overview.
- Warehouse-first: stream or batch to BigQuery/Snowflake; then reverse ETL/CDP; identifier bridging for reconciliation.
- Phased testing (dev → single page → subdirectory → rollout) and route/script conflict checks — any edge platform can have an existing script or rule on the same path; discover and merge/chain before go-live.
What changes
- Config surface: Fastly uses Compute@Edge (Rust/JS), VCL, or WAF rules differently from Workers; AWS CloudFront has Lambda@Edge; Akamai has its own model. Use the vendor’s docs for lifecycle, limits, and cold start.
- Secrets and egress: Align with client Infosec (secret store, outbound allow lists to warehouse APIs).
- Naming in Linear/docs: Use generic language in tickets when the warehouse is not BigQuery (e.g. “Configure warehouse schema — {Client}”) — see linear template.
Practical approach: Reuse the same module boundaries (config, handlers, attribution extractors, warehouse insert) and swap the edge adapter. Document the client’s platform in the engagement sow-project-plan and runbooks.
Technical Implementation
Cloudflare Workers Setup
Access Requirements
Before starting, ensure you have:
- Cloudflare account access with permissions to create Workers
- Worker creation permissions (not just read access)
- Understanding of client’s Cloudflare plan (affects cost and limits)
Common Issues: A client may grant Cloudflare access but not Worker creation permissions; verify this upfront. Confirm the client is on a paid Workers plan, which removes invocation limits. If staying within free-plan limits is acceptable, verify that requests are not blocked once the limit is hit: requests should resolve normally, just without the Worker firing.
Worker Code Structure
```text
eden-edge-worker/
├── src/                              # Main source code
│   ├── index.js                      # Worker entry point
│   ├── config/                       # Configuration module
│   │   └── default.js                # Client-specific configuration
│   ├── handlers/                     # Request handlers
│   │   ├── cookies.js                # Cookie management handler
│   │   ├── sessionTracking.js        # Session tracking handler
│   │   └── thankYou.js               # Thank-you page handler
│   ├── lib/                          # Utility libraries
│   │   ├── cookies.js                # Cookie parsing utilities
│   │   ├── crypto.js                 # Cryptographic utilities (IP hashing)
│   │   ├── request.js                # Request parsing utilities
│   │   └── urlParams.js              # URL parameter extraction
│   ├── tracking/                     # Tracking logic modules
│   │   ├── attribution.js            # Attribution data collection
│   │   ├── collectors.js             # Third-party cookie collectors
│   │   ├── session.js                # Session management
│   │   └── user.js                   # User identification
│   └── bigquery/                     # BigQuery integration
│       ├── auth.js                   # Google Cloud authentication
│       └── insert.js                 # BigQuery data insertion
├── tests/                            # Test files
│   ├── worker.test.js                # Unit tests for worker
│   └── integration/                  # Integration tests
│       ├── README.md                 # Integration test documentation
│       └── bigquery.integration.test.js  # BigQuery integration tests
├── wrangler.toml                     # Cloudflare Worker configuration
├── package.json                      # Node.js dependencies and scripts
├── vitest.config.mjs                 # Vitest unit test configuration
├── vitest.integration.config.mjs     # Vitest integration test configuration
├── bigquery-schema.sql               # BigQuery table schema
├── bigquery-alter-tables.sql         # BigQuery table alterations
├── .prettierrc.json                  # Prettier code formatting config
└── README.md                         # Project documentation
```
Core Components
Entry Point (src/index.js)
The main Cloudflare Worker entry point that:
- Intercepts all HTTP requests
- Manages session and user cookies
- Handles thank-you page events
- Triggers session tracking for new sessions
- Uses `waitUntil()` for async BigQuery operations
Key Flow:
- Check for existing session/user cookies
- Handle thank-you page events (fire-and-forget)
- If new session/user, generate IDs and set cookies
- If new session, trigger tracking to BigQuery
- Return proxied response to origin
Configuration (src/config/default.js)
Client-specific configuration module that defines:
- CLIENT_NAME: Client identifier (e.g., “amble”) used for cookie naming
- Session helpers: Cookie name, timeout, extraction logic
- User helpers: Cookie name, lifetime
- Thank-you page detection: URL pattern matching
- Cookie collectors: Third-party cookie extraction (PostHog, VWO, etc.)
- BigQuery column definitions: Schema for sessions and thank-you tables
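A hypothetical shape for this module, assembled from the bullets above. Field names and values are a sketch, not the exact schema in the repo:

```javascript
// Illustrative sketch of src/config/default.js; field names are assumptions.
const config = {
  clientName: "amble", // used as cookie-name prefix

  session: {
    cookieName: "amble_session",
    timeoutMinutes: 30,
  },
  user: {
    cookieName: "amble_user",
    lifetimeDays: 730,
  },

  // Thank-you / conversion page detection by URL pattern.
  isThankYouPage: (url) => /\/thank-you(\/|$)/.test(url.pathname),

  // Third-party cookies to copy into the event row.
  cookieCollectors: ["_ph", "_vwo_uuid", "wickedfu", "revoffers_affil"],

  // BigQuery column lists for the two tables.
  bigquery: {
    sessionsColumns: ["timestamp", "session_id", "source", "medium", "campaign"],
    thankYouColumns: ["timestamp", "session_id", "transaction_id"],
  },
};
```

Keeping everything client-specific in one object like this is what lets the rest of the worker stay client-agnostic.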
Handlers (src/handlers/)
cookies.js
- Sets secure session and user cookies
- Configures cookie attributes (HttpOnly, Secure, SameSite)
sessionTracking.js
- Collects attribution data (UTM params, referrer, user agent)
- Extracts third-party click IDs (gclid, fbclid, etc.)
- Collects third-party cookies (PostHog, VWO, etc.)
- Hashes IP addresses for privacy
- Sends data to BigQuery asynchronously
thankYou.js
- Detects thank-you/conversion pages
- Extracts transaction data from the `amtranid` cookie
- Sends conversion events to BigQuery
Tracking Modules (src/tracking/)
session.js
- Session ID generation and validation
- Session timeout management
- Cookie value extraction
user.js
- User ID generation and validation
- Long-term user identification (730 days)
attribution.js
- UTM parameter extraction
- Referrer parsing
- Click ID collection (gclid, fbclid, msclkid, etc.)
collectors.js
- Third-party cookie collectors:
  - PostHog (`_ph`)
  - VWO (`_vwo_ds`, `_vwo_uuid`, `_vwo_uuid_v2`)
  - Wicked Reports (`wickedfu`)
  - RevOffers (`revoffers_affil`)
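The attribution and collector logic above can be sketched as pure functions. The UTM and cookie lists come from the bullets above; `ttclid` is an extra illustrative click ID and not confirmed by the repo:

```javascript
// Sketch of attribution.js / collectors.js extraction logic.
const UTM_KEYS = ["utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"];
const CLICK_ID_KEYS = ["gclid", "fbclid", "msclkid", "ttclid"]; // ttclid is illustrative
const COLLECTOR_COOKIES = ["_ph", "_vwo_uuid", "wickedfu", "revoffers_affil"];

function extractAttribution(url, referrer) {
  const params = new URL(url).searchParams;
  const pick = (keys) =>
    Object.fromEntries(keys.filter((k) => params.has(k)).map((k) => [k, params.get(k)]));
  return {
    utm: pick(UTM_KEYS),
    clickIds: pick(CLICK_ID_KEYS),
    referrerHost: referrer ? new URL(referrer).hostname : null,
  };
}

function collectCookies(cookieHeader) {
  // Parse the Cookie header into a name → value map.
  const jar = Object.fromEntries(
    (cookieHeader || "")
      .split(/;\s*/)
      .filter((p) => p.includes("="))
      .map((p) => {
        const i = p.indexOf("=");
        return [p.slice(0, i), p.slice(i + 1)];
      })
  );
  // Keep only the third-party cookies we collect.
  return Object.fromEntries(
    COLLECTOR_COOKIES.filter((n) => n in jar).map((n) => [n, jar[n]])
  );
}
```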
BigQuery Integration (src/bigquery/)
auth.js
- Google Cloud service account authentication
- JWT token generation for BigQuery API
insert.js
- BigQuery row insertion
- Handles both sessions and thank-you page tables
- Error handling and retry logic
Utility Libraries (src/lib/)
- cookies.js: Cookie parsing and manipulation
- crypto.js: SHA-256 IP hashing for privacy
- request.js: Request metadata extraction
- urlParams.js: URL parameter parsing
Configuration Files
wrangler.toml
Cloudflare Worker configuration:
- Worker name: `edge-layer-tracker`
- Entry point: `src/index.js`
- Account ID and compatibility date
- Environment variables (set via `wrangler secret put`): `BIGQUERY_PROJECT_ID`, `BIGQUERY_DATASET_ID`, `BIGQUERY_TABLE_ID`, `GOOGLE_SERVICE_ACCOUNT_KEY`
- Non-sensitive vars: `SESSION_TIMEOUT_MINUTES` (default: 30), `USER_COOKIE_LIFETIME_DAYS` (default: 730)
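A hedged example of what the file might look like; the account ID and compatibility date are placeholders:

```toml
# Illustrative wrangler.toml sketch; values are placeholders.
name = "edge-layer-tracker"
main = "src/index.js"
account_id = "<client-account-id>"
compatibility_date = "2024-01-01"

[vars]
SESSION_TIMEOUT_MINUTES = "30"
USER_COOKIE_LIFETIME_DAYS = "730"

# Secrets are NOT stored here; set them with:
#   wrangler secret put BIGQUERY_PROJECT_ID
#   wrangler secret put GOOGLE_SERVICE_ACCOUNT_KEY
```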
package.json
- Scripts:
  - `dev`: Local development server
  - `deploy`: Deploy to Cloudflare
  - `deploy:draft`: Upload draft version
  - `tail`: View worker logs
  - `test`: Run unit tests
  - `test:run`: Run tests once
  - `test:integration`: Run integration tests
  - `test:coverage`: Run tests with coverage
- Dependencies: None (edge runtime)
- Dev Dependencies: Vitest, Wrangler, Cloudflare Workers testing tools
Test Configuration
vitest.config.mjs
Unit test configuration:
- Uses Cloudflare Workers test pool
- Excludes integration tests
- Mocks BigQuery environment variables
vitest.integration.config.mjs
Integration test configuration:
- Separate config for BigQuery integration tests
- Requires actual BigQuery credentials
Data Flow
Session Tracking Flow
```text
Request → Worker
    ↓
Check session cookie
    ↓
New session? → Generate session ID
    ↓
Extract UTM params, referrer, user agent
    ↓
Extract click IDs (gclid, fbclid, etc.)
    ↓
Collect third-party cookies
    ↓
Hash IP address
    ↓
Insert into BigQuery (async via waitUntil)
    ↓
Set session/user cookies
    ↓
Return proxied response
```
Thank-You Page Flow
```text
Request → Worker
    ↓
Detect thank-you page URL
    ↓
Extract amtranid cookie
    ↓
Parse transaction data
    ↓
Insert into BigQuery thank-you table (async)
    ↓
Continue normal flow
```
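A minimal sketch of this flow as a pure function. The `amtranid` cookie is assumed here to hold an opaque transaction ID; the real parsing may be richer:

```javascript
// Sketch of the thank-you handler: detect the page, pull the transaction
// id from the amtranid cookie, and shape a conversion row.
function handleThankYou(url, cookieHeader) {
  // Thank-you URL pattern is illustrative; real detection is configured
  // per client in src/config/default.js.
  if (!/\/thank-you(\/|$)/.test(new URL(url).pathname)) return null;
  const match = (cookieHeader || "").match(/(?:^|;\s*)amtranid=([^;]+)/);
  return {
    timestamp: new Date().toISOString(),
    request_url: url,
    transaction_id: match ? decodeURIComponent(match[1]) : null,
  };
}
```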
Key Features
- Zero Performance Impact: All BigQuery operations use `ctx.waitUntil()` for async execution
- Privacy-First: IP addresses are SHA-256 hashed before storage
- Multi-Client Support: Configurable per client via `CLIENT_NAME`
- Comprehensive Tracking: UTM params, click IDs, third-party cookies
- Session Management: Smart session detection with configurable timeout
- Conversion Tracking: Thank-you page event tracking
Testing
- Unit Tests: `tests/worker.test.js` - Tests worker logic without BigQuery
- Integration Tests: `tests/integration/bigquery.integration.test.js` - Tests actual BigQuery integration
Deployment
- Set Cloudflare secrets via `wrangler secret put`
- Configure `wrangler.toml` with account ID
- Update `src/config/default.js` with client-specific settings
- Run `npm run deploy` to deploy to Cloudflare Workers
Request Filtering Considerations
Key Decision: Should workers fire on all requests or only page loads?
Recommendation: Fire on all requests (default behavior) unless:
- Client has extremely high traffic volume (>500M requests/month)
- Client has a Cloudflare specialist who can safely implement filtering
- Cost is a primary concern (rare — Cloudflare Workers are very cheap)
Why filtering is dangerous:
- Requires URL rewriting and routing rules
- Can interact with existing Cloudflare routing rules
- Risk of breaking existing functionality
- Complex to test and maintain
Cost Reality: For ~100M requests/month, Cloudflare Workers cost ~$50/month. Filtering complexity rarely justifies the savings.
Amble Example:
- Volume: ~100M requests/month (25M in 7 days)
- Decision: Fire on all requests
- Cost: Under $50/month
- Rationale: Filtering too dangerous given existing routing rules
Worker Implementation Checklist
- Verify Cloudflare access and Worker creation permissions
- Set up local development environment
- Modularize code (client-agnostic core + client config)
- Configure BigQuery service account credentials
- Implement request parsing and event formatting
- Set up BigQuery table insertion logic
- Add error handling and logging
- Test locally before deployment
- Deploy to Cloudflare Workers
- Verify events are reaching BigQuery
BigQuery Integration
Service Account Setup
- Create service account in client’s GCP project
- Grant permissions:
- BigQuery Data Editor (to insert rows)
- BigQuery Job User (to run queries if needed)
- Generate JSON key for service account
- Store credentials securely (1Password, environment variables)
Access Pattern: Worker uses service account JSON key to authenticate with BigQuery.
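As a sketch, a streaming insert against the BigQuery `tabledata.insertAll` REST endpoint might look like the following. `accessToken` is assumed to come from the JWT/OAuth exchange handled by `src/bigquery/auth.js`:

```javascript
// Sketch of a streaming insert via the tabledata.insertAll REST endpoint,
// as a Worker without an SDK would do it.
async function insertRows(env, rows, accessToken) {
  const url =
    `https://bigquery.googleapis.com/bigquery/v2/projects/${env.BIGQUERY_PROJECT_ID}` +
    `/datasets/${env.BIGQUERY_DATASET_ID}/tables/${env.BIGQUERY_TABLE_ID}/insertAll`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    // insertAll expects each row wrapped as { json: {...} }.
    body: JSON.stringify({ rows: rows.map((json) => ({ json })) }),
  });
  const body = await res.json();
  // insertAll can return 200 with per-row insertErrors; treat both
  // transport and row errors as failures.
  if (!res.ok || body.insertErrors) {
    throw new Error(`BigQuery insert failed: ${JSON.stringify(body)}`);
  }
  return body;
}
```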
Table Schema Design
Two-Table Pattern (used in Amble and Eden):
- Events Table (`edge_events` / `thank_you_page_visits`):
  - Raw event data from Edge capture
  - Columns: timestamp, request_url, referrer, user_agent, cookies, query_params, session_id, edge_id, transaction_id
  - Identifier columns: ga4_client_id, segment_anonymous_id, etc.
  - Partitioned by date for performance
- Sessions Table (`edge_sessions`):
  - Aggregated session-level data
  - Columns: session_id, first_seen, last_seen, page_count, conversion_flag, source, medium, campaign
  - Links to events via session_id
Identifier Bridging:
- Include columns for identifiers from other systems (GA4 client ID, Segment anonymous ID, etc.)
- Enables joining Edge data with client-side pixel data
- Critical for reconciliation and gap analysis
Example Schema:
```sql
CREATE TABLE `project.dataset.edge_thank_you_page_visits` (
  timestamp TIMESTAMP,
  request_url STRING,
  referrer STRING,
  user_agent STRING,
  cookies STRING,
  query_params STRING,
  session_id STRING,
  edge_id STRING,
  ga4_client_id STRING,
  segment_anonymous_id STRING,
  transaction_id STRING,
  -- ... other fields
)
PARTITION BY DATE(timestamp);
```
Data Freshness and Latency
- Edge → BigQuery: Near real-time (seconds to minutes)
- BigQuery → Reverse ETL/CDP: Depends on sync frequency (often hourly or daily)
- Reconciliation reports: Can be run on-demand or scheduled
Consideration: For pilot phases, real-time data isn’t required. Batch processing is sufficient.
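A reconciliation report of this kind might be sketched as a scheduled query joining on the bridged identifiers. All table and column names other than the Edge table are placeholders and will differ per client:

```sql
-- Hypothetical reconciliation: Edge conversions vs. client-side (GA4)
-- purchases, joined via bridged identifiers. ga4_purchases is a placeholder.
SELECT
  DATE(e.timestamp) AS day,
  COUNT(e.transaction_id) AS edge_conversions,
  COUNT(g.transaction_id) AS ga4_conversions,
  COUNT(e.transaction_id) - COUNT(g.transaction_id) AS recovered_by_edge
FROM `project.dataset.edge_thank_you_page_visits` e
LEFT JOIN `project.dataset.ga4_purchases` g
  ON e.ga4_client_id = g.client_id
 AND e.transaction_id = g.transaction_id
GROUP BY day
ORDER BY day;
```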
Testing Strategy
Phased Approach (proven with Amble):
Phase 1: Dev Environment Testing
- Duration: 1-2 days
- Scope: Local development environment
- Goal: Verify code works, events reach BigQuery
- Checklist:
- Worker code runs locally
- Events formatted correctly
- BigQuery connection works
- Tables created with correct schema
- Sample events inserted successfully
Phase 2: Single-Page Production Test
- Duration: 2-3 days
- Scope: One production page (e.g. homepage or landing page)
- Goal: Verify Edge capture works in production with real traffic
- Checklist:
- Worker deployed to production
- Worker fires on target page
- Events appear in BigQuery
- No performance impact on page load
- Client validates data looks correct
Phase 3: Subdirectory Test
- Duration: 3-5 days
- Scope: One product group or subdirectory (e.g. `/products/category-a/`)
- Goal: Test at scale with meaningful traffic volume
- Checklist:
- Worker active on subdirectory
- Volume matches expectations
- Data quality checks pass
- Reconciliation with client-side data shows expected gaps
- No errors or performance issues
Phase 4: Full Rollout
- Duration: 1 day (Monday launch recommended)
- Scope: Entire site
- Goal: Complete deployment
- Checklist:
- Worker active site-wide
- Monitoring in place
- Client notified and ready
- Rollback plan documented
- Post-launch validation scheduled
Timeline Pattern (from Amble):
- Week 1: Setup and dev testing
- Week 2: Single-page and subdirectory testing
- Week 3: Full rollout preparation
- Week 4: Full rollout (Monday launch)
Key Principle: Never compress the testing phases. Even if other work is ahead of schedule, maintain the testing cadence for safety.
Operations: monitoring, alerting, and rollback
Monitoring (minimum)
- Edge: Error rates and invocations from the provider (e.g. Cloudflare Workers Analytics / Logpush; Fastly Real-time Log Streaming). Watch spikes after deploys.
- Warehouse: Row insert volume vs traffic expectations; failed batch/stream jobs; table growth anomalies.
- Product: Sample reconciliation dashboards or queries (Edge vs GA4/Segment) on a schedule after rollout.
Alerting
- Wire alerts to the client’s channel (PagerDuty, Slack) for: sustained insert failure, Worker 5xx spike, or zero Edge events when traffic is non-zero (broken route or disabled script).
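The "zero Edge events with non-zero traffic" condition can be checked with a simple scheduled freshness query; table name and interval are placeholders:

```sql
-- Hypothetical freshness check for alerting: alert if this returns 0
-- while the CDN reports non-zero traffic.
SELECT COUNT(*) AS rows_last_hour
FROM `project.dataset.edge_events`
WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);
```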
Rollback
- Edge: Revert to the previous deployment (e.g. `wrangler rollback` or vendor equivalent), or disable the route / detach the Worker from the path (only after confirming traffic can safely bypass; document the blast radius).
- Warehouse: Do not drop tables in panic; stop the broken insert path first, then fix forward.
- Communicate: Use the escalation path in the SOP; record incident + resolution in the engagement retro.
For data governance (PII, consent), follow the SOP section Data Governance & Privacy.
Project Management Patterns
Timeline Planning
Standard Timeline: 2-4 weeks from kickoff to full rollout
Week-by-Week Breakdown:
Week 1: Setup and Configuration
- Access provisioning (Cloudflare, BigQuery)
- Local dev environment setup
- Code modularization (if needed)
- BigQuery schema design
- Worker code development
Week 2: Dev Testing
- Local testing
- BigQuery table creation
- Sample data validation
- Code review and refinement
Week 3: Production Testing
- Single-page test deployment
- Subdirectory test deployment
- Data validation
- Client review and feedback
Week 4: Full Rollout
- Final testing and validation
- Full site deployment (Monday launch)
- Post-launch monitoring
- Documentation and handoff
Compression Risk: Don’t compress testing phases even if setup is faster than expected. Testing cadence protects against production issues.
Client Communication
Upfront Requirements
Access Requirements (communicate early):
- Cloudflare account access with Worker creation permissions
- BigQuery service account setup
- Analytics tool access (GA4, Segment, etc.) for reconciliation
- Ad platform access (if attribution validation needed)
Cost Transparency:
- Cloudflare Workers cost (~$50/month for 100M requests)
- BigQuery storage and query costs (usually minimal)
- Communicate costs upfront to avoid surprises
Timeline Visibility
Gantt Chart (recommended):
- Visual timeline with phases
- Dependencies clearly marked
- Testing phases highlighted
- Client can see progress and gates
Update Cadence:
- Weekly updates during setup
- Daily updates during testing phases
- Immediate escalation for blockers
Testing Phase Explanations
Why phased testing:
- Reduces risk of production issues
- Allows validation at each scale level
- Builds client confidence
- Enables course correction before full rollout
Client expectations:
- Testing phases are mandatory, not optional
- Each phase validates the approach before scaling
- Full rollout only happens after successful testing
Code Reusability
Modularization Approach (from Amble):
- Extract client-agnostic core:
  - Request parsing logic
  - Event formatting
  - BigQuery client wrapper
  - Common identifier extraction
- Configuration-driven client logic:
  - BigQuery project/dataset/table names
  - Client-specific event fields
  - Identifier mapping rules
  - Filtering rules (if needed)
- Benefits:
  - Faster onboarding for new clients
  - Consistent code quality
  - Easier maintenance
  - Simpler knowledge transfer
Pattern: Spend time upfront modularizing (1-2 days) to save time on future clients.
Amble Case Study
Context
- Client: Amble (brand under MinuteMD)
- Engagement: Edge-to-Activation Phase 0 + Phase 1
- Timeline: 2-4 weeks (target)
- Tech Stack: Cloudflare Workers → BigQuery
Requirements
- High-coverage attribution before page load
- Edge-layer tracking via Cloudflare Workers
- Data to BigQuery for modeling
- Timeline: 2-4 weeks to rollout
Implementation Approach
Technical Decisions
- Cloudflare Workers on All Requests:
  - Volume: ~100M requests/month (25M in 7 days)
  - Decision: Fire workers on all requests (not just page loads)
  - Rationale: Filtering too complex/dangerous; cost is minimal (~$50/month)
  - Learnings: Filtering requires URL rewriting and can break existing routing rules
- Code Modularization:
  - Spent 1-2 days modularizing Eden-specific code
  - Created client-agnostic core modules
  - Configuration-driven client-specific logic
  - Enables faster onboarding for future clients
- BigQuery Schema:
  - Two-table pattern (events + sessions)
  - Identifier columns for GA4, Segment bridging
  - Partitioned by date for performance
Testing Strategy
Phased Approach:
- Dev Environment: Local testing and validation
- Single-Page Test: One production page (Thursday-Friday)
- Subdirectory Test: One product group (Monday-Wednesday)
- Full Rollout: Entire site (Monday launch)
Timeline:
- Week 1: Setup and configuration
- Week 2: Dev testing
- Week 3: Production testing (single-page + subdirectory)
- Week 4: Full rollout
Key Principle: Never compress testing phases, even if other work is ahead of schedule.
Challenges and Solutions
Challenge: Cloudflare access without Worker creation permissions
- Solution: Requested proper permissions upfront; verified before starting work
Challenge: Code was Eden-specific, not reusable
- Solution: Spent time modularizing before Amble work; created reusable core
Challenge: High traffic volume (100M requests/month)
- Solution: Fired workers on all requests; cost still minimal (~$50/month)
Challenge: Filtering workers to only page loads
- Solution: Decided against filtering; too dangerous given existing routing rules
Lessons Learned
- Modularization pays off: Spending 1-2 days modularizing code saves time on future clients
- Testing phases are non-negotiable: Phased testing reduces risk and builds confidence
- Filtering is dangerous: Unless client has Cloudflare specialist, avoid filtering workers
- Cost is rarely an issue: Cloudflare Workers are very cheap even at high volume
- Access verification critical: Verify Worker creation permissions, not just Cloudflare access
- Timeline visibility helps: Gantt chart with phases helps client understand progress
Outcomes
- Successful Edge capture deployment
- BigQuery tables created and receiving data
- Testing approach validated
- Code modularized for future reuse
- Timeline met (2-4 week target)
Common Patterns & Pitfalls
Patterns That Work
- Modular Code Structure: Client-agnostic core + configuration-driven client logic
- Phased Testing: Dev → single-page → subdirectory → full rollout
- Two-Table Schema: Events table + sessions table in BigQuery
- Identifier Bridging: Include GA4, Segment IDs for reconciliation
- Workers on All Requests: Simpler and safer than filtering
Common Pitfalls
- Assuming Cloudflare Access = Worker Permissions: Verify Worker creation permissions specifically
- Trying to Filter Workers: Complex and dangerous; usually not worth it
- Skipping Testing Phases: High risk; always follow phased approach
- Client-Specific Code: Makes future clients harder; modularize early
- Direct Edge → Downstream: Edge must go through warehouse first
- Underestimating Setup Time: Access provisioning and dev environment setup take time
Volume Considerations
Low Volume (<10M requests/month):
- Workers on all requests (default)
- Cost: <$10/month
- No filtering needed
Medium Volume (10-100M requests/month):
- Workers on all requests (default)
- Cost: $10-50/month
- Consider filtering only if client has Cloudflare specialist
High Volume (>100M requests/month):
- Workers on all requests (default)
- Cost: $50-200/month
- Filtering may be worth considering, but still risky
- Verify cost with client before optimizing
Amble Example: 100M requests/month = ~$50/month (no filtering needed)
Access Requirements Checklist
Before starting any Edge-to-Activation engagement, verify:
- Cloudflare: Account access + Worker creation permissions
- BigQuery: Service account with Data Editor + Job User roles
- Analytics Tools: GA4, Segment access for reconciliation
- Ad Platforms: Access if attribution validation needed
- Data Warehouse: Access to create tables and insert data
- Client Engineering Contact: Available for questions and coordination
Success Metrics
Technical Metrics
- Edge Capture Rate: % of requests captured at Edge
- BigQuery Insertion Success: % of events successfully inserted
- Data Freshness: Latency from Edge capture to BigQuery
- Identifier Match Rate: % of Edge events with matching client-side identifiers
Project Metrics
- Timeline Adherence: On-time delivery within 2-4 week target
- Testing Phase Success: Each phase completed without major issues
- Client Satisfaction: Positive feedback on data quality and process
Business Metrics
- Attribution Coverage: % of conversions with known source (target: 95%+)
- Gap Recovery: % of conversions recovered that were previously unattributed
- Reconciliation Accuracy: Match rate between Edge and client-side data
Instantiating a New Edge-to-Activation Project in Linear
Preferred Method: Use the Edge-to-Activation Tickets Cursor skill (.cursor/skills/edge-to-activation-tickets/SKILL.md) to create all tickets with a single command. The skill prompts for required information and creates all Phase 0, Phase 1, and Phase 2 tickets automatically.
To use the skill: Say “Create Edge-to-Activation tickets for {client}” or “Instantiate E2A project in Linear” in Cursor chat.
Manual Method: If you prefer to create tickets manually or need to customize the process, this section documents the workflow and provides examples using Linear MCP tools directly.
Prerequisites
Before instantiating a Linear project, gather the following information:
- Client name and Linear team assignment
- SOW signed and scope confirmed (which phases are included: Phase 0, Phase 1, Phase 2)
- Key stakeholders identified: GTM lead, data engineer, platform owner
- Access requirements status: Cloudflare (with Worker creation permissions), BigQuery service account, analytics tools
- Timeline confirmed: Target dates for each phase (2-4 weeks typical for Phase 0+1)
Workflow
- Gather Information:
- Client name, Linear team name
- Phase scope (Phase 0 only? Phase 0+1? All phases including retainer?)
- Timeline and key dates
- Access provisioning status
- Create Linear Project (optional, if the `save_project` MCP tool is available):
- Name: “{Client} - Edge-to-Activation”
- Description: Reference to SOW, link to SOP and implementation playbook
- Team: Client’s Linear team
- Note: If `save_project` is not available, structure tickets using phase labels and naming
- Create Phase Tickets:
- Create multiple full tickets per phase (no subtickets/subtasks)
- Use phase naming in ticket titles: “Phase {N}: {Component} - {Client}”
- Apply phase labels: [“Phase 0”], [“Phase 1”], [“Phase 2”], plus [“Edge-to-Activation”]
- Use `blockedBy` to show dependencies between tickets
- Set initial state appropriately (first ticket in “ToDo”, others in “Backlog”)
Ticket Structure
Phase 0 Tickets (if included)
Single ticket covering Signal Recovery Audit:
- Title: “Phase 0: Signal Recovery Audit - {Client}”
- Labels: [“Phase 0”, “Edge-to-Activation”]
- Description: Use SOP Phase 0 checklist as acceptance criteria
- State: “ToDo” or “Ready for Work”
- Priority: 2 (High)
Phase 1 Tickets (multiple full tickets)
Create separate full tickets for each major component:
- “Phase 1: Setup Cloudflare Workers - {Client}”
- Cloudflare access verification, Worker creation, code deployment
- Labels: [“Phase 1”, “Edge-to-Activation”]
- blockedBy: Phase 0 ticket (if Phase 0 exists)
- “Phase 1: Configure BigQuery Schema - {Client}”
- Service account setup, table schema design, identifier bridging
- Labels: [“Phase 1”, “Edge-to-Activation”]
- blockedBy: Cloudflare Workers ticket
- “Phase 1: Implement Testing Framework - {Client}”
- Dev environment, single-page test, subdirectory test, full rollout
- Labels: [“Phase 1”, “Edge-to-Activation”]
- blockedBy: BigQuery ticket
- “Phase 1: Documentation and Handoff - {Client}”
- Runbooks, discrepancy reports, client enablement
- Labels: [“Phase 1”, “Edge-to-Activation”]
- blockedBy: Testing ticket
Note: Each ticket is a full Linear issue, not a subtask. Use blockedBy to show dependencies and phase labels to group them.
Phase 2 Tickets (if retainer included)
- “Phase 2: Rollout & Maintenance - {Client}”
- Labels: [“Phase 2”, “Edge-to-Activation”, “Retainer”]
- Description: Reference retainer scope and ongoing work
- State: “Backlog” (starts after Phase 1)
- Note: Phase 2 is ongoing retainer work; ticket may be updated rather than closed
Linear MCP Examples
Example: Create Phase 0 Ticket
call_mcp_tool(
server: "linear",
toolName: "save_issue",
arguments: {
title: "Phase 0: Signal Recovery Audit - {Client}",
team: "{Client Team}",
description: `
## Context
[Brief context about client and engagement]
## Goal
Deploy Edge Layer capture and produce discrepancy report showing signal loss.
## Scope
### In Scope
- CDN/Edge capture deployment (Cloudflare Workers)
- Baseline traffic and conversion signal capture
- Discrepancy report (client vs. Edge comparison)
- Attribution gap by source/channel
- Scope alignment for Phase 1 pilot
### Out of Scope
- Full pilot implementation (Phase 1)
- Reverse ETL/CDP configuration (Phase 1)
- Full rollout (Phase 2)
## Acceptance Criteria
- [ ] CDN/Edge capture deployed in parallel with existing stack
- [ ] No removal of pixels or server-to-server tracking
- [ ] Baseline traffic and conversion signals captured
- [ ] Discrepancy report generated and reviewed
- [ ] Attribution gap quantified by source
- [ ] Client sign-off on findings and pilot scope
## Notes/Constraints
- Edge is an additional data stream; pixels remain in place
- Timeline: 1 week
- See SOP Phase 0 checklist for full details
## Open Questions
- [Any open questions]
`,
labels: ["Phase 0", "Edge-to-Activation"],
state: "ToDo",
priority: 2
}
)
Example: Create Phase 1 Tickets
// Ticket 1: Cloudflare Workers
call_mcp_tool(
server: "linear",
toolName: "save_issue",
arguments: {
title: "Phase 1: Setup Cloudflare Workers - {Client}",
team: "{Client Team}",
description: `
## Context
[Client context and Phase 0 outcomes]
## Goal
Deploy Cloudflare Workers for Edge capture with modular, reusable code structure.
## Scope
### In Scope
- Verify Cloudflare access and Worker creation permissions
- Set up local development environment
- Modularize code (client-agnostic core + client config)
- Deploy Worker to Cloudflare
- Verify events are firing correctly
### Out of Scope
- BigQuery schema design (separate ticket)
- Testing framework (separate ticket)
- Documentation (separate ticket)
## Acceptance Criteria
- [ ] Cloudflare access verified with Worker creation permissions
- [ ] Local dev environment set up and tested
- [ ] Code modularized (reusable core + client config)
- [ ] Worker deployed to Cloudflare
- [ ] Events firing on all requests (or filtered if needed)
- [ ] No performance impact on page load
## Notes/Constraints
- Fire workers on all requests by default (filtering is complex/dangerous)
- Cost: ~$50/month for 100M requests
- See implementation playbook for technical details
## Open Questions
- [Any open questions]
`,
labels: ["Phase 1", "Edge-to-Activation"],
blockedBy: ["Phase-0-ticket-id"],
state: "Backlog"
}
)
// Ticket 2: BigQuery Schema
call_mcp_tool(
server: "linear",
toolName: "save_issue",
arguments: {
title: "Phase 1: Configure BigQuery Schema - {Client}",
team: "{Client Team}",
description: `
## Context
[Client context, Cloudflare Workers deployed]
## Goal
Set up BigQuery tables and service account for Edge event ingestion.
## Scope
### In Scope
- Create BigQuery service account with proper permissions
- Design table schema (events + sessions tables)
- Create tables with partitioning
- Configure identifier bridging columns (GA4, Segment IDs)
- Test event insertion from Cloudflare Workers
### Out of Scope
- Reverse ETL/CDP configuration (separate ticket if needed)
- Testing framework (separate ticket)
- Documentation (separate ticket)
## Acceptance Criteria
- [ ] Service account created with Data Editor + Job User roles
- [ ] Events table created with correct schema
- [ ] Sessions table created with correct schema
- [ ] Tables partitioned by date
- [ ] Identifier columns included for bridging
- [ ] Events successfully inserting from Workers
## Notes/Constraints
- Two-table pattern: events (raw) + sessions (aggregated)
- Identifier bridging critical for reconciliation
- See implementation playbook for schema patterns
## Open Questions
- [Any open questions]
`,
labels: ["Phase 1", "Edge-to-Activation"],
blockedBy: ["Phase-1-cloudflare-ticket-id"],
state: "Backlog"
}
)
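The two-table pattern and identifier bridging in the ticket above can be sketched as schema definitions plus generated DDL. The field names (`ga4_client_id`, `segment_anonymous_id`, and so on) are illustrative assumptions; match them to the client's actual identifier space.

```typescript
// Hypothetical field definitions for the two-table pattern.
type Field = { name: string; type: "STRING" | "TIMESTAMP" | "INT64" | "DATE" };

// Identifier bridging columns shared by both tables: these are what make
// reconciliation against GA4 exports and Segment tables possible.
const identifierBridge: Field[] = [
  { name: "ga4_client_id", type: "STRING" },        // joins to GA4 exports
  { name: "segment_anonymous_id", type: "STRING" }, // joins to Segment tables
];

// Raw events table.
const eventsTable: Field[] = [
  { name: "event_timestamp", type: "TIMESTAMP" },
  { name: "url", type: "STRING" },
  { name: "referrer", type: "STRING" },
  { name: "user_agent", type: "STRING" },
  ...identifierBridge,
];

// Aggregated sessions table (partition directly on the DATE column).
const sessionsTable: Field[] = [
  { name: "session_date", type: "DATE" },
  { name: "session_id", type: "STRING" },
  { name: "event_count", type: "INT64" },
  ...identifierBridge,
];

// Render a date-partitioned CREATE TABLE statement, per the acceptance criteria.
// Assumes the partition field is a TIMESTAMP column wrapped in DATE().
function createTableDDL(
  dataset: string,
  table: string,
  fields: Field[],
  partitionField: string,
): string {
  const cols = fields.map((f) => `${f.name} ${f.type}`).join(", ");
  return `CREATE TABLE \`${dataset}.${table}\` (${cols}) PARTITION BY DATE(${partitionField})`;
}
```

Spreading `identifierBridge` into both tables keeps the bridging columns in lockstep, so a schema drift between events and sessions can't silently break reconciliation.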
// Ticket 3: Testing Framework
call_mcp_tool(
server: "linear",
toolName: "save_issue",
arguments: {
title: "Phase 1: Implement Testing Framework - {Client}",
team: "{Client Team}",
description: `
## Context
[Client context, Cloudflare Workers and BigQuery configured]
## Goal
Implement phased testing approach: dev → single-page → subdirectory → full rollout.
## Scope
### In Scope
- Dev environment testing
- Single-page production test
- Subdirectory test (one product group)
- Full rollout (Monday launch)
- Validation and sign-off
### Out of Scope
- Cloudflare Workers setup (separate ticket)
- BigQuery configuration (separate ticket)
- Documentation (separate ticket)
## Acceptance Criteria
- [ ] Dev environment tested and validated
- [ ] Single-page test completed (Thursday-Friday)
- [ ] Subdirectory test completed (Monday-Wednesday)
- [ ] Full rollout completed (Monday launch)
- [ ] No performance issues or errors
- [ ] Client sign-off on testing results
## Notes/Constraints
- Never compress testing phases, even if other work is ahead
- Testing cadence: Week 2 (dev), Week 3 (production), Week 4 (rollout)
- See SOP testing strategy section for details
## Open Questions
- [Any open questions]
`,
labels: ["Phase 1", "Edge-to-Activation"],
blockedBy: ["Phase-1-bigquery-ticket-id"],
state: "Backlog"
}
)
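The phased approach above (dev → single-page → subdirectory → full) can be expressed as a single rollout gate in the Worker, so widening scope is a config change rather than a code change. The phase names mirror the ticket; the config shape and example paths are assumptions.

```typescript
// Hypothetical rollout gate for the phased testing approach.
type RolloutPhase = "dev" | "single-page" | "subdirectory" | "full";

interface RolloutConfig {
  phase: RolloutPhase;
  testPage: string;         // e.g. one low-risk landing page
  testSubdirectory: string; // e.g. one product group
}

// Decide whether a request path is in scope for the current phase.
function shouldCapture(cfg: RolloutConfig, pathname: string): boolean {
  switch (cfg.phase) {
    case "dev":
      return false; // production capture off; dev environment only
    case "single-page":
      return pathname === cfg.testPage;
    case "subdirectory":
      return pathname.startsWith(cfg.testSubdirectory);
    case "full":
      return true;
  }
}
```

Because each widening is a one-line config edit, the gates in the acceptance criteria stay auditable: the phase value at any point in time is the record of what was live.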
// Ticket 4: Documentation and Handoff
call_mcp_tool(
server: "linear",
toolName: "save_issue",
arguments: {
title: "Phase 1: Documentation and Handoff - {Client}",
team: "{Client Team}",
description: `
## Context
[Client context, Phase 1 implementation complete]
## Goal
Create runbooks, discrepancy reports, and enable client team.
## Scope
### In Scope
- Attribution validation report
- Discrepancy analysis
- Runbooks for ongoing maintenance
- Client walkthrough and enablement
- Handoff documentation
### Out of Scope
- Phase 2 rollout (separate phase)
## Acceptance Criteria
- [ ] Attribution validation report delivered
- [ ] Discrepancy analysis completed
- [ ] Runbooks created and reviewed
- [ ] Client walkthrough completed
- [ ] Team enabled on Edge data usage
- [ ] Client sign-off on Phase 1
## Notes/Constraints
- Documentation should reference SOP and implementation playbook
- Enablement focuses on using discrepancy reports and Edge data
## Open Questions
- [Any open questions]
`,
labels: ["Phase 1", "Edge-to-Activation"],
blockedBy: ["Phase-1-testing-ticket-id"],
state: "Backlog"
}
)
Ticket Format Standards
When creating tickets, follow the required Linear ticket format from standards/04-prompts/tickets/linear-ticket-generation-from-transcript.md:
- Title: Start with a verb or phase prefix (“Phase 1: Setup…”)
- Description Template:
- Context
- Goal
- Scope (In scope / Out of scope)
- Acceptance Criteria
- Notes/Constraints
- Open Questions
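The template above can be rendered programmatically when generating tickets in bulk. A minimal sketch, assuming a hypothetical `TicketSpec` shape; the section headings mirror the standard:

```typescript
// Hypothetical input shape for one ticket.
interface TicketSpec {
  context: string;
  goal: string;
  inScope: string[];
  outOfScope: string[];
  acceptanceCriteria: string[];
  notes: string[];
  openQuestions: string[];
}

const bullets = (items: string[], prefix = "- "): string =>
  items.map((i) => prefix + i).join("\n");

// Render the description in the required section order.
function renderDescription(t: TicketSpec): string {
  return [
    "## Context", t.context,
    "## Goal", t.goal,
    "## Scope",
    "### In Scope", bullets(t.inScope),
    "### Out of Scope", bullets(t.outOfScope),
    "## Acceptance Criteria", bullets(t.acceptanceCriteria, "- [ ] "),
    "## Notes/Constraints", bullets(t.notes),
    "## Open Questions", bullets(t.openQuestions),
  ].join("\n");
}
```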
Label and State Recommendations
- Phase Labels: [“Phase 0”], [“Phase 1”], [“Phase 2”] (use for filtering by phase)
- Service Label: [“Edge-to-Activation”] (apply to all tickets)
- AI vs Human Labels: Add `ai-assignable` or `human-only` per standards/03-knowledge/engineering/setup/linear-labels-ai-human.md
- State: First ticket in “ToDo” or “Ready for Work”; others in “Backlog” until dependencies clear
- Priority: Phase 0 and Phase 1 typically Priority 2 (High)
Integration with SOP
The ticket structure aligns with the SOP phases:
- Phase 0 tickets map to SOP Phase 0 checklist
- Phase 1 tickets map to SOP Phase 1 technical implementation and testing sections
- Phase 2 tickets map to SOP Phase 2 retainer scope
See the SOP Linear Project Setup section for when to instantiate and additional guidance.
Quality checklist (before closing a phase)
Use with the SOP phase gates.
Phase 0
- Edge capture proven on agreed surfaces; discrepancy report reviewed with client
- Attribution gaps quantified; Phase 1 scope explicit
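The “attribution gaps quantified” gate above is, at its core, a per-source comparison of pixel-reported conversions against Edge-captured conversions. A minimal sketch (the function and source names are illustrative):

```typescript
// source -> conversion count
type Counts = Record<string, number>;

// Hypothetical discrepancy calculation: how many conversions the
// client-side pixel missed, per traffic source, relative to Edge capture.
function attributionGapBySource(pixel: Counts, edge: Counts): Record<string, number> {
  const gaps: Record<string, number> = {};
  for (const source of Object.keys(edge)) {
    const p = pixel[source] ?? 0;
    gaps[source] = edge[source] - p; // conversions the pixel missed
  }
  return gaps;
}
```

In practice this comparison runs as warehouse SQL over the events table, but the logic is the same: sources missing entirely from pixel data surface as gaps equal to their full Edge count.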
Phase 1
- Warehouse location matches existing datasets; service account/role least-privilege
- Phased testing completed without skipped gates (unless documented risk acceptance)
- Reconciliation approach agreed; runbooks and handoff done
- Monitoring and rollback path documented (see Operations)
Phase 2 (retainer)
- Backlog, SLA, and escalation path current; runbooks updated when stack or publishers change
Iteration log
| Date | Change |
|---|---|
| 2026-04-02 | Initial structured playbook (architecture, Cloudflare reference, BigQuery, testing, Linear). |
| 2026-04-08 | Added overview metadata, non-Cloudflare section, operations (monitor/rollback), quality checklist, iteration log; linked offering implementation plan and linear template; registered in PLAYBOOK_INDEX. |
Related Resources
- SOP: Edge-to-Activation — Detailed delivery playbook with phases, checklists, and SOW copy. Includes Amble case study and testing strategy.
- Implementation plan — Offering-level week-by-week plan and SOW paste block.
- Linear template — Milestones and issue list for client projects.
- Offer: Edge-to-Activation — Sales materials and pitch deck.
- Demo: Edge-to-Activation — Demo walkthrough and presentation guide.
- Amble project context — Example client context and stakeholders.
Note: This playbook focuses on implementation patterns and technical guidance. For delivery checklists, phase gates, and SOW templates, see the SOP.
Questions or Updates
If you encounter new patterns, pitfalls, or learnings from Edge-to-Activation implementations, update this playbook and add a row to the iteration log above.