Doordash Workflow — Implementation Plan
- Owner: Brylle Girang
- Created: 2026-04-01
- Target launch: April 17, 2026 (M3.1)
- Related: Q2 L&D Roadmap — Initiative 3, Project 3.1
What We’re Building
The Doordash metaphor: Platform/service-line teams are the kitchen. Team members are the customers. L&D is the delivery driver — picking up updates, packaging them, and personally dropping them at the door in bite-sized form. The customer never has to go hunting for what’s new.
This implements the “Full workflow TBD” placeholder in the Q2 roadmap (Initiative 3: Feedback Loop and Change Management).
The Five-Stage Workflow
```mermaid
flowchart LR
    GithubPRs["GitHub PRs\n(weekly scan)"] -->|"Auto-detect changes"| Classification
    Classification["Classification\n(L&D tiers update)"] -->|"Package update"| Packaging
    Packaging["Packaging\n(Format + educate)"] -->|"Distribute"| Delivery
    Delivery["Delivery\n(Right channel, right people)"] -->|"Capture activity"| Absorption
    Absorption["Absorption\n(Activity-based, WIP)"] -->|"Signal gaps back"| GithubPRs
```
Stage 1 — Kitchen (Automated Intake via GitHub PRs)
Engineers should not be responsible for notifying L&D. The kitchen is automated: every week, a scan runs against merged GitHub PRs in the brainforge-platform repo and surfaces what changed. Engineers keep building; L&D receives the signal automatically.
How the scan works:
- Weekly cadence (e.g. every Monday): scan merged PRs from the past 7 days
- Filter for changes in signal paths: `.cursor/skills/`, `.cursor/rules/`, `knowledge/`, `apps/platform/` (Forge-facing features)
- For each relevant PR, extract: PR title, description, changed file paths, and merge date
- Output: a weekly “what shipped” list that feeds directly into Stage 2 classification (a minimal scan sketch follows this list)
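A minimal sketch of the weekly scan, assuming the GitHub REST API and a personal access token. The organization name, the token placeholder, and the omission of pagination are assumptions for illustration, not part of the plan above.

```python
# Sketch of the weekly PR scan against the brainforge-platform repo.
# "acme-org" and the token are placeholders; pagination is omitted for brevity.
from datetime import datetime, timedelta, timezone
import requests

GITHUB_API = "https://api.github.com"
REPO = "acme-org/brainforge-platform"  # placeholder org name
SIGNAL_PATHS = (".cursor/skills/", ".cursor/rules/", "knowledge/", "apps/platform/")
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/vnd.github+json"}

def weekly_scan() -> list[dict]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=7)
    shipped = []
    # Recently closed PRs, newest first.
    prs = requests.get(
        f"{GITHUB_API}/repos/{REPO}/pulls",
        params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 100},
        headers=HEADERS,
    ).json()
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # closed without merging
        merged_at = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
        if merged_at < cutoff:
            continue  # outside the 7-day window
        files = requests.get(
            f"{GITHUB_API}/repos/{REPO}/pulls/{pr['number']}/files", headers=HEADERS
        ).json()
        changed = [f["filename"] for f in files]
        if any(path.startswith(SIGNAL_PATHS) for path in changed):
            shipped.append({
                "title": pr["title"],
                "description": pr["body"],
                "files": changed,
                "merged": pr["merged_at"],
            })
    return shipped  # feeds Stage 2 classification
```

In practice the same data can be pulled manually from the GitHub UI; the sketch only shows which fields the scan needs to surface.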
What L&D looks at per PR:
- Which files changed (skill, rule, knowledge doc, platform feature)
- PR description — the Summary and Changes sections already carry the context needed for packaging
- PR labels or linked Linear ticket for audience/service-line targeting
Design principle: The existing PR description format (Summary, Changes, Impact, Related) already contains everything L&D needs to classify and package an update. No extra engineer effort required.
Applies: professional-learning/competency-framework-translator.md from the claude-education-skills library — translate raw PR changes into observable team behaviors once surfaced.
Stage 2 — Classification
L&D classifies each update into one of three tiers to determine packaging effort and delivery format (a rough pre-sorting heuristic is sketched after the table):
| Tier | Type | Example | Packaging time |
|---|---|---|---|
| Tier 1 — Quick Tip | Minor change, 1-step habit shift | New Cursor shortcut, renamed skill | < 30 min |
| Tier 2 — Workflow Update | Existing workflow has changed | Updated EP audit skill, new Slack command | 1–2 hrs |
| Tier 3 — New System | Net-new capability or process | New Forge feature, new service-line tool | Half day |
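The tier call is a human judgment, but a rough first pass could pre-sort the scan output before L&D reviews it. A sketch, assuming the scan output shape above; the keyword checks are illustrative, not agreed classification rules.

```python
# Rough first-pass tier suggestion for a scanned PR; L&D still makes the final call.
# Keyword and path checks are illustrative assumptions, not agreed rules.
def suggest_tier(pr: dict) -> str:
    text = f"{pr['title']} {pr['description'] or ''}".lower()
    files = pr["files"]
    if any(f.startswith("apps/platform/") for f in files) or "new feature" in text:
        return "Tier 3 — New System"        # net-new capability
    if any(f.startswith((".cursor/skills/", ".cursor/rules/")) for f in files) and "update" in text:
        return "Tier 2 — Workflow Update"   # existing workflow changed
    return "Tier 1 — Quick Tip"             # default to the lightest packaging
```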
Stage 3 — Packaging (The Education Science Layer)
L&D translates the raw update into a learning unit using evidence-based formats from the claude-education-skills library.
Cognitive load first (memory-learning-science/cognitive-load-analyser.md):
- Strip the update down to its single critical behavior change. One update = one behavior.
- Remove extraneous information (the “why it was built” detail goes to the Forge vault, not the delivery unit).
Structure using Explicit Instruction (explicit-instruction/explicit-instruction-sequence-builder.md):
- I Do: L&D demonstrates the updated behavior (screen recording, Zoom Clip, or written example)
- We Do: Guided example the learner can follow step by step
- You Do: One real micro-task the learner completes in their actual work
Tie to implementation intentions (wellbeing-motivation-agency/implementation-intention-designer.md):
- End every update with: “When I next [trigger in real work], I will [new behavior].” This bridges comprehension to behavioral adoption.
Format by tier (a template for one packaged unit is sketched after this list):
- Tier 1 → Weekly digest — batched into a single weekly roundup (“This week in the platform”). Multiple Tier 1 updates ship together, not individually. Keeps signal-to-noise high.
- Tier 2 → Slack post — standalone post with context, the behavior change, and a guided example. Posted when ready, not held for the digest.
- Tier 3 → Forge page + Zoom Clip (2–4 min) — permanent Forge page with full context, guided walkthrough clip, and an applied micro-task. Reserved for net-new capabilities or process changes.
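To make the packaged unit concrete, here is one possible shape for a single learning unit, assuming the elements described above; the field names are illustrative rather than a committed schema.

```python
# One possible shape for a packaged learning unit; field names are illustrative.
from dataclasses import dataclass

@dataclass
class LearningUnit:
    behavior_change: str           # the single critical behavior (one update = one behavior)
    tier: str                      # "Tier 1" | "Tier 2" | "Tier 3"
    i_do: str                      # demonstration: clip, recording, or written example
    we_do: str                     # guided example the learner follows step by step
    you_do: str                    # one real micro-task in the learner's actual work
    implementation_intention: str  # "When I next [trigger], I will [new behavior]."
    delivery_format: str           # weekly digest | Slack post | Forge page + Zoom Clip
```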
Stage 4 — Delivery (Channels and Cadence)
Primary channel: TBD
The right primary channel is still an open decision. Candidates to evaluate:
- A dedicated Slack channel (e.g. `#platform-updates` or `#ai-learning`)
- The Forge (push via The Forge’s notification or announcement mechanism)
- A combination: Forge as the source of truth, Slack as the push nudge
This decision must be made before M3.1 (April 17) to avoid fragmentation. Key criteria: where the team already pays attention, and what pushes the update into their workflow rather than asking them to seek it out.
Secondary channel: The Forge
- All Tier 2 and Tier 3 updates get a permanent Forge page under a new section: Updates / Changelog
- Forge is the long-lived home regardless of what the primary push channel ends up being
Audience targeting:
- All-team updates → primary channel (TBD)
- Service-line-specific → respective SL channel + cross-posted to primary
SLA (consolidated with channel routing in the sketch after this list):
- Tier 1: Held until the weekly digest (no individual SLA)
- Tier 2: 5 business days from PR detection
- Tier 3: Scheduled with the relevant team (no fixed SLA)
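As a consolidated view of the tiers, formats, and SLAs above, the routing could eventually live in a small config; the channel values below are placeholders until the primary-channel decision is made.

```python
# Tier routing summary; primary channel is a placeholder pending the Stage 4 decision.
DELIVERY_ROUTING = {
    "Tier 1": {"format": "weekly digest",          "channel": "PRIMARY_TBD",                "sla": "held for weekly digest"},
    "Tier 2": {"format": "Slack post",             "channel": "PRIMARY_TBD",                "sla": "5 business days from PR detection"},
    "Tier 3": {"format": "Forge page + Zoom Clip", "channel": "scheduled with relevant team", "sla": "no fixed SLA"},
}
```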
Stage 5 — Absorption (WIP)
Status: Work in Progress. The direction here is activity-based, not opinion-based. Asking people “did you understand?” or posting retrieval questions in Slack measures engagement with the delivery, not actual adoption of the behavior. The goal is to capture what people actually do — not what they say they’ll do.
Design direction:
Absorption should be measured by observing real activity signals from the skills and workflows themselves:
- Did usage of the updated skill increase after the delivery?
- Did error patterns or Refire signals decrease for that workflow?
- Did the updated PR patterns / Linear ticket structures appear in subsequent team output?
What this requires (open questions):
- A lightweight telemetry or activity capture layer on skill/workflow usage (e.g. tracking when a Cursor skill is invoked, when a specific vault pattern is used, when a Linear ticket follows the new template)
- Pairing with the Refire system: a drop in Refire signals for a specific skill/workflow after a delivery is a strong absorption signal
- Baseline measurement before delivery so post-delivery change is attributable
What is NOT the right approach:
- Slack polls or “did you use this?” questions — self-report is unreliable and creates noise
- Retrieval questions as the primary measure — useful for learning design but not a proxy for real behavioral change
Next step: Define what “activity signal” looks like concretely for each tier (Tier 1: changelog pattern in PRs; Tier 2: skill invocation log; Tier 3: vault doc usage / Forge page visits). This feeds into the tracking decision open in the Q2 roadmap (Notion vs Supabase vs tags).
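As a rough illustration of that before/after comparison, assuming an activity log exists once the capture layer is built; the event shape and field names here are hypothetical.

```python
# Illustrative before/after comparison for one delivered update.
# The events list and its fields are hypothetical until the activity capture layer exists.
from datetime import datetime, timedelta

def absorption_delta(events: list[dict], signal: str, delivered_at: datetime,
                     window_days: int = 14) -> float:
    """Compare activity counts for one signal (e.g. a skill invocation) in the
    window before vs after delivery; a ratio above 1.0 suggests adoption."""
    start = delivered_at - timedelta(days=window_days)
    end = delivered_at + timedelta(days=window_days)
    before = sum(1 for e in events if e["signal"] == signal and start <= e["at"] < delivered_at)
    after = sum(1 for e in events if e["signal"] == signal and delivered_at <= e["at"] < end)
    return after / before if before else float("inf")
```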
Applies (for when this is built out): ai-learning-science/learning-analytics-interpretation-guide.md — turning activity data into actionable L&D decisions.
Connection to Refire
Doordash (outbound: Platform → People) and Refire (inbound: People → Platform) are paired:
- A drop in Refire signals for a specific skill or workflow after a Doordash delivery is the clearest absorption signal available right now — no polls, no retrieval questions, just observed behavior change
- Conversely, a spike in Refire signals on a newly delivered update is a flag to L&D that the packaging didn’t land and a re-delivery or walkthrough is needed
- The intake side of Doordash is now automated (PR scan), so the champion’s ≥2 submissions per month OKR shifts to: champions flag Tier 3 updates that the PR scan might miss (e.g. process changes that aren’t PR-visible)
Both workflows share the changelog log file and the primary delivery channel (TBD).
Key Artifacts to Create
All files go under knowledge/people/learning-development/programs/doordash/:
- `doordash-pr-scan-guide.md` — how to run the weekly GitHub PR scan: which repos, which paths, what to look for, how to extract the classification signal from PR descriptions
- `doordash-packaging-guide.md` — step-by-step packaging playbook with tier definitions, format templates (weekly digest / Slack post / Forge + Zoom Clip), and education science principles applied
- `doordash-sla.md` — delivery SLAs by tier, channel routing decision (TBD filled in once primary channel is confirmed), and cadence
- `doordash-changelog.md` — running log of all updates delivered (date, tier, topic, channel, absorption status); an example row layout is sketched below
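For illustration, one possible row layout for doordash-changelog.md, using the fields listed above; the cell values are placeholders, not real entries.

| Date | Tier | Topic | Channel | Absorption status |
|---|---|---|---|---|
| YYYY-MM-DD | Tier 2 | (updated skill or workflow) | (primary channel TBD) | (pending / observed / re-delivered) |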
Metrics (Aligned to Q2 OKRs)
- M3.1 (April 17): First Doordash delivery completed — PR scan run, update classified, packaged, and delivered via the appropriate format
- M3.3 (June 30): ≥1 update per week logged in changelog
- Weekly digest published in any week with at least one Tier 1 item
- Absorption: TBD — metric design blocked on activity capture system (see Stage 5). Interim proxy: Refire signal volume per skill/workflow before vs after delivery
Implementation Order
1. Decide the primary delivery channel (TBD in Stage 4) — this unblocks the SLA doc
2. Draft the PR scan guide and packaging guide (Tier 1 and 2 formats for MVP)
3. Run the first weekly PR scan manually: pull the last 7 days of merged PRs, classify them, package the first update end-to-end
4. Deliver the first packaged update (weekly digest or Slack post depending on tier)
5. Log it in the changelog; refine the scan guide and packaging templates based on what was learned
6. Add the Doordash workflow reference to the Quickstart curriculum (M3 or M5) so new hires understand the system on Day 1
7. Launch Refire (M3.2, April 24) as the paired feedback loop
8. Stage 5 (absorption capture) is a separate design track — begin defining activity signals in parallel but do not block M3.1 on it