Brainforge L&D North Stars
Purpose: Single authoritative reference for the frameworks, mindsets, and design standards that govern all Brainforge internal L&D outputs—modules, certifications, curricula, and reinforcement. This document is grounded in learning science and calibrated to a 2026 AI-first operating context.
Audience: L&D owners, module authors, certification designers, and anyone shipping learning experiences inside Brainforge.
Last updated: 2026-04-08
Part 1 — Purpose and definition of success
Learning and development at Brainforge exists to change how people work—not to tick boxes. Success is measurable change in how the team operates in real client and internal workflows, not course completion, seat time, or satisfaction scores alone. Completions and ratings may be useful operational signals; they are not the north star.
When we say a program “worked,” we mean team members do something differently on Tuesday (observable behavior in vault, Linear, GitHub, client delivery), not that they sat through a block of content.
Part 2 — The maturity model we are building toward
We use a four-level arc (adapted from Bersin, 2026) to describe how learning organizations evolve:
Static training → Scaled learning → Integrated development → Dynamic enablement
| Stage | What it looks like |
|---|---|
| Static training | Event- and catalog-centric; knowledge decays without systems |
| Scaled learning | Repeatable tracks, shared standards, measurable outcomes |
| Integrated development | Learning wired into work systems (PRs, Linear, vault), not parallel universes |
| Dynamic enablement | Answers, practice, and assets at the moment of need; knowledge shared at the speed the business changes |
Where we are today: The Quickstart track and certification structure sit in scaled learning—repeatable curricula, rubrics, and real-work assessment—moving toward integrated development as workflows (Doordash, Refire, Forge) connect learning to daily systems.
Destination: Dynamic enablement—team members get the right guidance, examples, and practice when the work demands it, and the organization updates collective know-how as fast as tools and clients change.
Key implication: L&D runs systems (content + experts + workflows + feedback loops), not a static catalog.
Part 3 — Core pedagogical frameworks (evidence-grounded)
The following frameworks are the non-negotiable lenses for design and quality review. Detailed methods and citations live in the claude-education-skills library (see Part 7).
3.1 Backwards design (Understanding by Design)
Source: skills/curriculum-assessment/backwards-design-unit-planner/SKILL.md — Wiggins & McTighe (1998, 2005).
- Stage 1 — Desired results: Start with the desired behavior (what the learner will do differently) before choosing content.
- Stage 2 — Assessment evidence: Design how you will know the behavior is present before designing activities.
- Stage 3 — Learning plan: Only then select readings, demos, and practice.
Brainforge translation: “Desired behavior” = what a team member does differently at work (e.g., opens a PR with the right description, runs a board audit and files evidence in the vault)—not vague “understanding.”
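
To make the stage ordering concrete, here is a minimal sketch (Python; the ModuleBrief shape and field names are illustrative, not an existing Brainforge schema). It encodes the rule that Stages 1 and 2 must be filled in before any Stage 3 plan exists:

```python
from dataclasses import dataclass, field

@dataclass
class ModuleBrief:
    desired_behavior: str        # Stage 1: what the learner will do differently at work
    acceptable_evidence: str     # Stage 2: what we will accept as proof the behavior exists
    learning_plan: list[str] = field(default_factory=list)  # Stage 3: readings, demos, practice

    def __post_init__(self):
        # Enforce the backwards-design ordering: no learning plan without behavior + evidence.
        if not self.desired_behavior.strip() or not self.acceptable_evidence.strip():
            raise ValueError("Define Stage 1 (behavior) and Stage 2 (evidence) before Stage 3.")

brief = ModuleBrief(
    desired_behavior="Opens a PR with the right description and a linked Linear issue",
    acceptable_evidence="A real PR that meets the PR-description rubric",
    learning_plan=["Worked example PR", "Guided practice PR", "Independent PR on live work"],
)
```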
3.2 Knowledge type classification (Manning’s framework)
Source: skills/original-frameworks/learning-target-authoring-guide/SKILL.md.
Every learning objective must be classified into one of three types:
| Type | Nature | How it is tested |
|---|---|---|
| Type 1 — Hierarchical knowledge | Prerequisite chains, clear right/wrong building blocks | Rubric, knowledge check, structured task |
| Type 2 — Horizontal / reasoning knowledge | Multiple valid approaches, judgment under constraints | Quality of reasoning, defense of choices, structured dialogue |
| Type 3 — Dispositional knowledge | Habits, stance, collaboration over time | Multi-informant observation only — never a quiz or single rubric moment |
Implication: A module aimed at habits or judgment (Type 3) cannot be “passed” with a quiz. Design observation, peer/manager signals, or longitudinal evidence instead.
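
A minimal sketch of this rule as a guard (Python; the assessment-method labels are illustrative, not a real Brainforge taxonomy):

```python
from enum import Enum

class KnowledgeType(Enum):
    HIERARCHICAL = 1    # Type 1: prerequisite chains, clear right/wrong
    HORIZONTAL = 2      # Type 2: multiple valid approaches, judgment
    DISPOSITIONAL = 3   # Type 3: habits, stance, collaboration over time

ALLOWED_ASSESSMENTS = {
    KnowledgeType.HIERARCHICAL: {"rubric", "knowledge_check", "structured_task"},
    KnowledgeType.HORIZONTAL: {"reasoning_review", "defense_of_choices", "structured_dialogue"},
    KnowledgeType.DISPOSITIONAL: {"multi_informant_observation", "longitudinal_evidence"},
}

def validate_assessment(ktype: KnowledgeType, method: str) -> None:
    """Reject designs that test a knowledge type with the wrong instrument."""
    if method not in ALLOWED_ASSESSMENTS[ktype]:
        raise ValueError(f"{method!r} cannot assess {ktype.name} objectives.")

validate_assessment(KnowledgeType.HIERARCHICAL, "knowledge_check")    # fine
validate_assessment(KnowledgeType.DISPOSITIONAL, "knowledge_check")   # raises ValueError
```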
3.3 Spaced practice and retrieval
Source: skills/memory-learning-science/spaced-practice-scheduler/SKILL.md — Ebbinghaus; Roediger & Butler (2011); Cepeda et al. (2006).
- Certification starts a habit; it does not end learning.
- Every track should embed post-certification reinforcement with explicit anchors (e.g., Week 1, Week 2, Week 4)—lightweight retrieval, not re-teaching the whole module.
- Retrieval practice beats re-reading; periodic low-stakes checks beat a single high-stakes event for retention.
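
As a sketch of the anchor pattern (Python; the Week 1 / 2 / 4 offsets mirror the example above and are illustrative defaults, not fixed policy):

```python
from datetime import date, timedelta

def reinforcement_dates(certified_on: date, week_anchors=(1, 2, 4)) -> list[date]:
    """Return lightweight retrieval checkpoints after a certification date."""
    return [certified_on + timedelta(weeks=w) for w in week_anchors]

for d in reinforcement_dates(date(2026, 4, 8)):
    print(d)  # 2026-04-15, 2026-04-22, 2026-05-06
```

Each anchor is a short retrieval prompt tied to the certified behavior, not a re-run of the module.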
3.4 Cognitive load management
Source: skills/memory-learning-science/cognitive-load-analyser/SKILL.md — Sweller (1988, 1994).
- New AI workflows carry high intrinsic load (tools, safety, verification). Design must reduce extraneous load: chunk content into steps, provide clear worked examples, and minimize split-attention (e.g., don’t force the learner to chase three panes at once without scaffolding).
Brainforge rule of thumb: One new workflow per module. Do not combine tool setup, policy, and end-to-end workflow mastery in one undifferentiated session unless each piece is deliberately scoped and practiced.
3.5 Criterion-referenced assessment
Source: skills/curriculum-assessment/criterion-referenced-rubric-generator/SKILL.md — Brookhart (2013); Wiggins (2005).
- Every certification rubric describes a fixed, observable standard—not a curve, not “impression of mastery.”
- Rubric language must specify observable actions (e.g., “runs a board audit and files a summary in the vault with X sections”)—not internal states (“understands the board,” “gets AI”).
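
A minimal sketch of a criterion-referenced rubric as data (Python; the criteria reuse the board-audit example above, and the structure is illustrative, not an existing Brainforge format):

```python
# Each criterion is a fixed, observable standard. There is no curve and no
# relative ranking: the pass decision compares evidence to the standard only.
BOARD_AUDIT_RUBRIC = [
    {
        "criterion": "Runs a board audit end to end",
        "observable": "Audit run is visible in Linear with all checks executed",
    },
    {
        "criterion": "Files an audit summary in the vault",
        "observable": "Vault doc exists with the required sections and linked evidence",
    },
]

def passes(evidence: dict[str, bool]) -> bool:
    """Pass/fail against the fixed standard, never against other learners."""
    return all(evidence.get(c["criterion"], False) for c in BOARD_AUDIT_RUBRIC)
```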
3.6 Formative assessment loop
Sources: skills/ai-learning-science/formative-assessment-loop-designer/SKILL.md; skills/curriculum-assessment/formative-assessment-technique-selector/SKILL.md — Black & Wiliam (1998); Wiliam (2011).
- Every module includes a formative check before the summative assessment (certification task, final rubric).
- Feedback must be task-level and actionable (what to change next), not only pass/fail.
Part 4 — Design standards for L&D outputs
4.1 Modules
- State the target behavior (observable, real-work) before content is designed.
- Include worked example → guided practice → independent task—not reading alone.
- Do not exceed ~45 minutes of new content without a retrieval or formative checkpoint.
- Cognitive load: one new workflow per module unless explicitly scoped as a survey or map.
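
These four standards can be checked mechanically. A sketch, assuming a hypothetical Module shape (none of these names exist in Brainforge tooling):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    minutes: int
    kind: str  # "content", "retrieval", or "formative"

@dataclass
class Module:
    target_behavior: str
    phases: list[str]        # e.g. ["worked_example", "guided_practice", "independent_task"]
    segments: list[Segment]
    new_workflows: int

def lint(m: Module) -> list[str]:
    """Flag violations of the four module standards above."""
    issues = []
    if not m.target_behavior.strip():
        issues.append("No observable target behavior stated.")
    for phase in ("worked_example", "guided_practice", "independent_task"):
        if phase not in m.phases:
            issues.append(f"Missing phase: {phase}.")
    running = 0
    for s in m.segments:
        if s.kind == "content":
            running += s.minutes
            if running > 45:
                issues.append("Over ~45 min of new content without a retrieval/formative checkpoint.")
                break
        else:  # a retrieval or formative checkpoint resets the clock
            running = 0
    if m.new_workflows > 1:
        issues.append("More than one new workflow; split, or scope explicitly as a survey.")
    return issues
```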
4.2 Certifications
- Criterion-referenced: rubric with observable standards; no norm-referenced grading.
- Real work: vault, client-appropriate data, Linear, GitHub—not generic simulations when real artifacts are safe and available.
- Post-certification reinforcement: spaced practice anchors (e.g., Week 1 / 2 / 4) are part of the design, not optional.
- Type 3 outcomes (habits, judgment, collaboration): multi-informant observation—not a quiz or single rubric snapshot pretending to measure disposition.
4.3 Curricula / tracks
- Backwards design: What does a certified team member do differently? → What assessment proves it? → What modules build toward that assessment?
- Sequence types: Type 1 (prerequisites) before Type 2 (reasoning) before Type 3 (disposition / observation).
- Include a scope-and-sequence map that shows the learning progression and dependencies.
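
A minimal check for the type-sequencing rule (Python; track contents are illustrative):

```python
def sequence_ok(track: list[tuple[str, int]]) -> bool:
    """Knowledge types (1, 2, 3) must be non-decreasing across the track."""
    types = [t for _, t in track]
    return all(a <= b for a, b in zip(types, types[1:]))

assert sequence_ok([("Tool setup", 1), ("Board audit judgment", 2), ("Review habits", 3)])
assert not sequence_ok([("Review habits", 3), ("Tool setup", 1)])
```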
Part 5 — Mindsets (what we believe)
- Train named workflows, not tools. Inspired by StackAI / GitHub-style playbooks: explicit inputs, AI step, human review, system of record—so “AI literacy” is always literacy in a workflow we actually use.
- Behavior change is the only metric that matters. (Bersin lineage.) Completions and NPS are weak proxies; behavior in work is the signal we optimize for.
- Literacy includes verification, not just prompting. (MIT Sloan, McKinsey-class framing.) When to trust outputs, how to check them, and what to do when they are wrong—baked into modules, not bolted on.
- Knowledge compounds when it is shared. (Dynamic knowledge sharing.) Individual learning that never lands in team systems (vault, standards, skills, Linear) is only partly realized value.
- The assessment defines the instruction. (UbD.) If you cannot describe what you will accept as evidence, you cannot design a coherent module.
- One universal floor, then role-based depth. (Deloitte-style tiers; GitHub-style paths.) Quickstart for all; then service-line depth; then builder/champion layers—without skipping the shared floor.
Part 6 — What we do not do (explicit exclusions)
Adapted from exclusions discipline in the claude-education-skills library, plus Brainforge-specific guardrails:
- No learning styles / VARK matching as a design driver (not evidence-based for instructional design decisions).
- No Cone of Learning percentage claims (fabricated; do not cite).
- No “awareness sessions” that end without a behavior target and a path to practice.
- No certification by completion alone—seat time is not mastery.
- No Type 3 outcomes (habits, judgment, collaboration) assessed only by quiz or a single rubric moment.
- No generic “AI literacy” without naming which workflows and which standards are in scope.
- No one-time training without a spaced reinforcement plan for anything we claim should stick.
Part 7 — Evidence and tool stack
External library
- claude-education-skills — large library of evidence-based education skills (100+); use for unit planning, cognitive load review, spaced schedules, rubrics, and formative design.
- MCP (optional): Education skills MCP — for agents that can call structured education workflows.
Canonical references (for deep dives)
- Wiggins & McTighe — Understanding by Design
- Sweller — cognitive load theory
- Roediger & Butler — retrieval practice
- Black & Wiliam — formative assessment
- Bersin — organizational learning maturity / dynamic enablement
- StackAI / GitHub AI playbooks — golden workflows and human-in-the-loop patterns
Brainforge tooling
- Cursor and the repo skills system (.cursor/skills/)
- The Forge — knowledge/ + knowledge/standards/
- Linear — delivery and linkage to work
- Doordash workflow — packages updates from platform changes into learner-ready formats
Related vault docs
- L&D README — repository map, tracks, Quickstart sequence
- Q2 2026 L&D roadmap — timeline and milestones
When this document and the README diverge, update both so the README stays the entry point and this file stays the expanded standard.