90% Accurate Driver-Based Forecast

Prepared by: Robert Tseng (Brainforge)
Date: 11/13/2025
Audience: UrbanStems

Executive Summary

The company is currently operating with a fragmented forecasting process spread across Finance, Operations, Marketing, and several disconnected data marts. This system produces recurring 25–30% forecast misses, driven by static budgeting assumptions, manual planning overrides, and a lack of integration with real behavioral or marketing drivers.

This engagement focuses on rebuilding forecasting from the ground up using a driver-based forecasting engine: linking traffic, conversion rate, repeat cycles, holiday effects, and SKU mix into a single coherent model. The goal is to reduce forecast error by 50–70% within one quarter, increase visibility across teams, and enable more accurate inventory, procurement, and budget decisions.

We will:

  • Build an integrated driver forecast covering sessions, CVR, AOV, promo elasticity, seasonality, and repeat cycles
  • Consolidate budget, planning, and forecasting models into a single source of truth
  • Introduce weekly forecast accuracy tracking (MAPE, bias, attribution)
  • Architect a new forecasting workflow used by Product, Finance, Marketing, and Operations
  • Deliver scenario modeling (high/base/low) and SKU/BOM allocation for procurement

We will operate under the following constraints:

  • Low engineering lift: Brainforge leads modeling, data design, and forecasting logic. Client teams primarily supply inputs
  • Practical implementation first: focus on the next 8–12 weeks of forecasting accuracy rather than long-term ML/AI sophistication
  • Self-serve reporting: All outputs (accuracy, scenarios, SKU mix, holiday forecasts) will be accessible in Looker or Sheets
  • Expected ROI: Lower stockouts, reduced over-purchasing, improved margin planning, and more accurate CAC/Promo allocation

Phase 1: Current-State Mapping


Link: https://www.figma.com/board/ny8mNOCUf1kVfWlcceRGog/Untitled?node-id=0-1&p=f&t=00ScnPDjbV1uyZUI-0

Findings

  1. Consistently over-forecasting by 25–30%.
  2. The forecasting process today is a waterfall:
    • Finance budget → Ops planning override → Holiday uplift → SKU/component allocation.
  3. None of the forecasting models include causal drivers:
    • No link to traffic, CVR, CAC, seasonality, promo lift, or reorder cycles.
  4. Forecasts do not track accuracy (no MAPE, bias, variance attribution).
  5. Models are drifting independently across:
    • budget_delivery
    • budget_purchase
    • planning_feed
    • holiday_map_type
    • component_forecast
    • Weekly_forecast

The current system behaves more like a cascading budget than a forecast.

Next Steps

  • Perform a six-month historical variance decomposition:
    • sessions vs CVR vs AOV vs repeat rate vs seasonality.
  • Map every dataset feeding the forecast:
    • identify assumptions, uplift multipliers, and transformation logic.
  • Build a “lineage diagram” showing how all marts feed into the final forecast.
  • Identify structural bias and recurring error patterns.
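
The variance decomposition above can be sketched as follows. This is a minimal illustration, assuming a multiplicative demand identity (revenue = sessions × CVR × AOV); the driver values are placeholder numbers, not UrbanStems data.

```python
import math

def decompose_variance(forecast: dict, actual: dict) -> dict:
    """Attribute a forecast miss to individual drivers.

    Under a multiplicative model (revenue = sessions * cvr * aov),
    the total log-error splits exactly into per-driver log-errors:
    log(actual_rev / fcst_rev) = sum of log(actual_d / fcst_d).
    """
    drivers = ["sessions", "cvr", "aov"]
    log_errors = {d: math.log(actual[d] / forecast[d]) for d in drivers}
    total = sum(log_errors.values())
    # Share of the total miss explained by each driver.
    shares = {d: log_errors[d] / total for d in drivers} if total else {}
    return {"total_pct_miss": math.exp(total) - 1, "driver_shares": shares}

# Illustrative week: forecast ran hot on sessions and CVR (hypothetical values).
forecast = {"sessions": 120_000, "cvr": 0.032, "aov": 68.0}
actual = {"sessions": 100_000, "cvr": 0.028, "aov": 70.0}
result = decompose_variance(forecast, actual)
```

In this example the total miss is roughly -25%, and the driver shares show most of it comes from the sessions assumption rather than AOV, which is exactly the distinction the Variance Attribution Tree is meant to surface.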

Deliverable

  1. Forecast Diagnostic Report identifying root causes of the 25–30% misses.
  2. Variance Attribution Tree breaking errors into drivers.

Driving Questions

  • Is forecast error primarily driven by traffic instability or incorrect conversion assumptions?
  • Which weekly assumptions contribute the most bias?
  • How much of the error stems from holiday uplift multipliers vs planning overrides?

Phase 2: Build Driver-Based Forecast

Assumptions

  • Current forecast does not respond automatically to market conditions:
    • No sensitivity to CAC changes or promo levels.
    • No seasonality curves calibrated to actual historical lift.
    • No mix model for SKU/component-level distribution.
  • Holiday effects operate as static multipliers with no connection to behavior.
  • Repeat rate and reorder cycles are missing entirely.

Designed Approach

  1. Build a unified driver model:

    • Sessions / Traffic
    • Paid vs organic mix
    • CAC & spend elasticity
    • Conversion rate (new vs repeat)
    • AOV segmented by promo type
    • Repeat reorder curves
    • Holiday uplift curves based on historical patterns
  2. Construct a demand formula

  3. Introduce high/base/low scenarios based on:

    • spend changes
    • CVR volatility
    • promo strategy
    • seasonality uncertainty.
  4. Allocate forecast downstream into:

    • SKU mix
    • BOM/components
    • Regions
    • Delivery constraints.
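
Steps 2 and 3 above can be sketched as a single driver model with multiplicative scenario shocks. All driver values and scenario deltas below are illustrative assumptions, not calibrated UrbanStems figures.

```python
def weekly_demand(sessions, cvr, aov, repeat_rate, holiday_uplift=1.0):
    """Driver-based weekly demand: new orders plus a repeat layer,
    scaled by that week's holiday uplift curve value."""
    new_orders = sessions * cvr
    total_orders = new_orders * (1 + repeat_rate) * holiday_uplift
    return {"orders": total_orders, "revenue": total_orders * aov}

# Hypothetical base-week driver assumptions.
BASE = {"sessions": 100_000, "cvr": 0.03, "aov": 70.0, "repeat_rate": 0.25}

# High/base/low scenarios expressed as multiplicative shocks to drivers.
SCENARIOS = {
    "high": {"sessions": 1.10, "cvr": 1.05},
    "base": {},
    "low":  {"sessions": 0.90, "cvr": 0.95},
}

def run_scenarios(base, scenarios, holiday_uplift=1.0):
    out = {}
    for name, shocks in scenarios.items():
        drivers = {k: v * shocks.get(k, 1.0) for k, v in base.items()}
        out[name] = weekly_demand(**drivers, holiday_uplift=holiday_uplift)
    return out

results = run_scenarios(BASE, SCENARIOS)
```

The same `results` dict can then be allocated downstream (SKU mix, BOM, regions) by applying historical mix percentages to each scenario's order total.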

Deliverable

  1. Driver-Based Forecast Engine (weekly + monthly horizon)
  2. Master driver table (sessions, spend, promo, CVR, AOV, repeat cycles)
  3. High/Base/Low scenario generator

Open Questions

  • What is the expected promo cadence and AOV elasticity?
  • How should we separate new vs repeat demand?
  • Which holiday types materially alter SKU mix?

Phase 3: Forecast Accuracy Tracking & Operational Integration

Current State

  • No backtesting or accuracy scoring occurs today
  • Forecast meetings lack structure because teams cannot see why the forecast is off
  • Procurement and Finance do not have a unified signal for weekly planning

Approach

  • Build automated accuracy monitoring with:

    • MAPE (overall + new vs repeat)
    • Bias
    • SKU mix accuracy
    • Holiday uplift accuracy
    • Promo elasticity deviation.
  • Create weekly “Forecast Accuracy Ritual”:

    • what we expected
    • what actually happened
    • which driver caused the error
    • updated assumptions
    • updated forecast.
  • Introduce one source of truth for:

    • weekly forecast
    • driver inputs
    • procurement envelopes
    • promo & spend assumptions.
  • Train Finance, Ops, and Marketing on:

    • reading the forecast
    • updating driver assumptions
    • using high/base/low scenarios.
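
The core accuracy metrics named above reduce to a few lines. A minimal sketch follows; the four weeks of orders are invented numbers chosen to show a persistent over-forecast.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error across weeks."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def bias(actuals, forecasts):
    """Mean signed percentage error; positive means over-forecasting."""
    return sum((f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Four illustrative weeks where the forecast consistently runs high.
actual_orders   = [3_000, 3_200, 2_900, 3_100]
forecast_orders = [3_900, 4_000, 3_600, 3_900]

weekly_mape = mape(actual_orders, forecast_orders)
weekly_bias = bias(actual_orders, forecast_orders)
```

When bias is persistently positive and close in magnitude to MAPE, as here, the miss is structural over-forecasting rather than noise, which is the pattern the weekly ritual is designed to catch and correct.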

Deliverable

  • Forecast Accuracy Tracker (MAPE, bias, driver attribution)
  • Weekly Forecast Ritual Playbook

Ownership Questions

  • Which teams will own updating traffic, spend, promo, and repeat assumptions?
  • How frequently should we publish forecast updates (weekly cadence vs daily refresh)?
  • How should procurement adjust safety stock based on forecast confidence?

Business Case

Introduction

UrbanStems currently relies on a fragmented forecasting process owned in pieces by Finance, Operations, Product, and Marketing. This process inherits outdated budgeting assumptions, applies manual planning overrides, and inserts holiday factors without grounding in customer behavior. Because each team adjusts its own slice of the forecast, the organization operates without a single reliable demand signal. As a result, we are consistently missing by 25–30% on a weekly and monthly basis. These misses drive material operational and financial inefficiency: excess spoilage, stockouts during peak demand windows, poor promo allocation, and reactive decision-making across the business.

A move to a driver-based forecasting model is not a “data upgrade.” It is an operational necessity. Without a unified model that incorporates traffic, conversion rate, repeat cycles, AOV mix, CAC elasticity, and seasonality, the company cannot plan labor, procurement, inventory, delivery capacity, or marketing spend with confidence. This memo outlines why a driver-based forecast is critical, the economic cost of maintaining the status quo, and the organizational advantages created by a single weekly forecast ritual with explicit error tracking.

The Problem: We Are Not Forecasting Demand, We Are Reformatting the Budget

The current system behaves like a cascading budget rather than a forecast. Finance sets an annual target, Operations adjusts it based on capacity, Marketing influences it during promotional periods, and Product modifies SKU-level mix based on launch cycles. None of this reflects real customer demand mechanics. Sessions fluctuate meaningfully week to week. Conversion rate varies by marketing mix and assortment health. AOV moves with promo depth, seasonality, and inventory availability. Repeat cycles are predictable but not modeled anywhere. Holiday lift is applied as a static multiplier that has no relationship to observed historical curves.

Because no behavioral inputs are connected to the forecast, the model cannot react to real conditions. When traffic drops, the forecast does not adjust. When promo intensity increases, the forecast does not adjust. When CAC rises, the forecast does not adjust. This rigidity guarantees error. Worse, because the model does not track its own performance (no MAPE, no bias scoring, no variance attribution), the organization cannot distinguish whether misses originate from traffic volatility, conversion softness, poor SKU availability, or overaggressive seasonal assumptions. Errors compound silently until they manifest operationally in spoilage, stockouts, or ad-hoc procurement.

The Cost: Margin Erosion, Operational Chaos, and Strategic Blind Spots

UrbanStems’ unit economics are highly sensitive to accuracy. Perishable inventory amplifies every mistake. Over-forecasting by even a modest amount leads directly to stem spoilage, forced discounting, or disposal. Under-forecasting forces last-minute procurement at higher cost, reduced assortment quality, and lost revenue during high-margin holidays. Because holiday periods are concentrated revenue moments, stockouts compound their impact over the entire quarter. A 25–30% miss in demand forecasting effectively disables the company’s ability to control gross margin.

Operationally, the absence of a single source of truth forces each team to create its own working assumptions. Marketing spends into a plan that Operations cannot fulfill. Product sets launch priorities detached from procurement readiness. Finance uses a revenue forecast that becomes unreliable as soon as demand deviates from the budget. Meetings become reactive exercises in explaining misses rather than proactively identifying risk or shaping demand. The company spends significant time and cognitive effort reconciling numbers across teams, not managing the business.

Strategically, the inability to attribute variance to drivers prevents UrbanStems from understanding whether demand movement is structural or tactical. Without driver attribution, it is impossible to determine whether CAC efficiency changed due to creative quality, channel shifts, or underlying market softness. Likewise, marketing cannot answer whether promotions drove incremental units or merely pulled forward demand. Product cannot evaluate whether assortment decisions are influencing conversion rate. Finance cannot determine whether shortfalls are due to spend efficiency or operational execution. Strategy without signal becomes guesswork.

The Solution: A Unified, Behaviorally Grounded Driver-Based Forecast

A driver-based system solves these issues by rebuilding the forecast from the mechanics of demand, not from a budget target. This system models demand as a function of sessions, conversion rate, AOV distribution, repeat order curves, spend elasticity, and observed holiday seasonality patterns. These inputs combine to produce a dynamic forecast that responds to real conditions each week and provides a transparent decomposition of why demand moved.

This approach also introduces a structured weekly forecast ritual. Each week, leadership reviews the prior week’s forecast accuracy, identifies the drivers of variance, updates assumption tables, and publishes a refreshed high, base, and low scenario. Accuracy tracking (MAPE, bias, and driver attribution) provides a diagnostic instrument, allowing the company to continuously improve. Instead of arguing about the number, teams align on how the number was generated and which levers need adjustment. Procurement receives clear guidance on inventory envelopes. Marketing receives predictable constraints for spend planning. Product gains visibility into SKU mix and regional demand patterns. Finance restores the integrity of its revenue forecasts.

Organizational Impact: Better Margin, Better Planning, and Better Alignment

A unified driver-based forecast produces measurable financial and operational benefits. Forecast misses decline materially, reducing spoilage and stockouts. Procurement improves its purchasing efficiency by ordering in more precise cycles aligned with real demand curves. Labor and delivery planning stabilize. Marketing optimizes CAC by spending against realistic demand expectations rather than chasing budget gaps. Product can time launches and assortments with a clearer understanding of weekly demand patterns. Cross-functional meetings shift from reconciliation to decision-making.

Most importantly, the company gains the ability to run scenarios. Leadership can ask: What if we reduce spend by 15%? What if conversion rate softens? What if holiday uplift outperforms? Scenario modeling transforms planning from reactive to strategic, allowing UrbanStems to hedge risk and invest confidently.
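
The spend question above can be sketched with a simple constant-elasticity assumption. The 0.6 elasticity is purely illustrative; Phase 2's CAC and spend-elasticity work would calibrate the real value from historical data.

```python
def sessions_under_spend_change(base_sessions, spend_multiplier, elasticity=0.6):
    """Constant-elasticity response of paid sessions to spend.

    With elasticity < 1 (diminishing returns), a 15% spend cut
    (multiplier 0.85) reduces sessions by less than 15%.
    The 0.6 elasticity here is a hypothetical placeholder.
    """
    return base_sessions * spend_multiplier ** elasticity

# What if we reduce spend by 15% against a 100k-session base week?
cut_sessions = sessions_under_spend_change(100_000, 0.85)
```

Feeding the resulting session figure back through the driver model turns "what if we cut spend" from a debate into an arithmetic answer with an explicit, reviewable assumption.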

Conclusion and Recommendation

UrbanStems has outgrown its current forecasting model. The current approach produces expensive, recurring errors with no mechanism for correction or learning. A driver-based forecasting engine enables the organization to plan accurately, respond quickly to market and behavioral shifts, and operate with a shared understanding of risk and opportunity.

I recommend beginning Phase 1, the diagnostic and variance attribution analysis, immediately. This will establish the empirical foundation for the new forecasting engine and prepare the company for a more stable, aligned, and financially disciplined 2026.