Canonical structure: This file follows the Brainforge dashboard specification template — knowledge/delivery/service-lines/strategy-analytics/templates/dashboard-specification-template.md. That template is the source of truth for section order, metric authority, and §9 tile patterns (including optional stacked Chart type / Rows / Columns and Data sources tables). This LMNT document is a filled example for the retail & POS Omni pilot, not a fork of the template.
This specification is business- and customer-facing: it is meant for LMNT leaders, retail operations, finance alignment, and partners who need a single place to see what dashboards will include, how metrics are defined, what data sits underneath, and how often it refreshes. Technical builders (Omni, data engineering) use the same document for implementation.
What it contains: Program intent; Omni Topics and Snowflake objects in tabular form; metric authority; formatting rules; freshness options (hourly, daily, weekly, ad hoc); dashboard-by-dashboard tiles; and sign-off (§12).
How to use it: Read §1 for scope and data alignment, §3–§4 for metrics and display standards, §5 for freshness and exports, §9 for each dashboard. Appendices A–B summarize design principles and a QA checklist.
Ownership: LMNT and Brainforge maintain the approved revision together; any distributed copy (for example a Google Doc) should match that revision after stakeholder review (§12).
Specification template
Status: In stakeholder review (pre-dashboard build)
Prepared for: LMNT — Phil (BizOps / exec), Will / Russell (Retail), leadership
Prepared by: Brainforge
Last updated: 2026-04-08
LMNT is building governed Omni dashboards on Snowflake so retail stakeholders can answer recurring questions from retail marts without exporting CSVs or reconciling ad hoc spreadsheets. This document specifies what dashboards to build for the retail and POS slice of the pilot, which metrics and cuts each tile uses, and how definitions align with operational reporting (Emerson shares, Target direct, in-store POS).
POS sales: retailers → end purchasers (sell-through / store scan data where available).
As of the last refresh noted in source material, retail sales data from Emerson was still in flight; build with what is available in RETAIL marts and document gaps explicitly in §9 and §10.
1.2 Omni Topics (semantic model alignment)
Builds in Omni should align to these Topics and their primary Snowflake objects. Exact Topic YAML names follow the Omni model sync (e.g. Retail/*.topic.yaml).
Objects below are exposed through the Omni semantic model (sync path). Use this table as the inventory for which tables power which questions; tile-level detail stays in §9.
| Snowflake object | Type | Role in this program |
| --- | --- | --- |
| RETAIL_DAILY_SALES_SUMMARY | Summary | Daily POS rollups; as-of date for dashboard header |
| RETAIL_WEEKLY_SALES_SUMMARY | Summary | Weekly retailer and category views; store health metrics |
Questions this pilot should answer (non-exhaustive; tie tiles in §9 back to these):
How do Walmart and Target compare on POS and store health, and what reporting differences explain deltas?
What is POS performance by retailer, category, and SKU — daily and weekly — including WoW / MoM / YoY?
How does performance vary by region and state (where geography exists)?
What is velocity by SKU and retailer, and how is it trending?
What does SKU-level daily detail look like for drill-down (wireframe-aligned)?
How much are retailers purchasing (sell-in / authorized orders) vs POS, and what is order cadence where data allows?
What does a net revenue bridge look like structurally — even when some lines are post-pilot?
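Most of these questions share one period-over-period calculation (WoW / MoM / YoY). As a point of reference, a minimal Python sketch of that delta logic; the sample values are hypothetical, and real figures come from the RETAIL marts:

```python
def pct_delta(current, prior):
    """Period-over-period % change, one decimal per the display standards;
    None when the prior period is missing or zero."""
    if prior in (None, 0):
        return None
    return round((current - prior) / prior * 100, 1)

# Hypothetical daily POS sales for one retailer
pos = {"latest_day": 1250, "same_day_last_week": 1100, "same_day_last_year": 870}

wow = pct_delta(pos["latest_day"], pos["same_day_last_week"])  # 13.6
yoy = pct_delta(pos["latest_day"], pos["same_day_last_year"])  # 43.7
```

The same function covers daily, weekly, and monthly comparisons; only the prior-period lookup changes.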
1.5 Data scope, folder, and upstream sources
| Item | Detail |
| --- | --- |
| Omni folder | LMNT — Retail Sales and POS Performance |
| Snowflake schema | RETAIL — full object list in §1.3 |
| Emerson / retailer feeds | Upstream tables (e.g. `WALMART_*`, `TARGET_*`) land in the warehouse and feed RETAIL marts; §9 “reporting differences” documents Walmart vs Target behavior where it affects interpretation |
1.6 Definition authority
| Source | Role |
| --- | --- |
| Data Platform Documentation (Core Metrics tab) | Wins for metric names, formulas, and business definitions. |
| This specification | Describes dashboards, tiles, filters, and convenience copies of definitions. If text here drifts, reconcile to the sheet and locked file. |
1.7 Legacy reporting inventory vs Omni pilot dashboards
Purpose: Track what Omni replaces or parallels. Action: LMNT or Brainforge to add rows (file names, owners) in the client review copy as inventory is confirmed.
| Existing artifact (as documented or linked) | Primary metrics / purpose | Refresh / grain (if known) | Omni pilot dashboard that supersedes or parallels |
| --- | --- | --- | --- |
| POS pilot Google Doc (wireframes) | SKU daily layout, executive views | See doc | Dashboards 1, 5 |
| Emerson / Target / Walmart source feeds | Raw POS and store sales | Vendor / daily | §9 tiles (via RETAIL marts) |
| Ad hoc spreadsheets / exports | One-off cuts | Varies | Replaced by governed Topics once live |
| [TBD — Source Medium / legacy BI] | [TBD] | [TBD] | [TBD] |
1.8 Delivery tracking
Work is tracked in the Linear project linked under Related artifacts. Dashboard build order: Dashboards 1–2 first, then 3, 4, 5; 6–7 as specified in §9 (subject to data readiness).
2. Dashboard design guidelines (applies to all dashboards in §9)
Glanceability: Headline KPIs and comparisons visible without scrolling past the first screen where possible; dense tables grouped in clear sections.
Consistency: Abbreviations, date formats, and currency match §4 unless a dashboard override is stated.
Honest comparisons: WoW / MoM / YoY and retailer comparisons use the same basis per metric definition in the Data Platform sheet; where bases differ (Walmart vs Target), call it out in Notes and in §9 reporting-differences tiles.
Density: Executive Pulse is tables-first by design; operational dashboards may add charts (trends, maps) where specified.
3. Metrics and definitions (Data Platform sheet and locked file)
The Data Platform Documentation (Core Metrics) and METRIC_DEFINITIONS_LOCKED.md are one definition set — keep them aligned. This section does not replace them.
4. Formatting and display standards
Defaults for all dashboards unless §9 specifies an override.

| Element | Convention |
| --- | --- |
| Currency | $ in titles and labels; whole dollars — round to nearest dollar, no decimals |
| Counts | Integers with thousands separators |
| Percentages | % with one decimal where specified; state denominator in Notes when ambiguous |
| Dates | MM/DD/YYYY |
| Abbreviations | Capitalized except “period over period” phrases (YoY, WoW, MTD, etc.) |
| Text casing | Sentence case for descriptions; title case for axis labels |
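These conventions are mechanical enough to state once and reuse. An illustrative Python sketch of the currency, count, and percentage rules (the actual formatting is configured in Omni field settings, not code):

```python
def fmt_currency(amount):
    """Whole dollars, thousands separators, no decimals (Currency row)."""
    return f"${round(amount):,}"

def fmt_count(n):
    """Integers with thousands separators (Counts row)."""
    return f"{n:,}"

def fmt_pct(p):
    """Percent with one decimal where specified (Percentages row)."""
    return f"{p:.1f}%"

print(fmt_currency(1234567.89))  # $1,234,568
print(fmt_count(45210))          # 45,210
print(fmt_pct(13.64))            # 13.6%
```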
Date and grain (global):
Primary date field: business_date / week_end_date as appropriate per Topic — document in Omni so all tiles default consistently.
Default time window (filter): Last 12 months, adjustable by month / week / day.
Retail week: Sat–Fri EST (stated in header). Time zone: EST.
Header element: As-of timestamp from the relevant mart (typically MAX(business_date) from RETAIL_DAILY_SALES_SUMMARY when daily) plus a label that states the refresh cadence agreed in §5.1 and “Week = Sat–Fri EST.”
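The header logic above can be sketched in a few lines; the function names are hypothetical, and in practice the as-of date comes from MAX(business_date) in RETAIL_DAILY_SALES_SUMMARY:

```python
from datetime import date, timedelta

def retail_week_end(d):
    """Friday that closes the Sat–Fri retail week containing d.
    Python weekday(): Mon=0 ... Fri=4, Sat=5, Sun=6."""
    return d + timedelta(days=(4 - d.weekday()) % 7)

def header_label(as_of):
    """Dashboard header text; as_of would come from MAX(business_date)."""
    return f"Data through {as_of.strftime('%m/%d/%Y')} | Week = Sat–Fri EST"

print(header_label(date(2026, 4, 8)))
# Data through 04/08/2026 | Week = Sat–Fri EST
```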
Dashboard-level filters (default across §9 unless overridden):
| Filter | Default |
| --- | --- |
| Date range | Last 12 months (adjustable) |
| Retailer | All |
5. Data freshness, caching, and scheduled exports
5.1 Freshness and latency
Standard cadence options (pick what the pipeline and stakeholders support; document the choice in the row below and in the dashboard header):
| Cadence | Meaning | Typical use |
| --- | --- | --- |
| Hourly | Mart or extract refreshed up to roughly every hour | Near–real-time monitoring, high-change feeds |
| Daily | Refreshed once per business day (specific time TBD with LMNT) | Default for retail POS marts in this pilot |
| Weekly | Refreshed on a fixed weekday | Slower-moving summaries or manual staging |
| Ad hoc | On-demand or manual refresh | One-off analysis, backfills, or pre-go-live loads |
This pilot — agreed target (fill in):
| Topic | Selection |
| --- | --- |
| Target cadence for RETAIL marts exposed in Omni | [ Hourly / Daily / Weekly / Ad hoc — circle one or combine by feed ] |
| As-of rule in UI | e.g. “Data through [date]” from MAX(business_date) or equivalent at chosen grain |
Source pipeline: Emerson / retailer feeds → Snowflake RETAIL marts → Omni Topics.
Known lag: Retailer-specific feed delays — document in §9 Notes when a tile is sensitive to lag.
Expected interaction latency: Filter changes should return within normal Omni interactive bounds; heavy SKU tables may use pagination or limits as implemented.
5.3 Scheduled exports
No scheduled exports for this pilot unless stakeholders add a row later:
| Export | Audience | Cadence | Contents | Owner |
| --- | --- | --- | --- | --- |
| — | — | — | — | — |
6. Visualization tool — documentation and rationale
default_filters, sample queries for AI: Set at Topic level for retailer / date; document in workbook description for Blobby.
Tableau / Looker / Power BI: Not in scope for this pilot unless LMNT adds a row to Related artifacts.
8. Access, training, and rollout
| Topic | Detail |
| --- | --- |
| Access / permissions | [Omni folder: LMNT — Retail Sales and POS Performance; who can view vs edit] |
| Training | [Live session, Loom, or office hours — TBD] |
| Rollout plan | [Pilot users first — TBD] |
| Support channel | [Slack / email — TBD] |
9. Dashboard specifications
Structure: Dashboard → Section → Tile. Tiles use either a compact field table or a stacked layout (Chart type / Rows / Columns on separate lines + Data sources table) per the dashboard specification template §9.
Dashboard 1 — Executive Pulse
Why this dashboard exists
Daily and weekly comparison tables (no charts) for POS sales and units by retailer and category — the operational “pulse” for Phil and retail leads.
Primary audience
Phil (BizOps / exec), Will / Russell (Retail)
Primary questions (tie to §1.4)
What is POS today vs same day last week / month / year by retailer and category?
What is the weekly snapshot vs prior week and same week last year?
Dashboard-level filters
| Filter | Scope | Default | Notes |
| --- | --- | --- | --- |
| Date range | Dashboard | Last 12 months (slice to latest day/week in tiles) | Tiles may fix MAX(business_date) or latest week |
| Retailer | Dashboard | All | |
| Category | Dashboard | All | Where applicable |
Dashboard-specific formatting
| Element | Override |
| --- | --- |
| Conditional deltas | Green if positive, red if negative on WoW / MoM / YoY % (sales and units) |
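The conditional-delta override is a simple rule; a sketch of it, assuming Omni conditional formatting is configured to match:

```python
def delta_color(pct_change):
    """Conditional formatting for WoW / MoM / YoY % columns (Dashboard 1
    override): positive green, negative red, zero or missing neutral."""
    if pct_change is None or pct_change == 0:
        return "neutral"
    return "green" if pct_change > 0 else "red"
```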
Section 1.1 — Daily POS by retailer (sales)
Purpose: One row per retailer for the latest business date with comparison columns.
Section-level filters: Inherit dashboard; tiles may force latest date.
| Field | Detail |
| --- | --- |
| Data source | RETAIL_WEEKLY_SALES_SUMMARY (grain: source × week_start_date × metric_name × value — pivot in Omni); or roll weekly comparisons from RETAIL_SKU_DAILY_SUMMARY |
| Tile-level filters | Latest week in range |
| Notes | Confirm week boundaries vs Sat–Fri EST in Topic |
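The long-to-wide pivot mentioned in the data source (one row per source × week × metric_name, reshaped so each metric becomes a column) can be illustrated with hypothetical values:

```python
# Long rows at the stated grain (source x week_start_date x metric_name x value);
# metric names and values here are hypothetical samples.
long_rows = [
    ("walmart", "2026-03-28", "pos_sales", 18200),
    ("walmart", "2026-03-28", "pos_units", 4100),
    ("target",  "2026-03-28", "pos_sales",  9600),
    ("target",  "2026-03-28", "pos_units", 2300),
]

wide = {}
for source, week, metric, value in long_rows:
    # One output row per source x week; each metric becomes a column
    wide.setdefault((source, week), {})[metric] = value

print(wide[("walmart", "2026-03-28")])  # {'pos_sales': 18200, 'pos_units': 4100}
```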
Tile 1.4b — Weekly by retailer (units)
| Field | Detail |
| --- | --- |
| Chart type | Table |
| Columns | Parallel unit fields to 1.4a |
| Rows | Retailer |
| Sort | Same as 1.4a |
| Formats | Integers |
| Data source | Same as 1.4a |
| Tile-level filters | Latest week |
| Notes | — |
Section 1.5 — Weekly by category and weekly retailer × category
Purpose: Same patterns as Dashboard 1 Sections 1.2–1.3 but weekly fields.
Tiles 1.5a–1.5d
| Tile | Description |
| --- | --- |
| 1.5a | Weekly by category (sales) — same columns as 1.4a, rows = category |
| 1.5b | Weekly by category (units) — same as 1.4b, rows = category |
| 1.5c | Weekly retailer × category (sales) — nested rows |
| 1.5d | Weekly retailer × category (units) — nested rows |
Data source: RETAIL_WEEKLY_SALES_SUMMARY (and rollups as needed).
Known gaps and caveats (Dashboard 1)
Emerson feed completeness for all retailers on a given day — may produce blanks; label in header or footnote if needed.
Out of scope for this pilot (Dashboard 1)
Chart visualizations (explicitly tables-only for this dashboard).
Dashboard 2 — Walmart vs Target & performance summary
Why this dashboard exists
Trend and scorecard view for retail leadership: Walmart vs Target head-to-head, reporting differences, store health, period trends, and placement for the net revenue bridge reference (see Dashboard 7 for the standalone bridge tile set).
Primary audience
Will / Russell, Phil, leadership
Primary questions (tie to §1.4)
How does Target perform vs Walmart on POS and stores for a given day or week?
What reporting differences (calendar, recognition, promo) should a reader know when comparing?
What are multi-period POS trends by retailer and category?
Dashboard-level filters
| Filter | Scope | Default | Notes |
| --- | --- | --- | --- |
| Date range | Dashboard | Last 12 months | |
| Retailer | Sections 2a–2c | Walmart, Target | Where section requires |
| Category | Dashboard | All | For trend sections |
Section 2.1 — Walmart vs Target head-to-head
Tile 2.1a — Scorecards + comparison table
| Field | Detail |
| --- | --- |
| Chart type | Side-by-side scorecard pair + table |
| Columns (scorecards) | Metrics: POS Sales — Latest Day; WoW; POS Units — Latest Day; WoW; POS Sales — Latest Week; WoW; Active Stores; WoW; Store Rank; Avg Sales/Store; WoW — per Walmart and Target |
| Rows (comparison table) | Period rows: Latest Day, Latest Week, MTD, YTD, last 3 months by month |
Flat SKU daily table aligned to Phil’s SKU_Daily wireframe — full comparison columns for POS $ and units (daily and weekly).
Section 5.1 — SKU daily table
Tile 5.1a — SKU daily flat table
| Field | Detail |
| --- | --- |
| Chart type | Flat table (wide; horizontal scroll expected) |
| Columns | Date; SKU; Retailer; Category; POS Latest Date; SDLW; WoW %; SDLM; MoM; YoY %; Units columns (parallel); Sales Latest Week; Week Prior; WoW % weekly; SWLY; YoY weekly; Units weekly columns (see original column list) |
| Rows | One row per Date × SKU × Retailer (grain) |
| Sort | Date DESC, then Retailer, SKU; optional sort by volume |
| Formats | $ in $k where the original spec used $k for POS columns; units as full integers; % with conditional green/red |
| Data source | RETAIL_SKU_DAILY_SUMMARY |
| Tile-level filters | Retailer, Category, Date (page bar) |
| Notes | Conditional formatting on all % columns; title “Category & SKU Drill-down” |
Known gaps and caveats (Dashboard 5)
Column naming in Omni should match wireframe after UAT.
Dashboard 6 — Retailer purchasing & order cadence
Why this dashboard exists
Purchasing (what LMNT ships / authorized orders) vs POS — reorder patterns and cadence. Uses sales / Omni facts as specified; PO granularity may be incomplete.
Primary question (tie to §1.4)
How much is each retailer purchasing, how frequently, and what discounts/promo splits are visible?
Section 6.1 — Retailer purchasing summary
Tile 6.1a — Purchasing summary table
| Field | Detail |
| --- | --- |
| Chart type | Table |
| Columns | Units Purchased (auth_based_qty Walmart / total_quantity others); Sales Value ($); # POs (purchase_order_count); Avg Order Size; Avg Weeks Between Orders; Last Order Date; MTD; Prior Month; MoM % |
| Rows | Retailer |
| Sort | By Sales Value or retailer |
| Formats | Counts integer; $ whole; % one decimal |
| Data source | RETAIL_FCT_WALMART_OMNI_SALES (Walmart); RETAIL_FCT_SALES (Target proxy until explicit PO data) |
| Tile-level filters | Time period: 7d, 30d, week, month, year |
| Notes | Gap: PO count and cadence need OMS / Emerson — flag in model audit (e.g. Gap 8). Walmart: auth-based fields; Target: shipped POS as proxy |
Section 6.2 — Purchasing frequency trend
Tile 6.2a — Weekly bar chart
| Field | Detail |
| --- | --- |
| Chart type | Bar (weekly) |
| Columns | Week |
| Rows / Series | Units purchased by retailer (stacked or grouped) |
| Sort | Week ascending |
| Formats | Last 52 weeks typical |
| Data source | Purchasing rollups |
| Tile-level filters | Retailer |
| Notes | Gaps in bars may indicate stockout or distribution loss |
Section 6.3 — Order cadence by SKU
Tile 6.3a — Nested table
| Field | Detail |
| --- | --- |
| Chart type | Table |
| Columns | Units Purchased; Avg Weekly Run Rate; Last Week With Activity; Weeks Since Last Order; Status (Active / Slowing / Stalled) |
| Rows | Retailer, SKU (nested) |
| Sort | By status severity or weeks since order |
| Formats | Status colors: Active green, Slowing yellow, Stalled red |
| Data source | Facts + calculated fields |
| Tile-level filters | Retailer, Category, Date |
| Notes | Thresholds: Active <4 wks; Slowing 4–8; Stalled >8 |
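The Status thresholds in the Notes row translate directly into a classification rule. A Python sketch with the same boundaries (treating exactly 4 weeks as Slowing and exactly 8 as Slowing is an assumption to confirm with LMNT):

```python
def cadence_status(weeks_since_last_order):
    """Tile 6.3a status rule: Active <4 wks; Slowing 4-8; Stalled >8."""
    if weeks_since_last_order < 4:
        return "Active"
    if weeks_since_last_order <= 8:
        return "Slowing"
    return "Stalled"
```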
| Field | Detail |
| --- | --- |
| Formats | $ whole; % one decimal; “—” for post-pilot columns |
| Data source | Target: PROMO_SALE_AMOUNT, REGULAR_SALE_AMOUNT, etc. from TARGET_DAILY_SALES_TCIN_LOC path; other retailers as modeled |
| Tile-level filters | Period |
| Notes | Discounts, chargebacks, trade spend post-pilot — show “—” and tooltip “Post-pilot scope” per BI Tool Memo |
Layout notes (Dashboard 6)
Top: 6.1a summary.
Below: 6.2a trend.
6.3a and 6.4a side-by-side or stacked.
Title: Retailer Purchasing & Order Cadence.
Top note: “Purchasing data: Walmart uses authorized order amounts (Omni). Other retailers use shipped POS as proxy. PO count and exact cadence pending OMS data.”
Known gaps and caveats (Dashboard 6)
PO count, cadence, and trade-spend lines depend on future sourcing.
Dashboard 7 — Net revenue bridge
Why this dashboard exists
Structured walk from gross POS toward net revenue — rows in place even when some measures are post-pilot so users see the roadmap.
Section 7.1 — Bridge table
Tile 7.1a — Waterfall / signed table
Chart type
Waterfall or table with +/− rows.
Rows
Bridge line items in order: Gross POS Sales → Returns → = Net Sales → Discounts → Chargebacks → Trade Spend / Slotting → = Net Revenue.
Columns
Current Month ($), Prior Month ($), Change ($).
| Field | Detail |
| --- | --- |
| Sort | Fixed bridge order |
| Formats | $ whole; post-pilot rows “—” + tooltip |
| Tile-level filters | Month |
| Notes | Build all rows now; do not suppress post-pilot lines — tooltip: “Post-pilot scope — requires trade spend and chargeback data from LMNT finance systems.” |
Data sources (Snowflake)
| Object | Notes |
| --- | --- |
| RETAIL_DAILY_SALES_SUMMARY | Gross POS, returns, net sales — in current warehouse / Omni model. |
| RETAIL_FCT_CHARGEBACKS | Post-pilot — not in current sync; add when dbt delivers tables (see Gap 7 in model audit). |
| RETAIL_FCT_TRADE_SPEND | Post-pilot — not in current sync; add when dbt delivers tables (see Gap 7 in model audit). |
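Since the tile builds all rows now and shows post-pilot lines as “—”, the signed-row rendering can be sketched as follows (amounts are hypothetical; only gross POS, returns, and net sales are computable from the current sync):

```python
def render_bridge(gross_pos, returns):
    """Bridge rows in fixed order; returns carried as a negative adjustment.
    Post-pilot lines render as a dash placeholder per tile 7.1a notes."""
    net_sales = gross_pos + returns
    return [
        ("Gross POS Sales", f"${gross_pos:,}"),
        ("Returns", f"-${abs(returns):,}"),
        ("= Net Sales", f"${net_sales:,}"),
        ("Discounts", "—"),                # post-pilot scope
        ("Chargebacks", "—"),              # post-pilot scope
        ("Trade Spend / Slotting", "—"),   # post-pilot scope
        ("= Net Revenue", "—"),            # pending deduction data
    ]

for line, amount in render_bridge(125000, -3500):
    print(f"{line:<24}{amount}")
```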
Layout notes (Dashboard 7)
Standalone dashboard or linked from Dashboard 2 narrative.
Title: Net Revenue Bridge.
Known gaps and caveats (Dashboard 7)
Net revenue and deduction lines pending data per BI Tool Memo and warehouse gaps (e.g. Gap 7 in model audit).
10. Deferred scope (outside this pilot)
| Item | Reason | Follow-up |
| --- | --- | --- |
| Full trade spend, chargebacks, slotting in bridge and discount tiles | Data not in current Omni sync / needs finance systems | Add tables in warehouse + Topic; new spec revision |
| Target store geography on par with Walmart | Not in Emerson share | Feed / modeling post-pilot |
| Exact PO count and order cadence for all retailers | OMS / Emerson gaps | Gap 8 / OMS integration |
| RETAIL_FCT_CHARGEBACKS, RETAIL_FCT_TRADE_SPEND | Not in current sync under Omni path | dbt / sync when ready |
11. Acceptance criteria
This pilot is accepted when:
Users in §1.4 can answer the listed primary questions using the published dashboards.
All metrics trace to the Data Platform sheet; nothing in §3 contradicts the sheet.
Freshness and caching (§5) match what stakeholders were told.
Access and rollout (§8) are executed for the agreed pilot group.
§9 is build-complete: every section and tile documents chart type, Columns, Rows, Sort, and filters at the right level.
§12 sign-off checklist is complete.
12. Review, open questions, and sign-off
12.1 Open questions
| Topic | Question | Decision | Owner |
| --- | --- | --- | --- |
| Omni Topic naming vs Retail/*.topic.yaml | Confirm final published names | | |
| $k vs whole dollars on SKU table | Align wireframe to §4 global rounding | | |
| Walmart fiscal week vs Sat–Fri label | Single definition in Data Platform | | |
12.2 Sign-off
Client sponsor or stakeholder — approved: [name, date]
Analytics build owner — approved: [name, date]
Brainforge delivery lead — approved: [name, date]
Client data or IT owner (if required) — approved: [name, date]
Appendix A — Why these dashboard practices matter
Reference only. Principles are stated in business language and tied to Brainforge dashboard delivery.
Glanceability and focus — Stephen Few, Information Dashboard Design
Busy teams need to see whether KPIs are healthy without reading a narrative. Specs list sections and tiles so builders know what must be visible at first glance versus drill-down.
Clear message before chart choice — Cole Nussbaumer Knaflic, Storytelling with Data
§1.4 primary questions and per-dashboard “why” text come before the tile table.
Patterns that match the domain — Wexler, Shaffer, Cotgreave, The Big Book of Dashboards
Retail norms (trend + ranking + detail) — Dashboard → Section → Tile makes handoff explicit.
Honest numbers and scales — Edward Tufte, The Visual Display of Quantitative Information; Alberto Cairo, The Functional Art
Formats, Notes, and Data Platform alignment keep finance and ops aligned.
Color and attention — Colin Ware, Visual Thinking for Design
Document color rules in Notes (e.g. conditional WoW %) for accessibility and consistency.
Omni documentation — visualization and dashboard building