[Client] — Data quality assessment — [Period / domain]
About this document (Brainforge)
Internal conventions for how this file works in the repo. When sharing a client-only artifact, strip this section or export the document without it.
Titling and filename
Use [Client] — Data quality assessment — [Period or domain] for the document title. Examples: LMNT — Data quality assessment — Q2 2026 · Acme — Data quality assessment — Wholesale (April 2026).
Filename: {client}-dq-assessment-{period}.md under knowledge/clients/{client}/resources/.
When to use this template
Use this template when:
- producing a periodic data quality health report (monthly, quarterly)
- tracking freshness, completeness, and accuracy trends over time
- surfacing data decay before it causes a fire drill
Do not use this template when:
- investigating a specific data accuracy issue (use the Data Findings Memo)
- documenting a production incident (use the RCA Memo)
- profiling a new data source for the first time (use the Discovery Memo)
Document metadata
Status: [Draft / In review / Published]
Assessment period: [MMM YYYY – MMM YYYY]
Warehouse: [Snowflake / BigQuery / other] — Account/region: [details]
Sources assessed: [source 1], [source 2], ...
Previous assessment: [date or link] (omit if first)
Prepared by: Brainforge
Last updated: [YYYY-MM-DD]
Related artifacts
| Artifact | Link / path | Notes |
|---|---|---|
| Discovery Memo(s) | [path to A1 memo] | Baseline source catalog and SLAs |
| Data Platform Documentation | [Google Sheet link] | Source catalog, metric definitions |
| Previous DQ Assessment | [link if exists] | Trend comparison |
| Known open issues | [Linear URL or list] | Items still unresolved from last period |
1. Overall health
One-line summary of the assessment period. Give the overall health status as a color, then a sentence explaining it. A director should read this and know whether to worry.
Health: [🟢 Green / 🟡 Yellow / 🔴 Red]
[2–3 sentences. What is the overall state of data health this period? What improved, what declined, and what is the one thing leadership should know?]
1.1 Period-over-period comparison
| Metric | This period | Previous period | Trend |
|---|---|---|---|
| Sources meeting SLA | [N / total] | [N / total] | [improving / stable / declining] |
| Sources with errors | [N] | [N] | [improving / stable / declining] |
| Days with ingestion delays | [N] | [N] | [improving / stable / declining] |
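To keep the health color consistent across periods, the roll-up from period metrics to a traffic-light status can be made explicit. A minimal Python sketch — the thresholds here are illustrative assumptions, not a Brainforge standard; calibrate them per client:

```python
# Hypothetical roll-up of period metrics into an overall health color.
# Thresholds (95% / 80% SLA rate, error counts) are illustrative only.

def health_color(sla_met: int, sla_total: int, sources_with_errors: int) -> str:
    """Map period metrics to a traffic-light status."""
    sla_rate = sla_met / sla_total if sla_total else 0.0
    if sla_rate >= 0.95 and sources_with_errors == 0:
        return "green"
    if sla_rate >= 0.80 and sources_with_errors <= 2:
        return "yellow"
    return "red"

print(health_color(12, 12, 0))  # all SLAs met, no errors → green
print(health_color(9, 12, 3))   # several misses and errors → red
```

Writing the rule down (even informally) keeps "🟡 Yellow" meaning the same thing in Q2 as it did in Q1.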
2. Freshness
Per-source report card on whether data arrived within its SLA.
2.1 Freshness by source
| Source | SLA | Actual latency | Met SLA this period? | Missed days | Notes |
|---|---|---|---|---|---|
| [source] | [e.g., ≤ 24h] | [p50/p95 latency] | [Yes / Mostly / No] | [N] | [e.g., 3 weekend delays in March] |
| [source] | [e.g., ≤ 6h] | [p50/p95 latency] | [Yes / Mostly / No] | [N] | [...] |
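The p50/p95 latency and missed-run counts in the table above can be derived from per-run arrival latencies. A minimal Python sketch using nearest-rank percentiles — the "Mostly" cutoff (under 10% of runs missed) is an assumption for illustration:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile (no interpolation)."""
    s = sorted(values)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(0, k)]

def freshness_report(latencies_h, sla_h):
    """Per-source freshness summary for the report card."""
    missed = sum(1 for x in latencies_h if x > sla_h)
    miss_rate = missed / len(latencies_h)
    return {
        "p50_h": percentile(latencies_h, 50),
        "p95_h": percentile(latencies_h, 95),
        "missed_runs": missed,
        # Cutoffs are hypothetical: any miss drops "Yes"; >=10% misses is "No".
        "met_sla": "Yes" if missed == 0 else ("Mostly" if miss_rate < 0.1 else "No"),
    }

daily_latencies = [2.0, 3.1, 2.4, 26.5, 2.2, 2.8]  # hours; one run blew a 24h SLA
print(freshness_report(daily_latencies, sla_h=24))
```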
2.2 Freshness incidents
- [Date]: [Source] was [X]h late due to [cause]. Resolved.
- [Date]: [Source] missed refresh entirely. Impact: [Y].
3. Completeness
Assessment of null rates, missing date ranges, and row count anomalies.
3.1 Completeness by source
| Source | Critical column | Null rate this period | Null rate previous | Notes |
|---|---|---|---|---|
| [source] | [column] | [%] | [%] | [e.g., increased nulls caused by schema drift in March] |
| [source] | [column] | [%] | [%] | [...] |
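Null rates should come from actual queries, not estimates (see the QA checklist). A minimal Python sketch of the computation, shown here over an in-memory row sample with a hypothetical `customer_id` column:

```python
def null_rate(rows, column):
    """Percent of rows where a critical column is NULL/missing."""
    if not rows:
        return 0.0
    nulls = sum(1 for r in rows if r.get(column) is None)
    return round(100 * nulls / len(rows), 2)

# Hypothetical sample: 2 of 4 orders are missing customer_id.
orders = [
    {"customer_id": "c1"},
    {"customer_id": None},
    {"customer_id": "c3"},
    {"customer_id": None},
]
print(null_rate(orders, "customer_id"))  # → 50.0
```

In practice the equivalent `COUNT_IF(col IS NULL) / COUNT(*)` aggregate would run in the warehouse; the point is that the reported figure is computed, not eyeballed.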
3.2 Row count trends
| Source | Expected rows/period | Actual this period | % Expected | Notes |
|---|---|---|---|---|
| [source] | [~N] | [N] | [%] | [e.g., Black Friday volume spike accounted for] |
| [source] | [~N] | [N] | [%] | [e.g., connector outage Mar 10–12] |
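The "% Expected" column is a simple ratio; stating it explicitly avoids ambiguity about direction (actual over expected). A one-liner sketch:

```python
def pct_of_expected(actual, expected):
    """Actual row count as a percent of the expected volume for the period."""
    return round(100 * actual / expected, 1)

# Hypothetical period: ~100k rows expected, 91.4k landed.
print(pct_of_expected(91_400, 100_000))  # → 91.4
```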
4. Accuracy
Spot-check results and known discrepancies. This section is lighter than a full Data Findings investigation — it flags issues for deeper triage.
4.1 Spot-check results
| Check | Metric | Expected | Actual | Match? | Date checked |
|---|---|---|---|---|---|
| [e.g., Revenue vs source] | [Total revenue] | [$X] | [$Y] | [Yes / Variance: X%] | [YYYY-MM-DD] |
| [e.g., Order count parity] | [Order volume] | [N] | [N] | [Yes / No] | [YYYY-MM-DD] |
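The "Match?" column benefits from an explicit tolerance so "Yes" is defined, not judged. A minimal Python sketch — the 1% default tolerance is an assumption to illustrate, not a standard:

```python
def variance_pct(expected, actual):
    """Signed percent variance of the warehouse value vs the source of truth."""
    return round(100 * (actual - expected) / expected, 2)

def spot_check(expected, actual, tolerance_pct=1.0):
    """Return 'Yes' within tolerance, else the variance for the table."""
    v = variance_pct(expected, actual)
    return "Yes" if abs(v) <= tolerance_pct else f"Variance: {v}%"

# Hypothetical revenue parity check against the source system.
print(spot_check(125_000.0, 124_100.0))  # within 1% → "Yes"
print(spot_check(100_000.0, 110_000.0))  # → "Variance: 10.0%"
```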
4.2 Anomalies flagged for investigation
- [Issue] — [brief description, linked to relevant Linear ticket or Findings Memo]
5. Recommended actions
Ordered by priority. Each action should be specific and actionable by a named owner.
- [Action] — [Why this, why now, who should own it]
- [Action] — [...]
- [Action] — [...]
Appendix — Pre-handoff QA checklist
- Freshness SLA table covers every source assessed
- Completeness null rates are queried (not estimated)
- Row count trends compare current vs expected with explanation for variance
- Accuracy spot-checks include date run
- Open items from the last assessment are carried forward if unresolved
- Overall health color is justified in the narrative