Data Findings Memo — Template

About this document (Brainforge)

Internal conventions for how this file works in the repo. Strip this section, or export the file without it, when sharing a client-only artifact.

Titling and filename

Use [Client Name]: [Topic] — Findings and Verified Figures for the document title. Example: LMNT: Emerson Revenue Discrepancy — Findings and Verified Figures.

Filename: {client}-data-findings-{topic}.md under knowledge/clients/{client}/resources/.
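
Example: lmnt-data-findings-emerson-revenue.md under knowledge/clients/lmnt/resources/ (the topic slug here follows the LMNT title example above and is illustrative).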

When to use this template

Use this when Brainforge has investigated a data accuracy or data quality issue for a client and needs to communicate what was found, what was fixed, and what the client should do next. This document is written for two audiences at once: a technical implementer who needs the detail, and an executive who needs the narrative and decision context.

This is distinct from a Technical Assessment Memo (evaluating tools or approaches) and an RCA Memo (production incident post-mortem). Use this when the primary deliverable is: “we found a problem in how your data is being measured, here is what we did about it, and here are the verified numbers.”

Do not use this template when:

  • investigating a production pipeline failure (use the RCA Memo)
  • evaluating tool or platform options (use the Technical Assessment Memo)
  • profiling a new data source for the first time (use the Discovery Memo)

About this document: readability and self-service

This memo is designed for direct stakeholder consumption — executives and analysts should be able to read it and act without Brainforge interpreting it. Avoid internal shorthand, unexplained acronyms, and assumed context. Every section should pass the “could a new stakeholder pick this up cold and understand it?” test.


[Client Name]: [Topic] — Findings and Verified Figures

Prepared by: Brainforge ([names])
Prepared for: [Client stakeholder names and titles]
Date: YYYY-MM-DD
Status: [Draft / Delivered / In Review]


| Artifact | Link / path | Notes |
| --- | --- | --- |
| Data Platform Documentation | [Google Sheet link] | Source catalog, metric definitions |
| Discovery Memo | [path to A1 memo] | Source profiling reference |
| Linear ticket | [Linear URL] | Investigation issue |
| RCA Memo (if incident escalated) | [path] | Root cause analysis if applicable |

Executive Summary

[2–4 sentences. What was the stated problem? What did Brainforge find? What did we do about it? What is the one number or signal the executive should walk away with?

Write this so the CEO can read it in 60 seconds and understand the situation. No technical detail. No jargon.]


The Problem: What Was Wrong and How We Found It

Starting point

[What triggered this investigation? A number that seemed wrong, a stakeholder question, a QA check? What was the expected value vs. the observed value?]

Root cause [N]

[One subsection per distinct root cause. Name the cause plainly, then explain the mechanism. Use numbers where possible (e.g., “X out of Y invoices had zero amount”). Avoid jargon in the first sentence; technical detail can follow.]

Root cause [N+1]

[…]
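
Where a root cause rests on a count (like the zero-amount invoice example above), consider including the query that produced the number so the client can reproduce it. A minimal sketch, assuming a hypothetical fct_invoices table:

```sql
-- Hypothetical count backing an "X out of Y invoices had zero amount" claim.
-- fct_invoices and its columns are illustrative placeholder names.
select
    sum(case when amount = 0 then 1 else 0 end) as zero_amount_invoices,
    count(*)                                    as total_invoices
from fct_invoices;
```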


What We Built

[Describe the fix or the new models/tables/reports built. For each artifact:]

[Table or artifact name]

[One sentence on what it is. One sentence on what problem it solves. Bullet list of key attributes or columns the client will use.]


Decisions Made (and How to Override Them)

[Every non-obvious choice made during the build should be documented here. The goal is to give an executive enough context to say “I disagree with that” and know how to change it. Use a table.]

| Decision | What we chose | Why | How to change it |
| --- | --- | --- | --- |
| [Decision name] | [The choice made] | [1–2 sentence rationale] | [What to modify in the data or code] |

Open Items and Known Caveats

[Be honest. List anything that is not yet resolved, any known approximations, and any edge cases that the client will eventually ask about. One bullet per item. Format: bold the item name, then explain.]

  • **[Item name]** — [What it is, why it is not resolved, what the impact is, and when it will be addressed or what triggers addressing it.]

How to Validate

[Write this for the technical user (the analyst, the ops person). Give them exact table names, exact SQL snippets, and a clear description of what a passing result looks like vs. a failing result. Use code blocks.]

Start here — [description of the primary QA query]:

[SQL]

Drill into [specific scenario]:

[SQL]

What passing looks like: [Describe the expected output, e.g., “check_amount should be 0 for all completed months.”]
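
As an illustration of the expected shape, a primary QA query for a revenue-reconciliation memo might look like the following. This is a sketch: fct_revenue_monthly and its columns are hypothetical names, not real client objects.

```sql
-- Illustrative QA query: reconcile reported vs. source revenue by month.
-- Passing means check_amount is 0 for every completed month.
select
    date_trunc('month', invoice_date)         as revenue_month,
    sum(reported_amount)                      as reported_total,
    sum(source_amount)                        as source_total,
    sum(reported_amount) - sum(source_amount) as check_amount
from fct_revenue_monthly
group by 1
order by 1;
```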


Verified Figures

[The numbers. Present as a table. Include the time range, the key metric, and any breakdown that matters. Run the query fresh before sending — include the date the figures were pulled.]

From [table name], run on [YYYY-MM-DD]:

| [Dimension] | [Metric 1] | [Metric 2] | [Metric 3] |
| --- | --- | --- | --- |
| [Value] | [Value] | [Value] | [Value] |

[1–3 sentences of interpretation. What does the trend show? Is this expected or unexpected?]
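
A sketch of the kind of fresh pull that might sit behind this table, assuming a hypothetical fct_orders model (adjust names and dialect to the client warehouse):

```sql
-- Illustrative verified-figures pull; rerun on the send date and record
-- that date above. Table, columns, and date math are placeholders.
select
    date_trunc('month', order_date) as month,
    count(distinct order_id)        as orders,
    sum(net_revenue)                as net_revenue
from fct_orders
where order_date >= date_trunc('month', current_date) - interval '12 months'
group by 1
order by 1;
```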


Recommendations

[3 specific, actionable recommendations. Each should be something the client can actually do. Ordered by priority. Bold the action, then explain the rationale and potential impact.]

  1. **[Action]** — [Why this, why now, what outcome to expect.]
  2. **[Action]** — […]
  3. **[Action]** — […]

Appendix A: Calculation Detail

[Optional. Include formulas, SQL logic, or field definitions that the technical user may need to reproduce or audit the work. Use code blocks. Keep this section purely technical — no narrative.]
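
For example, a field definition entry might look like the following. The net_revenue logic and fct_invoices are hypothetical, shown only to illustrate the expected level of detail:

```sql
-- Hypothetical field definition: net revenue in USD, excluding tax.
select
    invoice_id,
    gross_amount
      - coalesce(refund_amount, 0)
      - coalesce(credit_amount, 0) as net_revenue
from fct_invoices;
```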


Appendix B: Pre-handoff QA Checklist

  • Executive summary gives the one number or signal the executive should walk away with
  • Every root cause is named with a mechanism, not just a label
  • “Decisions Made” table includes how to override each choice
  • Validation SQL queries would run and produce meaningful results
  • Verified figures include the date they were pulled
  • Figures reconcile to the warehouse (fresh run, not copy-pasted from earlier query)
  • No internal Brainforge shorthand, unexplained acronyms, or assumed context
  • Stakeholder could read this cold and act