Omni Blobby: subject-matter playbook (creation spec)

Spec for creating a repeatable playbook so each subject-matter area (for example revenue, support, product usage) is documented and implemented consistently for Blobby, Omni’s AI assistant.

Audience: Domain owners (analytics, data, or business) plus someone who can edit the shared semantic model (Modeler or Connection Admin).


How Blobby “learns”

Blobby does not use a separate ML training pipeline. Accuracy comes from configuring the Omni semantic model: topics, field visibility, labels, synonyms, explicit AI instructions, and example queries.

Impact order (strongest first): ai_context → ai_fields → sample_queries → synonyms → field labels and descriptions.
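To ground these levers, a hypothetical sketch of a topic and a measure (all topic, view, and field names are invented, and the exact key placement is an assumption — confirm against Omni's model documentation):

```yaml
# Illustrative only — names and nesting are assumptions, not Omni's exact schema.
# Inside a topic definition:
ai_context: |
  "Revenue" and "sales" both mean order_items.total_sale_price.
  One row = one order line item; count orders via orders.order_count.
  All timestamps are UTC.
ai_fields:
  - orders.*
  - order_items.*
  - -users.email            # negation: hide internal-only fields from Blobby

# On a measure, in its view definition:
total_sale_price:
  label: Total sale price
  description: Sum of line-item sale prices, pre-tax, in USD.
  synonyms: [revenue, sales, GMV]
```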

References: Optimizing models for AI, Omni AI overview.


What good looks like

For each subject area, Blobby should:

  • Map how people actually ask (slang, acronyms, “revenue” vs “sales”) to the right fields and filters.
  • Respect non-obvious rules (grain, definitions, time zone, what counts as “active”, etc.).
  • Avoid irrelevant or internal-only fields when the model is large.
  • Be verifiable with a short test matrix of questions and expected query behavior.
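The test matrix can be as small as a few rows per subject area; one entry might look like this (the structure and every name are illustrative, not an Omni format):

```yaml
- question: "What was revenue last month?"
  expected_topic: orders
  expected_fields: [order_items.total_sale_price]
  expected_filter: "orders.created_at matches last month"
  result: pass
  verified_by: <owner>
  date: <date>
```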

Scope template (one playbook per subject area)

Sections and what to capture:

  • Business definition: What this area covers and what it explicitly excludes.
  • Primary topics/views: Which Omni topics and views Blobby should use (topic names, base views).
  • Key questions: 5–15 real questions stakeholders ask. Use Analytics → AI usage in Omni when available.
  • Terminology map: Business term → view.field (include timeframes such as created_at[month] where relevant).
  • Data nuances: Grain (for example row = line item vs order), default filters, UTC vs local time, known caveats.
  • Fields to expose or hide: Must-include and must-exclude patterns for ai_fields (or tag negations).
  • Synonyms: Abbreviations and alternate names for each important dimension and measure.
  • Sample queries: 3–5 golden questions with correct query structure (build in a workbook first, then encode in YAML).
  • Test plan: Questions to re-run in Blobby after changes; pass/fail and follow-up edits.
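The terminology map can live directly in the playbook document; a sketch (all field names invented):

```yaml
# Business term -> view.field (illustrative names only)
revenue: order_items.total_sale_price
sales: order_items.total_sale_price
MRR: subscriptions.monthly_recurring_revenue
signup month: users.created_at[month]
active user: "users.status = 'active' within the last 30 days"   # capture the rule, not just the field
```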

Deliverables

  1. Written playbook using the scope template (location per team convention: Notion, Confluence, or this repo).
  2. Model changes in the shared model (Omni UI or Model YAML API), typically:
    • ai_context on relevant topic(s) (terminology, nuances, behavior).
    • synonyms on critical dimensions and measures.
    • ai_fields when the model is noisy or too large.
    • sample_queries with realistic prompts and valid query blocks.
    • Optional: an AI-specific topic using extends for a curated Blobby surface — see topic parameters.
  3. Field descriptions where labels alone are ambiguous — see synonyms and modeling.
  4. Verification record: test questions, expected outcome, date, and owner sign-off.
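The optional AI-specific topic in deliverable 2 might look like this (a sketch: the file name, extends behavior, and tag-negation syntax are assumptions to confirm against Omni's topic-parameter docs):

```yaml
# topics/orders_ai.topic.yaml — hypothetical file name and contents
extends: orders            # inherit joins and fields from the base topic
label: Orders (AI)
ai_fields:
  - orders.total_revenue
  - orders.created_at
  - users.state
  - -tag:internal          # tag negation, per the spec's "tag negations"
```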

Process (for the playbook author)

  1. Discover — Inventory topics and views for this area; note existing ai_context, ai_fields, and sample_queries.
  2. Harvest language — Collect real phrases from stakeholders and from Omni AI usage analytics when possible.
  3. Draft ai_context — Terminology map, grain and definitions, default behaviors (for example “top N”, trend granularity).
  4. Curate fields — Add ai_fields (wildcards, negation, tags) if Blobby is distracted by irrelevant views.
  5. Add synonyms — Field-level alternates; keep narrative and cross-field rules in ai_context.
  6. Author sample_queries — Build correct queries in a workbook, then copy structure into topic YAML (prompt, optional extra ai_context, query).
  7. Polish descriptions — Especially enums, status values, money, and dates.
  8. Optional narrow topic — If one topic is still too broad, add an extends topic labeled for AI with a tight field list.
  9. Test — Run the test plan; iterate ai_context and examples until stable.
  10. Govern — Document intent in commit messages; record who owns updates when the warehouse or business definitions change.
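Step 6 in YAML form (a sketch — build the real query in a workbook and copy its structure; every name and the query-block shape here are illustrative):

```yaml
sample_queries:
  - prompt: "Top 10 states by revenue this year"
    ai_context: Default "top N" to 10 unless the user specifies N.
    query:
      fields: [users.state, order_items.total_sale_price]
      filters:
        orders.created_at: this year
      sorts: [order_items.total_sale_price desc]
      limit: 10
```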

Prerequisites and constraints

  • Permissions: Modeler or Connection Admin for model/API edits.
  • Layers: Changes usually target the shared model (API mode: extension). Know how it differs from the schema-only and workbook-only layers — see Omni modeling.
  • API: Set OMNI_BASE_URL and API key; use {base}/openapi.json to verify endpoints and schemas when automating.
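A quick check before automating (a sketch: the OMNI_BASE_URL and API-key variable names follow the spec, but the base URL value and the bearer-token header format are assumptions — verify against Omni's API docs):

```shell
# Illustrative values only — never commit real keys.
OMNI_BASE_URL="https://myorg.omniapp.co/api"
OMNI_API_KEY="change-me"

# Confirm available endpoints and schemas before scripting model edits:
echo "$OMNI_BASE_URL/openapi.json"
# curl -s -H "Authorization: Bearer $OMNI_API_KEY" "$OMNI_BASE_URL/openapi.json"
```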


Optional RACI

  • Domain owner: Vocabulary, definitions, approval of “correct” answers.
  • Modeling owner: YAML changes, ai_fields, synonyms, sample query encoding.
  • QA: Runs the test plan in Blobby after each promotion.