Linear Tickets — Eden Command Center: Data Access + Chat Integration
Draft tickets for review before committing to Linear.
Project: Eden — Data Access + Chat Integration
Linear team: EDE3
Milestones: M1 (Apr 6) Slack, M2 (Apr 13) GWS, M3 (Apr 20) Cross-platform + full Command Center
Estimation: 1 point = 1 hour
Stack: Mastra (TypeScript) + Next.js 15 on Cloud Run (Eden GCP, BAA-covered), Vertex AI Gemini API for LLM
Architecture: CLI-native — generic `run_gws` shell-exec tool + Slack MCP server. See `alt-architecture-cli-native-agent.md`.
Effort summary
| # | Ticket | Pts | Step | Milestone | Blocked by | Human-only |
|---|---|---|---|---|---|---|
| 1 | Create GCP project and service account | 2 | 1 | — | — | No |
| 2 | Request DWD approval from Eden IT | 2 | 1 | — | 1 | Yes |
| 3 | Create Slack app for Eden workspace | 3 | 1 | — | — | Partial |
| 4 | Design identity mapping schema and token convention | 3 | 2 | — | 1 | No |
| 5 | Build resolve_identity lookup function + seed mapping | 4 | 2 | — | 4, 2 (can stub) | No |
| 6 | Build core PII redaction function (email, name, phone) | 4 | 2 | — | 5 | No |
| 7 | Write PII redaction test fixtures and validation suite | 3 | 2 | — | 6 | No |
| 7b | Define test environment setup | 1 | 2 | — | — | No |
| 8 | Build run_gws + gws_schema tools, whitelist, and Dockerfile | 5 | 3 | — | 1, 6 | No |
| 9a | Spike — Slack MCP server feasibility and auth model | 2 | 3 | — | 3 | No |
| 9b | Connect Mastra to Slack MCP server + PII processor | 2 | 3 | M1 | 9a (Go), 3, 6 | No |
| 10a | Build Mastra agent scaffold, wire tools, configure observability | 3 | 3 | M1 | 8, 9b | No |
| 10b | Author agent system prompt (GWS ref + Slack routing + reasoning) | 3 | 3 | M1 | 10a | No |
| 11 | Build chat interface and deploy to Cloud Run for M1 demo | 4 | 3 | M1 | 10b, 8 (Dockerfile) | No |
| 12 | GWS command reference + integration tests across all GWS surfaces | 5 | 4 | M2 | 8, 2 (DWD approved) | No |
| 13 | Build cross-platform orchestration and project registry | 5 | 5 | M3 | 10b, 12 | No |
| 14 | End-to-end validation and production deploy | 4 | 5 | M3 | 13 | Yes |
| | Total | 55 | | | | |
Step 1 — Source Authentication
Ticket 1: Create GCP project and service account for Eden Command Center
Estimate: 2 pts
Context
- The Command Center agent needs a GCP service account with Domain-Wide Delegation (DWD) to query data across Eden’s entire Google Workspace on behalf of any user.
- Spike reference: `spike-command-center-data-access.md` §5, Step 1
Goal
Provision the GCP project and service account that all downstream agent tools will authenticate with.
Scope
In scope
- Create a dedicated GCP project for Eden Command Center (or use Eden’s existing GCP project per the BAA)
- Create a service account within that project
- Enable required Google Workspace APIs: Gmail, Drive, Calendar, Drive Activity, Admin SDK (read only?)
- Document the service account email and project ID
Out of scope
- DWD grant approval (Ticket 2 — requires Eden IT)
- Slack app creation (Ticket 3)
- Cloud Run deployment setup (Ticket 11)
Acceptance Criteria
- GCP project exists and is accessible by Brainforge team
- Service account created with a descriptive name (e.g. `command-center-agent@eden-cc.iam.gserviceaccount.com`)
- Gmail, Drive, Calendar, Drive Activity, and Admin SDK APIs enabled on the project
- Service account key generated and stored in GCP Secret Manager (not in repo)
- Project ID and service account email documented in the Eden vault
Notes / Constraints
- No secrets in the repo — key must live in GCP Secret Manager and 1Password
- Use a single service account for all Workspace APIs to keep DWD scoping simple
Ticket 2: Request Domain-Wide Delegation approval from Eden IT
Estimate: 2 pts (our prep work; Eden IT approval wait time is external)
Context
- DWD allows the service account to impersonate any user in Eden’s Google Workspace domain, which is required for cross-org visibility.
- This is a blocker — without DWD approval, the agent can only see data the service account owns (nothing).
- Spike reference: `spike-command-center-data-access.md` §5, Step 1; Risks section
Goal
Get Eden IT to approve the DWD grant with scoped OAuth scopes so the agent can query Workspace data across the org.
Scope
In scope
- Prepare a scoped list of OAuth scopes with justification for Eden IT:
  - `gmail.readonly` — thread metadata only (no message bodies)
  - `drive.metadata.readonly` — file metadata only (no file content)
  - `calendar.readonly` — event metadata only (no meeting notes)
  - `https://www.googleapis.com/auth/drive.activity.readonly` — Drive Activity audit trail
  - `admin.directory.user.readonly` — user directory for identity resolution
- Send the request to Eden IT with the service account client ID
- Offer a walkthrough call if needed
- Confirm the grant is active by testing impersonation on 2–3 test users
Out of scope
- Scopes beyond metadata (no `gmail.modify`, no `drive.readonly` for content)
- Any data extraction before DWD is approved
Acceptance Criteria
- Eden IT has received the DWD request with the exact scope list and justification
- DWD grant is approved and active in Eden’s Google Workspace Admin Console
- Validated: service account can impersonate at least 2 Eden users and list their Drive files / Gmail threads
- If declined or delayed: escalation path documented and communicated to CSO
Notes / Constraints
- Start this in Week 1 — it’s the critical path blocker for all GWS data access
- The scope list is deliberately restrictive (metadata-only) to make approval easier; it can be broadened later if the client needs or wants more data access.
- Eden IT may require a security review or data processing agreement
- Escalation: If DWD is not approved by end of Week 2 (Apr 6), escalate to Adam and shift M2 by one week. Communicate the slip to CSO immediately.
- Fallback (no DWD): If DWD is blocked or indefinitely delayed, fall back to user-authenticated mode: the COO authenticates via their own Google OAuth, and the agent queries only data visible to that user. This still delivers value (the COO sees their own full workspace) while DWD approval continues in parallel. The agent tools (`run_gws`) support this by swapping the impersonation credential for the user’s OAuth token.
Open Questions
- Who is the Eden IT contact for Workspace admin approvals? — Adam
- Does Eden require a formal data processing agreement before granting DWD?
- If fallback to user OAuth: does Danny (COO) have sufficient Drive/Gmail/Calendar visibility across the org to make the Command Center useful without DWD?
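For the impersonation check in the acceptance criteria, a minimal smoke test, assuming the `googleapis` and `google-auth-library` npm packages; the key path env var and the impersonated test user below are placeholders:

```ts
// DWD smoke test sketch: impersonate a test user and list 5 Drive files (metadata only).
import { google } from "googleapis";
import { JWT } from "google-auth-library";
import { readFileSync } from "node:fs";

const key = JSON.parse(readFileSync(process.env.GWS_SA_KEY_PATH!, "utf8")); // placeholder env var

const auth = new JWT({
  email: key.client_email,
  key: key.private_key,
  scopes: ["https://www.googleapis.com/auth/drive.metadata.readonly"],
  subject: "test-user@eden.com", // the user being impersonated via DWD (placeholder)
});

const drive = google.drive({ version: "v3", auth });
const res = await drive.files.list({ pageSize: 5, fields: "files(id,name)" });
console.log(res.data.files); // a non-empty listing proves the DWD grant is active
```

Repeat with 2–3 different `subject` users to satisfy the validation criterion.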
Ticket 3: Create Slack app for Eden workspace
Estimate: 3 pts
Context
- The Command Center agent needs a Slack app installed in Eden’s workspace to search messages and read channel history. The agent will connect to Slack’s official MCP server, which requires the app’s OAuth token.
- Spike reference: `spike-command-center-data-access.md` §5, Step 1; Part 2 (Slack options)
Goal
Create and install a Slack app in Eden’s workspace with the scopes needed for the Slack MCP server connection.
Scope
In scope
- Create a Slack app in Eden’s workspace (or Brainforge-owned, installed to Eden)
- Request the following OAuth scopes:
  - `search:read` — Slack RTS API (semantic/keyword search)
  - `channels:history` — read message history in public channels
  - `channels:read` — list public channels
  - `groups:history` — read message history in private channels
  - `groups:read` — list private channels the app is in
  - `users:read` — resolve user IDs to display names (for anonymization mapping)
- Generate a user token (xoxp) or bot token (xoxb) depending on required access
- Store the token in GCP Secret Manager and 1Password
- Validate: run a test search and a test `conversations.history` call
Out of scope
- Connecting to Slack MCP server (Tickets 9a/9b)
- Slack app Marketplace listing (not needed for single-workspace install)
Acceptance Criteria
- Slack app created with all required scopes
- App installed in Eden’s workspace and approved by an Eden Slack admin
- Token stored in GCP Secret Manager (not in repo)
- Test search via RTS API returns results from at least one public channel
- Test `conversations.history` call returns messages from a test channel
- Confirmed whether Eden is on Pro or Business+ (determines semantic vs keyword search)
Notes / Constraints
- If Eden is on Slack Pro, semantic search is unavailable — keyword search only. Flag this to CSO.
- Rate limits for new non-Marketplace apps may be restrictive (potentially 1 req/min for `conversations.history`). RTS API has separate, higher limits.
- Token type decision: user token (xoxp) sees DMs the user has access to; bot token (xoxb) only sees channels the bot is invited to. Decide with Eden based on scope requirements.
Open Questions
- Does the COO need DM access, or just public + specific private channels?
- Is Eden on Slack Pro or Business+?
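The two validation calls from the acceptance criteria, as a sketch assuming `@slack/web-api`; the channel ID and query are placeholders, and `search.messages` requires a user token (xoxp):

```ts
// Slack app token validation sketch (run once the app is installed in Eden's workspace).
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_TOKEN);

await slack.auth.test(); // throws if the token or install is bad

// conversations.history needs channels:history and a channel ID (placeholder below)
const history = await slack.conversations.history({ channel: "C0123456789", limit: 5 });
console.log(`history ok: ${history.messages?.length} messages`);

// search.messages needs search:read and a *user* token (xoxp)
const search = await slack.search.messages({ query: "onboarding" });
console.log(`search ok: ${search.messages?.total} matches`);
```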
Step 2 — Identity Anonymization Layer
Ticket 4: Design identity mapping schema and token convention
Estimate: 3 pts
Context
- All user identities must be anonymized before data reaches the LLM or the COO. Before building anything, the schema and naming convention need to be decided and documented.
- Spike reference: `spike-command-center-data-access.md` §5, Step 2
Goal
Define and document the mapping schema, token naming convention, storage location, and edge case handling so the implementation (Ticket 5) has a clear spec.
Scope
In scope
- Define the mapping schema: `real_email → anonymized_token` with fields (email, slack_user_id, token, role/department, created_at)
- Define the token naming convention: role-based, human-readable (e.g. `COO_1`, `Provider_A`, `Ops_Tech_1`)
- Document edge case rules: external contacts → `External_N`, shared mailboxes → `SharedMailbox_N`, distribution lists → `DL_N`
- Decide storage location: GCP Secret Manager vs Firestore vs encrypted JSON in Cloud Storage (all within Eden’s GCP project for BAA compliance). Decision made by: AI tech lead + CSO review.
- Document the update policy: how new hires/departures are handled without breaking historical token consistency
- Write the spec as a short design doc in the Eden vault
Out of scope
- Implementing the lookup function (Ticket 5)
- Building the PII redaction function (Ticket 6)
Acceptance Criteria
- Design doc written with schema, naming convention, edge case rules, storage decision, and update policy
- Reviewed and approved by at least one other engineer
- Token naming convention is consistent and unambiguous (no collisions between roles)
Notes / Constraints
- The mapping must be one-way for the analytic layer — the COO should never be able to reverse a token
- Tokens must be stable: same person → same token forever, even if they change roles (use original role at mapping time, or use a stable counter)
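For reference while reviewing the design doc, a minimal sketch of the record shape and token convention this ticket specifies (field names come from the In scope list; nothing here is final until the doc is approved):

```ts
// Mapping record per the schema fields above; storage representation is the Ticket 4 decision.
interface IdentityMapping {
  email: string;           // real identity; stays inside the redaction layer
  slack_user_id?: string;  // "U…" ID from users:read, when known
  token: string;           // stable anonymized token, e.g. "Provider_A" or "Ops_Tech_1"
  role: string;            // role/department captured at mapping time (token never changes with role)
  created_at: string;      // ISO 8601
}

// Edge-case prefixes from the spec: External_N, SharedMailbox_N, DL_N
type EdgeCasePrefix = "External" | "SharedMailbox" | "DL";
```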
Ticket 5: Build resolve_identity lookup function and seed mapping table
Estimate: 4 pts
Context
- With the schema defined (Ticket 4), build the actual lookup function and populate the mapping table from Eden’s user directory.
- Depends on: Ticket 4 (schema), Ticket 2 (DWD for Admin SDK access — can stub with test data until then)
Goal
Implement the identity resolution function and seed the mapping table so the PII middleware can use it.
Scope
In scope
- Build `resolve_identity(email_or_user_id) → anonymized_token` function
- Build `resolve_identity_batch(list_of_ids) → dict` for bulk lookups
- Seed the mapping table from Eden’s Admin SDK user directory (via GWS CLI with DWD)
- Handle unknown identities at runtime: auto-assign a fallback token and log for review
- Store the mapping in the location decided in Ticket 4
- Unit tests: deterministic (same input → same output), handles unknowns, handles batch
Out of scope
- PII redaction function (Ticket 6)
- Admin UI for managing the mapping (future scope)
Acceptance Criteria
- `resolve_identity("known@eden.com")` returns the correct token consistently
- `resolve_identity("unknown@external.com")` returns a fallback token and logs the unknown
- Batch function works for lists of 100+ identities in < 200ms
- Mapping table seeded with Eden’s user directory (or test data if DWD not yet live)
- Unit tests passing
Notes / Constraints
- If DWD (Ticket 2) is not yet approved, seed with a synthetic test dataset so downstream tickets are unblocked
- The function will be called by the PII middleware on every API response — it needs to be fast (in-memory cache with lazy refresh)
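A sketch of the lookup with the in-memory cache and fallback behavior described above; `loadMappings()` stands in for whichever store Ticket 4 picks, and the fallback counter would be persisted in the real implementation:

```ts
// resolve_identity sketch: cached lookup with fallback-token assignment for unknowns.
type Mapping = { email: string; slack_user_id?: string; token: string };
declare function loadMappings(): Promise<Mapping[]>; // backed by the Ticket 4 store

const cache = new Map<string, string>();
let externalCounter = 0; // persist this in the mapping store in the real version

export async function resolveIdentity(idOrEmail: string): Promise<string> {
  if (cache.size === 0) {
    for (const m of await loadMappings()) {
      cache.set(m.email, m.token);
      if (m.slack_user_id) cache.set(m.slack_user_id, m.token);
    }
  }
  const hit = cache.get(idOrEmail);
  if (hit) return hit;

  const fallback = `External_${++externalCounter}`; // unknown identity: assign and log for review
  cache.set(idOrEmail, fallback); // same unknown input resolves to the same token this session
  console.warn(JSON.stringify({ event: "unknown_identity", token: fallback })); // never log the raw id
  return fallback;
}

export async function resolveIdentityBatch(ids: string[]): Promise<Record<string, string>> {
  const out: Record<string, string> = {};
  for (const id of ids) out[id] = await resolveIdentity(id);
  return out;
}
```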
Ticket 6: Build core PII redaction function
Estimate: 4 pts
Context
- Every API response (Slack and GWS) must pass through a redaction layer before entering the LLM context window. This is the enforcement point — no raw PII reaches the LLM or the COO.
- Depends on: Ticket 5 (`resolve_identity` function)
- Spike reference: `spike-command-center-data-access.md` §5, Step 2; Architecture notes
Goal
Build the redact(response) function that strips or replaces all PII in API responses.
Scope
In scope
- Build `redact(raw_response: dict) → dict` that:
  - Replaces all email addresses with anonymized tokens (via `resolve_identity`)
  - Replaces all display names / real names with the matching anonymized token
  - Strips or masks phone numbers (regex-based)
  - Strips other PII patterns (physical addresses, health identifiers) if present
- Works on nested dicts/lists (API responses are deeply nested JSON)
- Called by the `run_gws` tool (post-exec, pre-return) and by the Slack MCP output processor
- Performance: < 50ms per response
Out of scope
- Test fixtures and validation suite (Ticket 7)
- Anonymizing file content or message bodies (metadata only)
Acceptance Criteria
- `redact()` function exists and handles nested JSON structures
- Emails replaced with tokens from `resolve_identity`
- Display names replaced with matching tokens
- Phone numbers masked (e.g. `***-***-1234`) or removed entirely
- Processing time < 50ms on a representative API response
- Function is importable by both the `run_gws` tool and the Slack MCP processor
Notes / Constraints
- Enforcement is at the tool function level, not the prompt level. Even if the LLM hallucinates, raw PII was already stripped.
- Use regex for email/phone patterns; use the mapping table (via `resolve_identity`) for name resolution
- Names are harder than emails — need to match display names from Slack (`users.info`) and GWS (Admin SDK) against the mapping. Consider building a reverse lookup: `display_name → email → token`.
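A compact sketch of the recursive walk; the regexes are illustrative rather than exhaustive, and the display-name replacement via the reverse lookup is omitted for brevity:

```ts
// redact() sketch: recursive traversal replacing emails with tokens and masking phones.
declare function resolveIdentity(id: string): Promise<string>; // Ticket 5

const EMAIL_RE = /[\w.+-]+@[\w-]+(?:\.[\w-]+)+/g;
const PHONE_RE = /\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}/g; // illustrative, not exhaustive

export async function redact(value: unknown): Promise<unknown> {
  if (typeof value === "string") {
    let out = value;
    for (const email of new Set(value.match(EMAIL_RE) ?? [])) {
      out = out.split(email).join(await resolveIdentity(email));
    }
    return out.replace(PHONE_RE, "***-***-****");
  }
  if (Array.isArray(value)) return Promise.all(value.map(redact));
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value)) out[k] = await redact(v);
    return out;
  }
  return value; // numbers, booleans, null pass through untouched
}
```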
Ticket 7: Write PII redaction test fixtures and validation suite
Estimate: 3 pts
Context
- The PII redaction function (Ticket 6) needs a thorough test suite to guarantee 0% PII leak rate. This ticket creates the fixtures and tests.
- Depends on: Ticket 6
Goal
Build a test suite that validates the redaction function against representative API responses from all data sources.
Scope
In scope
- Create ≥ 6 representative API response fixtures:
  - Slack message (with author name, email in profile, phone in text)
  - Slack thread (parent + replies with multiple authors)
  - Drive file metadata (owner email, last editor, shared-with list)
  - Drive Activity record (actor email, target file, action type)
  - Gmail thread metadata (sender, recipients, subject with names)
  - Calendar event (organizer, attendees with emails)
- Each fixture includes known PII that must be redacted
- Unit tests: for each fixture, assert zero real emails, zero real names, zero phone numbers in output
- Negative tests: assert that non-PII data (file titles, channel names, timestamps) is preserved
- Integration test: run a mock end-to-end query and verify the final synthesized answer contains only tokens
Out of scope
- Testing against live API responses (that happens in Ticket 12)
Acceptance Criteria
- ≥ 6 fixture files covering Slack, Drive, Gmail, Calendar, Drive Activity
- Each fixture has a paired “expected output” with all PII replaced
- All unit tests pass with 0% PII leak rate
- Non-PII fields are preserved (no over-redaction)
- Integration test demonstrates end-to-end anonymization
- Tests runnable via `vitest` (TypeScript)
Notes / Constraints
- Fixtures should use realistic but synthetic data (not real Eden data) for the test suite
- This test suite will be run in CI to catch regressions
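A sketch of the per-fixture leak check, assuming the fixture layout and module path shown in the comments (both are placeholders until Ticket 7b fixes conventions):

```ts
// test/pii-redaction.test.ts sketch: fixture-driven 0%-leak assertion (paths are assumptions).
import { describe, expect, it } from "vitest";
import { readFileSync } from "node:fs";
import { redact } from "../src/pii/redact"; // hypothetical module path

const fixtures = ["slack-message", "slack-thread", "drive-file", "drive-activity", "gmail-thread", "calendar-event"];
const PLANTED_PII = ["jane.doe@eden.com", "Jane Doe", "555-0199"]; // synthetic PII seeded in every fixture

describe("PII redaction fixtures", () => {
  it.each(fixtures)("%s contains no PII after redact()", async (name) => {
    const raw = JSON.parse(readFileSync(`test/fixtures/${name}.json`, "utf8"));
    const redacted = JSON.stringify(await redact(raw));
    for (const pii of PLANTED_PII) expect(redacted).not.toContain(pii);
  });

  it("preserves non-PII fields (no over-redaction)", async () => {
    const raw = JSON.parse(readFileSync("test/fixtures/drive-file.json", "utf8"));
    const redacted = (await redact(raw)) as { name?: string };
    expect(redacted.name).toBe(raw.name); // file title must survive
  });
});
```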
Ticket 7b: Define test environment setup
Estimate: 1 pt
Context
- Multiple tickets reference running tests locally and in CI, but the shared dev/test environment is not explicitly defined anywhere. This ticket establishes the baseline test environment so all developers and agents use the same configuration.
- Should be completed early (Step 2 or start of Step 3) so downstream tickets have a clear testing baseline.
Goal
Document and configure the shared test environment: tooling, env vars, mock data sources, and CI runner setup.
Scope
In scope
- Document the local dev environment: Node.js version, required env vars (stubs OK), GWS CLI binary, Slack MCP test endpoint (if applicable)
- Create a `.env.test` (or equivalent) with all required env vars using placeholder/test values
- Configure `vitest` for the project: config file, coverage thresholds, test folder structure
- Document the CI runner setup: what runs on push, what runs on PR, what requires real credentials (integration tests gated by env var flag)
- Write a short `TESTING.md` or section in the project README
Out of scope
- Writing the actual test fixtures (Ticket 7)
- Provisioning real credentials for integration tests (those come from GCP Secret Manager at runtime)
Acceptance Criteria
- `.env.test` exists with all required env vars (placeholder values)
- `vitest` config file in place with test directory convention
- `TESTING.md` (or README section) documents: how to run unit tests, how to run integration tests, which env vars are needed for each
- A developer or cloud agent can clone the repo, run `npm install && npm test`, and get a clean pass on unit tests with zero external dependencies
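One possible baseline config, assuming integration tests live under `test/integration/` and are gated by a `RUN_INTEGRATION` env var (both conventions are this ticket's to decide):

```ts
// vitest.config.ts sketch: unit tests run everywhere; integration tests are opt-in.
import { configDefaults, defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    include: ["test/**/*.test.ts"],
    // keep vitest's defaults (node_modules etc.) and additionally skip integration tests
    exclude: process.env.RUN_INTEGRATION
      ? [...configDefaults.exclude]
      : [...configDefaults.exclude, "test/integration/**"],
    setupFiles: ["test/setup.ts"], // loads .env.test placeholder values
    coverage: { provider: "v8", thresholds: { lines: 80 } }, // threshold is a placeholder
  },
});
```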
Step 3 — Data Access Tools + Agent + M1
Ticket 8: Build run_gws and gws_schema tools, command whitelist, and Dockerfile
Estimate: 5 pts
Context
- Instead of building 6+ individual GWS tool functions, we build two generic tools: `run_gws` executes any allowed `gws` CLI command and returns PII-redacted JSON; `gws_schema` lets the agent discover API method schemas at runtime.
- Depends on: Ticket 1 (GCP project), Ticket 6 (PII redaction)
- Architecture reference: `alt-architecture-cli-native-agent.md`
Goal
Build the two generic GWS tools and the security whitelist so the agent can access any Google Workspace API through the CLI.
Scope
In scope
- Build `run_gws(command, params?)` as a Mastra tool:
  - Accepts a CLI command string (e.g. `drive files list`) and optional params object
  - Spawns `gws <command> --params '<json>' --format json` via `execFile` (not `exec` — no shell injection)
  - Passes service account credentials via env vars (`GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE`, `GOOGLE_WORKSPACE_CLI_IMPERSONATED_USER`)
  - Runs raw JSON output through `redact()` before returning to the agent
  - 10-second timeout on child process
  - Logs every command executed (service, method, params) for audit trail
- Build `gws_schema(method)` as a Mastra tool:
  - Accepts an API method name (e.g. `drive.files.list`)
  - Returns the request/response schema so the agent can learn available parameters
- Build a command whitelist:
  - Allowed services: `drive`, `gmail`, `calendar`, `admin`, `driveactivity:v2`, `sheets`, `docs`
  - Read-only operations only — block `delete`, `update`, `send`, `insert`, `modify`, `trash` subcommands
  - Reject any command not matching the whitelist
- For Gmail: enforce metadata-only extraction by restricting `gws_schema` and `run_gws` to `metadataHeaders` (Subject, From, To, Date) and stripping `payload.body` / `snippet` from responses. The DWD scope is `gmail.readonly` but the whitelist should additionally block response fields containing message bodies to prevent accidental PII exposure beyond what `redact()` handles.
- Unit tests: whitelist enforcement, `execFile` argument construction, PII redaction on sample GWS outputs, Gmail body-stripping
- Install `@googleworkspace/cli` in the project (npm dependency) and verify it runs in the Docker image
- Create a `Dockerfile` for the Next.js + Mastra app:
  - Base image with Node.js 22
  - Install GWS CLI globally (`npm install -g @googleworkspace/cli`)
  - Copy app, install deps, build
  - Expose port and set Cloud Run entrypoint
  - This Dockerfile is used by Ticket 11 for deployment and should be tested locally before deploying
Out of scope
- Testing against all GWS API surfaces (Ticket 12)
- Agent scaffold (Ticket 10)
- Write operations (the Command Center is read-only)
Acceptance Criteria
- `run_gws("drive files list", { pageSize: 5 })` returns PII-redacted JSON
- `run_gws("gmail users messages delete", ...)` is rejected by the whitelist
- `run_gws("rm -rf /", ...)` is rejected (not a valid service)
- `gws_schema("drive.files.list")` returns the API method schema
- All outputs pass through `redact()` — no raw emails or names in returned JSON
- Gmail responses contain only metadata headers — `payload.body` and `snippet` fields are stripped before returning to the agent
- Command audit log captures every execution
- Child process timeout fires after 10 seconds
- Dockerfile builds and runs locally: `docker build -t eden-cc . && docker run -p 3000:3000 eden-cc` starts the app with GWS CLI available at `/usr/local/bin/gws`
Notes / Constraints
- Use `execFile` (array of args), never `exec` (shell string) — this prevents command injection
- The GWS CLI must be installed globally in the Docker image (`npm install -g @googleworkspace/cli`)
- Service account credentials come from GCP Secret Manager, mounted at runtime
- Gmail body-stripping is defense-in-depth: even if the DWD scope only grants `gmail.readonly`, stripping body content ensures no message text reaches the LLM regardless of scope changes
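A sketch of the whitelist and `execFile` core (the Mastra `createTool` wrapper and the Gmail body-stripping are omitted); the service and verb lists mirror the scope above:

```ts
// run_gws core sketch: whitelist check, execFile spawn, redact() before returning.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

declare function redact(v: unknown): Promise<unknown>; // Ticket 6 enforcement point

const execFileAsync = promisify(execFile);
const ALLOWED_SERVICES = new Set(["drive", "gmail", "calendar", "admin", "driveactivity:v2", "sheets", "docs"]);
const BLOCKED_VERBS = new Set(["delete", "update", "send", "insert", "modify", "trash"]);

export async function runGws(command: string, params?: Record<string, unknown>): Promise<unknown> {
  const parts = command.trim().split(/\s+/);
  if (!ALLOWED_SERVICES.has(parts[0])) throw new Error(`service not whitelisted: ${parts[0]}`);
  if (parts.some((p) => BLOCKED_VERBS.has(p.toLowerCase()))) throw new Error(`write operation blocked: ${command}`);

  console.info(JSON.stringify({ event: "gws_exec", command, params })); // audit trail

  const args = [...parts, "--format", "json"];
  if (params) args.push("--params", JSON.stringify(params));

  // execFile takes an argument array (no shell), preventing injection; GOOGLE_WORKSPACE_CLI_*
  // credentials arrive via env vars mounted from Secret Manager.
  const { stdout } = await execFileAsync("gws", args, { timeout: 10_000 });
  return redact(JSON.parse(stdout));
}
```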
Ticket 9a: Spike — Slack MCP server feasibility and auth model
Estimate: 2 pts
Context
- Slack MCP server (`mcp.slack.com`) is a hosted service whose auth model, rate limits, and response format are not fully documented. Before committing to the MCP path, we need to validate feasibility and answer open questions.
- Depends on: Ticket 3 (Slack app token)
- This spike gates Ticket 9b (the actual integration).
- Reference: Slack MCP Server docs
Goal
Answer all open questions about the Slack MCP server and produce a written Go/No-Go recommendation: proceed with MCP, proceed with caveats, or fall back to custom Slack tools.
Scope
In scope
- Connect to `mcp.slack.com` with the Slack app token from Ticket 3
- Discover available tools and document their names, parameters, and response shapes
- Test rate limits: how many requests/min before throttling?
- Test response format: is it raw JSON, LLM-friendly text, or something else? How does this affect PII redaction?
- Test semantic search availability: does it work on Eden’s Slack plan?
- Write a spike doc in the Eden vault with findings and a Go/No-Go recommendation
Out of scope
- Building the Mastra processor or connecting tools to the agent (Ticket 9b)
- Building custom Slack tools (fallback, only if spike says No-Go)
Acceptance Criteria
- Spike doc written with: auth model, available tools with response format examples, rate limit findings, semantic search availability
- Go/No-Go recommendation with rationale
- If No-Go: fallback plan documented with effort estimate for custom Slack tools (~14 pts)
- Reviewed by AI tech lead before proceeding to Ticket 9b
Notes / Constraints
- Keep this timeboxed to 2 pts (2 hours). If the MCP server is down or unresponsive during the spike, that’s a finding — document it and recommend fallback.
Ticket 9b: Connect Mastra to Slack MCP server and wire PII processor
Estimate: 2 pts
Context
- Spike (Ticket 9a) confirmed the Slack MCP server is viable. Now build the actual connection and PII processor.
- Depends on: Ticket 9a (spike — Go decision), Ticket 3 (Slack app token), Ticket 6 (PII redaction function)
- If Ticket 9a recommends No-Go, skip this ticket and create custom Slack tool tickets from the fallback plan instead.
Goal
Connect the Mastra agent to Slack’s MCP server and validate all Slack query types work with PII redaction.
Scope
In scope
- Configure Mastra as an MCP client connecting to `mcp.slack.com`
- Validate available tools: search messages, read channel history, read thread replies, list channels, user profiles
- Build a Mastra processor that intercepts all Slack MCP tool outputs and runs `redact()` before they enter the LLM context
- Document which Slack MCP tools map to the COO’s expected queries
Out of scope
- Building custom Slack tool functions (fallback option if spike recommends it)
- Caching layer (Slack MCP server handles its own caching)
Acceptance Criteria
- Mastra connects to `mcp.slack.com` and discovers available tools
- Thread reading returns parent + replies
- All outputs pass through PII redaction processor — anonymized tokens only
- Connection handles errors gracefully (timeout, auth failure, server unavailable)
- Documented: tool name → query type mapping for the system prompt
Notes / Constraints
- Slack MCP server is hosted externally — we don’t control uptime. If it proves unreliable during development, fall back to building 3 custom Slack tool functions (~14 pts additional).
- Semantic search requires Business+ plan. If Eden is on Pro, keyword search only.
- The PII redaction processor must handle Slack MCP’s response format, which may differ from raw Slack API JSON (Slack MCP returns LLM-friendly text, not raw JSON)
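A sketch of the client wiring, assuming `@mastra/mcp`'s `MCPClient`; the exact `mcp.slack.com` endpoint path and auth header shape are Ticket 9a findings, so both are placeholders here, and the inline wrapping below could equally live in a Mastra output processor:

```ts
// Slack MCP connection sketch with redaction wrapped around every discovered tool.
import { MCPClient } from "@mastra/mcp";

declare function redact(v: unknown): Promise<unknown>; // Ticket 6

const mcp = new MCPClient({
  servers: {
    slack: {
      url: new URL("https://mcp.slack.com/mcp"), // placeholder path pending the 9a spike
      requestInit: { headers: { Authorization: `Bearer ${process.env.SLACK_MCP_TOKEN}` } },
    },
  },
});

const slackTools = await mcp.getTools();

// Wrap each tool so its output is redacted before entering the LLM context.
export const redactedSlackTools = Object.fromEntries(
  Object.entries(slackTools).map(([name, tool]) => [
    name,
    { ...tool, execute: async (ctx: unknown) => redact(await (tool as any).execute(ctx)) },
  ]),
);
```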
Ticket 10a: Build Mastra agent scaffold, wire tools, and configure observability
Estimate: 3 pts
Context
- The generic GWS tools (Ticket 8) and Slack MCP connection (Ticket 9b) need to be wired into a Mastra agent. This ticket focuses on the agent scaffold, tool registration, LLM config, and observability.
- Depends on: Ticket 8 (`run_gws` + `gws_schema` tools), Ticket 9b (Slack MCP connection)
Goal
Build the Mastra agent scaffold with all tools registered, LLM configured, and observability enabled — ready for the system prompt (Ticket 10b).
Scope
In scope
- Build a TypeScript Mastra agent (`@mastra/core`) with:
  - `run_gws` and `gws_schema` registered as tools
  - Slack MCP tools connected as an MCP tool provider
  - `pii_redact` processor on all tool outputs
- Configure Gemini 2.5 Flash as the LLM via Vertex AI API (BAA-covered, Eden’s GCP project)
- Configure Mastra’s built-in observability and telemetry:
  - Enable OpenTelemetry tracing (`telemetry: { serviceName: "eden-command-center", enabled: true }`)
  - Auto-instrumented by Mastra: LLM calls (token usage, latency, prompt/completion), tool calls (which tool, duration, success/failure), agent decision paths
  - Log every agent run: session ID, query text (anonymized), tools called, total tokens, total latency, final response status
  - Export to OTLP endpoint (local Jaeger for dev; Cloud Trace or SigNoz for production — wired in Ticket 14)
  - Add custom spans for PII redaction (track redaction count, any fallback-token assignments)
- Stub system prompt (enough for basic tool routing — full prompt in Ticket 10b)
- Verify agent can call `run_gws` and Slack MCP tools end-to-end (manually or via a simple test script)
Out of scope
- Detailed system prompt with GWS command reference and multi-step reasoning (Ticket 10b)
- Chat interface (Ticket 11)
- Cloud Run deployment (Ticket 11)
Acceptance Criteria
- Mastra agent instantiable with `run_gws`, `gws_schema`, and Slack MCP tools
- LLM calls routed to Vertex AI Gemini API (not consumer Gemini API)
- Telemetry enabled: LLM calls, tool calls, and agent runs emit OTLP traces with token counts and latency visible locally (Jaeger or console)
- Custom PII redaction spans emitted
- Runs locally via `npx tsx` or Next.js dev server
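A scaffold sketch assuming the AI SDK Vertex provider (`@ai-sdk/google-vertex`) and the tool objects from Tickets 8/9b; the telemetry block is the one quoted in scope, while the project ID, region, and import paths are placeholders:

```ts
// Agent scaffold sketch: tools + Vertex AI Gemini + telemetry.
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { createVertex } from "@ai-sdk/google-vertex";
import { runGwsTool, gwsSchemaTool } from "./tools/gws"; // Ticket 8 (hypothetical paths)
import { redactedSlackTools } from "./tools/slack-mcp";  // Ticket 9b

const vertex = createVertex({ project: "eden-cc", location: "us-central1" }); // placeholders

export const commandCenter = new Agent({
  name: "eden-command-center",
  instructions: "Stub prompt: route Workspace questions to run_gws/gws_schema, Slack questions to MCP tools.", // replaced in Ticket 10b
  model: vertex("gemini-2.5-flash"),
  tools: { run_gws: runGwsTool, gws_schema: gwsSchemaTool, ...redactedSlackTools },
});

export const mastra = new Mastra({
  agents: { commandCenter },
  telemetry: { serviceName: "eden-command-center", enabled: true }, // OTLP export configured per env
});
```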
Ticket 10b: Author agent system prompt (GWS command reference + Slack routing + reasoning)
Estimate: 3 pts
Context
- The agent scaffold (Ticket 10a) is wired but uses a stub system prompt. The system prompt is critical — it replaces typed tool schemas and tells the agent how to construct `gws` commands, route Slack queries, and chain multi-step reasoning.
- Depends on: Ticket 10a (agent scaffold)
Goal
Write and iterate on the production system prompt so the agent correctly routes and constructs tool calls for all query types.
Scope
In scope
- Write the agent system prompt with:
  - GWS command reference: for each query type, the exact `gws` command and params to use (Drive search, Drive Activity, comments, Gmail, Calendar, Admin SDK)
  - Slack tool routing: which MCP tool to use for search vs thread vs stats queries
  - Anonymization rules: always use tokens, never output raw identities
  - Multi-step reasoning: how to chain tool calls (search → read detail → synthesize)
- Test locally against Slack (M1 focus): agent correctly routes 5+ Slack question types
- Test GWS command construction with mock or Brainforge workspace (pre-M2 validation)
- Iterate on prompt: at least 3 revision rounds against the test queries
Out of scope
- Chat interface (Ticket 11)
- Full GWS surface validation (Ticket 12 — M2 scope)
- Production OTLP collector setup and dashboards (Ticket 14)
Acceptance Criteria
- Agent correctly uses Slack MCP for search/thread/channel queries
- Agent correctly constructs `run_gws` commands for Drive/Gmail/Calendar queries
- Agent uses `gws_schema` when it encounters an unfamiliar API method
- System prompt includes complete GWS command reference with examples
- Tested with ≥ 5 representative Slack query types and ≥ 3 GWS query types
Notes / Constraints
- The system prompt is the most iteration-heavy artifact in this project. Budget extra time for prompt tuning.
- The agent has more freedom (and more rope) than typed tools. It can construct any `gws` command the whitelist allows, which is powerful but means more integration testing.
- Mastra auto-instruments LLM and tool calls via OpenTelemetry — traces from Ticket 10a will help debug prompt issues.
Ticket 11: Build chat interface and deploy to Cloud Run for M1 demo
Estimate: 4 pts
Context
- M1 (Apr 6) requires Danny to be able to chat with the agent. This ticket builds the minimal interface and deploys the agent so it’s accessible.
- Depends on: Ticket 10b (agent system prompt), Ticket 8 (Dockerfile)
Goal
Ship a working chat interface connected to the Mastra agent so Danny can demo it by April 6.
Scope
In scope
- Build the chat interface as a Next.js page within the same app that hosts the Mastra agent:
  - Simple chat UI with shadcn/ui components (input, message list, streaming indicator)
  - Next.js API route calls the Mastra agent directly (no separate backend)
  - Streaming responses via Server-Sent Events or Vercel AI SDK `useChat`
- Deploy the Next.js + Mastra app to Cloud Run in Eden’s GCP project (BAA-covered)
- Configure GCP Secret Manager for service account key, Slack token, identity mapping
- Set up application-level logging:
  - Structured JSON logging (e.g. `pino`) for all API routes and server-side code
  - Cloud Run forwards container stdout/stderr to Cloud Logging automatically — structured JSON ensures logs are parseable and filterable in the GCP console
  - Correlate app logs with OTEL trace IDs from Ticket 10a (same request → same trace across HTTP log entry and agent trace)
  - Log levels: info (request/response), warn (rate limits, retries), error (failures with stack traces)
- Add a `/health` endpoint that reports app status, GWS CLI availability, and Slack MCP connection status
Out of scope
- Full dashboards and project management views (later milestones)
- Google OAuth login (single-user demo, no auth needed yet)
- Polished design (functional is sufficient for M1)
- Alerting rules and log-based dashboards (Ticket 14)
Acceptance Criteria
- Danny can access the chat interface via a Cloud Run URL
- Typed questions are sent to the Mastra agent and responses stream back
- All responses contain only anonymized tokens — no real PII
- Tested with the 5 representative queries from the tech plan:
  - “What’s the most active channel this week?”
  - “Find discussions about [topic]”
  - “What happened in #[channel] yesterday?”
  - “Who’s been most active in Slack this week?”
  - “Show me the thread about [topic]”
- Deployed on Cloud Run in Eden’s GCP project (not Vercel/Railway)
- API routes emit structured JSON logs with: timestamp, method, path, status code, latency, trace ID
- Errors log full stack traces with error classification
- `/health` endpoint returns 200 with component status (agent, GWS CLI, Slack MCP)
- Logs visible in GCP Cloud Logging filtered by service name
Notes / Constraints
- This is the M1 gate — must be demonstrable by April 6
- Don’t over-engineer the UI — the full Custom UI dashboards come in later milestones
- All compute and LLM calls must stay within Eden’s GCP project (BAA requirement)
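A sketch of the health endpoint under the Next.js App Router; whether the GWS CLI exposes a `--version` flag is an assumption to verify during Ticket 8:

```ts
// app/health/route.ts sketch: component checks for agent, GWS CLI, and Slack MCP reachability.
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { NextResponse } from "next/server";

const execFileAsync = promisify(execFile);

export async function GET() {
  const checks: Record<string, boolean> = { agent: true }; // agent is in-process; reaching here means it loaded

  try {
    await execFileAsync("gws", ["--version"], { timeout: 3_000 }); // assumes the CLI supports --version
    checks.gws_cli = true;
  } catch {
    checks.gws_cli = false;
  }

  try {
    const res = await fetch("https://mcp.slack.com", { method: "HEAD" });
    checks.slack_mcp = res.ok;
  } catch {
    checks.slack_mcp = false;
  }

  const healthy = Object.values(checks).every(Boolean);
  return NextResponse.json({ status: healthy ? "ok" : "degraded", checks }, { status: healthy ? 200 : 503 });
}
```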
Step 4 — GWS Surfaces Validation
Ticket 12: GWS command reference + integration tests across all GWS surfaces
Estimate: 5 pts
Context
- The agent already has `run_gws` (Ticket 8) — what’s needed is to validate it works correctly across all GWS API surfaces with real Eden data (once DWD is approved) and build the definitive command reference for the system prompt.
- Depends on: Ticket 8 (`run_gws` tools), Ticket 2 (DWD approved)
Goal
Validate run_gws against every GWS API surface the COO needs, build the definitive command reference for the system prompt, and ensure PII redaction handles all response shapes.
Scope
In scope
- Test `run_gws` against each GWS API via DWD (with real Eden data once DWD is approved):
  - Drive search — `gws drive files list` with query, folder, MIME type filters
  - Drive Activity — `gws driveactivity:v2 activity query` for folder/file audit trails
  - File comments — `gws drive comments list` for comment metadata and replies
  - Gmail — `gws gmail users messages list` + `gws gmail users messages get` for thread metadata
  - Calendar — `gws calendar events list` for event metadata, attendees, scheduling
  - Admin SDK — `gws admin directory users list` for org directory
- For each surface:
  - Document the exact command + params that produce the best results
  - Validate PII redaction handles the response shape (emails, names, phone numbers stripped)
  - Test edge cases: empty results, large result sets (pagination via `--page-all`), rate limits
  - Add the working command to the agent’s system prompt GWS command reference
- Update PII redaction fixtures (Ticket 7) with real response shapes from each GWS API
- Update the agent system prompt with the final, tested GWS command reference
- Integration tests: agent correctly constructs and executes the right `run_gws` command for 10+ GWS query types
Out of scope
- Cross-platform orchestration (Ticket 13)
Acceptance Criteria
- `run_gws` successfully queries all 6 GWS API surfaces
- PII redaction validated against real response shapes from each surface
- Agent system prompt includes tested command reference with examples for each API
- Integration tests pass for:
  - “Search for documents about the rebrand” → `run_gws("drive files list", { q: "rebrand" })`
  - “Who edited the Q1 budget spreadsheet?” → `run_gws("driveactivity:v2 activity query", ...)`
  - “Show me comments on the strategy doc” → `run_gws("drive comments list", { fileId: "..." })`
  - “What emails came in about the vendor contract?” → `run_gws("gmail users messages list", { q: "vendor contract" })`
  - “What meetings does the ops team have this week?” → `run_gws("calendar events list", ...)`
  - “Who’s in the engineering department?” → `run_gws("admin directory users list", { query: "engineering" })`
- Existing Slack queries still work correctly
- Fixture file updated with ≥ 6 real GWS response shapes
M2 deliverable (Apr 13): Danny can query the agent for Google Workspace activity — file movement, email thread patterns, calendar load, Drive comments — all with anonymized identities. Slack (M1) continues to work. Same chat UI, same Cloud Run deployment.
Step 5 — Cross-Platform Orchestration + Deploy
Ticket 13: Build cross-platform orchestration and project registry
Estimate: 5 pts
Context
- The COO’s highest-value queries span both Slack and Google Workspace: “What’s the status of Project X?” requires checking Slack channels, Drive folders, Calendar meetings, and Gmail threads. The agent needs orchestration logic and a project registry to resolve these.
- Depends on: Ticket 10b (agent system prompt), Ticket 12 (GWS surfaces validated)
- Spike reference: `spike-command-center-data-access.md` §5, Step 5
Goal
Build the cross-platform orchestration layer and project registry so the agent can answer unified questions across all data sources.
Scope
In scope
- Build orchestration logic in the Mastra agent that:
  - Plans which tools to call for cross-source queries (multi-step reasoning)
  - Executes Slack + GWS queries in parallel where possible
  - Applies the cross-platform query pattern: Slack → Drive → Activity → Comments → Calendar → Gmail → Synthesize
  - All results pass through PII redaction before synthesis
- Build a project registry: lightweight mapping of project names → Slack channels, Drive folder IDs, and anonymized key participants
  - Storage may differ from Ticket 4 (identity mapping): the project registry may need a user-facing interface for the COO or ops team to add/edit projects, or it may be agent-managed (auto-discovered from Drive/Slack). Decision deferred — evaluate during implementation whether Firestore (Eden GCP), Google Sheet via GWS CLI, or a simple admin page is best.
  - Agent uses this to resolve ambiguous queries (“the rebrand” → folder ID + rebrand channel)
- Update agent system prompt for cross-platform reasoning
- Test with 5+ cross-platform query types
Out of scope
- Dashboard views (later milestones)
- Scheduled digests (optional future feature)
Acceptance Criteria
- Agent correctly answers cross-platform queries:
  - “What’s the status of Project X across Slack and Drive?”
  - “What happened this week?” (synthesizes all sources)
  - “Who’s been most active on the rebrand?” (Slack + Drive Activity)
  - “Are there any meetings about the vendor contract and related Slack discussions?”
  - “Show me all activity on Project Y” (Slack messages + Drive edits + Calendar events + Gmail threads)
- Project registry resolvable by name or alias
- Parallel execution: multi-source queries complete in < 30 seconds
- All synthesized answers use anonymized tokens only
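For the registry shape and name/alias resolution described above, a minimal sketch (field names are illustrative; the storage backend is the deferred decision in this ticket):

```ts
// Project registry sketch: lookup by canonical name or alias substring.
export interface ProjectEntry {
  name: string;              // canonical name, e.g. "Rebrand"
  aliases: string[];         // e.g. ["the rebrand", "brand refresh"]
  slackChannelIds: string[];
  driveFolderIds: string[];
  participants: string[];    // anonymized tokens only, never raw identities
}

export function resolveProject(query: string, registry: ProjectEntry[]): ProjectEntry | undefined {
  const q = query.toLowerCase();
  return registry.find(
    (p) => q.includes(p.name.toLowerCase()) || p.aliases.some((a) => q.includes(a.toLowerCase())),
  );
}
```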
Ticket 14: End-to-end validation and production deploy
Estimate: 4 pts
Context
- M3 (Apr 20) is the full Command Center delivery gate. All tools, orchestration, and the chat UI must work end-to-end with real Eden data.
- Depends on: Ticket 13 (cross-platform orchestration)
- Spike reference: `spike-command-center-data-access.md` §5, Step 8
Goal
Validate the full system against real Eden data and cut the production deployment.
Scope
In scope
- Run Danny through 10–15 test queries spanning:
  - Single-source Slack (“what’s the most active channel this week?”)
  - Single-source GWS (“who’s been editing the rebrand docs?”)
  - Cross-platform (“what’s the status of Project X across Slack and Drive?”)
  - Anonymization validation (“show me team activity” — confirm no real names appear)
- Fix any issues found during validation
- Finalize Cloud Run deployment configuration (min instances, memory, secrets)
- Set GCP budget alerts and Vertex AI quotas
- Document the deployment: Cloud Run service URL, GCP project, secret locations, runbook for restarts
Out of scope
- Dashboard views and project management admin (later milestones)
- Scheduled digests
Acceptance Criteria
- All 10–15 test queries return useful, anonymized answers
- 0% PII leak rate confirmed across all test queries
- Cloud Run service running in Eden’s GCP project with production config
- GCP budget alerts set
- Deployment runbook documented in Eden vault
- Danny signs off on M3
M3 deliverable (Apr 20): Full Command Center — Danny opens the chat UI, asks questions about anything happening across Eden’s entire Google Workspace and entire Slack. All identities anonymized. The agent replaces (and exceeds) what Gemini provided natively.
Ticket dependency graph
Ticket 1: Create GCP project (2 pts) ──────────┐
├──→ Ticket 4: Mapping schema (3 pts)
Ticket 2: Request DWD approval (2 pts) ─────────┘ │
↳ Escalation if not approved by end of Week 2 │
↳ Fallback: user OAuth (COO's own data) │
▼
Ticket 5: resolve_identity (4 pts)
│
Ticket 3: Create Slack app (3 pts) ──────┬────────────┐ │
│ │ ▼
│ ├──→ Ticket 6: PII redaction core (4 pts)
│ │ │
│ │ Ticket 7: PII test suite (3 pts)
│ │ │
Ticket 7b: Test env setup (1 pt) ────┼────────────┼─── (parallel, no deps)
│ │ │
│ │ ▼
│ └── Ticket 8: run_gws + gws_schema + Dockerfile (5 pts)
│
└── Ticket 9a: Spike — Slack MCP (2 pts)
│
[Go decision]
│
Ticket 9b: Slack MCP + PII processor (2 pts)
│
▼
Ticket 10a: Mastra agent scaffold (3 pts) ←── Ticket 8
│
▼
Ticket 10b: Agent system prompt (3 pts)
│
▼
Ticket 11: Chat UI + Cloud Run (4 pts) ←── Ticket 8 (Dockerfile)
↑ M1 gate (Apr 6)
│
┌──────────────────────────────────────────────────────┘
│ (DWD approved — Ticket 2 — unlocks GWS validation)
│
└── Ticket 12: GWS surfaces validation (5 pts)
↑ M2 gate (Apr 13)
│
▼
Ticket 13: Cross-platform orchestration (5 pts)
│
▼
Ticket 14: E2E validation + prod deploy (4 pts) [HUMAN]
↑ M3 gate (Apr 20)
Critical paths
M1 (27 pts): Ticket 1 → 4 → 5 → 6 → 8 → 10a → 10b → 11. Parallel: Tickets 3 → 9a → 9b (merge at 10a), Ticket 7, Ticket 7b.
M2 (+5 pts): Ticket 12 starts once DWD (Ticket 2) is approved and Ticket 8 is done. One ticket between M1 and M2.
M3 (+9 pts): Ticket 13 → 14 (human). Cross-platform orchestration then final validation.
Total: 55 pts across 17 tickets.
Fallback: If Slack MCP is unreliable
If the Ticket 9a spike recommends No-Go for Slack MCP, drop Ticket 9b and replace with 4 custom Slack tool functions:
| Drop | Add | Net change |
|---|---|---|
| Ticket 9b (2 pts) | search_slack (5 pts) + read_slack_thread (3 pts) + get_slack_channel_stats (4 pts) + caching/rate-limits (2 pts) | +12 pts |
Fallback total: 67 pts / 20 tickets (still 8 pts less than the original Plan A at 75 pts).