Linear Tickets — Data Access + Chat Integration (Steps 1–5, M1–M3)
Draft tickets for review before committing to Linear.

- Project: Eden — Data Access + Chat Integration
- Linear team: EDE3
- Milestones: M1 (Apr 6) Slack, M2 (Apr 13) GWS, M3 (Apr 20) Cross-platform + full Command Center
- Source: `knowledge/clients/eden/resources/spike-command-center-data-access.md` §5, Steps 1–5
- Estimation: 1 point = 1 hour
- Stack: Mastra (TypeScript) + Next.js 15 on Cloud Run (Eden GCP, BAA-covered), Vertex AI Gemini API for LLM
Effort summary
| # | Ticket | Pts | Step | Milestone |
|---|---|---|---|---|
| 1 | Create GCP project and service account | 2 | 1 | — |
| 2 | Request DWD approval from Eden IT | 2 | 1 | — |
| 3 | Create Slack app for Eden workspace | 3 | 1 | — |
| 4a | Design identity mapping schema and token convention | 3 | 2 | — |
| 4b | Build resolve_identity lookup function + seed mapping | 4 | 2 | — |
| 5a | Build core PII redaction function (email, name, phone) | 4 | 2 | — |
| 5b | Write PII redaction test fixtures and validation suite | 3 | 2 | — |
| 6 | Build search_slack tool function | 5 | 3 | M1 |
| 7 | Build read_slack_thread tool function | 3 | 3 | M1 |
| 8a | Build get_slack_channel_stats — history fetching + aggregation | 4 | 3 | M1 |
| 8b | Add caching layer and rate-limit handling for channel stats | 2 | 3 | M1 |
| 9a | Build Mastra agent scaffold and register Slack tools | 4 | 3 | M1 |
| 9b | Build minimal chat interface and deploy to Cloud Run for M1 demo | 4 | 3 | M1 |
| 10 | Build search_drive tool function | 4 | 4 | M2 |
| 11 | Build get_drive_activity tool function | 4 | 4 | M2 |
| 12 | Build get_file_comments tool function | 3 | 4 | M2 |
| 13 | Build search_gmail tool function | 4 | 4 | M2 |
| 14 | Build search_calendar tool function | 3 | 4 | M2 |
| 15 | Build get_user_directory tool function | 2 | 4 | M2 |
| 16 | Register GWS tools in Mastra agent | 3 | 4 | M2 |
| 17 | Build cross-platform orchestration and project registry | 5 | 5 | M3 |
| 18 | End-to-end validation and production deploy | 4 | 5 | M3 |
| | Total | 75 | | |
Step 1 — Source Authentication
Ticket 1: Create GCP project and service account for Eden Command Center
Estimate: 2 pts
Context
- The Command Center agent needs a GCP service account with Domain-Wide Delegation (DWD) to query data across Eden’s entire Google Workspace on behalf of any user.
- Spike reference: spike-command-center-data-access.md §5, Step 1
Goal
Provision the GCP project and service account that all downstream agent tools will authenticate with.
Scope
In scope
- Create a dedicated GCP project for Eden Command Center
- Create a service account within that project
- Enable required Google Workspace APIs: Gmail, Drive, Calendar, Drive Activity, Admin SDK
- Document the service account email and project ID
Out of scope
- DWD grant approval (separate ticket — requires Eden IT)
- Slack app creation (separate ticket)
- Cloud Run deployment setup (later step)
Acceptance Criteria
- GCP project exists and is accessible by Brainforge team
- Service account created with a descriptive name (e.g. `command-center-agent@eden-cc.iam.gserviceaccount.com`)
- Gmail, Drive, Calendar, Drive Activity, and Admin SDK APIs enabled on the project
- Service account key generated and stored in GCP Secret Manager (not in repo)
- Project ID and service account email documented in the Eden vault
Notes / Constraints
- No secrets in the repo — key must live in GCP Secret Manager
- Use a single service account for all Workspace APIs to keep DWD scoping simple
Ticket 2: Request Domain-Wide Delegation approval from Eden IT
Estimate: 2 pts (our prep work; Eden IT approval wait time is external)
Context
- DWD allows the service account to impersonate any user in Eden’s Google Workspace domain, which is required for cross-org visibility.
- This is a blocker — without DWD approval, the agent can only see data the service account owns (nothing).
- Spike reference: spike-command-center-data-access.md §5, Step 1; Risks section
Goal
Get Eden IT to approve the DWD grant with scoped OAuth scopes so the agent can query Workspace data across the org.
Scope
In scope
- Prepare a scoped list of OAuth scopes with justification for Eden IT:
  - `gmail.readonly` — thread metadata only (no message bodies)
  - `drive.metadata.readonly` — file metadata only (no file content)
  - `calendar.readonly` — event metadata only (no meeting notes)
  - `https://www.googleapis.com/auth/drive.activity.readonly` — Drive Activity audit trail
  - `admin.directory.user.readonly` — user directory for identity resolution
- Send the request to Eden IT with the service account client ID
- Offer a walkthrough call if needed
- Confirm the grant is active by testing impersonation on 2–3 test users
Out of scope
- Scopes beyond metadata (no `gmail.modify`, no `drive.readonly` for content)
- Any data extraction before DWD is approved
Acceptance Criteria
- Eden IT has received the DWD request with the exact scope list and justification
- DWD grant is approved and active in Eden’s Google Workspace Admin Console
- Validated: service account can impersonate at least 2 Eden users and list their Drive files / Gmail threads
- If declined or delayed: escalation path documented and communicated to CSO
Notes / Constraints
- Start this in Week 1 — it’s the critical path blocker for all GWS data access
- The scope list is deliberately restrictive (metadata-only) to make approval easier
- Eden IT may require a security review or data processing agreement
Open Questions
- Who is the Eden IT contact for Workspace admin approvals?
- Does Eden require a formal data processing agreement before granting DWD?
Ticket 3: Create Slack app for Eden workspace
Estimate: 3 pts
Context
- The Command Center agent needs a Slack app installed in Eden’s workspace to search messages and read channel history via the RTS API and conversations API.
- Spike reference: spike-command-center-data-access.md §5, Step 1; Part 2 (Slack options)
Goal
Create and install a Slack app in Eden’s workspace with the scopes needed for the agent’s Slack tool functions.
Scope
In scope
- Create a Slack app in Eden’s workspace (or Brainforge-owned, installed to Eden)
- Request the following OAuth scopes:
  - `search:read` — Slack RTS API (semantic/keyword search)
  - `channels:history` — read message history in public channels
  - `channels:read` — list public channels
  - `groups:history` — read message history in private channels
  - `groups:read` — list private channels the app is in
  - `users:read` — resolve user IDs to display names (for anonymization mapping)
- Generate a user token (`xoxp`) or bot token (`xoxb`) depending on required access
- Store the token in GCP Secret Manager
- Validate: run a test search and a test `conversations.history` call
Out of scope
- Building the Slack tool functions (separate ticket)
- Slack app Marketplace listing (not needed for single-workspace install)
Acceptance Criteria
- Slack app created with all required scopes
- App installed in Eden’s workspace and approved by an Eden Slack admin
- Token stored in GCP Secret Manager (not in repo)
- Test search via RTS API returns results from at least one public channel
- Test `conversations.history` call returns messages from a test channel
- Confirmed whether Eden is on Pro or Business+ (determines semantic vs keyword search)
Notes / Constraints
- If Eden is on Slack Pro, semantic search is unavailable — keyword search only. Flag this to CSO.
- Rate limits for new non-Marketplace apps may be restrictive (potentially 1 req/min for `conversations.history`). The RTS API has separate, higher limits.
- Token type decision: a user token (`xoxp`) sees DMs the user has access to; a bot token (`xoxb`) only sees channels the bot is invited to. Decide with Eden based on scope requirements.
Open Questions
- Does the COO need DM access, or just public + specific private channels?
- Is Eden on Slack Pro or Business+?
Step 2 — Identity Anonymization Layer
Ticket 4a: Design identity mapping schema and token convention
Estimate: 3 pts
Context
- All user identities must be anonymized before data reaches the LLM or the COO. Before building anything, the schema and naming convention need to be decided and documented.
- Spike reference: spike-command-center-data-access.md §5, Step 2
Goal
Define and document the mapping schema, token naming convention, storage location, and edge case handling so the implementation (4b) has a clear spec.
Scope
In scope
- Define the mapping schema: `real_email → anonymized_token` with fields (email, slack_user_id, token, role/department, created_at)
- Define the token naming convention: role-based, human-readable (e.g. `COO_1`, `Provider_A`, `Ops_Tech_1`)
- Document edge case rules: external contacts → `External_N`, shared mailboxes → `SharedMailbox_N`, distribution lists → `DL_N`
- Decide storage location: GCP Secret Manager vs Firestore vs encrypted JSON in Cloud Storage (all within Eden’s GCP project for BAA compliance)
- Document the update policy: how new hires/departures are handled without breaking historical token consistency
- Write the spec as a short design doc in the Eden vault
Out of scope
- Implementing the lookup function (Ticket 4b)
- Building the PII redaction middleware (Ticket 5a)
Acceptance Criteria
- Design doc written with schema, naming convention, edge case rules, storage decision, and update policy
- Reviewed and approved by at least one other engineer
- Token naming convention is consistent and unambiguous (no collisions between roles)
Notes / Constraints
- The mapping must be one-way for the analytic layer — the COO should never be able to reverse a token
- Tokens must be stable: same person → same token forever, even if they change roles (use original role at mapping time, or use a stable counter)
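The stability rule above can be sketched as a per-role counter that freezes a token the first time an identity is seen. This is a minimal illustration, not the final schema — the interface fields and `makeTokenFactory` name are assumptions to be settled in the design doc:

```typescript
// Hypothetical shape for one mapping record (fields from the ticket scope).
interface IdentityMapping {
  email: string;
  slack_user_id: string;
  token: string;      // e.g. "Provider_A", "Ops_Tech_1"
  role: string;       // role captured at mapping time — tokens stay stable on role change
  created_at: string; // ISO 8601
}

// Deterministic token assignment: per-role counter, frozen at first sight.
function makeTokenFactory() {
  const counters = new Map<string, number>(); // role → next counter
  const assigned = new Map<string, string>(); // email → token (stability guarantee)
  return (email: string, role: string): string => {
    const existing = assigned.get(email);
    if (existing) return existing; // same person → same token forever
    const n = (counters.get(role) ?? 0) + 1;
    counters.set(role, n);
    const token = `${role}_${n}`;
    assigned.set(email, token);
    return token;
  };
}
```

Because the counter is keyed on the role at first assignment, a later role change cannot produce a collision or a new token for the same person.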
Ticket 4b: Build resolve_identity lookup function and seed mapping table
Estimate: 4 pts
Context
- With the schema defined (4a), build the actual lookup function and populate the mapping table from Eden’s user directory.
- Depends on: Ticket 4a (schema), Ticket 2 (DWD for Admin SDK access — can stub with test data until then)
Goal
Implement the identity resolution function and seed the mapping table so the PII middleware can use it.
Scope
In scope
- Build `resolve_identity(email_or_user_id) → anonymized_token` function
- Build `resolve_identity_batch(list_of_ids) → dict` for bulk lookups
- Seed the mapping table from Eden’s Admin SDK user directory (via GWS CLI with DWD)
- Handle unknown identities at runtime: auto-assign a fallback token and log for review
- Store the mapping in the location decided in 4a
- Unit tests: deterministic (same input → same output), handles unknowns, handles batch
Out of scope
- PII redaction middleware (Ticket 5a)
- Admin UI for managing the mapping (future scope)
Acceptance Criteria
- `resolve_identity("known@eden.com")` returns the correct token consistently
- `resolve_identity("unknown@external.com")` returns a fallback token and logs the unknown
- Batch function works for lists of 100+ identities in < 200ms
- Mapping table seeded with Eden’s user directory (or test data if DWD not yet live)
- Unit tests passing
Notes / Constraints
- If DWD (Ticket 2) is not yet approved, seed with a synthetic test dataset so downstream tickets are unblocked
- The function will be called by the PII middleware on every API response — it needs to be fast (in-memory cache with lazy refresh)
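A minimal sketch of the lookup with the fallback behavior from the acceptance criteria. The in-memory `Map` stands in for whatever store 4a selects, and the `External_N` convention is taken from the 4a edge-case rules:

```typescript
// Stand-in for the mapping store chosen in Ticket 4a.
const mapping = new Map<string, string>([
  ["known@eden.com", "Ops_Tech_1"],
]);

let externalCounter = 0;
const fallbacks = new Map<string, string>(); // unknown id → stable fallback token

function resolveIdentity(emailOrUserId: string): string {
  const hit = mapping.get(emailOrUserId);
  if (hit) return hit;
  // Unknown identity: assign a stable fallback token and log for review.
  let fb = fallbacks.get(emailOrUserId);
  if (!fb) {
    fb = `External_${++externalCounter}`;
    fallbacks.set(emailOrUserId, fb);
    console.warn(`unknown identity mapped to ${fb}`);
  }
  return fb;
}

function resolveIdentityBatch(ids: string[]): Record<string, string> {
  return Object.fromEntries(ids.map((id) => [id, resolveIdentity(id)]));
}
```

Since both maps live in memory, repeated lookups are O(1) — consistent with the "fast, in-memory cache" constraint; a lazy refresh from the persistent store would sit behind this.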
Ticket 5a: Build core PII redaction function
Estimate: 4 pts
Context
- Every API response (Slack and GWS) must pass through a redaction layer before entering the LLM context window. This is the enforcement point.
- Depends on: Ticket 4b (`resolve_identity` function)
- Spike reference: spike-command-center-data-access.md §5, Step 2; Architecture notes
Goal
Build the redact(response) function that strips or replaces all PII in API responses.
Scope
In scope
- Build `redact(raw_response: dict) → dict` that:
  - Replaces all email addresses with anonymized tokens (via `resolve_identity`)
  - Replaces all display names / real names with the matching anonymized token
  - Strips or masks phone numbers (regex-based)
  - Strips other PII patterns (physical addresses, health identifiers) if present
- Works on nested dicts/lists (API responses are deeply nested JSON)
- Runs at the tool function level — each tool calls `redact()` before returning
- Performance: < 50ms per response
Out of scope
- Test fixtures and validation suite (Ticket 5b)
- Anonymizing file content or message bodies (metadata only)
Acceptance Criteria
- `redact()` function exists and handles nested JSON structures
- Emails replaced with tokens from `resolve_identity`
- Display names replaced with matching tokens
- Phone numbers masked (e.g. `***-***-1234`) or removed entirely
- Processing time < 50ms on a representative API response
- Function is importable by any tool function module
Notes / Constraints
- Enforcement is at the tool function level, not the prompt level. Even if the LLM hallucinates, raw PII was already stripped.
- Use regex for email/phone patterns; use the mapping table (via `resolve_identity`) for name resolution
- Names are harder than emails — need to match display names from Slack (`users.info`) and GWS (Admin SDK) against the mapping. Consider building a reverse lookup: `display_name → email → token`.
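The recursive walk over nested JSON can be sketched as below. The regexes are deliberately simple illustrations (the real patterns need tuning), and `resolveToken` is a stub for the Ticket 4b function:

```typescript
// Illustrative patterns only — production regexes need broader coverage.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const PHONE_RE = /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g;

// Stub for resolve_identity (Ticket 4b); always returns one token here.
const resolveToken = (_email: string) => "Provider_A";

function redact(value: unknown): unknown {
  if (typeof value === "string") {
    return value
      .replace(EMAIL_RE, (m) => resolveToken(m)) // emails → anonymized tokens
      .replace(PHONE_RE, "***-***-****");        // phones → masked
  }
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, redact(v)])
    );
  }
  return value; // numbers, booleans, null pass through untouched
}
```

Walking values rather than known field names is what makes this safe against deeply nested API responses: any string anywhere in the payload passes through the same patterns.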
Ticket 5b: Write PII redaction test fixtures and validation suite
Estimate: 3 pts
Context
- The PII redaction function (5a) needs a thorough test suite to guarantee 0% PII leak rate. This ticket creates the fixtures and tests.
- Depends on: Ticket 5a
Goal
Build a test suite that validates the redaction function against representative API responses from all data sources.
Scope
In scope
- Create ≥ 6 representative API response fixtures:
- Slack message (with author name, email in profile, phone in text)
- Slack thread (parent + replies with multiple authors)
- Drive file metadata (owner email, last editor, shared-with list)
- Drive Activity record (actor email, target file, action type)
- Gmail thread metadata (sender, recipients, subject with names)
- Calendar event (organizer, attendees with emails)
- Each fixture includes known PII that must be redacted
- Unit tests: for each fixture, assert zero real emails, zero real names, zero phone numbers in output
- Negative tests: assert that non-PII data (file titles, channel names, timestamps) is preserved
- Integration test: run a mock end-to-end query and verify the final synthesized answer contains only tokens
Out of scope
- Testing against live API responses (that happens in the tool function tickets)
Acceptance Criteria
- ≥ 6 fixture files covering Slack, Drive, Gmail, Calendar, Drive Activity
- Each fixture has a paired “expected output” with all PII replaced
- All unit tests pass with 0% PII leak rate
- Non-PII fields are preserved (no over-redaction)
- Integration test demonstrates end-to-end anonymization
- Tests runnable via `vitest` (TypeScript)
Notes / Constraints
- Fixtures should use realistic but synthetic data (not real Eden data) for the test suite
- This test suite will be run in CI to catch regressions
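The core assertion each fixture test makes can be sketched as a leak check: serialize the redacted output and scan it for every known PII string planted in the fixture. Names and sample PII below are synthetic, per the notes:

```typescript
// Known-PII strings planted in a fixture (all synthetic test data).
const KNOWN_PII = ["jane.doe@eden.com", "Jane Doe", "555-867-5309"];

// Returns the list of PII strings that survived redaction (empty = pass).
function leaksPii(redacted: unknown): string[] {
  const blob = JSON.stringify(redacted);
  return KNOWN_PII.filter((pii) => blob.includes(pii));
}
```

In vitest this becomes `expect(leaksPii(redact(fixture))).toEqual([])` per fixture, with a mirrored negative test asserting that non-PII fields (titles, channel names, timestamps) still appear in the blob.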
Step 3 — Slack Data Access Tools
Ticket 6: Build search_slack tool function
Estimate: 5 pts
Context
- The agent needs to search across Eden’s Slack workspace to answer COO questions like “what’s the most active topic this week?” or “find discussions about the rebrand.”
- Depends on: Ticket 3 (Slack app + token), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 3; Part 2 Option A (RTS API)
Goal
Build the search_slack tool function that the orchestration agent can invoke to search Slack via the RTS API.
Scope
In scope
- Implement `search_slack(query, channels?, time_range?)` as a callable agent tool
- Uses the Slack RTS API (`assistant.search.context`) for semantic search (Business+) or keyword search (Pro)
- Returns anonymized results: channel name, anonymized author token, timestamp, thread reply count, reaction count
- All results pass through PII redaction middleware (Ticket 5a) before returning
- Handle pagination if results exceed a single page
- Handle rate limiting gracefully (backoff + retry)
Out of scope
- Thread reading (Ticket 7)
- Channel stats aggregation (Ticket 8a)
- Building the orchestration agent (Ticket 9a)
Acceptance Criteria
- Function callable with `query` (required), `channels` (optional filter), `time_range` (optional)
- Returns structured results with: channel, anonymized_author, timestamp, reply_count, reaction_count, snippet (anonymized)
- All author identities are anonymized tokens — no real names or emails
- Handles empty results gracefully
- Rate limiting: backs off and retries on 429 responses
- Tested against Eden’s Slack (or Brainforge’s for development)
Notes / Constraints
- Semantic search requires Business+ plan. If Eden is on Pro, this falls back to keyword matching — the function should handle both transparently.
- The RTS API is relatively new — monitor for behavior changes or undocumented limits.
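The backoff-and-retry behavior from the acceptance criteria can be sketched as a generic wrapper shared by all Slack tools. The signature is an assumption; real code would wrap the Slack Web API client call:

```typescript
// Retry a call on rate-limit errors with exponential backoff.
async function withBackoff<T>(
  call: () => Promise<T>,
  isRateLimited: (e: unknown) => boolean,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (e) {
      if (!isRateLimited(e) || attempt >= maxRetries) throw e;
      // Exponential backoff: base, 2x base, 4x base, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

A Slack 429 also carries a `Retry-After` header; honoring it instead of the fixed schedule would be a reasonable refinement.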
Ticket 7: Build read_slack_thread tool function
Estimate: 3 pts
Context
- When the agent finds a relevant Slack message via search, it often needs to read the full thread to understand the discussion context.
- Depends on: Ticket 3 (Slack app + token), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 3
Goal
Build the read_slack_thread tool function that retrieves the full thread context for a given message.
Scope
In scope
- Implement `read_slack_thread(channel_id, thread_ts)` as a callable agent tool
- Uses the Slack `conversations.replies` API to fetch all replies in the thread
- Returns anonymized thread: list of messages with anonymized author tokens, timestamps, reaction counts
- All results pass through PII redaction middleware (Ticket 5a)
- Handle pagination for long threads
Out of scope
- Searching for threads (that’s `search_slack`, Ticket 6)
- Extracting file attachments
Acceptance Criteria
- Function callable with `channel_id` and `thread_ts` (both required)
- Returns the parent message + all replies, each with: anonymized_author, timestamp, reaction_count, reply text (anonymized)
- All author identities are anonymized tokens
- Handles threads with 0 replies (returns parent message only)
- Handles long threads (pagination)
- Tested against a real Slack thread
Notes / Constraints
- `conversations.replies` may be rate-limited to 1 req/min for non-Marketplace apps. This is acceptable since thread reads happen after a search narrows down results.
- Thread text content will go through PII redaction. If the metadata-only constraint applies to Slack message bodies too, revisit with the team whether to return anonymized text or just metadata.
Open Questions
- Should the agent see anonymized message text, or only message metadata (author, timestamp, reactions)? The spike says “metadata only” for GWS but is less explicit for Slack thread content.
Ticket 8a: Build get_slack_channel_stats — history fetching and aggregation
Estimate: 4 pts
Context
- The COO may ask questions like “which channels are most active?” or “how much activity was there in operations this week?” The agent needs a tool to aggregate channel-level statistics.
- Depends on: Ticket 3 (Slack app + token), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 3
Goal
Build the core get_slack_channel_stats function that fetches channel history and computes aggregate metrics.
Scope
In scope
- Implement `get_slack_channel_stats(channel_id, time_range?)` as a callable agent tool
- Fetches recent messages via `conversations.history` with pagination
- Computes:
- Total message count in the time range
- Active participant count (anonymized tokens)
- Thread count and average reply depth
- Reaction count (total)
- Message frequency over time (messages per day)
- All participant identities anonymized via PII middleware
- Supports configurable time ranges (default: last 7 days)
Out of scope
- Caching and rate-limit handling (Ticket 8b)
- Cross-channel comparisons (the orchestration agent handles that)
Acceptance Criteria
- Function callable with `channel_id` (required) and `time_range` (optional, defaults to 7 days)
- Returns: message_count, participant_count, thread_count, avg_reply_depth, reaction_count, daily_message_counts
- No real user identities in the output
- Handles channels with no recent activity (returns zeroes)
- Correctly paginates through `conversations.history` for busy channels
- Tested against at least one active channel
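The aggregation step over already-fetched (and redacted) messages can be sketched as below, covering most of the listed metrics (daily counts omitted for brevity). The `Msg` shape mirrors only the `conversations.history` fields this sketch relies on:

```typescript
// Minimal message shape after redaction — user is already an anonymized token.
interface Msg {
  user: string;
  ts: string; // Slack epoch-seconds string
  reply_count?: number;
  reactions?: { count: number }[];
}

function channelStats(messages: Msg[]) {
  const participants = new Set(messages.map((m) => m.user));
  const threads = messages.filter((m) => (m.reply_count ?? 0) > 0);
  const replyTotal = threads.reduce((s, m) => s + (m.reply_count ?? 0), 0);
  const reactions = messages.reduce(
    (s, m) => s + (m.reactions ?? []).reduce((a, r) => a + r.count, 0),
    0
  );
  return {
    message_count: messages.length,
    participant_count: participants.size,
    thread_count: threads.length,
    avg_reply_depth: threads.length ? replyTotal / threads.length : 0,
    reaction_count: reactions,
  };
}
```

Running the aggregation after redaction means participant counts are computed over tokens, so no real identities ever enter the stats path.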
Ticket 8b: Add caching layer and rate-limit handling for channel stats
Estimate: 2 pts
Context
- `get_slack_channel_stats` is the heaviest Slack tool on API calls (paginates through `conversations.history`). Without caching and rate-limit handling, it’s slow and fragile.
- Depends on: Ticket 8a
Goal
Add a caching layer and rate-limit handling to the channel stats function so it’s performant and resilient.
Scope
In scope
- Add in-memory cache (TTL: 5 minutes) so repeated queries for the same channel don’t re-fetch
- Add backoff + retry logic for Slack 429 (rate limit) responses
- Log when rate limits are hit (for monitoring)
Out of scope
- Persistent cache (in-memory is sufficient for now)
- Pre-computing stats for all channels on a schedule (future optimization)
Acceptance Criteria
- Second call for the same channel within 5 minutes returns cached result (no API calls)
- Cache is keyed on the `(channel_id, time_range)` tuple
- 429 responses trigger exponential backoff with max 3 retries
- Rate limit hits are logged
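The TTL cache from the acceptance criteria can be sketched as below. The injectable clock is a test convenience, not a requirement:

```typescript
// In-memory TTL cache keyed on (channel_id, time_range).
function makeTtlCache<T>(ttlMs: number, now: () => number = Date.now) {
  const store = new Map<string, { value: T; expires: number }>();
  const key = (channelId: string, timeRange: string) => `${channelId}::${timeRange}`;
  return {
    get(channelId: string, timeRange: string): T | undefined {
      const hit = store.get(key(channelId, timeRange));
      if (!hit || hit.expires < now()) return undefined; // miss or expired
      return hit.value;
    },
    set(channelId: string, timeRange: string, value: T): void {
      store.set(key(channelId, timeRange), { value, expires: now() + ttlMs });
    },
  };
}
```

The stats tool would check `get` before fetching and call `set` after aggregation; a 5-minute TTL (`300_000` ms) matches the ticket.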
Ticket 9a: Build Mastra agent scaffold and register Slack tools
Estimate: 4 pts
Context
- The Slack tool functions (Tickets 6, 7, 8a) need to be wired into an orchestration agent that can receive a natural language question, decide which tool(s) to call, and synthesize an anonymized answer.
- Depends on: Tickets 6, 7, 8a (Slack tools)
- Spike reference: spike-command-center-data-access.md §5, Step 3 M1 deliverable
Goal
Build the Mastra agent that orchestrates the Slack tools and can answer questions about Slack activity.
Scope
In scope
- Build a TypeScript Mastra agent (`@mastra/core`) with the three Slack tools registered as callable tools
- Agent system prompt (instructions): instructs the LLM to plan which tool(s) to call, execute them, and synthesize results using only anonymized tokens
- Agent handles multi-tool calls (e.g. search → then read thread for top result)
- Configure Gemini 2.5 Flash as the LLM via Vertex AI API (BAA-covered, Eden’s GCP project)
- Test locally: agent correctly routes 5+ question types to the right tools
Out of scope
- Chat interface (Ticket 9b)
- Cloud Run deployment (Ticket 9b)
- GWS tools (M2 scope)
Acceptance Criteria
- Mastra agent instantiable with all three Slack tools
- Agent correctly selects `search_slack` for search queries
- Agent correctly chains `search_slack` → `read_slack_thread` for thread-detail queries
- Agent correctly selects `get_slack_channel_stats` for activity/volume queries
- Synthesized answers use only anonymized tokens
- LLM calls routed to Vertex AI Gemini API (not consumer Gemini API)
- Runs locally via `npx tsx` or the Next.js dev server
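The wiring could look roughly like the sketch below. Import paths and the `createTool`/`Agent` signatures vary across Mastra versions, and `searchSlackImpl` / `vertexGemini` are hypothetical names for the Ticket 6 implementation and the Vertex AI model provider — verify all of this against the installed `@mastra/core`:

```typescript
// Sketch only — confirm against the installed Mastra version.
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const searchSlack = createTool({
  id: "search_slack",
  description: "Search Eden Slack; returns anonymized results only.",
  inputSchema: z.object({
    query: z.string(),
    channels: z.array(z.string()).optional(),
    time_range: z.string().optional(),
  }),
  execute: async ({ context }) =>
    // Hypothetical: the Ticket 6 implementation, which redacts before returning.
    searchSlackImpl(context.query, context.channels, context.time_range),
});

export const commandCenter = new Agent({
  name: "eden-command-center",
  instructions:
    "Plan which tool(s) to call, execute them, and answer using only anonymized tokens.",
  model: vertexGemini, // hypothetical Vertex AI Gemini 2.5 Flash provider instance
  tools: { searchSlack /* , readSlackThread, getSlackChannelStats */ },
});
```

Keeping redaction inside each tool's `execute` (rather than in the prompt) preserves the Step 2 enforcement point even when the agent chains tools.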
Ticket 9b: Build minimal chat interface and deploy to Cloud Run for M1 demo
Estimate: 4 pts
Context
- M1 (Apr 6) requires Danny to be able to chat with the agent. This ticket builds the minimal interface and deploys the agent so it’s accessible.
- Depends on: Ticket 9a (agent scaffold)
Goal
Ship a working chat interface connected to the Mastra agent so Danny can demo it by April 6.
Scope
In scope
- Build the chat interface as a Next.js page within the same app that hosts the Mastra agent:
  - Simple chat UI with shadcn/ui components (input, message list, streaming indicator)
  - Next.js API route calls the Mastra agent directly (no separate backend)
  - Streaming responses via Server-Sent Events or the Vercel AI SDK `useChat` hook
- Deploy the Next.js + Mastra app to Cloud Run in Eden’s GCP project (BAA-covered)
- Configure GCP Secret Manager for service account key, Slack token, identity mapping
- End-to-end test: Danny asks 5 questions and gets useful, anonymized answers
Out of scope
- Full dashboards and project management views (later milestones)
- Google OAuth login (single-user demo, no auth needed yet)
- Polished design (functional is sufficient for M1)
Acceptance Criteria
- Danny can access the chat interface via a Cloud Run URL
- Typed questions are sent to the Mastra agent and responses stream back
- All responses contain only anonymized tokens — no real PII
- Tested with the 5 representative queries from the tech plan:
- “What’s the most active channel this week?”
- “Find discussions about [topic]”
- “What happened in #[channel] yesterday?”
- “Who’s been most active in Slack this week?”
- “Show me the thread about [topic]”
- Deployed on Cloud Run in Eden’s GCP project (not Vercel/Railway)
Notes / Constraints
- This is the M1 gate — must be demonstrable by April 6
- Don’t over-engineer the UI — the full Custom UI dashboards come in later milestones
- All compute and LLM calls must stay within Eden’s GCP project (BAA requirement)
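If the route streams via Server-Sent Events, the framing can be sketched as below. The chunk source is an assumption — real code would iterate the Mastra agent's stream:

```typescript
// Format one text chunk as an SSE frame: "data: <json>\n\n".
function toSseFrame(chunk: string): string {
  return `data: ${JSON.stringify({ text: chunk })}\n\n`;
}

// Wrap an (async) iterable of chunks as a ReadableStream for a Next.js Response.
function sseStream(
  chunks: AsyncIterable<string> | Iterable<string>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async start(controller) {
      for await (const c of chunks) controller.enqueue(encoder.encode(toSseFrame(c)));
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  });
}
```

The route handler would return `new Response(sseStream(agentChunks), { headers: { "Content-Type": "text/event-stream" } })`; using the Vercel AI SDK's `useChat` instead would replace this hand-rolled framing.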
Step 4 — Google Workspace Data Access Tools
Ticket 10: Build search_drive tool function
Estimate: 4 pts
Context
- The COO already has Drive search in Gemini and expects equivalent capability in the Command Center. This tool searches files by query, folder, or owner across the org via GWS CLI + DWD.
- Depends on: Ticket 2 (DWD approval), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4
Goal
Build the search_drive tool function so the agent can find files across Eden’s entire Drive.
Scope
In scope
- Implement `search_drive(query, folder_id?, owner_token?)` as a Mastra tool
- Uses GWS CLI (`gws drive files list`) with service account + DWD to impersonate and search across users
- Returns anonymized file metadata: title, anonymized last-editor token, revision count, last modified timestamp, file type, sharing scope
- All results pass through PII redaction middleware before returning
- Support query by keyword, MIME type, folder, and time range
Out of scope
- Reading file content (metadata only per privacy constraint)
- Drive Activity audit trail (Ticket 11)
- File comments (Ticket 12)
Acceptance Criteria
- Function callable with `query` (required), `folder_id` (optional), `owner_token` (optional)
- Returns structured results with: title, anonymized_editor, revision_count, last_modified, file_type, sharing_scope
- All editor/owner identities are anonymized tokens
- Handles empty results gracefully
- Tested against Eden’s Drive (or Brainforge’s with DWD for development)
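The CLI call path shared by the GWS tools can be sketched as below. The `gws drive files list` subcommand comes from the ticket, but the `--query` flag and JSON output shape are assumptions — verify them against the actual CLI. `redactFn` stands in for the Ticket 5a redactor, and the injectable `exec` exists so the shape is testable without the CLI:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

type Exec = (cmd: string, args: string[]) => Promise<{ stdout: string }>;
const defaultExec = promisify(execFile) as unknown as Exec;

async function searchDrive(
  query: string,
  redactFn: (v: unknown) => unknown,
  exec: Exec = defaultExec
): Promise<unknown> {
  // Impersonation/DWD flags omitted — configured per Ticket 2.
  const { stdout } = await exec("gws", ["drive", "files", "list", "--query", query]);
  return redactFn(JSON.parse(stdout)); // redaction is mandatory before returning
}
```

Every GWS tool in Tickets 10–15 follows this pattern: shell out, parse, redact, return — so the redaction enforcement point stays at the tool boundary.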
Ticket 11: Build get_drive_activity tool function
Estimate: 4 pts
Context
- Drive Activity API v2 provides a granular audit trail of actions (edit, share, move, comment, delete) across Drive. This is how the COO sees “movement” across projects.
- Depends on: Ticket 1 (GCP project with Drive Activity API enabled), Ticket 2 (DWD), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4; Part 1 (Drive Activity API)
Goal
Build the get_drive_activity tool function that returns anonymized action history for a folder or file.
Scope
In scope
- Implement `get_drive_activity(folder_id?, file_id?, time_range?)` as a Mastra tool
- Uses GWS CLI (`gws driveactivity:v2 activity query`) with service account + DWD
- Returns anonymized activity records: action type (edit, share, move, comment, delete), anonymized actor token, target file title, timestamp
- Support filtering by folder (recursive), file, and time range
- Paginate through results for busy folders
Out of scope
- Content diffs (Drive Activity only returns action metadata)
- File comments text (Ticket 12)
Acceptance Criteria
- Function callable with at least one of `folder_id` or `file_id`, plus optional `time_range`
- Returns structured activity records with: action_type, anonymized_actor, target_file, timestamp
- All actor identities are anonymized tokens
- Correctly paginates for folders with 100+ actions
- Tested against a Drive folder with known recent activity
Ticket 12: Build get_file_comments tool function
Estimate: 3 pts
Context
- Drive Comments API exposes comment threads on files. The COO uses this to track review cycles and feedback loops.
- Depends on: Ticket 2 (DWD), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4
Goal
Build the get_file_comments tool function that returns anonymized comment metadata for a Drive file.
Scope
In scope
- Implement `get_file_comments(file_id)` as a Mastra tool
- Uses GWS CLI (`gws drive comments list`) with service account + DWD
- Returns anonymized comment metadata: anonymized author token, timestamp, resolved status, reply count
- Include replies (as nested items) with anonymized authors
Out of scope
- Comment content/text (metadata only per privacy constraint, unless team decides to include anonymized text)
- Creating or resolving comments
Acceptance Criteria
- Function callable with `file_id` (required)
- Returns: list of comments with anonymized_author, timestamp, resolved, reply_count
- All author identities are anonymized tokens
- Handles files with zero comments
- Tested against a file with known comments
Ticket 13: Build search_gmail tool function
Estimate: 4 pts
Context
- The COO currently sees Gmail context in Gemini and expects it in the Command Center. This tool searches thread metadata (subject, sender, timestamps) without reading message bodies.
- Depends on: Ticket 2 (DWD with `gmail.readonly`), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4
Goal
Build the search_gmail tool function for anonymized email thread metadata search.
Scope
In scope
- Implement `search_gmail(query, user_token?, time_range?)` as a Mastra tool
- Uses GWS CLI (`gws gmail messages list` + `gws gmail messages get`) with service account + DWD
- If `user_token` is provided, impersonate that user; otherwise search across a set of key users (COO, department heads)
- Returns anonymized thread metadata: subject line, anonymized sender/recipient tokens, timestamp, thread length, label categories
- No message bodies — metadata only
Out of scope
- Reading email bodies or attachments
- Sending emails
- Gmail settings or filter management
Acceptance Criteria
- Function callable with `query` (required), `user_token` (optional), `time_range` (optional)
- Returns: subject, anonymized_sender, anonymized_recipients, timestamp, thread_length, labels
- All sender/recipient identities are anonymized tokens
- No message body content in the response
- Handles empty results gracefully
- Tested against a mailbox with known threads
Notes / Constraints
- Gmail DWD requires the `gmail.readonly` scope — ensure this was included in the Ticket 2 DWD request
- Searching “across all users” is expensive — the tool should accept a user scope parameter or default to a small set of key users
Ticket 14: Build search_calendar tool function
Estimate: 3 pts
Context
- Calendar visibility is something the COO already gets from Gemini. The Command Center needs to show meeting patterns, scheduling density, and attendee overlaps.
- Depends on: Ticket 2 (DWD with `calendar.readonly`), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4
Goal
Build the search_calendar tool function for anonymized calendar event metadata.
Scope
In scope
- Implement `search_calendar(query?, user_token?, time_range?)` as a Mastra tool
- Uses GWS CLI (`gws calendar events list`) with service account + DWD
- Returns anonymized event info: title, anonymized organizer token, anonymized attendee tokens, duration, recurrence, response status
- Support filtering by user, time range, and keyword in title
Out of scope
- Meeting notes or descriptions
- Creating/modifying calendar events
Acceptance Criteria
- Function callable with at least one of `query`, `user_token`, or `time_range`
- Returns: title, anonymized_organizer, anonymized_attendees, start_time, duration, recurrence, response_statuses
- All organizer/attendee identities are anonymized tokens
- Handles users with no events in range
- Tested against a calendar with known events
Ticket 15: Build get_user_directory tool function
Estimate: 2 pts
Context
- Admin SDK provides the org directory. The agent uses this for identity resolution context (departments, titles) without revealing real names.
- Depends on: Ticket 2 (DWD with `admin.directory.user.readonly`), Ticket 5a (PII redaction)
- Spike reference: spike-command-center-data-access.md §5, Step 4
Goal
Build the get_user_directory tool function for anonymized org directory queries.
Scope
In scope
- Implement `get_user_directory(query?, department?)` as a Mastra tool
- Uses GWS CLI (`gws admin directory users list`) with service account + DWD
- Returns anonymized role-based info only: anonymized token, department, title, org unit
- No real names or emails in output
Out of scope
- Modifying user accounts
- Returning real identities
Acceptance Criteria
- Function callable with optional `query` and `department` filters
- Returns: anonymized_token, department, title, org_unit
- Zero real names or emails in output
- Handles empty queries (returns full directory, anonymized)
- Tested against the org directory
Ticket 16: Register GWS tools in Mastra agent
Estimate: 3 pts
Context
- The six GWS tool functions (Tickets 10–15) need to be registered in the Mastra agent alongside the existing Slack tools so the agent can answer questions about both data sources.
- Depends on: Tickets 10–15 (GWS tools), Ticket 9a (existing Mastra agent with Slack tools)
- Spike reference: spike-command-center-data-access.md §5, Step 4 (M2 deliverable)
Goal
Wire the GWS tools into the Mastra agent, update the system prompt, and validate end-to-end.
Scope
In scope
- Register all six GWS tools in the Mastra agent instance
- Update the agent system prompt to include GWS tool descriptions and routing guidance
- Test: agent correctly routes GWS-specific questions to the right tools
- Test: agent handles ambiguous queries that could go to either Slack or GWS
- Validate all responses are anonymized
Out of scope
- Cross-platform orchestration (queries that combine Slack + GWS — that’s Step 5, Ticket 17)
- New UI components for GWS data (dashboards come later)
Acceptance Criteria
- Agent has 9 tools registered (3 Slack + 6 GWS)
- “Search for documents about the rebrand” → calls `search_drive`
- “Who edited the Q1 budget spreadsheet?” → calls `get_drive_activity`
- “Show me comments on the strategy doc” → calls `get_file_comments`
- “What emails came in about the vendor contract?” → calls `search_gmail`
- “What meetings does the ops team have this week?” → calls `search_calendar`
- “Who’s in the engineering department?” → calls `get_user_directory`
- All responses use anonymized tokens only
- Existing Slack queries still work correctly
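The “9 tools registered” criterion can be sanity-checked with a simple combined tool map. A sketch of the registration shape (each entry would be a Mastra `createTool` instance in the real build and the map passed to the agent's `tools` option; everything here is stubbed for illustration):

```typescript
// Stubbed tool entries — real ones would be Mastra createTool instances.
type ToolStub = { description: string };
const tool = (description: string): ToolStub => ({ description });

const slackTools = {
  search_slack: tool("Search Slack messages (anonymized)"),
  read_slack_thread: tool("Read a Slack thread (anonymized)"),
  get_slack_channel_stats: tool("Channel activity stats"),
};

const gwsTools = {
  search_drive: tool("Search Drive file metadata"),
  get_drive_activity: tool("Drive edit activity"),
  get_file_comments: tool("Comments on a Drive file"),
  search_gmail: tool("Gmail thread metadata, no bodies"),
  search_calendar: tool("Calendar event metadata"),
  get_user_directory: tool("Anonymized org directory"),
};

// Spreading keeps Slack registration untouched — existing Slack queries
// should behave exactly as they did at M1.
const allTools = { ...slackTools, ...gwsTools };
```

Keeping the two maps separate makes the M1/M2 boundary visible in code and makes it trivial to assert the 3 + 6 = 9 count in a smoke test.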
M2 deliverable (Apr 13): Danny can query the agent for Google Workspace activity — file movement, email thread patterns, calendar load, Drive comments — all with anonymized identities. Slack (M1) continues to work. Same chat UI, same Cloud Run deployment.
Step 5 — Cross-Platform Orchestration
Ticket 17: Build cross-platform orchestration and project registry
Estimate: 5 pts
Context
- The COO’s highest-value queries span both Slack and Google Workspace: “What’s the status of Project X?” requires checking Slack channels, Drive folders, Calendar meetings, and Gmail threads. The agent needs orchestration logic and a project registry to resolve these.
- Depends on: Ticket 9a (Mastra agent), Ticket 16 (GWS tools registered)
- Spike reference: spike-command-center-data-access.md §5, Step 5
Goal
Build the cross-platform orchestration layer and project registry so the agent can answer unified questions across all data sources.
Scope
In scope
- Build orchestration logic in the Mastra agent that:
- Plans which tools to call for cross-source queries (multi-step reasoning)
- Executes Slack + GWS queries in parallel where possible
- Applies the cross-platform query pattern: Slack → Drive → Activity → Comments → Calendar → Gmail → Synthesize
- All results pass through PII redaction middleware before synthesis
- Build a project registry: lightweight mapping of project names → Slack channels, Drive folder IDs, and anonymized key participants
- Stored in Firestore (Eden GCP) or Google Sheet via GWS CLI
- Agent uses this to resolve ambiguous queries (“the rebrand” → folder ID + rebrand channel)
- Update agent system prompt for cross-platform reasoning
- Test with 5+ cross-platform query types
Out of scope
- Dashboard views (later milestones)
- Scheduled digests (optional future feature)
Acceptance Criteria
- Agent correctly answers cross-platform queries:
- “What’s the status of Project X across Slack and Drive?”
- “What happened this week?” (synthesizes all sources)
- “Who’s been most active on the rebrand?” (Slack + Drive Activity)
- “Are there any meetings about the vendor contract and related Slack discussions?”
- “Show me all activity on Project Y” (Slack messages + Drive edits + Calendar events + Gmail threads)
- Project registry resolvable by name or alias
- Parallel execution: multi-source queries complete in < 30 seconds
- All synthesized answers use anonymized tokens only
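A sketch of the project registry entry and the name/alias resolution described in scope. Field names are illustrative assumptions and the storage backend (Firestore or a Sheet) is elided; the fan-out function shows the parallel-execution shape with tool calls stubbed:

```typescript
// Registry entry: project names/aliases mapped to their Slack channels,
// Drive folders, and anonymized participants (tokens only, never names).
interface ProjectEntry {
  name: string;
  aliases: string[];
  slackChannels: string[];
  driveFolderIds: string[];
  participantTokens: string[];
}

// Resolves "the rebrand" -> the Rebrand entry, case-insensitively.
function resolveProject(
  registry: ProjectEntry[],
  query: string
): ProjectEntry | undefined {
  const q = query.trim().toLowerCase();
  return registry.find(
    (p) =>
      p.name.toLowerCase() === q ||
      p.aliases.some((a) => a.toLowerCase() === q)
  );
}

// Once a project resolves, Slack and GWS lookups can fan out in parallel.
async function gatherActivity(
  p: ProjectEntry,
  tools: {
    slack: (channels: string[]) => Promise<unknown>;
    drive: (folderIds: string[]) => Promise<unknown>;
  }
) {
  const [slack, drive] = await Promise.all([
    tools.slack(p.slackChannels),
    tools.drive(p.driveFolderIds),
  ]);
  return { slack, drive };
}
```

The parallel fan-out is what keeps multi-source queries inside the 30-second budget; the registry lookup is what turns a vague query into concrete channel and folder IDs before any tool runs.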
Ticket 18: End-to-end validation and production deploy
Estimate: 4 pts
Context
- M3 (Apr 20) is the full Command Center delivery gate. All tools, orchestration, and the chat UI must work end-to-end with real Eden data.
- Depends on: Ticket 17 (cross-platform orchestration)
- Spike reference: spike-command-center-data-access.md §5, Step 8
Goal
Validate the full system against real Eden data and cut the production deployment.
Scope
In scope
- Run Danny through 10–15 test queries spanning:
- Single-source Slack (“what’s the most active channel this week?”)
- Single-source GWS (“who’s been editing the rebrand docs?”)
- Cross-platform (“what’s the status of Project X across Slack and Drive?”)
- Anonymization validation (“show me team activity” — confirm no real names appear)
- Fix any issues found during validation
- Finalize Cloud Run deployment configuration (min instances, memory, secrets)
- Set GCP budget alerts and Vertex AI quotas
- Document the deployment: Cloud Run service URL, GCP project, secret locations, runbook for restarts
Out of scope
- Dashboard views and project management admin (later milestones)
- Scheduled digests
Acceptance Criteria
- All 10–15 test queries return useful, anonymized answers
- 0% PII leak rate confirmed across all test queries
- Cloud Run service running in Eden’s GCP project with production config
- GCP budget alerts set
- Deployment runbook documented in Eden vault
- Danny signs off on M3
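The “0% PII leak rate” criterion can be checked mechanically rather than by eyeballing transcripts. A minimal sketch, assuming the Ticket 4a mapping table gives a list of real identities to scan for (a real check would also run the Ticket 5b regex fixtures for emails and phone numbers):

```typescript
// Scans one agent response for any known real identity. Returns the
// identities found; an empty array means the response passes.
function findPiiLeaks(response: string, realIdentities: string[]): string[] {
  const haystack = response.toLowerCase();
  return realIdentities.filter((id) => haystack.includes(id.toLowerCase()));
}
```

Running this over every response in the 10–15 validation queries turns the leak-rate criterion into a pass/fail gate: any non-empty result blocks the M3 sign-off.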
M3 deliverable (Apr 20): Full Command Center — Danny opens the chat UI, asks questions about anything happening across Eden’s entire Google Workspace and entire Slack. All identities anonymized. The agent replaces (and exceeds) what Gemini provided natively.
Ticket dependency graph
Ticket 1: Create GCP project (2 pts) ──────────┐
├──→ Ticket 4a: Mapping schema (3 pts)
Ticket 2: Request DWD approval (2 pts) ─────────┘ │
▼
Ticket 4b: resolve_identity (4 pts)
│
Ticket 3: Create Slack app (3 pts) ─────────────┐ │
├──→ Ticket 5a: PII redaction core (4 pts)
│ │
│ Ticket 5b: PII test suite (3 pts)
│ │
│ ▼
├── Ticket 6: search_slack (5 pts)
├── Ticket 7: read_slack_thread (3 pts)
└── Ticket 8a: channel_stats core (4 pts)
│
Ticket 8b: caching + rate limits (2 pts)
│
▼
Ticket 9a: Mastra agent scaffold (4 pts)
│
▼
Ticket 9b: Chat UI + Cloud Run deploy (4 pts)
↑ M1 gate (Apr 6)
│
┌──────────────────────────────────────────────────────┘
│ (DWD approved — Ticket 2 — unlocks GWS tools)
│
├── Ticket 10: search_drive (4 pts)
├── Ticket 11: get_drive_activity (4 pts)
├── Ticket 12: get_file_comments (3 pts)
├── Ticket 13: search_gmail (4 pts)
├── Ticket 14: search_calendar (3 pts)
└── Ticket 15: get_user_directory (2 pts)
│
▼
Ticket 16: Register GWS tools in agent (3 pts)
↑ M2 gate (Apr 13)
│
▼
Ticket 17: Cross-platform orchestration (5 pts)
│
▼
Ticket 18: E2E validation + prod deploy (4 pts)
↑ M3 gate (Apr 20)
Critical paths
M1 critical path (27 pts): Ticket 3 → 4a → 4b → 5a → 6 → 9a → 9b. Tickets 7, 8a/8b, and 5b run in parallel once 5a is done.
M2 critical path (additional 23 pts from M1): Tickets 10–15 can all run in parallel once DWD (Ticket 2) is approved and PII middleware (5a) is done. Ticket 16 follows when all six are complete. Most GWS tools can start during Week 2 if DWD is approved by then.
M3 critical path (additional 9 pts from M2): Ticket 17 → 18. Orchestration builds on the full tool set; validation is the final gate.
Total: 75 pts. Parallelizable structure means the calendar path is ~4 weeks even though the serial point count is higher.