Spike: Eden AI Command Center — Data Access Layer
Date: 2026-03-25
Author: Sam (Brainforge)
Status: Draft
Goal: Determine whether gws CLI and Slack APIs can serve as the data access layer for an Eden COO Command Center without building custom data pipelines.
Executive Summary
The Eden AI Command Center needs to let a COO query data from across their entire Google Workspace and entire Slack workspace. This spike investigates whether existing tools (GWS CLI, Slack MCP/RTS API) can serve as the data access layer for an agent, or whether custom ETL pipelines are necessary.
Bottom line:
- Google Workspace: GWS CLI is sufficient. It covers every Google Workspace API, supports service accounts with domain-wide delegation (query any user’s data), outputs structured JSON, and has 100+ agent skills. No data pipeline needed.
- Slack: Depends on the approach. Three viable options exist, each with trade-offs on latency, cost, and subscription tier requirements. A data pipeline is one option but not the only one.
Part 1: Google Workspace — GWS CLI
What is it?
[gws](https://github.com/googleworkspace/cli) (v0.22.0, published 2026-03-24) is a Rust-based CLI that wraps every Google Workspace API into a single tool. Commands are generated dynamically from Google’s Discovery Service, so new APIs appear automatically.
Full service coverage
| Service | Key capabilities |
|---|---|
| Gmail | List, search, read, send, label, archive, filter messages; manage drafts and threads |
| Calendar | List/create/update/delete events; free-busy queries; recurring events; multi-calendar |
| Drive | List, search, upload, download, share, move files; manage permissions and shared drives |
| Sheets | Read/write/append ranges; create spreadsheets; manage tabs |
| Docs | Read and write document content |
| Slides | Read and write presentations |
| Chat | Send/read messages in Chat spaces |
| Admin SDK | List/manage users, groups, devices, org units; audit logs and usage reports |
| Forms | Read form structure and responses |
| Tasks | Manage task lists and tasks |
| People/Contacts | Manage contacts and profiles |
| Meet | Create meeting spaces; review participant info |
| Keep | Manage notes |
| Vault | eDiscovery holds and exports |
| Apps Script | Manage and push Apps Script projects |
| Classroom | Manage classes, rosters, coursework |
| Alert Center | Security alerts |
| Workspace Events | Subscribe to real-time change events across Workspace |
Features relevant to an agent
| Feature | Why it matters for Command Center |
|---|---|
| Structured JSON output | Every response is JSON — pipe to jq, feed directly to an LLM |
| Auto-pagination (--page-all) | No manual cursor management; stream all results as NDJSON |
| Dry-run mode | Preview API calls before execution — safe for a COO to use |
| Schema introspection (gws schema <method>) | Agent can inspect any API method’s request/response shape at runtime |
| 100+ agent skills (SKILL.md files) | Pre-built skills for Claude Code, Cursor, OpenCode, Gemini CLI |
| 10 persona bundles | Executive assistant, project manager, sales ops, IT admin, etc. |
| 50+ recipes | Multi-step workflows: email triage, meeting prep, weekly digest, file organize, etc. |
| MCP server mode | Expose all GWS APIs to any MCP client (Cursor, Claude, etc.) |
| Model Armor | Scan API responses for prompt injection before feeding to LLM |
| Workspace Events subscriptions | Subscribe to real-time events (file changes, calendar updates) — could power live dashboards |
Authentication modes for Command Center
| Mode | How it works | Best for |
|---|---|---|
| OAuth (interactive) | User logs in via browser, tokens stored locally | Local dev/testing on Brainforge workspace |
| Service Account + Domain-Wide Delegation | Service account impersonates any user in the domain | Production Command Center — query any user’s Gmail, Calendar, Drive without individual OAuth flows |
| Headless/CI | Export credentials from an authenticated machine; use on headless server | Server deployment |
For the Command Center, service account + domain-wide delegation is the right approach. This lets the agent query the COO’s Gmail, Calendar, Drive, and also impersonate other users if needed (e.g., “What did the VP of Sales send to client X last week?”).
Setup requirements:
- Google Cloud project with OAuth consent screen
- Service account with JSON key
- Domain-Wide Delegation enabled in Google Workspace Admin Console
- OAuth scopes granted (Drive, Gmail, Calendar, Sheets, Docs, etc.)
- GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE and GOOGLE_WORKSPACE_CLI_IMPERSONATED_USER env vars
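A thin wrapper in the agent layer can assemble these pieces. A minimal sketch, assuming the env var names listed above; the helper name, file paths, and email addresses are invented for illustration:

```python
import json

def build_gws_invocation(service_args, params, key_file, impersonate):
    """Build the argv and extra environment for a gws call that impersonates
    a user via domain-wide delegation. Illustrative helper, not part of gws."""
    cmd = ["gws", *service_args, "--params", json.dumps(params)]
    env = {
        # Env var names as documented in the setup requirements above.
        "GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE": key_file,
        "GOOGLE_WORKSPACE_CLI_IMPERSONATED_USER": impersonate,
    }
    return cmd, env

cmd, env = build_gws_invocation(
    ["gmail", "users", "messages", "list"],
    {"userId": "me", "q": "from:vendor.com", "maxResults": 10},
    key_file="/secrets/sa-key.json",          # hypothetical path
    impersonate="coo@example.com",            # hypothetical user
)
# The agent would then run subprocess.run(cmd, env={**os.environ, **env},
# capture_output=True) and parse stdout as JSON.
```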
What a COO agent could do with GWS CLI
Email intelligence:
- gws gmail users messages list — search Gmail with full query syntax (from:, to:, subject:, after:, before:, has:attachment, etc.)
- gws gmail users messages get — read full message content
- Combine with LLM: “Summarize all emails from [vendor] in the last 30 days”
Calendar awareness:
- gws calendar events list — upcoming meetings, past meetings, attendees
- gws calendar calendarList list — all calendars the user has access to
- Free-busy queries across multiple users
- Combine with LLM: “What meetings does the engineering team have this week?”
Document access:
- gws drive files list — search Drive by name, type, owner, date range, shared drive
- gws sheets spreadsheets values get — read any spreadsheet data
- gws docs documents get — read document content
- Combine with LLM: “Find the latest board deck and summarize key metrics”
Org-wide insights (with admin scopes):
- gws admin users list — full user directory
- gws admin reports activities list — audit logs (who did what, when)
- gws admin reports userUsageReport get — usage metrics per user
- Combine with LLM: “Which team members haven’t logged in this week?”
Real-time subscriptions:
- gws events subscribe — subscribe to changes on Drive files, Calendar events, Chat spaces
- Could power a live feed: “Alert me when anyone modifies the Q2 budget spreadsheet”
File comments and replies
The Drive API (and therefore gws) exposes full comment and reply resources:
# List all comments on a file
gws drive comments list --params '{"fileId":"FILE_ID","fields":"*"}'
# Get a specific comment with all replies
gws drive comments get --params '{"fileId":"FILE_ID","commentId":"COMMENT_ID","includeDeleted":false}'
# List replies to a specific comment
gws drive replies list --params '{"fileId":"FILE_ID","commentId":"COMMENT_ID"}'

Each comment/reply includes:
- author (name, email, photo)
- content (plain text) and htmlContent
- createdTime / modifiedTime
- resolved status (for suggestion/action-item comments)
- action field (resolve or reopen)
- anchor (position in the document where the comment was placed)
This means the agent can answer questions like: “What feedback did the team leave on the Q1 budget doc?” or “Are there unresolved comments on the SOW?”
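The agent’s filtering step over a comments-list response can be sketched as follows. The payload shape mirrors the Drive API comments resource fields listed above; the helper name and sample data are illustrative:

```python
def unresolved_comments(payload):
    """Pull unresolved comments out of a `gws drive comments list` response.
    Illustrative helper; field names follow the Drive API comments resource."""
    return [
        {
            "author": c["author"]["displayName"],
            "content": c["content"],
            "modified": c["modifiedTime"],
        }
        for c in payload.get("comments", [])
        if not c.get("resolved", False)
    ]

# Invented sample payload, not real API output.
sample = {
    "comments": [
        {"author": {"displayName": "VP Marketing"},
         "content": "Need to align color palette with packaging",
         "modifiedTime": "2026-03-25T11:00:00Z", "resolved": False},
        {"author": {"displayName": "Designer"},
         "content": "Fixed in v2",
         "modifiedTime": "2026-03-24T09:00:00Z", "resolved": True},
    ]
}
open_items = unresolved_comments(sample)  # only the VP Marketing comment remains
```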
Revision history (file versions)
The Drive API tracks revision history for every file:
# List all revisions of a file
gws drive revisions list --params '{"fileId":"FILE_ID","fields":"*"}'
# Get a specific revision
gws drive revisions get --params '{"fileId":"FILE_ID","revisionId":"REVISION_ID"}'

Each revision includes:
- modifiedTime — when this version was saved
- lastModifyingUser — who made the edit (name, email)
- size — file size at that revision
- exportLinks — download the file as it was at that point in time
Note: For Google-native files (Docs, Sheets, Slides), Drive auto-creates revision entries as users edit. Non-head revisions are purged after 30 days unless marked “Keep Forever” or the file has fewer than 100 revisions.
Drive Activity API (who did what, when)
This is the most powerful capability for tracking movement/updates. The Drive Activity API v2 provides a granular audit trail of all actions on files and folders. gws can access it via its dynamic discovery mechanism:
# Query activity for a specific file
gws driveactivity:v2 activity query --json '{"itemName":"items/FILE_ID","pageSize":25}'
# Query all activity under a folder (recursive)
gws driveactivity:v2 activity query --json '{"ancestorName":"items/FOLDER_ID","pageSize":50}'
# Filter by time range
gws driveactivity:v2 activity query --json '{"itemName":"items/FILE_ID","filter":"time >= \"2026-03-01T00:00:00Z\""}'
# Filter by action type (edits only)
gws driveactivity:v2 activity query --json '{"itemName":"items/FILE_ID","filter":"detail.action_detail_case: EDIT"}'

Each DriveActivity record includes:
- Actor — who did it (user email, system event, or admin)
- ActionDetail — what they did: Create, Edit, Move, Rename, Delete, Restore, PermissionChange, Comment, DataLeakPreventionChange, etc.
- Target — which file/folder/shared drive was affected
- Timestamp or TimeRange — when it happened
What this unlocks for the Command Center:
- “What documents were edited this week?” — query the project folder with a time filter
- “Who modified the SOW since the last meeting?” — query by file + time range
- “Show all sharing/permission changes in the last month” — filter by PERMISSION_CHANGE
- “What’s the activity on the Q2 planning folder?” — recursive query on the parent folder
Required OAuth scope: https://www.googleapis.com/auth/drive.activity.readonly
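A small post-processing step turns raw activity records into the counts behind answers like “what was edited this week”. A sketch, assuming the v2 response shape where each record’s primaryActionDetail object carries a single key naming the action; the helper and sample data are illustrative:

```python
from collections import Counter

def summarize_actions(response):
    """Tally DriveActivity records by action type (edit, create, move, ...).
    Assumes each primaryActionDetail has one key naming the action."""
    counts = Counter()
    for activity in response.get("activities", []):
        for action in activity.get("primaryActionDetail", {}):
            counts[action] += 1
    return counts

# Invented sample response, trimmed to the fields used here.
sample = {"activities": [
    {"primaryActionDetail": {"edit": {}}, "timestamp": "2026-03-23T10:00:00Z"},
    {"primaryActionDetail": {"edit": {}}, "timestamp": "2026-03-24T10:00:00Z"},
    {"primaryActionDetail": {"create": {}}, "timestamp": "2026-03-25T10:00:00Z"},
]}
counts = summarize_actions(sample)  # Counter({'edit': 2, 'create': 1})
```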
Admin audit logs and usage reports
For org-wide visibility beyond individual files:
# Admin audit log — who logged in, changed settings, etc.
gws admin-reports activities list --params '{"userKey":"all","applicationName":"drive","maxResults":50}'
# Usage report — per-user activity metrics
gws admin-reports userUsageReport get --params '{"userKey":"all","date":"2026-03-24"}'

Workspace Events (real-time subscriptions)
For live monitoring of changes:
# Subscribe to changes on a Drive file or folder
gws events subscriptions create --json '{"targetResource":"//drive.googleapis.com/items/FILE_ID","eventTypes":["google.workspace.drive.file.v1.updated"],"notificationEndpoint":{"pubsubTopic":"projects/PROJECT_ID/topics/TOPIC_NAME"},"payloadOptions":{"includeResource":true}}'

This enables push notifications when files are edited, shared, commented on, etc. Could power a real-time “activity feed” in the Command Center.
Limitations
| Limitation | Impact | Mitigation |
|---|---|---|
| Not an officially supported Google product | Risk of breaking changes before v1.0 | Pin versions; the project is actively maintained (22K stars, 40 contributors) |
| Rate limits from Google APIs | Heavy usage may hit quotas | Use caching; batch queries; request quota increases from Google |
| No full-text search across all Drive content | Can search file names/metadata but not index content of all docs | Use Drive’s native fullText query param for Google-native files; for deeper indexing, consider a lightweight extraction layer |
| Domain-Wide Delegation requires Workspace admin | Eden’s IT must configure this | Provide setup playbook; one-time setup |
| Drive Activity API requires separate auth scope | Must add drive.activity.readonly to OAuth scopes | One-time scope addition during setup |
| Revision history purged after 30 days | Can’t see file-level diffs older than 30 days | Activity API still records events; revision content is what’s purged |
Verdict: No data pipeline needed for Google Workspace
gws CLI can serve as the complete data access layer for the Command Center. The agent calls gws commands, gets structured JSON, and feeds results to the LLM. Domain-wide delegation means it can access any user’s data across the org.
Part 2: Slack — Data Access Options
Slack has multiple data access paths. Unlike Google Workspace (where gws CLI covers everything), Slack requires choosing between approaches based on the use case, subscription tier, and latency requirements.
Option A: Slack MCP Server + Real-Time Search (RTS) API
What: Slack’s official MCP server (GA since Feb 17, 2026) provides tools for AI agents to search and interact with Slack content. The RTS API powers the search capability.
How it works:
- Register a Slack app with search:read.* scopes
- User authenticates via OAuth (user token), or the bot receives an action_token from events
- Call assistant.search.context with a natural language or keyword query
- Get back relevant messages, files, users, and channels
- Optionally pull full thread context with conversations.replies and conversations.history
MCP server tools:
| Tool | Rate limit | What it does |
|---|---|---|
| Search messages & files | Special limits | Semantic + keyword search across workspace |
| Search users | 20+/min (Tier 2) | Find users by name, email, ID |
| Search channels | 20+/min (Tier 2) | Find channels by name, description |
| Send message | Special limits | Post to any conversation |
| Read a channel | 50+/min (Tier 3) | Full channel history |
| Read a thread | 50+/min (Tier 3) | Complete thread conversation |
| Create/update canvas | 20-50+/min (Tier 2-3) | Rich document management |
| Read a canvas | 50-100+/min (Tier 3-4) | Export canvases as markdown |
| Read user profile | 100+/min (Tier 4) | Full profile with custom fields |
RTS API search scopes:
| Scope | Token type | What it unlocks |
|---|---|---|
| search:read.public (required) | Bot or User | All public channel messages |
| search:read.private | User only | Private channel messages |
| search:read.im | User only | Direct messages |
| search:read.mpim | User only | Multi-party DMs |
| search:read.files | Bot or User | File search (combine with channel scopes) |
| search:read.users | Bot or User | User directory search |
Advanced features:
- OR operator for multi-topic queries ("budget OR finance OR expenses")
- Time-range filtering (after, before timestamps)
- Content type filtering (messages, files, channels, users)
- Context messages (surrounding messages for better LLM context)
- Semantic search available on Business+ and Enterprise+ plans
Pros:
- No data pipeline needed — real-time search, no data stored externally
- Official Slack product, actively maintained
- Semantic search (not just keyword matching) on Business+ plans
- Already available as MCP server for Cursor/Claude integration
- Respects user-level permissions (only returns what the authenticated user can see)
- Zero data copy requirement (Slack policy)
Cons:
- User token required for private/DM data (the COO must authenticate)
- Rate limits may constrain bulk analysis
- Searches return “most relevant” results, not exhaustive results — low relevance items excluded
- No guaranteed access to complete history (search quality, not bulk export)
- Semantic search requires Business+ or Enterprise+ plan
- Cannot store/copy retrieved data (Slack policy — pure real-time access)
Subscription requirements:
- API access: Available on all paid plans (Pro, Business+, Enterprise+)
- Semantic search: Business+ or Enterprise+ only
- Recommendation for Eden: Business+ minimum ($12.50/user/month, billed annually) for semantic search capability
Option B: Conversations History API (Traditional API)
What: The standard conversations.history and conversations.list endpoints — iterate through every channel and pull message history.
How it works:
- Register a Slack app with channels:history, groups:history, im:history, mpim:history scopes
- Call conversations.list to get all channels
- For each channel, call conversations.history with cursor-based pagination
- Store/index the data locally
Rate limits:
- conversations.history: Tier 3 (50+/min) for established apps
- Warning: as of May 2025, newly created apps not approved for the Slack Marketplace face 1 req/min limits
- Recommended page size: 100-200 messages per request
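The channel-by-channel pull described above reduces to a cursor loop. A sketch, with a stubbed fetch function standing in for the real conversations.history call (the stub and page data are invented):

```python
def backfill_channel(fetch, channel_id, page_size=200):
    """Pull a channel's full history via cursor-based pagination.
    `fetch` stands in for a POST to conversations.history."""
    messages, cursor = [], None
    while True:
        page = fetch(channel=channel_id, limit=page_size, cursor=cursor)
        messages.extend(page["messages"])
        cursor = page.get("response_metadata", {}).get("next_cursor")
        if not cursor:  # empty or absent next_cursor means the last page
            return messages

# Fake paged responses to exercise the loop without hitting the API.
PAGES = [
    {"messages": [{"ts": "3"}, {"ts": "2"}],
     "response_metadata": {"next_cursor": "abc"}},
    {"messages": [{"ts": "1"}],
     "response_metadata": {"next_cursor": ""}},
]

def fake_fetch(channel, limit, cursor):
    return PAGES[0] if cursor is None else PAGES[1]

history = backfill_channel(fake_fetch, "C12345")  # 3 messages, newest first
```

In production the fetch function would also need to honor Retry-After headers when the API returns HTTP 429.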
Pros:
- Complete, exhaustive data access (not search-relevance filtered)
- Can build local index for arbitrary queries
- Full control over data processing and retention
- Works on all paid plans
Cons:
- Must iterate channel-by-channel — slow for large workspaces
- Must build and maintain a data pipeline (ingest, store, index, keep in sync)
- Rate limits make initial backfill slow (50K messages/hour at best)
- Ongoing sync needed to stay current
- More engineering effort (build + maintain)
Option C: Hybrid — slacrawl or Custom Local Mirror
What: Tools like slacrawl mirror Slack workspace data into local SQLite with full-text search.
Key features (slacrawl):
- Local SQLite storage with FTS5 full-text search
- Incremental API sync (only new messages)
- Optional Socket Mode live tailing (real-time updates)
- Multi-workspace support
- Read-only SQL access for ad hoc analysis
Current limitations (slacrawl):
- No DMs or MPIMs yet
- No attachment downloads
- macOS only for desktop discovery
- Relatively new project (Go 1.25+)
Pros:
- Local queryable database — agent can run arbitrary SQL
- Incremental sync keeps it current
- Full-text search without API rate limit concerns (local)
- Could serve as the “Slack data layer” the agent queries
Cons:
- Another tool to deploy and maintain
- DM/MPIM gap is significant for a COO (many important conversations happen in DMs)
- Data is stored locally (may have compliance implications for Eden)
- Still uses Slack API under the hood (same rate limits for initial sync)
Option D: Slack Discovery API (Enterprise Grid Only)
What: Compliance-grade API for full data export including DMs and private channels.
Requirements:
- Enterprise Grid plan only (custom pricing, typically $32-45/user/month)
- Legal justification required (compliance, litigation, eDiscovery)
- Slack reviews and may deny access
Verdict: Overkill and likely inaccessible for the Command Center use case. This is for legal/compliance teams, not operational intelligence.
Slack recommendation matrix
| Requirement | Option A (MCP/RTS) | Option B (History API) | Option C (Local Mirror) |
|---|---|---|---|
| No data pipeline needed | Yes | No | Partially (sync pipeline) |
| Real-time data | Yes | Lag (sync interval) | Near real-time (Socket Mode) |
| Semantic search | Yes (Business+) | Build your own | Build your own |
| Complete history | No (relevance-filtered) | Yes | Yes (public channels) |
| DM access | Yes (with user token) | Yes (with scopes) | No (slacrawl limitation) |
| Engineering effort | Low (MCP integration) | High (full pipeline) | Medium (deploy + maintain) |
| Ongoing maintenance | None (Slack-hosted) | High | Medium |
| Subscription requirement | Pro (keyword) / Business+ (semantic) | Pro | Pro |
Slack verdict
Recommended approach for Eden Command Center: Option A (Slack MCP + RTS API) as the primary data layer, with Option B as a fallback for specific use cases that need exhaustive history.
Rationale:
- The COO wants to query Slack, not build a data warehouse. RTS semantic search + channel history reading covers most queries.
- Zero engineering effort for the data layer — focus effort on the agent/LLM layer instead.
- The MCP server is already available as an integration point.
- Business+ plan ($12.50/user/month) unlocks semantic search, which is the key differentiator for a “query anything” experience.
When to add a pipeline (Option B/C):
- If the COO needs analytics over historical trends (message volume, response times, sentiment over time)
- If search-relevance filtering misses too many important results
- If the agent needs to correlate Slack data with non-Slack data in a database
Part 3: Cross-Platform Project Visibility (GWS + Slack Combined)
The real power of the Command Center is combining data from both systems to show movement across a project. Here’s how the agent stitches them together.
The query pattern
When the COO asks “What’s the status of Project X?” the agent:
- Slack RTS search — "Project X" across all channels → recent discussions, decisions, blockers
- Drive search — gws drive files list with query name contains 'Project X' → find the project folder and key docs
- Drive Activity query — gws driveactivity:v2 activity query on the project folder → who edited what, when, sharing changes
- Drive comments — gws drive comments list on key docs → open feedback, unresolved items
- Calendar search — gws calendar events list with q: 'Project X' → upcoming and past meetings
- Gmail search — gws gmail users messages list with q: 'Project X' → email threads with clients/vendors
- Synthesize — LLM combines all results into a unified project status
Example: “What’s happening with the rebrand project?”
Agent plan:
1. Slack RTS: query "rebrand" → 12 messages across #marketing, #design, DMs
- Designer posted mockups Monday, 3 replies with feedback
- PM flagged timeline risk in #marketing yesterday
- COO's DM with agency: "need final assets by Friday"
2. Drive Activity: query rebrand folder → 8 activities this week
- Brand guidelines doc edited by 3 people (Mon, Tue, Wed)
- New file uploaded: "Final Logo Options v3.pdf" (Wed)
- SOW shared with external agency (Tue)
3. Drive Comments: on brand guidelines doc → 2 unresolved comments
- VP Marketing: "Need to align color palette with packaging"
- Designer: "Waiting on font license confirmation"
4. Calendar: 2 upcoming meetings
- "Rebrand Review" tomorrow 2pm (6 attendees)
- "Agency Check-in" Friday 10am
5. Gmail: 4 threads this week
- Agency sent invoice for Phase 1
- Legal reviewed trademark filing
Synthesized answer:
"The rebrand is active — brand guidelines doc was edited by 3 team members
this week and new logo options (v3) were uploaded Wednesday. There are 2
unresolved comments on the guidelines doc: color palette alignment and
font licensing. Timeline risk was flagged in #marketing yesterday. You
have a Rebrand Review meeting tomorrow at 2pm. The agency sent a Phase 1
invoice and legal is reviewing the trademark filing."
Project identification strategies
The agent needs to know what constitutes a “project” across both systems. Options:
| Strategy | How it works | Pros | Cons |
|---|---|---|---|
| Keyword/name matching | Search both systems for the project name | Simple; works immediately | Ambiguous names; misses renamed projects |
| Folder-anchored | Map each project to a Drive folder ID; activity query on folder covers all files | Precise for docs; recursive activity query | Doesn’t cover Slack; requires folder mapping |
| Channel-anchored | Map each project to a Slack channel; combine with Drive folder | Precise for both; most conversations happen in a channel | Not all projects have dedicated channels |
| Tag/label convention | Use a consistent naming convention or tag in both systems | Scalable; searchable | Requires discipline from the org |
| Project registry | Maintain a small config mapping project names to Drive folder IDs + Slack channel IDs | Most precise; agent knows exactly where to look | Requires setup and maintenance |
Recommendation: Start with keyword matching (zero setup) and evolve to a project registry if the COO uses the Command Center regularly. The registry could be a simple Google Sheet that maps project names to folder IDs and channel IDs.
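The registry lookup with keyword fallback can be sketched as follows (all project names and IDs are invented; in practice the registry would be loaded from the Google Sheet):

```python
# Hypothetical registry: project name -> anchors in Drive and Slack.
REGISTRY = {
    "rebrand": {"drive_folder": "FOLDER_ID_1", "slack_channel": "C_MARKETING"},
    "q2-planning": {"drive_folder": "FOLDER_ID_2", "slack_channel": "C_OPS"},
}

def resolve_project(query):
    """Return registry anchors if the query names a known project,
    otherwise fall back to plain keyword search across both systems."""
    q = query.lower()
    for name, anchors in REGISTRY.items():
        if name in q or name.replace("-", " ") in q:
            return {"strategy": "registry", "project": name, **anchors}
    return {"strategy": "keyword", "query": query}

hit = resolve_project("What's happening with the rebrand project?")
# hit["strategy"] == "registry"; the agent now knows exactly which
# folder to run the Drive Activity query on and which channel to search.
```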
Cross-platform timeline view
By combining timestamps from both systems, the agent can construct a unified timeline:
Wed 3/25 10:32am [Slack #design] Designer posted final mockups
Wed 3/25 10:45am [Drive] "Final Logo v3.pdf" uploaded to /Rebrand/
Wed 3/25 11:00am [Drive comment] VP Marketing: "align with packaging"
Wed 3/25 2:15pm [Slack #marketing] PM: "timeline is tight, need to ship by EOW"
Wed 3/25 3:00pm [Gmail] Agency invoice received for Phase 1
Thu 3/26 9:00am [Calendar] Rebrand Review meeting (6 attendees)
Both GWS CLI JSON responses and Slack API responses include ISO timestamps, so merging and sorting is straightforward for the agent.
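One caveat on that merge: Slack message ts values are epoch-seconds strings while GWS responses use ISO 8601, so both must be normalized before sorting. A sketch (the events below are illustrative):

```python
from datetime import datetime, timezone

def to_utc(event):
    """Normalize a timestamp to an aware UTC datetime.
    Slack uses epoch-seconds strings; GWS uses ISO 8601."""
    if event["source"] == "slack":
        return datetime.fromtimestamp(float(event["ts"]), tz=timezone.utc)
    return datetime.fromisoformat(event["ts"].replace("Z", "+00:00"))

def unified_timeline(events):
    return sorted(events, key=to_utc)

# Invented events mixing the two timestamp formats.
events = [
    {"source": "drive", "ts": "2026-03-25T14:45:00Z", "what": "Logo v3 uploaded"},
    {"source": "slack", "ts": "1774434720.000100", "what": "Designer posted mockups"},
    {"source": "gmail", "ts": "2026-03-25T19:00:00Z", "what": "Agency invoice"},
]
timeline = unified_timeline(events)
# Sorted order: the Slack post, then the Drive upload, then the Gmail invoice.
```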
Part 4: Testing Plan — GWS CLI on Brainforge Workspace (includes comments + activity)
Prerequisites
- Confirm gws is installed (we already have a setup guide at knowledge/standards/03-knowledge/engineering/setup/google-workspace-cli-setup.md)
- Authenticate with all scopes: gws auth login -s gmail,calendar,sheets,drive,docs,admin

Test matrix
| # | Test | Command | What it validates |
|---|---|---|---|
| 1 | Auth status | gws auth status | Credentials are valid and scoped |
| 2 | Gmail search | gws gmail users messages list --params '{"userId":"me","q":"from:eden subject:standup","maxResults":5}' | Can search Gmail with query syntax |
| 3 | Gmail read | gws gmail users messages get --params '{"userId":"me","id":"<MSG_ID>","format":"full"}' | Can read full email content including body |
| 4 | Calendar list | gws calendar events list --params '{"calendarId":"primary","timeMin":"2026-03-17T00:00:00Z","timeMax":"2026-03-25T23:59:59Z","singleEvents":true}' | Can list upcoming events with details |
| 5 | Drive search | gws drive files list --params '{"q":"name contains '\''Eden'\''","pageSize":10}' | Can search Drive by filename |
| 6 | Drive search by type | gws drive files list --params '{"q":"mimeType='\''application/vnd.google-apps.spreadsheet'\''","pageSize":5}' | Can filter by file type |
| 7 | Sheets read | gws sheets spreadsheets values get --params '{"spreadsheetId":"<SHEET_ID>","range":"Sheet1!A1:E10"}' | Can read spreadsheet data |
| 8 | Sheets metadata | gws sheets spreadsheets get --params '{"spreadsheetId":"<SHEET_ID>"}' | Can get tab names and structure |
| 9 | Docs read | gws docs documents get --params '{"documentId":"<DOC_ID>"}' | Can read Google Doc content |
| 10 | Admin user list | gws admin users list --params '{"customer":"my_customer","maxResults":10}' | Can list org users (requires admin scope) |
| 11 | Auto-pagination | gws gmail users messages list --params '{"userId":"me","maxResults":500}' --page-all | Pagination works for large result sets |
| 12 | Schema inspect | gws schema gmail.users.messages.list | Agent can discover API shape at runtime |
| 13 | Dry run | gws gmail users messages list --params '{"userId":"me","maxResults":5}' --dry-run | Safe preview mode works |
| 14 | Pipe to jq | gws drive files list --params '{"pageSize":3}' \| jq '.files[].name' | JSON output composes with standard tools |
| 15 | File comments | gws drive comments list --params '{"fileId":"<FILE_ID>","fields":"*"}' | Can read comments, authors, resolved status |
| 16 | Comment replies | gws drive replies list --params '{"fileId":"<FILE_ID>","commentId":"<COMMENT_ID>"}' | Can read threaded comment replies |
| 17 | File revisions | gws drive revisions list --params '{"fileId":"<FILE_ID>","fields":"*"}' | Can see who edited a file and when |
| 18 | Drive Activity (file) | gws driveactivity:v2 activity query --json '{"itemName":"items/<FILE_ID>","pageSize":10}' | Activity API works via dynamic discovery |
| 19 | Drive Activity (folder) | gws driveactivity:v2 activity query --json '{"ancestorName":"items/<FOLDER_ID>","pageSize":20}' | Recursive activity across all files in folder |
| 20 | Activity time filter | gws driveactivity:v2 activity query --json '{"itemName":"items/<FILE_ID>","filter":"time >= \"2026-03-01T00:00:00Z\""}' | Time-range filtering works |
| 21 | Admin audit logs | gws admin-reports activities list --params '{"userKey":"all","applicationName":"drive","maxResults":10}' | Org-wide Drive audit log accessible |
Testing the agent loop (simulate Command Center)
After confirming individual commands work, test the full agent loop:
- Ask a question: “What were the key topics in emails from Eden team members this week?”
- Agent plan:
  - gws gmail users messages list with query after:2026/03/17 from:@edenhealth.com
  - For each message, gws gmail users messages get to read content
  - Pipe all message bodies to the LLM for summarization
- Validate: Does the summary accurately capture the email content?
Repeat for:
- “What meetings do I have tomorrow and who’s attending?”
- “Find the latest Q1 financial report on Drive”
- “What did [person] share in our last standup doc?”
Part 5: Testing Plan — Slack RTS API / MCP
Option A: Slack MCP in Cursor (quickest to test)
Add to .cursor/mcp.json (project or global):
{
"mcpServers": {
"slack": {
"url": "https://mcp.slack.com/mcp",
"auth": {
"CLIENT_ID": "3660753192626.8903469228982"
}
}
}
}

Then test in Cursor chat:
- “Search Slack for recent messages about Eden”
- “What’s the latest in platform channel?”
- “Find messages from Luke about dashboards”
Option B: Create a Slack app for RTS API testing
- Go to api.slack.com/apps → Create New App
- Add OAuth scopes: search:read.public, search:read.private, search:read.im, search:read.mpim, search:read.files, search:read.users, channels:history, groups:history, im:history, mpim:history
- Install to workspace, get a user token (xoxp-...)
- Test the RTS endpoint:
curl -X POST https://slack.com/api/assistant.search.context \
-H "Authorization: Bearer xoxp-YOUR-TOKEN" \
-H "Content-Type: application/json" \
-d '{"query": "Eden project updates", "content_types": ["messages"], "channel_types": ["public_channel", "private_channel"], "limit": 10}'

- Test channel history:
curl -X POST https://slack.com/api/conversations.history \
-H "Authorization: Bearer xoxp-YOUR-TOKEN" \
-H "Content-Type: application/json" \
-d '{"channel": "CHANNEL_ID", "limit": 20}'

What to validate
| # | Test | What it validates |
|---|---|---|
| 1 | Search public channels | RTS API returns relevant results from public channels |
| 2 | Search private channels | Private channel access works with user token |
| 3 | Search DMs | DM access works (critical for COO use case) |
| 4 | Semantic vs keyword | Compare “what’s happening with the Eden project” vs “Eden project” |
| 5 | Time-filtered search | after/before params filter correctly |
| 6 | Thread context | conversations.replies returns full thread |
| 7 | OR queries | Multi-topic search works (“budget OR forecast OR revenue”) |
| 8 | Rate limit behavior | How many searches can we sustain per minute |
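For test 8, a simple client-side pacer makes it easy to probe sustained throughput without tripping the limit. A minimal sketch; this is purely illustrative throttling, not a Slack SDK feature:

```python
class RatePacer:
    """Space calls evenly to stay under a per-minute tier limit."""

    def __init__(self, per_minute):
        self.interval = 60.0 / per_minute  # seconds between calls
        self.next_allowed = 0.0

    def delay_for(self, now):
        """Return how long to sleep before the next call at time `now`,
        and reserve the following slot."""
        wait = max(0.0, self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval
        return wait

pacer = RatePacer(per_minute=50)  # Tier 3-style budget
# First call at t=0 goes immediately; a burst right after must wait ~1.2s.
```

The caller would time.sleep(pacer.delay_for(time.monotonic())) before each request, and still back off on any HTTP 429 the API returns.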
Part 6: Architecture Recommendation
Proposed stack for Eden Command Center
┌─────────────────────────────────────────────────────┐
│ COO Interface │
│ (Chat UI / Slack Bot / Web Dashboard) │
└─────────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ Agent / LLM Layer │
│ (Orchestrates queries, synthesizes answers) │
│ - Receives COO question │
│ - Plans which data sources to query │
│ - Calls GWS CLI and/or Slack API │
│ - Synthesizes response with LLM │
└────────┬─────────────────────────────┬──────────────┘
│ │
▼ ▼
┌────────────────────┐ ┌─────────────────────────┐
│ GWS CLI Layer │ │ Slack API Layer │
│ │ │ │
│ Service Account │ │ RTS API (search) │
│ + Domain-Wide │ │ + conversations.history │
│ Delegation │ │ + conversations.replies │
│ │ │ │
│ Gmail, Calendar, │ │ User token (OAuth) │
│ Drive, Sheets, │ │ for private data │
│ Docs, Admin, etc. │ │ │
└────────────────────┘ └─────────────────────────┘
No data pipeline needed (Phase 1)
Both data sources support real-time querying. The agent calls gws or Slack API on demand, gets results, and synthesizes an answer. No ETL, no warehouse, no sync jobs.
When to add a pipeline (Phase 2, if needed)
Consider adding a lightweight data pipeline only if:
- The COO needs historical trend analysis (message volume over time, response time metrics)
- Cross-source correlation is needed (join Slack threads with Calendar events and Drive files)
- Latency is an issue (pre-index for sub-second queries)
- Full-text search over all document content (not just metadata) is required
If a pipeline is needed, consider:
- Dagster (already in the repo at apps/dagster-pipelines/) for orchestration
- Supabase (already in use) for storage
- Incremental sync using Slack’s conversations.history pagination + GWS CLI’s --page-all
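The incremental-sync idea can be sketched with conversations.history’s oldest parameter: persist the newest ts seen per channel and request only newer messages on the next run. Storage is a plain dict here (in the proposed stack it could live in Supabase), and the fetch function is a stub:

```python
def incremental_sync(fetch, state, channel_id):
    """Fetch only messages newer than the last ts we stored for this channel.
    `fetch` stands in for conversations.history with an `oldest` param."""
    oldest = state.get(channel_id, "0")
    new_messages = fetch(channel=channel_id, oldest=oldest)
    if new_messages:
        # Slack ts strings compare correctly once converted to float.
        state[channel_id] = max(new_messages, key=lambda m: float(m["ts"]))["ts"]
    return new_messages

# Stubbed fetch over a fixed message set, filtered like the real API.
def fake_fetch(channel, oldest):
    msgs = [{"ts": "100.000100"}, {"ts": "200.000200"}]
    return [m for m in msgs if float(m["ts"]) > float(oldest)]

state = {}
first = incremental_sync(fake_fetch, state, "C123")   # initial backfill: both
second = incremental_sync(fake_fetch, state, "C123")  # nothing new: empty
```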
Part 7: Eden Subscription Requirements
| Service | Minimum plan | Recommended plan | Why |
|---|---|---|---|
| Google Workspace | Business Starter | Business Standard+ | Standard gives Vault, audit logs, Workspace Events API |
| Slack | Pro ($7.25/user) | Business+ ($12.50/user) | Semantic search in RTS API (keyword-only on Pro) |
Key question for Eden: What Slack plan are they on today? If Pro, upgrading to Business+ unlocks semantic search, which is a major capability upgrade for the Command Center.
Part 8: Delivery Surface — Where the COO Actually Uses This
There are four realistic options for how the COO interacts with the Command Center, ranging from “lives inside the Gemini they already use” to “we build a custom UI.”
Option A: Gemini Enterprise Agent Designer (native Google)
What: Google’s Gemini Enterprise (formerly Agentspace) includes a no-code/low-code Agent Designer that lets you build custom agents inside the Gemini web app. You connect data sources, write instructions, and the agent shows up alongside the built-in Gemini tools.
How it would work:
- Eden subscribes to Gemini Enterprise ($30/user/month, or just the COO seat)
- We create a “Command Center” agent via Agent Designer (no-code UI or flow builder)
- Connect Google Workspace data sources (Gmail, Drive, Calendar — GA connectors)
- Connect Slack via the federated connector (currently private preview — requires Google account team approval)
- Optionally connect Linear (public preview connector exists)
- COO opens gemini.google.com → selects the Command Center agent → asks questions
Agent Designer capabilities:
- Natural language agent creation (describe what it should do)
- Visual flow builder for multi-step agents with subagents
- Connect to Google and third-party data sources
- Scheduled agent executions (e.g., “run a daily project status digest at 8am”)
- Preview/test environment before publishing
- Admin controls: enable/disable, share with specific users
Available connectors (relevant to Command Center):
| Connector | Status | Data access model |
|---|---|---|
| Google Workspace (Gmail, Drive, Calendar, etc.) | GA | Native — first-party data stores |
| Slack | Private preview | Federation (real-time, no data copy) |
| Linear | Public preview | Federation |
| Jira Cloud | GA | Ingestion or federation |
| Notion | Public preview | Federation |
| GitHub | Public preview | Federation |
| Confluence | GA | Ingestion or federation |
Critical limitation: No service account / domain-wide delegation support.
Google’s own error documentation states explicitly:
403 Permission Denied — “Searching using service account credentials isn’t supported for Google Workspace data stores.”
Gemini Enterprise uses federated search — it queries Google Drive, Gmail, and Calendar in real time using the logged-in user’s own credentials. The COO would only see files, emails, and events that they personally have access to. There is no way to configure a service account with domain-wide delegation to let the agent query other users’ data.
This means:
- The COO can search their own Drive files, Gmail inbox, and Calendar
- They can search shared drives and files explicitly shared with them
- They cannot see another employee’s private Drive files, emails, or calendar unless those are shared with the COO
- No impersonation: the agent cannot “be” another user to check their inbox or files
For a Command Center that’s supposed to see movement across the entire organization, this is a fundamental constraint. If the COO asks “what emails did the VP of Sales send to client X?” or “what docs did the marketing team edit this week that I haven’t seen?” — Agent Designer cannot answer those questions.
Pros:
- Lives inside Gemini — the COO doesn’t need to learn a new tool
- Google handles the LLM, hosting, security, scaling
- No-code agent builder — fast to prototype
- Admin-managed: IT controls who can see/use the agent
- Scheduled executions for daily digests
Cons:
- No cross-user data access — agent only sees what the COO can already see. No service account or DWD support for Workspace data stores.
- Slack connector is private preview — Eden would need to request access from Google.
- Gemini Enterprise costs $30/user/month (on top of Workspace subscription)
- Agent Designer is still fairly new — may have limitations on complex multi-step reasoning
- Less control over the LLM (can’t swap to Claude/GPT; locked to Gemini models)
- Cannot run GWS CLI commands directly — relies on Google’s built-in connectors
- The “command center” experience is limited to what Agent Designer supports (no custom UI, charts, dashboards)
- No access to Drive Activity API, file comments, or revision history — only file search/read and Gmail search/send
Verdict: Not sufficient for a true org-wide Command Center. Agent Designer is useful if the COO only needs to query their own data (inbox, personal Drive, calendar). But if the goal is cross-org visibility — seeing what’s happening across all teams, all files, all emails — this option cannot deliver that. The user-credential model is a hard wall.
Option B: Gemini Gems (lightweight, limited)
What: Gems are custom Gemini personas with instructions and up to 10 uploaded files. Available to anyone with Gemini for Workspace.
How it would work:
- Create a Gem called “Command Center” with custom instructions describing the COO role
- Upload key documents (org chart, project list, SOW docs) as knowledge files
- COO uses the Gem in gemini.google.com for context-aware conversations
Limitations:
- No live API access — Gems can’t query Gmail, Drive, Slack, or Calendar in real time
- Limited to 10 files, 100MB each
- No scheduled executions
- No third-party data source connections
- Essentially a better system prompt, not an agent
Verdict: Not sufficient for the Command Center. Gems are good for “answer questions about these 10 documents” but cannot query live data across Workspace and Slack.
Option C: Custom Agent Registered in Gemini Enterprise (ADK / A2A)
What: Build a custom agent using Google’s Agent Development Kit (ADK) or the Agent-to-Agent (A2A) protocol, host it on Vertex AI Agent Engine, and register it in Gemini Enterprise so it shows up in the COO’s Gemini web app. Requires Gemini Enterprise subscription.
How it would work:
- We build a Python agent using the ADK that:
- Calls GWS CLI (or Google APIs directly) with service account + DWD for cross-org Workspace data
- Calls Slack RTS API for Slack data
- Uses Drive Activity API for file movement tracking
- Orchestrates multi-source queries and synthesizes answers
- Deploy the agent to Vertex AI Agent Engine
- Register it in Gemini Enterprise so it appears in the agent gallery
- COO opens gemini.google.com → selects Command Center agent → queries as normal
Pros:
- Lives inside Gemini — same UX as Option A but with full data access
- Full control over the agent logic, data sources, and orchestration
- Service account + DWD gives true cross-org visibility (unlike Agent Designer)
- Can use GWS CLI + Slack API however we want (no dependency on private-preview connectors)
- Can include custom tools: cross-platform timeline, project registry, Drive Activity queries
- Can add data sources Google doesn’t have connectors for
Cons:
- Requires Gemini Enterprise subscription ($30/user/month)
- More engineering effort (build + deploy + maintain the agent on Vertex AI)
- Requires GCP project setup, Vertex AI Agent Engine
- Still locked to Gemini models for the final response (but the agent logic can be custom)
Verdict: Best option if Eden has or is willing to add Gemini Enterprise. COO uses Gemini natively, and we have full control over data access with DWD. Avoids all the limitations of Agent Designer (Option A).
Option D: Custom ADK Agent as a Google Chat App (no Gemini Enterprise needed)
What: Same ADK agent as Option C, but delivered as a Google Chat app instead of registering in Gemini Enterprise. The COO messages the bot in Google Chat like messaging a colleague. This works on standard Google Workspace (Business or Enterprise) — Gemini Enterprise is NOT required.
Google’s own developer docs describe this exact pattern: a Google Chat app backed by an ADK agent on Vertex AI, deployed via Apps Script or HTTP endpoints. See Build a Google Chat app with an ADK AI agent.
How it would work:
- We build the same Python ADK agent as Option C (GWS CLI + DWD + Slack RTS API)
- Deploy the agent to Vertex AI Agent Engine
- Create a Google Chat app (via Apps Script or HTTP endpoint) that fronts the agent
- COO opens Google Chat → messages the “Command Center” bot → gets answers
Architecture:
COO in Google Chat
│
▼
Google Chat App (Apps Script or HTTP)
│
▼
ADK Agent on Vertex AI Agent Engine
│
├─── GWS CLI (service account + DWD)
│ └── Gmail, Drive, Calendar, Activity, Comments, Admin
│
└─── Slack RTS API (user token)
└── Messages, files, channels, threads
Pros:
- No Gemini Enterprise subscription needed — works on standard Workspace with Gemini
- Lives inside Google Chat — the COO already uses this daily
- Full control over the agent logic, data sources, and orchestration
- Service account + DWD gives true cross-org visibility
- Can use GWS CLI + Slack API however we want
- Can include custom tools: cross-platform timeline, project registry, Drive Activity queries
- Google Chat supports rich messages (cards, buttons, formatted text)
- Same agent code as Option C — can later register in Gemini Enterprise if Eden adds it
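As a sketch, a project-status reply using those rich messages would be a `cardsV2` payload (field names follow the public Chat API; the card content is invented):

```typescript
// Illustrative Google Chat cardsV2 payload the Command Center bot could return.
// Field names follow the public Chat API; the values shown are made up.
const reply = {
  cardsV2: [
    {
      cardId: "command-center-status",
      card: {
        header: { title: "Project X: weekly status" },
        sections: [
          {
            widgets: [
              // Text summary synthesized by the agent
              { textParagraph: { text: "12 Drive edits, 48 Slack messages, 3 meetings" } },
              // Button linking out to a fuller view (hypothetical URL)
              {
                buttonList: {
                  buttons: [{ text: "Open timeline", onClick: { openLink: { url: "https://example.com/timeline" } } }],
                },
              },
            ],
          },
        ],
      },
    },
  ],
};
```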
Cons:
- More engineering effort than Agent Designer (build + deploy + maintain)
- Requires GCP project with Vertex AI enabled (billing)
- Chat app UI is conversational only (no dashboards/charts in chat)
- Response formatting limited to Google Chat card markup
- Agent logic runs on Vertex AI (GCP costs for compute + Gemini API calls)
Verdict: The best option if Eden has Workspace with Gemini but NOT Gemini Enterprise. COO stays in a tool they already use (Google Chat), gets full cross-org data access via DWD, and we have complete control over the agent. The agent code is identical to Option C — the only difference is the delivery surface (Google Chat vs. Gemini web app). If Eden adds Gemini Enterprise later, the same agent can be registered there too with zero code changes.
Option E: Apps Script Sidebar in Google Workspace (no Gemini Enterprise needed)
What: Build a Google Workspace Add-on using Apps Script that adds a sidebar to Gmail/Docs/Sheets with a custom chat interface powered by Gemini API (or any LLM).
How it would work:
- Build an Apps Script add-on with a sidebar UI
- Sidebar calls our backend (or directly calls Gemini API via Apps Script)
- Backend orchestrates GWS CLI + Slack API queries
- Results rendered in the sidebar next to whatever doc/email the COO is viewing
Recent development: Google recently introduced A2UI (Agent-to-User Interface) protocol support in Apps Script, which lets agents render rich, interactive UIs (forms, lists, buttons) inside Google Sheets sidebars. This is a paradigm shift from text-only chat to actionable agent interfaces.
Pros:
- Lives inside Google Workspace (sidebar in Gmail, Docs, Sheets)
- Context-aware: if the COO is in a doc, the agent knows which doc
- No additional subscription beyond Workspace + Gemini API usage
- Can use any LLM (Gemini API, Claude, GPT — via Apps Script HTTP fetch)
- Interactive UI elements via A2UI protocol
Cons:
- Apps Script has execution limits (6 min/execution, 90 min/day for consumer; higher for Workspace)
- UI is limited to sidebar form factors
- More fragile than a standalone app (Apps Script ecosystem quirks)
- Harder to build rich visualizations (no charts, dashboards in a sidebar)
Verdict: Good if the COO wants the agent “right there” while working in Docs/Gmail. But limited UI and Apps Script constraints make it less suitable for a full Command Center experience.
Option F: Standalone Custom Agent (separate UI)
What: Build a dedicated web app (e.g., in the Brainforge platform or a standalone Next.js app) with a chat interface and dashboard for the COO. Uses the same core agent as Options C and D — the only addition is a richer frontend.
How it would work:
- Same ADK agent from C/D handles all data access and orchestration
- Wrap it with a web UI: chat + dashboard components
- Agent can use any LLM (Claude, GPT, Gemini — our choice, since we’re not constrained by Vertex/Chat)
- Add dashboards, charts, project timelines, activity feeds as needed
- Deploy as a web app the COO bookmarks or accesses via a subdomain
Pros:
- Full control over everything: UI, UX, LLM, data sources, visualizations
- Can build rich dashboards, charts, timeline views, project health scores
- Not limited by Google’s agent framework or sidebar constraints
- Can use the best LLM for each task (Claude for reasoning, Gemini for Workspace queries)
- Can add features Google will never build (cross-platform project registry, trend analysis, alerts)
- We already have the Next.js platform and could extend it
- Shares the same agent core as C and D — not a rewrite, just a new frontend
Cons:
- COO has to use a separate app (not inside Gemini or Gmail)
- More engineering effort (UI + backend + deployment + auth) — but the agent itself is already built in Phase 1
- We own hosting, scaling, security
- Requires the COO to adopt a new tool
Verdict: Most powerful option with the best long-term potential. Because C, D, and F share the same agent code, the “engineering effort” for F is really just the frontend — the hard part (data access, orchestration, synthesis) is already done.
Recommendation matrix
| Criterion | A: Agent Designer | B: Gems | C: ADK in Gemini | D: ADK in Chat | E: Apps Script | F: Custom UI |
|---|---|---|---|---|---|---|
| Where it lives | Gemini Enterprise | gemini.google.com | Gemini Enterprise | Google Chat | GWS sidebar | Standalone web |
| Requires Gemini Enterprise | Yes ($30/user) | No | Yes ($30/user) | No | No | No |
| Cross-user data (DWD) | No | No | Yes | Yes | Yes | Yes |
| Slack data access | Private preview | No | Yes (we build) | Yes (we build) | Yes (we build) | Yes (we build) |
| GWS data access | User’s own only | Files only | Any user (DWD) | Any user (DWD) | Any user (DWD) | Any user (DWD) |
| Drive Activity / comments | No | No | Yes | Yes | Yes | Yes |
| Shares agent core | No | No | Yes (C/D/F) | Yes (C/D/F) | No | Yes (C/D/F) |
| Engineering effort | Low | None | Medium | Medium | Medium | Medium + UI |
| UI customization | Limited | None | Limited | Chat cards | Sidebar only | Unlimited |
| LLM choice | Gemini only | Gemini only | Gemini (Vertex) | Gemini (Vertex) | Any | Any |
| Dashboards/charts | No | No | No | No | Limited | Yes |
| Cost (monthly) | $30/user + WS | WS only | $30/user + WS + GCP | WS + GCP | WS + API usage | Hosting + API |
| Time to prototype | 1-2 weeks | 1 day | 3-4 weeks | 3-4 weeks | 2-3 weeks | 4-6 weeks |
| Upgrade path | — | — | Add D and/or F | Add C and/or F | — | Add C and/or D |
Recommended path
Options C, D, and F share the same core agent. We build the agent once using Mastra (TypeScript) with GWS CLI + DWD, Slack RTS API, Drive Activity, orchestration, and synthesis — deployed on Cloud Run in Eden’s GCP project (BAA-covered), with LLM calls routed to Vertex AI Gemini API. The only thing that changes between delivery surfaces is the frontend:
| Option | Frontend | Agent code |
|---|---|---|
| C | Gemini Enterprise web app | Shared |
| D | Google Chat bot | Shared |
| F | Standalone web app (Next.js) | Shared |
This means every phase builds on the last — no rewrites, no throwaway work.
Phase 1: Pick C or D based on Eden’s subscription
- If Eden has Gemini Enterprise → Option C. COO gets the agent inside Gemini. Optionally add D (Google Chat) as a second access point at no extra cost.
- If Eden has Workspace with Gemini only (no Enterprise) → Option D. Same agent, delivered as a Google Chat bot. No Gemini Enterprise needed. If Eden adds Enterprise later, register the same agent there (Option C) with zero code changes.
Phase 2 (if the Command Center grows): Add Option F — Custom UI
When the COO wants dashboards, trend views, project health scores, or anything beyond chat — add a standalone web frontend. The agent core is already built. The engineering effort for F is the UI layer, not the data/orchestration layer.
Option A (Agent Designer) is NOT recommended for the core Command Center because (1) it requires Gemini Enterprise, (2) the user-credential model means the agent can only see what the COO already has access to (no DWD), and (3) the Slack connector is private preview. It could be a complement for personal productivity but cannot deliver org-wide visibility.
Hybrid approach: Use C or D as the primary Command Center agent (org-wide data), and the built-in Gemini Workspace features (Gemini in Gmail sidebar, Gemini in Docs) for the COO’s personal productivity. They’re complementary, not competing.
Open Questions
- Eden’s current Slack plan — Pro or Business+? Determines whether semantic search is available.
- Eden’s Google Workspace plan — Affects available APIs (Admin SDK, Vault, Events).
- Does Eden already have Gemini Enterprise? — If yes, Agent Designer and custom agent registration are immediately available. If no, is the COO willing to add it ($30/user/month)?
- Scope of “entire Slack” — Does the COO need DMs and private channels, or just public + specific private channels?
- Authentication model — Should the agent authenticate as the COO (sees what COO sees) or as a service/admin account (sees everything)?
- Data residency/compliance — ✅ Resolved: Eden has a BAA with Google. All compute and LLM calls must stay within Eden’s GCP project. Architecture uses Cloud Run + Vertex AI Gemini API.
- Latency expectations — Is 5-15 seconds per query acceptable, or does the COO expect sub-second responses? (Real-time API queries take a few seconds; a pre-indexed pipeline would be faster.)
- Chat-only vs. dashboard — Does the COO want a chat interface (“ask questions, get answers”) or does he also want visual dashboards (project timelines, activity heatmaps, trend charts)? Chat-only can live in Gemini; dashboards require a custom UI.
- GCP project access — ✅ Resolved: Eden has a GCP project with BAA. We deploy directly to their project (Cloud Run + Vertex AI Gemini API).
Next Steps
- Test GWS CLI on Brainforge workspace — Run the test matrix above to validate all capabilities
- Test Slack MCP in Cursor — Validate search quality and rate limits on our own workspace
- Confirm Eden’s Slack and Google Workspace plans — Determines API availability
- Prototype the agent loop — Build a minimal agent that takes a COO question, plans queries, calls GWS CLI + Slack API, and returns a synthesized answer
- Draft SOW section — Once capabilities are confirmed, scope the Command Center deliverable
§5 Technical Approach — Eden Command Center
Following the structure from `knowledge/delivery/03-project-lifecycle/sow-project-plan-template.md` §5. One subsection per deliverable project. Aligned to the Eden AI Project Plan in Notion.
Data Access + Chat Integration
Parent initiative: Command Center
Service line: AI / Data
Start: March 23 → Target: April 20
Milestones: M1 (Apr 6) → M2 (Apr 13) → M3 (Apr 20)
Relevant playbook(s):
- This spike document — validated data access patterns for GWS CLI and Slack APIs (Parts 1–6)
- Options C/D/F share the same agent core; the delivery surface is determined by Eden’s subscription (see Part 8)
Target process (step-by-step)
Purpose: Build privacy-first data access across Eden’s Google Workspace and Slack, with PII redaction and identity anonymization, delivered through an agent the COO can chat with.
Step 1 — Source authentication (Week 1)
Provision a GCP service account with Domain-Wide Delegation (DWD). OAuth scopes (deliberately minimal, read-only):
- `gmail.readonly` — thread metadata (subject, sender domain, recipient count, timestamps). No message bodies.
- `drive.metadata.readonly` — file metadata, revision counts, comment resolution rate. No file content.
- `calendar.readonly` — event metadata (attendee count, duration, recurrence). No meeting notes.
- `drive.activity.readonly` — Drive Activity API v2 for audit trail (who did what, when)
- `admin.directory.user.readonly` — user directory for identity resolution
Create a Slack app for Eden’s workspace with `search:read` (RTS API), `channels:history`, `channels:read`, `users:read`, `groups:read`, and `groups:history` scopes. Extract metadata only: channel volume, thread reply counts, reaction counts, timestamps. No message body content.
Eden IT admin approves the DWD grant in Google Workspace Admin Console and installs the Slack app.
Step 2 — Identity anonymization layer (Week 1–2)
Before any data reaches the agent’s synthesis layer or the COO’s screen, all user identities are resolved to stable role-based tokens:
- Build a secure identity mapping table: `real_email → anonymized_token` (e.g. `danny@eden.com → COO_1`, `alice@eden.com → Provider_A`, `bob@eden.com → Ops_Tech_1`)
- Mapping is deterministic (same person always gets the same token) and one-way for the analytic layer
- Mapping table stored in a locked location (GCP Secret Manager or a separate locked BigQuery dataset) — never exposed to the agent’s output or the COO
- The agent’s PII redaction middleware strips or replaces names, emails, and phone numbers in all API responses before they enter the LLM context window
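A minimal sketch of that redaction step, assuming a simple email-pattern match plus the deterministic role-token mapping described above (the `tokenFor` and `redact` names are illustrative, not the real implementation):

```typescript
// Sketch of the Step 2 anonymization: deterministic email -> role token mapping,
// applied to API responses before they enter the LLM context window.
const tokenByEmail = new Map<string, string>();
const countByRole = new Map<string, number>();

function tokenFor(email: string, role: string): string {
  // Deterministic: the same email always resolves to the same token.
  let token = tokenByEmail.get(email);
  if (!token) {
    const n = (countByRole.get(role) ?? 0) + 1;
    countByRole.set(role, n);
    token = `${role}_${n}`;
    tokenByEmail.set(email, token);
  }
  return token;
}

// Replace every email address in a response with its role token.
function redact(text: string, role = "User"): string {
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, (m) => tokenFor(m.toLowerCase(), role));
}
```

A real version would also match names and phone numbers against the directory, but the enforcement point is the same: tool output is rewritten before the LLM or the COO ever sees it.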
Step 3 — Slack data access tools (Week 2, targets M1: Apr 6)
Build the Slack tool functions that the agent can invoke:
- `search_slack(query, channels?, time_range?)` — Slack RTS API semantic search (Business+) or keyword search (Pro). Returns anonymized message metadata: channel, anonymized author token, timestamp, thread reply count, reaction count.
- `read_slack_thread(channel_id, thread_ts)` — full thread context via `conversations.replies`. Anonymized before entering agent context.
- `get_slack_channel_stats(channel_id, time_range?)` — channel volume, active participant count (anonymized), message frequency over time.
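The metadata-only shaping for `search_slack` results might look like this (the `RawMatch` shape and the `tokenFor` resolver are assumptions standing in for the RTS API response and the Step 2 mapping):

```typescript
// Sketch of result shaping for search_slack: only metadata survives,
// message text is deliberately dropped. RawMatch is an assumed response shape.
interface RawMatch { channel: string; user_email: string; ts: string; reply_count: number; text: string }
interface AnonymizedMatch { channel: string; author: string; ts: string; replyCount: number }

function anonymizeMatches(matches: RawMatch[], tokenFor: (email: string) => string): AnonymizedMatch[] {
  return matches.map((m) => ({
    channel: m.channel,
    author: tokenFor(m.user_email), // role token, never the real email
    ts: m.ts,
    replyCount: m.reply_count,
    // note: m.text is intentionally not carried forward (metadata only)
  }));
}
```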
M1 deliverable (Apr 6): Danny (COO) can chat with the agent and query Slack data. Agent returns anonymized Slack insights — channel activity, thread volumes, topic search results — with no raw PII.
Step 4 — Google Workspace data access tools (Week 3, targets M2: Apr 13)
Build the GWS tool functions using GWS CLI with service account + DWD:
- `search_drive(query, folder_id?, owner_token?)` — Drive file search. Returns anonymized file metadata: title, anonymized last-editor token, revision count, last modified timestamp.
- `get_drive_activity(folder_id?, time_range?)` — Drive Activity API v2 audit trail. Returns anonymized activity records: action type (edit, share, move, comment), anonymized actor token, target file, timestamp.
- `get_file_comments(file_id)` — Drive Comments API. Returns anonymized comment metadata: anonymized author token, timestamp, resolved status, reply count.
- `search_gmail(query, user_token?)` — Gmail metadata search via DWD. Returns anonymized thread metadata: subject line, anonymized sender/recipient tokens, timestamp, thread length. No message bodies.
- `search_calendar(query, user_token?, time_range?)` — Calendar event metadata. Returns anonymized event info: title, anonymized attendee tokens, duration, recurrence.
- `get_user_directory(query?)` — Admin SDK lookup. Returns anonymized role-based info only (department, title, anonymized token). No real names or emails in agent output.
M2 deliverable (Apr 13): Danny can query the agent for Google Workspace activity — file movement, email thread patterns, calendar load, Drive comments — all with anonymized identities. Slack (M1) continues to work.
Step 5 — Cross-platform orchestration + combined queries (Week 3–4, targets M3: Apr 20)
Build the orchestration agent (Mastra, TypeScript) that ties all sources together:
- Receives a COO question in natural language
- Plans which tools to call (multi-step reasoning)
- Executes Slack + GWS queries in parallel where possible
- Applies the cross-platform query pattern from Part 3 of the spike (Slack → Drive → Activity → Comments → Calendar → Gmail → Synthesize)
- All results pass through the PII redaction middleware before synthesis
- LLM synthesizes a unified, anonymized answer
Implement a project registry — a lightweight mapping of project names → Slack channels, Drive folder IDs, and anonymized key participants. Agent uses this to resolve ambiguous queries (“the rebrand” → folder ID + rebrand channel). Stored as a Google Sheet or JSON config the COO can maintain.
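As a sketch, the registry and its resolver could be this small (project names, channel IDs, and folder IDs below are invented):

```typescript
// Illustrative project registry (Step 5). All names and IDs are invented.
interface Project {
  name: string;
  aliases: string[];
  slackChannelId: string;
  driveFolderId: string;
  participants: string[]; // anonymized tokens only, never real identities
}

const registry: Project[] = [
  {
    name: "Rebrand",
    aliases: ["the rebrand", "brand refresh"],
    slackChannelId: "C0REBRAND",
    driveFolderId: "1AbCdEfG",
    participants: ["Provider_A", "Ops_Tech_1"],
  },
];

// Resolve an ambiguous COO query ("the rebrand") to a concrete project entry.
function resolveProject(query: string): Project | undefined {
  const q = query.toLowerCase();
  return registry.find(
    (p) => p.name.toLowerCase() === q || p.aliases.some((a) => q.includes(a)),
  );
}
```

Keeping this as a flat config the COO can edit (Sheet or JSON) avoids any schema migration work as projects come and go.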
Step 6 — Custom UI: scaffold and chat interface (Week 2–3, parallel with Steps 3–4)
Build the Command Center web application that the COO will actually use day-to-day:
- Scaffold a Next.js 15 (App Router) app with Tailwind CSS and shadcn/ui
- Auth via Google OAuth (Danny logs in with his Eden Google account)
- Build the chat interface: WebSocket or SSE connection to the agent backend, streaming responses as they arrive (tool calls → intermediate results → final synthesis). Show which data sources the agent queried for transparency.
- Build the API layer (Next.js API routes — agent is co-located in the same app):
  - Chat route calls the Mastra agent directly (no separate service to proxy to)
  - Dashboard routes call agent tools directly (e.g., Drive Activity for the last 7 days)
  - Cache frequently-requested data (project list, user directory) with short TTLs
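The short-TTL caching mentioned above can be a small in-memory helper (a sketch; a production version would likely bound size or use an LRU):

```typescript
// Minimal in-memory TTL cache sketch for the dashboard API routes.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  async getOrFetch(key: string, fetcher: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
    const value = await fetcher(); // miss or expired: refetch from the live API
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

Usage would be per-resource instances with different TTLs, e.g. an hour for the project list and a few minutes for activity data.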
Step 7 — Custom UI: dashboards and project management (Week 3–4)
Build the visual components that go beyond chat:
- Project overview dashboard — cards per active project showing recent activity (edits, messages, meetings) with status indicators. Pull from project registry + Drive Activity + Slack RTS.
- Activity timeline — chronological feed of cross-platform events (Drive edits, Slack messages, Calendar meetings, Gmail threads) filterable by project, person, or date range.
- People view — anonymized role-based view of who’s active on what, communication patterns, workload signals. Admin SDK user list cross-referenced with Drive Activity and Slack activity.
- Project management admin — page where Danny maps projects to Slack channels, Drive folders, and anonymized key participants. Stored in a lightweight DB (Supabase or Google Sheet via GWS CLI).
- Optional: scheduled digests — daily/weekly job that runs predefined queries and pushes an anonymized summary to email or Slack via GCP Cloud Scheduler.
Step 8 — Deploy and validate (Week 4, targets M3: Apr 20)
- Deploy the Next.js + Mastra app to Cloud Run in Eden’s GCP project (BAA-covered)
- Configure secrets (service account key, Slack tokens, identity mapping) in GCP Secret Manager
- Run Danny through 10–15 test queries spanning:
  - Single-source Slack (“what’s the most active channel this week?”)
  - Single-source GWS (“who’s been editing the rebrand docs?”)
  - Cross-platform (“what’s the status of Project X across Slack and Drive?”)
  - Anonymization validation (“show me team activity” — confirm no real names appear)
  - Dashboard validation (project cards, activity timeline, people view all render correctly with anonymized data)
M3 deliverable (Apr 20): Full Command Center — Danny opens the web app, sees project dashboards and activity timelines, and can chat with the agent about anything happening across Eden’s entire Google Workspace and entire Slack. All identities anonymized. Both chat and visual views working.
Architecture decision: Mastra + Cloud Run (not ADK + Vertex AI Agent Engine)
Eden has a Business Associate Agreement (BAA) with Google requiring all data processing and LLM calls to stay within their GCP project. Two deployment patterns were evaluated:
| Criterion | Cloud Run + Mastra | Vertex AI Agent Engine + ADK |
|---|---|---|
| BAA coverage | Yes (Cloud Run is BAA-covered) | Yes (Agent Engine is BAA-covered) |
| LLM calls | Vertex AI Gemini API (BAA) | Vertex AI Gemini API (BAA) |
| Language | TypeScript end-to-end | Python (ADK primary), TS SDK less mature |
| Next.js co-location | Same service, same runtime | Separate service, cross-language boundary |
| Control | Full (own Dockerfile, own infra) | Managed (Google controls runtime) |
| Cost | Pay for Cloud Run compute only | Agent Engine markup + Cloud Run compute |
| Model flexibility | Model-agnostic (Mastra routes to any provider) | Gemini-native (other models require workarounds) |
Decision: Cloud Run + Mastra. Single Next.js + Mastra service deployed to Cloud Run in Eden’s GCP project, with LLM calls routed to Vertex AI Gemini API. TypeScript end-to-end eliminates the cross-language boundary between agent and frontend.
Tools & stack
- Agent framework: Mastra (`@mastra/core`) — TypeScript-native, model-agnostic, first-class Next.js integration
- LLM: Gemini 2.5 Flash via Vertex AI API (BAA-covered). Mastra’s model router allows swapping to other models for specific tasks if needed.
- Google Workspace access: GWS CLI (`gws` v0.22+) with service account + DWD. Metadata-only scopes — no message bodies, no file content.
- Slack access: Slack RTS API (semantic search), `conversations.history` / `conversations.replies` (thread context)
- PII redaction: Mastra processor (intercept/transform before and after generation) + custom `redact()` function. Identity mapping table in GCP Secret Manager.
- Frontend + Backend: Next.js 15 (App Router), React 19, Tailwind CSS, shadcn/ui. Agent is co-located in the same Next.js app — no separate backend service.
- Auth: Google OAuth 2.0 (Eden Google account)
- Database (lightweight): Firestore or Cloud SQL (within Eden’s GCP project) for project registry, user preferences, cached data
- Charts/visualization: Recharts or D3 for timeline and activity visualizations
- Secrets management: GCP Secret Manager for service account keys, Slack tokens, identity mapping table
- Deployment: Cloud Run in Eden’s GCP project (BAA-covered). Single service: Next.js app with Mastra agent.
- Language: TypeScript end-to-end (frontend, agent, tools, PII middleware)
Architecture notes / dependencies
- BAA compliance — all compute and LLM calls stay in Eden’s GCP. Eden has a Business Associate Agreement (BAA) with Google. All data processing, LLM inference, and secret storage must run within Eden’s GCP project. The Vertex AI Gemini API (not the consumer Gemini API) is the LLM endpoint. Cloud Run hosts the app. No data leaves Eden’s GCP boundary.
- Mastra over Google ADK. Mastra is TypeScript-native, model-agnostic, and integrates directly with Next.js (same runtime, no cross-language boundary). ADK’s TypeScript SDK is available but less mature; ADK’s primary value prop (Vertex AI Agent Engine managed hosting) is unnecessary since Cloud Run gives full control and lower cost. Mastra’s tool and agent primitives map cleanly to the tool functions defined in this spike.
- Single-service deployment. The Next.js app (UI + API routes) and the Mastra agent run in the same Cloud Run service. No separate agent backend service, no cross-service latency for chat or dashboard requests. The Mastra agent is instantiated server-side and called directly from API route handlers.
- Privacy is the hardest constraint. No raw PII reaches the LLM or the COO. The identity anonymization layer runs before any data enters the agent’s context window. This is enforced at the tool function level, not the prompt level — even if the LLM misbehaves, the raw identities were already stripped.
- Metadata only, not content. Extraction scopes are deliberately restricted: Gmail gets thread metadata (subject, sender domain, timestamps), not message bodies. Drive gets file metadata (title, revision count, editor), not file content. Slack gets message metadata (channel, timestamps, thread structure). The agent reasons over activity patterns and metadata signals, not raw text.
- Service account DWD is the critical enabler — Eden IT must approve the OAuth scopes in Admin Console. Start this in Week 1.
- Slack RTS API semantic search requires Business+ plan. If Eden is on Pro, the agent falls back to keyword search (still functional, less natural).
- Drive Activity API returns action metadata (edit, share, move, comment, delete) but not content diffs.
- Google OAuth for Danny’s login is separate from the service account used for DWD data access. Danny authenticates to prove identity; the service account does the actual data querying.
- The dashboard views call the agent’s tool functions directly (not through chat) to populate widgets. Since the Mastra agent lives in the same Next.js app, dashboard API routes import and call tool functions directly — no HTTP proxy needed.
- Cache strategy: project list and user directory cached for 1 hour; activity data cached for 5 minutes; chat responses never cached.
- No data pipeline or warehouse needed. All queries are real-time against the live APIs. The agent calls GWS CLI or Slack API on demand, anonymizes the response, and synthesizes an answer. No ETL, no BigQuery, no sync jobs. If trend analysis is needed later (Theme Discovery), a pipeline layer can be added on top of the same tool functions.
Performance / accuracy targets
- Query-to-answer latency: < 15 seconds for single-source queries, < 30 seconds for cross-platform synthesis
- Dashboard page load: < 3 seconds (first meaningful paint)
- Activity timeline refresh: < 5 seconds
- Chat streaming: first token in < 2 seconds
- PII leak rate: 0% — no real names or emails in agent output (validated via automated test suite against known identity list)
- Anonymization consistency: same person always resolves to the same token across all queries
- Result relevance: agent returns contextually useful information for ≥ 80% of COO queries in user testing
- Availability: 99.5% uptime
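The 0% PII-leak target is validated by an automated suite run against a known identity list. A sketch of what such a check might look like (the helper name `leaksPII` and the sample identities are illustrative, not the actual suite):

```typescript
// Hypothetical leak check: scan an agent output against the known identity
// list (names and emails from Eden's directory) before counting it clean.

const knownIdentities: string[] = [
  "alice@eden.example.com", // placeholder entries for illustration
  "Alice Nguyen",
];

function leaksPII(agentOutput: string, identities: string[]): boolean {
  const haystack = agentOutput.toLowerCase();
  return identities.some((id) => haystack.includes(id.toLowerCase()));
}
```

In CI, every recorded agent response would be asserted `leaksPII(...) === false`; a single hit fails the build, which is how a 0% rate stays enforceable rather than aspirational.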
Resourcing
- 1 AI Engineer (agent logic via Mastra, tool functions, PII redaction layer, Vertex AI integration, Cloud Run deployment)
- 1 Data/AI Engineer (GWS CLI integration, Slack API integration, DWD setup with Eden IT, anonymization mapping)
- 1 Full-Stack Engineer (Custom UI: Next.js app, chat interface, dashboards — same codebase as agent)
- 1 Designer (part-time: UI/UX for dashboard views, COO workflow design)
Risks
- Eden IT delays on DWD approval → blocks all cross-org data access. Mitigate: start the admin approval process in Week 1 with a scoped list of OAuth scopes and justification. Offer a call with Eden IT to walk through the security model.
- Anonymization edge cases — shared mailboxes, distribution lists, and external contacts may not map cleanly to role tokens. Mitigate: build fallback categories (`External_1`, `SharedMailbox_1`) and review with Eden.
- Slack plan is Pro (not Business+) → no semantic search, only keyword. Mitigate: agent still works with keyword search; flag Business+ upgrade as recommended.
- Slack rate limits on new non-Marketplace apps (potentially 1 req/min for `conversations.history`). Mitigate: use RTS API as primary (higher limits); use the history API only for thread context.
- Metadata-only approach may feel thin — the COO might want actual message content or document text. Mitigate: the privacy-first constraint is the SOW commitment; revisit with Eden if they want to relax it for specific data types with explicit consent.
- Scope creep on dashboard views — Danny may want 10 views when 3 deliver 80% of the value. Mitigate: start with chat + project overview + activity timeline; add views based on usage data.
- GCP billing from Vertex AI / Gemini API calls and Cloud Run compute. Mitigate: set budget alerts and quotas upfront; Cloud Run scales to zero when idle.
- Cloud Run cold starts could add latency on first request after idle. Mitigate: configure minimum instance count of 1, or accept 2–3 second cold start for a low-traffic internal tool.
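For the rate-limit risk above, the mitigation can be backed by a simple client-side throttle on the low-limit endpoint. This is a sketch under the stated assumption (~1 req/min for `conversations.history` on new non-Marketplace apps); `makeThrottle` is a hypothetical helper, not part of any Slack SDK:

```typescript
// Hypothetical client-side throttle: guarantees at least minIntervalMs
// between calls to a low-limit endpoint, regardless of caller timing.

function makeThrottle(minIntervalMs: number) {
  let last = 0;
  return async function throttled<T>(fn: () => Promise<T>): Promise<T> {
    const wait = Math.max(0, last + minIntervalMs - Date.now());
    if (wait > 0) await new Promise((r) => setTimeout(r, wait));
    last = Date.now();
    return fn();
  };
}

// Usage sketch: wrap only the history call; RTS API calls stay unthrottled
// since they are the primary path with higher limits.
// const historyThrottle = makeThrottle(60_000); // ~1 req/min assumption
// await historyThrottle(() => fetchConversationHistory(channelId));
```

A per-endpoint throttle keeps thread-context lookups from tripping Slack's limiter while leaving the primary RTS path at full speed.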
Phasing summary
Data Access + Chat Integration (Mar 23 → Apr 20)
├── Week 1 (Mar 23): GCP setup, service account, DWD approval request,
│ Slack app creation, identity mapping table design,
│ UI design with Danny
├── Week 2 (Mar 30): Slack tool functions, PII redaction middleware,
│ anonymization layer, scaffold Custom UI + chat interface
│ → M1: Danny's Slack integrated (Apr 6)
├── Week 3 (Apr 7): GWS tool functions (Drive, Gmail, Calendar, Activity,
│ Comments), dashboard components (project overview,
│ activity timeline, people view)
│ → M2: Full GWS integrated (Apr 13)
└── Week 4 (Apr 14): Cross-platform orchestration agent (Mastra), project
registry, deploy to Cloud Run (Eden GCP), end-to-end
validation → M3: Full Command Center live (Apr 20)
Total for this project: 4 weeks (March 23 → April 20).
References
Google Workspace / GWS CLI:
- GWS CLI GitHub — 22K stars, v0.22.0
- GWS CLI Docs — Full documentation
- GWS CLI Agent Skills Index — 100+ skills
- GWS CLI Service Account Auth — Domain-wide delegation setup
- GWS CLI Dynamic Discovery — How unlisted APIs (like Drive Activity) work
- Drive Activity API v2 — File/folder activity tracking
- Drive Comments API — File comments and replies
- Brainforge GWS CLI Setup — Internal setup guide
Slack:
- Slack MCP Server — Official MCP docs
- Slack Real-Time Search API — RTS API docs
- Slack MCP Cursor Setup — Cursor integration
- slacrawl — Local Slack mirror tool
Agent Framework & Deployment:
- Mastra Docs — TypeScript AI agent framework (tools, memory, workflows, guardrails)
- Mastra Agents Overview — Agent primitives and tool integration
- Mastra Next.js Integration — Server-side agent in Next.js
- Google ADK TypeScript — ADK TS SDK (evaluated, not selected)
- Vertex AI Gemini API — Enterprise LLM endpoint (BAA-covered)
- Cloud Run — Serverless container deployment (BAA-covered)
Gemini Enterprise / Agent Delivery:
- Gemini Enterprise Agents Overview — Agent types (Agent Designer, ADK, A2A, Dialogflow)
- Agent Designer: Create an Agent — No-code/low-code agent builder
- Connect Third-Party Data Sources — Connectors (Slack private preview, Linear public preview, etc.)
- Gemini Enterprise Editions — Pricing and feature comparison
- ADK Agents on Vertex AI — Custom agents registered in Gemini
- A2UI in Apps Script — Agent-driven UIs in Google Workspace sidebars