OpenWork Labs Architecture Reconciliation and Go/No-Go

Status: Proceed with conditions
Date: 2026-03-07
Related tickets: PLT-1073, PLT-1074, PLT-1075, PLT-1076
Related docs: knowledge/engineering/openwork-platform-integration/openwork-platform-integration-plan.md, knowledge/engineering/openwork-platform-integration/openwork-railway-feasibility-2026-03-07.md, standards/03-knowledge/engineering/setup/openwork-hosted-runtime-contract.md


Recommendation

Proceed with the Labs-first architecture for the first hosted release, with explicit launch gates.

This remains the right direction because it keeps OpenWork runtime concerns decoupled from the Platform app while still giving Platform a controlled entry point for internal users.

Do not treat this as approval for broad launch yet. Proceed only if the conditions below are closed in the follow-on tickets.


Reconciled architecture position

1) Runtime ownership

The current plan is still aligned with the intended architecture boundary:

  • OpenWork runtime stays in its own Railway-hosted service.
  • Platform remains the authenticated shell and discovery surface.
  • OpenWork-specific processes (openwork, openwork-server, opencode) stay outside the core Next.js runtime.

This matches the repo’s own runtime principles:

  • apps/openwork/INFRASTRUCTURE.md says OpenWork components should remain CLI-first, sidecar-composable, and independently runnable.
  • apps/openwork/README.md describes host mode as a separate runtime stack rather than a native Platform route package.

2) Ingress and user entry

The current plan to start with a standalone Labs host and a lightweight Platform entry route is still the lowest-risk first release.

  • Railway feasibility proved the host can run as an independent network service.
  • The existing Platform /openwork route is still built around static assets in public/openwork-static, which is not the right long-term path for hosted Labs.
  • Same-origin proxying remains optional follow-on work, not a prerequisite for the first release.
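If the optional same-origin proxy is picked up later, it could be expressed as a Next.js rewrite from the Platform origin to the Labs host. This is a sketch only; the `openworkRewrites` helper and the `labsOrigin` parameter are illustrative names, not existing Platform code.

```typescript
// Sketch: optional follow-on same-origin proxy (NOT required for the first
// release, per the plan above). Returns a Next.js-style rewrites array that
// forwards /openwork/* on the Platform origin to the standalone Labs host.
// Names here are hypothetical, not actual Platform configuration.
function openworkRewrites(labsOrigin: string) {
  return [
    {
      // Any path under /openwork/ is proxied to the same path on Labs.
      source: "/openwork/:path*",
      destination: `${labsOrigin}/:path*`,
    },
  ];
}
```

A wrapper like this would live in `next.config` if the proxy route is ever adopted; until then the external-link entry path avoids the coupling entirely.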

3) Identity and access boundary

The architecture should continue to separate:

  • Platform authentication and navigation
  • OpenWork runtime access tokens and host approval flows

That keeps the first release operationally simpler and avoids coupling Platform middleware changes to runtime viability. Auth bridging can remain a later investigation once adoption is proven.
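The separation above can be made concrete as two independent checks that never share state: one answering "is this an authenticated Platform user?" and one answering "does this request carry a valid OpenWork host token?". The types and function names below are illustrative assumptions, not existing APIs in either codebase.

```typescript
// Sketch of the access boundary: Platform auth and OpenWork host-token
// checks stay separate, so runtime viability never depends on Platform
// middleware. All names here are hypothetical.

type PlatformSession = { userId: string };

// Platform-side concern only: is there an authenticated Platform session?
function hasPlatformSession(session: PlatformSession | null): boolean {
  return session !== null && session.userId.length > 0;
}

// Runtime-side concern only: does the request present a valid host token?
// Token issuance/rotation is owned by the OpenWork runtime, not Platform.
// (A production check would use a constant-time comparison.)
function hasValidHostToken(
  presented: string | undefined,
  expected: string,
): boolean {
  return presented !== undefined && presented.length > 0 && presented === expected;
}
```

Because neither function imports anything from the other layer, a later auth-bridging investigation can compose them without rework.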

4) State and persistence

The hosted runtime must treat workspace state and runtime metadata as durable infrastructure, not container-local state.

The accepted first-release contract now defines:

  • /data/workspace for user/project state
  • /data/openwork-orchestrator for orchestrator metadata
  • /data/sidecars and /data/openwork-server/* for downloaded sidecar binaries and runtime server artifacts

This matches Railway’s current single-volume service model and the deployed Labs runtime.
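A startup step that materializes the contract's directory layout keeps the single-volume shape explicit in code. This is a sketch under the paths listed above; the `OPENWORK_DATA_ROOT` override is an assumed convenience for local testing, not a documented env var.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch: ensure the contract's /data layout exists before the runtime
// starts. The OPENWORK_DATA_ROOT override is illustrative (handy for tests);
// on Railway the root is the single mounted /data volume.
const DATA_ROOT = process.env.OPENWORK_DATA_ROOT ?? "/data";

const CONTRACT_DIRS = [
  "workspace",             // user/project state
  "openwork-orchestrator", // orchestrator metadata
  "sidecars",              // downloaded sidecar binaries
  "openwork-server",       // runtime server artifacts (/data/openwork-server/*)
];

function ensureDataLayout(root: string = DATA_ROOT): string[] {
  return CONTRACT_DIRS.map((dir) => {
    const full = path.join(root, dir);
    // recursive: true makes this idempotent across restarts.
    fs.mkdirSync(full, { recursive: true });
    return full;
  });
}
```

Running this on boot means a fresh volume and a restarted container converge on the same layout without a separate provisioning step.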


Deltas and blockers

Blocker 1: Architecture artifact reference is stale

The planning doc references a Clarence architecture image by an absolute local path that does not resolve outside the original author's machine, so the underlying artifact is not currently reproducible from the repo.

Impact:

  • Human reviewers cannot independently verify the exact source artifact from the plan alone.
  • Architecture traceability is weaker than it should be for launch gating.

Mitigation:

  • Treat the current plan text as the extracted guidance already captured from that review.
  • Refresh the canonical link or save the architecture artifact in a durable repo-adjacent location before final launch sign-off.

Owner:

  • Uttam / Clarence

Resolved blocker 2: Runtime contract now matches the deployed Railway storage shape

The feasibility pass validated a single mounted /data volume with the workspace rooted at /data/workspace. Railway rejected a second mount on the same service with “A volume is already mounted on service openwork-host in environment production.”

Resolution:

  • The hosted runtime contract was revised to accept the Railway-supported single-volume /data shape for the first release.
  • The Railway deploy guide and runtime contract now agree on /data/workspace, /data/openwork-orchestrator, and /data/sidecars.

Residual risk:

  • Workspace growth and runtime metadata now share one durable volume, so storage monitoring and cleanup policy matter more than they would in a split-volume design.
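Because workspace state and runtime metadata now share one volume, a periodic size check against a budget is the minimum monitoring worth wiring up. This is a sketch; the budget value and any alerting hook are assumptions, not agreed policy.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch: recursive size accounting for the shared /data volume, so
// workspace growth can trip an alert before the single volume fills.
// The budget threshold is illustrative, not an agreed operational value.
function directorySizeBytes(dir: string): number {
  let total = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      total += directorySizeBytes(full); // recurse into subtrees
    } else if (entry.isFile()) {
      total += fs.statSync(full).size;
    }
    // symlinks and other entry types are deliberately skipped
  }
  return total;
}

function overBudget(dir: string, budgetBytes: number): boolean {
  return directorySizeBytes(dir) > budgetBytes;
}
```

Per-subdirectory totals (workspace vs. openwork-orchestrator vs. sidecars) would additionally show which tenant of the shared volume is growing, which informs the cleanup policy the risk note calls for.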

Blocker 3: Platform entry route still points at static assets

The current /openwork route checks for a local static build and shows “OpenWork app is not built yet” when those assets are missing.

Impact:

  • Hosted users are still sent down the old embedded-static path.
  • Even with a healthy Labs host, Platform will not direct traffic to it yet.

Mitigation:

  • Complete PLT-1076 before user-facing rollout.
  • Keep the first user entry path lightweight: external link or iframe wrapper to Labs.

Owner:

  • Platform Engineering

Blocker 4: Access policy is still feasibility-grade

The feasibility deploy validates runtime health, but access policy is still minimal.

Current gaps:

  • Token issuance and rotation are not yet operationalized.
  • OPENWORK_CORS_ORIGINS=* is still acceptable only for feasibility.
  • Approval mode and host-token handling need an explicit operating model.
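Tightening the second gap above amounts to replacing the wildcard with an explicit allowlist check. The env var name matches the doc; the parsing and the example origins are illustrative assumptions.

```typescript
// Sketch: explicit origin allowlist to retire OPENWORK_CORS_ORIGINS=*.
// The comma-separated format and the example origins are assumptions,
// not the runtime's documented behavior.
function isAllowedOrigin(
  origin: string,
  corsOrigins: string, // e.g. "https://labs.brainforge.ai" or a comma-separated list
): boolean {
  // Feasibility-only escape hatch; must not survive into the access policy.
  if (corsOrigins.trim() === "*") return true;
  return corsOrigins
    .split(",")
    .map((o) => o.trim())
    .some((allowed) => allowed.length > 0 && allowed === origin);
}
```

PLT-1075 would then pin the allowlist to the promoted Platform and Labs origins and log rejected origins as part of the usage logging requirement.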

Mitigation:

  • PLT-1075 must define the initial access policy and usage logging before wider internal rollout.

Owner:

  • Platform Engineering

Launch gates

The Labs deployment can proceed, but labs.brainforge.ai should not become the promoted user path until all of the following are true:

  1. The deploy implementation uses the accepted hosted runtime contract for durable storage and environment configuration.
  2. The architecture artifact reference is refreshed or replaced with a durable canonical reference.
  3. The Platform /openwork route is updated to the Labs-first path.
  4. Access policy, logging, and token ownership are explicitly documented and configured.

Go/No-Go call

Call: Proceed with conditions

Rationale:

  • The core architectural decision to keep OpenWork runtime decoupled from Platform is still sound.
  • Railway feasibility materially reduced the biggest unknown around Linux hosting and persistent runtime state.
  • The remaining issues are launch-governance and implementation-alignment issues, not evidence that the Labs-first architecture is wrong.

Stop only if a refreshed architecture review shows a hard requirement for same-origin auth or a single-runtime deployment before internal adoption. Nothing in the current repo evidence points to that requirement today.