Dagster Transition Deployment Strategy

Owner: AI Team
Last updated: 2026-03-03
Scope: apps/dagster-pipelines/ during migration to platform-native processing


Decision

For the migration window, keep live orchestration on the existing Dagster Cloud deployment and treat the monorepo Dagster copy as a migration/test code location.

Do not move Dagster orchestration to Railway as the default path.


Why not Railway for Dagster right now

  • Dagster scheduling/sensors need always-on orchestration processes.
  • Reliable Dagster operation typically requires a dedicated metadata DB and long-lived daemon behavior.
  • Railway can run containers, but reproducing stable Dagster orchestration there adds operational complexity with little migration benefit.
  • Current goal is reducing Dagster usage over time, not building a second long-term Dagster hosting stack.

Safe default in this repo

pipelines/repository.py now gates schedules behind:

DAGSTER_ENABLE_SCHEDULES=true

If the variable is unset or false (the default), jobs remain loadable but schedules are not exposed.

This prevents accidental schedule execution while the old platform is still authoritative.
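The gating pattern described above can be sketched as plain Python. This is a hedged illustration, not the actual contents of pipelines/repository.py: the function names (`schedules_enabled`, `build_definitions`) and the dict-based return shape are hypothetical stand-ins for however the repository actually assembles its Dagster definitions.

```python
import os


def schedules_enabled() -> bool:
    # Schedules are exposed only when the operator opts in explicitly.
    # Any value other than the string "true" (case-insensitive) keeps them off.
    return os.environ.get("DAGSTER_ENABLE_SCHEDULES", "").strip().lower() == "true"


def build_definitions(jobs, schedules):
    # Jobs are always loadable; schedules are gated behind the env var,
    # so an accidental `dagster dev` or CI load cannot start them.
    return {
        "jobs": list(jobs),
        "schedules": list(schedules) if schedules_enabled() else [],
    }
```

The key design point is that the default is "off": forgetting to set the variable fails safe, which is what keeps the old platform authoritative during the migration window.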


Operating model during migration

  1. Production (current source of truth): existing Dagster Cloud deployment (unchanged).
  2. Migration validation: local runs from apps/dagster-pipelines/ in Cursor/CI.
  3. Optional staging code location in Dagster Cloud: only if needed for parity testing, with schedules disabled unless explicitly approved.

Cutover checklist (per pipeline)

  1. Implement platform-native replacement in apps/platform.
  2. Validate parity (inputs, outputs, timing behavior).
  3. Disable corresponding Dagster schedule in production.
  4. Monitor downstream data quality.
  5. Mark status in dagster-inline-migration.md.
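Step 2 of the checklist (parity validation) can be made mechanical by comparing canonical hashes of the two pipelines' outputs. The sketch below is a minimal, hypothetical approach assuming outputs can be materialized as lists of JSON-serializable records; the helper names are illustrative, not existing repo code.

```python
import hashlib
import json


def canonical_hash(records):
    # Hash a list of dict records independent of row order, so two pipelines
    # that emit the same rows in different order still count as matching.
    canonical = sorted(json.dumps(r, sort_keys=True) for r in records)
    return hashlib.sha256("\n".join(canonical).encode("utf-8")).hexdigest()


def outputs_match(dagster_records, platform_records):
    # Parity holds when both pipelines produce the same canonical record set.
    return canonical_hash(dagster_records) == canonical_hash(platform_records)
```

Timing behavior (checklist item 2) still needs separate verification, since a hash comparison only covers inputs and outputs, not when the replacement runs.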

Operator commands

Validate code location without schedules

cd apps/dagster-pipelines
source .venv/bin/activate
dagster job list -m pipelines.repository

Inspect schedules intentionally

DAGSTER_ENABLE_SCHEDULES=true dagster schedule list -m pipelines.repository