Dagster Transition Deployment Strategy
Owner: AI Team
Last updated: 2026-03-03
Scope: apps/dagster-pipelines/ during migration to platform-native processing
Decision
For the migration window, keep live orchestration on the existing Dagster Cloud deployment and treat the monorepo Dagster copy as a migration/test code location.
Do not move Dagster orchestration to Railway as the default path.
Why not Railway for Dagster right now
- Dagster scheduling/sensors need always-on orchestration processes.
- Reliable Dagster operation typically requires a dedicated metadata DB and long-lived daemon behavior.
- Railway can run containers, but reproducing stable Dagster orchestration there adds operational complexity with little migration benefit.
- Current goal is reducing Dagster usage over time, not building a second long-term Dagster hosting stack.
Safe default in this repo
pipelines/repository.py now gates schedules behind:
DAGSTER_ENABLE_SCHEDULES=true
If unset or false (the default), jobs are loadable but schedules are not exposed.
This prevents accidental schedule execution while the old platform is still authoritative.
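The gating described above can be sketched as a small env-flag check. This is an illustrative sketch, not the exact code in pipelines/repository.py; the helper name and the strict "only the literal string true enables schedules" policy are assumptions consistent with the fail-safe default described here.

```python
import os


def schedules_enabled(env=None):
    """True only when DAGSTER_ENABLE_SCHEDULES is explicitly set to 'true'.

    Any other value (unset, 'false', '0', a typo) keeps schedules hidden,
    so the old platform stays authoritative unless an operator opts in.
    Hypothetical helper, not the repo's actual implementation.
    """
    env = os.environ if env is None else env
    return env.get("DAGSTER_ENABLE_SCHEDULES", "").strip().lower() == "true"
```

A repository or Definitions object would then include its ScheduleDefinition list only when this returns True, while always exposing jobs.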
Recommended environment model
- Production (current source of truth): existing Dagster Cloud deployment (unchanged).
- Migration validation: local runs from apps/dagster-pipelines/ in Cursor/CI.
- Optional staging code location in Dagster Cloud: only if needed for parity testing, with schedules disabled unless explicitly approved.
Cutover checklist (per pipeline)
- Implement platform-native replacement in apps/platform.
- Validate parity (inputs, outputs, timing behavior).
- Disable corresponding Dagster schedule in production.
- Monitor downstream data quality.
- Mark status in dagster-inline-migration.md.
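The parity step in the checklist above can be partly mechanized by fingerprinting both outputs. This is a minimal sketch with hypothetical helper names; it assumes outputs can be represented as lists of JSON-serializable rows and ignores timing behavior, which still needs manual review.

```python
import hashlib
import json


def fingerprint(rows):
    """Order-insensitive digest of a pipeline's output rows.

    Rows are serialized with sorted keys so dict key order cannot affect
    the result, and per-row digests are sorted so row order cannot either.
    """
    digests = sorted(
        hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()
        for row in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()


def outputs_match(dagster_rows, platform_rows):
    """Compare a Dagster run's output against the platform-native run."""
    return fingerprint(dagster_rows) == fingerprint(platform_rows)
```

Run both pipelines against the same inputs and compare fingerprints before disabling the Dagster schedule; a mismatch points to a row-level diff worth inspecting.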
Operator commands
Validate code location without schedules
cd apps/dagster-pipelines
source .venv/bin/activate
dagster job list -m pipelines.repository
Inspect schedules intentionally
DAGSTER_ENABLE_SCHEDULES=true dagster schedule list -m pipelines.repository