Compute settings define where pipeline and notebook jobs run: managed workers, your Kubernetes cluster, Databricks jobs, or warehouse-native execution. Correct sizing reduces cost and avoids queueing during peak loads.

Execution environments

You pick a size profile (small, medium, or large) and a region. The platform provisions ephemeral workers for each run; you do not manage VMs.
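A per-pipeline compute block might look like the sketch below. The key names (`compute`, `profile`, `region`) are illustrative assumptions, not the platform's actual schema; check the settings UI for the exact fields.

```yaml
# Hypothetical pipeline compute settings — key names are illustrative only.
compute:
  profile: medium      # small | medium | large; workspace default applies if omitted
  region: us-east-1    # ephemeral workers are provisioned here for each run
```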

Defaults and overrides

  • Workspace defaults apply to new pipelines until authors override per pipeline or per node.
  • Concurrency limits cap simultaneous runs so you do not exhaust warehouse credits or API partner rate limits.
  • Timeouts stop runaway jobs; tune them per environment (longer in prod batch, shorter in dev).
Raising concurrency without checking destination limits can trigger mass throttling. Increase gradually and watch Observability for error spikes.
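The defaults above could be captured in a per-environment override like this sketch; the field names are assumptions for illustration, not the real schema.

```yaml
# Hypothetical per-pipeline overrides — field names are illustrative only.
compute:
  concurrency: 4        # cap simultaneous runs; raise gradually and watch Observability
  timeout_minutes:
    dev: 30             # shorter in dev so runaway jobs fail fast
    prod: 240           # longer for prod batch windows
```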

Secrets and networking

Compute that reaches private data stores may require VPC peering, PrivateLink, or SSH tunnels configured alongside the connection—not only in compute settings. Confirm egress allowlists for SaaS APIs.

Notebooks and interactive workloads

Notebooks may use a different profile than batch pipelines so exploratory kernels do not compete with nightly SLA jobs.
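One way to keep exploratory kernels off the batch profile is a separate notebook default, sketched here with hypothetical keys:

```yaml
# Hypothetical workspace defaults — separates interactive and batch capacity.
compute_defaults:
  pipelines:
    profile: large      # nightly SLA batch jobs
  notebooks:
    profile: small      # interactive kernels do not compete with batch runs
```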

Environments

Separate dev, stage, and prod environments so runs are isolated per stage and a change can be validated before it reaches production compute.

Usage and limits

Track compute consumption against your plan's limits so you can anticipate overages before they queue or block runs.