A pipeline in Planasonix is a directed graph of nodes that moves data from sources through transforms to destinations. You define how data flows, how it changes at each step, and how runs are triggered—without writing orchestration glue code by hand.

What you build on the canvas

Each pipeline is a graph: nodes are steps (read, transform, write), and edges connect outputs to inputs. Planasonix executes the graph in dependency order, so downstream nodes always receive data from the nodes you connected upstream.

You work in a visual canvas powered by React Flow: drag nodes from the palette, drop them on the canvas, and draw connections between handles. Pan and zoom to work on large graphs, and group related steps visually so your team can read the flow at a glance.
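To make "dependency order" concrete, here is a minimal sketch of executing a three-node graph with Kahn's algorithm. The node names, callables, and data shapes are hypothetical and are not Planasonix's internal model; the point is only that each node runs after everything upstream of it has produced output.

```python
from collections import defaultdict, deque

def run_pipeline(nodes, edges):
    """Execute a pipeline graph in dependency order (Kahn's algorithm).

    nodes: {name: callable(inputs_dict) -> output}
    edges: list of (upstream, downstream) pairs
    """
    indegree = {name: 0 for name in nodes}
    downstream = defaultdict(list)
    for src, dst in edges:
        downstream[src].append(dst)
        indegree[dst] += 1

    ready = deque(name for name, d in indegree.items() if d == 0)
    outputs = {}
    while ready:
        name = ready.popleft()
        # Gather outputs of every node wired into this one.
        inputs = {src: outputs[src]
                  for src, dsts in downstream.items() if name in dsts}
        outputs[name] = nodes[name](inputs)
        for dst in downstream[name]:
            indegree[dst] -= 1
            if indegree[dst] == 0:
                ready.append(dst)
    if len(outputs) != len(nodes):
        raise ValueError("cycle detected: a pipeline must be a DAG")
    return outputs

# Toy graph: read -> transform -> write (names are illustrative).
graph = {
    "read":      lambda ins: [1, 2, 3],
    "transform": lambda ins: [x * 10 for x in ins["read"]],
    "write":     lambda ins: ins["transform"],
}
result = run_pipeline(graph, [("read", "transform"), ("transform", "write")])
# result["write"] == [10, 20, 30]
```

Because execution order is derived from the edges you draw, rewiring the canvas is enough to change what each node receives; no scheduling code is involved.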

Pipeline canvas

Learn how to edit graphs, preview data, and run or debug pipelines.

Variables

Parameterize connections, paths, and SQL with pipeline and global variables.
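The idea behind variables can be sketched with plain template substitution. The `${name}` placeholder syntax and the precedence rule (pipeline variables overriding globals) are assumptions for illustration, not Planasonix's documented behavior.

```python
import string

def resolve(template, pipeline_vars, global_vars):
    """Substitute ${name} placeholders in a path or SQL snippet.

    Assumption: pipeline-scoped variables override global ones.
    """
    merged = {**global_vars, **pipeline_vars}
    return string.Template(template).substitute(merged)

global_vars = {"env": "dev", "bucket": "acme-data"}   # hypothetical values
pipeline_vars = {"env": "prod"}                        # overrides the global

path = resolve("s3://${bucket}/${env}/orders/", pipeline_vars, global_vars)
# path == "s3://acme-data/prod/orders/"
```

The same template then yields different paths per environment simply by changing which variable set is in effect, which is what keeps one graph reusable across dev and production.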

Projects and folders

Planasonix organizes pipelines inside projects. A project is the workspace boundary for collaboration: members, default connections, and shared assets (such as templates or variables scoped to that project) typically live there. Within a project, you use folders to group pipelines by domain, team, or lifecycle (for example, finance/ingestion vs product/reverse-etl). You can nest folders when your catalog grows so related graphs stay together in the navigator. Folders reduce clutter in the sidebar and mirror how your organization thinks about data products—not as a single flat list of hundreds of graphs.
Name folders after outcomes (customer 360, billing reconciliation) rather than only technology names. That makes handoffs easier when someone new opens the project.
Sources and destinations reference connections—reusable credential and endpoint profiles defined in the project. The canvas shows the graph; connections live in the connection library. See Connections overview to create and validate connectors before you wire nodes.
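The separation between graph and connection library can be pictured as nodes holding only a profile name, never credentials. The field names below are a hypothetical shape, not Planasonix's actual schema; the one deliberate detail is that the profile stores a secret reference, not the secret itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    """Reusable credential/endpoint profile (illustrative shape only)."""
    name: str
    kind: str
    host: str
    secret_ref: str  # pointer into a secret store, never the raw secret

# Project-level connection library (hypothetical entries).
library = {
    "warehouse": Connection("warehouse", "postgres", "db.internal", "vault://wh"),
}

# A source node references the profile by name instead of embedding credentials.
node_config = {"type": "read", "connection": "warehouse", "table": "orders"}
conn = library[node_config["connection"]]
```

Rotating a credential or repointing a host then means editing one profile, not every node that uses it.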
When deciding whether a workload should be one pipeline or several: split when schedules, owners, or failure domains differ (for example, hourly events vs daily finance extracts). Keep one pipeline when steps always run together and share the same SLA; branching with control flow is often cheaper than duplicating shared transforms.

Typical workflow

1. Create or open a project. Pick the project where the pipeline belongs, and create a folder if you are starting a new area of work.
2. Add nodes and wire the graph. Place source, transform, and destination nodes on the canvas and connect edges so data flows in the right order.
3. Configure and validate. Set credentials via connections, add variables where values differ by environment, and use preview to confirm row counts and schemas.
4. Run, schedule, or promote. Run on demand, attach a schedule or trigger, and, on eligible plans, use environments and Git workflows to move the same definition through dev, staging, and production.
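The configure-and-validate step's preview check amounts to confirming schema and row counts on a small sample before committing to a full run. This sketch shows that idea on a list of sampled rows; the function name and sample shape are hypothetical, not a Planasonix API.

```python
def preview(rows, expected_columns, min_rows=1):
    """Cheap preview check: confirm schema and row count before a full run.

    rows: list of dicts sampled from a source node (hypothetical shape).
    """
    if len(rows) < min_rows:
        raise ValueError(f"expected at least {min_rows} rows, got {len(rows)}")
    missing = set(expected_columns) - set(rows[0])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return {"rows": len(rows), "columns": sorted(rows[0])}

sample = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": 3.0}]
report = preview(sample, expected_columns=["id", "amount"])
# report == {"rows": 2, "columns": ["amount", "id"]}
```

Catching a renamed column or an empty extract at preview time is far cheaper than discovering it after a destination write.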

Templates

Reuse approved patterns across projects.

Import and convert

Bring in definitions from other ETL and orchestration tools.

Git integration

Version pipelines with Git (enterprise).

Environments

Deploy the same pipeline with per-environment configuration (enterprise).
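One way to picture "same pipeline, per-environment configuration" is a base definition with a shallow overlay per environment. The overlay model and the keys below are assumptions for illustration, not Planasonix's actual deployment format.

```python
def config_for(base, overrides, env):
    """Resolve one pipeline definition plus per-environment overrides.

    Assumption: a flat base config shallow-merged with the env's overrides.
    """
    return {**base, **overrides.get(env, {})}

# Hypothetical base definition shared by every environment.
base = {"warehouse": "analytics", "schedule": "hourly", "alerts": True}

# Per-environment deltas; anything not listed falls through to the base.
overrides = {
    "dev":  {"schedule": None, "alerts": False},
    "prod": {"warehouse": "analytics_prod"},
}

prod_cfg = config_for(base, overrides, "prod")
dev_cfg = config_for(base, overrides, "dev")
```

Keeping the deltas small is the point: promotion moves one definition through dev, staging, and production, with only the overridden keys differing.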