Supported platforms
You can connect Planasonix to:
- Snowflake — Multi-cloud warehouse with roles, warehouses, databases, and schemas as first-class objects.
- Google BigQuery — Serverless analytics on GCP; projects, datasets, and jobs APIs.
- Databricks — Unity Catalog, SQL warehouses, and lakehouse tables (including Delta Lake).
- Amazon Redshift — Provisioned clusters and serverless workgroups; IAM and database users.
- Azure Synapse Analytics — Dedicated SQL pool and serverless SQL patterns as supported by the connector.
- Microsoft Fabric — OneLake and warehouse or SQL endpoints exposed through the Fabric connector surface.
- Apache Iceberg — Open table format on object storage; often paired with Spark, Databricks, or Trino-style catalogs depending on your deployment.
Iceberg connections frequently sit alongside a catalog and compute connection (for example Databricks or a query engine). Confirm your Planasonix edition includes the Iceberg and catalog path you use in production.
Configure a warehouse connection
Pick the warehouse connector
In Connections, create a new connection and choose Snowflake, BigQuery, Databricks, or the tile that matches your platform. Iceberg-focused setups may use a dedicated connector or a bundle of catalog plus storage connections—follow the in-product wizard for your edition.
Set account and namespace defaults
Enter account identifiers, project and dataset, workspace URL, HTTP path for SQL warehouses, cluster or workgroup names, and default database/schema so pipeline nodes inherit the correct namespace without repeating it on every node.
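The inheritance described above can be sketched as a simple merge: node-level settings override connection defaults, and anything left unset falls back to the connection. The connection shape and merge rule here are illustrative, not the actual Planasonix data model.

```python
# Hypothetical sketch: how namespace defaults set on a connection could
# flow into pipeline nodes (illustrative shapes, not the real schema).

def resolve_namespace(connection_defaults: dict, node_overrides: dict) -> dict:
    """Node-level settings win; anything set to None inherits from the connection."""
    resolved = dict(connection_defaults)
    resolved.update({k: v for k, v in node_overrides.items() if v is not None})
    return resolved

connection = {"database": "ANALYTICS", "schema": "RAW", "warehouse": "LOAD_WH"}
node = {"schema": "STAGING", "warehouse": None}  # None means "inherit"

print(resolve_namespace(connection, node))
# {'database': 'ANALYTICS', 'schema': 'STAGING', 'warehouse': 'LOAD_WH'}
```

With this rule, a node only needs to name the parts of the namespace it actually changes.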
Attach the right credential type
Link a credential that matches the platform’s auth model (key pair, OAuth, service account JSON, PAT, IAM, Entra ID, and so on). Details vary by vendor; see the tabs below.
Validate TLS and network path
Ensure warehouse endpoints are reachable from Planasonix workers (public internet, VPN, or private connectivity as your org requires). Run Test connection before scheduling production loads.
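As a quick preflight from the network segment where your Planasonix workers run, you can check TCP reachability and the TLS handshake with the standard library. This is an assumption-free local check only; it does not replace the in-product Test connection.

```python
import socket
import ssl

def endpoint_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection and TLS handshake to host:port succeed.

    Catches both network errors (timeouts, refused connections) and TLS
    failures (bad certificates, protocol mismatches).
    """
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

For example, `endpoint_reachable("myaccount.snowflakecomputing.com")` (a placeholder hostname) should return True from a worker with a valid network path, and False behind a firewall that blocks the endpoint.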
Authentication methods by platform
- Snowflake
- BigQuery
- Databricks
- Redshift
- Synapse
- Fabric
- Iceberg
- Key pair (JWT) — Preferred for automation: store the private key in Planasonix credentials; register the public key on the Snowflake user.
- Username and password — Acceptable for some legacy setups; pair with Snowflake network policy and MFA rules from your administrator.
- OAuth — Use when your organization centralizes Snowflake access through an IdP-backed OAuth client.

You also set account identifier, warehouse, database, and schema defaults on the connection so jobs land in the right namespace without repeating them in every pipeline.
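For the key-pair option, Snowflake expects the public key registered via `ALTER USER ... SET RSA_PUBLIC_KEY='...'` as bare base64, without the PEM header, footer, or line breaks. A small helper like the following (an illustrative sketch, not part of Planasonix) can strip a PEM-encoded public key into that form:

```python
def pem_to_snowflake_value(pem: str) -> str:
    """Strip the PEM header, footer, and line breaks so the key body can be
    pasted into ALTER USER <user> SET RSA_PUBLIC_KEY='<value>' in Snowflake."""
    lines = [
        line.strip()
        for line in pem.strip().splitlines()
        if "BEGIN" not in line and "END" not in line
    ]
    return "".join(lines)

# Shortened sample key material for illustration only.
pem = """-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAsample
-----END PUBLIC KEY-----"""
print(pem_to_snowflake_value(pem))
# MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsample
```

The matching private key stays in Planasonix credentials and is never sent to Snowflake directly; it is only used to sign the JWT at connection time.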
Operational tips
Warehouse jobs benefit from explicit staging areas (buckets or external stages) and role separation between extract, transform, and publish. Use one connection per environment and per major workload (for example, prod-databricks-curated vs prod-databricks-raw) so you can tune warehouse size and cost without cross-contamination.
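The one-connection-per-environment-and-workload pattern is easier to enforce with a naming convention. Here is a hypothetical `env-platform-workload` convention and a validator for it; the pattern and environment names are assumptions, not a Planasonix requirement.

```python
import re

# Hypothetical convention: <env>-<platform>-<workload>, all lowercase,
# e.g. prod-databricks-curated or dev-snowflake-raw.
NAME_PATTERN = re.compile(r"^(dev|staging|prod)-[a-z0-9]+-[a-z0-9]+$")

def valid_connection_name(name: str) -> bool:
    """Return True if a connection name follows the env-platform-workload convention."""
    return bool(NAME_PATTERN.match(name))

print(valid_connection_name("prod-databricks-curated"))  # True
print(valid_connection_name("prod_databricks"))          # False
```

Validating names like this in a review or CI step keeps environments from sharing a connection by accident.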
Related topics
Cloud storage
Land files before COPY/LOAD or external table patterns.
Credentials management
Key rotation and access reviews for warehouse identities.
Pipelines overview
How warehouse nodes fit into orchestrated graphs.
Reverse ETL
When you push modeled warehouse data back to applications.