Warehouse connections power large-scale loads, merges, and metadata-driven pipelines. Planasonix uses each platform’s native clients and authentication flows so you can align with how your cloud administrator already provisions access.

Supported platforms

You can connect Planasonix to:
  • Snowflake — Multi-cloud warehouse with roles, warehouses, databases, and schemas as first-class objects.
  • Google BigQuery — Serverless analytics on GCP; projects, datasets, and jobs APIs.
  • Databricks — Unity Catalog, SQL warehouses, and lakehouse tables (including Delta Lake).
  • Amazon Redshift — Provisioned clusters and serverless workgroups; IAM and database users.
  • Azure Synapse Analytics — Dedicated SQL pool and serverless SQL patterns as supported by the connector.
  • Microsoft Fabric — OneLake and warehouse or SQL endpoints exposed through the Fabric connector surface.
  • Apache Iceberg — Open table format on object storage; often paired with Spark, Databricks, or Trino-style catalogs depending on your deployment.
Iceberg connections frequently sit alongside a catalog and compute connection (for example Databricks or a query engine). Confirm your Planasonix edition includes the Iceberg and catalog path you use in production.

Configure a warehouse connection

1. Pick the warehouse connector

In Connections, create a new connection and choose Snowflake, BigQuery, Databricks, or the tile that matches your platform. Iceberg-focused setups may use a dedicated connector or a bundle of catalog plus storage connections—follow the in-product wizard for your edition.
2. Set account and namespace defaults

Enter the platform-specific identifiers: account identifier, project and dataset, workspace URL and HTTP path for SQL warehouses, cluster or workgroup names, and a default database and schema. Pipeline nodes then inherit the correct namespace without repeating it on every node.
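The inheritance described above can be sketched as a simple merge: node-level values override connection-level defaults, and anything a node leaves unset falls back to the connection. This is an illustrative sketch, not the Planasonix API; the function and field names are hypothetical.

```python
# Sketch: connection-level namespace defaults merged with per-node
# overrides. Names (resolve_namespace, the dict keys) are illustrative.

def resolve_namespace(connection_defaults: dict, node_overrides: dict) -> dict:
    """Node-level values win; anything unset (None) falls back to the connection."""
    resolved = dict(connection_defaults)
    resolved.update({k: v for k, v in node_overrides.items() if v is not None})
    return resolved

conn = {"database": "ANALYTICS", "schema": "RAW", "warehouse": "LOAD_WH"}
node = {"schema": "STAGING", "warehouse": None}  # keep the default warehouse

print(resolve_namespace(conn, node))
# {'database': 'ANALYTICS', 'schema': 'STAGING', 'warehouse': 'LOAD_WH'}
```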
3. Attach the right credential type

Link a credential that matches the platform’s auth model (key pair, OAuth, service account JSON, PAT, IAM, Entra ID, and so on). Details vary by vendor; see the per-platform notes below.
4. Validate TLS and network path

Ensure warehouse endpoints are reachable from Planasonix workers (public internet, VPN, or private connectivity as your org requires). Run Test connection before scheduling production loads.
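Before running Test connection, a quick TCP preflight from the worker host can tell you whether the network path is open at all. This is a generic stdlib sketch, not a Planasonix feature; the hostname is a hypothetical placeholder, and the check validates only reachability, not TLS certificates or credentials.

```python
# Best-effort TCP preflight from a Planasonix worker host.
# Checks only the network path; it does NOT validate TLS certs or credentials.
import socket

def endpoint_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timeout, and DNS failures
        return False

# Hypothetical endpoint; substitute your own account host.
print(endpoint_reachable("myaccount.snowflakecomputing.com", 443, timeout=2.0))
```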
5. Grant narrow warehouse roles

Create service identities with explicit grants on databases, schemas, and future objects only where the platform supports it. Avoid account-wide administrator roles for pipeline users.
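On Snowflake, for example, a narrow pipeline role reduces to a handful of explicit grants, including FUTURE grants so later-created tables are covered. The helper below just renders that grant set as SQL strings; the role, database, schema, and warehouse names are hypothetical, and you should review the emitted statements with your administrator before running them.

```python
# Illustrative generator of a narrow Snowflake grant set for a pipeline role.
# Object names are hypothetical; review the SQL with your admin before use.

def pipeline_grants(role: str, database: str, schema: str, warehouse: str) -> list[str]:
    fq_schema = f"{database}.{schema}"
    return [
        f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {role};",
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role};",
        f"GRANT USAGE ON SCHEMA {fq_schema} TO ROLE {role};",
        f"GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
        # FUTURE grants cover tables created later, so no re-granting is needed.
        f"GRANT SELECT, INSERT ON FUTURE TABLES IN SCHEMA {fq_schema} TO ROLE {role};",
    ]

for stmt in pipeline_grants("PIPELINE_LOADER", "ANALYTICS", "RAW", "LOAD_WH"):
    print(stmt)
```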

Authentication methods by platform

Snowflake

  • Key pair (JWT) — Preferred for automation: store the private key in Planasonix credentials; register the public key on the Snowflake user.
  • Username and password — Acceptable for some legacy setups; pair with Snowflake network policies and MFA rules from your administrator.
  • OAuth — Use when your organization centralizes Snowflake access through an IdP-backed OAuth client.

You also set account identifier, warehouse, database, and schema defaults on the connection so jobs land in the right namespace without repeating them in every pipeline.
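The three Snowflake auth methods map to different connection parameters. The sketch below follows the parameter naming used by snowflake-connector-python (`account`, `user`, `password`, `private_key`, `authenticator`, `token`), but it is an illustration of the branching, not Planasonix code; the function name and secret values are hypothetical.

```python
# Sketch: assembling Snowflake connection parameters per auth method,
# following snowflake-connector-python naming. Illustrative only.

def snowflake_auth_params(method: str, user: str, account: str, secret) -> dict:
    base = {"account": account, "user": user}
    if method == "keypair":
        base["private_key"] = secret   # DER-encoded private key bytes
    elif method == "password":
        base["password"] = secret
    elif method == "oauth":
        base["authenticator"] = "oauth"
        base["token"] = secret         # access token minted by your IdP
    else:
        raise ValueError(f"unknown auth method: {method}")
    return base

print(snowflake_auth_params("oauth", "svc_pipeline", "myorg-myaccount", "<token>"))
```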

Operational tips

Warehouse jobs benefit from explicit staging areas (buckets or external stages) and role separation between extract, transform, and publish. Use one connection per environment and per major workload (for example prod-databricks-curated vs prod-databricks-raw) so you can tune warehouse size and cost without cross-contamination.
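The one-connection-per-environment-and-workload convention above is easy to enforce with a small naming helper. The scheme (`env-platform-workload`) is just the example from this page, not a Planasonix requirement, and the function is hypothetical.

```python
# Toy helper enforcing an env-platform-workload connection naming scheme
# (e.g. prod-databricks-curated). The convention is this page's example,
# not a product rule.
import re

def connection_name(env: str, platform: str, workload: str) -> str:
    slugged = [
        re.sub(r"[^a-z0-9]+", "-", part.strip().lower()).strip("-")
        for part in (env, platform, workload)
    ]
    if not all(slugged):
        raise ValueError("env, platform, and workload must be non-empty")
    return "-".join(slugged)

print(connection_name("Prod", "Databricks", "Curated"))   # prod-databricks-curated
print(connection_name("dev", "snowflake", "raw loads"))   # dev-snowflake-raw-loads
```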
Broad warehouse roles (for example ACCOUNTADMIN on Snowflake or owner on entire catalogs) should not be used for pipeline service identities. Create narrow roles with explicit grants.

Cloud storage

Land files before COPY/LOAD or external table patterns.

Credentials management

Key rotation and access reviews for warehouse identities.

Pipelines overview

How warehouse nodes fit into orchestrated graphs.

Reverse ETL

When you push modeled warehouse data back to applications.