Supported providers
Planasonix supports the following categories; exact capabilities depend on your connector edition.

Object and blob storage
- AWS S3 — Including cross-account access, SSE-KMS, and VPC endpoints where your network design requires them.
- Azure Blob Storage — Including Azure Data Lake Storage Gen2 when accessed through the blob or DFS endpoints your connector exposes.
- Google Cloud Storage (GCS) — Project-scoped buckets and uniform bucket-level access patterns.
- Cloudflare R2 — S3-compatible API; set custom endpoint and signing options as required.
- MinIO — Self-hosted or air-gapped S3-compatible deployments.
- Wasabi — S3-compatible hot cloud storage with vendor-specific endpoint configuration.
- Box — Folder- and enterprise-scoped content as exposed by the connector.
- Microsoft OneDrive — Personal or work accounts via Microsoft Graph, per connector support.
- Microsoft SharePoint — Sites, libraries, and drives as exposed by the connector.
- FTP and SFTP — Partner and legacy systems; prefer SFTP when the server supports it.
S3-compatible vendors differ in IAM, region, path-style behavior, and signature versions. Always run Test connection and a small sample read after changing endpoint URLs or signing algorithms.
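To make these differences concrete, here is a minimal stdlib-only sketch of per-vendor settings and the URL shapes they produce. The preset names, keys, and endpoint values are illustrative placeholders, not Planasonix's actual configuration schema, and the addressing defaults shown are examples rather than vendor guarantees:

```python
# Hypothetical per-vendor settings for S3-compatible stores. The keys,
# defaults, and endpoints below are illustrative only; check each vendor's
# documentation for its real endpoint and addressing requirements.
S3_COMPATIBLE_PRESETS = {
    "aws":   {"endpoint_url": None, "addressing": "virtual"},
    "r2":    {"endpoint_url": "https://<account-id>.r2.cloudflarestorage.com",
              "addressing": "virtual"},
    "minio": {"endpoint_url": "https://minio.internal.example:9000",
              "addressing": "path"},
}

def object_url(vendor: str, bucket: str, key: str) -> str:
    """Build an object URL honoring path- vs. virtual-hosted-style addressing."""
    preset = S3_COMPATIBLE_PRESETS[vendor]
    endpoint = preset["endpoint_url"] or "https://s3.amazonaws.com"
    host = endpoint.split("://", 1)[1]
    if preset["addressing"] == "path":
        # Path-style: bucket appears in the path, common for MinIO and proxies.
        return f"{endpoint}/{bucket}/{key}"
    # Virtual-hosted style: bucket appears in the hostname.
    return f"https://{bucket}.{host}/{key}"
```

The practical point: addressing style changes the shape of every request URL, which is why a small sample read after changing an endpoint or signing option catches misconfiguration immediately rather than at pipeline runtime.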
File format support
Planasonix connectors typically support structured and semi-structured file types for parse, split, and schema inference:

| Format | Typical use |
|---|---|
| CSV | Exports from spreadsheets, mainframes, and flat-file exchanges; delimiter, quote, escape, and header options are configurable. |
| JSON | API dumps, document exports, and newline-delimited JSON (NDJSON) event logs. |
| Parquet | Columnar analytics handoffs; efficient for wide tables and nested data. |
| Avro | Schema-evolving pipelines, often paired with Kafka or Hadoop-era ecosystems. |
| XML | Enterprise and industry feeds; row extraction depends on connector XPath or flattening options. |
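As a plain-Python illustration of two rows in this table (the connector's actual parser settings may differ), the stdlib shows what configurable delimiter and quote options mean for CSV, and how NDJSON is read one record per line:

```python
import csv
import io
import json

# Illustrative only: the kind of delimiter/quote options the CSV row above
# refers to, demonstrated with Python's stdlib parser on an in-memory file.
raw_csv = 'id|name\n1|"Smith| Jane"\n2|Lee\n'
rows = list(csv.DictReader(io.StringIO(raw_csv), delimiter="|", quotechar='"'))
# The quoted field keeps its embedded delimiter: rows[0]["name"] == "Smith| Jane"

# NDJSON: one self-contained JSON document per line, parsed independently.
raw_ndjson = '{"event": "open"}\n{"event": "close"}\n'
events = [json.loads(line) for line in raw_ndjson.splitlines() if line.strip()]
```

Getting the delimiter, quote, and header settings right at the connection level is what lets schema inference see clean columns instead of one mis-split text field.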
Configure a storage connection
1. Select the provider connector
   In Connections, choose New connection and pick S3, Azure Blob, GCS, or the protocol-specific tile (SFTP, Box, and so on).
2. Set bucket, container, or path defaults
   Enter bucket or container name, optional prefix or folder roots, and region or endpoint URL for S3-compatible stores. For Graph-backed connectors, select the drive or site context the UI requests.
3. Attach cloud or protocol credentials
   Link AWS, Azure, GCP, or password/key credentials per the tabs below. Scope IAM or RBAC to the smallest prefix or container the pipeline needs.
4. Confirm encryption and TLS
   For object stores, align with your cloud default (SSE-S3, SSE-KMS, customer-managed keys). For SFTP, prefer key-based auth and modern ciphers.
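To illustrate the least-privilege scoping from the credentials step, here is a sketch of an IAM-style bucket policy built as a plain Python dict. The bucket name, prefix, and role ARN are placeholders, and any real policy should follow your cloud provider's current policy grammar:

```python
import json

# Placeholders for illustration; substitute your own bucket, prefix, and role.
BUCKET = "example-landing-bucket"
PREFIX = "sales/raw/"
ROLE_ARN = "arn:aws:iam::123456789012:role/planasonix-ingest"

def scoped_read_write_policy(bucket: str, prefix: str, role_arn: str) -> dict:
    """Build an IAM-style bucket policy limited to one prefix, not the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Object reads/writes allowed only under the prefix.
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
            },
            {   # Listing is restricted to the same prefix via a condition.
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{prefix}*"}},
            },
        ],
    }

policy = scoped_read_write_policy(BUCKET, PREFIX, ROLE_ARN)
print(json.dumps(policy, indent=2))
```

Note that `s3:ListBucket` is granted on the bucket ARN with a prefix condition, while object actions are granted on the prefixed object ARN; mixing those two resource shapes up is a common cause of AccessDenied during Test connection.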
Authentication patterns
- AWS S3
- Azure Blob
- GCS
- SFTP / FTP
Use IAM user keys only when your policy requires static keys; if Planasonix runs in AWS, prefer IAM roles for EKS or EC2, or cross-account assume-role. For buckets in another account, use bucket policies that trust the Planasonix role, and scope s3:GetObject, s3:PutObject, and s3:ListBucket to prefixes rather than entire buckets unless broader access is genuinely necessary.

Layout and naming
Organize prefixes by source system, date, or pipeline run ID so you can partition incremental loads and apply lifecycle rules without scanning entire buckets. If you write back to storage, use a dedicated export prefix separate from raw landing data.

Related topics
Data warehouses
Load staged files into Snowflake, BigQuery, or similar.
Streaming platforms
When continuous ingestion replaces batch file drops.
Credentials management
Storing and rotating cloud keys and SFTP secrets.
Connections overview
How file connections fit the broader connection model.