Overview

Planasonix provides a fully managed, spec-compliant Apache Iceberg REST Catalog so you can connect any query engine — Snowflake, DuckDB, Spark, Trino — directly to tables managed by your pipelines, without configuring external catalogs like AWS Glue or Hive Metastore.

How It Works

  1. Pipeline writes — When a Managed Lakehouse pipeline writes Iceberg data, tables are automatically registered in the hosted catalog
  2. Query engines connect — Point any Iceberg-compatible engine to https://api.planasonix.com/v1 with your API key
  3. Credential vending — On each loadTable request, the catalog provides temporary, read-only storage credentials so engines can access data files directly
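As a sketch of step 2, an engine like Spark can point at the catalog using Iceberg's standard REST catalog properties. The catalog name `planasonix` below is an arbitrary choice, and your connection settings page has the exact snippet for your account:

```properties
# Illustrative Spark configuration (standard Iceberg Spark catalog properties)
spark.sql.catalog.planasonix=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.planasonix.type=rest
spark.sql.catalog.planasonix.uri=https://api.planasonix.com/v1
spark.sql.catalog.planasonix.token=flx_your_api_key_here
```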

Authentication

Direct API Key

Pass your flx_ API key as a Bearer token:
Authorization: Bearer flx_your_api_key_here
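For example, a minimal sketch using only the Python standard library, attaching the key as a Bearer token on a catalog request (the `/v1/config` endpoint shown here is the catalog configuration route; substitute your real key):

```python
import urllib.request

API_KEY = "flx_your_api_key_here"  # placeholder; substitute your real key

# Attach the API key as a Bearer token on any catalog request,
# e.g. fetching the catalog configuration from /v1/config.
req = urllib.request.Request(
    "https://api.planasonix.com/v1/config",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# urllib.request.urlopen(req) would perform the call; omitted here.
```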

OAuth2 Token Exchange

For engines that require the Iceberg REST spec’s OAuth2 flow (Spark, Trino):
curl -X POST https://api.planasonix.com/v1/oauth/tokens \
  -d "grant_type=client_credentials" \
  -d "client_id=flx_your_api_key_here"
Response:
{
  "access_token": "eyJhbG...",
  "token_type": "bearer",
  "expires_in": 3600,
  "scope": "catalog"
}
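The same exchange can be sketched in Python with the standard library; the request body mirrors the curl command above, and the parsing step works against the documented response shape:

```python
import json
import urllib.parse
import urllib.request

# Build the client_credentials request (form-encoded), as in the curl example.
# "flx_your_api_key_here" is a placeholder for your API key.
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "flx_your_api_key_here",
}).encode()
req = urllib.request.Request(
    "https://api.planasonix.com/v1/oauth/tokens", data=body, method="POST"
)
# resp = urllib.request.urlopen(req)  # would return JSON like the sample below

# Parsing the documented response shape:
sample = ('{"access_token": "eyJhbG...", "token_type": "bearer", '
          '"expires_in": 3600, "scope": "catalog"}')
token = json.loads(sample)
bearer = f"{token['token_type'].title()} {token['access_token']}"
# Send `bearer` in the Authorization header; refresh after `expires_in` seconds.
```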

API Endpoints

Method   Endpoint                                Description
GET      /v1/config                              Catalog configuration
POST     /v1/oauth/tokens                        OAuth2 token exchange
GET      /v1/namespaces                          List namespaces
POST     /v1/namespaces                          Create namespace
GET      /v1/namespaces/{ns}                     Get namespace
DELETE   /v1/namespaces/{ns}                     Drop namespace
POST     /v1/namespaces/{ns}/properties          Update namespace properties
GET      /v1/namespaces/{ns}/tables              List tables
POST     /v1/namespaces/{ns}/tables              Create table
GET      /v1/namespaces/{ns}/tables/{table}      Load table (with credentials)
POST     /v1/namespaces/{ns}/tables/{table}      Commit table updates
DELETE   /v1/namespaces/{ns}/tables/{table}      Drop table
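When building these paths by hand, note that the Iceberg REST spec joins multi-level namespaces with the unit-separator character (0x1F), which URL-encodes as %1F. A small illustrative helper:

```python
import urllib.parse

BASE = "https://api.planasonix.com/v1"

def table_url(namespace: list[str], table: str) -> str:
    # Per the Iceberg REST spec, multi-level namespaces are joined with the
    # unit-separator character (0x1F), URL-encoded as %1F.
    ns = urllib.parse.quote("\x1f".join(namespace))
    return f"{BASE}/namespaces/{ns}/tables/{urllib.parse.quote(table)}"
```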

Credential Vending

When you load a table, the response includes temporary storage credentials in the config field:

AWS S3

{
  "config": {
    "s3.access-key-id": "ASIA...",
    "s3.secret-access-key": "...",
    "s3.session-token": "...",
    "s3.region": "us-east-1"
  }
}

Google Cloud Storage

{
  "config": {
    "gcs.credentials": "{\"type\": \"service_account\", ...}"
  }
}

Azure Blob Storage

{
  "config": {
    "adls.sas-token.account.dfs.core.windows.net": "sv=2022-11-02&ss=b&...",
    "adls.auth.shared-key.account.name": "mystorageaccount"
  }
}
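Most engines consume these properties automatically through Iceberg's FileIO layer, so no extra wiring is usually needed. If you are building a custom client, the provider-specific key prefixes tell you which storage backend the credentials target; a minimal sketch:

```python
def storage_provider(config: dict) -> str:
    # Vended credential keys are prefixed by provider, as in the
    # examples above: "s3.", "gcs.", or "adls.".
    if any(k.startswith("s3.") for k in config):
        return "s3"
    if any(k.startswith("gcs.") for k in config):
        return "gcs"
    if any(k.startswith("adls.") for k in config):
        return "adls"
    raise ValueError("no recognized storage credentials in config")

# The AWS S3 example response from above:
s3_response = {
    "config": {
        "s3.access-key-id": "ASIA...",
        "s3.secret-access-key": "...",
        "s3.session-token": "...",
        "s3.region": "us-east-1",
    }
}
```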

Tier Limits

Tier           Max Tables    API Requests/Day
Professional   10            10,000
Premium        50            100,000
Enterprise     Unlimited     Unlimited

Setup

  1. Enable Managed Lakehouse: Create a Managed Lakehouse connection with the Hosted (Planasonix) catalog type.
  2. Run a Pipeline: Configure a pipeline with a Managed Lakehouse destination node and run it. Tables are auto-registered.
  3. Connect Your Engine: Use the connection snippets from the connection settings page to configure your query engine.