AI provider connections supply authenticated access to hosted large language models and related APIs. You use them in pipeline steps that classify, summarize, extract entities from text, or generate structured JSON from unstructured input, and in product features such as AI Copilot that assist with mapping, SQL, or documentation inside the Planasonix UI.

Supported providers

| Provider | Models (examples) | Typical credential |
| --- | --- | --- |
| OpenAI | GPT-4.1 family, GPT-4o, o-series reasoning models, embeddings | Organization-scoped API key |
| Anthropic (Claude) | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | API key with usage limits |
| Google AI (Gemini) | Gemini 1.5 Pro/Flash and successor tiers | API key or Google Cloud–backed auth as supported |
Exact model names and availability change as vendors release updates; the connection panel lists the current catalog your workspace is entitled to use.

API key configuration

1. Create a restricted key in the provider console
   In OpenAI, Anthropic, or Google AI Studio / GCP, create an API key used only for Planasonix. Disable capabilities you do not need (for example, image generation or fine-tuning) if the console allows per-key restrictions.
2. Add an API key credential in Planasonix
   Open Credentials, choose API key, paste the secret once, and label it with environment and cost center (for example prod-openai-etl). Never commit keys to Git or pipeline YAML.
3. Attach the credential to an AI connection
   Create a new AI provider connection, select the vendor, and link the credential. Set defaults such as the default model, region (if applicable), and max-token caps that match your budget guardrails.
4. Validate with a dry-run transform
   Run a sample pipeline or Copilot action in a non-production workspace to confirm billing, quota, and content policy filters behave as expected before promoting jobs.
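A dry-run validation can also be scripted. The sketch below, which assumes the public OpenAI Chat Completions endpoint, builds the cheapest possible authenticated request (one token) and reads the key from an environment variable rather than from source; the `OPENAI_API_KEY` variable name and default model are assumptions you should adapt to your provider and catalog.

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_dry_run_request(api_key: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Build a minimal one-token chat request to confirm a key is live."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,  # keep the validation call as cheap as possible
    }
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Read the key from the environment; never hard-code it in scripts or YAML.
    key = os.environ["OPENAI_API_KEY"]
    with urllib.request.urlopen(build_dry_run_request(key)) as resp:
        print(resp.status)  # a 200 confirms auth, quota, and billing are active
```

A successful response also exercises the provider's content-policy filters, so a dry run surfaces policy rejections before production jobs hit them.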
If your enterprise uses Azure OpenAI instead of the public API, create a connection that targets the Azure resource endpoint and uses whichever of Azure AD or key-based auth your Planasonix deployment supports. Do not mix public and Azure keys on the same connection.
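The two request shapes differ enough that mixing keys fails fast. A sketch of the difference, with placeholder resource, deployment, and api-version values you would replace with your own:

```python
def public_openai_target() -> tuple[str, dict]:
    """Public API: fixed host, Bearer auth, model named in the request body."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": "Bearer <public-key>"}
    return url, headers

def azure_openai_target(resource: str, deployment: str, api_version: str) -> tuple[str, dict]:
    """Azure OpenAI: routes by deployment name, not model name, and expects
    an api-key header (or an Azure AD bearer token), not the public key."""
    url = (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )
    headers = {"api-key": "<azure-key>"}
    return url, headers
```

Because the host, path, and auth header all differ, a public key sent to an Azure endpoint (or vice versa) is rejected at the gateway, which is why the connection must target one or the other.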

Use cases in pipelines

LLM transforms — Send row or batch context to the model with a system prompt that constrains output format (for example "return JSON with keys sentiment and confidence"). Validate JSON in a downstream step and quarantine malformed responses.

Enrichment — Combine retrieved fields (customer notes, ticket bodies) with retrieval-augmented prompts referencing allowed knowledge sources only.

AI Copilot — Uses the same connections under the hood for assisted authoring. Point Copilot traffic at separate keys or budgets from bulk batch jobs when finance needs cost attribution.
Do not send regulated personal data to external models until your legal and security teams approve data processing agreements, residency, and retention. Use field masking or hashing steps before the LLM call when policies require minimization.
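A hashing-based minimization step can run immediately before the LLM call. The sketch below is one possible policy, not a prescribed one: the field list, salt, and digest truncation are all assumptions your security team would set, and the salt should come from a managed secret, not source code.

```python
import hashlib

MASK_FIELDS = {"email", "ssn"}  # illustrative policy: fields to minimize

def minimize(row: dict, salt: str = "<managed-secret>") -> dict:
    """Replace regulated fields with a salted SHA-256 digest so the model
    sees a stable pseudonym, never the raw value."""
    out = dict(row)
    for field in MASK_FIELDS & row.keys():
        digest = hashlib.sha256((salt + str(row[field])).encode()).hexdigest()
        out[field] = digest[:16]  # truncated digest keeps prompts short
    return out
```

Because the digest is deterministic for a given salt, the model can still correlate rows about the same customer without ever receiving the identifier itself; rotating the salt breaks that linkage on demand.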

Related pages

APIs and webhooks — Generic HTTP AI endpoints and custom model gateways.
AI Copilot overview — In-product assistance that consumes AI provider connections.
Credentials management — Rotation and access control on API keys.
Connections overview — How AI connections relate to other connection types.