Streaming connections let Planasonix consume and produce events with low latency. You map topics or streams to pipeline steps, control offset management and consumer groups, and optionally enforce schemas through a registry so downstream transforms stay stable as producers evolve.

Supported platforms

  • Apache Kafka — Self-managed clusters and on-premises deployments.
  • Confluent Cloud — Managed Kafka with Schema Registry and enterprise security options.
  • Redpanda — Kafka-compatible API; configure broker URLs and SASL/SCRAM or TLS as your cluster requires.
  • Amazon Kinesis Data Streams — AWS-native streaming; IAM-based access to streams and shards.
  • Apache Pulsar — Multi-tenant messaging; configure the web service URL, authentication, and tenant/namespace/topic paths as the connector supports.
  • Upstash — Serverless Kafka with REST and Kafka protocol endpoints depending on the connector mode.
Throughput, exactly-once semantics, and transactional guarantees depend on the connector, your cluster configuration, and how you design idempotent sinks. Validate behavior in a staging cluster before promoting to production.
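One common way to make end-to-end behavior safe under at-least-once delivery is an idempotent sink that deduplicates by event ID, so broker redelivery never double-applies a write. The sketch below is illustrative, assuming events carry a unique `id` field; `IdempotentSink` is a hypothetical name, not a Planasonix API.

```python
# Sketch: an idempotent sink keyed on event ID, so redelivery after a
# rebalance or retry does not double-apply the write.
class IdempotentSink:
    def __init__(self):
        self._seen: set[str] = set()   # in production: a keyed store or upsert
        self.applied: list[dict] = []

    def apply(self, event: dict) -> bool:
        """Apply an event once; return False if it was already processed."""
        event_id = event["id"]
        if event_id in self._seen:
            return False               # duplicate delivery: safe no-op
        self._seen.add(event_id)
        self.applied.append(event)
        return True

sink = IdempotentSink()
sink.apply({"id": "evt-1", "value": 10})
sink.apply({"id": "evt-1", "value": 10})   # redelivered by the broker
print(len(sink.applied))  # 1
```

In a real deployment the `_seen` set would be a durable keyed store or an upsert into the sink itself, since in-memory state does not survive restarts.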

Schema registry support

When events carry Avro, Protobuf, or JSON Schema, a registry stores writer schema versions so consumers deserialize safely.
Point the connection at the Schema Registry URL and supply an API key and secret, or mutual TLS, if your organization requires it. Enable auto-registration only in non-production unless your governance team expects producer-side registration. In production, many teams register schemas through CI and treat the registry as the contract between teams.
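For reference, registry settings typically reduce to a URL plus basic-auth material. The shape below follows the configuration keys used by the Confluent Python client (`confluent-kafka`); other clients name these differently, and the URL and credentials are placeholders.

```python
# Sketch: Schema Registry connection settings in the key style the
# Confluent Python client expects. Values here are placeholders.
registry_conf = {
    "url": "https://schema-registry.example.com",
    # API key and secret joined as "key:secret"; pull these from a
    # secrets manager rather than hardcoding them.
    "basic.auth.user.info": "SR_API_KEY:SR_API_SECRET",
}

# With confluent-kafka installed you would then construct (not run here):
# from confluent_kafka.schema_registry import SchemaRegistryClient
# client = SchemaRegistryClient(registry_conf)
```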

Configure a streaming connection

1. Choose the streaming connector

In Connections, select Kafka, Kinesis, Pulsar, or the managed service tile (for example Confluent Cloud).
2. Enter broker or control-plane endpoints

Provide bootstrap servers, Kinesis stream ARN or name, Pulsar broker/web service URL, or Upstash endpoint strings. Enable TLS and SASL for production clusters.
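A production Kafka-style endpoint configuration usually combines bootstrap servers with TLS and SASL in one place. The sketch below uses librdkafka/confluent-kafka key names (other clients differ); the hostnames are placeholders, and the credentials are read from environment variables only for illustration.

```python
import os

# Sketch: production-style broker settings, librdkafka key naming.
broker_conf = {
    "bootstrap.servers": "broker-1.example.com:9093,broker-2.example.com:9093",
    "security.protocol": "SASL_SSL",     # TLS transport plus SASL auth
    "sasl.mechanism": "SCRAM-SHA-512",
    "sasl.username": os.environ.get("KAFKA_USER", "pipeline-svc"),
    "sasl.password": os.environ.get("KAFKA_PASSWORD", ""),  # never hardcode
}
print(broker_conf["security.protocol"])  # SASL_SSL
```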
3. Attach credentials

Store passwords, API keys, IAM roles, and trust material in Credentials management. Reference them from the connection; do not paste secrets into topic or consumer property fields that might be logged.
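The point of referencing credentials rather than pasting them is that connection fields may end up in logs while the secret itself never does. The resolver below is a minimal sketch; the `env:` reference scheme is invented for illustration and is not Planasonix syntax.

```python
import os

# Sketch: resolve a credential *reference* at runtime so the secret never
# sits in a loggable connection field. The "env:" scheme is illustrative.
def resolve_credential(ref: str) -> str:
    """Resolve 'env:NAME' references from the process environment."""
    scheme, _, name = ref.partition(":")
    if scheme == "env":
        return os.environ.get(name, "")
    raise ValueError(f"unsupported credential scheme: {scheme}")

os.environ["KAFKA_SASL_PASSWORD"] = "example-only"
print(resolve_credential("env:KAFKA_SASL_PASSWORD"))  # example-only
```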
4. Link schema registry when required

If payloads are schema-encoded, configure Confluent Schema Registry or AWS Glue Schema Registry URLs and authentication in the same connection or a companion settings panel, depending on your Planasonix version.
5. Define consumer groups and topic scope

Assign a consumer group per critical pipeline so offsets do not collide. Allowlist topic prefixes or explicit topic lists when the UI supports it, and mirror restrictions with broker ACLs.
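A simple naming convention keeps groups distinct per pipeline and environment. The convention below is an assumption for illustration, not a Planasonix default.

```python
# Sketch: derive one consumer group per critical pipeline so committed
# offsets never collide across pipelines. Naming scheme is illustrative.
def consumer_group(pipeline: str, env: str = "prod") -> str:
    return f"planasonix.{env}.{pipeline}"

print(consumer_group("orders-enrichment"))  # planasonix.prod.orders-enrichment
# Two pipelines never share offsets:
assert consumer_group("orders-enrichment") != consumer_group("fraud-scoring")
```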
6. Test and tune backpressure

Run Test connection and a short consume in non-production. Set max poll records, fetch sizes, and commit behavior to match broker limits and downstream sink capacity.
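The knobs in this step typically map to a handful of consumer settings. The sketch below uses librdkafka-style key names (the Java client spells some differently, e.g. `max.poll.records` exists only there); the values are starting points to validate in staging, not recommendations.

```python
# Sketch: backpressure-related consumer tuning, librdkafka key naming.
# Values are illustrative starting points; validate in staging.
tuning = {
    "max.poll.interval.ms": 300_000,      # must exceed worst-case batch time
    "fetch.max.bytes": 5 * 1024 * 1024,   # cap the size of fetch responses
    "enable.auto.commit": False,          # commit only after the sink succeeds
}
```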

Configuration concepts

Each connection or pipeline step uses a consumer group so Kafka (or compatible brokers) track progress. Reusing a group across unrelated pipelines can cause skipped or replayed messages—give each critical pipeline its own group unless you intentionally share consumption.
Enable SASL (PLAIN, SCRAM, GSSAPI/Kerberos where supported) and TLS for production clusters. Store passwords and trust stores through Credentials management, not in topic configuration fields.
You can allowlist topic prefixes or explicit topic lists to avoid accidental reads from sensitive namespaces. Pair with ACLs on the broker side for defense in depth.
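Client-side allowlisting is cheap to express as a prefix check, with broker ACLs enforcing the same boundary. The prefixes below are hypothetical examples.

```python
# Sketch: client-side topic allowlisting by prefix, mirroring broker ACLs.
ALLOWED_PREFIXES = ("orders.", "payments.")

def topic_allowed(topic: str) -> bool:
    # str.startswith accepts a tuple of prefixes
    return topic.startswith(ALLOWED_PREFIXES)

print(topic_allowed("orders.created"))    # True
print(topic_allowed("pii.user-events"))   # False
```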

Cloud storage

Archive stream batches or dead-letter objects to object storage.

APIs and webhooks

Bridge HTTP callbacks into topics through an integration layer when needed.

Streaming overview

How stream processing fits next to batch pipelines in Planasonix.

Credentials management

SASL passwords, IAM, and registry API keys.