Supported platforms
- Apache Kafka — Self-managed clusters and on-premises deployments.
- Confluent Cloud — Managed Kafka with Schema Registry and enterprise security options.
- Redpanda — Kafka-compatible API; configure broker URLs and SASL/SCRAM or TLS as your cluster requires.
- Amazon Kinesis Data Streams — AWS-native streaming; IAM-based access to streams and shards.
- Apache Pulsar — Multi-tenant messaging; configure the web service URL, authentication, and tenant/namespace/topic paths as the connector supports.
- Upstash — Serverless Kafka with REST and Kafka protocol endpoints depending on the connector mode.
Throughput, exactly-once semantics, and transactional guarantees depend on the connector, your cluster configuration, and how you design idempotent sinks. Validate behavior in a staging cluster before promoting to production.
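One way to make a sink idempotent is to deduplicate on a stable event ID, so redeliveries under at-least-once semantics are applied only once. The sketch below is illustrative, not a Planasonix API: `apply_event` and the in-memory `seen` set are stand-ins, and a production sink would persist processed IDs transactionally alongside the write.

```python
# Minimal idempotent-sink sketch: deduplicate by a stable event ID so a
# redelivered message is applied only once. The in-memory `seen` set is a
# stand-in for durable processed-ID storage committed with each write.

def make_idempotent_sink(apply_event):
    seen = set()  # stand-in for durable processed-ID storage

    def sink(event):
        event_id = event["id"]
        if event_id in seen:
            return False  # duplicate delivery: skip the side effect
        apply_event(event)
        seen.add(event_id)
        return True

    return sink

# A redelivered event is applied exactly once.
written = []
sink = make_idempotent_sink(written.append)
sink({"id": "evt-1", "value": 10})
sink({"id": "evt-1", "value": 10})  # duplicate, ignored
```

Validating this pattern in staging, as advised above, means deliberately forcing redelivery (for example, by restarting the consumer before an offset commit) and confirming the sink's output does not change.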
Schema registry support
When events carry Avro, Protobuf, or JSON Schema payloads, a registry stores writer schema versions so consumers can deserialize safely. Supported registries:
- Confluent Schema Registry
- AWS Glue Schema Registry
Point the connection at the Schema Registry URL and supply an API key and secret, or mutual TLS, if your organization requires it. Enable auto-registration only in non-production environments unless your governance team expects producer-side registration. In production, many teams register schemas through CI and treat the registry as the contract between teams.
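The URL-plus-API-key shape can be sketched as a small config builder. The key names below follow the conventions of Confluent's Python client (`SchemaRegistryClient`), where basic auth is passed as a single `key:secret` string; the URL and credential values are placeholders that in Planasonix would come from Credentials management.

```python
# Sketch of Schema Registry client settings in the key/value form used by
# Confluent's Python client. URL and credentials are placeholders.

def registry_config(url, api_key, api_secret):
    return {
        "url": url,  # your registry endpoint
        # Basic auth is supplied as a single "key:secret" string.
        "basic.auth.user.info": f"{api_key}:{api_secret}",
    }

cfg = registry_config("https://registry.example.com", "SR_KEY", "SR_SECRET")
```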
Configure a streaming connection
Choose the streaming connector
In Connections, select Kafka, Kinesis, Pulsar, or the managed service tile (for example Confluent Cloud).
Enter broker or control-plane endpoints
Provide bootstrap servers, Kinesis stream ARN or name, Pulsar broker/web service URL, or Upstash endpoint strings. Enable TLS and SASL for production clusters.
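For a Kafka-protocol cluster, a production-leaning configuration might look like the sketch below. Property names follow librdkafka conventions (used by Confluent's Python, Go, and C clients); the endpoints, group name, and credentials are placeholders, and the SASL mechanism should match what your cluster requires.

```python
# Sketch of a TLS + SASL consumer configuration using librdkafka-style
# property names. All values are placeholders.

def consumer_config(bootstrap_servers, group_id, username, password):
    return {
        "bootstrap.servers": bootstrap_servers,  # host:port[,host:port...]
        "group.id": group_id,                    # one group per pipeline
        "security.protocol": "SASL_SSL",         # TLS plus SASL for production
        "sasl.mechanism": "SCRAM-SHA-512",       # or PLAIN / GSSAPI, per cluster
        "sasl.username": username,
        "sasl.password": password,
        "auto.offset.reset": "earliest",         # start point when no committed offset exists
    }

cfg = consumer_config(
    "b1.example.com:9092,b2.example.com:9092",
    "orders-pipeline",
    "svc-orders",
    "s3cret",
)
```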
Attach credentials
Store passwords, API keys, IAM roles, and trust material in Credentials management. Reference them from the connection; do not paste secrets into topic or consumer property fields that might be logged.
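Referencing rather than embedding secrets can be sketched as resolving them at runtime, for example from environment variables populated by Credentials management or a secrets manager. The variable name below is illustrative.

```python
import os

# Sketch: resolve secrets at runtime instead of embedding them in topic or
# consumer property fields, where they might be logged. The variable name
# is illustrative.

def resolve_secret(name):
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing credential: {name}")
    return value

# In practice the platform injects this; set here only for demonstration.
os.environ["KAFKA_SASL_PASSWORD"] = "s3cret"
password = resolve_secret("KAFKA_SASL_PASSWORD")
```

Failing fast on a missing credential surfaces misconfiguration at startup rather than as an authentication error mid-pipeline.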
Link schema registry when required
If payloads are schema-encoded, configure Confluent Schema Registry or Glue Schema Registry URLs and auth in the same connection or companion settings panel, per your Planasonix version.
Define consumer groups and topic scope
Assign a consumer group per critical pipeline so offsets do not collide. Allowlist topic prefixes or explicit topic lists when the UI supports it, and mirror restrictions with broker ACLs.
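A simple way to guarantee one group per pipeline is a deterministic naming convention. The `env.team.pipeline` scheme below is an illustrative convention, not a Planasonix requirement.

```python
# Sketch: derive one consumer group per critical pipeline so offsets never
# collide. The env.team.pipeline naming scheme is an illustrative convention.

def group_id(env, team, pipeline):
    return f"{env}.{team}.{pipeline}"

# Two pipelines get two distinct groups, so offset tracking is independent.
groups = {group_id("prod", "payments", p) for p in ["orders", "refunds"]}
```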
Configuration concepts
Consumer groups and offsets
Each connection or pipeline step uses a consumer group so Kafka (or compatible brokers) track progress. Reusing a group across unrelated pipelines can cause skipped or replayed messages—give each critical pipeline its own group unless you intentionally share consumption.
Security
Enable SASL (PLAIN, SCRAM, GSSAPI/Kerberos where supported) and TLS for production clusters. Store passwords and trust stores through Credentials management, not in topic configuration fields.
Topic discovery
You can allowlist topic prefixes or explicit topic lists to avoid accidental reads from sensitive namespaces. Pair with ACLs on the broker side for defense in depth.
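Client-side allowlisting by prefix can be sketched as a simple filter; broker ACLs remain the authoritative control, and the prefixes below are illustrative.

```python
# Sketch of client-side topic allowlisting by prefix. Broker ACLs are the
# authoritative control; this only prevents accidental subscriptions.
# The prefixes are illustrative.

ALLOWED_PREFIXES = ("analytics.", "orders.")

def topic_allowed(topic, prefixes=ALLOWED_PREFIXES):
    return topic.startswith(tuple(prefixes))

topics = ["analytics.clicks", "pii.customers", "orders.created"]
readable = [t for t in topics if topic_allowed(t)]  # pii.customers is excluded
```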
Related topics
Cloud storage
Archive stream batches or dead-letter objects to object storage.
APIs and webhooks
Bridge HTTP callbacks into topics through an integration layer when needed.
Streaming overview
How stream processing fits next to batch pipelines in Planasonix.
Credentials management
SASL passwords, IAM, and registry API keys.