
Kafka
Overview
Apache Kafka is an open-source distributed event streaming platform. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
Setting up Kafka
1. Add integration
Select Kafka from the integrations page.
2. Configure settings
Fill out the form with the following settings:
| Setting | Required | Description |
|---|---|---|
| Name | TRUE | Name that will be displayed to users when selecting this integration in Superblocks |
| Brokers | TRUE | Comma-separated list of broker endpoints |
| SASL Mechanism | TRUE | Authentication mechanism. Choose between PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, and AWS |
| Username | TRUE | Username to connect to the broker |
| Password | TRUE | Password for the broker username |
| Enable SSL | FALSE | Connect via SSL if selected (SSL encryption should always be used if the SASL mechanism is PLAIN) |
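For reference, these settings map one-to-one onto a standard Kafka client configuration. Below is a minimal sketch using the kafka-python library; the broker endpoints, username, and password are placeholders, and SCRAM-SHA-512 is shown as just one of the supported mechanisms:

```python
from kafka import KafkaProducer

# Placeholder values; substitute your own brokers and credentials.
client = KafkaProducer(
    bootstrap_servers=["broker-1.example.com:9096", "broker-2.example.com:9096"],  # Brokers
    security_protocol="SASL_SSL",        # Enable SSL
    sasl_mechanism="SCRAM-SHA-512",      # SASL Mechanism
    sasl_plain_username="my-user",       # Username
    sasl_plain_password="my-password",   # Password
)
```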
3. Test and save
Click Test Connection to check that Superblocks can connect to the data source.
info
If using Superblocks Cloud, add the Superblocks IPs to your allowlist (not necessary for the On-Premise Agent)
After connecting successfully, click Create to save the integration.
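If the connection test fails, it can help to verify connectivity outside Superblocks. Here is a minimal sketch with kafka-python, reusing the placeholder settings from the sketch above; listing topics succeeds only if the brokers are reachable and the credentials are valid:

```python
from kafka.admin import KafkaAdminClient

# Placeholder connection details, matching the integration form above.
admin = KafkaAdminClient(
    bootstrap_servers="broker-1.example.com:9096",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_username="my-user",
    sasl_plain_password="my-password",
)
print(admin.list_topics())  # raises if brokers are unreachable or auth fails
admin.close()
```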
4. Set profiles
Optionally, configure different profiles for separate development environments.
success
Kafka Connected. You can now consume and produce messages through Kafka in any Application, Workflow, or Scheduled Job.
Use Kafka in APIs
Connect to your Kafka integration from Superblocks by creating steps in Application backend APIs, Workflows, and Scheduled Jobs. A Kafka step can perform a Consume or Produce action.
info
For Kafka-based integrations, Consume steps must be placed inside a Stream block, whereas Produce steps cannot be. For general concepts around streaming in Superblocks, see Streaming Applications.
- Consume
- Produce
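
In Superblocks, both actions are configured through the step form rather than written as code, but a short kafka-python sketch illustrates what each one does under the hood (the topic name and payload here are hypothetical):

```python
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = "broker-1.example.com:9096"  # placeholder; SASL/SSL kwargs omitted for brevity

# Produce: publish a JSON message to a topic (topic name is hypothetical).
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()

# Consume: read messages from the topic as they arrive, analogous to a
# Consume step running inside a Stream block.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # each event can feed downstream steps
```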

