Redpanda
Redpanda is an Apache Kafka-compatible streaming data platform.

Setting up Redpanda

1. Add integration

Select Redpanda from the integrations page.

2. Configure settings

Fill out the form with the following settings:

| Setting | Required | Description |
| --- | --- | --- |
| Name | Yes | Name displayed to users when selecting this integration in Superblocks |
| Brokers | Yes | Comma-separated list of broker endpoints |
| SASL Mechanism | Yes | Authentication mechanism: PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512 |
| Username | Yes | Username to connect to the broker |
| Password | Yes | Password for the broker username |
| Enable SSL | No | Connect via SSL if selected (SSL encryption should always be used when the SASL mechanism is PLAIN) |
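As an illustration of how these settings fit together, here is a minimal Python sketch. The dictionary keys, broker hostnames, and credentials are all hypothetical placeholders, not Superblocks' actual field names or real endpoints; the point is the shape of the Brokers field, which is a single comma-separated string.

```python
# Hypothetical example values for the integration form above;
# hostnames and credentials are placeholders, not real endpoints.
redpanda_settings = {
    "name": "redpanda-prod",
    "brokers": "broker-0.example.com:9092,broker-1.example.com:9092",
    "sasl_mechanism": "SCRAM-SHA-256",
    "username": "superblocks-svc",
    "password": "********",
    "enable_ssl": True,
}

# The Brokers field is one comma-separated string; splitting it
# yields the individual endpoints a Kafka client would connect to.
brokers = redpanda_settings["brokers"].split(",")
print(brokers)
```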

3. Test and save

Click Test Connection to check that Superblocks can connect to the data source.


If using Superblocks Cloud, add the Superblocks IPs to your allowlist (not necessary for the On-Premise Agent).

After connecting successfully, click Create to save the integration.

4. Set profiles

Optionally, configure different profiles for separate development environments.


Redpanda connected! You can now consume and produce messages through Redpanda in any Application, Workflow, or Scheduled Job.

Use Redpanda in APIs

Once your Redpanda integration is created, you can add steps to Application backend APIs, Workflows, and Scheduled Jobs. Redpanda steps can either Consume or Produce messages.


Learn more about building internal tools with streaming in our Streaming Applications guide.


To consume data from Redpanda:
  1. Add a Stream block to your API
  2. Add a block for your new Redpanda integration with the action set to Consume
  3. Set the topic to consume from, how you want to consume, and any advanced settings desired
  4. Optionally, configure Process steps to process each message read off the stream
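To make step 4 concrete, here is a minimal Python sketch of what a Process step might do with each consumed message. The message object below is illustrative only (not Superblocks' exact runtime shape), and it assumes the message value is JSON.

```python
import json

# Illustrative consumed message; in Superblocks, the Stream block
# delivers messages to each Process step as they are read.
message = {
    "topic": "orders",
    "partition": 0,
    "offset": 42,
    "value": '{"order_id": "A-1001", "amount": 25.5}',
}

def process(msg):
    """Hypothetical Process step: parse the JSON value and flag large orders."""
    record = json.loads(msg["value"])
    record["large_order"] = record["amount"] > 20
    return record

result = process(message)
print(result)
```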
[Screenshot: consume from topic]


Write data to Redpanda by adding a step to your API with the Produce action.

[Screenshot: produce to topic]

Superblocks supports JSON-formatted messages using the schema below. Schema Registries are currently unsupported.

| Field | Required | Description |
| --- | --- | --- |
| topic | Yes | The topic this record will be sent to |
| value | Yes | Record contents |
| partition | No | The partition that the record should be sent to |
| key | No | Key used to deterministically map messages to partitions based on the hash of the key |
| timestamp | No | The timestamp of the record, in milliseconds since epoch |
| headers | No | Headers to be included with the record, sent as an object of key-value pairs |
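A record matching this schema can be sketched as a plain JSON object. The topic name, key, and header values below are made up for illustration; only `topic` and `value` are required, so unset optional fields are simply omitted.

```python
import json

# Example record matching the produce schema above; topic, key,
# and header values are illustrative placeholders.
record = {
    "topic": "orders",                                 # required: destination topic
    "value": {"order_id": "A-1001", "amount": 25.5},   # required: record contents
    "key": "A-1001",                                   # optional: hashed to pick a partition
    "timestamp": 1700000000000,                        # optional: ms since epoch
    "headers": {"source": "superblocks"},              # optional: key-value pairs
}

payload = json.dumps(record)
print(payload)
```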