Installation
Option 1: Docker Compose
Create a docker-compose.yml file with your Kafka configuration:
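A minimal sketch of what such a file might look like. The image name, port, and environment variable names below are assumptions for illustration; check the Kanal release notes for the published image and the full list of settings.

```yaml
# docker-compose.yml -- hypothetical image and settings
services:
  kanal:
    image: kanal/kanal:latest        # placeholder; use the published image name
    ports:
      - "8080:8080"                  # editor UI
    environment:
      # point Kanal at your Kafka cluster
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
```

Then start it with `docker compose up -d`.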
Option 2: Download the JAR
Download the latest release JAR and run it directly, configuring your Kafka connection in an application.yml file:
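For example, assuming Kanal ships as a Spring Boot JAR (which picks up an application.yml from the working directory by default), this might look like the following. The JAR file name and property keys are placeholders; use the names from the actual release.

```shell
# Hypothetical JAR name -- download the real one from the releases page
java -jar kanal.jar
```

```yaml
# application.yml -- property names are assumptions for illustration
kafka:
  bootstrap-servers: localhost:9092
```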
Access the Editor
Once Kanal is running, open your browser and navigate to the editor.
Build Your First Pipeline
Let’s create a simple pipeline that reads from a Kafka topic, transforms the data, and writes to a database.
1. Add a Kafka Consumer
Drag a Kafka Consumer node onto the canvas from the left sidebar. Configure it:
- Topic: orders
- Schema Type: JSON, or AVRO_SR if using Schema Registry
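As a concrete example, a JSON message on the orders topic might look like the following. The field names here are hypothetical and only serve to illustrate the steps that follow:

```json
{
  "orderId": "1001",
  "customer": { "email": "jane@example.com" },
  "price": 19.99,
  "quantity": 2
}
```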
2. Add a Transform Node
Drag a Transform node onto the canvas and connect it to your Kafka Consumer. Add a JSONata expression to transform the data. See Transformations for more on JSONata.
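A sketch of what such an expression could look like, assuming an input message with hypothetical orderId, price, quantity, and customer.email fields; it builds a flat output object and computes a derived total:

```
{
  "order_id": orderId,
  "customer_email": customer.email,
  "total": price * quantity
}
```

JSONata object constructors like this map each output field to an expression evaluated against the incoming message.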
3. Add a JDBC Sink
Drag a JDBC Sink node onto the canvas and connect it to the Transform node. Configure it:
- Database: Select your configured database
- Table: processed_orders
4. Run the Pipeline
Click the Play button in the toolbar to start the pipeline. Monitor the throughput and any errors in the metrics panel.
Verify It’s Working
You can verify your pipeline is working by:
- Producing test messages to your Kafka topic
- Checking the metrics in the Kanal UI
- Querying your database to see the processed records
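A sketch of those checks from the command line, assuming a local broker on localhost:9092, the Apache Kafka CLI tools on your PATH, and a PostgreSQL sink (substitute your own database client and connection details):

```shell
# Produce a test message to the orders topic
echo '{"orderId": "1001", "customer": {"email": "jane@example.com"}, "price": 19.99, "quantity": 2}' | \
  kafka-console-producer.sh --bootstrap-server localhost:9092 --topic orders

# Check that the record reached the sink table
psql -c 'SELECT * FROM processed_orders LIMIT 5;'
```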