Kanal is configured through application.yml. This page covers all available configuration options.

Configuration File Precedence

Kanal loads configuration from the following sources:
  1. Configuration files, loaded in order from the system property micronaut.config.files or the environment variable MICRONAUT_CONFIG_FILES
  2. Environment variables
  3. Java system properties
When the same property is set in more than one place, the usual Micronaut precedence applies: Java system properties override environment variables, which in turn override configuration files. Stick to one configuration source to avoid precedence confusion.
# With Java System Property
java -jar kanal.jar -Dmicronaut.config.files=/path/to/application.yml

# With Environment Variable
export MICRONAUT_CONFIG_FILES=/path/to/application.yml
java -jar kanal.jar
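For orientation, a minimal application.yml combining the sections described on this page might look like the following sketch (all values are placeholders taken from the examples below):
# Kafka connection (see Kafka Configuration)
kafka:
  bootstrap.servers: localhost:9092
  group.id: kanal-pipeline-1

# Named database connections (see Database Connections)
databases:
  default:
    connection.url: jdbc:postgresql://localhost:5432/mydb
    connection.user: postgres
    connection.password: secret

# Log levels (see Logging)
logger:
  levels:
    io.kanal: INFO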

Environment Variables

You can override any configuration property with an environment variable. The naming convention:
  • Replace . with _
  • Replace - with _
  • Use uppercase
For example, kafka.bootstrap.servers becomes KAFKA_BOOTSTRAP_SERVERS and databases.default.connection.url becomes DATABASES_DEFAULT_CONNECTION_URL, as in this Docker Compose setup:
services:
  kanal:
    image: kanal:latest
    ports:
      - "8080:8080"
    environment:
      KAFKA_BOOTSTRAP_SERVERS: broker1:9092,broker2:9092
      DATABASES_DEFAULT_CONNECTION_URL: jdbc:postgresql://pg-xxyyzz-streemlined-nnnn.c.aivencloud.com:13645/defaultdb?ssl=require
      DATABASES_DEFAULT_CONNECTION_USER: avnadmin
      DATABASES_DEFAULT_CONNECTION_PASSWORD: password
      PLUGIN_PATH: /libs
    volumes:
      - ./libs:/libs

Kafka Configuration

Configure your Kafka connection under the kafka key:
kafka:
  bootstrap.servers: localhost:9092
  
  # Authentication (optional)
  security.protocol: SASL_PLAINTEXT
  sasl.mechanism: PLAIN
  sasl.jaas.config: >
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="secret";
  
  # Consumer settings
  group.id: kanal-pipeline-1
  auto.offset.reset: earliest
  max.poll.records: 5000
  enable.auto.commit: false
  
  # Producer settings
  acks: all

Common Kafka Properties

Property              Default     Description
bootstrap.servers     (none)      Kafka broker addresses (required)
group.id              (none)      Consumer group ID (required)
auto.offset.reset     earliest    Where to start when no offset exists
max.poll.records      500         Max records per poll
enable.auto.commit    false       Auto-commit offsets (recommended: false)
acks                  all         Producer acknowledgment level

Security Configuration

Kafka security settings such as TLS and SASL are passed through to the Kafka clients, so they belong under the same kafka key as the rest of the connection configuration (see the SASL example under Kafka Configuration above).
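As a minimal sketch, a TLS-encrypted connection using the standard Kafka client ssl.* properties could look like this (broker address, truststore path, and password are placeholders):
kafka:
  bootstrap.servers: broker1:9093
  security.protocol: SSL
  # Placeholder truststore; point this at your own certificates
  ssl.truststore.location: /etc/kanal/kafka.truststore.jks
  ssl.truststore.password: changeit
For SASL over TLS, combine security.protocol: SASL_SSL with the sasl.mechanism and sasl.jaas.config settings shown in the Kafka Configuration example above.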

Schema Registry

Configure Schema Registry for Avro or Protobuf schemas:
value.converter:
  schema.registry.url: http://localhost:8081
  
  # Authentication (if required)
  basic.auth.credentials.source: USER_INFO
  basic.auth.user.info: username:password
Additional Schema Registry configuration options are available, such as SSL settings and schema caching. See the Confluent Schema Registry documentation for a complete list of configuration properties.

Database Connections

Configure named database connections under the databases key:
databases:
  default:
    connection.url: jdbc:postgresql://localhost:5432/mydb
    connection.user: postgres
    connection.password: secret
  
  analytics:
    connection.url: jdbc:postgresql://analytics-host:5432/analytics
    connection.user: analytics_user
    connection.password: analytics_pass
Reference these connections by name in your JDBC Sink nodes.
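To keep credentials out of application.yml, the same properties can also be supplied as environment variables using the convention from the Environment Variables section; for example, for the analytics connection in Docker Compose (a sketch; values are placeholders):
services:
  kanal:
    environment:
      # Overrides databases.analytics.connection.user / .password
      DATABASES_ANALYTICS_CONNECTION_USER: analytics_user
      DATABASES_ANALYTICS_CONNECTION_PASSWORD: analytics_pass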

Supported Databases

Kanal packages the Aiven JDBC Connector for Apache Kafka by default. Tested databases include:
  • PostgreSQL
  • MySQL / MariaDB
  • SQL Server
  • Oracle
  • Snowflake
  • SQLite
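Connection URLs follow each driver's standard JDBC format; for example, a MySQL connection might be configured like this (connection name, host, and credentials are placeholders):
databases:
  reporting:
    connection.url: jdbc:mysql://mysql-host:3306/reporting
    connection.user: report_user
    connection.password: report_pass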

Metrics and Monitoring

Kanal exposes Prometheus metrics by default:
micronaut:
  metrics:
    export:
      prometheus:
        enabled: true
        step: PT1M
        descriptions: true
Need a different metrics reporter? Micronaut Micrometer supports many alternatives including Datadog, CloudWatch, Graphite, InfluxDB, StatsD, New Relic, Dynatrace, Wavefront, Azure Monitor, Stackdriver, and more. Contact support to request an additional registry. See the Micronaut Micrometer documentation for the full list of available registries.

Available Endpoints

Endpoint       Description
/health        Health check status
/prometheus    Prometheus metrics
/metrics       Micrometer endpoint for humans and debugging, not for tooling or long-term integration
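To collect these metrics, a Prometheus scrape job can point at the /prometheus endpoint; a sketch, assuming Kanal is reachable as kanal on port 8080 (as in the Docker Compose example above):
scrape_configs:
  - job_name: kanal
    # Kanal serves metrics on /prometheus rather than the Prometheus default /metrics
    metrics_path: /prometheus
    static_configs:
      - targets: ["kanal:8080"]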

Logging

Configure logging levels via application properties:
logger:
  levels:
    io.kanal: INFO
    org.apache.kafka: WARN
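Log levels can also be overridden through environment variables using the convention above; for example, to enable DEBUG for Kanal's own packages in Docker Compose (a sketch, assuming the standard Micronaut mapping of LOGGER_LEVELS_IO_KANAL to logger.levels.io.kanal):
services:
  kanal:
    environment:
      # Maps to logger.levels.io.kanal
      LOGGER_LEVELS_IO_KANAL: DEBUG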

Log Levels

Level     Use Case
TRACE     Only if requested by Support
DEBUG     Development and troubleshooting
INFO      Normal operation
WARN      Production (recommended)
ERROR     Minimal logging

Next Steps