Streemlined is configured through application.yml. This page covers all available configuration options.

Configuration Precedence

Streemlined loads configuration from the following sources, with later sources overriding earlier ones:
  1. Configuration files, loaded in order from the system property micronaut.config.files or the environment variable MICRONAUT_CONFIG_FILES
  2. Environment variables
  3. Java system properties
Stick to one configuration source where possible to avoid precedence confusion.
# With a Java system property (the -D flag must come before -jar)
java -Dmicronaut.config.files=/path/to/application.yml -jar streemlined.jar

# With an environment variable
export MICRONAUT_CONFIG_FILES=/path/to/application.yml
java -jar streemlined.jar

Environment Variables

You can override any configuration setting using environment variables. The naming convention:
  • Replace . with _
  • Replace - with _
  • Use uppercase
For example, clusters.my-cluster.bootstrap.servers becomes CLUSTERS_MY_CLUSTER_BOOTSTRAP_SERVERS. A Docker Compose example:
services:
  streemlined:
    image: streemlined:latest
    ports:
      - "8080:8080"
    environment:
      LICENSE_TOKEN: <your-license-token>
      CLUSTERS_MY_CLUSTER_BOOTSTRAP_SERVERS: broker1:9092,broker2:9092
      DATABASES_DEFAULT_CONNECTION_URL: jdbc:postgresql://pg-xxyyzz-streemlined-nnnn.c.aivencloud.com:13645/defaultdb?ssl=require
      DATABASES_DEFAULT_CONNECTION_USER: avnadmin
      DATABASES_DEFAULT_CONNECTION_PASSWORD: password
      PLUGINS_PATH: /libs
    volumes:
      - ./libs:/libs

License

A license token is required to run Streemlined. Configure it under the license key:
license:
  token: <your-license-token>
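Following the environment-variable convention above, the token can also be supplied without touching application.yml:

```shell
export LICENSE_TOKEN=<your-license-token>
java -jar streemlined.jar
```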

Kafka Clusters

Streemlined supports connecting to multiple Kafka clusters simultaneously. Each cluster is configured as a named entry under the clusters key, and is referenced by name in your pipeline nodes.
clusters:
  my-cluster:
    bootstrap.servers: localhost:9092
    
    # Authentication (optional)
    security.protocol: SASL_PLAINTEXT
    sasl.mechanism: PLAIN
    sasl.jaas.config: >
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="secret";

    # Schema Registry (optional)
    registry:
      schema.registry.url: http://localhost:8081
      basic.auth.credentials.source: USER_INFO
      basic.auth.user.info: username:password

Multiple Clusters

You can configure as many clusters as you need. Each Kafka Consumer or Producer node in your pipeline references a cluster by name.
application.yml
clusters:
  us-east:
    bootstrap.servers: broker-us.example.com:9092
    registry:
      schema.registry.url: http://schema-registry-us.example.com:8081

  eu-west:
    bootstrap.servers: broker-eu.example.com:9092
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-256
    sasl.jaas.config: >
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="admin"
      password="secret";
    registry:
      schema.registry.url: https://schema-registry-eu.example.com:8081
      basic.auth.credentials.source: USER_INFO
      basic.auth.user.info: admin:secret

Security Configuration

Any standard Kafka client security property (security.protocol, sasl.*, ssl.*) can be set directly on a cluster entry, exactly as it would appear in a Kafka client configuration. With no security properties set, the connection defaults to plaintext:
clusters:
  my-cluster:
    bootstrap.servers: localhost:9092
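For a TLS-secured cluster with mutual authentication, the standard Kafka SSL client properties apply at the same level. A sketch (the broker address, store paths, and passwords below are illustrative):

```yaml
clusters:
  secure-cluster:
    bootstrap.servers: broker:9093
    security.protocol: SSL
    ssl.truststore.location: /certs/truststore.jks
    ssl.truststore.password: changeit
    ssl.keystore.location: /certs/keystore.jks
    ssl.keystore.password: changeit
    ssl.key.password: changeit
```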

Schema Registry

Each cluster can optionally have its own Schema Registry for Avro or Protobuf schemas, configured under the registry key within the cluster:
application.yml
clusters:
  my-cluster:
    bootstrap.servers: broker:9092
    registry:
      schema.registry.url: http://localhost:8081
      basic.auth.credentials.source: USER_INFO
      basic.auth.user.info: username:password
Additional Schema Registry configuration options are available, such as SSL settings and schema caching. See the Confluent Schema Registry client configuration documentation for a complete list of properties.
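For example, a registry behind TLS with a custom truststore might look like the following sketch. The schema.registry.ssl.* names follow the Confluent client's standard SSL settings; verify them against the Confluent documentation referenced above, and note that the URL and paths here are illustrative:

```yaml
clusters:
  my-cluster:
    bootstrap.servers: broker:9092
    registry:
      schema.registry.url: https://schema-registry.example.com:8081
      schema.registry.ssl.truststore.location: /certs/truststore.jks
      schema.registry.ssl.truststore.password: changeit
```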

Database Connections

Configure named database connections under the databases key:
databases:
  default:
    connection.url: jdbc:postgresql://localhost:5432/mydb
    connection.user: postgres
    connection.password: secret
  
  analytics:
    connection.url: jdbc:postgresql://analytics-host:5432/analytics
    connection.user: analytics_user
    connection.password: analytics_pass
Reference these connections by name in your JDBC Sink nodes.
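To keep credentials out of application.yml, override them with environment variables using the naming convention described above:

```shell
export DATABASES_DEFAULT_CONNECTION_PASSWORD=secret
export DATABASES_ANALYTICS_CONNECTION_PASSWORD=analytics_pass
```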

Supported Databases

Streemlined packages the Aiven JDBC Connector for Apache Kafka by default. Tested databases include:
  • PostgreSQL
  • MySQL / MariaDB
  • SQL Server
  • Oracle
  • Snowflake
  • SQLite
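The connection.url for each database follows that driver's standard JDBC format. A few sketches (host names, ports, and database names are illustrative):

```yaml
databases:
  mysql-db:
    connection.url: jdbc:mysql://mysql-host:3306/mydb
  sqlserver-db:
    connection.url: jdbc:sqlserver://mssql-host:1433;databaseName=mydb
  sqlite-db:
    connection.url: jdbc:sqlite:/data/local.db
```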

Metrics and Monitoring

Streemlined exposes Prometheus metrics by default:
micronaut:
  metrics:
    export:
      prometheus:
        enabled: true
        step: PT1M
        descriptions: true
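With the endpoint enabled, a Prometheus server can scrape Streemlined directly. A minimal scrape configuration, assuming Streemlined runs on localhost:8080 and serves metrics at /prometheus (see the endpoint table below):

```yaml
scrape_configs:
  - job_name: streemlined
    metrics_path: /prometheus
    static_configs:
      - targets: ['localhost:8080']
```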
Need a different metrics reporter? Micronaut Micrometer supports many alternatives, including Datadog, CloudWatch, Graphite, InfluxDB, StatsD, New Relic, Dynatrace, Wavefront, Azure Monitor, and Stackdriver. See the Micronaut Micrometer documentation for the full list of available registries, and contact support to request an additional one.

Available Endpoints

  • /health: Health check status
  • /prometheus: Prometheus metrics (scrape target)
  • /metrics: Micrometer metrics endpoint, intended for humans and debugging rather than for tooling or long-term integration
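A quick way to verify a running instance from the command line, assuming the default port 8080:

```shell
curl http://localhost:8080/health
```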

Logging

Configure logging levels via application properties:
logger:
  levels:
    io.streemlined: INFO
    org.apache.kafka: WARN
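Log levels can also be raised temporarily through environment variables, using the same mapping convention as other settings (shown here for the io.streemlined logger):

```shell
export LOGGER_LEVELS_IO_STREEMLINED=DEBUG
```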

Log Levels

  • TRACE: Only if requested by Support
  • DEBUG: Development and troubleshooting
  • INFO: Normal operation
  • WARN: Production (recommended)
  • ERROR: Minimal logging

Next Steps

  • Quickstart: Build your first pipeline
  • Nodes Reference: Explore all available nodes