Pipelines
A pipeline is a directed graph of processing nodes connected by links. Each pipeline:

- Consumes data from one or more Kafka topics
- Processes data through transformation nodes
- Outputs data to sinks (databases, Kafka topics, HTTP endpoints)
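Streemlined's internal pipeline model is not exposed in this guide, but the graph structure above can be sketched in a few lines of Python. Everything here (the `Node` and `Pipeline` classes, the node names) is hypothetical and purely illustrative:

```python
# Conceptual sketch: a pipeline as a directed graph of nodes joined by links.
# All names are hypothetical; this is not Streemlined's actual API.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # "source", "processor", or "sink"

@dataclass
class Pipeline:
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (upstream, downstream) pairs

    def add_link(self, src: Node, dst: Node) -> None:
        self.links.append((src, dst))

# A pipeline that consumes a Kafka topic, transforms records, and writes to a database:
p = Pipeline()
consumer = Node("orders-topic", "source")
mask = Node("mask-pii", "processor")
db = Node("postgres-sink", "sink")
p.nodes = [consumer, mask, db]
p.add_link(consumer, mask)
p.add_link(mask, db)
```

Records enter at the source, flow along the links, and leave through the sink; the sections below describe each of these pieces in turn.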
Nodes
A node is a single processing unit in your pipeline. Streemlined provides three types of nodes:

Sources
Ingest data into the pipeline (e.g., Kafka Consumer)
Processors
Transform, filter, route, or enrich data
Sinks
Output data to external systems
Every node has:
- Input ports — Where data enters the node
- Output ports — Where processed data exits
- Configuration — Node-specific settings
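The anatomy above (input ports, output ports, node-specific configuration) can be made concrete with a small sketch. The class, port names, and the filter predicate are all hypothetical, not Streemlined's real configuration format:

```python
# Hypothetical sketch of a node's anatomy: input ports, output ports,
# and node-specific configuration.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str

@dataclass
class Node:
    name: str
    inputs: list = field(default_factory=list)   # where data enters
    outputs: list = field(default_factory=list)  # where processed data exits
    config: dict = field(default_factory=dict)   # node-specific settings

# A filter processor: one input port, separate outputs for records that
# pass or fail the predicate, and the predicate itself as configuration.
flt = Node(
    "filter-large-orders",
    inputs=[Port("in")],
    outputs=[Port("pass"), Port("fail")],
    config={"predicate": "amount > 100"},
)
```

Splitting matched and unmatched records onto separate output ports is what lets downstream nodes handle each stream independently.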
Links
Links connect nodes together, defining how data flows through your pipeline.

- A link connects an output port of one node to an input port of another
- Data flows along links as individual records
- Links carry schema information for validation
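Because links carry schema information, each record crossing a link can be checked against the expected shape. As a rough illustration (the `validate` helper and the schema-as-dict representation are assumptions, not Streemlined's validation mechanism):

```python
# Illustrative sketch: a link's schema used to validate records in flight.
def validate(record: dict, schema: dict) -> bool:
    """Check that every field the schema requires is present with the right type."""
    return all(
        name in record and isinstance(record[name], typ)
        for name, typ in schema.items()
    )

# Schema carried by a link: field name -> expected Python type.
link_schema = {"order_id": str, "amount": float}

assert validate({"order_id": "A-17", "amount": 250.0}, link_schema)
assert not validate({"order_id": "A-18"}, link_schema)  # missing "amount"
```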
Schemas
Streemlined tracks schemas throughout your pipeline to help you build correct transformations. Schemas flow through your pipeline in a bidirectional system: downstream from sources toward sinks, and upstream from sinks back toward sources.

Schema Sources
Schemas can come from:

- Schema Registry — Avro schemas fetched automatically
- JSON inference — Schemas inferred from sample data
- Manual definition — Schemas you define explicitly
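Of the three sources, JSON inference is the easiest to picture: given a sample record, derive a field-to-type mapping. The sketch below is a simplified stand-in for whatever inference Streemlined actually performs:

```python
# Illustrative JSON schema inference: map each top-level field of a sample
# record to its JSON type name. Not Streemlined's actual inference engine.
import json

def infer_schema(sample: str) -> dict:
    type_names = {
        str: "string", bool: "boolean", int: "integer",
        float: "number", list: "array", dict: "object",
        type(None): "null",
    }
    record = json.loads(sample)
    return {field: type_names[type(value)] for field, value in record.items()}

schema = infer_schema('{"order_id": "A-17", "amount": 250.0, "priority": true}')
# schema == {"order_id": "string", "amount": "number", "priority": "boolean"}
```

Inference of this kind is only as good as the sample: fields absent from the sample record, or fields whose type varies across records, cannot be captured, which is why explicit schemas from a registry or manual definition are more reliable.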
Error Handling
Streemlined provides dedicated error outputs for every domain error. When deserialization fails, a Lookup finds no match, or a Sink cannot write, you decide what happens: route the record to a Dead Letter Queue, send it to a recovery flow, or fail the pipeline entirely if the error is unacceptable. This approach keeps your main pipeline clean while giving you full visibility and control over failures. For details on configuring error handling in your pipelines, see Handling Errors in the Visual Editor guide.

Next Steps
Visual Editor
Learn to build pipelines in the UI
Transformations
Learn JSONata for data transformation
Nodes Reference
Explore all available processing nodes