A deep dive into Kafka Connect settings and pitfalls, covering several tips for a smoother experience when setting up a connector.
Kafka Connect is the spell book for creating magical data streaming setups. It allows us to integrate Apache Kafka with the rest of our data ecosystem and get all the data flowing to the right place. However, configuring all the weird and wonderful connectors can require some rather dark magic, and this talk will teach you the tricks you need.
We’ll talk about streaming data into topics, the data formats to use, and what to look out for when Kafka Connect is plugging data from another platform into your setup. Since we don’t live in a perfect world, we’ll also cover configurations like error tolerance, dead letter queues and single message transforms that can make things more robust.
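As a taste of those robustness settings, here is a hedged sketch of a sink connector configuration that tolerates bad records, routes them to a dead letter queue, and applies a single message transform. The connector name, topic names and the extracted field are illustrative assumptions; the property keys themselves are standard Kafka Connect error-handling and transform settings.

```json
{
  "name": "example-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",

    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "orders-dlq",
    "errors.deadletterqueue.context.headers.enable": "true",
    "errors.log.enable": "true",

    "transforms": "extractId",
    "transforms.extractId.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.extractId.field": "id"
  }
}
```

With `errors.tolerance` set to `all`, records that fail conversion or transformation are sent to the dead letter queue topic instead of killing the connector task, and the context headers make it easier to see why each record failed.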
You’ll see some examples of good practices, and hear some stories about how I learned a few of these things the hard way. Finally, we’ll shed light on some options, like auto evolution, that seem like a great idea when you are prototyping a new solution but can store up problems for the long term.
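For instance, the JDBC sink connector's schema auto-creation and auto-evolution settings are very convenient while prototyping, but they let the connector create and alter tables in your target database, which can lead to unplanned schema drift in production. A hedged illustration of the config fragment in question (these two keys are real JDBC sink connector options; whether to enable them is the judgment call):

```json
{
  "auto.create": "true",
  "auto.evolve": "true"
}
```

Switching both to `false` once the schema has stabilised keeps table changes under explicit, reviewed control.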
If you are ready to make magic with Kafka Connect and the Apache Kafka ecosystem, this is the talk for you!
- Aiven for Apache Kafka®
- Aiven for Apache Kafka® Connect
- Deep dive into Single Message Transforms
- JDBC Source connector: what could go wrong?
- JDBC Sink connector not working with schemaless topics
- JDBC Sink connector: parsing schemaless topics by inferring the JSON structure
- Dead Letter Queue - Twitter thread by Gunnar Morling
Check out the Aiven Rolling Challenge