Connectors
Connectors are Kafka Connect components built into Managed Service for Apache Kafka® that stream data between Apache Kafka® and other data systems. You can use a connector for continuous data migration to and from Managed Service for Apache Kafka® clusters. Connector data is handled by separate Kafka Connect worker threads. To learn more about Kafka Connect, see the Apache Kafka® documentation.
Connectors perform the following functions:
- Connecting to Managed Service for Apache Kafka® clusters and other data stores.
- Reading and writing data.
- Managing data streaming.
For any type of connector, you can specify:
- A task limit that caps the number of connector tasks running concurrently.
- Additional properties.
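As a sketch, these settings map onto standard Kafka Connect key-value properties. The property names below follow Kafka Connect conventions; the converter line is purely an illustrative example of an additional property:

```properties
# Maximum number of tasks the connector may run concurrently
tasks.max=3
# Example of an additional property (connector-specific; illustrative only)
key.converter=org.apache.kafka.connect.storage.StringConverter
```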
For step-by-step guides on using connectors, see Managing connectors.
Supported connectors
MirrorMaker
MirrorMaker is used for replicating Apache Kafka® topics between clusters.
A worker connects to the broker hosts of a Managed Service for Apache Kafka® cluster and replicates the topics that match the connector's filtering template. Depending on the connector's replication factor, it connects to one or more brokers.
Topic names in the target cluster are the same as in the source.
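A minimal MirrorMaker connector configuration might look like the following Kafka Connect properties. The cluster aliases, bootstrap addresses, and topic filter are placeholders, not values from this service:

```properties
connector.class=org.apache.kafka.connect.mirror.MirrorSourceConnector
# Aliases for the source and target clusters (placeholders)
source.cluster.alias=source
target.cluster.alias=target
# Broker addresses of the external source cluster (placeholder)
source.cluster.bootstrap.servers=source-broker-1:9092
# Filtering template: replicate all topics matching this pattern (placeholder)
topics=data-.*
# Replication factor for the topics created in the target cluster
replication.factor=3
# Keeps topic names in the target identical to the source
# (by default, MirrorMaker 2 prefixes them with the source alias)
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
tasks.max=2
```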
For more information about how MirrorMaker transfers data, see the Apache Kafka® documentation.
S3 Sink
The S3 Sink connector is only used as a target for data export from Apache Kafka® topics to S3 storage.
The connector periodically requests data from Apache Kafka® and uploads it to S3 storage. The data is split into chunks, each stored as a separate S3 file object. The size of each chunk depends on the maximum number of records it can contain, as specified in the connector settings.
For more information about how the S3 Sink connector transfers data, see the connector documentation.
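The chunking behavior described above can be sketched with a configuration like the following. Which S3 sink implementation the managed service bundles is not stated here; the Confluent S3 Sink connector is shown purely as an illustration, and the topic, bucket, and region values are placeholders:

```properties
connector.class=io.confluent.connect.s3.S3SinkConnector
# Topic to export (placeholder)
topics=events
# Target S3 storage (placeholders)
s3.bucket.name=my-export-bucket
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
# Maximum number of records per chunk; each chunk becomes one S3 file object
flush.size=1000
tasks.max=1
```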