
Managing connectors

Written by
Yandex Cloud
Updated at February 6, 2026
  • Getting a list of connectors
  • Getting detailed information about a connector
  • Creating a connector
    • MirrorMaker
    • S3 Sink
  • Editing a connector
  • Pausing a connector
  • Resuming a connector
  • Importing a connector to Terraform
  • Deleting a connector

Connectors manage the transfer of Apache Kafka® topics to a different cluster or data storage system.

You can:

  • Get a list of connectors.
  • Get detailed information about a connector.
  • Create a connector of the right type:
    • MirrorMaker
    • S3 Sink
  • Edit a connector.
  • Pause a connector.
  • Resume a connector.
  • Import a connector to Terraform.
  • Delete a connector.

Getting a list of connectors

Management console
CLI
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
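For example, to run all subsequent commands against a specific folder (the ID below is a placeholder):

yc config set folder-id b1gexamplefolderid00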

To get the list of cluster connectors, run this command:

yc managed-kafka connector list --cluster-name=<cluster_name>

Result:

+--------------+-----------+
|     NAME     | TASKS MAX |
+--------------+-----------+
| connector559 |         1 |
| ...          |           |
+--------------+-----------+

You can get the cluster name with the list of clusters in the folder.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.list method, e.g., via the following cURL request:

    curl \
      --request GET \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors'
    

    You can get the cluster ID with the list of clusters in the folder.

  3. Check the server response to make sure your request was successful.
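    For a quick check, you can extract just the connector names from the response, e.g., by piping it through jq (this assumes jq is installed and that the response wraps the list in a connectors field):

    curl \
      --request GET \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors' \
      | jq -r '.connectors[].name'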

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/List method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>"
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.List
    

    You can get the cluster ID with the list of clusters in the folder.

  4. Check the server response to make sure your request was successful.

Getting detailed information about a connector

Management console
CLI
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.
  4. Click the connector name.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To get detailed information about a connector, run this command:

yc managed-kafka connector get <connector_name> \
   --cluster-name=<cluster_name>

Result:

name: connector785
tasks_max: "1"
cluster_id: c9qbkmoiimsl********
...

You can get the connector name with the list of cluster connectors, and the cluster name, with the list of clusters in the folder.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.get method, e.g., via the following cURL request:

    curl \
      --request GET \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/<connector_name>'
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of cluster connectors.

  3. Check the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/Get method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_name": "<connector_name>"
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Get
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of cluster connectors.

  4. Check the server response to make sure your request was successful.

Creating a connector

Management console
CLI
Terraform
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.

  2. Go to Managed Service for Kafka.

  3. Select the cluster and open the Connectors tab.

  4. Click Create connector.

  5. Under Basic parameters, specify:

    • Connector name.
    • Task limit: Number of concurrent tasks. To distribute replication load evenly, we recommend a value of at least 2.
  6. Under Additional properties, specify the connector properties in the following format:

    <key>:<value>
    

    The key can either be a simple string or include a prefix that indicates whether it belongs to the source or target (a cluster alias in the connector configuration), as shown in the example after these steps:

    <cluster_alias>.<key_body>:<value>
    
  7. Select the connector type, MirrorMaker or S3 Sink, and set up its configuration.

    For more information about the supported connector types, see Connectors.

  8. Click Create.
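For example, the Additional properties field from step 6 might contain entries like these (hypothetical values; key.converter and value.converter are standard Kafka Connect settings, and the source. prefix refers to a cluster alias named source):

key.converter:org.apache.kafka.connect.converters.ByteArrayConverter
value.converter:org.apache.kafka.connect.converters.ByteArrayConverter
source.consumer.auto.offset.reset:earliest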

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To create a MirrorMaker connector:

  1. See the description of the CLI command for creating a connector:

    yc managed-kafka connector-mirrormaker create --help
    
  2. Create a connector:

    yc managed-kafka connector-mirrormaker create <connector_name> \
       --cluster-name=<cluster_name> \
       --direction=<connector_direction> \
       --tasks-max=<task_limit> \
       --properties=<advanced_properties> \
       --replication-factor=<replication_factor> \
       --topics=<topic_pattern> \
       --this-cluster-alias=<this_cluster_prefix> \
       --external-cluster alias=<external_cluster_prefix>,`
                         `bootstrap-servers=<list_of_broker_host_FQDNs>,`
                         `security-protocol=<security_protocol>,`
                         `sasl-mechanism=<authentication_mechanism>,`
                         `sasl-username=<username>,`
                         `sasl-password=<user_password>,`
                         `ssl-truststore-certificates=<certificates_in_PEM_format>
    

    To learn how to get a broker host FQDN, see this guide.

    You can get the cluster name with the list of clusters in the folder.

    --direction takes these values:

    • egress: If the current cluster is a source cluster.
    • ingress: If the current cluster is a target cluster.
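    For illustration, here is a sketch of a MirrorMaker connector that pulls topics matching analysis.* from a hypothetical external source cluster into the current cluster (all names, hosts, and credentials are placeholders; the scram-sha-512 mechanism string is an assumption, while the lowercase sasl_ssl value follows the settings listed below):

    yc managed-kafka connector-mirrormaker create replica-analysis \
       --cluster-name=my-kafka-cluster \
       --direction=ingress \
       --tasks-max=2 \
       --replication-factor=2 \
       --topics='analysis.*' \
       --this-cluster-alias=target \
       --external-cluster alias=source,`
                         `bootstrap-servers=broker1.example.com:9091,`
                         `security-protocol=sasl_ssl,`
                         `sasl-mechanism=scram-sha-512,`
                         `sasl-username=mirror-user,`
                         `sasl-password=<user_password>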

To create an S3 Sink connector:

  1. See the description of the CLI command for creating a connector:

    yc managed-kafka connector-s3-sink create --help
    
  2. Create a connector:

    yc managed-kafka connector-s3-sink create <connector_name> \
       --cluster-name=<cluster_name> \
       --tasks-max=<task_limit> \
       --properties=<advanced_properties> \
       --topics=<topic_pattern> \
       --file-compression-type=<compression_codec> \
       --file-max-records=<file_max_records> \
       --bucket-name=<bucket_name> \
       --access-key-id=<AWS_compatible_static_key_ID> \
       --secret-access-key=<AWS_compatible_static_key_contents> \
       --storage-endpoint=<S3_compatible_storage_endpoint> \
       --region=<S3_compatible_storage_region>
    

    You can get the cluster name with the list of clusters in the folder.
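    For example, a connector exporting all topics as gzip-compressed files to a Yandex Object Storage bucket might be created like this (the cluster, bucket, and key values are placeholders; the storage.yandexcloud.net endpoint and ru-central1 region match the settings reference below):

    yc managed-kafka connector-s3-sink create s3-sink-logs \
       --cluster-name=my-kafka-cluster \
       --tasks-max=2 \
       --topics='.*' \
       --file-compression-type=gzip \
       --file-max-records=1000 \
       --bucket-name=my-kafka-export \
       --access-key-id=<AWS_compatible_static_key_ID> \
       --secret-access-key=<AWS_compatible_static_key_contents> \
       --storage-endpoint=storage.yandexcloud.net \
       --region=ru-central1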

  1. Check the list of MirrorMaker and S3 Sink connector settings.

  2. Open the current Terraform configuration file describing your infrastructure.

    Learn how to create this file in Creating a cluster.

  3. To create a MirrorMaker connector, add the yandex_mdb_kafka_connector resource with the connector_config_mirrormaker configuration section:

    resource "yandex_mdb_kafka_connector" "<connector_name>" {
      cluster_id = "<cluster_ID>"
      name       = "<connector_name>"
      tasks_max  = <task_limit>
      properties = {
        <advanced_properties>
      }
      connector_config_mirrormaker {
        topics             = "<topic_pattern>"
        replication_factor = <replication_factor>
        source_cluster {
          alias = "<cluster_prefix>"
          external_cluster {
            bootstrap_servers           = "<list_of_broker_host_FQDNs>"
            sasl_username               = "<username>"
            sasl_password               = "<user_password>"
            sasl_mechanism              = "<authentication_mechanism>"
            security_protocol           = "<security_protocol>"
            ssl_truststore_certificates = "<PEM_certificate_contents>"
          }
        }
        target_cluster {
          alias = "<cluster_prefix>"
          this_cluster {}
        }
      }
    }
    

    To learn how to get a broker host FQDN, see this guide.

  4. To create an S3 Sink connector, add the yandex_mdb_kafka_connector resource with the connector_config_s3_sink configuration section:

    resource "yandex_mdb_kafka_connector" "<connector_name>" {
      cluster_id = "<cluster_ID>"
      name       = "<connector_name>"
      tasks_max  = <task_limit>
      properties = {
        <advanced_properties>
      }
      connector_config_s3_sink {
        topics                = "<topic_pattern>"
        file_compression_type = "<compression_codec>"
        file_max_records      = <file_max_records>
        s3_connection {
          bucket_name = "<bucket_name>"
          external_s3 {
            endpoint          = "<S3_compatible_storage_endpoint>"
            access_key_id     = "<AWS_compatible_static_key_ID>"
            secret_access_key = "<AWS_compatible_static_key_contents>"
          }
        }
      }
    }
    
  5. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  6. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

For more information, see this Terraform provider guide.
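As a minimal sketch, here is the S3 Sink template above with placeholder values filled in (the bucket name and converter properties are hypothetical):

resource "yandex_mdb_kafka_connector" "s3_sink_logs" {
  cluster_id = "<cluster_ID>"
  name       = "s3-sink-logs"
  tasks_max  = 2
  properties = {
    "key.converter"   = "org.apache.kafka.connect.converters.ByteArrayConverter"
    "value.converter" = "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
  connector_config_s3_sink {
    topics                = ".*"
    file_compression_type = "gzip"
    file_max_records      = 1000
    s3_connection {
      bucket_name = "my-kafka-export"
      external_s3 {
        endpoint          = "storage.yandexcloud.net"
        access_key_id     = "<AWS_compatible_static_key_ID>"
        secret_access_key = "<AWS_compatible_static_key_contents>"
      }
    }
  }
}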

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. To create a MirrorMaker connector, call the Connector.create method, e.g., via the following cURL request:

    curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors' \
      --data '{
                "connectorSpec": {
                  "name": "<connector_name>",
                  "tasksMax": "<task_limit>"
                  "properties": "<advanced_connector_properties>"
                  "connectorConfigMirrormaker": {
                    <Mirrormaker_connector_settings>
                  }
                }
              }'
    

    You can get the cluster ID with the list of clusters in the folder.

  3. To create an S3 Sink connector, call the Connector.create method, e.g., via the following cURL request:

    curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors' \
      --data '{
                "connectorSpec": {
                  "name": "<connector_name>",
                  "tasksMax": "<task_limit>"
                  "properties": "<advanced_connector_properties>"
                  "connectorConfigS3Sink": {
                    <S3_Sink_connector_settings>
                  }
                }
              }'
    

    You can get the cluster ID with the list of clusters in the folder.

  4. Check the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. To create a MirrorMaker connector, call the ConnectorService/Create method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_spec": {
              "name": "<connector_name>",
              "tasks_max": {
                "value": "<task_limit>"
              },
              "properties": "<advanced_connector_properties>"
              "connector_config_mirrormaker": {
                <Mirrormaker_connector_settings>
              }
            }
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Create
    

    You can get the cluster ID with the list of clusters in the folder.

  4. To create an S3 Sink connector, call the ConnectorService/Create method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_spec": {
              "name": "<connector_name>",
              "tasks_max": {
                "value": "<task_limit>"
              },
              "properties": "<advanced_connector_properties>"
              "connector_config_s3_sink": {
                <S3_Sink_connector_settings>
              }
            }
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Create
    

    You can get the cluster ID with the list of clusters in the folder.

  5. Check the server response to make sure your request was successful.

MirrorMaker

Specify the MirrorMaker connector parameters as follows:

Management console
CLI
Terraform
REST API
gRPC API
  • Topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • Replication factor: Number of replicas the cluster stores for each topic.

  • Under Source cluster, specify the parameters for connecting to the source cluster:

    • Alias: Source cluster prefix in the connector settings.

      Note

      Topics in the target cluster will be created with the specified prefix.

    • Use this cluster: Select this option to use the current cluster as the source.

    • Bootstrap servers: Comma-separated list of the FQDNs of the source cluster broker hosts with the port numbers for connection, e.g., broker1.example.com:9091,broker2.example.com:9091.

      To learn how to get a broker host FQDN, see this guide.

    • SASL username: Username for the connector to access the source cluster.

    • SASL password: User password for the connector to access the source cluster.

    • SASL mechanism: Authentication mechanism for username and password validation.

    • Security protocol: Select the connection protocol for the connector:

      • PLAINTEXT, SASL_PLAINTEXT: To connect without SSL.
      • SSL, SASL_SSL: To connect with SSL.
    • Certificate in PEM format: Upload a PEM certificate to access the external cluster.

  • Under Target cluster, specify the parameters for connecting to the target cluster:

    • Alias: Target cluster prefix in the connector settings.

    • Use this cluster: Select this option to use the current cluster as the target.

    • Bootstrap servers: Comma-separated list of the FQDNs of the target cluster broker hosts with the port numbers for connection.

      To learn how to get a broker host FQDN, see this guide.

    • SASL username: Username for the connector to access the target cluster.

    • SASL password: User password for the connector to access the target cluster.

    • SASL mechanism: Authentication mechanism for username and password validation.

    • Security protocol: Select the connection protocol for the connector:

      • PLAINTEXT, SASL_PLAINTEXT: To connect without SSL.
      • SSL, SASL_SSL: To connect with SSL.
    • Certificate in PEM format: Upload a PEM certificate to access the external cluster.

  • To specify additional settings not listed above, create the relevant keys and set their values under Additional properties when creating or updating the connector. Here are some examples of keys:

    • key.converter
    • value.converter

    For the list of general connector settings, see this Apache Kafka® guide.

  • --cluster-name: Cluster name.

  • --direction: Connector direction:

    • ingress: For a target cluster.
    • egress: For a source cluster.
  • --tasks-max: Number of concurrent tasks. To distribute replication load evenly, we recommend a value of at least 2.

  • --properties: Comma-separated list of additional connector settings in <key>:<value> format. Here are some examples of keys:

    • key.converter
    • value.converter

    For the list of general connector settings, see this Apache Kafka® guide.

  • --replication-factor: Number of replicas the cluster stores for each topic.

  • --topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • --this-cluster-alias: This cluster prefix in the connector settings.

  • --external-cluster: External cluster parameters:

    • alias: External cluster prefix in the connector settings.

    • bootstrap-servers: Comma-separated list of the FQDNs of the external cluster broker hosts with the port numbers for connection.

      To learn how to get a broker host FQDN, see this guide.

    • security-protocol: Connection protocol for the connector:

      • plaintext, sasl_plaintext: To connect without SSL.
      • ssl, sasl_ssl: To connect with SSL.
    • sasl-mechanism: Authentication mechanism for username and password validation.

    • sasl-username: Username for the connector to access the external cluster.

    • sasl-password: User password for the connector to access the external cluster.

    • ssl-truststore-certificates: List of PEM certificates.

  • properties: Comma-separated list of additional connector settings in <key>:<value> format. Here are some examples of keys:

    • key.converter
    • value.converter

    For the list of general connector settings, see this Apache Kafka® guide.

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • replication_factor: Number of replicas the cluster stores for each topic.

  • source_cluster and target_cluster: Parameters for connecting to the source and target clusters:

    • alias: Cluster prefix in the connector settings.

      Note

      Topics in the target cluster will be created with the specified prefix.

    • this_cluster: Option to use the current cluster as the source or target.

    • external_cluster: Parameters for connecting to the external cluster:

      • bootstrap_servers: Comma-separated list of the FQDNs of the cluster broker hosts with the port numbers for connection.

        To learn how to get a broker host FQDN, see this guide.

      • sasl_username: Username for the connector to access the cluster.

      • sasl_password: User password for the connector to access the cluster.

      • sasl_mechanism: Authentication mechanism for username and password validation.

      • security_protocol: Connection protocol for the connector:

        • PLAINTEXT, SASL_PLAINTEXT: To connect without SSL.
        • SSL, SASL_SSL: To connect with SSL.
      • ssl_truststore_certificates: PEM certificate contents.

To configure the MirrorMaker connector, use the connectorSpec.connectorConfigMirrormaker parameter:

  • sourceCluster and targetCluster: Parameters for connecting to the source and target clusters:

    • alias: Cluster prefix in the connector settings.

      Note

      Topics in the target cluster will be created with the specified prefix.

    • thisCluster: Option to use the current cluster as the source or target.

    • externalCluster: Parameters for connecting to the external cluster:

      • bootstrapServers: Comma-separated list of the FQDNs of the cluster broker hosts with the port numbers for connection.

        To learn how to get a broker host FQDN, see this guide.

      • saslUsername: Username for the connector to access the cluster.

      • saslPassword: User password for the connector to access the cluster.

      • saslMechanism: Authentication mechanism for username and password validation.

      • securityProtocol: Connection protocol for the connector:

        • PLAINTEXT, SASL_PLAINTEXT: To connect without SSL.
        • SSL, SASL_SSL: To connect with SSL.
      • sslTruststoreCertificates: PEM certificate contents.

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • replicationFactor: Number of replicas the cluster stores for each topic.

To configure the MirrorMaker connector, use the connector_spec.connector_config_mirrormaker parameter:

  • source_cluster and target_cluster: Parameters for connecting to the source and target clusters:

    • alias: Cluster prefix in the connector settings.

      Note

      Topics in the target cluster will be created with the specified prefix.

    • this_cluster: Option to use the current cluster as the source or target.

    • external_cluster: Parameters for connecting to the external cluster:

      • bootstrap_servers: Comma-separated list of the FQDNs of the cluster broker hosts with the port numbers for connection.

        To learn how to get a broker host FQDN, see this guide.

      • sasl_username: Username for the connector to access the cluster.

      • sasl_password: User password for the connector to access the cluster.

      • sasl_mechanism: Authentication mechanism for username and password validation.

      • security_protocol: Connection protocol for the connector:

        • PLAINTEXT, SASL_PLAINTEXT: To connect without SSL.
        • SSL, SASL_SSL: To connect with SSL.
      • ssl_truststore_certificates: PEM certificate contents.

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • replication_factor: Number of replicas the cluster stores for each topic, provided as an object with the value field.
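Putting these fields together, a connector_config_mirrormaker block in a gRPC request might look like the following sketch (placeholder values; the current cluster acts as the target, and the SCRAM-SHA-512 mechanism string is an assumption):

"connector_config_mirrormaker": {
  "topics": "analysis.*",
  "replication_factor": {
    "value": "2"
  },
  "source_cluster": {
    "alias": "source",
    "external_cluster": {
      "bootstrap_servers": "broker1.example.com:9091",
      "sasl_username": "mirror-user",
      "sasl_password": "<user_password>",
      "sasl_mechanism": "SCRAM-SHA-512",
      "security_protocol": "SASL_SSL"
    }
  },
  "target_cluster": {
    "alias": "target",
    "this_cluster": {}
  }
}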

S3 Sink

Specify the S3 Sink connector parameters as follows:

Management console
CLI
Terraform
REST API
gRPC API
  • Topics: Pattern for selecting topics to export. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • Compression type: Message compression codec:

    • none (default): No compression
    • gzip: gzip codec
    • snappy: snappy codec
    • zstd: zstd codec

    You cannot change this setting after the connector is created.

  • Max records per file: Maximum number of records that can be written to a single file in an S3-compatible storage. This is an optional setting.

  • Under S3 connection, specify the storage connection parameters:

    • Bucket: Storage bucket name.

    • Endpoint: Endpoint for storage access. Get it from your storage provider.

    • Region: Region name. This is an optional setting. The default value is ru-central1. You can find the list of available regions here.

      Note

      Some apps designed to work with Amazon S3 do not allow you to specify the region; this is why Yandex Object Storage may also accept the main AWS region value, which is the first row in the table of regions.

    • Access key ID, Secret access key: AWS-compatible key ID and contents. This is an optional setting.

  • To specify additional settings not listed above, create the relevant keys and set their values under Additional properties when creating or updating the connector. Here are some examples of keys:

    • key.converter
    • value.converter
    • value.converter.schemas.enable
    • format.output.type

    For the list of all connector settings, see this connector guide. For the list of general connector settings, see this Apache Kafka® guide.

  • --cluster-name: Cluster name.

  • --tasks-max: Number of concurrent tasks. To distribute replication load evenly, we recommend a value of at least 2.

  • --properties: Comma-separated list of additional connector settings in <key>:<value> format. Here are some examples of keys:

    • key.converter
    • value.converter
    • value.converter.schemas.enable
    • format.output.type

    For the list of all connector settings, see this connector guide. For the list of general connector settings, see this Apache Kafka® guide.

  • --topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • --file-compression-type: Message compression codec. You cannot change this setting after the connector is created. Valid values:

    • none (default): No compression
    • gzip: gzip codec
    • snappy: snappy codec
    • zstd: zstd codec
  • --file-max-records: Maximum number of records that can be written to a single file in an S3-compatible storage.

  • --bucket-name: Name of the S3-compatible storage bucket to write data to.

  • --storage-endpoint: Endpoint for storage access (get it from your storage provider), e.g., storage.yandexcloud.net.

  • --region: Region where the S3-compatible storage bucket resides. The default value is ru-central1. You can find the list of available regions here.

    Note

    Some apps designed to work with Amazon S3 do not allow you to specify the region; this is why Yandex Object Storage may also accept the main AWS region value, which is the first row in the table of regions.

  • --access-key-id, --secret-access-key: AWS-compatible key ID and contents.

  • properties: Comma-separated list of additional connector settings in <key>:<value> format. Here are some examples of keys:

    • key.converter
    • value.converter
    • value.converter.schemas.enable
    • format.output.type

    For the list of all connector settings, see this connector guide. For the list of general connector settings, see this Apache Kafka® guide.

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • file_compression_type: Message compression codec. You cannot change this setting after the connector is created. Valid values:

    • none (default): No compression
    • gzip: gzip codec
    • snappy: snappy codec
    • zstd: zstd codec
  • file_max_records: Maximum number of records that can be written to a single file in an S3-compatible storage.

  • s3_connection: S3-compatible storage connection parameters:

    • bucket_name: Name of the bucket to write data to.

    • external_s3: External S3-compatible storage connection parameters:

      • endpoint: Endpoint for storage access (get it from your storage provider), e.g., storage.yandexcloud.net.

      • region: Region where the S3-compatible storage bucket resides. The default value is ru-central1. You can find the list of available regions here.

        Note

        Some apps designed to work with Amazon S3 do not allow you to specify the region; this is why Yandex Object Storage may also accept the main AWS region value, which is the first row in the table of regions.

      • access_key_id, secret_access_key: AWS-compatible key ID and contents.

To configure the S3 Sink connector, use the connectorSpec.connectorConfigS3Sink parameter:

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • fileCompressionType: Message compression codec. You cannot change this setting after the connector is created. Valid values:

    • none (default): No compression
    • gzip: gzip codec
    • snappy: snappy codec
    • zstd: zstd codec
  • fileMaxRecords: Maximum number of records that can be written to a single file in an S3-compatible storage.

  • s3Connection: S3-compatible storage connection parameters:

    • bucketName: Name of the bucket to write data to.
    • externalS3: External storage parameters:
      • endpoint: Endpoint for storage access (get it from your storage provider), e.g., storage.yandexcloud.net.

      • region: Region where the S3-compatible storage bucket resides. The default value is ru-central1. You can find the list of available regions here.

        Note

        Some apps designed to work with Amazon S3 do not allow you to specify the region; this is why Yandex Object Storage may also accept the main AWS region value, which is the first row in the table of regions.

      • accessKeyId, secretAccessKey: AWS-compatible key ID and contents.
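Assembled from the fields above, a connectorConfigS3Sink block might look like this sketch (bucket and key values are placeholders):

"connectorConfigS3Sink": {
  "topics": ".*",
  "fileCompressionType": "gzip",
  "fileMaxRecords": "1000",
  "s3Connection": {
    "bucketName": "my-kafka-export",
    "externalS3": {
      "endpoint": "storage.yandexcloud.net",
      "region": "ru-central1",
      "accessKeyId": "<AWS_compatible_static_key_ID>",
      "secretAccessKey": "<AWS_compatible_static_key_contents>"
    }
  }
}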

To configure the S3 Sink connector, use the connector_spec.connector_config_s3_sink parameter:

  • topics: Pattern for selecting topics to replicate. List topic names separated by commas or |. You can also use a regular expression (.*), e.g., analysis.*. To replicate all topics, specify .*.

  • file_compression_type: Message compression codec. You cannot change this setting after the connector is created. Valid values:

    • none (default): No compression
    • gzip: gzip codec
    • snappy: snappy codec
    • zstd: zstd codec
  • file_max_records: Maximum number of records that can be written to a single file in an S3-compatible storage, provided as an object with the value field.

  • s3_connection: S3-compatible storage connection parameters:

    • bucket_name: Name of the bucket to write data to.
    • external_s3: External storage parameters:
      • endpoint: Endpoint for storage access (get it from your storage provider), e.g., storage.yandexcloud.net.

      • region: Region where the S3-compatible storage bucket resides. The default value is ru-central1. You can find the list of available regions here.

        Note

        Some apps designed to work with Amazon S3 do not allow you to specify the region; this is why Yandex Object Storage may also accept the main AWS region value, which is the first row in the table of regions.

      • access_key_id, secret_access_key: AWS-compatible key ID and contents.

Editing a connector

Management console
CLI
Terraform
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.
  4. In the connector row, click the menu icon and select Edit connector.
  5. Edit the connector properties as needed.
  6. Click Save.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To edit a MirrorMaker connector:

  1. See the description of the CLI command for editing a connector:

    yc managed-kafka connector-mirrormaker update --help
    
  2. Run this command, e.g., to update the task limit:

    yc managed-kafka connector-mirrormaker update <connector_name> \
       --cluster-name=<cluster_name> \
       --direction=<connector_direction> \
       --tasks-max=<new_task_limit>
    

    Where --direction is the connector direction, either ingress or egress.

    You can get the connector name with the list of cluster connectors, and the cluster name, with the list of clusters in the folder.

To update an S3 Sink connector:

  1. See the description of the CLI command for editing a connector:

    yc managed-kafka connector-s3-sink update --help
    
  2. Run this command, e.g., to update the task limit:

    yc managed-kafka connector-s3-sink update <connector_name> \
       --cluster-name=<cluster_name> \
       --tasks-max=<new_task_limit>
    

    You can get the connector name with the list of cluster connectors, and the cluster name, with the list of clusters in the folder.

  1. Check the list of MirrorMaker and S3 Sink connector settings.

  2. Open the current Terraform configuration file describing your infrastructure.

    Learn how to create this file in Creating a cluster.

  3. Edit the parameter values in the yandex_mdb_kafka_connector resource description:

    • For a MirrorMaker connector:

      resource "yandex_mdb_kafka_connector" "<connector_name>" {
        cluster_id = "<cluster_ID>"
        name       = "<connector_name>"
        tasks_max  = <task_limit>
        properties = {
          <advanced_properties>
        }
        connector_config_mirrormaker {
          topics             = "<topic_pattern>"
          replication_factor = <replication_factor>
          source_cluster {
            alias = "<cluster_prefix>"
            external_cluster {
              bootstrap_servers           = "<list_of_broker_host_FQDNs>"
              sasl_username               = "<username>"
              sasl_password               = "<user_password>"
              sasl_mechanism              = "<authentication_mechanism>"
              security_protocol           = "<security_protocol>"
              ssl_truststore_certificates = "<PEM_certificate_contents>"
            }
          }
          target_cluster {
            alias = "<cluster_prefix>"
            this_cluster {}
          }
        }
      }
      
    • For an S3 Sink connector:

      resource "yandex_mdb_kafka_connector" "<S3_Sink_connector_name>" {
        cluster_id = "<cluster_ID>"
        name       = "<S3_Sink_connector_name>"
        tasks_max  = <task_limit>
        properties = {
          <advanced_properties>
        }
        connector_config_s3_sink {
          topics                = "<topic_pattern>"
          file_max_records      = <file_max_records>
          s3_connection {
            bucket_name = "<bucket_name>"
            external_s3 {
              endpoint          = "<S3_compatible_storage_endpoint>"
              access_key_id     = "<AWS_compatible_static_key_ID>"
              secret_access_key = "<AWS_compatible_static_key_contents>"
            }
          }
        }
      }
      
  4. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  5. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

For more information, see this Terraform provider guide.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.update method, e.g., via the following cURL request:

    Warning

    The API method will assign default values to all the parameters of the object you are modifying unless you explicitly provide them in your request. To avoid this, list the settings you want to change in the updateMask parameter as a single comma-separated string.

    curl \
      --request PATCH \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/<connector_name>' \
      --data '{
                "updateMask": "connectorSpec.tasksMax,connectorSpec.properties,connectorSpec.connectorConfigMirrormaker.<Mirrormaker_1_connector_setting>,...,connectorSpec.connectorConfigMirrormaker.<Mirrormaker_N_connector_setting>,connectorSpec.connectorConfigS3Sink.<S3_Sink_1_connector_setting>,...,connectorSpec.connectorConfigS3Sink.<S3_Sink_N_connector_setting>",
                "connectorSpec": {
                  "tasksMax": "<task_limit>"
                  "properties": "<advanced_connector_properties>"
                  "connectorConfigMirrormaker": {
                    <Mirrormaker_connector_settings>
                  },
                  "connectorConfigS3Sink": {
                    <S3_Sink_connector_settings>
                  }
                }
              }'
    

    Where:

    • updateMask: Comma-separated string of connector settings you want to update.

      Specify the relevant parameters:

      • connectorSpec.tasksMax: To change the connector task limit.
      • connectorSpec.properties: To change the connector’s advanced properties.
      • connectorSpec.connectorConfigMirrormaker.<Mirrormaker_connector_setting>: To update the MirrorMaker connector settings.
      • connectorSpec.connectorConfigS3Sink.<S3_Sink_connector_setting>: To update the S3 Sink connector settings.
    • connectorSpec: Specify the MirrorMaker or S3 Sink connector settings.

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  3. Check the server response to make sure your request was successful.
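    For instance, to change nothing but the task limit, the request can name a single field in updateMask (values are placeholders):

    curl \
      --request PATCH \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/<connector_name>' \
      --data '{
                "updateMask": "connectorSpec.tasksMax",
                "connectorSpec": {
                  "tasksMax": "2"
                }
              }'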

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/Update method, e.g., via the following gRPCurl request:

    Warning

    The API method will assign default values to all the parameters of the object you are modifying unless you explicitly provide them in your request. To avoid this, list the settings you want to change in the update_mask parameter as an array of paths[] strings.

    Format for listing settings
    "update_mask": {
        "paths": [
            "<setting_1>",
            "<setting_2>",
            ...
            "<setting_N>"
        ]
    }
    
    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_name": "<connector_name>",
            "update_mask": {
              "paths": [
                "connector_spec.tasks_max",
                "connector_spec.properties",
                "connector_spec.connector_config_mirrormaker.<Mirrormaker_1_connector_setting>",
                ...,
                "connector_spec.connector_config_mirrormaker.<Mirrormaker_N_connector_setting>",
                "connector_spec.connector_config_s3_sink.<S3_Sink_1_connector_setting>",
                ...,
                "connector_spec.connector_config_s3_sink.<S3-Sink_N_connector_setting>"
              ]
            },
            "connector_spec": {
              "tasks_max": {
                "value": "<task_limit>"
              },
              "properties": "<advanced_connector_properties>"
              "connector_config_mirrormaker": {
                <Mirrormaker_connector_settings>
              },
              "connector_config_s3_sink": {
                <S3_Sink_connector_settings>
              }
            }
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Update
    

    Where:

    • update_mask: List of connector settings you want to update as an array of strings (paths[]).

      Specify the relevant parameters:

      • connector_spec.tasks_max: To change the connector task limit.
      • connector_spec.properties: To change the connector’s advanced properties.
      • connector_spec.connector_config_mirrormaker.<Mirrormaker_connector_setting>: To update the MirrorMaker connector settings.
      • connector_spec.connector_config_s3_sink.<S3_Sink_connector_setting>: To update the S3 Sink connector settings.
    • connector_spec: Specify the MirrorMaker or S3 Sink connector settings.

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  4. Check the server response to make sure your request was successful.

Pausing a connector

When you pause a connector, the system:

  • Terminates the sink connection.
  • Deletes data from the connector service topics.

To pause a connector:

Management console
CLI
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.
  4. Click the menu icon next to the connector name and select Pause.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To pause a connector, run this command:

yc managed-kafka connector pause <connector_name> \
   --cluster-name=<cluster_name>

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.pause method, e.g., via the following cURL request:

    curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/pause/<connector_name>'
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  3. Check the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/Pause method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_name": "<connector_name>"
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Pause
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  4. Check the server response to make sure your request was successful.

Resuming a connector

Management console
CLI
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.
  4. Click the menu icon next to the connector name and select Resume.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To resume a connector, run this command:

yc managed-kafka connector resume <connector_name> \
   --cluster-name=<cluster_name>

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.resume method, e.g., via the following cURL request:

    curl \
      --request POST \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/resume/<connector_name>'
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  3. Check the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/Resume method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_name": "<connector_name>"
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Resume
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  4. Check the server response to make sure your request was successful.

Importing a connector to Terraform

You can import existing connectors to manage them with Terraform.

Terraform
  1. In the Terraform configuration file, specify the connector you want to import:

    resource "yandex_mdb_kafka_cluster" "<connector_name>" {}
    
  2. Run the following command to import your connector:

    terraform import yandex_mdb_kafka_connector.<connector_name> <cluster_ID>:<connector_name>
    

    To learn more about importing connectors, see this Terraform provider guide.
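    For example, for a connector named connector785 (as in the earlier sample output) in cluster c9qbkmoiimsl********, the command would be:

    terraform import yandex_mdb_kafka_connector.connector785 c9qbkmoiimsl********:connector785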

Deleting a connector

Management console
CLI
Terraform
REST API
gRPC API
  1. In the management console, navigate to the relevant folder.
  2. Go to Managed Service for Kafka.
  3. Select the cluster and open the Connectors tab.
  4. Click the menu icon next to the connector name and select Delete.
  5. Click Delete.

If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

To delete a connector, run this command:

yc managed-kafka connector delete <connector_name> \
   --cluster-name=<cluster_name>

  1. Open the current Terraform configuration file describing your infrastructure.

    Learn how to create this file in Creating a cluster.

  2. Delete the yandex_mdb_kafka_connector resource with the connector description.

  3. Make sure the settings are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  4. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

For more information, see this Terraform provider guide.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Call the Connector.delete method, e.g., via the following cURL request:

    curl \
      --request DELETE \
      --header "Authorization: Bearer $IAM_TOKEN" \
      --url 'https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/<cluster_ID>/connectors/<connector_name>'
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  3. Check the server response to make sure your request was successful.

  1. Get an IAM token for API authentication and put it into an environment variable:

    export IAM_TOKEN="<IAM_token>"
    
  2. Clone the cloudapi repository:

    cd ~/ && git clone --depth=1 https://github.com/yandex-cloud/cloudapi
    

    Below, we assume that the repository contents reside in the ~/cloudapi/ directory.

  3. Call the ConnectorService/Delete method, e.g., via the following gRPCurl request:

    grpcurl \
      -format json \
      -import-path ~/cloudapi/ \
      -import-path ~/cloudapi/third_party/googleapis/ \
      -proto ~/cloudapi/yandex/cloud/mdb/kafka/v1/connector_service.proto \
      -rpc-header "Authorization: Bearer $IAM_TOKEN" \
      -d '{
            "cluster_id": "<cluster_ID>",
            "connector_name": "<connector_name>"
          }' \
      mdb.api.cloud.yandex.net:443 \
      yandex.cloud.mdb.kafka.v1.ConnectorService.Delete
    

    You can get the cluster ID with the list of clusters in the folder, and the connector name, with the list of connectors in the cluster.

  4. Check the server response to make sure your request was successful.
