A Yandex project
© 2025 «Облачные Сервисы Казахстан» LLP


yandex_mdb_kafka_topic (Resource)

Created by Yandex Cloud
Updated August 7, 2025
  • Example usage
  • Schema
    • Required
    • Optional
    • Read-Only
    • Nested Schema for timeouts
    • Nested Schema for topic_config
  • Import

Manages a topic within a Yandex Managed Service for Apache Kafka cluster. For more information, see the official documentation.

Example usage

//
// Create a new MDB Kafka Topic.
//
resource "yandex_mdb_kafka_topic" "events" {
  cluster_id         = yandex_mdb_kafka_cluster.my_cluster.id
  name               = "events"
  partitions         = 4
  replication_factor = 1
  topic_config {
    cleanup_policy        = "CLEANUP_POLICY_COMPACT"
    compression_type      = "COMPRESSION_TYPE_LZ4"
    delete_retention_ms   = 86400000
    file_delete_delay_ms  = 60000
    flush_messages        = 128
    flush_ms              = 1000
    min_compaction_lag_ms = 0
    retention_bytes       = 10737418240
    retention_ms          = 604800000
    max_message_bytes     = 1048588
    min_insync_replicas   = 1
    segment_bytes         = 268435456
  }
}

resource "yandex_mdb_kafka_cluster" "my_cluster" {
  name       = "foo"
  network_id = "c64vs98keiqc7f24pvkd"

  config {
    version = "2.8"
    zones   = ["ru-central1-a"]
    kafka {
      resources {
        resource_preset_id = "s2.micro"
        disk_type_id       = "network-hdd"
        disk_size          = 16
      }
    }
  }
}

Schema

Required

  • cluster_id (String) The ID of the Kafka cluster.
  • name (String) The resource name.
  • partitions (Number) The number of the topic's partitions.
  • replication_factor (Number) The number of data copies (replicas) of the topic in the cluster.

Optional

  • timeouts (Block, Optional) (see below for nested schema)
  • topic_config (Block List, Max: 1) User-defined settings for the topic. For more information, see the official documentation and the Kafka documentation. (see below for nested schema)

Read-Only

  • id (String) The ID of this resource.

Nested Schema for timeouts

Optional:

  • create (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
  • delete (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
  • read (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Read operations occur during any refresh or planning operation when refresh is enabled.
  • update (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
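If cluster operations in your installation run longer than the provider defaults, the timeouts can be raised in the resource itself. A minimal sketch (the duration values below are illustrative, not recommendations):

```hcl
resource "yandex_mdb_kafka_topic" "events" {
  cluster_id         = yandex_mdb_kafka_cluster.my_cluster.id
  name               = "events"
  partitions         = 4
  replication_factor = 1

  # Override the default operation timeouts for this resource.
  timeouts {
    create = "15m"
    update = "10m"
    delete = "10m"
  }
}
```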

Nested Schema for topic_config

Optional:

  • cleanup_policy (String) Retention policy to use on log segments.
  • compression_type (String) Compression type of the Kafka topic.
  • delete_retention_ms (String) The amount of time to retain delete tombstone markers for log compacted topics.
  • file_delete_delay_ms (String) The time to wait before deleting a file from the filesystem.
  • flush_messages (String) This setting allows specifying an interval at which we will force an fsync of data written to the log.
  • flush_ms (String) This setting allows specifying a time interval at which we will force an fsync of data written to the log.
  • max_message_bytes (String) The largest record batch size allowed by Kafka (after compression if compression is enabled).
  • min_compaction_lag_ms (String) The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
  • min_insync_replicas (String) When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
  • preallocate (Boolean, Deprecated) True if we should preallocate the file on disk when creating a new log segment.
  • retention_bytes (String) This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy.
  • retention_ms (String) This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy.
  • segment_bytes (String) This configuration controls the segment file size for the log.
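The example above configures a compacted topic; for a time-retained event stream you would typically switch cleanup_policy to deletion and bound retention by time. A sketch under the assumption that CLEANUP_POLICY_DELETE is accepted by your cluster version (retention_bytes = -1 follows Kafka's convention of "no size limit"):

```hcl
topic_config {
  cleanup_policy   = "CLEANUP_POLICY_DELETE"
  compression_type = "COMPRESSION_TYPE_LZ4"
  retention_ms     = 259200000 # keep messages for 3 days
  retention_bytes  = -1        # no size-based limit; prune by time only
}
```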

Import

The resource can be imported using its resource ID. To get the resource ID, use the Yandex Cloud web console or the YC CLI.

# terraform import yandex_mdb_kafka_topic.<resource_name> <cluster_id>:<topic_name>
terraform import yandex_mdb_kafka_topic.events <cluster_id>:events
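Since Terraform 1.5, the same import can also be expressed declaratively with an import block, using the same <cluster_id>:<topic_name> ID format:

```hcl
import {
  to = yandex_mdb_kafka_topic.events
  id = "<cluster_id>:events"
}
```

After adding the block, `terraform plan` previews the import, and `terraform plan -generate-config-out=generated.tf` can generate the matching resource configuration.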
