yandex_mdb_kafka_topic (Resource)
Manages a topic of a Kafka cluster within the Yandex Cloud. For more information, see the official documentation.
Example usage
//
// Create a new MDB Kafka Topic.
//
resource "yandex_mdb_kafka_topic" "events" {
  cluster_id         = yandex_mdb_kafka_cluster.my_cluster.id
  name               = "events"
  partitions         = 4
  replication_factor = 1

  topic_config {
    cleanup_policy        = "CLEANUP_POLICY_COMPACT"
    compression_type      = "COMPRESSION_TYPE_LZ4"
    delete_retention_ms   = 86400000
    file_delete_delay_ms  = 60000
    flush_messages        = 128
    flush_ms              = 1000
    min_compaction_lag_ms = 0
    retention_bytes       = 10737418240
    retention_ms          = 604800000
    max_message_bytes     = 1048588
    min_insync_replicas   = 1
    segment_bytes         = 268435456
  }
}

resource "yandex_mdb_kafka_cluster" "my_cluster" {
  name       = "foo"
  network_id = "c64vs98keiqc7f24pvkd"

  config {
    version = "2.8"
    zones   = ["ru-central1-a"]
    kafka {
      resources {
        resource_preset_id = "s2.micro"
        disk_type_id       = "network-hdd"
        disk_size          = 16
      }
    }
  }
}
Schema
Required
cluster_id (String) The ID of the Kafka cluster.
name (String) The resource name.
partitions (Number) The number of the topic's partitions.
replication_factor (Number) Amount of data copies (replicas) for the topic in the cluster.
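A minimal configuration that sets only these required arguments might look like the sketch below (the resource name and the cluster reference are illustrative):

resource "yandex_mdb_kafka_topic" "minimal" {
  # Reference to a yandex_mdb_kafka_cluster managed elsewhere in the configuration.
  cluster_id         = yandex_mdb_kafka_cluster.my_cluster.id
  name               = "minimal"
  partitions         = 1
  replication_factor = 1
}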
Optional
timeouts (Block, Optional) (see below for nested schema)
topic_config (Block List, Max: 1) User-defined settings for the topic. For more information, see the official documentation and the Kafka documentation. (see below for nested schema)
Read-Only
id (String) The ID of this resource.
Nested Schema for timeouts
Optional:
create (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
delete (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
read (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Read operations occur during any refresh or planning operation when refresh is enabled.
update (String) A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
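For example, a topic that allows extra time for create and update operations could set a timeouts block like this (the durations are illustrative, not recommended values):

resource "yandex_mdb_kafka_topic" "events" {
  # ... required arguments as above ...

  timeouts {
    create = "10m"
    update = "5m"
    delete = "15m"
  }
}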
Nested Schema for topic_config
Optional:
cleanup_policy (String) Retention policy to use on log segments.
compression_type (String) Compression type of the Kafka topic.
delete_retention_ms (String) The amount of time to retain delete tombstone markers for log compacted topics.
file_delete_delay_ms (String) The time to wait before deleting a file from the filesystem.
flush_messages (String) This setting allows specifying an interval at which we will force an fsync of data written to the log.
flush_ms (String) This setting allows specifying a time interval at which we will force an fsync of data written to the log.
max_message_bytes (String) The largest record batch size allowed by Kafka (after compression if compression is enabled).
min_compaction_lag_ms (String) The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
min_insync_replicas (String) When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
preallocate (Boolean, Deprecated) True if we should preallocate the file on disk when creating a new log segment.
retention_bytes (String) This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy.
retention_ms (String) This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy.
segment_bytes (String) This configuration controls the segment file size for the log.
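As a complement to the compacted-topic example above, a topic that relies on time-based deletion could combine cleanup_policy, retention_ms, and min_insync_replicas like this sketch (the topic name, partition count, and retention period are illustrative):

resource "yandex_mdb_kafka_topic" "logs" {
  cluster_id         = yandex_mdb_kafka_cluster.my_cluster.id
  name               = "logs"
  partitions         = 6
  replication_factor = 1

  topic_config {
    cleanup_policy      = "CLEANUP_POLICY_DELETE"
    retention_ms        = 259200000  # discard segments older than 3 days
    min_insync_replicas = 1
  }
}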
Import
The resource can be imported by using its ID, in the format <cluster_id>:<topic_name>. To get the cluster ID, you can use the Yandex Cloud Web Console.
# terraform import yandex_mdb_kafka_topic.<resource_name> <cluster_id>:<topic_name>
terraform import yandex_mdb_kafka_topic.events <cluster_id>:events
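Alternatively, on Terraform v1.5+ you can declare the import in configuration instead of running the CLI command; a sketch, with the cluster ID left as a placeholder:

import {
  # Address of the resource block that will own the imported topic.
  to = yandex_mdb_kafka_topic.events
  # Import ID in the <cluster_id>:<topic_name> format described above.
  id = "<cluster_id>:events"
}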