Managed Service for Apache Kafka® API, REST: Topic.Create

Article created by Yandex Cloud
Updated on November 26, 2024

In this article:
  • HTTP request
  • Path parameters
  • Body parameters
  • TopicSpec
  • TopicConfig2_8
  • TopicConfig3
  • Response
  • CreateTopicMetadata
  • Status
  • Topic
  • TopicConfig2_8
  • TopicConfig3

Creates a new Kafka topic in the specified cluster.

HTTP request

POST https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}/topics

Path parameters

clusterId: string
Required field. ID of the Apache Kafka® cluster to create a topic in.
To get the cluster ID, make a ClusterService.List request.
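For illustration, a minimal sketch of such a lookup with Python and the requests library follows. The clusters list path and the folderId query parameter follow the service's usual REST conventions but are not part of this reference page; the IAM token and folder ID are placeholders.

import requests

IAM_TOKEN = "<IAM token>"   # placeholder
FOLDER_ID = "<folder ID>"   # placeholder

# Assumed ClusterService.List REST path for Managed Service for Apache Kafka(R).
resp = requests.get(
    "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    params={"folderId": FOLDER_ID},
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    print(cluster["id"], cluster["name"])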

Body parameters

{
  "topicSpec": {
    "name": "string",
    "partitions": "string",
    "replicationFactor": "string",
    // Includes only one of the fields `topicConfig_2_8`, `topicConfig_3`
    "topicConfig_2_8": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "string",
      "fileDeleteDelayMs": "string",
      "flushMessages": "string",
      "flushMs": "string",
      "minCompactionLagMs": "string",
      "retentionBytes": "string",
      "retentionMs": "string",
      "maxMessageBytes": "string",
      "minInsyncReplicas": "string",
      "segmentBytes": "string",
      "preallocate": "boolean"
    },
    "topicConfig_3": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "string",
      "fileDeleteDelayMs": "string",
      "flushMessages": "string",
      "flushMs": "string",
      "minCompactionLagMs": "string",
      "retentionBytes": "string",
      "retentionMs": "string",
      "maxMessageBytes": "string",
      "minInsyncReplicas": "string",
      "segmentBytes": "string",
      "preallocate": "boolean"
    }
    // end of the list of possible fields
  }
}
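Below is a minimal sketch of a Topic.Create call with Python and the requests library; the cluster ID, IAM token, and topic settings are placeholder values chosen for illustration, not recommendations.

import requests

IAM_TOKEN = "<IAM token>"     # placeholder
CLUSTER_ID = "<cluster ID>"   # placeholder

body = {
    "topicSpec": {
        "name": "events",
        "partitions": "6",           # int64 values are passed as strings
        "replicationFactor": "3",
        # Settings for an Apache Kafka(R) 3.x cluster; use topicConfig_2_8 for version 2.8.
        "topicConfig_3": {
            "cleanupPolicy": "CLEANUP_POLICY_DELETE",
            "compressionType": "COMPRESSION_TYPE_LZ4",
            "retentionMs": "604800000",    # keep log segments for 7 days
            "minInsyncReplicas": "2",
        },
    }
}

resp = requests.post(
    f"https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{CLUSTER_ID}/topics",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    json=body,
)
resp.raise_for_status()
operation = resp.json()   # an Operation resource, see the Response section below
print(operation["id"], operation.get("done", False))

The call returns an Operation resource; once its done field is true, the created Topic is available in the response field.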

topicSpec: TopicSpec
Required field. Configuration of the topic to create.

TopicSpec

name: string
Name of the topic.

partitions: string (int64)
The number of the topic's partitions.

replicationFactor: string (int64)
The number of copies (replicas) of the topic's data kept in the cluster.

topicConfig_2_8: TopicConfig2_8
Includes only one of the fields topicConfig_2_8, topicConfig_3.
User-defined settings for the topic.

topicConfig_3: TopicConfig3
Includes only one of the fields topicConfig_2_8, topicConfig_3.
User-defined settings for the topic.

TopicConfig2_8

Topic settings for Apache Kafka® 2.8.

cleanupPolicy: enum (CleanupPolicy)
Retention policy to use on old log messages.
  • CLEANUP_POLICY_UNSPECIFIED
  • CLEANUP_POLICY_DELETE: This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig2_8.logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: This policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: This policy uses both compaction and deletion for messages and log segments.

compressionType: enum (CompressionType)
The compression type for a given topic.
  • COMPRESSION_TYPE_UNSPECIFIED
  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

deleteRetentionMs: string (int64)
The amount of time, in milliseconds, to retain delete tombstone markers for log-compacted topics.

fileDeleteDelayMs: string (int64)
The time to wait before deleting a file from the filesystem.

flushMessages: string (int64)
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.logFlushIntervalMessages setting on the topic level.

flushMs: string (int64)
The maximum time, in milliseconds, that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.logFlushIntervalMs setting on the topic level.

minCompactionLagMs: string (int64)
The minimum time, in milliseconds, a message remains uncompacted in the log.

retentionBytes: string (int64)
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect.
This setting is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig2_8.logRetentionBytes setting on the topic level.

retentionMs: string (int64)
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig2_8.logRetentionMs setting on the topic level.

maxMessageBytes: string (int64)
The largest record batch size allowed in the topic.

minInsyncReplicas: string (int64)
The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

segmentBytes: string (int64)
The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig2_8.logSegmentBytes setting on the topic level.

preallocate: boolean
True if the file should be preallocated on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig2_8.logPreallocate setting on the topic level.

TopicConfig3

Topic settings for Apache Kafka® 3.x.

cleanupPolicy: enum (CleanupPolicy)
Retention policy to use on old log messages.
  • CLEANUP_POLICY_UNSPECIFIED
  • CLEANUP_POLICY_DELETE: This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig3.logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: This policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: This policy uses both compaction and deletion for messages and log segments.

compressionType: enum (CompressionType)
The compression type for a given topic.
  • COMPRESSION_TYPE_UNSPECIFIED
  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

deleteRetentionMs: string (int64)
The amount of time, in milliseconds, to retain delete tombstone markers for log-compacted topics.

fileDeleteDelayMs: string (int64)
The time to wait before deleting a file from the filesystem.

flushMessages: string (int64)
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig3.logFlushIntervalMessages setting on the topic level.

flushMs: string (int64)
The maximum time, in milliseconds, that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig3.logFlushIntervalMs setting on the topic level.

minCompactionLagMs: string (int64)
The minimum time, in milliseconds, a message remains uncompacted in the log.

retentionBytes: string (int64)
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect.
This setting is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig3.logRetentionBytes setting on the topic level.

retentionMs: string (int64)
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig3.logRetentionMs setting on the topic level.

maxMessageBytes: string (int64)
The largest record batch size allowed in the topic.

minInsyncReplicas: string (int64)
The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

segmentBytes: string (int64)
The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig3.logSegmentBytes setting on the topic level.

preallocate: boolean
True if the file should be preallocated on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig3.logPreallocate setting on the topic level.
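To show how several of these settings fit together, here is a hedged sketch of a topicConfig_3 for a log-compacted topic, written as a Python dictionary in the shape of the request body above; the specific values are illustrative, not recommendations.

# Illustrative topicConfig_3 for a compacted topic (values are examples only).
topic_config_3 = {
    # Keep only the latest record per key instead of deleting by age or size.
    "cleanupPolicy": "CLEANUP_POLICY_COMPACT",
    # Let the producer choose the compression codec.
    "compressionType": "COMPRESSION_TYPE_PRODUCER",
    # Keep delete tombstones for 24 hours so consumers can observe deletions.
    "deleteRetentionMs": "86400000",
    # Do not compact records younger than 10 minutes.
    "minCompactionLagMs": "600000",
    # Roll log segments at 256 MB; only closed segments are compacted.
    "segmentBytes": "268435456",
}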

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": "boolean",
  "metadata": {
    "clusterId": "string",
    "topicName": "string"
  },
  // Includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": {
    "name": "string",
    "clusterId": "string",
    "partitions": "string",
    "replicationFactor": "string",
    // Includes only one of the fields `topicConfig_2_8`, `topicConfig_3`
    "topicConfig_2_8": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "string",
      "fileDeleteDelayMs": "string",
      "flushMessages": "string",
      "flushMs": "string",
      "minCompactionLagMs": "string",
      "retentionBytes": "string",
      "retentionMs": "string",
      "maxMessageBytes": "string",
      "minInsyncReplicas": "string",
      "segmentBytes": "string",
      "preallocate": "boolean"
    },
    "topicConfig_3": {
      "cleanupPolicy": "string",
      "compressionType": "string",
      "deleteRetentionMs": "string",
      "fileDeleteDelayMs": "string",
      "flushMessages": "string",
      "flushMs": "string",
      "minCompactionLagMs": "string",
      "retentionBytes": "string",
      "retentionMs": "string",
      "maxMessageBytes": "string",
      "minInsyncReplicas": "string",
      "segmentBytes": "string",
      "preallocate": "boolean"
    }
    // end of the list of possible fields
  }
  // end of the list of possible fields
}

An Operation resource. For more information, see Operation.

id: string
ID of the operation.

description: string
Description of the operation. 0-256 characters long.

createdAt: string (date-time)
Creation timestamp.
String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.
To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

createdBy: string
ID of the user or service account who initiated the operation.

modifiedAt: string (date-time)
The time when the Operation resource was last modified.
String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.
To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

done: boolean
If the value is false, the operation is still in progress.
If true, the operation is completed, and either error or response is available.

metadata: CreateTopicMetadata
Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on.
Any method that returns a long-running operation should document the metadata type, if any.

error: Status
The error result of the operation in case of failure or cancellation.
Includes only one of the fields error, response.
The operation result: if done == false and no failure was detected, neither error nor response is set; if done == false and a failure was detected, error is set; if done == true, exactly one of error or response is set.

response: Topic
The normal response of the operation in case of success.
If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
Includes only one of the fields error, response.
The operation result: if done == false and no failure was detected, neither error nor response is set; if done == false and a failure was detected, error is set; if done == true, exactly one of error or response is set.
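Because Topic.Create is asynchronous, a client usually polls the returned Operation until done is true and then inspects error or response. A minimal sketch in Python, assuming the public Operation service endpoint operation.api.cloud.yandex.net and placeholder values for the IAM token and operation ID:

import time
import requests

IAM_TOKEN = "<IAM token>"                                  # placeholder
OPERATION_ID = "<operation ID returned by Topic.Create>"   # placeholder

while True:
    op = requests.get(
        f"https://operation.api.cloud.yandex.net/operations/{OPERATION_ID}",
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    ).json()
    if op.get("done"):
        break
    time.sleep(2)  # topic creation normally completes within seconds

if "error" in op:
    # Status: code (google.rpc.Code), message, details[]
    raise RuntimeError(f"Topic creation failed: {op['error']}")

topic = op["response"]  # the created Topic resource
print(topic["name"], topic["partitions"])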

CreateTopicMetadata

clusterId: string
ID of the Apache Kafka® cluster where a topic is being created.

topicName: string
Required field. Name of the Kafka topic that is being created.

Status

The error result of the operation in case of failure or cancellation.

code: integer (int32)
Error code. An enum value of google.rpc.Code.

message: string
An error message.

details[]: object
A list of messages that carry the error details.

Topic

A Kafka topic. For more information, see the Concepts -> Topics and partitions section of the documentation.

name: string
Name of the topic.

clusterId: string
ID of the Apache Kafka® cluster that the topic belongs to.
To get the Apache Kafka® cluster ID, make a ClusterService.List request.

partitions: string (int64)
The number of the topic's partitions.

replicationFactor: string (int64)
The number of data copies (replicas) for the topic in the cluster.

topicConfig_2_8: TopicConfig2_8
Includes only one of the fields topicConfig_2_8, topicConfig_3.
User-defined settings for the topic.

topicConfig_3: TopicConfig3
Includes only one of the fields topicConfig_2_8, topicConfig_3.
User-defined settings for the topic.

TopicConfig2_8

Topic settings for Apache Kafka® 2.8.

cleanupPolicy: enum (CleanupPolicy)
Retention policy to use on old log messages.
  • CLEANUP_POLICY_UNSPECIFIED
  • CLEANUP_POLICY_DELETE: This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig2_8.logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: This policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: This policy uses both compaction and deletion for messages and log segments.

compressionType: enum (CompressionType)
The compression type for a given topic.
  • COMPRESSION_TYPE_UNSPECIFIED
  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

deleteRetentionMs: string (int64)
The amount of time, in milliseconds, to retain delete tombstone markers for log-compacted topics.

fileDeleteDelayMs: string (int64)
The time to wait before deleting a file from the filesystem.

flushMessages: string (int64)
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.logFlushIntervalMessages setting on the topic level.

flushMs: string (int64)
The maximum time, in milliseconds, that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.logFlushIntervalMs setting on the topic level.

minCompactionLagMs: string (int64)
The minimum time, in milliseconds, a message remains uncompacted in the log.

retentionBytes: string (int64)
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect.
This setting is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig2_8.logRetentionBytes setting on the topic level.

retentionMs: string (int64)
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig2_8.logRetentionMs setting on the topic level.

maxMessageBytes: string (int64)
The largest record batch size allowed in the topic.

minInsyncReplicas: string (int64)
The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

segmentBytes: string (int64)
The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig2_8.logSegmentBytes setting on the topic level.

preallocate: boolean
True if the file should be preallocated on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig2_8.logPreallocate setting on the topic level.

TopicConfig3

Topic settings for Apache Kafka® 3.x.

cleanupPolicy: enum (CleanupPolicy)
Retention policy to use on old log messages.
  • CLEANUP_POLICY_UNSPECIFIED
  • CLEANUP_POLICY_DELETE: This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig3.logRetentionMs and other similar parameters.
  • CLEANUP_POLICY_COMPACT: This policy compacts messages in the log.
  • CLEANUP_POLICY_COMPACT_AND_DELETE: This policy uses both compaction and deletion for messages and log segments.

compressionType: enum (CompressionType)
The compression type for a given topic.
  • COMPRESSION_TYPE_UNSPECIFIED
  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec is set by the producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

deleteRetentionMs: string (int64)
The amount of time, in milliseconds, to retain delete tombstone markers for log-compacted topics.

fileDeleteDelayMs: string (int64)
The time to wait before deleting a file from the filesystem.

flushMessages: string (int64)
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig3.logFlushIntervalMessages setting on the topic level.

flushMs: string (int64)
The maximum time, in milliseconds, that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig3.logFlushIntervalMs setting on the topic level.

minCompactionLagMs: string (int64)
The minimum time, in milliseconds, a message remains uncompacted in the log.

retentionBytes: string (int64)
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanupPolicy is in effect.
This setting is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig3.logRetentionBytes setting on the topic level.

retentionMs: string (int64)
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig3.logRetentionMs setting on the topic level.

maxMessageBytes: string (int64)
The largest record batch size allowed in the topic.

minInsyncReplicas: string (int64)
The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").

segmentBytes: string (int64)
The segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig3.logSegmentBytes setting on the topic level.

preallocate: boolean
True if the file should be preallocated on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig3.logPreallocate setting on the topic level.
