Managed Service for Apache Kafka® API, REST: Cluster.create
Creates a new Apache Kafka® cluster in the specified folder.
HTTP request
POST https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters
Body parameters
{
  "folderId": "string",
  "name": "string",
  "description": "string",
  "labels": "object",
  "environment": "string",
  "configSpec": {
    "version": "string",
    "kafka": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      },
      // `configSpec.kafka` includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
      "kafkaConfig_2_8": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer",
        "saslEnabledMechanisms": [
          "string"
        ]
      },
      "kafkaConfig_3": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer",
        "saslEnabledMechanisms": [
          "string"
        ]
      }
      // end of the list of possible fields `configSpec.kafka`
    },
    "zookeeper": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      }
    },
    "zoneId": [
      "string"
    ],
    "brokersCount": "integer",
    "assignPublicIp": true,
    "unmanagedTopics": true,
    "schemaRegistry": true,
    "access": {
      "dataTransfer": true
    },
    "restApiConfig": {
      "enabled": true
    },
    "diskSizeAutoscaling": {
      "plannedUsageThreshold": "string",
      "emergencyUsageThreshold": "string",
      "diskSizeLimit": "string"
    }
  },
  "topicSpecs": [
    {
      "name": "string",
      "partitions": "integer",
      "replicationFactor": "integer",
      // `topicSpecs[]` includes only one of the fields `topicConfig_2_8`, `topicConfig_3`
      "topicConfig_2_8": {
        "cleanupPolicy": "string",
        "compressionType": "string",
        "deleteRetentionMs": "integer",
        "fileDeleteDelayMs": "integer",
        "flushMessages": "integer",
        "flushMs": "integer",
        "minCompactionLagMs": "integer",
        "retentionBytes": "integer",
        "retentionMs": "integer",
        "maxMessageBytes": "integer",
        "minInsyncReplicas": "integer",
        "segmentBytes": "integer",
        "preallocate": true
      },
      "topicConfig_3": {
        "cleanupPolicy": "string",
        "compressionType": "string",
        "deleteRetentionMs": "integer",
        "fileDeleteDelayMs": "integer",
        "flushMessages": "integer",
        "flushMs": "integer",
        "minCompactionLagMs": "integer",
        "retentionBytes": "integer",
        "retentionMs": "integer",
        "maxMessageBytes": "integer",
        "minInsyncReplicas": "integer",
        "segmentBytes": "integer",
        "preallocate": true
      }
      // end of the list of possible fields `topicSpecs[]`
    }
  ],
  "userSpecs": [
    {
      "name": "string",
      "password": "string",
      "permissions": [
        {
          "topicName": "string",
          "role": "string",
          "allowHosts": [
            "string"
          ]
        }
      ]
    }
  ],
  "networkId": "string",
  "subnetId": [
    "string"
  ],
  "securityGroupIds": [
    "string"
  ],
  "hostGroupIds": [
    "string"
  ],
  "deletionProtection": true,
  "maintenanceWindow": {
    // `maintenanceWindow` includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
    "anytime": {},
    "weeklyMaintenanceWindow": {
      "day": "string",
      "hour": "string"
    }
    // end of the list of possible fields `maintenanceWindow`
  }
}
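As an illustration, the sketch below assembles a minimal request body in Python. All IDs, the zone, the version string, and the resource preset are placeholder assumptions, not values from this reference; substitute your own folder, network, and subnet IDs, and authenticate with a valid IAM token.

```python
import json

# Placeholder values -- replace with real IDs from your cloud before sending.
body = {
    "folderId": "b1gexample",          # assumed folder ID
    "name": "my-kafka",
    "environment": "PRODUCTION",
    "configSpec": {
        "version": "3.5",              # assumed available version
        "kafka": {
            "resources": {
                "resourcePresetId": "s2.micro",   # assumed preset ID
                "diskSize": str(32 * 2**30),      # 32 GiB, in bytes, as a string
                "diskTypeId": "network-ssd",
            },
        },
        "zoneId": ["ru-central1-a"],
        "brokersCount": 1,
    },
    "networkId": "enpexample",         # assumed network ID
    "subnetId": ["e9bexample"],        # assumed subnet ID
}

# The actual call would POST this body with an IAM token, for example:
#   requests.post(
#       "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters",
#       headers={"Authorization": "Bearer <IAM_TOKEN>"},
#       json=body,
#   )

print(json.dumps(body, indent=2))
```

Note that `diskSize` is a string-encoded int64, matching the `string (int64)` types in the schema above.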
Field | Description |
---|---|
folderId | string Required. ID of the folder to create the Apache Kafka® cluster in. To get the folder ID, make a list request. The maximum string length in characters is 50. |
name | string Required. Name of the Apache Kafka® cluster. The name must be unique within the folder. The string length in characters must be 1-63. Value must match the regular expression |
description | string Description of the Apache Kafka® cluster. The maximum string length in characters is 256. |
labels | object Custom labels for the Apache Kafka® cluster as `key:value` pairs. For example, "project": "mvp" or "source": "dictionary". No more than 64 per resource. The string length in characters for each key must be 1-63. Each key must match the regular expression |
environment | string Deployment environment of the Apache Kafka® cluster. |
configSpec | object Kafka and hosts configuration for the Apache Kafka® cluster. |
configSpec. version |
string Version of Apache Kafka® used in the cluster. Possible values: |
configSpec. kafka |
object Configuration and resource allocation for Kafka brokers. |
configSpec. kafka. resources |
object Resources allocated to Kafka brokers. |
configSpec. kafka. resources. resourcePresetId |
string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
configSpec. kafka. resources. diskSize |
string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
configSpec. kafka. resources. diskTypeId |
string Type of the storage environment for the host. |
configSpec. kafka. kafkaConfig_2_8 |
object configSpec.kafka includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`. Kafka version 2.8 broker configuration. |
configSpec. kafka. kafkaConfig_2_8. compressionType |
string Cluster topics compression type. |
configSpec. kafka. kafkaConfig_2_8. logFlushIntervalMessages |
integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the `flushMessages` topic setting. |
configSpec. kafka. kafkaConfig_2_8. logFlushIntervalMs |
integer (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of `logFlushSchedulerIntervalMs` is used. This is the global cluster-level setting that can be overridden on a topic level by using the `flushMs` topic setting. |
configSpec. kafka. kafkaConfig_2_8. logFlushSchedulerIntervalMs |
integer (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
configSpec. kafka. kafkaConfig_2_8. logRetentionBytes |
integer (int64) Partition size limit; Kafka will discard old log segments to free up space if the `delete` cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the `retentionBytes` topic setting. |
configSpec. kafka. kafkaConfig_2_8. logRetentionHours |
integer (int64) The number of hours to keep a log segment file before deleting it. |
configSpec. kafka. kafkaConfig_2_8. logRetentionMinutes |
integer (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of `logRetentionHours` is used. |
configSpec. kafka. kafkaConfig_2_8. logRetentionMs |
integer (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of `logRetentionMinutes` is used. This is the global cluster-level setting that can be overridden on a topic level by using the `retentionMs` topic setting. |
configSpec. kafka. kafkaConfig_2_8. logSegmentBytes |
integer (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the `segmentBytes` topic setting. |
configSpec. kafka. kafkaConfig_2_8. logPreallocate |
boolean (boolean) Whether to preallocate the file on disk when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the `preallocate` topic setting. |
configSpec. kafka. kafkaConfig_2_8. socketSendBufferBytes |
integer (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
configSpec. kafka. kafkaConfig_2_8. socketReceiveBufferBytes |
integer (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
configSpec. kafka. kafkaConfig_2_8. autoCreateTopicsEnable |
boolean (boolean) Enables automatic creation of topics on the server. |
configSpec. kafka. kafkaConfig_2_8. numPartitions |
integer (int64) Default number of partitions per topic across the whole cluster. |
configSpec. kafka. kafkaConfig_2_8. defaultReplicationFactor |
integer (int64) Default replication factor for topics across the whole cluster. |
configSpec. kafka. kafkaConfig_2_8. messageMaxBytes |
integer (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
configSpec. kafka. kafkaConfig_2_8. replicaFetchMaxBytes |
integer (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
configSpec. kafka. kafkaConfig_2_8. sslCipherSuites[] |
string A list of cipher suites. |
configSpec. kafka. kafkaConfig_2_8. offsetsRetentionMinutes |
integer (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
configSpec. kafka. kafkaConfig_2_8. saslEnabledMechanisms[] |
string The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
configSpec. kafka. kafkaConfig_3 |
object configSpec.kafka includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`. Kafka version 3.x broker configuration. |
configSpec. kafka. kafkaConfig_3. compressionType |
string Cluster topics compression type. |
configSpec. kafka. kafkaConfig_3. logFlushIntervalMessages |
integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the `flushMessages` topic setting. |
configSpec. kafka. kafkaConfig_3. logFlushIntervalMs |
integer (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. If not set, the value of `logFlushSchedulerIntervalMs` is used. This is the global cluster-level setting that can be overridden on a topic level by using the `flushMs` topic setting. |
configSpec. kafka. kafkaConfig_3. logFlushSchedulerIntervalMs |
integer (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
configSpec. kafka. kafkaConfig_3. logRetentionBytes |
integer (int64) Partition size limit; Kafka will discard old log segments to free up space if the `delete` cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the `retentionBytes` topic setting. |
configSpec. kafka. kafkaConfig_3. logRetentionHours |
integer (int64) The number of hours to keep a log segment file before deleting it. |
configSpec. kafka. kafkaConfig_3. logRetentionMinutes |
integer (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of `logRetentionHours` is used. |
configSpec. kafka. kafkaConfig_3. logRetentionMs |
integer (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of `logRetentionMinutes` is used. This is the global cluster-level setting that can be overridden on a topic level by using the `retentionMs` topic setting. |
configSpec. kafka. kafkaConfig_3. logSegmentBytes |
integer (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the `segmentBytes` topic setting. |
configSpec. kafka. kafkaConfig_3. logPreallocate |
boolean (boolean) Whether to preallocate the file on disk when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the `preallocate` topic setting. |
configSpec. kafka. kafkaConfig_3. socketSendBufferBytes |
integer (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
configSpec. kafka. kafkaConfig_3. socketReceiveBufferBytes |
integer (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
configSpec. kafka. kafkaConfig_3. autoCreateTopicsEnable |
boolean (boolean) Enables automatic creation of topics on the server. |
configSpec. kafka. kafkaConfig_3. numPartitions |
integer (int64) Default number of partitions per topic across the whole cluster. |
configSpec. kafka. kafkaConfig_3. defaultReplicationFactor |
integer (int64) Default replication factor for topics across the whole cluster. |
configSpec. kafka. kafkaConfig_3. messageMaxBytes |
integer (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
configSpec. kafka. kafkaConfig_3. replicaFetchMaxBytes |
integer (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
configSpec. kafka. kafkaConfig_3. sslCipherSuites[] |
string A list of cipher suites. |
configSpec. kafka. kafkaConfig_3. offsetsRetentionMinutes |
integer (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
configSpec. kafka. kafkaConfig_3. saslEnabledMechanisms[] |
string The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
configSpec. zookeeper |
object Configuration and resource allocation for ZooKeeper hosts. |
configSpec. zookeeper. resources |
object Resources allocated to ZooKeeper hosts. |
configSpec. zookeeper. resources. resourcePresetId |
string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
configSpec. zookeeper. resources. diskSize |
string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
configSpec. zookeeper. resources. diskTypeId |
string Type of the storage environment for the host. |
configSpec. zoneId[] |
string IDs of availability zones where Kafka brokers reside. |
configSpec. brokersCount |
integer (int64) The number of Kafka brokers deployed in each availability zone. |
configSpec. assignPublicIp |
boolean (boolean) The flag that defines whether a public IP address is assigned to the cluster. If the value is `true`, the cluster is accessible from the internet over its public IP address. |
configSpec. unmanagedTopics |
boolean (boolean) Allows topic management via the Admin API. Deprecated: this feature is now enabled permanently. |
configSpec. schemaRegistry |
boolean (boolean) Enables managed Schema Registry on the cluster. |
configSpec. access |
object Access policy for external services. |
configSpec. access. dataTransfer |
boolean (boolean) Allow access for DataTransfer. |
configSpec. restApiConfig |
object Configuration of REST API. |
configSpec. restApiConfig. enabled |
boolean (boolean) Whether the REST API is enabled for this cluster. |
configSpec. diskSizeAutoscaling |
object Disk size autoscaling settings. |
configSpec. diskSizeAutoscaling. plannedUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. Zero value means disabled threshold. Acceptable values are 0 to 100, inclusive. |
configSpec. diskSizeAutoscaling. emergencyUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. Zero value means disabled threshold. Acceptable values are 0 to 100, inclusive. |
configSpec. diskSizeAutoscaling. diskSizeLimit |
string (int64) New storage size (in bytes) that is set when one of the thresholds is achieved. |
topicSpecs[] | object One or more configurations of topics to be created in the Apache Kafka® cluster. |
topicSpecs[]. name |
string Name of the topic. |
topicSpecs[]. partitions |
integer (int64) The number of the topic's partitions. |
topicSpecs[]. replicationFactor |
integer (int64) The number of copies of the topic's data kept in the cluster. |
topicSpecs[]. topicConfig_2_8 |
object topicSpecs[] includes only one of the fields `topicConfig_2_8`, `topicConfig_3`. Topic settings for Apache Kafka® 2.8. |
topicSpecs[]. topicConfig_2_8. cleanupPolicy |
string Retention policy to use on old log messages. |
topicSpecs[]. topicConfig_2_8. compressionType |
string The compression type for a given topic. |
topicSpecs[]. topicConfig_2_8. deleteRetentionMs |
integer (int64) The amount of time in milliseconds to retain delete tombstone markers for log compacted topics. |
topicSpecs[]. topicConfig_2_8. fileDeleteDelayMs |
integer (int64) The time to wait before deleting a file from the filesystem. |
topicSpecs[]. topicConfig_2_8. flushMessages |
integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This setting overrides the cluster-level `logFlushIntervalMessages` setting. |
topicSpecs[]. topicConfig_2_8. flushMs |
integer (int64) The maximum time in milliseconds that a message in the topic is kept in memory before it is flushed to disk. This setting overrides the cluster-level `logFlushIntervalMs` setting. |
topicSpecs[]. topicConfig_2_8. minCompactionLagMs |
integer (int64) The minimum time in milliseconds a message will remain uncompacted in the log. |
topicSpecs[]. topicConfig_2_8. retentionBytes |
integer (int64) The maximum size a partition can grow to before Kafka will discard old log segments to free up space, if the `delete` cleanup policy is in effect. This setting overrides the cluster-level `logRetentionBytes` setting. |
topicSpecs[]. topicConfig_2_8. retentionMs |
integer (int64) The number of milliseconds to keep a log segment's file before deleting it. This setting overrides the cluster-level `logRetentionMs` setting. |
topicSpecs[]. topicConfig_2_8. maxMessageBytes |
integer (int64) The largest record batch size allowed in the topic. |
topicSpecs[]. topicConfig_2_8. minInsyncReplicas |
integer (int64) The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all"). |
topicSpecs[]. topicConfig_2_8. segmentBytes |
integer (int64) This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention. This setting overrides the cluster-level `logSegmentBytes` setting. |
topicSpecs[]. topicConfig_2_8. preallocate |
boolean (boolean) True if the file should be preallocated on disk when creating a new log segment. This setting overrides the cluster-level `logPreallocate` setting. |
topicSpecs[]. topicConfig_3 |
object topicSpecs[] includes only one of the fields `topicConfig_2_8`, `topicConfig_3`. Topic settings for Apache Kafka® 3.x. |
topicSpecs[]. topicConfig_3. cleanupPolicy |
string Retention policy to use on old log messages. |
topicSpecs[]. topicConfig_3. compressionType |
string The compression type for a given topic. |
topicSpecs[]. topicConfig_3. deleteRetentionMs |
integer (int64) The amount of time in milliseconds to retain delete tombstone markers for log compacted topics. |
topicSpecs[]. topicConfig_3. fileDeleteDelayMs |
integer (int64) The time to wait before deleting a file from the filesystem. |
topicSpecs[]. topicConfig_3. flushMessages |
integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This setting overrides the cluster-level `logFlushIntervalMessages` setting. |
topicSpecs[]. topicConfig_3. flushMs |
integer (int64) The maximum time in milliseconds that a message in the topic is kept in memory before it is flushed to disk. This setting overrides the cluster-level `logFlushIntervalMs` setting. |
topicSpecs[]. topicConfig_3. minCompactionLagMs |
integer (int64) The minimum time in milliseconds a message will remain uncompacted in the log. |
topicSpecs[]. topicConfig_3. retentionBytes |
integer (int64) The maximum size a partition can grow to before Kafka will discard old log segments to free up space, if the `delete` cleanup policy is in effect. This setting overrides the cluster-level `logRetentionBytes` setting. |
topicSpecs[]. topicConfig_3. retentionMs |
integer (int64) The number of milliseconds to keep a log segment's file before deleting it. This setting overrides the cluster-level `logRetentionMs` setting. |
topicSpecs[]. topicConfig_3. maxMessageBytes |
integer (int64) The largest record batch size allowed in the topic. |
topicSpecs[]. topicConfig_3. minInsyncReplicas |
integer (int64) The minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all"). |
topicSpecs[]. topicConfig_3. segmentBytes |
integer (int64) This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention. This setting overrides the cluster-level `logSegmentBytes` setting. |
topicSpecs[]. topicConfig_3. preallocate |
boolean (boolean) True if the file should be preallocated on disk when creating a new log segment. This setting overrides the cluster-level `logPreallocate` setting. |
userSpecs[] | object Configurations of accounts to be created in the Apache Kafka® cluster. |
userSpecs[]. name |
string Required. Name of the Kafka user. The string length in characters must be 1-256. Value must match the regular expression |
userSpecs[]. password |
string Required. Password of the Kafka user. The string length in characters must be 8-128. |
userSpecs[]. permissions[] |
object Set of permissions granted to the user. |
userSpecs[]. permissions[]. topicName |
string Name or prefix-pattern with wildcard for the topic that the permission grants access to. To get the topic name, make a list request. |
userSpecs[]. permissions[]. role |
string Access role type to grant to the user. |
userSpecs[]. permissions[]. allowHosts[] |
string Lists hosts allowed for this permission. If not defined, access is allowed from any host. Bear in mind that the same host might appear in multiple permissions at the same time, so removing an individual permission doesn't automatically restrict access from that host. |
networkId | string ID of the network to create the Apache Kafka® cluster in. The maximum string length in characters is 50. |
subnetId[] | string IDs of subnets to create brokers in. |
securityGroupIds[] | string User security groups. |
hostGroupIds[] | string Host groups to place the cluster's VMs on. |
deletionProtection | boolean (boolean) Deletion protection inhibits deletion of the cluster. |
maintenanceWindow | object Window of maintenance operations. |
maintenanceWindow. anytime |
object maintenanceWindow includes only one of the fields `anytime`, `weeklyMaintenanceWindow`. |
maintenanceWindow. weeklyMaintenanceWindow |
object maintenanceWindow includes only one of the fields `anytime`, `weeklyMaintenanceWindow`. |
maintenanceWindow. weeklyMaintenanceWindow. day |
string |
maintenanceWindow. weeklyMaintenanceWindow. hour |
string (int64) Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
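The three retention settings fall back to one another as documented above: `logRetentionMs` takes precedence, then `logRetentionMinutes`, then `logRetentionHours`. A small sketch of how a client could resolve the effective retention (the helper name is illustrative, not part of the API; the 168-hour default mirrors upstream Kafka's):

```python
def effective_retention_ms(log_retention_ms=None,
                           log_retention_minutes=None,
                           log_retention_hours=None):
    """Resolve the effective log retention in ms using the documented fallbacks."""
    if log_retention_ms is not None:
        return log_retention_ms
    if log_retention_minutes is not None:
        return log_retention_minutes * 60 * 1000
    if log_retention_hours is not None:
        return log_retention_hours * 60 * 60 * 1000
    # Upstream Kafka's default retention is 168 hours (7 days).
    return 7 * 24 * 60 * 60 * 1000
```

The same precedence applies per topic, where `retentionMs` in `topicSpecs[]` overrides the cluster-level values.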
Response
HTTP Code: 200 - OK
{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": true,
  "metadata": "object",
  // includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": "object"
  // end of the list of possible fields
}
An Operation resource. For more information, see Operation.
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
createdAt | string (date-time) Creation timestamp. String in RFC3339 text format. The range of possible values is from `0001-01-01T00:00:00Z` to `9999-12-31T23:59:59.999999999Z`. To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits). |
createdBy | string ID of the user or service account who initiated the operation. |
modifiedAt | string (date-time) The time when the Operation resource was last modified. String in RFC3339 text format. The range of possible values is from `0001-01-01T00:00:00Z` to `9999-12-31T23:59:59.999999999Z`. To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits). |
done | boolean (boolean) If the value is `false`, the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available. |
metadata | object Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
error | object The error result of the operation in case of failure or cancellation. Operation includes only one of the fields `error`, `response`. |
error. code |
integer (int32) Error code. An enum value of google.rpc.Code. |
error. message |
string An error message. |
error. details[] |
object A list of messages that carry the error details. |
response | object Operation includes only one of the fields `error`, `response`. The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any. |
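Since `error` and `response` are mutually exclusive and only meaningful once `done` is `true`, client code typically branches on them after polling the operation. A hedged sketch (the operation dict shape follows the table above; the helper name is illustrative):

```python
def unwrap_operation(operation: dict):
    """Return the response payload of a completed operation, or raise on error."""
    if not operation.get("done"):
        # Still in progress -- callers would normally poll and retry instead.
        raise RuntimeError("operation %s is still in progress" % operation.get("id"))
    if "error" in operation:
        # Failure or cancellation: only the `error` field of the oneof is set.
        err = operation["error"]
        raise RuntimeError("operation failed with code %s: %s"
                           % (err.get("code"), err.get("message")))
    # Success: only the `response` field of the oneof is set.
    return operation.get("response")
```

For Cluster.create the `response` payload, once the operation completes, is the created Cluster resource.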