Managed Service for Apache Kafka® API, REST: Cluster.Update
- HTTP request
- Path parameters
- Body parameters
- ConfigSpec
- Kafka
- Resources
- KafkaConfig2_8
- KafkaConfig3
- Zookeeper
- Access
- RestAPIConfig
- DiskSizeAutoscaling
- KRaft
- MaintenanceWindow
- WeeklyMaintenanceWindow
- Response
- UpdateClusterMetadata
- Status
- Cluster
- Monitoring
- ConfigSpec
- Kafka
- Resources
- KafkaConfig2_8
- KafkaConfig3
- Zookeeper
- Access
- RestAPIConfig
- DiskSizeAutoscaling
- KRaft
- MaintenanceWindow
- WeeklyMaintenanceWindow
- MaintenanceOperation
Updates the specified Apache Kafka® cluster.
HTTP request
PATCH https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}
Path parameters
Field |
Description |
clusterId |
string Required field. ID of the Apache Kafka® cluster to update. To get the Apache Kafka® cluster ID, make a ClusterService.List request. |
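A minimal sketch of looking up the cluster ID with a List request, assuming the ClusterService.List REST endpoint at .../managed-kafka/v1/clusters, the Python requests library, and an IAM token exported as IAM_TOKEN; the folder ID below is hypothetical:

import os
import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]   # assumption: IAM token provided via environment
FOLDER_ID = "b1gexamplefolder"        # hypothetical folder ID

# List clusters in the folder and print their IDs and names.
resp = requests.get(
    "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    params={"folderId": FOLDER_ID},
)
resp.raise_for_status()
for cluster in resp.json().get("clusters", []):
    print(cluster["id"], cluster["name"])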
Body parameters
{
"updateMask": "object",
"description": "string",
"labels": "object",
"configSpec": {
"version": "string",
"kafka": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
},
// Includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
"kafkaConfig_2_8": {
"compressionType": "string",
"logFlushIntervalMessages": "string",
"logFlushIntervalMs": "string",
"logFlushSchedulerIntervalMs": "string",
"logRetentionBytes": "string",
"logRetentionHours": "string",
"logRetentionMinutes": "string",
"logRetentionMs": "string",
"logSegmentBytes": "string",
"logPreallocate": "boolean",
"socketSendBufferBytes": "string",
"socketReceiveBufferBytes": "string",
"autoCreateTopicsEnable": "boolean",
"numPartitions": "string",
"defaultReplicationFactor": "string",
"messageMaxBytes": "string",
"replicaFetchMaxBytes": "string",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "string",
"saslEnabledMechanisms": [
"string"
]
},
"kafkaConfig_3": {
"compressionType": "string",
"logFlushIntervalMessages": "string",
"logFlushIntervalMs": "string",
"logFlushSchedulerIntervalMs": "string",
"logRetentionBytes": "string",
"logRetentionHours": "string",
"logRetentionMinutes": "string",
"logRetentionMs": "string",
"logSegmentBytes": "string",
"logPreallocate": "boolean",
"socketSendBufferBytes": "string",
"socketReceiveBufferBytes": "string",
"autoCreateTopicsEnable": "boolean",
"numPartitions": "string",
"defaultReplicationFactor": "string",
"messageMaxBytes": "string",
"replicaFetchMaxBytes": "string",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "string",
"saslEnabledMechanisms": [
"string"
]
}
// end of the list of possible fields
},
"zookeeper": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
}
},
"zoneId": [
"string"
],
"brokersCount": "string",
"assignPublicIp": "boolean",
"unmanagedTopics": "boolean",
"schemaRegistry": "boolean",
"access": {
"dataTransfer": "boolean"
},
"restApiConfig": {
"enabled": "boolean"
},
"diskSizeAutoscaling": {
"plannedUsageThreshold": "string",
"emergencyUsageThreshold": "string",
"diskSizeLimit": "string"
},
"kraft": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
}
}
},
"name": "string",
"securityGroupIds": [
"string"
],
"deletionProtection": "boolean",
"maintenanceWindow": {
// Includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
"anytime": "object",
"weeklyMaintenanceWindow": {
"day": "string",
"hour": "string"
}
// end of the list of possible fields
},
"networkId": "string",
"subnetIds": [
"string"
]
}
Field |
Description |
updateMask |
object (field-mask) A comma-separated list of names of ALL fields to be updated. Only the fields specified in the mask will be changed; the rest will stay untouched. |
description |
string New description of the Apache Kafka® cluster. |
labels |
object (map<string, string>) Custom labels for the Apache Kafka® cluster as key:value pairs. For example, "project": "mvp" or "source": "dictionary". The new set of labels will completely replace the old ones. |
configSpec |
New configuration and resources for hosts in the Apache Kafka® cluster. Use updateMask to specify which of these settings should be applied. |
name |
string New name for the Apache Kafka® cluster. |
securityGroupIds[] |
string User security groups |
deletionProtection |
boolean Deletion Protection inhibits deletion of the cluster |
maintenanceWindow |
New maintenance window settings for the cluster. |
networkId |
string ID of the network to move the cluster to. |
subnetIds[] |
string IDs of subnets where the hosts are located or where a new host will be created. |
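A minimal sketch of the update request itself, assuming the Python requests library and an IAM token in IAM_TOKEN; the cluster ID, description, and labels below are illustrative only. updateMask restricts the change to the listed fields:

import os
import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]
CLUSTER_ID = "c9qexamplecluster"      # hypothetical cluster ID

body = {
    "updateMask": "description,labels",        # only these fields will be changed
    "description": "Production Kafka cluster",
    "labels": {"project": "mvp"},              # replaces the whole label set
}

resp = requests.patch(
    f"https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{CLUSTER_ID}",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    json=body,
)
resp.raise_for_status()
operation = resp.json()                        # an Operation resource (see Response below)
print(operation["id"], operation["done"])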
ConfigSpec
Field |
Description |
version |
string Version of Apache Kafka® used in the cluster. Possible values: |
kafka |
Configuration and resource allocation for Kafka brokers. |
zookeeper |
Configuration and resource allocation for ZooKeeper hosts. |
zoneId[] |
string IDs of availability zones where Kafka brokers reside. |
brokersCount |
string (int64) The number of Kafka brokers deployed in each availability zone. |
assignPublicIp |
boolean The flag that defines whether a public IP address is assigned to the cluster. |
unmanagedTopics |
boolean Allows topics to be managed via the Kafka Admin API. |
schemaRegistry |
boolean Enables the managed Schema Registry on the cluster. |
access |
Access policy for external services. |
restApiConfig |
Configuration of REST API. |
diskSizeAutoscaling |
Disk size autoscaling settings. |
kraft |
Configuration and resource allocation for KRaft-controller hosts. |
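A sketch of a request body that scales broker resources and the per-zone broker count; it plugs into the PATCH request shown above. The preset and disk type IDs are example values, and int64 fields are passed as strings:

body = {
    "updateMask": "configSpec.brokersCount,configSpec.kafka.resources",
    "configSpec": {
        "brokersCount": "2",                       # brokers per availability zone
        "kafka": {
            "resources": {
                "resourcePresetId": "s2.medium",   # example preset ID
                "diskSize": "68719476736",         # 64 GiB in bytes
                "diskTypeId": "network-ssd",       # example disk type ID
            }
        },
    },
}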
Kafka
Field |
Description |
resources |
Resources allocated to Kafka brokers. |
kafkaConfig_2_8 |
Kafka broker configuration for version 2.8. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
kafkaConfig_3 |
Kafka broker configuration for version 3.x. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
Resources
Field |
Description |
resourcePresetId |
string ID of the preset for computational resources available to a host (CPU, memory, etc.). |
diskSize |
string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
diskTypeId |
string Type of the storage environment for the host. |
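As a quick check of the diskSize constraint above, a sketch with assumed values (1 GiB segments and 96 partitions on a host):

segment_bytes = 1073741824         # assumed log segment size: 1 GiB
partitions_on_host = 96            # hypothetical number of partitions on the host

# Lower bound: one active plus one closed segment per partition.
min_disk_size = 2 * segment_bytes * partitions_on_host
print(min_disk_size)               # 206158430208 bytes (~192 GiB)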
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field |
Description |
compressionType |
enum (CompressionType) Cluster topics compression type. |
logFlushIntervalMessages |
string (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMessages setting. |
logFlushIntervalMs |
string (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMs setting. |
logFlushSchedulerIntervalMs |
string (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
string (int64) Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionBytes setting. |
logRetentionHours |
string (int64) The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
string (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
string (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionMs setting. |
logSegmentBytes |
string (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segmentBytes setting. |
logPreallocate |
boolean Whether to preallocate the file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socketSendBufferBytes |
string (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
string (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
boolean Enables automatic topic creation on the server. |
numPartitions |
string (int64) Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
string (int64) Default replication factor for topics across the whole cluster. |
messageMaxBytes |
string (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
string (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
string (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum (SaslMechanism) The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Kafka version 3.x broker configuration.
Field |
Description |
compressionType |
enum (CompressionType) Cluster topics compression type. |
logFlushIntervalMessages |
string (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMessages setting. |
logFlushIntervalMs |
string (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMs setting. |
logFlushSchedulerIntervalMs |
string (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
string (int64) Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionBytes setting. |
logRetentionHours |
string (int64) The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
string (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
string (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionMs setting. |
logSegmentBytes |
string (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segmentBytes setting. |
logPreallocate |
boolean Whether to preallocate the file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socketSendBufferBytes |
string (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
string (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
boolean Enables automatic topic creation on the server. |
numPartitions |
string (int64) Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
string (int64) Default replication factor for topics across the whole cluster. |
messageMaxBytes |
string (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
string (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
string (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum (SaslMechanism) The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
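A sketch of a request body that tightens cluster-wide retention via kafkaConfig_3, to be sent with the PATCH request shown earlier; the values are examples, and int64 fields are passed as strings:

body = {
    "updateMask": "configSpec.kafka.kafkaConfig_3.logRetentionMs,"
                  "configSpec.kafka.kafkaConfig_3.logSegmentBytes",
    "configSpec": {
        "kafka": {
            "kafkaConfig_3": {
                "logRetentionMs": "259200000",     # 3 days
                "logSegmentBytes": "536870912",    # 512 MiB
            }
        }
    },
}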
Zookeeper
Field |
Description |
resources |
Resources allocated to ZooKeeper hosts. |
Access
Field |
Description |
dataTransfer |
boolean Allow access for DataTransfer. |
RestAPIConfig
Field |
Description |
enabled |
boolean Whether the REST API is enabled for this cluster. |
DiskSizeAutoscaling
Field |
Description |
plannedUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. Zero value means disabled threshold. |
emergencyUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. Zero value means disabled threshold. |
diskSizeLimit |
string (int64) New storage size (in bytes) that is set when one of the thresholds is achieved. |
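A sketch of a request body that enables disk size autoscaling with the two thresholds described above; the 100 GiB limit is an example value:

body = {
    "updateMask": "configSpec.diskSizeAutoscaling",
    "configSpec": {
        "diskSizeAutoscaling": {
            "plannedUsageThreshold": "70",        # scale during the maintenance window at 70% usage
            "emergencyUsageThreshold": "90",      # scale immediately at 90% usage
            "diskSizeLimit": "107374182400",      # 100 GiB in bytes
        }
    },
}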
KRaft
Field |
Description |
resources |
Resources allocated to KRaft controller hosts. |
MaintenanceWindow
Field |
Description |
anytime |
object Maintenance can be performed at any time. Includes only one of the fields anytime, weeklyMaintenanceWindow. |
weeklyMaintenanceWindow |
Weekly maintenance window settings. Includes only one of the fields anytime, weeklyMaintenanceWindow. |
WeeklyMaintenanceWindow
Field |
Description |
day |
enum (WeekDay) Day of the week. |
hour |
string (int64) Hour of the day in UTC. |
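A sketch of a request body that moves maintenance to a fixed weekly window; the "MON" value for the WeekDay enum is an assumption based on common day abbreviations:

body = {
    "updateMask": "maintenanceWindow",
    "maintenanceWindow": {
        "weeklyMaintenanceWindow": {
            "day": "MON",          # assumed WeekDay enum value
            "hour": "3",           # 03:00 UTC
        }
    },
}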
Response
HTTP Code: 200 - OK
{
"id": "string",
"description": "string",
"createdAt": "string",
"createdBy": "string",
"modifiedAt": "string",
"done": "boolean",
"metadata": {
"clusterId": "string"
},
// Includes only one of the fields `error`, `response`
"error": {
"code": "integer",
"message": "string",
"details": [
"object"
]
},
"response": {
"id": "string",
"folderId": "string",
"createdAt": "string",
"name": "string",
"description": "string",
"labels": "object",
"environment": "string",
"monitoring": [
{
"name": "string",
"description": "string",
"link": "string"
}
],
"config": {
"version": "string",
"kafka": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
},
// Includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
"kafkaConfig_2_8": {
"compressionType": "string",
"logFlushIntervalMessages": "string",
"logFlushIntervalMs": "string",
"logFlushSchedulerIntervalMs": "string",
"logRetentionBytes": "string",
"logRetentionHours": "string",
"logRetentionMinutes": "string",
"logRetentionMs": "string",
"logSegmentBytes": "string",
"logPreallocate": "boolean",
"socketSendBufferBytes": "string",
"socketReceiveBufferBytes": "string",
"autoCreateTopicsEnable": "boolean",
"numPartitions": "string",
"defaultReplicationFactor": "string",
"messageMaxBytes": "string",
"replicaFetchMaxBytes": "string",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "string",
"saslEnabledMechanisms": [
"string"
]
},
"kafkaConfig_3": {
"compressionType": "string",
"logFlushIntervalMessages": "string",
"logFlushIntervalMs": "string",
"logFlushSchedulerIntervalMs": "string",
"logRetentionBytes": "string",
"logRetentionHours": "string",
"logRetentionMinutes": "string",
"logRetentionMs": "string",
"logSegmentBytes": "string",
"logPreallocate": "boolean",
"socketSendBufferBytes": "string",
"socketReceiveBufferBytes": "string",
"autoCreateTopicsEnable": "boolean",
"numPartitions": "string",
"defaultReplicationFactor": "string",
"messageMaxBytes": "string",
"replicaFetchMaxBytes": "string",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "string",
"saslEnabledMechanisms": [
"string"
]
}
// end of the list of possible fields
},
"zookeeper": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
}
},
"zoneId": [
"string"
],
"brokersCount": "string",
"assignPublicIp": "boolean",
"unmanagedTopics": "boolean",
"schemaRegistry": "boolean",
"access": {
"dataTransfer": "boolean"
},
"restApiConfig": {
"enabled": "boolean"
},
"diskSizeAutoscaling": {
"plannedUsageThreshold": "string",
"emergencyUsageThreshold": "string",
"diskSizeLimit": "string"
},
"kraft": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
}
}
},
"networkId": "string",
"health": "string",
"status": "string",
"securityGroupIds": [
"string"
],
"hostGroupIds": [
"string"
],
"deletionProtection": "boolean",
"maintenanceWindow": {
// Includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
"anytime": "object",
"weeklyMaintenanceWindow": {
"day": "string",
"hour": "string"
}
// end of the list of possible fields
},
"plannedOperation": {
"info": "string",
"delayedUntil": "string"
}
}
// end of the list of possible fields
}
An Operation resource. For more information, see Operation.
Field |
Description |
id |
string ID of the operation. |
description |
string Description of the operation. 0-256 characters long. |
createdAt |
string (date-time) Creation timestamp. String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. |
createdBy |
string ID of the user or service account who initiated the operation. |
modifiedAt |
string (date-time) The time when the Operation resource was last modified. String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. |
done |
boolean If the value is false, the operation is still in progress. If true, the operation is completed, and either error or response is available. |
metadata |
Service-specific metadata associated with the operation. |
error |
The error result of the operation in case of failure or cancellation. Includes only one of the fields error, response (the operation result). |
response |
The normal response of the operation in case of success. Includes only one of the fields error, response (the operation result). |
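Since the update is asynchronous, a typical follow-up is to poll the returned Operation until done is true. A sketch, assuming the generic Operation endpoint at operation.api.cloud.yandex.net, an IAM token in IAM_TOKEN, and a hypothetical operation ID taken from the PATCH response:

import os
import time
import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]
operation_id = "mdbexampleoperation"   # hypothetical ID from the update response

# Poll the operation until it finishes, then inspect error or response.
while True:
    op = requests.get(
        f"https://operation.api.cloud.yandex.net/operations/{operation_id}",
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    ).json()
    if op.get("done"):
        break
    time.sleep(5)

if "error" in op:
    print("update failed:", op["error"]["message"])
else:
    print("cluster updated:", op["response"]["id"])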
UpdateClusterMetadata
Field |
Description |
clusterId |
string ID of the Apache Kafka® cluster that is being updated. |
Status
The error result of the operation in case of failure or cancellation.
Field |
Description |
code |
integer (int32) Error code. An enum value of google.rpc.Code |
message |
string An error message. |
details[] |
object A list of messages that carry the error details. |
Cluster
An Apache Kafka® cluster resource.
For more information, see the Concepts section of the documentation.
Field |
Description |
id |
string ID of the Apache Kafka® cluster. |
folderId |
string ID of the folder that the Apache Kafka® cluster belongs to. |
createdAt |
string (date-time) Creation timestamp. String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. |
name |
string Name of the Apache Kafka® cluster. |
description |
string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels |
object (map<string, string>) Custom labels for the Apache Kafka® cluster as key:value pairs. |
environment |
enum (Environment) Deployment environment of the Apache Kafka® cluster. |
monitoring[] |
Description of monitoring systems relevant to the Apache Kafka® cluster. |
config |
Configuration of the Apache Kafka® cluster. |
networkId |
string ID of the network that the cluster belongs to. |
health |
enum (Health) Aggregated cluster health. |
status |
enum (Status) Current state of the cluster. |
securityGroupIds[] |
string User security groups |
hostGroupIds[] |
string Host groups hosting VMs of the cluster. |
deletionProtection |
boolean Deletion Protection inhibits deletion of the cluster |
maintenanceWindow |
Window of maintenance operations. |
plannedOperation |
Scheduled maintenance operation. |
Monitoring
Metadata of monitoring system.
Field |
Description |
name |
string Name of the monitoring system. |
description |
string Description of the monitoring system. |
link |
string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field |
Description |
version |
string Version of Apache Kafka® used in the cluster. Possible values: |
kafka |
Configuration and resource allocation for Kafka brokers. |
zookeeper |
Configuration and resource allocation for ZooKeeper hosts. |
zoneId[] |
string IDs of availability zones where Kafka brokers reside. |
brokersCount |
string (int64) The number of Kafka brokers deployed in each availability zone. |
assignPublicIp |
boolean The flag that defines whether a public IP address is assigned to the cluster. |
unmanagedTopics |
boolean Allows topics to be managed via the Kafka Admin API. |
schemaRegistry |
boolean Enables the managed Schema Registry on the cluster. |
access |
Access policy for external services. |
restApiConfig |
Configuration of REST API. |
diskSizeAutoscaling |
Disk size autoscaling settings. |
kraft |
Configuration and resource allocation for KRaft-controller hosts. |
Kafka
Field |
Description |
resources |
Resources allocated to Kafka brokers. |
kafkaConfig_2_8 |
Kafka broker configuration for version 2.8. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
kafkaConfig_3 |
Kafka broker configuration for version 3.x. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
Resources
Field |
Description |
resourcePresetId |
string ID of the preset for computational resources available to a host (CPU, memory, etc.). |
diskSize |
string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
diskTypeId |
string Type of the storage environment for the host. |
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field |
Description |
compressionType |
enum (CompressionType) Cluster topics compression type. |
logFlushIntervalMessages |
string (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMessages setting. |
logFlushIntervalMs |
string (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMs setting. |
logFlushSchedulerIntervalMs |
string (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
string (int64) Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionBytes setting. |
logRetentionHours |
string (int64) The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
string (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
string (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionMs setting. |
logSegmentBytes |
string (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segmentBytes setting. |
logPreallocate |
boolean Whether to preallocate the file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socketSendBufferBytes |
string (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
string (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
boolean Enables automatic topic creation on the server. |
numPartitions |
string (int64) Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
string (int64) Default replication factor for topics across the whole cluster. |
messageMaxBytes |
string (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
string (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
string (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum (SaslMechanism) The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Kafka version 3.x broker configuration.
Field |
Description |
compressionType |
enum (CompressionType) Cluster topics compression type. |
logFlushIntervalMessages |
string (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMessages setting. |
logFlushIntervalMs |
string (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMs setting. |
logFlushSchedulerIntervalMs |
string (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
string (int64) Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionBytes setting. |
logRetentionHours |
string (int64) The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
string (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
string (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionMs setting. |
logSegmentBytes |
string (int64) The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segmentBytes setting. |
logPreallocate |
boolean Whether to preallocate the file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socketSendBufferBytes |
string (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
string (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
boolean Enables automatic topic creation on the server. |
numPartitions |
string (int64) Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
string (int64) Default replication factor for topics across the whole cluster. |
messageMaxBytes |
string (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
string (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
string (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum (SaslMechanism) The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
Zookeeper
Field |
Description |
resources |
Resources allocated to ZooKeeper hosts. |
Access
Field |
Description |
dataTransfer |
boolean Allow access for DataTransfer. |
RestAPIConfig
Field |
Description |
enabled |
boolean Whether the REST API is enabled for this cluster. |
DiskSizeAutoscaling
Field |
Description |
plannedUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. Zero value means disabled threshold. |
emergencyUsageThreshold |
string (int64) Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. Zero value means disabled threshold. |
diskSizeLimit |
string (int64) New storage size (in bytes) that is set when one of the thresholds is achieved. |
KRaft
Field |
Description |
resources |
Resources allocated to KRaft controller hosts. |
MaintenanceWindow
Field |
Description |
anytime |
object Maintenance can be performed at any time. Includes only one of the fields anytime, weeklyMaintenanceWindow. |
weeklyMaintenanceWindow |
Weekly maintenance window settings. Includes only one of the fields anytime, weeklyMaintenanceWindow. |
WeeklyMaintenanceWindow
Field |
Description |
day |
enum (WeekDay) Day of the week. |
hour |
string (int64) Hour of the day in UTC. |
MaintenanceOperation
Field |
Description |
info |
string |
delayedUntil |
string (date-time) String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. |