Managed Service for Apache Kafka® API, gRPC: ClusterService.Update
- gRPC request
- UpdateClusterRequest
- ConfigSpec
- Kafka
- Resources
- KafkaConfig2_8
- KafkaConfig3
- Zookeeper
- Access
- RestAPIConfig
- DiskSizeAutoscaling
- KRaft
- MaintenanceWindow
- AnytimeMaintenanceWindow
- WeeklyMaintenanceWindow
- operation.Operation
- UpdateClusterMetadata
- Cluster
- Monitoring
- ConfigSpec
- Kafka
- Resources
- KafkaConfig2_8
- KafkaConfig3
- Zookeeper
- Access
- RestAPIConfig
- DiskSizeAutoscaling
- KRaft
- MaintenanceWindow
- AnytimeMaintenanceWindow
- WeeklyMaintenanceWindow
- MaintenanceOperation
Updates the specified Apache Kafka® cluster.
gRPC request
rpc Update (UpdateClusterRequest) returns (operation.Operation)
UpdateClusterRequest
{
  "cluster_id": "string",
  "update_mask": "google.protobuf.FieldMask",
  "description": "string",
  "labels": "map<string, string>",
  "config_spec": {
    "version": "string",
    "kafka": {
      "resources": {
        "resource_preset_id": "string",
        "disk_size": "int64",
        "disk_type_id": "string"
      },
      // Includes only one of the fields `kafka_config_2_8`, `kafka_config_3`
      "kafka_config_2_8": {
        "compression_type": "CompressionType",
        "log_flush_interval_messages": "google.protobuf.Int64Value",
        "log_flush_interval_ms": "google.protobuf.Int64Value",
        "log_flush_scheduler_interval_ms": "google.protobuf.Int64Value",
        "log_retention_bytes": "google.protobuf.Int64Value",
        "log_retention_hours": "google.protobuf.Int64Value",
        "log_retention_minutes": "google.protobuf.Int64Value",
        "log_retention_ms": "google.protobuf.Int64Value",
        "log_segment_bytes": "google.protobuf.Int64Value",
        "log_preallocate": "google.protobuf.BoolValue",
        "socket_send_buffer_bytes": "google.protobuf.Int64Value",
        "socket_receive_buffer_bytes": "google.protobuf.Int64Value",
        "auto_create_topics_enable": "google.protobuf.BoolValue",
        "num_partitions": "google.protobuf.Int64Value",
        "default_replication_factor": "google.protobuf.Int64Value",
        "message_max_bytes": "google.protobuf.Int64Value",
        "replica_fetch_max_bytes": "google.protobuf.Int64Value",
        "ssl_cipher_suites": [
          "string"
        ],
        "offsets_retention_minutes": "google.protobuf.Int64Value",
        "sasl_enabled_mechanisms": [
          "SaslMechanism"
        ]
      },
      "kafka_config_3": {
        "compression_type": "CompressionType",
        "log_flush_interval_messages": "google.protobuf.Int64Value",
        "log_flush_interval_ms": "google.protobuf.Int64Value",
        "log_flush_scheduler_interval_ms": "google.protobuf.Int64Value",
        "log_retention_bytes": "google.protobuf.Int64Value",
        "log_retention_hours": "google.protobuf.Int64Value",
        "log_retention_minutes": "google.protobuf.Int64Value",
        "log_retention_ms": "google.protobuf.Int64Value",
        "log_segment_bytes": "google.protobuf.Int64Value",
        "log_preallocate": "google.protobuf.BoolValue",
        "socket_send_buffer_bytes": "google.protobuf.Int64Value",
        "socket_receive_buffer_bytes": "google.protobuf.Int64Value",
        "auto_create_topics_enable": "google.protobuf.BoolValue",
        "num_partitions": "google.protobuf.Int64Value",
        "default_replication_factor": "google.protobuf.Int64Value",
        "message_max_bytes": "google.protobuf.Int64Value",
        "replica_fetch_max_bytes": "google.protobuf.Int64Value",
        "ssl_cipher_suites": [
          "string"
        ],
        "offsets_retention_minutes": "google.protobuf.Int64Value",
        "sasl_enabled_mechanisms": [
          "SaslMechanism"
        ]
      }
      // end of the list of possible fields
    },
    "zookeeper": {
      "resources": {
        "resource_preset_id": "string",
        "disk_size": "int64",
        "disk_type_id": "string"
      }
    },
    "zone_id": [
      "string"
    ],
    "brokers_count": "google.protobuf.Int64Value",
    "assign_public_ip": "bool",
    "unmanaged_topics": "bool",
    "schema_registry": "bool",
    "access": {
      "data_transfer": "bool"
    },
    "rest_api_config": {
      "enabled": "bool"
    },
    "disk_size_autoscaling": {
      "planned_usage_threshold": "int64",
      "emergency_usage_threshold": "int64",
      "disk_size_limit": "int64"
    },
    "kraft": {
      "resources": {
        "resource_preset_id": "string",
        "disk_size": "int64",
        "disk_type_id": "string"
      }
    }
  },
  "name": "string",
  "security_group_ids": [
    "string"
  ],
  "deletion_protection": "bool",
  "maintenance_window": {
    // Includes only one of the fields `anytime`, `weekly_maintenance_window`
    "anytime": "AnytimeMaintenanceWindow",
    "weekly_maintenance_window": {
      "day": "WeekDay",
      "hour": "int64"
    }
    // end of the list of possible fields
  },
  "network_id": "string",
  "subnet_ids": [
    "string"
  ]
}
Field | Description
cluster_id | string. Required field. ID of the Apache Kafka® cluster to update. To get the Apache Kafka® cluster ID, make a ClusterService.List request.
update_mask | google.protobuf.FieldMask. Field mask that specifies which settings of the Apache Kafka® cluster should be updated.
description | string. New description of the Apache Kafka® cluster.
labels | object (map<string, string>). Custom labels for the Apache Kafka® cluster as key:value pairs. For example, "project": "mvp" or "source": "dictionary". The new set of labels will completely replace the old ones.
config_spec | New configuration and resources for hosts in the Apache Kafka® cluster. Use update_mask to prevent reverting all cluster settings that are not listed in config_spec to their default values.
name | string. New name for the Apache Kafka® cluster.
security_group_ids[] | string. User security groups.
deletion_protection | bool. Deletion Protection inhibits deletion of the cluster.
maintenance_window | New maintenance window settings for the cluster.
network_id | string. ID of the network to move the cluster to.
subnet_ids[] | string. IDs of the subnets where the hosts are located or where a new host is being created.
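A minimal sketch of calling Update from Python, assuming the yandexcloud SDK and its generated yandex.cloud.mdb.kafka.v1 stubs; the module paths, the sdk.client helper, the token, and the cluster ID below are assumptions for illustration, not part of this reference:

# pip install yandexcloud (assumed); all identifiers below are placeholders.
import yandexcloud
from google.protobuf.field_mask_pb2 import FieldMask
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2 import UpdateClusterRequest
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2_grpc import ClusterServiceStub

sdk = yandexcloud.SDK(token="<OAuth-or-IAM-token>")
kafka = sdk.client(ClusterServiceStub)

# List every path you actually change in update_mask.
request = UpdateClusterRequest(
    cluster_id="<cluster-id>",
    update_mask=FieldMask(paths=[
        "description",
        "config_spec.kafka.resources.disk_size",
    ]),
    description="Updated production cluster",
)
request.config_spec.kafka.resources.disk_size = 64 * 2**30  # 64 GiB

operation = kafka.Update(request)  # returns a long-running operation.Operation
print(operation.id, operation.done)

Only the paths listed in update_mask are applied; any setting omitted from the mask keeps its current value.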
ConfigSpec
Field | Description
version | string. Version of Apache Kafka® used in the cluster. Possible values:
kafka | Configuration and resource allocation for Kafka brokers.
zookeeper | Configuration and resource allocation for ZooKeeper hosts.
zone_id[] | string. IDs of availability zones where Kafka brokers reside.
brokers_count | The number of Kafka brokers deployed in each availability zone.
assign_public_ip | bool. The flag that defines whether a public IP address is assigned to the cluster.
unmanaged_topics | bool. Allows managing topics via the Admin API.
schema_registry | bool. Enables managed Schema Registry on the cluster.
access | Access policy for external services.
rest_api_config | Configuration of the REST API.
disk_size_autoscaling | Disk size autoscaling settings.
kraft | Configuration and resource allocation for KRaft controller hosts.
Kafka
Field | Description
resources | Resources allocated to Kafka brokers.
kafka_config_2_8 | Kafka broker configuration. Includes only one of the fields kafka_config_2_8, kafka_config_3.
kafka_config_3 | Kafka broker configuration. Includes only one of the fields kafka_config_2_8, kafka_config_3.
Resources
Field | Description
resource_preset_id | string. ID of the preset for computational resources available to a host (CPU, memory, etc.).
disk_size | int64. Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted.
disk_type_id | string. Type of the storage environment for the host.
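A quick illustration of the disk_size rule above: each partition needs room for one active and one closed log segment, so the minimum usable volume scales with segment size and partition count. The numbers below are hypothetical.

def min_disk_size(segment_bytes: int, partitions: int) -> int:
    # Smallest disk_size (in bytes) satisfying the documented constraint:
    # disk_size > 2 * partition segment size * partitions count.
    return 2 * segment_bytes * partitions

# log_segment_bytes defaults to 1 GiB in Kafka; 100 partitions is an example.
required = min_disk_size(segment_bytes=1024 ** 3, partitions=100)
print(f"disk_size must exceed {required} bytes (~{required / 1024 ** 3:.0f} GiB)")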
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field | Description
compression_type | enum CompressionType. Cluster topics compression type.
log_flush_interval_messages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting.
log_flush_interval_ms | The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting.
log_flush_scheduler_interval_ms | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
log_retention_bytes | Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting.
log_retention_hours | The number of hours to keep a log segment file before deleting it.
log_retention_minutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used.
log_retention_ms | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting.
log_segment_bytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting.
log_preallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.
socket_send_buffer_bytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socket_receive_buffer_bytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
auto_create_topics_enable | Enables automatic topic creation on the server.
num_partitions | Default number of partitions per topic on the whole cluster.
default_replication_factor | Default replication factor of the topic on the whole cluster.
message_max_bytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replica_fetch_max_bytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
ssl_cipher_suites[] | string. A list of cipher suites.
offsets_retention_minutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
sasl_enabled_mechanisms[] | enum SaslMechanism. The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
KafkaConfig3
Kafka version 3.x broker configuration.
Field | Description
compression_type | enum CompressionType. Cluster topics compression type.
log_flush_interval_messages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting.
log_flush_interval_ms | The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting.
log_flush_scheduler_interval_ms | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
log_retention_bytes | Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting.
log_retention_hours | The number of hours to keep a log segment file before deleting it.
log_retention_minutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used.
log_retention_ms | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting.
log_segment_bytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting.
log_preallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.
socket_send_buffer_bytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socket_receive_buffer_bytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
auto_create_topics_enable | Enables automatic topic creation on the server.
num_partitions | Default number of partitions per topic on the whole cluster.
default_replication_factor | Default replication factor of the topic on the whole cluster.
message_max_bytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replica_fetch_max_bytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
ssl_cipher_suites[] | string. A list of cipher suites.
offsets_retention_minutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
sasl_enabled_mechanisms[] | enum SaslMechanism. The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
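Most numeric settings in these configs are google.protobuf wrapper types rather than plain scalars, so in generated stubs they are set through their value field. A hedged Python sketch, reusing the request object from the Update example above (the field paths follow the request schema; everything else is an assumption):

# Set two wrapped int64 broker settings and add their paths to the mask.
cfg = request.config_spec.kafka.kafka_config_3
cfg.message_max_bytes.value = 4 * 1024 * 1024          # largest allowed record batch
cfg.replica_fetch_max_bytes.value = 4 * 1024 * 1024    # keep the fetch size in step with it
request.update_mask.paths.extend([
    "config_spec.kafka.kafka_config_3.message_max_bytes",
    "config_spec.kafka.kafka_config_3.replica_fetch_max_bytes",
])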
Zookeeper
Field | Description
resources | Resources allocated to ZooKeeper hosts.
Access
Field | Description
data_transfer | bool. Allow access for DataTransfer.
RestAPIConfig
Field | Description
enabled | bool. Whether the REST API is enabled for this cluster.
DiskSizeAutoscaling
Field | Description
planned_usage_threshold | int64. Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A zero value disables the threshold.
emergency_usage_threshold | int64. Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A zero value disables the threshold.
disk_size_limit | int64. New storage size (in bytes) that is set when one of the thresholds is reached.
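The two thresholds differ only in urgency. The sketch below restates the semantics from this table as plain Python; it is an illustration of the documented behavior, not SDK code:

def autoscaling_action(usage_percent: float, planned: int, emergency: int) -> str:
    # A zero threshold is disabled; emergency scaling takes precedence.
    if emergency and usage_percent >= emergency:
        return "scale immediately, up to disk_size_limit"
    if planned and usage_percent >= planned:
        return "scale during the next maintenance window"
    return "no scaling"

print(autoscaling_action(usage_percent=92, planned=80, emergency=90))  # immediate
print(autoscaling_action(usage_percent=85, planned=80, emergency=90))  # planned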
KRaft
Field | Description
resources | Resources allocated to KRaft controller hosts.
MaintenanceWindow
Field | Description
anytime | Includes only one of the fields anytime, weekly_maintenance_window.
weekly_maintenance_window | Includes only one of the fields anytime, weekly_maintenance_window.
AnytimeMaintenanceWindow
Field | Description
Empty | The message contains no fields.
WeeklyMaintenanceWindow
Field | Description
day | enum WeekDay. Day of the week for the maintenance window.
hour | int64. Hour of the day in UTC.
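A hedged sketch of switching the cluster to a weekly window, reusing the same request object as above. The numeric value used for the day is an assumption about the WeekDay enum layout, so check the generated enum before relying on it:

# Weekly window: Mondays at 03:00 UTC (day=1 assumes WeekDay.MON == 1).
request.maintenance_window.weekly_maintenance_window.day = 1
request.maintenance_window.weekly_maintenance_window.hour = 3
request.update_mask.paths.append("maintenance_window")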
operation.Operation
{
  "id": "string",
  "description": "string",
  "created_at": "google.protobuf.Timestamp",
  "created_by": "string",
  "modified_at": "google.protobuf.Timestamp",
  "done": "bool",
  "metadata": {
    "cluster_id": "string"
  },
  // Includes only one of the fields `error`, `response`
  "error": "google.rpc.Status",
  "response": {
    "id": "string",
    "folder_id": "string",
    "created_at": "google.protobuf.Timestamp",
    "name": "string",
    "description": "string",
    "labels": "map<string, string>",
    "environment": "Environment",
    "monitoring": [
      {
        "name": "string",
        "description": "string",
        "link": "string"
      }
    ],
    "config": {
      "version": "string",
      "kafka": {
        "resources": {
          "resource_preset_id": "string",
          "disk_size": "int64",
          "disk_type_id": "string"
        },
        // Includes only one of the fields `kafka_config_2_8`, `kafka_config_3`
        "kafka_config_2_8": {
          "compression_type": "CompressionType",
          "log_flush_interval_messages": "google.protobuf.Int64Value",
          "log_flush_interval_ms": "google.protobuf.Int64Value",
          "log_flush_scheduler_interval_ms": "google.protobuf.Int64Value",
          "log_retention_bytes": "google.protobuf.Int64Value",
          "log_retention_hours": "google.protobuf.Int64Value",
          "log_retention_minutes": "google.protobuf.Int64Value",
          "log_retention_ms": "google.protobuf.Int64Value",
          "log_segment_bytes": "google.protobuf.Int64Value",
          "log_preallocate": "google.protobuf.BoolValue",
          "socket_send_buffer_bytes": "google.protobuf.Int64Value",
          "socket_receive_buffer_bytes": "google.protobuf.Int64Value",
          "auto_create_topics_enable": "google.protobuf.BoolValue",
          "num_partitions": "google.protobuf.Int64Value",
          "default_replication_factor": "google.protobuf.Int64Value",
          "message_max_bytes": "google.protobuf.Int64Value",
          "replica_fetch_max_bytes": "google.protobuf.Int64Value",
          "ssl_cipher_suites": [
            "string"
          ],
          "offsets_retention_minutes": "google.protobuf.Int64Value",
          "sasl_enabled_mechanisms": [
            "SaslMechanism"
          ]
        },
        "kafka_config_3": {
          "compression_type": "CompressionType",
          "log_flush_interval_messages": "google.protobuf.Int64Value",
          "log_flush_interval_ms": "google.protobuf.Int64Value",
          "log_flush_scheduler_interval_ms": "google.protobuf.Int64Value",
          "log_retention_bytes": "google.protobuf.Int64Value",
          "log_retention_hours": "google.protobuf.Int64Value",
          "log_retention_minutes": "google.protobuf.Int64Value",
          "log_retention_ms": "google.protobuf.Int64Value",
          "log_segment_bytes": "google.protobuf.Int64Value",
          "log_preallocate": "google.protobuf.BoolValue",
          "socket_send_buffer_bytes": "google.protobuf.Int64Value",
          "socket_receive_buffer_bytes": "google.protobuf.Int64Value",
          "auto_create_topics_enable": "google.protobuf.BoolValue",
          "num_partitions": "google.protobuf.Int64Value",
          "default_replication_factor": "google.protobuf.Int64Value",
          "message_max_bytes": "google.protobuf.Int64Value",
          "replica_fetch_max_bytes": "google.protobuf.Int64Value",
          "ssl_cipher_suites": [
            "string"
          ],
          "offsets_retention_minutes": "google.protobuf.Int64Value",
          "sasl_enabled_mechanisms": [
            "SaslMechanism"
          ]
        }
        // end of the list of possible fields
      },
      "zookeeper": {
        "resources": {
          "resource_preset_id": "string",
          "disk_size": "int64",
          "disk_type_id": "string"
        }
      },
      "zone_id": [
        "string"
      ],
      "brokers_count": "google.protobuf.Int64Value",
      "assign_public_ip": "bool",
      "unmanaged_topics": "bool",
      "schema_registry": "bool",
      "access": {
        "data_transfer": "bool"
      },
      "rest_api_config": {
        "enabled": "bool"
      },
      "disk_size_autoscaling": {
        "planned_usage_threshold": "int64",
        "emergency_usage_threshold": "int64",
        "disk_size_limit": "int64"
      },
      "kraft": {
        "resources": {
          "resource_preset_id": "string",
          "disk_size": "int64",
          "disk_type_id": "string"
        }
      }
    },
    "network_id": "string",
    "health": "Health",
    "status": "Status",
    "security_group_ids": [
      "string"
    ],
    "host_group_ids": [
      "string"
    ],
    "deletion_protection": "bool",
    "maintenance_window": {
      // Includes only one of the fields `anytime`, `weekly_maintenance_window`
      "anytime": "AnytimeMaintenanceWindow",
      "weekly_maintenance_window": {
        "day": "WeekDay",
        "hour": "int64"
      }
      // end of the list of possible fields
    },
    "planned_operation": {
      "info": "string",
      "delayed_until": "google.protobuf.Timestamp"
    }
  }
  // end of the list of possible fields
}
An Operation resource. For more information, see Operation.
Field | Description
id | string. ID of the operation.
description | string. Description of the operation. 0-256 characters long.
created_at | Creation timestamp.
created_by | string. ID of the user or service account who initiated the operation.
modified_at | The time when the Operation resource was last modified.
done | bool. If the value is false, the operation is still in progress. If true, the operation is completed, and either error or response is available.
metadata | Service-specific metadata associated with the operation.
error | The error result of the operation in case of failure or cancellation. Includes only one of the fields error, response (the operation result).
response | The normal response of the operation in case of success. Includes only one of the fields error, response (the operation result).
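Update is a long-running operation, so the usual pattern is to poll it until done is true and then read the Cluster from response. A hedged continuation of the earlier SDK sketch; wait_operation_and_get_result and the imported message paths are assumptions about the yandexcloud SDK, not part of this reference:

from yandex.cloud.mdb.kafka.v1.cluster_pb2 import Cluster
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2 import UpdateClusterMetadata

# Blocks until done is true, then unpacks the `response` and `metadata` payloads.
result = sdk.wait_operation_and_get_result(
    operation,
    response_type=Cluster,
    meta_type=UpdateClusterMetadata,
)
print(result.response.id, result.response.status)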
UpdateClusterMetadata
Field | Description
cluster_id | string. ID of the Apache Kafka® cluster that is being updated.
Cluster
An Apache Kafka® cluster resource.
For more information, see the Concepts section of the documentation.
Field | Description
id | string. ID of the Apache Kafka® cluster.
folder_id | string. ID of the folder that the Apache Kafka® cluster belongs to.
created_at | Creation timestamp.
name | string. Name of the Apache Kafka® cluster.
description | string. Description of the Apache Kafka® cluster. 0-256 characters long.
labels | object (map<string, string>). Custom labels for the Apache Kafka® cluster as key:value pairs.
environment | enum Environment. Deployment environment of the Apache Kafka® cluster.
monitoring[] | Description of monitoring systems relevant to the Apache Kafka® cluster.
config | Configuration of the Apache Kafka® cluster.
network_id | string. ID of the network that the cluster belongs to.
health | enum Health. Aggregated cluster health.
status | enum Status. Current state of the cluster.
security_group_ids[] | string. User security groups.
host_group_ids[] | string. Host groups hosting VMs of the cluster.
deletion_protection | bool. Deletion Protection inhibits deletion of the cluster.
maintenance_window | Window of maintenance operations.
planned_operation | Scheduled maintenance operation.
Monitoring
Metadata of a monitoring system.
Field | Description
name | string. Name of the monitoring system.
description | string. Description of the monitoring system.
link | string. Link to the monitoring system charts for the Apache Kafka® cluster.
ConfigSpec
Field | Description
version | string. Version of Apache Kafka® used in the cluster. Possible values:
kafka | Configuration and resource allocation for Kafka brokers.
zookeeper | Configuration and resource allocation for ZooKeeper hosts.
zone_id[] | string. IDs of availability zones where Kafka brokers reside.
brokers_count | The number of Kafka brokers deployed in each availability zone.
assign_public_ip | bool. The flag that defines whether a public IP address is assigned to the cluster.
unmanaged_topics | bool. Allows managing topics via the Admin API.
schema_registry | bool. Enables managed Schema Registry on the cluster.
access | Access policy for external services.
rest_api_config | Configuration of the REST API.
disk_size_autoscaling | Disk size autoscaling settings.
kraft | Configuration and resource allocation for KRaft controller hosts.
Kafka
Field | Description
resources | Resources allocated to Kafka brokers.
kafka_config_2_8 | Kafka broker configuration. Includes only one of the fields kafka_config_2_8, kafka_config_3.
kafka_config_3 | Kafka broker configuration. Includes only one of the fields kafka_config_2_8, kafka_config_3.
Resources
Field | Description
resource_preset_id | string. ID of the preset for computational resources available to a host (CPU, memory, etc.).
disk_size | int64. Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted.
disk_type_id | string. Type of the storage environment for the host.
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field | Description
compression_type | enum CompressionType. Cluster topics compression type.
log_flush_interval_messages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting.
log_flush_interval_ms | The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting.
log_flush_scheduler_interval_ms | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
log_retention_bytes | Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting.
log_retention_hours | The number of hours to keep a log segment file before deleting it.
log_retention_minutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used.
log_retention_ms | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting.
log_segment_bytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting.
log_preallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.
socket_send_buffer_bytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socket_receive_buffer_bytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
auto_create_topics_enable | Enables automatic topic creation on the server.
num_partitions | Default number of partitions per topic on the whole cluster.
default_replication_factor | Default replication factor of the topic on the whole cluster.
message_max_bytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replica_fetch_max_bytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
ssl_cipher_suites[] | string. A list of cipher suites.
offsets_retention_minutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
sasl_enabled_mechanisms[] | enum SaslMechanism. The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
KafkaConfig3
Kafka version 3.x broker configuration.
Field | Description
compression_type | enum CompressionType. Cluster topics compression type.
log_flush_interval_messages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting.
log_flush_interval_ms | The maximum time (in milliseconds) that a message in any topic is kept in memory before it is flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting.
log_flush_scheduler_interval_ms | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
log_retention_bytes | Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy is in effect. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting.
log_retention_hours | The number of hours to keep a log segment file before deleting it.
log_retention_minutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used.
log_retention_ms | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting.
log_segment_bytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting.
log_preallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.
socket_send_buffer_bytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socket_receive_buffer_bytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
auto_create_topics_enable | Enables automatic topic creation on the server.
num_partitions | Default number of partitions per topic on the whole cluster.
default_replication_factor | Default replication factor of the topic on the whole cluster.
message_max_bytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replica_fetch_max_bytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
ssl_cipher_suites[] | string. A list of cipher suites.
offsets_retention_minutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
sasl_enabled_mechanisms[] | enum SaslMechanism. The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
Zookeeper
Field | Description
resources | Resources allocated to ZooKeeper hosts.
Access
Field | Description
data_transfer | bool. Allow access for DataTransfer.
RestAPIConfig
Field | Description
enabled | bool. Whether the REST API is enabled for this cluster.
DiskSizeAutoscaling
Field | Description
planned_usage_threshold | int64. Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A zero value disables the threshold.
emergency_usage_threshold | int64. Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A zero value disables the threshold.
disk_size_limit | int64. New storage size (in bytes) that is set when one of the thresholds is reached.
KRaft
Field | Description
resources | Resources allocated to KRaft controller hosts.
MaintenanceWindow
Field | Description
anytime | Includes only one of the fields anytime, weekly_maintenance_window.
weekly_maintenance_window | Includes only one of the fields anytime, weekly_maintenance_window.
AnytimeMaintenanceWindow
Field | Description
Empty | The message contains no fields.
WeeklyMaintenanceWindow
Field | Description
day | enum WeekDay. Day of the week for the maintenance window.
hour | int64. Hour of the day in UTC.
MaintenanceOperation
Field | Description
info | string. Information about the planned maintenance operation.
delayed_until | google.protobuf.Timestamp. Time until which the maintenance operation is delayed.