Managed Service for Apache Kafka® API, gRPC: ClusterService.RescheduleMaintenance
Reschedule planned maintenance operation.
gRPC request
rpc RescheduleMaintenance (RescheduleMaintenanceRequest) returns (operation.Operation)
RescheduleMaintenanceRequest
{
"clusterId": "string",
"rescheduleType": "RescheduleType",
"delayedUntil": "google.protobuf.Timestamp"
}
Field |
Description |
clusterId |
string Required field. ID of the Kafka cluster to reschedule the maintenance operation for. |
rescheduleType |
enum RescheduleType Required field. The type of reschedule request. |
delayedUntil |
The time until which this maintenance operation should be delayed. The value must be no more than two weeks after the time the maintenance operation was originally scheduled for. The value may also point to a moment in the past if the IMMEDIATE reschedule type is chosen. |
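For orientation, here is a minimal Go sketch of a call to this method. It assumes generated gRPC stubs are available under a placeholder import path, that the endpoint and the SPECIFIC_TIME constant are named as shown, and it omits the authentication metadata a real call would need; it is a sketch under those assumptions, not the published SDK API.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/protobuf/types/known/timestamppb"

	kafka "example.com/generated/yandex/cloud/mdb/kafka/v1" // assumed stub package path
)

func main() {
	// Assumed public API endpoint; a real call also needs an IAM token in the
	// request metadata, which is omitted here.
	conn, err := grpc.Dial("mdb.api.cloud.yandex.net:443",
		grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := kafka.NewClusterServiceClient(conn)

	// SPECIFIC_TIME requires delayedUntil to be set; with IMMEDIATE the
	// timestamp may even point to a past moment.
	op, err := client.RescheduleMaintenance(context.Background(),
		&kafka.RescheduleMaintenanceRequest{
			ClusterId:      "<cluster ID>",
			RescheduleType: kafka.RescheduleMaintenanceRequest_SPECIFIC_TIME, // assumed constant name
			DelayedUntil:   timestamppb.New(time.Now().Add(72 * time.Hour)),
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("reschedule operation started: %s", op.GetId())
}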
operation.Operation
{
"id": "string",
"description": "string",
"createdAt": "google.protobuf.Timestamp",
"createdBy": "string",
"modifiedAt": "google.protobuf.Timestamp",
"done": "bool",
"metadata": {
"clusterId": "string",
"delayedUntil": "google.protobuf.Timestamp"
},
// Includes only one of the fields `error`, `response`
"error": "google.rpc.Status",
"response": {
"id": "string",
"folderId": "string",
"createdAt": "google.protobuf.Timestamp",
"name": "string",
"description": "string",
"labels": "string",
"environment": "Environment",
"monitoring": [
{
"name": "string",
"description": "string",
"link": "string"
}
],
"config": {
"version": "string",
"kafka": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
},
// Includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
"kafkaConfig_2_8": {
"compressionType": "CompressionType",
"logFlushIntervalMessages": "google.protobuf.Int64Value",
"logFlushIntervalMs": "google.protobuf.Int64Value",
"logFlushSchedulerIntervalMs": "google.protobuf.Int64Value",
"logRetentionBytes": "google.protobuf.Int64Value",
"logRetentionHours": "google.protobuf.Int64Value",
"logRetentionMinutes": "google.protobuf.Int64Value",
"logRetentionMs": "google.protobuf.Int64Value",
"logSegmentBytes": "google.protobuf.Int64Value",
"logPreallocate": "google.protobuf.BoolValue",
"socketSendBufferBytes": "google.protobuf.Int64Value",
"socketReceiveBufferBytes": "google.protobuf.Int64Value",
"autoCreateTopicsEnable": "google.protobuf.BoolValue",
"numPartitions": "google.protobuf.Int64Value",
"defaultReplicationFactor": "google.protobuf.Int64Value",
"messageMaxBytes": "google.protobuf.Int64Value",
"replicaFetchMaxBytes": "google.protobuf.Int64Value",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "google.protobuf.Int64Value",
"saslEnabledMechanisms": [
"SaslMechanism"
]
},
"kafkaConfig_3": {
"compressionType": "CompressionType",
"logFlushIntervalMessages": "google.protobuf.Int64Value",
"logFlushIntervalMs": "google.protobuf.Int64Value",
"logFlushSchedulerIntervalMs": "google.protobuf.Int64Value",
"logRetentionBytes": "google.protobuf.Int64Value",
"logRetentionHours": "google.protobuf.Int64Value",
"logRetentionMinutes": "google.protobuf.Int64Value",
"logRetentionMs": "google.protobuf.Int64Value",
"logSegmentBytes": "google.protobuf.Int64Value",
"logPreallocate": "google.protobuf.BoolValue",
"socketSendBufferBytes": "google.protobuf.Int64Value",
"socketReceiveBufferBytes": "google.protobuf.Int64Value",
"autoCreateTopicsEnable": "google.protobuf.BoolValue",
"numPartitions": "google.protobuf.Int64Value",
"defaultReplicationFactor": "google.protobuf.Int64Value",
"messageMaxBytes": "google.protobuf.Int64Value",
"replicaFetchMaxBytes": "google.protobuf.Int64Value",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "google.protobuf.Int64Value",
"saslEnabledMechanisms": [
"SaslMechanism"
]
}
// end of the list of possible fields
},
"zookeeper": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
}
},
"zoneId": [
"string"
],
"brokersCount": "google.protobuf.Int64Value",
"assignPublicIp": "bool",
"unmanagedTopics": "bool",
"schemaRegistry": "bool",
"access": {
"dataTransfer": "bool"
},
"restApiConfig": {
"enabled": "bool"
},
"diskSizeAutoscaling": {
"plannedUsageThreshold": "int64",
"emergencyUsageThreshold": "int64",
"diskSizeLimit": "int64"
},
"kraft": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
}
}
},
"networkId": "string",
"health": "Health",
"status": "Status",
"securityGroupIds": [
"string"
],
"hostGroupIds": [
"string"
],
"deletionProtection": "bool",
"maintenanceWindow": {
// Includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
"anytime": "AnytimeMaintenanceWindow",
"weeklyMaintenanceWindow": {
"day": "WeekDay",
"hour": "int64"
}
// end of the list of possible fields
},
"plannedOperation": {
"info": "string",
"delayedUntil": "google.protobuf.Timestamp"
}
}
// end of the list of possible fields
}
An Operation resource. For more information, see Operation.
Field |
Description |
id |
string ID of the operation. |
description |
string Description of the operation. 0-256 characters long. |
createdAt |
Creation timestamp. |
createdBy |
string ID of the user or service account who initiated the operation. |
modifiedAt |
The time when the Operation resource was last modified. |
done |
bool If the value is false, the operation is still in progress. If true, the operation is completed, and either error or response is available. |
metadata |
Service-specific metadata associated with the operation. |
error |
The error result of the operation in case of failure or cancellation. The operation result includes only one of the fields error, response. |
response |
The normal response of the operation in case of success. The operation result includes only one of the fields error, response. |
RescheduleMaintenanceMetadata
Field |
Description |
clusterId |
string ID of the Kafka cluster. |
delayedUntil |
The time until which this maintenance operation is to be delayed. |
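Continuing the sketch above (same assumed stub packages for the Kafka and operation messages), the returned Operation can be inspected as follows: metadata and response arrive as google.protobuf.Any values, so they are unpacked into RescheduleMaintenanceMetadata and Cluster respectively, and error is read only once done is true.

package example

import (
	"fmt"
	"log"

	kafka "example.com/generated/yandex/cloud/mdb/kafka/v1"   // assumed stub package path
	operation "example.com/generated/yandex/cloud/operation"  // assumed stub package path
)

// inspectRescheduleOp unpacks the service-specific parts of the returned
// Operation. Both metadata and response are google.protobuf.Any messages,
// so they have to be unmarshalled into the concrete types documented here.
func inspectRescheduleOp(op *operation.Operation) error {
	var md kafka.RescheduleMaintenanceMetadata
	if err := op.GetMetadata().UnmarshalTo(&md); err != nil {
		return err
	}
	log.Printf("cluster %s: maintenance delayed until %s",
		md.GetClusterId(), md.GetDelayedUntil().AsTime())

	if !op.GetDone() {
		return nil // still in progress; poll the operation again later
	}
	// done == true: exactly one of error / response is set.
	if st := op.GetError(); st != nil {
		return fmt.Errorf("reschedule failed: %s", st.GetMessage())
	}
	var cluster kafka.Cluster
	if err := op.GetResponse().UnmarshalTo(&cluster); err != nil {
		return err
	}
	log.Printf("cluster %q health: %s", cluster.GetName(), cluster.GetHealth())
	return nil
}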
Cluster
An Apache Kafka® cluster resource.
For more information, see the Concepts section of the documentation.
Field |
Description |
id |
string ID of the Apache Kafka® cluster. |
folderId |
string ID of the folder that the Apache Kafka® cluster belongs to. |
createdAt |
Creation timestamp. |
name |
string Name of the Apache Kafka® cluster. |
description |
string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels |
string Custom labels for the Apache Kafka® cluster as key:value pairs. |
environment |
enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] |
Description of monitoring systems relevant to the Apache Kafka® cluster. |
config |
Configuration of the Apache Kafka® cluster. |
networkId |
string ID of the network that the cluster belongs to. |
health |
enum Health Aggregated cluster health. |
status |
enum Status Current state of the cluster. |
securityGroupIds[] |
string User security groups. |
hostGroupIds[] |
string Host groups hosting VMs of the cluster. |
deletionProtection |
bool Deletion Protection inhibits deletion of the cluster. |
maintenanceWindow |
Window of maintenance operations. |
plannedOperation |
Scheduled maintenance operation. |
Monitoring
Metadata of monitoring system.
Field |
Description |
name |
string Name of the monitoring system. |
description |
string Description of the monitoring system. |
link |
string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field |
Description |
version |
string Version of Apache Kafka® used in the cluster. Possible values: |
kafka |
Configuration and resource allocation for Kafka brokers. |
zookeeper |
Configuration and resource allocation for ZooKeeper hosts. |
zoneId[] |
string IDs of availability zones where Kafka brokers reside. |
brokersCount |
The number of Kafka brokers deployed in each availability zone. |
assignPublicIp |
bool The flag that defines whether a public IP address is assigned to the cluster. |
unmanagedTopics |
bool Allows topics to be managed via the Kafka Admin API. |
schemaRegistry |
bool Enables managed Schema Registry on the cluster. |
access |
Access policy for external services. |
restApiConfig |
Configuration of the REST API. |
diskSizeAutoscaling |
Disk size autoscaling settings. |
kraft |
Configuration and resource allocation for KRaft-controller hosts. |
Kafka
Field |
Description |
resources |
Resources allocated to Kafka brokers. |
kafkaConfig_2_8 |
Kafka broker configuration. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
kafkaConfig_3 |
Kafka broker configuration. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. |
Resources
Field |
Description |
resourcePresetId |
string ID of the preset for computational resources available to a host (CPU, memory, etc.). |
diskSize |
int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
diskTypeId |
string Type of the storage environment for the host. |
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field |
Description |
compressionType |
enum CompressionType Cluster topics compression type. |
logFlushIntervalMessages |
The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMessages setting. |
logFlushIntervalMs |
The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMs setting. |
logFlushSchedulerIntervalMs |
The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
Partition size limit; Kafka will discard old log segments to free up space if the partition size exceeds this value. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionBytes setting. |
logRetentionHours |
The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionMs setting. |
logSegmentBytes |
The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segmentBytes setting. |
logPreallocate |
Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socketSendBufferBytes |
The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
Enables automatic topic creation on the server. |
numPartitions |
Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
Default replication factor for topics across the whole cluster. |
messageMaxBytes |
The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
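To illustrate the retention fallback described above (logRetentionMs, when set, takes precedence over logRetentionMinutes, which in turn takes precedence over logRetentionHours), here is a hedged Go sketch using the same assumed stub package; optional numeric settings use protobuf wrapper types, so an unset field is nil rather than zero.

package example

import (
	"google.golang.org/protobuf/types/known/wrapperspb"

	kafka "example.com/generated/yandex/cloud/mdb/kafka/v1" // assumed stub package path
)

// retentionConfig sets retention by logRetentionMs only; logRetentionMinutes
// and logRetentionHours are left nil and would apply only if the
// finer-grained setting were unset.
func retentionConfig() *kafka.KafkaConfig2_8 {
	return &kafka.KafkaConfig2_8{
		LogRetentionMs:    wrapperspb.Int64(7 * 24 * 60 * 60 * 1000), // keep segments for 7 days
		LogRetentionBytes: wrapperspb.Int64(10 << 30),                // cap each partition at 10 GiB
		LogPreallocate:    wrapperspb.Bool(false),
	}
}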
KafkaConfig3
Kafka version 3.x broker configuration.
Field |
Description |
compressionType |
enum CompressionType Cluster topics compression type. |
logFlushIntervalMessages |
The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMessages setting. |
logFlushIntervalMs |
The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMs setting. |
logFlushSchedulerIntervalMs |
The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. |
logRetentionBytes |
Partition size limit; Kafka will discard old log segments to free up space if the partition size exceeds this value. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionBytes setting. |
logRetentionHours |
The number of hours to keep a log segment file before deleting it. |
logRetentionMinutes |
The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
logRetentionMs |
The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionMs setting. |
logSegmentBytes |
The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segmentBytes setting. |
logPreallocate |
Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socketSendBufferBytes |
The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socketReceiveBufferBytes |
The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
autoCreateTopicsEnable |
Enables automatic topic creation on the server. |
numPartitions |
Default number of partitions per topic across the whole cluster. |
defaultReplicationFactor |
Default replication factor for topics across the whole cluster. |
messageMaxBytes |
The largest record batch size allowed by Kafka. Default value: 1048588. |
replicaFetchMaxBytes |
The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
sslCipherSuites[] |
string A list of cipher suites. |
offsetsRetentionMinutes |
Offset storage time after a consumer group loses all its consumers. Default: 10080. |
saslEnabledMechanisms[] |
enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
Zookeeper
Field |
Description |
resources |
Resources allocated to ZooKeeper hosts. |
Access
Field |
Description |
dataTransfer |
bool Allow access for DataTransfer. |
RestAPIConfig
Field |
Description |
enabled |
bool Whether the REST API is enabled for this cluster. |
DiskSizeAutoscaling
Field |
Description |
plannedUsageThreshold |
int64 Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. Zero value means disabled threshold. |
emergencyUsageThreshold |
int64 Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. Zero value means disabled threshold. |
diskSizeLimit |
int64 New storage size (in bytes) that is set when one of the thresholds is achieved. |
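As a worked example (a sketch under the same stub-package assumption), the settings below would grow the disk at the next maintenance window once 80% of storage is used, grow it immediately at 90% usage, and never beyond 200 GiB; a zero threshold disables the corresponding trigger.

package example

import kafka "example.com/generated/yandex/cloud/mdb/kafka/v1" // assumed stub package path

// autoscaling: the thresholds are percentages of used storage, the limit is in bytes.
func autoscaling() *kafka.DiskSizeAutoscaling {
	return &kafka.DiskSizeAutoscaling{
		PlannedUsageThreshold:   80,        // scale during the maintenance window at 80% usage
		EmergencyUsageThreshold: 90,        // scale immediately at 90% usage
		DiskSizeLimit:           200 << 30, // upper bound: 200 GiB
	}
}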
KRaft
Field |
Description |
resources |
Resources allocated to KRaft controller hosts. |
MaintenanceWindow
Field |
Description |
anytime |
Includes only one of the fields anytime, weeklyMaintenanceWindow. |
weeklyMaintenanceWindow |
Includes only one of the fields anytime, weeklyMaintenanceWindow. |
AnytimeMaintenanceWindow
Field |
Description |
Empty |
WeeklyMaintenanceWindow
Field |
Description |
day |
enum WeekDay Day of the week for the maintenance window. |
hour |
int64 Hour of the day in UTC. |
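A hedged sketch of the maintenanceWindow oneof, using the same assumed stub package; the oneof wrapper type and the WeekDay constant follow the usual protoc-gen-go naming conventions and are not verified against the published API. It pins maintenance to Mondays at 02:00 UTC; the alternative branch is the empty AnytimeMaintenanceWindow message.

package example

import kafka "example.com/generated/yandex/cloud/mdb/kafka/v1" // assumed stub package path

// mondayWindow fills the weeklyMaintenanceWindow branch of the oneof.
func mondayWindow() *kafka.MaintenanceWindow {
	return &kafka.MaintenanceWindow{
		Policy: &kafka.MaintenanceWindow_WeeklyMaintenanceWindow{ // assumed oneof wrapper name
			WeeklyMaintenanceWindow: &kafka.WeeklyMaintenanceWindow{
				Day:  kafka.WeeklyMaintenanceWindow_MON, // assumed enum constant
				Hour: 2,                                 // hour of the day in UTC
			},
		},
	}
}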
MaintenanceOperation
Field |
Description |
info |
string Information about the maintenance operation. |
delayedUntil |
The time until which this maintenance operation is delayed. |