Managed Service for Apache Kafka® API, gRPC: ClusterService.List
Retrieves the list of Apache Kafka® clusters that belong to the specified folder.
gRPC request
rpc List (ListClustersRequest) returns (ListClustersResponse)
ListClustersRequest
{
"folderId": "string",
"pageSize": "int64",
"pageToken": "string",
"filter": "string"
}
Field | Description
folderId | string Required field. ID of the folder to list Apache Kafka® clusters in. To get the folder ID, make a yandex.cloud.resourcemanager.v1.FolderService.List request.
pageSize | int64 The maximum number of results per page to return. If the number of available results is larger than pageSize, the service returns a ListClustersResponse.nextPageToken that can be used to get the next page of results in subsequent list requests.
pageToken | string Page token. To get the next page of results, set pageToken to the ListClustersResponse.nextPageToken returned by the previous list request.
filter | string Filter support is not currently implemented. Any filters are ignored.
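The following is a minimal request sketch in Python, assuming stubs generated from the public yandex.cloud.mdb.kafka.v1 protos (the module paths, the mdb.api.cloud.yandex.net:443 endpoint, and the folder ID below are assumptions to adjust for your environment). Note that the generated Python code uses snake_case field names (folder_id, page_size) for the camelCase fields shown above.

import grpc

# Assumed module paths for stubs generated from
# yandex/cloud/mdb/kafka/v1/cluster_service.proto; adjust to your codegen layout.
from yandex.cloud.mdb.kafka.v1 import cluster_service_pb2, cluster_service_pb2_grpc

IAM_TOKEN = "<your IAM token>"      # obtained through the usual IAM flow
FOLDER_ID = "b1gexamplefolderid00"  # hypothetical folder ID

# The MDB gRPC endpoint; verify against the API endpoint list for your installation.
channel = grpc.secure_channel("mdb.api.cloud.yandex.net:443", grpc.ssl_channel_credentials())
stub = cluster_service_pb2_grpc.ClusterServiceStub(channel)

request = cluster_service_pb2.ListClustersRequest(
    folder_id=FOLDER_ID,  # required
    page_size=100,        # optional cap on the number of clusters per page
)
response = stub.List(request, metadata=[("authorization", f"Bearer {IAM_TOKEN}")])

for cluster in response.clusters:
    print(cluster.id, cluster.name)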
ListClustersResponse
{
"clusters": [
{
"id": "string",
"folderId": "string",
"createdAt": "google.protobuf.Timestamp",
"name": "string",
"description": "string",
"labels": "string",
"environment": "Environment",
"monitoring": [
{
"name": "string",
"description": "string",
"link": "string"
}
],
"config": {
"version": "string",
"kafka": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
},
// Includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
"kafkaConfig_2_8": {
"compressionType": "CompressionType",
"logFlushIntervalMessages": "google.protobuf.Int64Value",
"logFlushIntervalMs": "google.protobuf.Int64Value",
"logFlushSchedulerIntervalMs": "google.protobuf.Int64Value",
"logRetentionBytes": "google.protobuf.Int64Value",
"logRetentionHours": "google.protobuf.Int64Value",
"logRetentionMinutes": "google.protobuf.Int64Value",
"logRetentionMs": "google.protobuf.Int64Value",
"logSegmentBytes": "google.protobuf.Int64Value",
"logPreallocate": "google.protobuf.BoolValue",
"socketSendBufferBytes": "google.protobuf.Int64Value",
"socketReceiveBufferBytes": "google.protobuf.Int64Value",
"autoCreateTopicsEnable": "google.protobuf.BoolValue",
"numPartitions": "google.protobuf.Int64Value",
"defaultReplicationFactor": "google.protobuf.Int64Value",
"messageMaxBytes": "google.protobuf.Int64Value",
"replicaFetchMaxBytes": "google.protobuf.Int64Value",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "google.protobuf.Int64Value",
"saslEnabledMechanisms": [
"SaslMechanism"
]
},
"kafkaConfig_3": {
"compressionType": "CompressionType",
"logFlushIntervalMessages": "google.protobuf.Int64Value",
"logFlushIntervalMs": "google.protobuf.Int64Value",
"logFlushSchedulerIntervalMs": "google.protobuf.Int64Value",
"logRetentionBytes": "google.protobuf.Int64Value",
"logRetentionHours": "google.protobuf.Int64Value",
"logRetentionMinutes": "google.protobuf.Int64Value",
"logRetentionMs": "google.protobuf.Int64Value",
"logSegmentBytes": "google.protobuf.Int64Value",
"logPreallocate": "google.protobuf.BoolValue",
"socketSendBufferBytes": "google.protobuf.Int64Value",
"socketReceiveBufferBytes": "google.protobuf.Int64Value",
"autoCreateTopicsEnable": "google.protobuf.BoolValue",
"numPartitions": "google.protobuf.Int64Value",
"defaultReplicationFactor": "google.protobuf.Int64Value",
"messageMaxBytes": "google.protobuf.Int64Value",
"replicaFetchMaxBytes": "google.protobuf.Int64Value",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "google.protobuf.Int64Value",
"saslEnabledMechanisms": [
"SaslMechanism"
]
}
// end of the list of possible fields
},
"zookeeper": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
}
},
"zoneId": [
"string"
],
"brokersCount": "google.protobuf.Int64Value",
"assignPublicIp": "bool",
"unmanagedTopics": "bool",
"schemaRegistry": "bool",
"access": {
"dataTransfer": "bool"
},
"restApiConfig": {
"enabled": "bool"
},
"diskSizeAutoscaling": {
"plannedUsageThreshold": "int64",
"emergencyUsageThreshold": "int64",
"diskSizeLimit": "int64"
},
"kraft": {
"resources": {
"resourcePresetId": "string",
"diskSize": "int64",
"diskTypeId": "string"
}
}
},
"networkId": "string",
"health": "Health",
"status": "Status",
"securityGroupIds": [
"string"
],
"hostGroupIds": [
"string"
],
"deletionProtection": "bool",
"maintenanceWindow": {
// Includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
"anytime": "AnytimeMaintenanceWindow",
"weeklyMaintenanceWindow": {
"day": "WeekDay",
"hour": "int64"
}
// end of the list of possible fields
},
"plannedOperation": {
"info": "string",
"delayedUntil": "google.protobuf.Timestamp"
}
}
],
"nextPageToken": "string"
}
Field | Description
clusters[] | List of Apache Kafka® clusters.
nextPageToken | string Token that allows you to get the next page of results for list requests. If the number of results is larger than ListClustersRequest.pageSize, use nextPageToken as the value for the ListClustersRequest.pageToken parameter in the next list request. Each subsequent list request has its own nextPageToken to continue paging through the results.
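Because a single response is capped at pageSize clusters, callers should loop until nextPageToken comes back empty. A pagination sketch, reusing the stub and generated modules from the request example above:

def list_all_clusters(stub, folder_id, iam_token, page_size=100):
    """Collect every cluster in the folder by following nextPageToken."""
    clusters, page_token = [], ""
    while True:
        request = cluster_service_pb2.ListClustersRequest(
            folder_id=folder_id,
            page_size=page_size,
            page_token=page_token,  # empty on the first request
        )
        response = stub.List(
            request, metadata=[("authorization", f"Bearer {iam_token}")]
        )
        clusters.extend(response.clusters)
        if not response.next_page_token:  # empty token: this was the last page
            return clusters
        page_token = response.next_page_token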
Cluster
An Apache Kafka® cluster resource.
For more information, see the Concepts section of the documentation.
Field | Description
id | string ID of the Apache Kafka® cluster.
folderId | string ID of the folder that the Apache Kafka® cluster belongs to.
createdAt | Creation timestamp.
name | string Name of the Apache Kafka® cluster.
description | string Description of the Apache Kafka® cluster. 0-256 characters long.
labels | Custom labels for the Apache Kafka® cluster as key:value pairs.
environment | enum Environment Deployment environment of the Apache Kafka® cluster.
monitoring[] | Description of monitoring systems relevant to the Apache Kafka® cluster.
config | Configuration of the Apache Kafka® cluster.
networkId | string ID of the network that the cluster belongs to.
health | enum Health Aggregated cluster health.
status | enum Status Current state of the cluster.
securityGroupIds[] | string User security groups.
hostGroupIds[] | string Host groups hosting VMs of the cluster.
deletionProtection | bool Deletion Protection inhibits deletion of the cluster.
maintenanceWindow | Window of maintenance operations.
plannedOperation | Scheduled maintenance operation.
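A Cluster message from the response can be inspected like any other protobuf object. A sketch that prints a short summary (field names are snake_case in the generated code; createdAt is a google.protobuf.Timestamp and labels is a key:value map):

def describe_cluster(cluster):
    """Print a human-readable summary of one Cluster message."""
    created = cluster.created_at.ToDatetime()  # Timestamp -> datetime.datetime
    labels = ", ".join(f"{k}={v}" for k, v in cluster.labels.items())
    print(f"{cluster.id} {cluster.name} (created {created:%Y-%m-%d})")
    # health and status are enums and print as numeric values here
    print(f"  health={cluster.health} status={cluster.status}")
    print(f"  network={cluster.network_id} labels=[{labels}]")
    for m in cluster.monitoring:
        print(f"  monitoring: {m.name} -> {m.link}")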
Monitoring
Metadata of monitoring system.
Field | Description
name | string Name of the monitoring system.
description | string Description of the monitoring system.
link | string Link to the monitoring system charts for the Apache Kafka® cluster.
ConfigSpec
Field | Description
version | string Version of Apache Kafka® used in the cluster. Possible values:
kafka | Configuration and resource allocation for Kafka brokers.
zookeeper | Configuration and resource allocation for ZooKeeper hosts.
zoneId[] | string IDs of availability zones where Kafka brokers reside.
brokersCount | The number of Kafka brokers deployed in each availability zone.
assignPublicIp | bool The flag that defines whether a public IP address is assigned to the cluster.
unmanagedTopics | bool Allows topic management via the Admin API.
schemaRegistry | bool Enables managed Schema Registry on the cluster.
access | Access policy for external services.
restApiConfig | Configuration of the REST API.
diskSizeAutoscaling | Disk size autoscaling settings.
kraft | Configuration and resource allocation for KRaft controller hosts.
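Since brokersCount is the per-zone broker count, the total number of brokers in a cluster is brokersCount multiplied by the number of entries in zoneId. A sketch (falling back to one broker per zone when the wrapper is unset is an assumption for illustration):

def total_brokers(config):
    """brokersCount is per availability zone; multiply by the zone count."""
    per_zone = config.brokers_count.value if config.HasField("brokers_count") else 1
    return per_zone * len(config.zone_id)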
Kafka
Field | Description
resources | Resources allocated to Kafka brokers.
kafkaConfig_2_8 | Kafka broker configuration. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3.
kafkaConfig_3 | Kafka broker configuration. Includes only one of the fields kafkaConfig_2_8, kafkaConfig_3.
Resources
Field | Description
resourcePresetId | string ID of the preset for computational resources available to a host (CPU, memory, etc.).
diskSize | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partition count, so that each partition can have one active segment file and one closed segment file that can be deleted.
diskTypeId | string Type of the storage environment for the host.
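The diskSize constraint can be checked with simple arithmetic: the volume must exceed twice the segment size multiplied by the partition count, so that every partition can hold one active and one closed segment. For example:

def min_disk_size(segment_bytes, partitions):
    """Lower bound on diskSize: one active plus one closed segment per partition."""
    return 2 * segment_bytes * partitions

# 1 GiB segments and 300 partitions need strictly more than 600 GiB of storage.
assert min_disk_size(1024**3, 300) == 600 * 1024**3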
KafkaConfig2_8
Kafka version 2.8 broker configuration.
Field | Description
compressionType | enum CompressionType Cluster topics compression type.
logFlushIntervalMessages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMessages setting.
logFlushIntervalMs | The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flushMs setting.
logFlushSchedulerIntervalMs | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
logRetentionBytes | Partition size limit; Kafka will discard old log segments to free up space if size exceeds this value. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionBytes setting.
logRetentionHours | The number of hours to keep a log segment file before deleting it.
logRetentionMinutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used.
logRetentionMs | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retentionMs setting.
logSegmentBytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segmentBytes setting.
logPreallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.
socketSendBufferBytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socketReceiveBufferBytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
autoCreateTopicsEnable | Enables automatic topic creation on the server.
numPartitions | Default number of partitions per topic across the whole cluster.
defaultReplicationFactor | Default replication factor for topics across the whole cluster.
messageMaxBytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replicaFetchMaxBytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
sslCipherSuites[] | string A list of cipher suites.
offsetsRetentionMinutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
saslEnabledMechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
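Most of the settings above use google.protobuf wrapper types, so an unset value is distinguishable from 0 or false; check field presence before reading .value. A sketch that resolves the retention precedence described above (ms, then minutes, then hours):

def effective_retention_ms(kafka_config):
    """Resolve log retention in milliseconds; None if no retention wrapper is set."""
    if kafka_config.HasField("log_retention_ms"):
        return kafka_config.log_retention_ms.value
    if kafka_config.HasField("log_retention_minutes"):
        return kafka_config.log_retention_minutes.value * 60 * 1000
    if kafka_config.HasField("log_retention_hours"):
        return kafka_config.log_retention_hours.value * 60 * 60 * 1000
    return None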
KafkaConfig3
Kafka version 3.x broker configuration.
Field | Description
compressionType | enum CompressionType Cluster topics compression type.
logFlushIntervalMessages | The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMessages setting.
logFlushIntervalMs | The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flushMs setting.
logFlushSchedulerIntervalMs | The frequency of checks (in milliseconds) for any logs that need to be flushed to disk.
logRetentionBytes | Partition size limit; Kafka will discard old log segments to free up space if size exceeds this value. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionBytes setting.
logRetentionHours | The number of hours to keep a log segment file before deleting it.
logRetentionMinutes | The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used.
logRetentionMs | The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retentionMs setting.
logSegmentBytes | The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segmentBytes setting.
logPreallocate | Whether to preallocate the file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.
socketSendBufferBytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socketReceiveBufferBytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
autoCreateTopicsEnable | Enables automatic topic creation on the server.
numPartitions | Default number of partitions per topic across the whole cluster.
defaultReplicationFactor | Default replication factor for topics across the whole cluster.
messageMaxBytes | The largest record batch size allowed by Kafka. Default value: 1048588.
replicaFetchMaxBytes | The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
sslCipherSuites[] | string A list of cipher suites.
offsetsRetentionMinutes | Offset storage time after a consumer group loses all its consumers. Default: 10080.
saslEnabledMechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
Zookeeper
Field | Description
resources | Resources allocated to ZooKeeper hosts.
Access
Field | Description
dataTransfer | bool Allow access for DataTransfer.
RestAPIConfig
Field | Description
enabled | bool Whether the REST API is enabled for this cluster.
DiskSizeAutoscaling
Field | Description
plannedUsageThreshold | int64 Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A zero value disables the threshold.
emergencyUsageThreshold | int64 Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A zero value disables the threshold.
diskSizeLimit | int64 New storage size (in bytes) that is set when one of the thresholds is reached.
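A sketch of how the two thresholds interact, treating a zero threshold as disabled as described above (the usage percentage is a hypothetical input you would take from your own monitoring, and the exact comparison semantics are an assumption):

def autoscaling_action(usage_percent, planned_threshold, emergency_threshold):
    """Classify storage usage against the two thresholds; 0 disables a threshold."""
    if emergency_threshold and usage_percent >= emergency_threshold:
        return "scale immediately"
    if planned_threshold and usage_percent >= planned_threshold:
        return "scale during the next maintenance window"
    return "no scaling"

print(autoscaling_action(usage_percent=85, planned_threshold=80, emergency_threshold=90))
# -> scale during the next maintenance window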
KRaft
Field | Description
resources | Resources allocated to KRaft controller hosts.
MaintenanceWindow
Field | Description
anytime | Includes only one of the fields anytime, weeklyMaintenanceWindow.
weeklyMaintenanceWindow | Includes only one of the fields anytime, weeklyMaintenanceWindow.
AnytimeMaintenanceWindow
Field | Description
Empty | This message has no fields.
WeeklyMaintenanceWindow
Field | Description
day | enum WeekDay Day of the week for the maintenance window.
hour | int64 Hour of the day in UTC.
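Because maintenanceWindow contains only one of anytime or weeklyMaintenanceWindow, reading it is a oneof check in the generated code. A sketch (the oneof name "policy" is an assumption; verify it against your generated descriptors):

def describe_maintenance_window(mw):
    """Render the maintenance window oneof as a short string."""
    which = mw.WhichOneof("policy")  # assumed oneof name
    if which == "anytime":
        return "maintenance may run at any time"
    if which == "weekly_maintenance_window":
        w = mw.weekly_maintenance_window
        return f"maintenance on weekday {w.day} at {w.hour:02d}:00 UTC"
    return "maintenance window not set"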
MaintenanceOperation
Field | Description
info | string Information about this maintenance operation.
delayedUntil | Time until which this maintenance operation is delayed.