Managed Service for Apache Kafka® API, REST: Cluster.list
Retrieves the list of Apache Kafka® clusters that belong to the specified folder.
HTTP request
GET https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters
Query parameters
Parameter | Description |
---|---|
folderId | Required. ID of the folder to list Apache Kafka® clusters in. To get the folder ID, make a list request. The maximum string length in characters is 50. |
pageSize | The maximum number of results per page to return. If the number of available results is larger than pageSize, the service returns a nextPageToken that can be used to get the next page of results in subsequent list requests. The maximum value is 1000. |
pageToken | Page token. To get the next page of results, set pageToken to the nextPageToken returned by the previous list request. The maximum string length in characters is 100. |
filter | Filter support is not currently implemented. Any filters are ignored. The maximum string length in characters is 1000. |
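The request above can be sketched in Python with only the standard library. This is a minimal illustration, not an official client: the `IAM_TOKEN` environment variable and the folder ID passed at the bottom are hypothetical placeholders for your own credentials and folder.

```python
# Minimal sketch of a Cluster.list call, assuming an IAM token in the
# IAM_TOKEN environment variable (hypothetical setup; no request is sent here).
import os
import urllib.parse
import urllib.request

BASE_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters"

def build_list_request(folder_id: str, page_size: int = 100,
                       page_token: str = "") -> urllib.request.Request:
    """Build the GET request for Cluster.list with its query parameters."""
    params = {"folderId": folder_id, "pageSize": page_size}
    if page_token:
        params["pageToken"] = page_token
    url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {os.environ.get('IAM_TOKEN', '')}"},
    )

req = build_list_request("b1g-example-folder-id")  # hypothetical folder ID
print(req.full_url)
```

To actually send the request, pass `req` to `urllib.request.urlopen` and decode the JSON body described in the Response section below.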
Response
HTTP Code: 200 - OK
{
"clusters": [
{
"id": "string",
"folderId": "string",
"createdAt": "string",
"name": "string",
"description": "string",
"labels": "object",
"environment": "string",
"monitoring": [
{
"name": "string",
"description": "string",
"link": "string"
}
],
"config": {
"version": "string",
"kafka": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
},
// `clusters[].config.kafka` includes only one of the fields `kafkaConfig_2_8`, `kafkaConfig_3`
"kafkaConfig_2_8": {
"compressionType": "string",
"logFlushIntervalMessages": "integer",
"logFlushIntervalMs": "integer",
"logFlushSchedulerIntervalMs": "integer",
"logRetentionBytes": "integer",
"logRetentionHours": "integer",
"logRetentionMinutes": "integer",
"logRetentionMs": "integer",
"logSegmentBytes": "integer",
"logPreallocate": true,
"socketSendBufferBytes": "integer",
"socketReceiveBufferBytes": "integer",
"autoCreateTopicsEnable": true,
"numPartitions": "integer",
"defaultReplicationFactor": "integer",
"messageMaxBytes": "integer",
"replicaFetchMaxBytes": "integer",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "integer",
"saslEnabledMechanisms": [
"string"
]
},
"kafkaConfig_3": {
"compressionType": "string",
"logFlushIntervalMessages": "integer",
"logFlushIntervalMs": "integer",
"logFlushSchedulerIntervalMs": "integer",
"logRetentionBytes": "integer",
"logRetentionHours": "integer",
"logRetentionMinutes": "integer",
"logRetentionMs": "integer",
"logSegmentBytes": "integer",
"logPreallocate": true,
"socketSendBufferBytes": "integer",
"socketReceiveBufferBytes": "integer",
"autoCreateTopicsEnable": true,
"numPartitions": "integer",
"defaultReplicationFactor": "integer",
"messageMaxBytes": "integer",
"replicaFetchMaxBytes": "integer",
"sslCipherSuites": [
"string"
],
"offsetsRetentionMinutes": "integer",
"saslEnabledMechanisms": [
"string"
]
},
// end of the list of possible fields `clusters[].config.kafka`
},
"zookeeper": {
"resources": {
"resourcePresetId": "string",
"diskSize": "string",
"diskTypeId": "string"
}
},
"zoneId": [
"string"
],
"brokersCount": "integer",
"assignPublicIp": true,
"unmanagedTopics": true,
"schemaRegistry": true,
"access": {
"dataTransfer": true
},
"restApiConfig": {
"enabled": true
},
"diskSizeAutoscaling": {
"plannedUsageThreshold": "string",
"emergencyUsageThreshold": "string",
"diskSizeLimit": "string"
}
},
"networkId": "string",
"health": "string",
"status": "string",
"securityGroupIds": [
"string"
],
"hostGroupIds": [
"string"
],
"deletionProtection": true,
"maintenanceWindow": {
// `clusters[].maintenanceWindow` includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
"anytime": {},
"weeklyMaintenanceWindow": {
"day": "string",
"hour": "string"
},
// end of the list of possible fields `clusters[].maintenanceWindow`
},
"plannedOperation": {
"info": "string",
"delayedUntil": "string"
}
}
],
"nextPageToken": "string"
}
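As the comments in the body above note, `clusters[].config.kafka` carries exactly one of `kafkaConfig_2_8` and `kafkaConfig_3`. A small sketch of reading whichever variant is present from a decoded response; the sample response and the `COMPRESSION_TYPE_LZ4` value are illustrative, not real API output:

```python
# Sketch: read broker settings from a decoded Cluster.list response,
# handling the oneof between kafkaConfig_2_8 and kafkaConfig_3.
# The sample below is illustrative data, not a real API response.
sample_response = {
    "clusters": [
        {
            "id": "abc123",
            "name": "demo",
            "config": {
                "version": "3.5",
                "kafka": {
                    "kafkaConfig_3": {"compressionType": "COMPRESSION_TYPE_LZ4"},
                },
            },
        }
    ],
    "nextPageToken": "",
}

def broker_config(cluster: dict) -> dict:
    """Return whichever versioned broker config the oneof carries (or {})."""
    kafka = cluster.get("config", {}).get("kafka", {})
    for key in ("kafkaConfig_2_8", "kafkaConfig_3"):
        if key in kafka:
            return kafka[key]
    return {}

for cluster in sample_response["clusters"]:
    print(cluster["name"], broker_config(cluster).get("compressionType"))
```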
Field | Description |
---|---|
clusters[] | object List of Apache Kafka® clusters. |
clusters[].id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
clusters[].folderId | string ID of the folder that the Apache Kafka® cluster belongs to. |
clusters[].createdAt | string (date-time) Creation timestamp. String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits). |
clusters[].name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. The value must match the service's naming regular expression. |
clusters[].description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
clusters[].labels | object Custom labels for the Apache Kafka® cluster as key:value pairs. |
clusters[].environment | string Deployment environment of the Apache Kafka® cluster. |
clusters[].monitoring[] | object Description of monitoring systems relevant to the Apache Kafka® cluster. |
clusters[].monitoring[].name | string Name of the monitoring system. |
clusters[].monitoring[].description | string Description of the monitoring system. |
clusters[].monitoring[].link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
clusters[].config | object Configuration of the Apache Kafka® cluster. |
clusters[].config.version | string Version of Apache Kafka® used in the cluster. |
clusters[].config.kafka | object Configuration and resource allocation for Kafka brokers. |
clusters[].config.kafka.resources | object Resources allocated to Kafka brokers. |
clusters[].config.kafka.resources.resourcePresetId | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
clusters[].config.kafka.resources.diskSize | string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partition count, so that each partition can have one active segment file and one closed segment file that can be deleted. |
clusters[].config.kafka.resources.diskTypeId | string Type of the storage environment for the host. |
clusters[].config.kafka.kafkaConfig_2_8 | object clusters[].config.kafka includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. Kafka version 2.8 broker configuration. |
clusters[].config.kafka.kafkaConfig_2_8.compressionType | string Compression type for cluster topics. |
clusters[].config.kafka.kafkaConfig_2_8.logFlushIntervalMessages | integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.logFlushIntervalMs | integer (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.logFlushSchedulerIntervalMs | integer (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
clusters[].config.kafka.kafkaConfig_2_8.logRetentionBytes | integer (int64) Partition size limit; Kafka will discard old log segments to free up space if the limit is exceeded. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.logRetentionHours | integer (int64) The number of hours to keep a log segment file before deleting it. |
clusters[].config.kafka.kafkaConfig_2_8.logRetentionMinutes | integer (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
clusters[].config.kafka.kafkaConfig_2_8.logRetentionMs | integer (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.logSegmentBytes | integer (int64) The maximum size of a single log file. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.logPreallocate | boolean (boolean) Whether to preallocate the file when creating a new log segment. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_2_8.socketSendBufferBytes | integer (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default is used. |
clusters[].config.kafka.kafkaConfig_2_8.socketReceiveBufferBytes | integer (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default is used. |
clusters[].config.kafka.kafkaConfig_2_8.autoCreateTopicsEnable | boolean (boolean) Enables automatic topic creation on the server. |
clusters[].config.kafka.kafkaConfig_2_8.numPartitions | integer (int64) Default number of partitions per topic across the whole cluster. |
clusters[].config.kafka.kafkaConfig_2_8.defaultReplicationFactor | integer (int64) Default replication factor for topics across the whole cluster. |
clusters[].config.kafka.kafkaConfig_2_8.messageMaxBytes | integer (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
clusters[].config.kafka.kafkaConfig_2_8.replicaFetchMaxBytes | integer (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
clusters[].config.kafka.kafkaConfig_2_8.sslCipherSuites[] | string A list of cipher suites. |
clusters[].config.kafka.kafkaConfig_2_8.offsetsRetentionMinutes | integer (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
clusters[].config.kafka.kafkaConfig_2_8.saslEnabledMechanisms[] | string The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
clusters[].config.kafka.kafkaConfig_3 | object clusters[].config.kafka includes only one of the fields kafkaConfig_2_8, kafkaConfig_3. Kafka version 3.x broker configuration. |
clusters[].config.kafka.kafkaConfig_3.compressionType | string Compression type for cluster topics. |
clusters[].config.kafka.kafkaConfig_3.logFlushIntervalMessages | integer (int64) The number of messages accumulated on a log partition before messages are flushed to disk. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.logFlushIntervalMs | integer (int64) The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.logFlushSchedulerIntervalMs | integer (int64) The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
clusters[].config.kafka.kafkaConfig_3.logRetentionBytes | integer (int64) Partition size limit; Kafka will discard old log segments to free up space if the limit is exceeded. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.logRetentionHours | integer (int64) The number of hours to keep a log segment file before deleting it. |
clusters[].config.kafka.kafkaConfig_3.logRetentionMinutes | integer (int64) The number of minutes to keep a log segment file before deleting it. If not set, the value of logRetentionHours is used. |
clusters[].config.kafka.kafkaConfig_3.logRetentionMs | integer (int64) The number of milliseconds to keep a log segment file before deleting it. If not set, the value of logRetentionMinutes is used. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.logSegmentBytes | integer (int64) The maximum size of a single log file. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.logPreallocate | boolean (boolean) Whether to preallocate the file when creating a new log segment. This is a global cluster-level setting that can be overridden at the topic level. |
clusters[].config.kafka.kafkaConfig_3.socketSendBufferBytes | integer (int64) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default is used. |
clusters[].config.kafka.kafkaConfig_3.socketReceiveBufferBytes | integer (int64) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default is used. |
clusters[].config.kafka.kafkaConfig_3.autoCreateTopicsEnable | boolean (boolean) Enables automatic topic creation on the server. |
clusters[].config.kafka.kafkaConfig_3.numPartitions | integer (int64) Default number of partitions per topic across the whole cluster. |
clusters[].config.kafka.kafkaConfig_3.defaultReplicationFactor | integer (int64) Default replication factor for topics across the whole cluster. |
clusters[].config.kafka.kafkaConfig_3.messageMaxBytes | integer (int64) The largest record batch size allowed by Kafka. Default value: 1048588. |
clusters[].config.kafka.kafkaConfig_3.replicaFetchMaxBytes | integer (int64) The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
clusters[].config.kafka.kafkaConfig_3.sslCipherSuites[] | string A list of cipher suites. |
clusters[].config.kafka.kafkaConfig_3.offsetsRetentionMinutes | integer (int64) Offset storage time after a consumer group loses all its consumers. Default: 10080. |
clusters[].config.kafka.kafkaConfig_3.saslEnabledMechanisms[] | string The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
clusters[].config.zookeeper | object Configuration and resource allocation for ZooKeeper hosts. |
clusters[].config.zookeeper.resources | object Resources allocated to ZooKeeper hosts. |
clusters[].config.zookeeper.resources.resourcePresetId | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
clusters[].config.zookeeper.resources.diskSize | string (int64) Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partition count, so that each partition can have one active segment file and one closed segment file that can be deleted. |
clusters[].config.zookeeper.resources.diskTypeId | string Type of the storage environment for the host. |
clusters[].config.zoneId[] | string IDs of availability zones where Kafka brokers reside. |
clusters[].config.brokersCount | integer (int64) The number of Kafka brokers deployed in each availability zone. |
clusters[].config.assignPublicIp | boolean (boolean) The flag that defines whether a public IP address is assigned to the cluster. If the value is true, the cluster is accessible from the internet over its public IP address. |
clusters[].config.unmanagedTopics | boolean (boolean) Allows topic management via the Admin API. Deprecated: the feature is now enabled permanently. |
clusters[].config.schemaRegistry | boolean (boolean) Enables managed Schema Registry on the cluster. |
clusters[].config.access | object Access policy for external services. |
clusters[].config.access.dataTransfer | boolean (boolean) Allows access for DataTransfer. |
clusters[].config.restApiConfig | object Configuration of the REST API. |
clusters[].config.restApiConfig.enabled | boolean (boolean) Whether the REST API is enabled for this cluster. |
clusters[].config.diskSizeAutoscaling | object Disk size autoscaling settings. |
clusters[].config.diskSizeAutoscaling.plannedUsageThreshold | string (int64) Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A zero value disables the threshold. Acceptable values are 0 to 100, inclusive. |
clusters[].config.diskSizeAutoscaling.emergencyUsageThreshold | string (int64) Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A zero value disables the threshold. Acceptable values are 0 to 100, inclusive. |
clusters[].config.diskSizeAutoscaling.diskSizeLimit | string (int64) New storage size (in bytes) that is set when one of the thresholds is reached. |
clusters[].networkId | string ID of the network that the cluster belongs to. |
clusters[].health | string Aggregated cluster health. |
clusters[].status | string Current state of the cluster. |
clusters[].securityGroupIds[] | string User security groups. |
clusters[].hostGroupIds[] | string Host groups hosting VMs of the cluster. |
clusters[].deletionProtection | boolean (boolean) Deletion protection prevents the cluster from being deleted. |
clusters[].maintenanceWindow | object Window of maintenance operations. |
clusters[].maintenanceWindow.anytime | object clusters[].maintenanceWindow includes only one of the fields anytime, weeklyMaintenanceWindow. |
clusters[].maintenanceWindow.weeklyMaintenanceWindow | object clusters[].maintenanceWindow includes only one of the fields anytime, weeklyMaintenanceWindow. |
clusters[].maintenanceWindow.weeklyMaintenanceWindow.day | string Day of the week. |
clusters[].maintenanceWindow.weeklyMaintenanceWindow.hour | string (int64) Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
clusters[].plannedOperation | object Scheduled maintenance operation. |
clusters[].plannedOperation.info | string The maximum string length in characters is 256. |
clusters[].plannedOperation.delayedUntil | string (date-time) String in RFC3339 text format. To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits). |
nextPageToken | string Token that allows you to get the next page of results for list requests. If the number of results is larger than pageSize, use nextPageToken as the value for the pageToken parameter in the next list request. Each subsequent list request will have its own nextPageToken to continue paging through the results. |
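The pageToken/nextPageToken contract can be sketched as a loop. Here `fetch_page` stands in for an authenticated GET against the endpoint (as built in the earlier request example); the fake transport below only demonstrates the loop's termination logic.

```python
# Sketch of paging through Cluster.list using pageToken/nextPageToken.
# `fetch_page` stands in for an authenticated GET returning the decoded body.
def list_all_clusters(fetch_page, folder_id: str, page_size: int = 100) -> list:
    clusters, page_token = [], ""
    while True:
        page = fetch_page(folder_id, page_size, page_token)
        clusters.extend(page.get("clusters", []))
        page_token = page.get("nextPageToken", "")
        if not page_token:  # an empty token means there are no further pages
            return clusters

# Fake transport simulating two pages, purely to demonstrate the loop.
def fake_fetch(folder_id, page_size, page_token):
    if not page_token:
        return {"clusters": [{"id": "c1"}], "nextPageToken": "p2"}
    return {"clusters": [{"id": "c2"}], "nextPageToken": ""}

print([c["id"] for c in list_all_clusters(fake_fetch, "folder")])  # → ['c1', 'c2']
```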