Yandex Managed Service for Kubernetes release notes
Yandex Managed Service for Kubernetes release channels receive updates in a set order. Updates with new features and improvements are released first in the rapid channel, then, after a while, in the regular channel, and only then do they become available in the stable channel.
Q3 2025
New features
- Now you can select a master configuration using Terraform and the Yandex Cloud CLI when creating or updating a cluster. For more information, see Creating a Managed Service for Kubernetes cluster.
- You can now access the Yandex Cloud API from a Managed Service for Kubernetes cluster using a Yandex Identity and Access Management workload identity federation: pods can exchange Kubernetes service account tokens for Yandex Cloud IAM tokens for simple authentication and authorization in the cloud (see the first sketch after this list).
- Added support for authentication in Yandex Cloud Registry using a node group service account. To access Cloud Registry registries, assign the `cloud-registry.artifacts.puller` role to the node group service account.
- Added support for simultaneously expanding multiple persistent volumes mounted on a single node. For more information, see Expanding a pod volume (see the second sketch after this list).
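For the workload identity federation item, a minimal sketch of a pod that mounts a projected service account token, which can then be exchanged for an IAM token. The service account name and audience value are assumptions for illustration; the actual audience depends on your federation setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wlif-demo
spec:
  serviceAccountName: app-sa            # assumption: Kubernetes SA mapped to the federation
  containers:
    - name: app
      image: alpine:3.19
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: sa-token                      # token file: /var/run/secrets/tokens/sa-token
              audience: my-federation-audience    # assumption: audience expected by the federation
              expirationSeconds: 3600
```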
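For the volume expansion item, a minimal sketch: expansion requires a StorageClass with `allowVolumeExpansion: true`, after which raising `spec.resources.requests.storage` on each PVC triggers the resize. The provisioner name and sizes here are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-expandable
provisioner: disk-csi-driver.mks.ycloud.io   # assumption: CSI driver name used by the service
allowVolumeExpansion: true                   # required for PVC expansion
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ssd-expandable
  resources:
    requests:
      storage: 20Gi   # raised from 10Gi; several PVCs on one node can now grow simultaneously
```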
Improvements
- Added support for Kubernetes 1.33. For more information, see Release channels.
- Updated the containerd runtime to version 1.7.27 for clusters running Kubernetes 1.30 or higher.
- Starting with Kubernetes 1.30, the node OS changed from Ubuntu 20.04 to Ubuntu 22.04. When you update node groups within these versions, new nodes are automatically created from an Ubuntu 22.04 VM image. For more information, see Updating node group OS.
- Cluster Autoscaler now checks zones for availability when selecting a node group for scaling. The system will no longer try to autoscale node groups in zones that are unavailable.
Fixes
- Fixed an error where, while master resources were being updated, the cluster would get the `Running` status before the update operation was completed.
- Fixed an error that disrupted the master's connectivity with nodes in clusters with tunnel mode when migrating the master from one subnet to another. The issue rendered Kubernetes webhooks and the Aggregated API inoperable on the newly migrated master.
Other updates
- Removed the option to disable upscaling of master resources in response to increased load; the feature is now enabled for all Managed Service for Kubernetes clusters.
- Removed from the Yandex Managed Service for Kubernetes® service level
Q2 2025
New features
- Added support for Kubernetes version 1.32. For more information, see Release channels.
- You can now specify the same value for the minimum and maximum number of nodes in an autoscaling group. This gives the group a fixed size and effectively disables autoscaling without switching to the fixed group type.
- Added support for encrypted Yandex Compute Cloud disks for static and dynamic provisioning of persistent volumes (see the sketch after this list).
- The `UpdateClusterCertificate` management event is now sent to Yandex Audit Trails when a cluster certificate is updated.
- Updated the Calico network controller to version 3.30.
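For the encrypted disk item, a sketch of how dynamic provisioning of an encrypted volume might look. Both the provisioner name and the KMS key parameter are assumptions rather than the documented API; check the service reference for the actual parameter names:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-ssd
provisioner: disk-csi-driver.mks.ycloud.io   # assumption: CSI driver name used by the service
parameters:
  type: network-ssd                          # assumption: Compute Cloud disk type parameter
  kmsKeyId: "<your-kms-key-id>"              # hypothetical parameter name for the KMS key
```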
Improvements
- Implemented forced removal of a node in an autoscaling group if, for any reason, it was unable to connect to the cluster within 15 minutes. Once removed, the node is automatically recreated.
- In accordance with the CIS Kubernetes Benchmark, disabled profiling for master components.
- In clusters with tunnel mode, added support for Topology Aware Routing, which keeps traffic within one availability zone to reduce network latency (see the example after this list).
- Made cluster node registration more secure: a bootstrap configuration can now be used to issue a certificate for a node only from that node itself, not from any other node or pod.
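For the Topology Aware Routing item, a minimal sketch of how the feature is enabled in upstream Kubernetes (1.27+): the `service.kubernetes.io/topology-mode` annotation asks the cluster to prefer endpoints in the client's availability zone. The app labels and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  annotations:
    service.kubernetes.io/topology-mode: Auto   # enable Topology Aware Routing
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```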
Fixes
- Fixed a bug in the Cilium network controller that made the cluster network unavailable if the masters failed. The network and applications in the cluster now remain available even if the masters fail completely. The fix applies only to clusters running Cilium 1.15 or higher (Kubernetes 1.31).
- Fixed a bug that could cause master components to keep operating with an expired certificate.
- Fixed a bug that could prevent autoscaling in node groups of over 80 nodes.
- Fixed a bug that could prevent updating Yandex Network Load Balancer target groups for `LoadBalancer`-type services.
Q1 2025
New features
- You can now configure computing resources for masters; quotas for these resources have been added.
- Updated the master configuration types:
- Base: Contains one master host in a single availability zone. Its former name is zonal.
- Highly available in three availability zones: Contains three master hosts in three different availability zones. Its former name is regional.
- Highly available in one availability zone: Contains three master hosts in one availability zone and one subnet. This is a new configuration.
For more information, see the master description.
Fixes and improvements
- Switched cluster secret encryption in etcd to KMS v2.
- Fixed an error that would, in some cases, prevent creating a Managed Service for Kubernetes cluster with logging enabled.
- Fixed the issue where a Network Load Balancer with deletion protection enabled, managed by a Managed Service for Kubernetes cluster, would block cluster deletion. Cluster deletion is no longer blocked, and such load balancers remain in the user's folder.
Q4 2024
New features
- Added support for Kubernetes version 1.31. For more information, see Release channels.
- Updated Cilium from version 1.12.9 to 1.15.10 for clusters with Kubernetes version 1.31 and higher.
- Updated CoreDNS from version 1.9.4 to 1.11.3 for all supported Kubernetes versions.
Fixes and improvements
- Added a preflight check that verifies objects and configurations are compatible with the new Kubernetes version before a cluster upgrade. If the check finds incompatible objects or configurations, the upgrade returns an error with a list of the incompatible resources and their descriptions. Currently, only Cilium network policies are checked.
- Fixed an issue that in some cases made it impossible to connect a new node to the cluster, leaving the node permanently in the `NOT_CONNECTED` status.
Q3 2024
New features
Added support for migrating masters between subnets within a single availability zone.
Fixes and improvements
- Fixed the error that prevented saving cluster audit log files with records larger than 128 KB. Such records are now clipped.
- Revised the cluster roles for the Cilium network policy controller; they now have only the minimum required permissions.
- Added validation of the `subnet-id` field when updating a node group using the CLI, Terraform, or API. If both the `network-interface` and `locations` parameters are specified in an update request, the `subnet-id` fields under `locations` must either all be empty or fully match the `subnet-id` list under `network-interface` (the `subnet-id` items may be listed in any order). If the `network-interface` array in the request has more than one element, the `subnet-id` fields under `locations` must be empty (see the schematic after this list).
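To illustrate the validation rule above, a schematic of a valid single-interface update. This is not a literal request format; the field names simply follow the parameters named above, and the subnet and zone identifiers are placeholders:

```yaml
# Schematic only: locations[].subnet-id must be empty or match
# network-interface.subnet-id as a set (order does not matter).
network-interface:
  subnet-id: [subnet-a, subnet-b, subnet-c]
locations:
  - zone: ru-central1-a
    subnet-id: subnet-a
  - zone: ru-central1-b
    subnet-id: subnet-b
  - zone: ru-central1-d
    subnet-id: subnet-c
```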
H1 2024
New features
- Added support for Kubernetes 1.28, 1.29, and 1.30. For more information, see Release channels.
- Updated the CSI limits to support disks larger than 200 TB.
Fixes and improvements
- Fixed the error that could cause the snapshot size to be missing for large PersistentVolumes.
- Fixed the error where, during some node group updates, routes to podCIDR would fail to update, making pods on the node unavailable.
- Fixed a number of vulnerabilities in runC.
- Fixed the issue with starting a cluster whose certificates were updated while it was stopped.
- Fixed the error that, in some cases, caused a new node to remain permanently in the `NOT_CONNECTED` status.
2023
Release 2023-6
In the rapid, regular, and stable release channels, the following updates are available:
- Added support for ultra high-speed network storage with three replicas (SSD) for storage classes and persistent volumes.
- You can now use node groups with GPUs and no pre-installed drivers. Use the GPU Operator application to select an appropriate driver version. For more information, see Using node groups with GPUs and no pre-installed drivers.
- Removed the CPU resource limit for CoreDNS pods to prevent throttling.
- Added support for placement groups of non-replicated disks in the Kubernetes CSI driver. Placement group parameters are available for storage classes.
- Fixed the error where a log group ID was ignored when updating the `master_logging` parameter of a cluster.
- Updated the Calico network controller to version 3.25 for Kubernetes 1.24 and higher.
Release 2023-5
In the rapid, regular, and stable release channels, the following updates are available:
- Fixed the issue where the Guest Agent on nodes would access a resource outside the cluster.
- Updated the patch version for Kubernetes 1.27.
- Added support for Kubernetes version 1.26.