Yandex Managed Service for Kubernetes release notes
Updates appear in the service's release channels in sequence: updates with new features and improvements are released first in the rapid channel, then, after a while, in the regular channel, and only then do they become available in the stable channel.
Q2 2025
New features
- Added support for Kubernetes version 1.32. For more information, see Release channels.
- You can now specify the same value for the minimum and maximum number of nodes in an autoscaling group. This way, you get a fixed group size and effectively disable autoscaling without switching the group type to fixed.
- Added support for encrypted Yandex Compute Cloud disks for static and dynamic provisioning of persistent volumes; see the sketch after this list.
- Added sending a UpdateClusterCertificate management event to Yandex Audit Trails when updating a cluster certificate.
- Updated the Calico network controller to version 3.30.
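For the encrypted-disk support above, a minimal sketch of a StorageClass for dynamically provisioning encrypted persistent volumes might look as follows. The provisioner name is the one used by the Managed Service for Kubernetes CSI driver, while the kms-key-id parameter name and its placeholder value are assumptions made for illustration; check the persistent volume documentation for the exact encryption parameter.

```yaml
# Sketch: StorageClass for dynamically provisioned encrypted Compute Cloud disks.
# The kms-key-id parameter name is an assumption, not a confirmed CSI parameter;
# see the service documentation for the exact way to reference a KMS key.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-ssd
provisioner: disk-csi-driver.mks.ycloud.io
parameters:
  type: network-ssd                   # Compute Cloud disk type
  csi.storage.k8s.io/fstype: ext4
  kms-key-id: <KMS_symmetric_key_ID>  # assumed parameter for the encryption key
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

A PersistentVolumeClaim referencing this class would then get an encrypted disk provisioned for it.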
Improvements
- Implemented a mechanism for forced removal of an autoscaling group node that, for any reason, fails to connect to the cluster within 15 minutes. Once deleted, the node is automatically recreated.
- In accordance with CIS Kubernetes Benchmarks, disabled profiling for master components.
- In tunnel-mode clusters, added support for Topology Aware Routing to keep traffic within a single availability zone and reduce network latency; see the example after this list.
- Made cluster node registration more secure: a bootstrap configuration can now be used to issue a certificate for a node only from that node itself, not from any other node or pod.
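As a usage sketch for Topology Aware Routing, the feature is enabled per Service with the standard Kubernetes annotation shown below; the Service name, selector, and ports are made-up values for illustration.

```yaml
# Sketch: enabling Topology Aware Routing for a Service via the standard annotation.
# With the annotation set, endpoint hints let the dataplane prefer endpoints
# located in the same availability zone as the client, reducing cross-zone traffic.
apiVersion: v1
kind: Service
metadata:
  name: backend            # illustrative name
  annotations:
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: backend           # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
```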
Fixes
- Fixed a bug in the Cilium network controller that caused the cluster network to become unavailable if the masters failed. Now, the network and apps in the cluster remain operational even if the masters fail completely. This is supported only on clusters running Cilium 1.15 or higher (Kubernetes 1.31).
- Fixed a bug that could cause master components to keep operating with an expired certificate.
- Fixed a bug that could prevent autoscaling in node groups with more than 80 nodes.
- Fixed a bug that could prevent updating Yandex Network Load Balancer target groups for LoadBalancer-type services.
Q1 2025
New features
- You can now configure computing resources for masters; the relevant quotas have been added.
- Updated the master configuration types:
  - Basic: contains one master host in one availability zone. Its former name is zonal.
  - Highly available in three availability zones: contains three master hosts in three different availability zones. Its former name is regional.
  - Highly available in one availability zone: contains three master hosts in one availability zone and one subnet. This is a new configuration.
For more information, see the master description.
Fixes and improvements
- Cluster secret encryption in etcd has been switched to KMS v2.
- Fixed an error that would, in some cases, prevent creating a Managed Service for Kubernetes cluster with logging enabled.
- Fixed the issue where Network Load Balancer load balancers managed by a Managed Service for Kubernetes cluster would block cluster deletion if deletion protection was enabled on them. Cluster deletion is no longer blocked, and such load balancers remain in the user's folder.
Q4 2024
New features
- Added support for Kubernetes version 1.31. For more information, see Release channels.
- Updated Cilium from version 1.12.9 to 1.15.10 for clusters running Kubernetes version 1.31 or higher.
- Updated CoreDNS from version 1.9.4 to 1.11.3 for all supported Kubernetes versions.
Fixes and improvements
- Added a preflight check that verifies the compatibility of objects and configurations with the new Kubernetes version before a cluster upgrade. If the check finds incompatible objects or configurations, the upgrade returns an error with a list of the incompatible resources and a description. Currently, only Cilium network policies are checked.
- Fixed an issue that, in some cases, made it impossible to connect a new node to the cluster, leaving the node permanently in the NOT_CONNECTED status.
Q3 2024
New features
- Added support for migrating masters between subnets within a single availability zone.
Fixes and improvements
- Fixed the error that prevented saving cluster audit log files containing records larger than 128 KB. Such records are now clipped.
- Revised the cluster roles for the Cilium network policy controller. They now have only the minimum required permissions.
- Added subnet-id field validation when updating a node group using the CLI, Terraform, or the API. If an update request specifies both the network-interface and locations parameters, the subnet-id fields under locations must either all be empty or fully match the subnet-id list under network-interface (the subnet-id items may be listed in any order). If the network-interface array in the request has more than one element, the subnet-id fields under locations must be empty. See the illustrative sketch after this list.
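To illustrate the validation rule above, here is a hedged sketch of the relevant part of a node group update request. The field names mirror the terms used in this note (network-interface, locations, subnet-id) rather than the exact CLI flag or API schema, so consult the node group update reference for the precise parameter names.

```yaml
# Illustrative only: field names follow the wording of the release note,
# not necessarily the exact API schema.
# Valid: one network-interface entry, and locations whose subnet-id values
# fully match the network-interface subnet-id list (order does not matter).
network-interface:
  - subnet-id:
      - subnet-a
      - subnet-b
locations:
  - zone: ru-central1-a
    subnet-id: subnet-b
  - zone: ru-central1-b
    subnet-id: subnet-a
# Also valid: leave every subnet-id under locations empty.
# Invalid: more than one network-interface entry while any subnet-id under locations is set.
```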
H1 2024
New features
- Added support for Kubernetes versions 1.28, 1.29, and 1.30. For more information, see Release channels.
- Updated CSI limits to support disks larger than 200 TB.
Fixes and improvements
- Fixed an error that could cause the snapshot size to be missing when the PersistentVolume was large.
- Fixed an error where, during some node group updates, routes would fail to update to podCIDR, making pods on the node unavailable.
- Fixed a number of vulnerabilities in runC.
- Fixed a problem with running a cluster after its certificates were updated while it was stopped.
- Fixed an error that, in some cases, caused a new node to permanently remain in the NOT_CONNECTED status.
2023
Release 2023-6
The following updates are available in the rapid, regular, and stable release channels:
- Added support for ultra high-speed network storage with three replicas (SSD) for storage classes and persistent volumes.
- Node groups can now be used with GPUs without preinstalled drivers. You can use the GPU Operator application to select an appropriate driver version. For more information, see Using node groups with GPUs and no pre-installed drivers.
- Removed the CPU resource restriction imposed on CoreDNS pods to prevent throttling.
- Added support for placement groups of non-replicable disks in the Kubernetes CSI driver. Placement group parameters are available for storage classes; see the sketch after this list.
- Fixed an error that caused the log group ID to be ignored when updating the master_logging parameter in a cluster.
- Updated the Calico network controller to version 3.25 for Kubernetes versions 1.24 and later.
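For the placement group support above, a hedged StorageClass sketch might look like this. The non-replicated disk type (network-ssd-nonreplicated) is an existing Compute Cloud disk type, while the disk-placement-group-id parameter name is an assumption; check the storage class documentation for the exact parameter exposed by the CSI driver.

```yaml
# Sketch: StorageClass for non-replicated disks created in a placement group.
# The disk-placement-group-id parameter name is an assumption, not a confirmed
# CSI parameter; see the service documentation for the exact name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nonreplicated-ssd-pg
provisioner: disk-csi-driver.mks.ycloud.io
parameters:
  type: network-ssd-nonreplicated
  csi.storage.k8s.io/fstype: ext4
  disk-placement-group-id: <placement_group_ID>  # assumed parameter name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```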
Release 2023-5
The following updates are available in the rapid, regular, and stable release channels:
- Fixed the issue where the Guest Agent on nodes would access a resource outside the cluster.
- Updated the patch version for Kubernetes version 1.27.
- Added support for Kubernetes version 1.26.