Release channels
Managed Service for Kubernetes provides updates through release channels.
Managed Service for Kubernetes supports three Kubernetes release channels. Master and node group versions are independent: within a single release channel, you can specify different Kubernetes versions for the master and the node groups when creating them.
Warning
If you need to update both the master and the node group, upgrade the master first.
When creating a Managed Service for Kubernetes cluster, specify one of the release channels below. You cannot change the channel after the cluster is created; you can only recreate the cluster with a new release channel.
| Channel | Auto updates | Channel description |
|---|---|---|
| RAPID | Auto updates cannot be disabled. You can specify a time period for auto updates. | This channel receives updates with new features and improvements first. |
| REGULAR | Auto updates can be disabled. | New features and improvements are added shortly after they appear in RAPID. |
| STABLE | Auto updates can be disabled. | New features and improvements are added shortly after they appear in REGULAR. |
For information on supported Kubernetes versions in channels, see this page.
Warning
Starting with Kubernetes 1.30, the base image for Managed Service for Kubernetes cluster nodes changed from Ubuntu 20.04 to Ubuntu 22.04 in all release channels. In existing clusters and node groups, the OS version will be upgraded using the method you select.
For OS upgrade details and recommendations, see Updating node group OS.
Updates
When a release channel receives an update, you get a notification in the management console. You can install updates automatically or manually.
- Auto updates are installed within the specified time period and do not require any user action.

  Updates are initiated within this period and normally complete before it ends. In some cases, a Managed Service for Kubernetes node group update may continue beyond this period.

  Auto updates include new Managed Service for Kubernetes features, improvements, and fixes, as well as Kubernetes component fixes.

- Manual updates can be initiated by the user at any time.

  They include Kubernetes minor version updates. Note that you can only update one minor version at a time, e.g., from 1.31 to 1.32.

  The difference between the cluster and node group versions must not exceed two minor versions, and the node group version cannot be higher than the cluster version. For example, if the cluster version is 1.33, the node group version may be 1.33, 1.32, or 1.31.
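The version-skew rules above can be sketched as a small validation helper (a hypothetical function for illustration, not part of any SDK):

```python
def check_versions(cluster: str, node_group: str) -> None:
    """Validate Kubernetes version skew between a cluster (master) and a
    node group, per the rules above:
    - the node group version cannot be higher than the cluster version;
    - the two versions must be at most two minor versions apart.
    Version strings are assumed to be "MAJOR.MINOR", e.g. "1.33".
    """
    c_major, c_minor = map(int, cluster.split("."))
    n_major, n_minor = map(int, node_group.split("."))
    if (n_major, n_minor) > (c_major, c_minor):
        raise ValueError("node group version cannot be higher than the cluster version")
    if c_major != n_major or c_minor - n_minor > 2:
        raise ValueError("cluster and node group must be within two minor versions")

check_versions("1.33", "1.31")  # OK: exactly two minor versions apart
```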
Note
The system will automatically run a preflight check for object and configuration compatibility with the new Kubernetes version. If the check identifies incompatible objects or configurations, the upgrade will return an error with a list of incompatible resources and their descriptions.
Read more about the end of support for Kubernetes versions and how different Managed Service for Kubernetes cluster components are updated.
End of support for a Kubernetes version
When upgrading from a Kubernetes version that is no longer supported:
- The Managed Service for Kubernetes master is not updated automatically; you need to update it manually.
- Minor versions (e.g., 1.20 to 1.21) must be updated manually.
- Managed Service for Kubernetes node groups are updated automatically if auto updates are enabled. If auto updates are disabled, the node groups keep the old Kubernetes version. In this case, you have to resolve any node group issues on your own, since the old Kubernetes version is no longer supported.
Updating Kubernetes cluster components
The update process is different for a Managed Service for Kubernetes master and a node group.
Master
The amount of time a Managed Service for Kubernetes master is unavailable during an update depends on the master type:
- The basic master is unavailable during the update.
- The highly available master maintains network connectivity during the update.
For more information, see Updating a cluster.
Node group
The Kubernetes version is updated on group nodes in line with the deploy policy. This policy applies not only during Kubernetes version upgrades but also when editing node group settings.
The cluster's behavior will vary depending on how the policy is configured:
- Policy with `max_unavailable > 0` and `max_expansion = 0`.

  This policy prohibits expanding the node group during the operation (`max_expansion = 0`). The node group is modified or updated by sequentially running the operation for `max_unavailable` nodes at a time until it completes for all nodes in the group. As the selected nodes become unavailable during the operation, the cluster first tries to migrate the workload from these nodes to the remaining ones in the group.

  Warning

  If the workload cannot be migrated to the remaining nodes due to insufficient computing resources on those nodes, the operation will be forcibly performed for the selected nodes. This may lead to complete or partial unavailability of your applications in the cluster until the operation completes for the whole node group.

  Example

  You have a node group set up as follows:

  - Scaling type: Fixed.
  - Number of nodes: 5.
  - `max_expansion`: 0.
  - `max_unavailable`: 2.

  If you modify the node group in this configuration:

  - The workload from two nodes will be migrated to the remaining three nodes.
  - The two nodes without workload will enter the `Reconciling` status, get updated, rebooted, and then return to the `Running` status.
  - The workload from the next two un-updated nodes will be migrated to the two updated nodes and one un-updated node.
  - These two un-updated nodes without workload will enter the `Reconciling` status, get updated, rebooted, and then return to the `Running` status.
  - The workload from the last un-updated node will be migrated to the four updated ones.
  - The last un-updated node without workload will enter the `Reconciling` status, get updated, rebooted, and then return to the `Running` status.
- Policy with `max_expansion > 0` and `max_unavailable = 0`.

  This policy ensures there are no unavailable nodes during the update (`max_unavailable = 0`). The node group is modified or updated by sequentially expanding it by `max_expansion` new nodes at a time. The new nodes get the modified configuration or updated Kubernetes version right away, then take the workload from the existing outdated nodes, which are deleted afterwards. This process continues until all nodes with the outdated configuration are replaced with new ones. Migrating the workload consumes the computing resources of the nodes involved.

  If you use this deployment policy, make sure your cloud has enough resources to expand the group before modifying it. Increase quotas if needed.

  Warning

  The operation for the node group may slow down or stop entirely if there are not enough resources for the expansion.

  When expanding your node group, you pay for the nodes you create. For more information, see the Managed Service for Kubernetes pricing policy.

  Example

  You have a node group set up as follows:

  - Scaling type: Fixed.
  - Number of nodes: 5.
  - `max_expansion`: 2.
  - `max_unavailable`: 0.

  If you modify the node group in this configuration:

  - Two new nodes with the updated configuration will be created.
  - After the new nodes enter the `Running` status, the workload from two un-updated nodes will be migrated to them, and those two nodes without workload will be deleted.
  - Two more new nodes with the updated configuration will be created.
  - After the new nodes enter the `Running` status, the workload from the next two un-updated nodes will be migrated to them, and those two nodes without workload will be deleted.
  - One more new node with the updated configuration will be created.
  - After the new node enters the `Running` status, the workload from the last un-updated node will be migrated to it, and that node without workload will be deleted.
- Policy with `max_expansion > 0` and `max_unavailable > 0`.

  This policy combines the two policies described above. The node group is modified or updated by sequentially running the operation for `max_unavailable` nodes at a time until it completes for all nodes in the group. As the selected nodes become unavailable during the operation, the cluster first tries to migrate the workload from these nodes to the remaining ones in the group. Migrating the workload consumes the computing resources of the nodes involved.

  At the same time, the node group is expanded by `max_expansion` nodes to handle the workload from the nodes being updated. The group's expansion and the operation for the nodes take place simultaneously.

  When using this policy, monitor both the available computing resources of the nodes and the quotas and resources of your cloud.

  Example

  You have a node group set up as follows:

  - Scaling type: Fixed.
  - Number of nodes: 5.
  - `max_expansion`: 2.
  - `max_unavailable`: 2.

  If you modify the node group in this configuration:

  - Two new nodes with the updated configuration will start being created. At the same time, the workload from two un-updated nodes will start migrating to the remaining three un-updated nodes.
  - The new nodes will enter the `Running` status and start taking workload from the migrated nodes.
  - The two nodes without workload will enter the `Reconciling` status, get updated, rebooted, and then return to the `Running` status.
  - The workload from the next two un-updated nodes will be migrated to the four updated nodes and one un-updated node.
  - One node without workload will enter the `Reconciling` status, get updated, rebooted, and then return to the `Running` status. The other will be deleted.
  - The workload from the remaining un-updated node will be migrated to the five updated nodes; this node will then be deleted.

  The actual behavior may slightly differ from this description depending on which comes first: pod migration from an un-updated node, or new/rebooted nodes reaching the `Running` status. Ultimately, however, the group will reach the required state.
For more information, see Configuring a deployment policy.
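The batching mechanics behind these policies can be illustrated with a toy simulation (a sketch of the batching logic only; the function and its output format are ours, not part of any SDK):

```python
def rollout_steps(nodes: int, max_unavailable: int, max_expansion: int) -> list[str]:
    """Toy simulation of a node group update under a deploy policy.

    With max_expansion > 0 and max_unavailable = 0, old nodes are replaced
    by newly created ones; otherwise, nodes are drained and updated in
    place in batches of max_unavailable.
    """
    steps = []
    remaining = nodes
    while remaining > 0:
        if max_expansion > 0 and max_unavailable == 0:
            # Replacement strategy: create new nodes, migrate workload, delete old ones.
            batch = min(max_expansion, remaining)
            steps.append(f"create {batch} new node(s), migrate workload, delete {batch} old node(s)")
        else:
            # In-place strategy: drain and update up to max_unavailable nodes at once.
            batch = min(max_unavailable, remaining)
            steps.append(f"drain and update {batch} node(s) in place")
        remaining -= batch
    return steps

# The 5-node example with max_unavailable = 2 and max_expansion = 0
# proceeds in batches of 2, 2, and 1, matching the walkthrough above.
for step in rollout_steps(5, max_unavailable=2, max_expansion=0):
    print(step)
```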
Certificates
In line with security recommendations, Managed Service for Kubernetes cluster and node group certificates are updated as follows:

- If automatic updates are enabled, certificates are updated automatically whenever a Managed Service for Kubernetes cluster or node group is updated.
- If automatic updates are disabled, a certificate update is forced a week before the certificates expire.

For more information about updating certificates, see this Kubernetes guide.
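The forced-rotation rule can be expressed as a simple check (an illustrative helper; the service performs this check internally):

```python
from datetime import datetime, timedelta

def needs_forced_rotation(not_after: datetime, now: datetime) -> bool:
    """Return True if a certificate is within a week of its expiry date,
    the point at which a forced update is triggered per the rule above.
    """
    return now >= not_after - timedelta(days=7)

# A certificate expiring in 5 days is due for forced rotation:
print(needs_forced_rotation(datetime(2025, 6, 10), datetime(2025, 6, 5)))  # → True
```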
Required update
All Managed Service for Kubernetes release channels enforce required updates, such as replacements for outdated Kubernetes versions or critical updates for vulnerability patching. They may be installed both during and outside the update window.
If a cluster has a scheduled required update:
- You will get a notification about a scheduled update. Make sure you have set up your notification methods.
- On the cluster information page, you will see the Mandatory update scheduled section indicating the date.
Note
Features of a required update:

- You cannot opt out of required updates.
- If the random update time mode is enabled in your cluster, Managed Service for Kubernetes will initiate the update on its own schedule.
- If the cluster has no update window, or the window is set for a specific time, the required update will be installed after 14 days by default. You can reschedule the required update to an earlier date.
- If the cluster has an update window, the required update will take place within it. However, if the cluster is unavailable at the time of the update, it will be applied during one of the next update windows.
- Stopped clusters will be updated during their update windows in the order defined by Managed Service for Kubernetes.
For more information, see Working with required updates.