

Configuring autoscaling

Written by
Yandex Cloud
Updated at May 5, 2025
  • Getting started
  • Configuring cluster autoscaling
  • Configuring horizontal pod autoscaling
  • Configuring vertical pod autoscaling
  • Deleting Terminated pods
    • Manually
    • Automatically using CronJob

Managed Service for Kubernetes has three autoscaling methods available:

  • Cluster autoscaling
  • Horizontal pod autoscaling
  • Vertical pod autoscaling

Getting started

  1. Create a Managed Service for Kubernetes cluster with any suitable configuration.

  2. Install kubectl and configure it to work with the new cluster.

Configuring cluster autoscaling

Warning

You can only enable automatic scaling of this type when creating a Managed Service for Kubernetes node group.

To create an autoscalable Managed Service for Kubernetes node group:

Management console
CLI
Terraform

Create a Managed Service for Kubernetes node group with the following parameters:

  • Scaling Type: Automatic.
  • Minimum number of nodes: Specify the number of Managed Service for Kubernetes nodes to remain in the group at minimum load.
  • Maximum number of nodes: Specify the maximum number of Managed Service for Kubernetes nodes allowed in the group.
  • Initial number of nodes: Number of Managed Service for Kubernetes nodes to be created together with the group (this number must be between the minimum and the maximum number of nodes in the group).

If you do not have the Yandex Cloud CLI yet, install and initialize it.

The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

  1. Review the command to create a Managed Service for Kubernetes node group:

    yc managed-kubernetes node-group create --help
    
  2. Create an autoscalable Managed Service for Kubernetes node group:

    yc managed-kubernetes node-group create \
    ...
      --auto-scale min=<minimum_number_of_nodes>,max=<maximum_number_of_nodes>,initial=<initial_number_of_nodes>
    
  1. With Terraform, you can quickly create a cloud infrastructure in Yandex Cloud and manage it using configuration files. These files store the infrastructure description written in HashiCorp Configuration Language (HCL). If you change the configuration files, Terraform automatically detects which part of your configuration is already deployed, and what should be added or removed.

    Terraform is distributed under the Business Source License. The Yandex Cloud provider for Terraform is distributed under the MPL-2.0 license.

    For more information about the provider resources, see the documentation on the Terraform website or mirror website.

    If you do not have Terraform yet, install it and configure its Yandex Cloud provider.

  2. Open the current Terraform configuration file describing the node group.

    For more information about creating this file, see Creating a node group.

  3. Add a description of the new node group by specifying the autoscaling settings under scale_policy.auto_scale:

    resource "yandex_kubernetes_node_group" "<node_group_name>" {
    ...
      scale_policy {
        auto_scale {
          min     = <minimum_number_of_nodes_per_group>
          max     = <maximum_number_of_nodes_per_group>
          initial = <initial_number_of_nodes_per_group>
        }
      }
    }
    
  4. Make sure the configuration files are correct.

    1. In the command line, navigate to the directory that contains the current Terraform configuration files defining the infrastructure.

    2. Run this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

  5. Confirm updating the resources.

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

Cluster Autoscaler is managed on the Managed Service for Kubernetes side.

For more information about Cluster Autoscaler, see Cluster autoscaling. The default parameters are described in the Kubernetes documentation.

See also Questions and answers about node group autoscaling in Managed Service for Kubernetes.

Configuring horizontal pod autoscaling

CLI
  1. Create a Horizontal Pod Autoscaler for your application, for example:

    kubectl autoscale deployment/<application_name> --cpu-percent=50 --min=1 --max=3
    

    Where:

    • --cpu-percent: Target average vCPU utilization of the Managed Service for Kubernetes pods, as a percentage.
    • --min: Minimum number of Managed Service for Kubernetes pods.
    • --max: Maximum number of Managed Service for Kubernetes pods.
  2. Check the Horizontal Pod Autoscaler status:

    kubectl describe hpa/<application_name>
    

For more information about Horizontal Pod Autoscaler, see Horizontal pod autoscaling.
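The kubectl autoscale command above creates a HorizontalPodAutoscaler object imperatively. As a sketch, an equivalent declarative manifest (using the autoscaling/v2 API, with the same example thresholds) could look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <application_name>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <application_name>
  minReplicas: 1          # same as --min=1
  maxReplicas: 3          # same as --max=3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # same as --cpu-percent=50
```

Saving this as hpa.yaml and running kubectl apply -f hpa.yaml lets you keep the autoscaling policy under version control alongside the rest of your manifests.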

Configuring vertical pod autoscaling

CLI
  1. Install Vertical Pod Autoscaler from the following repository:

    cd /tmp && \
      git clone https://github.com/kubernetes/autoscaler.git && \
      cd autoscaler/vertical-pod-autoscaler/hack && \
      ./vpa-up.sh
    
  2. Create a configuration file called vpa.yaml for your application:

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: <application_name>
    spec:
      targetRef:
        apiVersion: "apps/v1"
        kind:       Deployment
        name:       <application_name>
      updatePolicy:
        updateMode: "<VPA_runtime_mode>"
    

    Where updateMode is the Vertical Pod Autoscaler runtime mode, Auto or Off.

  3. Create a Vertical Pod Autoscaler for your application:

    kubectl apply -f vpa.yaml
    
  4. Check the Vertical Pod Autoscaler status:

    kubectl describe vpa <application_name>
    

For more information about Vertical Pod Autoscaler, see Vertical pod autoscaling.

Deleting Terminated pods

Sometimes during autoscaling, pods on Managed Service for Kubernetes nodes are not deleted and remain in the Terminated status. This happens when the pod garbage collector (PodGC) fails to delete these pods in time.

You can delete the stuck Managed Service for Kubernetes pods:

  • Manually
  • Automatically using CronJob

Manually

Run this command:

kubectl get pods --all-namespaces | grep -i Terminated \
| awk '{print $1, $2}' | xargs -n2 kubectl delete pod -n
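To illustrate how this pipeline selects pods, here is a sketch with hypothetical kubectl get pods output: grep keeps the rows whose STATUS is Terminated, awk prints "<namespace> <pod>" pairs, and xargs -n2 then passes each pair to kubectl delete pod -n.

```shell
# Hypothetical `kubectl get pods --all-namespaces` output.
sample="NAMESPACE     NAME       READY   STATUS       RESTARTS   AGE
default       web-1      0/1     Terminated   0          5m
kube-system   helper-2   0/1     Terminated   0          7m"

# Same grep/awk stage as in the command above: keep Terminated rows
# and print the "<namespace> <pod>" pairs fed to `kubectl delete pod -n`.
printf '%s\n' "$sample" | grep -i Terminated | awk '{print $1, $2}'
# prints:
# default web-1
# kube-system helper-2
```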

Automatically using CronJob

To delete stuck Managed Service for Kubernetes pods automatically:

  1. Set up a CronJob.
  2. Check the results of your CronJob jobs.

If you no longer need the CronJob, delete it.

Setting up automatic deletion in a CronJob

  1. Create a cronjob.yaml file with a specification for the CronJob and the resources needed to run it:

    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: terminated-pod-cleaner
    spec:
      schedule: "*/5 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: terminated-pod-cleaner
              containers:
              - name: terminated-pod-cleaner
                image: bitnami/kubectl
                imagePullPolicy: IfNotPresent
                command: ["/bin/sh", "-c"]
                args: ["kubectl get pods --all-namespaces | grep -i Terminated | awk '{print $1, $2}' | xargs --no-run-if-empty -n2 kubectl delete pod -n"]
              restartPolicy: Never
    
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: terminated-pod-cleaner
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: terminated-pod-cleaner
    rules:
      - apiGroups: [""]
        resources:
          - pods
        verbs: [list, delete]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: terminated-pod-cleaner
    subjects:
    - kind: ServiceAccount
      name: terminated-pod-cleaner
      namespace: default
    roleRef:
      kind: ClusterRole
      name: terminated-pod-cleaner
      apiGroup: rbac.authorization.k8s.io
    

    The schedule: "*/5 * * * *" line defines a schedule in cron format: the job runs every 5 minutes. Change the interval if needed.

  2. Create a CronJob and its resources:

    kubectl create -f cronjob.yaml
    

    Result:

    cronjob.batch/terminated-pod-cleaner created
    serviceaccount/terminated-pod-cleaner created
    clusterrole.rbac.authorization.k8s.io/terminated-pod-cleaner created
    clusterrolebinding.rbac.authorization.k8s.io/terminated-pod-cleaner created
    
  3. Check that the CronJob has been created:

    kubectl get cronjob terminated-pod-cleaner
    

    Result:

    NAME                    SCHEDULE     SUSPEND  ACTIVE  LAST SCHEDULE  AGE
    terminated-pod-cleaner  */5 * * * *  False    0       <none>         4s
    

    After the interval specified in SCHEDULE elapses, a time value appears in the LAST SCHEDULE column. It means the job ran at that time, whether it completed successfully or ended with an error.
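The schedule field in the manifest above uses standard five-field cron syntax: minute, hour, day of month, month, and day of week. A few alternative schedules you could substitute (hypothetical examples):

```yaml
schedule: "*/5 * * * *"   # every 5 minutes (the value used above)
schedule: "0 * * * *"     # at minute 0 of every hour
schedule: "0 3 * * 1"     # at 03:00 every Monday
```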

Checking the results of CronJob jobs

  1. Retrieve a list of jobs:

    kubectl get jobs
    

    Result:

    NAME        COMPLETIONS  DURATION  AGE
    <job_name>  1/1          4s        2m1s
    ...
    
  2. Get the name of the Managed Service for Kubernetes pod that ran the job:

    kubectl get pods --selector=job-name=<job_name> --output=jsonpath={.items[*].metadata.name}
    
  3. View the Managed Service for Kubernetes pod logs:

    kubectl logs <pod_name>
    

    The log will include a list of deleted Managed Service for Kubernetes pods. If the log is empty, this means that there were no Managed Service for Kubernetes pods in the Terminated status when the job ran.

Deleting the CronJob

To delete the CronJob and its resources, run this command:

kubectl delete cronjob terminated-pod-cleaner && \
kubectl delete serviceaccount terminated-pod-cleaner && \
kubectl delete clusterrole terminated-pod-cleaner && \
kubectl delete clusterrolebinding terminated-pod-cleaner

Yandex project
© 2025 Yandex.Cloud LLC