Yandex project
© 2025 Yandex.Cloud LLC

Vertical scaling of an application in a Yandex Managed Service for Kubernetes cluster

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Getting started
  • Create Vertical Pod Autoscaler and a test application
  • Test Vertical Pod Autoscaler
  • Delete the resources you created

Managed Service for Kubernetes supports several types of autoscaling. In this article you will learn how to configure the automatic management of pod resources with Vertical Pod Autoscaler:

  • Create Vertical Pod Autoscaler and a test application.
  • Test Vertical Pod Autoscaler.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for Kubernetes cluster fee: use of the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Cluster node (VM) fee: use of computing resources, the operating system, and storage (see Compute Cloud pricing).
  • Fee for the public IP addresses assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. If you do not have the Yandex Cloud CLI yet, install and initialize it.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

  2. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  3. Create a Managed Service for Kubernetes cluster. Use these settings:

    • Use the previously created security groups.
    • If you intend to use your cluster within the Yandex Cloud network, there is no need to allocate a public IP address to it. To allow connections from outside the network, assign a public IP address to the cluster.
  4. Create a node group. Use these settings:

    • Use the previously created security groups.
    • Assign it a public IP address to grant the node group internet access and allow it to pull Docker images and components.
  5. Install kubectl and configure it to work with the new cluster.

    If a cluster has no public IP address assigned and kubectl is configured via the cluster's private IP address, run kubectl commands on a Yandex Cloud VM that is in the same network as the cluster.

  6. Install Vertical Pod Autoscaler from the following repository:

    cd /tmp && \
      git clone https://github.com/kubernetes/autoscaler.git && \
      cd autoscaler/vertical-pod-autoscaler/hack && \
      ./vpa-up.sh
    

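You can sanity-check the setup from the command line before moving on. This is a hedged sketch: `<cluster_name>` is a placeholder, and it assumes the cluster was created with a public IP address (use `--internal` instead of `--external` otherwise).

```shell
# Fetch kubeconfig credentials for the new cluster via its public endpoint
yc managed-kubernetes cluster get-credentials <cluster_name> --external

# Confirm kubectl can reach the cluster
kubectl cluster-info

# Check that vpa-up.sh registered the Vertical Pod Autoscaler CRDs
kubectl get crd | grep verticalpodautoscaler
```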
Create Vertical Pod Autoscaler and a test application

  1. Create a file named app.yaml with the nginx test application and load balancer settings:

    app.yaml
    ---
    ### Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: registry.k8s.io/hpa-example
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "500m"
                limits:
                  memory: "500Mi"
                  cpu: "1"
    ---
    ### Service
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
    
  2. Create a file named vpa.yaml with Vertical Pod Autoscaler configuration:

    vpa.yaml
    ---
    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: nginx
    spec:
      targetRef:
        apiVersion: "apps/v1"
        kind:       Deployment
        name:       nginx
      updatePolicy:
        updateMode:  "Auto"
        minReplicas: 1
    
  3. Create objects:

    kubectl apply -f app.yaml && \
    kubectl apply -f vpa.yaml
    
  4. Make sure the Vertical Pod Autoscaler and nginx pods have entered the Running state:

    kubectl get pods -n kube-system | grep vpa && \
    kubectl get pods | grep nginx
    

    Result:

    vpa-admission-controller-58********-qmxtv  1/1  Running  0  44h
    vpa-recommender-67********-jqvgt           1/1  Running  0  44h
    vpa-updater-64********-xqsts               1/1  Running  0  44h
    nginx-6c********-62j7w                     1/1  Running  0  42h
    
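Before testing, you may also want to wait for the LoadBalancer service to receive an external IP address, since the load-generation step below reads it from the service status. A small sketch:

```shell
# Wait until EXTERNAL-IP shows an address instead of <pending>;
# press Ctrl+C to stop watching
kubectl get service nginx --watch
```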

Test Vertical Pod Autoscaler

To test Vertical Pod Autoscaler, you will simulate a workload on the nginx application.

  1. Review the recommendations provided by Vertical Pod Autoscaler prior to creating the workload:

    kubectl describe vpa nginx
    

    Note the low Cpu values in the Status.Recommendation.Container Recommendations metrics:

    Name:         nginx
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  autoscaling.k8s.io/v1
    Kind:         VerticalPodAutoscaler
    ...
    Status:
      Conditions:
        Last Transition Time:  2022-03-18T08:02:04Z
        Status:                True
        Type:                  RecommendationProvided
      Recommendation:
        Container Recommendations:
          Container Name:  nginx
          Lower Bound:
            Cpu:     25m
            Memory:  262144k
          Target:
            Cpu:     25m
            Memory:  262144k
          Uncapped Target:
            Cpu:     25m
            Memory:  262144k
          Upper Bound:
            Cpu:     25m
            Memory:  262144k
    
  2. Make sure Vertical Pod Autoscaler is managing the nginx application pod resources:

    kubectl get pod <nginx_pod_name> --output yaml
    

    Result:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        vpaObservedContainers: nginx
        vpaUpdates: 'Pod resources updated by nginx: container 0: cpu request, memory
          request, cpu limit, memory limit'
    ...
    spec:
      containers:
      ...
        name: nginx
        resources:
          limits:
            cpu: 50m
            memory: 500000Ki
          requests:
            cpu: 25m
            memory: 262144k
    
  3. Run the workload simulation process in a separate window:

    URL=$(kubectl get service nginx -o json \
      | jq -r '.status.loadBalancer.ingress[0].ip') && \
      while true; do wget -q -O- http://$URL; done
    

    Tip

    To increase load and accelerate the execution of the scenario, run several processes in separate windows.

    Note

    If the resource is unavailable at the specified URL, make sure that the security groups for the Managed Service for Kubernetes cluster and its node groups are configured correctly. If any rule is missing, add it.

  4. After several minutes, review the recommendations Vertical Pod Autoscaler provides under the simulated workload:

    kubectl describe vpa nginx
    

    Vertical Pod Autoscaler allocated additional resources to the pods as the workload increased. Note the increased Cpu values in the Status.Recommendation.Container Recommendations metrics:

    Name:         nginx
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  autoscaling.k8s.io/v1
    Kind:         VerticalPodAutoscaler
    ...
    Status:
      Conditions:
        Last Transition Time:  2022-03-18T08:02:04Z
        Status:                True
        Type:                  RecommendationProvided
      Recommendation:
        Container Recommendations:
          Container Name:  nginx
          Lower Bound:
            Cpu:     25m
            Memory:  262144k
          Target:
            Cpu:     410m
            Memory:  262144k
          Uncapped Target:
            Cpu:     410m
            Memory:  262144k
          Upper Bound:
            Cpu:     28897m
            Memory:  1431232100
    
  5. Stop simulating the workload. Within a few minutes, the Status.Recommendation.Container Recommendations metrics will return to their original values.
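Instead of reading the full `kubectl describe vpa` output, you can extract only the current target recommendation from the VPA object. A small sketch, assuming `jq` is installed (it is already used in the load-generation step above):

```shell
# Print only the current target recommendation for the first container
kubectl get vpa nginx -o json \
  | jq '.status.recommendation.containerRecommendations[0].target'
```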

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Kubernetes cluster.
  2. If static public IP addresses were used for cluster and node access, release and delete them.
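The cleanup steps above can also be performed from the CLI. A hedged sketch; `<cluster_name>` and `<address_ID>` are placeholders for your own resource names and IDs:

```shell
# Delete the Managed Service for Kubernetes cluster
yc managed-kubernetes cluster delete <cluster_name>

# List reserved static public IP addresses and delete any you no longer need
yc vpc address list
yc vpc address delete <address_ID>
```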
