

Updating the Metrics Server parameters in a Yandex Managed Service for Kubernetes cluster

Written by
Yandex Cloud
Updated on April 24, 2025
  • View the amount of resources allocated to the Metrics Server pod
  • Update the Metrics Server parameters
  • Check the result
  • Reset the Metrics Server parameters

Metrics Server is a Managed Service for Kubernetes cluster service installed by default. It uses kubelet to collect metrics from each cluster node and exposes them through the Metrics API. Horizontal Pod Autoscaler and Vertical Pod Autoscaler rely on these metrics to make scaling decisions. You can view the metric data with the kubectl top node and kubectl top pod commands. For more information, see the Metrics Server documentation.

A Metrics Server pod has two containers: metrics-server and metrics-server-nanny, the latter acting as an addon-resizer for metrics-server. The metrics-server-nanny container is responsible for the automatic allocation of resources to the metrics-server container depending on the number of Managed Service for Kubernetes cluster nodes.

In some cases, the metrics-server-nanny component may work incorrectly, for example, when many pods are created in a Managed Service for Kubernetes cluster with few nodes. In that case, the Metrics Server pod exceeds its resource limits, which may degrade the server's performance.

To avoid this, change the parameters of the Metrics Server manually:

  1. View the amount of resources allocated to the Metrics Server pod.
  2. Update the Metrics Server parameters.
  3. Check the result.

To restore the default values of the Metrics Server parameters, reset them.

View the amount of resources allocated to the Metrics Server pod

  1. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  2. Install kubectl and configure it to work with the new cluster.

  3. Run this command:

    kubectl get pod <Metrics_Server_pod_name> \
      --namespace=kube-system \
      --output=json | \
      jq '.spec.containers[] | select(.name == "metrics-server") | .resources'
    

    The resources are calculated using the following formula:

    cpu = baseCPU + cpuPerNode * nodesCount
    memory = baseMemory + memoryPerNode * nodesCount
    

    Where:

    • baseCPU: Base CPU allocation.
    • cpuPerNode: CPU allocation per node.
    • nodesCount: Number of Managed Service for Kubernetes cluster nodes.
    • baseMemory: Base amount of RAM.
    • memoryPerNode: Amount of RAM per node.
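
The formula above can be checked with plain shell arithmetic. The base and per-node values below match the sample configuration shown later in this article; nodesCount=10 is an arbitrary example, not a value from the article.

```shell
# Sketch of the metrics-server-nanny resource formula in shell arithmetic.
# The base/per-node values match the sample configuration in this article;
# nodesCount=10 is an arbitrary example.
baseCPU=48          # millicores (the "m" suffix in the config)
cpuPerNode=1        # millicores per cluster node
baseMemory=104      # MiB (the "Mi" suffix in the config)
memoryPerNode=3     # MiB per cluster node
nodesCount=10

cpu=$((baseCPU + cpuPerNode * nodesCount))
memory=$((baseMemory + memoryPerNode * nodesCount))

echo "cpu: ${cpu}m"        # cpu: 58m
echo "memory: ${memory}Mi" # memory: 134Mi
```

With ten nodes, the pod would be allocated 58m of CPU and 134Mi of RAM; each node added to the cluster grows the allocation by cpuPerNode and memoryPerNode.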

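The jq filter from the command above can also be tried locally against a hypothetical pod spec. The container list and resource values below are made up for illustration; in a real cluster, the JSON would come from kubectl get pod ... --output=json.

```shell
# Self-contained sketch of the jq filter used above, run against a
# hypothetical pod spec (values are illustrative, not cluster defaults).
resources=$(jq -c '.spec.containers[] | select(.name == "metrics-server") | .resources' <<'EOF'
{
  "spec": {
    "containers": [
      { "name": "metrics-server-nanny", "resources": {} },
      { "name": "metrics-server",
        "resources": { "requests": { "cpu": "58m", "memory": "134Mi" } } }
    ]
  }
}
EOF
)
echo "$resources"
```

The filter iterates over the pod's containers, keeps only the one named metrics-server, and prints its resources object, skipping the metrics-server-nanny sidecar.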
Update the Metrics Server parameters

  1. Open the Metrics Server configuration file:

    kubectl edit configmap metrics-server-config \
      --namespace=kube-system \
      --output=yaml
    
  2. Add or update the resource parameters under data.NannyConfiguration:

    apiVersion: v1
    data:
      NannyConfiguration: |-
        apiVersion: nannyconfig/v1alpha1
        kind: NannyConfiguration
        baseCPU: <basic_number_of_CPUs>m
        cpuPerNode: <number_of_CPUs_per_node>m
        baseMemory: <basic_amount_of_RAM>Mi
        memoryPerNode: <amount_of_RAM_per_node>Mi
    ...
    
    Sample configuration file
    apiVersion: v1
    data:
      NannyConfiguration: |-
        apiVersion: nannyconfig/v1alpha1
        kind: NannyConfiguration
        baseCPU: 48m
        cpuPerNode: 1m
        baseMemory: 104Mi
        memoryPerNode: 3Mi
    kind: ConfigMap
    metadata:
      creationTimestamp: "2022-12-15T06:28:22Z"
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
      name: metrics-server-config
      namespace: kube-system
      resourceVersion: "303569"
      uid: 931b88ca-21da-4d04-a3c1-da7e********
    
  3. Restart the Metrics Server. To do this, delete it and wait until the Kubernetes controller creates it again:

    kubectl delete deployment metrics-server \
      --namespace=kube-system
    

Check the result

View the amount of resources allocated to the Metrics Server pod once again and make sure the new pod is using the updated values.

Reset the Metrics Server parameters

To reset the parameters to their default values, delete the Metrics Server ConfigMap and its Deployment:

kubectl delete configmap metrics-server-config \
  --namespace=kube-system && \
kubectl delete deployment metrics-server \
  --namespace=kube-system

As a result, a new Metrics Server pod is created automatically with the default parameter values.

© 2025 Direct Cursus Technology L.L.C.