Using Metrics Provider to deliver metrics

Written by
Yandex Cloud
Updated at November 21, 2025
  • Required paid resources
  • Getting started
  • Set up the runtime environment
  • Install Metrics Provider and the runtime environment
  • Test Metrics Provider
  • Delete the resources you created

Metrics Provider delivers metrics of Managed Service for Kubernetes cluster objects to monitoring systems and automatic scaling systems.

In this tutorial, you will learn how to set up the delivery of external metrics to Horizontal Pod Autoscaler using Metrics Provider.

To set up the delivery of metrics:

  1. Set up the runtime environment.
  2. Install Metrics Provider and the runtime environment.
  3. Test Metrics Provider.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost for this solution includes:

  • Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
  • Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
  • Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

    By default, the CLI uses the folder specified when you created the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for a specific command using the --folder-name or --folder-id parameter.

  2. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  3. Create a Managed Service for Kubernetes cluster and a node group with any suitable configuration. When creating them, specify the security groups you prepared earlier.

  4. Install kubectl and configure it to work with the new cluster (see the sketch below).
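
    A minimal sketch of that last step, assuming the yc CLI is already initialized; the cluster name k8s-metrics-cluster is a placeholder, so substitute your own:

    # Add the cluster credentials and a kubectl context to ~/.kube/config
    # ("k8s-metrics-cluster" is a placeholder cluster name).
    yc managed-kubernetes cluster get-credentials k8s-metrics-cluster --external

    # Check that kubectl now points at the new cluster.
    kubectl cluster-info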

Set up the runtime environment

To test Metrics Provider, you will create a test nginx application and a Horizontal Pod Autoscaler that receives CPU utilization metrics from Metrics Provider.

  1. Create the app.yaml file with the nginx app manifest:

    app.yaml
    ---
    ### Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: kube-system
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: registry.k8s.io/hpa-example
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "500m"
                limits:
                  memory: "500Mi"
                  cpu: "1"
    ---
    ### Service
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      namespace: kube-system
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
    
  2. Create the hpa.yaml file with the Horizontal Pod Autoscaler manifest for test-hpa:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: test-hpa
      namespace: kube-system
    spec:
      maxReplicas: 2
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx
      metrics:
        - type: External
          external:
            metric:
              name: cpu_usage
              selector:
                matchLabels:
                  service: "compute"
                  resource_id: "<node_name>"
                  resource_type: "vm"
            target:
              type: Value
              value: "20"
    

    You can get the name of the node where Metrics Provider and the runtime environment will be deployed from the list of cluster nodes (a one-line shortcut follows the command):

    kubectl get nodes
    
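    If you only need one node name, for example to paste into the resource_id selector in hpa.yaml, a jsonpath query is a convenient shortcut (a sketch that simply takes the first node in the list):

    kubectl get nodes -o jsonpath='{.items[0].metadata.name}'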

Install Metrics Provider and the runtime environment

  1. Install Metrics Provider by following this guide.

  2. Create a test application and Horizontal Pod Autoscaler:

    kubectl apply -f app.yaml && \
    kubectl apply -f hpa.yaml
    
  3. Make sure the application and Metrics Provider pods have switched to the Running status:

    kubectl get pods -n kube-system | grep nginx && \
    kubectl get pods -n kube-system | grep metrics
    

    Result:

    nginx-6c********-dbfrn                      1/1     Running   0          2d22h
    nginx-6c********-gckhp                      1/1     Running   0          2d22h
    metrics-server-v0.3.1-6b********-f7dv6      2/2     Running   4          7d3h
    

Test Metrics Provider

Make sure that Horizontal Pod Autoscaler receives metrics from Metrics Provider and uses them to calculate the number of nginx application pods:

kubectl -n kube-system describe hpa test-hpa

In the command output, the AbleToScale and ScalingActive conditions should be True:

Name:                          test-hpa
Namespace:                     kube-system
...
Min replicas:                  1
Max replicas:                  2
Deployment pods:               2 current / 2 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from external metric cpu_usage(&LabelSelector{MatchLabels:map[string]string{resource_id: <node_name>,resource_type: vm,service: compute,},MatchExpressions:[]LabelSelectorRequirement{},})
Events:           <none>

Note

Metrics Provider will take some time to fetch metrics from the Managed Service for Kubernetes cluster. If you get the "unable to get external metric ... no metrics returned from external metrics API" error, run the test again in a few minutes.
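
You can also check what the provider currently exposes by querying the Kubernetes external metrics API directly. This is a generic diagnostic sketch rather than a step from this tutorial; the metric name and labels are the ones from hpa.yaml, and <node_name> is the same placeholder:

# List the external metrics registered with the API server.
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1"

# Query cpu_usage with the same label selector as in hpa.yaml
# (%3D is a URL-encoded "="); replace <node_name> with your node name.
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/kube-system/cpu_usage?labelSelector=service%3Dcompute,resource_id%3D<node_name>,resource_type%3Dvm"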

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for Kubernetes cluster.
  2. Delete the cluster public static IP address if you reserved one.
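
If you manage the infrastructure from the CLI, the cleanup might look like this (a hedged sketch; <cluster_name> and <address_name> are placeholders for your own resource names):

# Delete the Managed Service for Kubernetes cluster.
yc managed-kubernetes cluster delete <cluster_name>

# Release the reserved public static address, if you have one.
yc vpc address delete <address_name>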
