Using Metrics Provider to stream metrics

Written by Yandex Cloud
Updated on May 5, 2025
  • Required paid resources
  • Getting started
  • Set up the runtime environment
  • Install Metrics Provider and the runtime environment
  • Test Metrics Provider
  • Delete the resources you created

Metrics Provider streams metrics of Managed Service for Kubernetes cluster objects to monitoring and autoscaling systems.

In this article, you will learn how to set up the transfer of external metrics to Horizontal Pod Autoscaler using Metrics Provider.

To set up the transfer of metrics:

  1. Set up the runtime environment.
  2. Install Metrics Provider and the runtime environment.
  3. Test Metrics Provider.

If you no longer need the resources you created, delete them.

Required paid resources

The infrastructure support cost includes:

  • Managed Service for Kubernetes cluster fee: use of the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Cluster node (VM) fee: use of computing resources, the operating system, and storage (see Compute Cloud pricing).
  • Fee for a public IP address assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

  1. If you do not have the Yandex Cloud CLI yet, install and initialize it.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

  2. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  3. Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.

  4. Install kubectl and configure it to work with the new cluster, for example, as shown below.
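
    For example, assuming your cluster is named test-k8s (a placeholder name), you can add its credentials to your kubeconfig with the CLI and verify the connection; the --external flag uses the cluster's public endpoint:

    yc managed-kubernetes cluster get-credentials test-k8s --external && \
    kubectl cluster-info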

Set up the runtime environment

To test Metrics Provider, you will create a test nginx app and a Horizontal Pod Autoscaler, to which Metrics Provider will supply CPU utilization metrics.

  1. Create the app.yaml file with the nginx app manifest:

    app.yaml
    ---
    ### Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: kube-system
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: registry.k8s.io/hpa-example
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "500m"
                limits:
                  memory: "500Mi"
                  cpu: "1"
    ---
    ### Service
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      namespace: kube-system
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
    
  2. Create the hpa.yaml file with the Horizontal Pod Autoscaler manifest for test-hpa:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: test-hpa
      namespace: kube-system
    spec:
      maxReplicas: 2
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx
      metrics:
        - type: External
          external:
            metric:
              name: cpu_usage
              selector:
                matchLabels:
                  service: "compute"
                  resource_id: "<node_name>"
                  resource_type: "vm"
            target:
              type: Value
              value: "20"
    

    To get the name of the node where Metrics Provider and the runtime environment will be deployed, list the cluster nodes (a sketch for substituting the node name follows the command):

    kubectl get nodes
    
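    For example, assuming you want to deploy on the first node in the list, a minimal sketch that captures its name and substitutes it for the <node_name> placeholder in hpa.yaml (GNU sed syntax):

    NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') && \
    sed -i "s/<node_name>/${NODE_NAME}/" hpa.yaml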

Install Metrics Provider and the runtime environment

  1. Follow the instructions to install Metrics Provider.

  2. Create a test application and Horizontal Pod Autoscaler:

    kubectl apply -f app.yaml && \
    kubectl apply -f hpa.yaml
    
  3. Make sure the app pods have entered the Running state:

    kubectl get pods -n kube-system | grep nginx && \
    kubectl get pods -n kube-system | grep metrics
    

    Result:

    nginx-6c********-dbfrn                      1/1     Running   0          2d22h
    nginx-6c********-gckhp                      1/1     Running   0          2d22h
    metrics-server-v0.3.1-6b********-f7dv6      2/2     Running   4          7d3h
    
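    If you prefer to wait for readiness in a script instead of checking manually, a minimal sketch using the app label from app.yaml:

    kubectl -n kube-system wait --for=condition=Ready pod -l app=nginx --timeout=120s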

Test Metrics Provider

Make sure that Horizontal Pod Autoscaler gets metrics from Metrics Provider and uses them to calculate the number of nginx app pods:

kubectl -n kube-system describe hpa test-hpa

In the command output, the AbleToScale and ScalingActive conditions should be True:

Name:                          test-hpa
Namespace:                     kube-system
...
Min replicas:                  1
Max replicas:                  2
Deployment pods:               2 current / 2 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from external metric cpu_usage(&LabelSelector{MatchLabels:map[string]string{resource_id: <node_name>,resource_type: vm,service: compute,},MatchExpressions:[]LabelSelectorRequirement{},})
Events:           <none>

Note

It will take Metrics Provider some time to start receiving metrics from the Managed Service for Kubernetes cluster. If you get the unable to get external metric ... no metrics returned from external metrics API error, repeat the check in a few minutes.
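
Optionally, you can watch the autoscaler react to load. The sketch below follows the standard Kubernetes HPA load-generation pattern; the service URL is derived from the nginx Service in the kube-system namespace defined above, and the load must be high enough to push the node's cpu_usage metric above the target value of 20:

# Terminal 1: watch the HPA status
kubectl -n kube-system get hpa test-hpa --watch

# Terminal 2: generate load until you press Ctrl+C
kubectl -n kube-system run load-generator --rm -it --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx.kube-system.svc.cluster.local; done"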

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the Managed Service for Kubernetes cluster.
  2. Delete the cluster's public static IP address if you had reserved one.
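
If you want to keep the cluster and remove only the test workload created in this tutorial, you can delete the app and Horizontal Pod Autoscaler using their manifests:

kubectl delete -f hpa.yaml && \
kubectl delete -f app.yaml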
