Using Metrics Provider to stream metrics
Metrics Provider streams metrics of Managed Service for Kubernetes cluster objects to monitoring systems and autoscaling systems.
In this article, you will learn how to set up the transfer of external metrics to Horizontal Pod Autoscaler using Metrics Provider.
To set up the transfer of metrics:
- Set up a working environment.
- Install Metrics Provider and the runtime environment.
- Test Metrics Provider.
If you no longer need the resources you created, delete them.
Getting started
- If you do not have the Yandex Cloud command line interface yet, install and initialize it.
  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.
- Install kubectl and configure it to work with the created cluster.
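The kubectl configuration step can be sketched with the Yandex Cloud CLI, assuming the cluster has a public endpoint; `<cluster_name>` is a placeholder for your cluster name:

```bash
# Fetch the cluster credentials and add them to ~/.kube/config
# (drop --external if you connect over the cluster's internal address)
yc managed-kubernetes cluster get-credentials <cluster_name> --external

# Verify that kubectl can reach the cluster
kubectl cluster-info
```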
Set up a working environment
To test Metrics Provider, you will create the nginx test app and a Horizontal Pod Autoscaler, to which Metrics Provider will supply CPU utilization metrics.
- Create the app.yaml file with the nginx app manifest:

  ```yaml
  ---
  ### Deployment
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
    namespace: kube-system
    labels:
      app: nginx
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        name: nginx
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: registry.k8s.io/hpa-example
            resources:
              requests:
                memory: "256Mi"
                cpu: "500m"
              limits:
                memory: "500Mi"
                cpu: "1"
  ---
  ### Service
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
    namespace: kube-system
  spec:
    selector:
      app: nginx
    ports:
      - protocol: TCP
        port: 80
        targetPort: 80
    type: LoadBalancer
  ```
- Create the hpa.yaml file with the Horizontal Pod Autoscaler manifest for test-hpa:

  ```yaml
  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: test-hpa
    namespace: kube-system
  spec:
    maxReplicas: 2
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: nginx
    metrics:
      - type: External
        external:
          metric:
            name: cpu_usage
            selector:
              matchLabels:
                service: "compute"
                resource_id: "<node_name>"
                resource_type: "vm"
          target:
            type: Value
            value: "20"
  ```
  You can get the name of the node where Metrics Provider and the runtime environment will be deployed from the list of cluster nodes:

  ```bash
  kubectl get nodes
  ```
Install Metrics Provider and the runtime environment
- Follow the instructions to install Metrics Provider.
- Create the test application and Horizontal Pod Autoscaler:

  ```bash
  kubectl apply -f app.yaml && \
  kubectl apply -f hpa.yaml
  ```
- Make sure the app pods have entered the Running state:

  ```bash
  kubectl get pods -n kube-system | grep nginx && \
  kubectl get pods -n kube-system | grep metrics
  ```

  Result:

  ```text
  nginx-6c********-dbfrn                   1/1   Running   0   2d22h
  nginx-6c********-gckhp                   1/1   Running   0   2d22h
  metrics-server-v0.3.1-6b********-f7dv6   2/2   Running   4   7d3h
  ```
Test Metrics Provider
Make sure that Horizontal Pod Autoscaler gets metrics from Metrics Provider and uses them to calculate the number of nginx app pods:

```bash
kubectl -n kube-system describe hpa test-hpa
```

In the command output, the AbleToScale and ScalingActive conditions should be True:
```text
Name:        test-hpa
Namespace:   kube-system
...
Min replicas:     1
Max replicas:     2
Deployment pods:  2 current / 2 desired
Conditions:
  Type           Status  Reason            Message
  ----           ------  ------            -------
  AbleToScale    True    ReadyForNewScale  recommended size matches current size
  ScalingActive  True    ValidMetricFound  the HPA was able to successfully calculate a replica count from external metric cpu_usage(&LabelSelector{MatchLabels:map[string]string{resource_id: <node_name>,resource_type: vm,service: compute,},MatchExpressions:[]LabelSelectorRequirement{},})
Events:          <none>
```
Note

It will take Metrics Provider some time to receive metrics from the Managed Service for Kubernetes cluster. If you get the unable to get external metric ... no metrics returned from external metrics API error, rerun the provider check in a few minutes.
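To see whether the provider is serving the metric at all, you can also query the Kubernetes external metrics API directly; a sketch, assuming the cpu_usage metric from the manifest above and jq installed for readable output:

```bash
# List the external metrics API group served by Metrics Provider
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq

# Query the cpu_usage metric in the kube-system namespace
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/kube-system/cpu_usage" | jq
```

An empty items list in the second response means the provider has not collected any metric values yet.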
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster.
- Delete the cluster's public static IP address if you had reserved one.
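The cluster can also be deleted with the Yandex Cloud CLI; `<cluster_name>` is a placeholder for your cluster name, and the operation is irreversible:

```bash
# Delete the Managed Service for Kubernetes cluster together with its node groups
yc managed-kubernetes cluster delete <cluster_name>
```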