Using Metrics Provider to stream metrics
Metrics Provider streams metrics of Managed Service for Kubernetes cluster objects to monitoring and autoscaling systems.
In this article, you will learn how to set up transfers of external metrics to Horizontal Pod Autoscaler using Metrics Provider.
To set up the transfer of metrics:
- Set up a working environment
- Install Metrics Provider and the runtime environment
- Test Metrics Provider
- Delete the resources you created
Getting started
- If you do not have the Yandex Cloud command line interface yet, install and initialize it.
  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration. When creating them, specify the security groups prepared earlier.
- Install kubectl and configure it to work with the created cluster (a quick connectivity check is sketched after this list).
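To confirm that kubectl is pointed at the new cluster, you can fetch the cluster credentials and list the nodes. This is a minimal sketch, assuming you connect over the cluster's public endpoint; <cluster_name> is a placeholder:

# Add the cluster credentials to the kubectl configuration (public endpoint assumed).
yc managed-kubernetes cluster get-credentials <cluster_name> --external

# Check that kubectl uses the new context and the nodes are Ready.
kubectl config current-context
kubectl get nodes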
Set up a working environment
To test Metrics Provider, you will create an nginx test application and a Horizontal Pod Autoscaler to which Metrics Provider will transfer CPU usage metrics.
- Create a file named app.yaml with the nginx application manifest:

---
### Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: kube-system
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.k8s.io/hpa-example
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "500Mi"
              cpu: "1"
---
### Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: kube-system
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
- Create a file named hpa.yaml with the test-hpa Horizontal Pod Autoscaler manifest:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-hpa
  namespace: kube-system
spec:
  maxReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
    - type: External
      external:
        metric:
          name: cpu_usage
          selector:
            matchLabels:
              service: "compute"
              resource_id: "<node_name>"
              resource_type: "vm"
        target:
          type: Value
          value: "20"
You can get the name of the node where Metrics Provider and the runtime environment will be deployed from the list of cluster nodes (see the substitution sketch after this list):
kubectl get nodes
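Since hpa.yaml references a specific node in resource_id, it can help to fill in <node_name> programmatically and validate both manifests before applying them. A minimal sketch, assuming the first node in the list is suitable and GNU sed is available:

# Take the name of the first cluster node (any node works for this test).
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

# Replace the placeholder in hpa.yaml in place (on macOS/BSD sed use: sed -i '' ...).
sed -i "s/<node_name>/${NODE_NAME}/" hpa.yaml

# Validate both manifests on the client side without creating anything.
kubectl apply --dry-run=client -f app.yaml
kubectl apply --dry-run=client -f hpa.yaml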
Install Metrics Provider and the runtime environment
- Follow the instructions to install Metrics Provider.
- Create a test application and Horizontal Pod Autoscaler:

kubectl apply -f app.yaml && \
kubectl apply -f hpa.yaml
- Make sure that the application pods have entered the Running state (an additional check of the external metrics API is sketched after this list):

kubectl get pods -n kube-system | grep nginx && \
kubectl get pods -n kube-system | grep metrics

Result:

nginx-6c********-dbfrn                   1/1     Running   0     2d22h
nginx-6c********-gckhp                   1/1     Running   0     2d22h
metrics-server-v0.3.1-6b********-f7dv6   2/2     Running   4     7d3h
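Optionally, you can verify that the provider serves the external metrics API and returns the cpu_usage metric before testing the autoscaler. A hedged sketch: the exact resource path and selector encoding are assumptions based on the standard external.metrics.k8s.io API, not a documented check:

# List the external metrics API group registered by the provider.
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1"

# Query the cpu_usage metric with the same labels as in hpa.yaml
# (selector values URL-encoded, <node_name> is a placeholder).
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/kube-system/cpu_usage?labelSelector=service%3Dcompute,resource_type%3Dvm,resource_id%3D<node_name>"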
Test Metrics Provider
Make sure that Horizontal Pod Autoscaler gets metrics from Metrics Provider and uses them to calculate the number of pods for the nginx application:
kubectl -n kube-system describe hpa test-hpa
In the command output, the AbleToScale and ScalingActive conditions should have the True status:
Name: test-hpa
Namespace: kube-system
...
Min replicas: 1
Max replicas: 2
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from external metric cpu_usage(&LabelSelector{MatchLabels:map[string]string{resource_id: <node_name>,resource_type: vm,service: compute,},MatchExpressions:[]LabelSelectorRequirement{},})
Events: <none>
Note
It will take Metrics Provider some time to start receiving metrics from the Managed Service for Kubernetes cluster. If the unable to get external metric ... no metrics returned from external metrics API error is returned, repeat the check in a few minutes.
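To watch the autoscaler react, you can put CPU load on the nginx application and follow the replica count. This is an illustrative sketch rather than part of the original walkthrough; the load-generator pod and the request loop are assumptions:

# Run a temporary pod that continuously requests the nginx service.
kubectl -n kube-system run load-generator --rm -it --image=busybox:1.36 -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx; done"

# In another terminal, watch the replica count change.
kubectl -n kube-system get hpa test-hpa --watch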
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster.
- If you reserved a public static IP address for the cluster, delete it.
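If you prefer to clean up from the command line, a minimal sketch with the yc CLI (the cluster and address names are placeholders):

# Delete the Managed Service for Kubernetes cluster.
yc managed-kubernetes cluster delete <cluster_name>

# Delete the reserved public static IP address, if you created one.
yc vpc address delete <address_name>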