Troubleshooting HPA issues in Managed Service for Kubernetes
Updated at December 17, 2025
Issue description
- The metrics.k8s.io and custom.metrics.k8s.io API method calls time out with the context deadline exceeded error message.
- The node running the metrics server pod is low on RAM, which triggers the oom-killer. OOM killer messages appear in the Managed Service for Kubernetes node serial console.
- Running kubectl describe hpa to get HPA status information in a Managed Service for Kubernetes cluster returns messages similar to the following:

  Warning FailedGetResourceMetric horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1/namespaces/jaeger/pods: Post net/http: request canceled (Client.Timeout exceeded while awaiting headers)") has prevented the request from succeeding (get pods.metrics.k8s.io)

  Warning FailedGetResourceMetric horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: an error on the server ("Internal Server Error: Post: net/http: request canceled (Client.Timeout exceeded while awaiting headers)") has prevented the request from succeeding (get pods.metrics.k8s.io)
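
You can confirm these symptoms before changing anything by querying the resource metrics API and checking node memory pressure directly. This is a minimal sketch, assuming kubectl access to the cluster and a metrics server deployed in the kube-system namespace:

# Query the resource metrics API directly; when the issue is present, this call hangs and then fails
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Show per-node memory usage and locate the node running the metrics server pod
kubectl top nodes
kubectl -n kube-system get pods -o wide | grep metrics-server

Note that kubectl top relies on the metrics server itself, so if it also fails, that points to the metrics server being unavailable rather than to an HPA misconfiguration.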
Solution
Follow these steps to resolve the issue:
- Manually move the metrics server pod to a less loaded node within the Managed Service for Kubernetes cluster (a command sketch follows this list).
- If the issue persists, change the configuration of the metrics server pod using this tutorial.
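
A minimal sketch of the manual move, assuming the metrics server runs as a Deployment in the kube-system namespace; <node-name> and <metrics-server-pod-name> are placeholders for your own values:

# Find the node currently hosting the metrics server pod
kubectl -n kube-system get pods -o wide | grep metrics-server

# Temporarily mark the overloaded node as unschedulable
kubectl cordon <node-name>

# Delete the pod; its controller recreates it on another, less loaded node
kubectl -n kube-system delete pod <metrics-server-pod-name>

# Re-enable scheduling on the node once the new pod is running elsewhere
kubectl uncordon <node-name>

Cordoning only prevents new pods from being scheduled on the node, so workloads already running there are not disrupted.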
If the issue persists
If the above actions did not help, create a support ticket and provide the following information:
- ID of the Managed Service for Kubernetes cluster in question.
- ID of the Managed Service for Kubernetes cluster pod in question.
- The kubectl describe hpa output for the Managed Service for Kubernetes cluster in question.
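
A minimal sketch for collecting the kubectl describe hpa output to attach to the ticket; <namespace> and <hpa-name> are placeholders:

# Save the HPA status, including the FailedGetResourceMetric warnings, to a file
kubectl describe hpa <hpa-name> -n <namespace> > hpa-describe.txt

# List the pods in the namespace to identify the pod referenced in the ticket
kubectl get pods -n <namespace> -o wide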