
Constant restarts of the kube-dns-autoscaler pod

Written by
Yandex Cloud
Updated at November 27, 2023

Issue description

After updating a Kubernetes cluster and its node groups from version 1.21 to version 1.22, the kube-dns-autoscaler pod began restarting constantly.
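You can confirm the issue by checking the pod's restart count and the logs of the previous container instance (standard kubectl commands; the deployment name is the one used in the solution below):

kubectl get pods -n kube-system | grep kube-dns-autoscaler
kubectl logs -n kube-system --previous deployment/kube-dns-autoscaler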

Solution

We have updated coredns and cluster-proportional-autoscaler: their configuration format has changed, and the previous ConfigMap version is no longer supported.
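For context: in ladder mode, cluster-proportional-autoscaler reads its scaling parameters from a ConfigMap whose data keys hold JSON documents. The Helm chart used below renders its config block into a ConfigMap of roughly this shape (a sketch based on the upstream format; the actual object name depends on the Helm release name):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-proportional-autoscaler  # actual name depends on the release name
data:
  ladder: |-
    {
      "coresToReplicas": [ [1, 1], [64, 3], [512, 5], [1024, 7], [2048, 10], [4096, 15] ],
      "nodesToReplicas": [ [1, 1], [2, 2] ]
    }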

To resolve this issue:

  1. Reduce the number of kube-dns-autoscaler replicas to 0:
kubectl scale -n kube-system deploy kube-dns-autoscaler --replicas=0
  2. Add the official cluster-proportional-autoscaler repository to Helm:
helm repo add cluster-proportional-autoscaler https://kubernetes-sigs.github.io/cluster-proportional-autoscaler
helm repo update
  3. Save our adapted values below to a file named values-cpa-dns.yaml and use it to install cluster-proportional-autoscaler (the install command follows the values):
affinity: {}
config:
  ladder:
    coresToReplicas:
      - [ 1, 1 ]
      - [ 64, 3 ]
      - [ 512, 5 ]
      - [ 1024, 7 ]
      - [ 2048, 10 ]
      - [ 4096, 15 ]
    nodesToReplicas:
      - [ 1, 1 ]
      - [ 2, 2 ]
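  # Ladder mode takes the larger of the two lookups: e.g. a 3-node,
  # 96-core cluster gets max(3, 2) = 3 coredns replicas.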
image:
  repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
  pullPolicy: IfNotPresent
  tag:
imagePullSecrets: []
fullnameOverride:
nameOverride:
nodeSelector: {}
options:
  alsoLogToStdErr:
  logBacktraceAt:
  logDir: {}
  logLevel: 6  # glog verbosity (--v=6)
  # Defaulting to true limits use of ephemeral storage
  logToStdErr: true
  maxSyncFailures:
  namespace: kube-system
  nodeLabels: {}
  #  label1: value1
  #  label2: value2
  pollPeriodSeconds:
  stdErrThreshold:
  target: deployment/coredns
  vmodule:
podAnnotations: {}
podSecurityContext:
  fsGroup: 65534
replicaCount: 1
resources:
  # Requests and limits are set explicitly here; adjust them
  # to your cluster size if needed.
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
serviceAccount:
  create: true
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # If set and create is false, the service account must already exist;
  # otherwise the "default" service account is used.
  name:
tolerations: []
priorityClassName: ""
helm upgrade --install cluster-proportional-autoscaler \
  cluster-proportional-autoscaler/cluster-proportional-autoscaler \
  --values values-cpa-dns.yaml
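Once the release is installed, you can check that the new autoscaler is running and managing coredns (a quick sanity check; the deployment name cluster-proportional-autoscaler follows from the release name above, and the release lands in your current namespace since no --namespace flag is passed):

helm status cluster-proportional-autoscaler
kubectl get deploy cluster-proportional-autoscaler
kubectl get deploy -n kube-system coredns

The coredns replica count should match the ladder for your cluster size, and the autoscaler pod should no longer restart.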
