
Cyclical restarts of the kube-dns-autoscaler pod

Written by Yandex Cloud
Updated at December 17, 2025

Issue description

Updating the Kubernetes cluster and its node groups from version 1.21 to version 1.22 triggered cyclical restarts of the kube-dns-autoscaler pod.
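
To confirm the failure pattern, check the pod status in the kube-system namespace: a pod stuck in a restart loop typically shows the CrashLoopBackOff status and a growing restart count. A minimal check, assuming the default deployment name:

    # A rising RESTARTS count indicates the crash loop
    kubectl get pods -n kube-system | grep kube-dns-autoscaler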

Solution

Since we updated coredns and cluster-proportional-autoscaler, their configuration formats changed, which is why the previous ConfigMap version is no longer valid.
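
Before applying the fix, you can inspect the autoscaler logs for configuration parsing errors to confirm that the outdated ConfigMap is the cause. A quick diagnostic sketch, using the same deployment name as in step 1 below:

    # Recent log lines should show errors about the scaling parameters ConfigMap
    kubectl logs -n kube-system deploy/kube-dns-autoscaler --tail=50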

Follow these steps to fix the issue:

  1. Reduce the number of kube-dns-autoscaler replicas to 0:

    kubectl scale -n kube-system deploy kube-dns-autoscaler --replicas=0
    
  2. Add the official cluster-proportional-autoscaler Helm repository and update the local repository index:

    helm repo add cluster-proportional-autoscaler https://kubernetes-sigs.github.io/cluster-proportional-autoscaler
    helm repo update
    
  3. Using our adapted values, install cluster-proportional-autoscaler. Save the values below to a file named values-cpa-dns.yaml; the helm upgrade command at the end of this step references it:

    affinity: {}
    config:
      ladder:
        # Maps total cluster cores to the number of coredns replicas
        coresToReplicas:
          - [ 1, 1 ]
          - [ 64, 3 ]
          - [ 512, 5 ]
          - [ 1024, 7 ]
          - [ 2048, 10 ]
          - [ 4096, 15 ]
        # Maps total cluster nodes to the number of coredns replicas
        nodesToReplicas:
          - [ 1, 1 ]
          - [ 2, 2 ]
    image:
      repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
      pullPolicy: IfNotPresent
      tag:
    imagePullSecrets: []
    fullnameOverride:
    nameOverride:
    nodeSelector: {}
    options:
      alsoLogToStdErr:
      logBacktraceAt:
      logDir: {}
      logLevel:
      #  --v=6
      # Defaulting to true limits use of ephemeral storage
      logToStdErr: true
      maxSyncFailures:
      namespace: kube-system
      nodeLabels: {}
      #  label1: value1
      #  label2: value2
      pollPeriodSeconds:
      stdErrThreshold:
      target: deployment/coredns
      vmodule:
    podAnnotations: {}
    podSecurityContext:
      fsGroup: 65534
    replicaCount: 1
    resources:
      # Resource requests and limits for the autoscaler pod. These modest
      # defaults also let the chart run in environments with little capacity,
      # such as Minikube; adjust them as necessary.
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext: {}
      # capabilities:
      #   drop:
      #   - ALL
      # readOnlyRootFilesystem: true
      # runAsNonRoot: true
      # runAsUser: 1000
    serviceAccount:
      create: true
      annotations: {}
      # The name of the service account to use.
      # If not set and create is true, a name is generated using the fullname template
      # If set and create is false, no service account will be created; the
      # provided service account must already exist, otherwise the "default"
      # service account is used.
      name:
    tolerations: []
    priorityClassName: ""
    
    helm upgrade --install cluster-proportional-autoscaler \
      cluster-proportional-autoscaler/cluster-proportional-autoscaler \
      --values values-cpa-dns.yaml
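
After the installation completes, you may want to check that the new autoscaler is running and that coredns replicas follow the ladder configuration. A minimal verification sketch, assuming the release name used above and that the release was installed into your current namespace:

    # The autoscaler pod runs in the namespace the Helm release was installed into
    kubectl get pods | grep cluster-proportional-autoscaler
    # The coredns replica count should now match the ladder thresholds
    kubectl get deployment coredns -n kube-system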
    
