
In this article:

  • Issue description
  • Solution
  • If the issue persists

Troubleshooting DNS name resolving issues in Managed Service for Kubernetes

Written by
Yandex Cloud
Updated at December 17, 2025

Issue description

The Managed Service for Kubernetes cluster does not resolve FQDNs for either internal or external resources.

Solution

Check the Kubernetes version running on the master and worker nodes by running these commands:

yc managed-kubernetes cluster get $CLUSTER_ID | grep vers
yc managed-kubernetes node-group get $NODE_GROUP_ID | grep vers

Alert

If your cluster or node group version is outdated and missing from the list of supported versions (yc managed-kubernetes list-versions), update both before proceeding with the diagnostics.
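The version check can be sketched as a short shell snippet. The version numbers and the supported list below are placeholders, not real output of yc managed-kubernetes list-versions:

```shell
# Hypothetical sketch: flag a cluster whose Kubernetes version is missing
# from the supported list. SUPPORTED stands in for the real output of
# `yc managed-kubernetes list-versions`; all version numbers are made up.
SUPPORTED="1.28 1.29 1.30"
CLUSTER_VERSION="1.27"
if printf '%s\n' $SUPPORTED | grep -qx "$CLUSTER_VERSION"; then
  echo "supported"
else
  echo "outdated: update the cluster and node group first"
fi
```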

If the cluster and node group are running a supported Kubernetes version, check whether CoreDNS works properly within the cluster.
To diagnose CoreDNS, you need to analyze the state of the cluster's system DNS pods using the kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide command.

Example of the kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide command output

NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE                        NOMINATED NODE   READINESS GATES
coredns-85fd96f799-2zzvw   1/1     Running   5          21d   10.96.138.252   cl1*****************-yxeg   <none>           <none>
coredns-85fd96f799-9lz6b   1/1     Running   3          20d   10.96.140.90    cl1*****************-icos   <none>           <none>

Check the statuses of the pods in the cluster. If any pod is not in the RUNNING status, use the kubectl logs -l k8s-app=kube-dns -n kube-system --all-containers=true command to check the system logs of all DNS pods in the cluster and find the source of the issues.
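As an illustration, assuming output in the format shown above, any non-Running CoreDNS pod can be picked out with awk. The sample output below, including the CrashLoopBackOff pod, is fabricated for the example:

```shell
# Sketch: filter CoreDNS pods that are not Running. OUTPUT imitates
# `kubectl get pods -n kube-system -l k8s-app=kube-dns` output; the
# failing pod is invented for illustration.
OUTPUT='NAME                       READY   STATUS             RESTARTS   AGE
coredns-85fd96f799-2zzvw   1/1     Running            5          21d
coredns-85fd96f799-9lz6b   0/1     CrashLoopBackOff   12         20d'
# Skip the header row, print name and status of every pod not in Running.
echo "$OUTPUT" | awk 'NR > 1 && $3 != "Running" {print $1, $3}'
```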

If the issue with CoreDNS persists, try one of the following solutions:

Increase the number of CoreDNS pods.
Use NodeLocal DNS.

Typically, a cluster runs two CoreDNS pods (a single-node cluster runs one). You can increase the number of CoreDNS replicas by editing the kube-dns-autoscaler ConfigMap and adjusting the linear parameter:

Example of the kube-dns-autoscaler ConfigMap (kubectl -n kube-system edit cm kube-dns-autoscaler)
apiVersion: v1
data:
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}' # < These are the autoscaling settings.
kind: ConfigMap
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-dns-autoscaler

You can learn more about the scaling configuration from Kubernetes developer guides on this GitHub page.
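For reference, the linear mode computes replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)), and preventSinglePointFailure enforces a minimum of two replicas. A minimal sketch with assumed cluster sizes:

```shell
# Sketch of the cluster-proportional-autoscaler "linear" formula.
# The cores and nodes counts below are assumed example values.
cores=64; nodes=20
coresPerReplica=256; nodesPerReplica=16
# Integer ceiling division: ceil(a / b) = (a + b - 1) / b.
by_cores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))
by_nodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))
replicas=$(( by_cores > by_nodes ? by_cores : by_nodes ))
# preventSinglePointFailure: never run fewer than 2 replicas.
if [ "$replicas" -lt 2 ]; then replicas=2; fi
echo "$replicas"
```

With these example values, the node count dominates: 20 nodes at 16 nodes per replica yields 2 replicas.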

To reduce the load from DNS requests in a Managed Service for Kubernetes cluster, enable NodeLocal DNS Cache. If a Managed Service for Kubernetes cluster contains more than 50 nodes, use automatic DNS scaling.

When NodeLocal DNS Cache is enabled, a DaemonSet is deployed in the Managed Service for Kubernetes cluster, and a caching agent (the node-local-dns pod) runs on each node. User pods then send their DNS requests to the agent running on their node.

If the requested record is in its cache, the agent responds directly. Otherwise, it opens a TCP connection to the kube-dns ClusterIP. By default, the caching agent forwards cache-miss requests for the cluster's cluster.local DNS zone to kube-dns.
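One way to confirm the caching agent is in the resolution path is to check which nameserver a pod's /etc/resolv.conf points at; NodeLocal DNS Cache conventionally uses the link-local address 169.254.20.10. The resolv.conf content below is a fabricated sample, not output from a real pod:

```shell
# Sketch: extract the nameserver from a pod's resolv.conf. RESOLV_CONF
# is an invented sample of what `kubectl exec <pod> -- cat /etc/resolv.conf`
# might show with NodeLocal DNS Cache enabled.
RESOLV_CONF='nameserver 169.254.20.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5'
echo "$RESOLV_CONF" | awk '$1 == "nameserver" {print $2}'
```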

Install NodeLocal DNS (https://yandex.cloud/en/marketplace/products/yc/node-local-dns) from Cloud Marketplace as described in this guide, or manually by following this tutorial.

Tip

You can also resolve DNS issues in your cluster by installing NodeLocal DNS Cache from Yandex Cloud Marketplace; follow these guides:

  • Getting started with Cloud Marketplace
  • Installing NodeLocal DNS

If the issue persists

If the above actions did not help, create a support ticket. Provide the following information in your ticket:

  1. Managed Service for Kubernetes cluster ID.
  2. Managed Service for Kubernetes cluster event log: kubectl get events output.
  3. Cluster DNS service log: kubectl logs -l k8s-app=kube-dns -n kube-system --all-containers=true output.
  4. Examples of DNS resolution errors in the cluster with the date and time of each issue.


© 2025 Direct Cursus Technology L.L.C.