© 2026 Direct Cursus Technology L.L.C.

Setting up NodeLocal DNS in Yandex Managed Service for Kubernetes

Written by
Yandex Cloud
Updated on January 29, 2026
  • Required paid resources
  • Getting started
    • Create your infrastructure
    • Set up your environment
  • Install NodeLocal DNS
  • Create a test environment
  • Check the NodeLocal DNS functionality
  • Delete NodeLocal DNS
  • Delete the resources you created

To reduce the load from DNS queries in a Managed Service for Kubernetes cluster, use NodeLocal DNS.

Tip

If your Managed Service for Kubernetes cluster has more than 50 nodes, use DNS autoscaling.

Warning

If the Managed Service for Kubernetes cluster uses the Cilium network policy controller, the setup has some specifics. Use this guide.

NodeLocal DNS is a Managed Service for Kubernetes cluster system component which acts as a local DNS cache on each node.

NodeLocal DNS is deployed in a cluster as a DaemonSet with node-local-dns pods in the kube-system namespace. NodeLocal DNS configures iptables to redirect pod requests to kube-dns to the node-local-dns pod on the same node (local cache):

  • If there is a valid entry in the cache that has not yet expired, the response is returned without accessing the cluster’s main DNS service.
  • If no entry exists in the cache or if the entry has expired, the request goes to the main DNS service, kube-dns.
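The hit-or-miss decision above can be sketched as a toy shell function. This is illustrative only: the real cache is the CoreDNS cache plugin inside the node-local-dns pod, and the function and file names here are made up.

```shell
# Toy model of the NodeLocal DNS lookup decision. The cache file holds lines
# of "name address expiry_epoch"; a lookup is answered locally only while the
# entry has not expired, otherwise it goes to the main DNS service.
resolve() {
  name="$1"; now="$2"; cache="$3"
  hit="$(awk -v n="$name" -v t="$now" '$1 == n && $3 > t {print $2}' "$cache")"
  if [ -n "$hit" ]; then
    echo "local cache: $hit"
  else
    echo "forward to kube-dns"
  fi
}

printf '%s\n' 'ya.ru 5.255.255.242 1000' > cache.txt
resolve ya.ru 900 cache.txt    # entry valid until 1000 → local cache: 5.255.255.242
resolve ya.ru 1100 cache.txt   # entry expired → forward to kube-dns
```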

Note

Redirecting DNS requests to the local cache is transparent to pods: you do not need to modify a pod's /etc/resolv.conf file or restart the pod. Disabling NodeLocal DNS does not require these actions either.

Using NodeLocal DNS in a Managed Service for Kubernetes cluster offers the following benefits:

  • Reduced DNS request processing time.
  • Reduced internal network traffic, which helps you stay within connection count limits.
  • Reduced risk of conntrack failure due to fewer UDP requests to the DNS service.
  • Improved resilience and scalability of the cluster DNS subsystem.

This guide shows how to install NodeLocal DNS in a Yandex Managed Service for Kubernetes cluster and test it using the dnsutils package:

  1. Install NodeLocal DNS.
  2. Create a test environment.
  3. Check the NodeLocal DNS functionality.

If you no longer need the resources you created, delete them.

Required paid resources

  • Managed Service for Kubernetes master (see Managed Service for Kubernetes pricing).
  • Managed Service for Kubernetes cluster nodes: Use of computing resources and storage (see Compute Cloud pricing).
  • Public IP addresses for Managed Service for Kubernetes cluster nodes (see Virtual Private Cloud pricing).

Getting started

Create your infrastructure

Manually
Terraform
  1. Create a cloud network and subnet.

  2. Create a service account with the k8s.clusters.agent and vpc.publicAdmin roles.

  3. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines performance and availability of the cluster and the services and applications running in it.

  4. Create a Managed Service for Kubernetes cluster and node group with public internet access and preconfigured security groups.

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-node-local-dns.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. This file describes:

    • Network.

    • Subnet.

    • Managed Service for Kubernetes cluster.

    • Service account for the Managed Service for Kubernetes cluster and node group.

    • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines performance and availability of the cluster and the services and applications running in it.

  6. Specify the following in the configuration file:

    • Folder ID.
    • Kubernetes versions for the Managed Service for Kubernetes cluster and node groups.
    • Managed Service for Kubernetes cluster CIDR.
    • Name of the Managed Service for Kubernetes cluster service account.
  7. Validate your Terraform configuration files using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to create and their parameters. This is a verification step; it does not change your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm creating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up your environment

  1. If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

    By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.

  2. Install kubectl and configure it to work with the new cluster.

Install NodeLocal DNS

Yandex Cloud Marketplace
Manually

Install NodeLocal DNS using Cloud Marketplace as described in this guide.

  1. Get the kube-dns service IP address:

    kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
    
  2. Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns IP address:

    node-local-dns.yaml
    # Copyright 2018 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # Modified for Yandex Cloud Usage
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: node-local-dns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns-upstream
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "KubeDNSUpstream"
    spec:
      ports:
      - name: dns
        port: 53
        protocol: UDP
        targetPort: 53
      - name: dns-tcp
        port: 53
        protocol: TCP
        targetPort: 53
      selector:
        k8s-app: kube-dns
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-local-dns
      namespace: kube-system
    data:
      Corefile: |
        cluster.local:53 {
          errors
          cache {
            success 9984 30
            denial 9984 5
          }
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          health 169.254.20.10:8080
        }
        in-addr.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          }
        ip6.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          }
        .:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__UPSTREAM__SERVERS__ {
            prefer_udp
          }
          prometheus :9253
          }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-local-dns
      namespace: kube-system
      labels:
        k8s-app: node-local-dns
    spec:
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 10%
      selector:
        matchLabels:
          k8s-app: node-local-dns
      template:
        metadata:
          labels:
            k8s-app: node-local-dns
          annotations:
            prometheus.io/port: "9253"
            prometheus.io/scrape: "true"
        spec:
          priorityClassName: system-node-critical
          serviceAccountName: node-local-dns
          hostNetwork: true
          dnsPolicy: Default # Don't use cluster DNS.
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          - effect: "NoExecute"
            operator: "Exists"
          - effect: "NoSchedule"
            operator: "Exists"
          containers:
          - name: node-cache
            image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
            resources:
              requests:
                cpu: 25m
                memory: 5Mi
            args: [ "-localip", "169.254.20.10,<kube-dns_IP_address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
            securityContext:
              privileged: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9253
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                host: 169.254.20.10
                path: /health
                port: 8080
              initialDelaySeconds: 60
              timeoutSeconds: 5
            volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: config-volume
              mountPath: /etc/coredns
            - name: kube-dns-config
              mountPath: /etc/kube-dns
          volumes:
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
          - name: kube-dns-config
            configMap:
              name: kube-dns
              optional: true
          - name: config-volume
            configMap:
              name: node-local-dns
              items:
                - key: Corefile
                  path: Corefile.base
    ---
    # Headless Service has no ClusterIP and returns Pod IPs via DNS.
    # Used for Prometheus service discovery of node-local-dns metrics.
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        prometheus.io/port: "9253"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: node-local-dns
      name: node-local-dns
      namespace: kube-system
    spec:
      clusterIP: None
      ports:
        - name: metrics
          port: 9253
          targetPort: 9253
      selector:
        k8s-app: node-local-dns
    

    Warning

    The application works correctly only in the kube-system namespace.

  3. Create resources for NodeLocal DNS:

    kubectl apply -f node-local-dns.yaml
    

    Result:

    serviceaccount/node-local-dns created
    service/kube-dns-upstream created
    configmap/node-local-dns created
    daemonset.apps/node-local-dns created
    service/node-local-dns created
    
  4. Make sure the DaemonSet is successfully deployed and running:

    kubectl get ds -l k8s-app=node-local-dns -n kube-system
    

    Result:

    NAME            DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
    node-local-dns  3        3        3      3           3          <none>         24m
    
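Instead of editing the manifest by hand, you can script the placeholder substitution from step 2. This is a minimal sketch, assuming the manifest uses the literal placeholder <kube-dns_IP_address>; the example IP value is made up.

```shell
# Against a live cluster, take the address from step 1:
#   KUBE_DNS_IP="$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})"
KUBE_DNS_IP="10.96.128.2"   # example value; yours will differ

# The substitution, shown on one manifest line:
printf '%s\n' 'bind 169.254.20.10 <kube-dns_IP_address>' \
  | sed "s/<kube-dns_IP_address>/${KUBE_DNS_IP}/g"
# → bind 169.254.20.10 10.96.128.2

# Applied to the whole manifest (uncomment once node-local-dns.yaml is in place):
#   sed "s/<kube-dns_IP_address>/${KUBE_DNS_IP}/g" node-local-dns.yaml | kubectl apply -f -
```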

Create a test environment

To test the local DNS, you will run the nettool pod with the dnsutils network utility suite in your Managed Service for Kubernetes cluster.

  1. Run the nettool pod:

    kubectl run nettool --image cr.yandex/yc/demo/network-multitool -- sleep infinity
    
  2. Make sure the pod has switched to Running:

    kubectl get pods
    
  3. Find out which Managed Service for Kubernetes cluster node is hosting the nettool pod:

    kubectl get pod nettool -o wide
    

    You can find the node name in the NODE column, for example:

    NAME     READY  STATUS   RESTARTS  AGE  IP         NODE        NOMINATED NODE  READINESS GATES
    nettool  1/1    Running  0         23h  10.1.0.68  <node_name>  <none>          <none>
    
  4. Get the IP address of the pod running NodeLocal DNS:

    kubectl get pod -o wide -n kube-system | grep 'node-local.*<node_name>'
    

    Result:

    node-local-dns-gv68c  1/1  Running  0  26m  <pod_IP_address>  <node_name>  <none>  <none>
    

Check the NodeLocal DNS functionality

To test the local DNS, you will make several DNS requests from the nettool pod. This changes the DNS request metrics on the node-local-dns pod serving that node.

  1. Get the values of the metrics for DNS requests before testing:

    kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
    

    Result:

    # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
    # TYPE coredns_dns_requests_total counter
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 18
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="cluster.local."} 18
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="."} 18
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="cluster.local."} 18
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="cluster.local."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="in-addr.arpa."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="ip6.arpa."} 1
    

    The result demonstrates that NodeLocal DNS receives DNS requests on two IP addresses:

    • Address matching the kube-dns cluster IP. Here, this is 10.96.128.2:53; the actual value may differ.

      This is the main address. NodeLocal DNS configures iptables to redirect requests to kube-dns to the node-local-dns pod on the same node.

    • NodeLocal DNS local address (169.254.20.10).

      This is a fallback address. You can use it to access the node-local-dns pod directly.

  2. Run these DNS requests:

    kubectl exec -ti nettool -- nslookup kubernetes && \
    kubectl exec -ti nettool -- nslookup kubernetes.default && \
    kubectl exec -ti nettool -- nslookup ya.ru
    

    Result (IP addresses may differ):

    Server:         10.96.128.2
    Address:        10.96.128.2#53
    
    Name:   kubernetes.default.svc.cluster.local
    Address: 10.96.128.1
    
    Server:         10.96.128.2
    Address:        10.96.128.2#53
    
    Name:   kubernetes.default.svc.cluster.local
    Address: 10.96.128.1
    
    Server:         10.96.128.2
    Address:        10.96.128.2#53
    
    Non-authoritative answer:
    Name:   ya.ru
    Address: 5.255.255.242
    Name:   ya.ru
    Address: 77.88.44.242
    Name:   ya.ru
    Address: 77.88.55.242
    Name:   ya.ru
    Address: 2a02:6b8::2:242
    
  3. Get the DNS request metric values again:

    kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
    

    Result:

    # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
    # TYPE coredns_dns_requests_total counter
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 27
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="cluster.local."} 30
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="."} 25
    coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="cluster.local."} 26
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="cluster.local."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="in-addr.arpa."} 1
    coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="ip6.arpa."} 1
    

    The result demonstrates that metric values have increased for the kube-dns address but remain unchanged for the NodeLocal DNS local address. This means pods continue to send DNS requests to the kube-dns address, and NodeLocal DNS now handles them.
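To compare the two dumps without reading every line, you can total the counters per listening address. This helper is a sketch; metrics.txt is a hypothetical file holding the saved output of the curl command above.

```shell
# Sum coredns_dns_requests_total per server address, to see at a glance how
# many requests arrived on the kube-dns ClusterIP vs the local 169.254.20.10.
sum_by_server() {
  grep '^coredns_dns_requests_total' "$1" \
    | sed 's/.*server="dns:\/\/\([0-9.]*\):53".* \([0-9]*\)$/\1 \2/' \
    | awk '{ sum[$1] += $2 } END { for (ip in sum) print ip, sum[ip] }' \
    | sort
}

# Example with three counter lines like the ones above:
cat > metrics.txt <<'EOF'
coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 18
coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="."} 18
coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1
EOF
sum_by_server metrics.txt
# → 10.96.128.2 36
# → 169.254.20.10 1
```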

Delete NodeLocal DNS

Yandex Cloud Marketplace
Manually

Delete the NodeLocal DNS application as described in this guide.

Run this command:

kubectl delete -f node-local-dns.yaml

Result:

serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the resources depending on how you created them:

    Manually
    Terraform

    Delete the Managed Service for Kubernetes cluster.

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

  2. If you used static public IP addresses to access your Managed Service for Kubernetes cluster or nodes, release and delete them.
