Setting up NodeLocal DNS Cache in Yandex Managed Service for Kubernetes

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Getting started
    • Create an infrastructure
    • Set up your environment
  • Install NodeLocal DNS
  • Change the NodeLocal DNS Cache configuration
  • Run DNS requests
  • Set up traffic routing through NodeLocal DNS
  • Check logs
  • Stop the DaemonSet
  • Delete the resources you created

To reduce the number of DNS requests to a Managed Service for Kubernetes cluster, enable NodeLocal DNS Cache.

Tip

If a Managed Service for Kubernetes cluster contains more than 50 nodes, use automatic DNS scaling.

By default, pods send requests to the kube-dns service. In /etc/resolv.conf, the nameserver field is set to the ClusterIP of the kube-dns service. Connections to the ClusterIP are established using iptables or IP Virtual Server (IPVS).
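
For example, you can check this by comparing the kube-dns ClusterIP with the nameserver value inside any running pod (<pod_name> is a placeholder for a pod in your cluster):

kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
kubectl exec -it <pod_name> -- cat /etc/resolv.conf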

When NodeLocal DNS Cache is enabled, a DaemonSet is deployed in a Managed Service for Kubernetes cluster. The caching agent (node-local-dns pod) runs on each Managed Service for Kubernetes node. User pods now send requests to the agent running on their Managed Service for Kubernetes nodes.

If the requested record is in the agent's cache, the agent responds directly. Otherwise, the agent opens a TCP connection to the kube-dns ClusterIP. By default, the caching agent forwards cache-miss requests to kube-dns for the cluster.local DNS zone of the Managed Service for Kubernetes cluster.

This helps avoid DNAT rules, connection tracking, and limits on the number of connections. For more information about NodeLocal DNS Cache, see the documentation.

To set up DNS request caching:

  1. Install NodeLocal DNS.
  2. Change the NodeLocal DNS Cache configuration.
  3. Run DNS requests.
  4. Set up traffic routing through NodeLocal DNS.
  5. Check logs.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Managed Service for Kubernetes cluster fee: use of the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Cluster node (VM) fee: use of computing resources, the operating system, and storage (see Compute Cloud pricing).
  • Fee for the public IP addresses assigned to cluster nodes (see Virtual Private Cloud pricing).

Getting started

Create an infrastructure

Manually
Terraform
  1. Create a cloud network and subnet.

  2. Create a service account with the k8s.clusters.agent and vpc.publicAdmin roles (a CLI sketch covering steps 1 and 2 is shown after this list).

  3. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  4. Create a Managed Service for Kubernetes cluster and a node group with public internet access and the security groups you prepared earlier.
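
For reference, steps 1 and 2 above can also be performed with the Yandex Cloud CLI. The following is only a sketch: the resource names, zone, and CIDR are arbitrary examples, and you still need to create the security groups, cluster, and node group as described in steps 3 and 4.

yc vpc network create --name k8s-network
yc vpc subnet create \
  --name k8s-subnet \
  --network-name k8s-network \
  --zone ru-central1-a \
  --range 10.1.0.0/24
yc iam service-account create --name k8s-sa
yc resource-manager folder add-access-binding <folder_ID> \
  --role k8s.clusters.agent \
  --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_ID> \
  --role vpc.publicAdmin \
  --subject serviceAccount:<service_account_ID>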

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create a provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-node-local-dns.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. This file describes:

    • Network.

    • Subnet.

    • Managed Service for Kubernetes cluster.

    • Service account required for the Managed Service for Kubernetes cluster and node group.

    • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  6. Specify the following in the configuration file:

    • Folder ID.
    • Kubernetes versions for the cluster and Managed Service for Kubernetes node groups.
    • Managed Service for Kubernetes cluster CIDR.
    • Name of the Managed Service for Kubernetes cluster service account.
  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Set up your environment

  1. If you do not have the Yandex Cloud CLI yet, install and initialize it.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

  2. Install kubectl and configure it to work with the new cluster.
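
For example, assuming the cluster was created with a public endpoint, you can add its credentials to the kubectl configuration and check connectivity like this (<cluster_name> is a placeholder for the cluster you created):

yc managed-kubernetes cluster get-credentials <cluster_name> --external
kubectl cluster-info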

Install NodeLocal DNS

Yandex Cloud Marketplace
Manually

Install NodeLocal DNS using Cloud Marketplace as described in this guide.

  1. Retrieve the kube-dns service IP address:

    kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
    
  2. Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns IP address:

    node-local-dns.yaml
    # Copyright 2018 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # Modified for Yandex Cloud Usage
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: node-local-dns
      namespace: kube-system
      labels:
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns-upstream
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "KubeDNSUpstream"
    spec:
      ports:
      - name: dns
        port: 53
        protocol: UDP
        targetPort: 53
      - name: dns-tcp
        port: 53
        protocol: TCP
        targetPort: 53
      selector:
        k8s-app: kube-dns
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-local-dns
      namespace: kube-system
      labels:
    data:
      Corefile: |
        cluster.local:53 {
          errors
          cache {
            success 9984 30
            denial 9984 5
          }
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          health 169.254.20.10:8080
        }
        in-addr.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          }
        ip6.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
          }
          prometheus :9253
          }
        .:53 {
          errors
          cache 30
          reload
          loop
          bind 169.254.20.10 <kube-dns_IP_address>
          forward . __PILLAR__UPSTREAM__SERVERS__ {
            prefer_udp
          }
          prometheus :9253
          }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-local-dns
      namespace: kube-system
      labels:
        k8s-app: node-local-dns
    spec:
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 10%
      selector:
        matchLabels:
          k8s-app: node-local-dns
      template:
        metadata:
          labels:
            k8s-app: node-local-dns
          annotations:
            prometheus.io/port: "9253"
            prometheus.io/scrape: "true"
        spec:
          priorityClassName: system-node-critical
          serviceAccountName: node-local-dns
          hostNetwork: true
          dnsPolicy: Default # Don't use cluster DNS.
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          - effect: "NoExecute"
            operator: "Exists"
          - effect: "NoSchedule"
            operator: "Exists"
          containers:
          - name: node-cache
            image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
            resources:
              requests:
                cpu: 25m
                memory: 5Mi
            args: [ "-localip", "169.254.20.10,<kube-dns_IP_address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
            securityContext:
              privileged: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9253
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                host: 169.254.20.10
                path: /health
                port: 8080
              initialDelaySeconds: 60
              timeoutSeconds: 5
            volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: config-volume
              mountPath: /etc/coredns
            - name: kube-dns-config
              mountPath: /etc/kube-dns
          volumes:
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
          - name: kube-dns-config
            configMap:
              name: kube-dns
              optional: true
          - name: config-volume
            configMap:
              name: node-local-dns
              items:
                - key: Corefile
                  path: Corefile.base
    ---
    # A headless service is a service with a service IP but instead of load-balancing it will return the IPs of our associated Pods.
    # We use this to expose metrics to Prometheus.
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        prometheus.io/port: "9253"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: node-local-dns
      name: node-local-dns
      namespace: kube-system
    spec:
      clusterIP: None
      ports:
        - name: metrics
          port: 9253
          targetPort: 9253
      selector:
        k8s-app: node-local-dns
    

    Warning

    The application works correctly only in the kube-system namespace.

  3. Create resources for NodeLocal DNS:

    kubectl apply -f node-local-dns.yaml
    

    Result:

    serviceaccount/node-local-dns created
    service/kube-dns-upstream created
    configmap/node-local-dns created
    daemonset.apps/node-local-dns created
    service/node-local-dns created
    
  4. Make sure the DaemonSet is successfully deployed and running:

    kubectl get ds -l k8s-app=node-local-dns -n kube-system
    

    Result:

    NAME            DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
    node-local-dns  3        3        3      3           3          <none>         24m
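
    You can also list the caching pods themselves to confirm that one node-local-dns pod is running per cluster node (the example output above assumes a three-node group):

    kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide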
    

Change the NodeLocal DNS Cache configuration

To change the configuration, edit the relevant ConfigMap. For example, to enable DNS request logging for the cluster.local zone:

  1. Run this command:

    kubectl -n kube-system edit configmap node-local-dns
    
  2. Add the log line to the cluster.local zone configuration:

    ...
    apiVersion: v1
      data:
        Corefile: |
          cluster.local:53 {
              log
              errors
              cache {
                      success 9984 30
                      denial 9984 5
              }
    ...
    
  3. Save your changes:

    Result:

    configmap/node-local-dns edited
    

It may take several minutes to update the configuration.
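
To check that your change is in place, you can print the current Corefile from the ConfigMap; the reload plugin in the Corefile makes the caching agent pick up the new configuration automatically:

kubectl -n kube-system get configmap node-local-dns -o jsonpath='{.data.Corefile}'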

Run DNS requests

To run test requests, use a pod with the DNS diagnostic utilities.

  1. Run the pod:

    kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
    

    Result:

    pod/dnsutils created
    
  2. Make sure the pod has entered the Running state:

    kubectl get pods dnsutils
    

    Result:

    NAME      READY  STATUS   RESTARTS  AGE
    dnsutils  1/1    Running  0         26m
    
  3. Connect to the pod:

    kubectl exec -i -t dnsutils -- sh
    
  4. Get the IP address of the local DNS cache:

    nslookup kubernetes.default
    

    Result:

    Server:         <kube-dns_IP_address>
    Address:        <kube-dns_IP_address>#53
    
    Name:   kubernetes.default.svc.cluster.local
    Address: 10.96.128.1
    
  5. Run the following requests:

    dig +short @169.254.20.10 www.com
    dig +short @<kube-dns_IP_address> example.com
    

    Result:

    # dig +short @169.254.20.10 www.com
    52.128.23.153
    # dig +short @<kube-dns_IP_address> example.com
    93.184.216.34
    

    After node-local-dns starts, the iptables rules will be configured so that the local DNS responds at both addresses (<kube-dns_IP_address>:53 and 169.254.20.10:53).

    You can access kube-dns using the ClusterIP of the kube-dns-upstream service. You may need this address to configure request forwarding.
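
    For example, to retrieve that address, mirroring the kube-dns command used earlier:

    kubectl get svc kube-dns-upstream -n kube-system -o jsonpath={.spec.clusterIP}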

Set up traffic routing through NodeLocal DNS

All pods
Selected pods
  1. Create a pod for network traffic setup:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: dnschange
      namespace: default
    spec:
      priorityClassName: system-node-critical
      hostNetwork: true
      dnsPolicy: Default
      hostPID: true
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - effect: "NoExecute"
          operator: "Exists"
        - effect: "NoSchedule"
          operator: "Exists"
      containers:
      - name: dnschange
        image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
        tty: true
        stdin: true
        securityContext:
          privileged: true
        command:
          - nsenter
          - --target
          - "1"
          - --mount
          - --uts
          - --ipc
          - --net
          - --pid
          - --
          - sleep
          - "infinity"
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
    EOF
    
  2. Connect to the dnschange pod you created:

    kubectl exec -it dnschange -- sh
    
  3. Open the /etc/default/kubelet file in the container to edit it:

    vi /etc/default/kubelet
    
  4. In the file, add the --cluster-dns=169.254.20.10 parameter (NodeLocal DNS cache address) to the KUBELET_OPTS variable value:

    KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubeconfig.conf --cert-dir=/var/lib/kubelet/pki/ --cloud-provider=external --config=/home/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet-kubeconfig.conf --resolv-conf=/run/systemd/resolve/resolv.conf --v=2 --cluster-dns=169.254.20.10"
    
  5. Save the file and run the kubelet restart command:

    systemctl daemon-reload && systemctl restart kubelet
    

    Then exit the container shell by running the exit command.

  6. Delete the dnschange pod:

    kubectl delete pod dnschange
    
  7. To make sure all pods start running through NodeLocal DNS, restart them, e.g., using the command below:

    kubectl get deployments --all-namespaces | \
      tail +2 | \
      awk '{
        cmd=sprintf("kubectl rollout restart deployment -n %s %s", $1, $2) ;
        system(cmd)
      }'
    
  1. Run this command:

    kubectl edit deployment <pod_deployment_name>
    
  2. In the pod specification, replace the dnsPolicy: ClusterFirst setting in the spec.template.spec key with the following section:

      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 169.254.20.10
        searches:
          - default.svc.cluster.local
          - svc.cluster.local
          - cluster.local
          - ru-central1.internal
          - internal
          - my.dns.search.suffix
        options:
          - name: ndots
            value: "5"
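
With either option, you can verify that a pod routed through NodeLocal DNS uses the local cache address by checking its /etc/resolv.conf: the nameserver should now be 169.254.20.10. For example, for the dnsutils pod created earlier (re-create it first if you used the kubelet option, so it picks up the new settings):

kubectl exec -it dnsutils -- cat /etc/resolv.conf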
    

Check logs

Run this command:

kubectl logs --namespace=kube-system -l k8s-app=node-local-dns -f

To stop displaying a log, press Ctrl + C.

Result:

...
[INFO] 10.112.128.7:50527 - 41658 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097538s
[INFO] 10.112.128.7:44256 - 26847 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.057075876s
...
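
The caching agent also exposes CoreDNS metrics on port 9253 (this is what the prometheus lines in the Corefile and the headless node-local-dns service are for). As a rough sketch, assuming the cache metric names of your CoreDNS version start with coredns_cache, you can port-forward one of the pods in a separate terminal and query the endpoint:

kubectl -n kube-system get pods -l k8s-app=node-local-dns
kubectl -n kube-system port-forward <node-local-dns_pod_name> 9253:9253
curl http://localhost:9253/metrics | grep coredns_cache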

Stop the DaemonSet

To stop the NodeLocal DNS Cache DaemonSet, run:

kubectl delete -f node-local-dns.yaml

Result:

serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

  1. Delete the resources depending on how you created them:

    Manually
    Terraform

    Delete the Managed Service for Kubernetes cluster.

    1. In the terminal window, go to the directory containing the infrastructure plan.

      Warning

      Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

    2. Delete resources:

      1. Run this command:

        terraform destroy
        
      2. Confirm deleting the resources and wait for the operation to complete.

      All the resources described in the Terraform manifests will be deleted.

  2. If static public IP addresses were used for Managed Service for Kubernetes cluster and node access, release and delete them.
