Setting up NodeLocal DNS in Yandex Managed Service for Kubernetes
To reduce the load from DNS queries in a Managed Service for Kubernetes cluster, use NodeLocal DNS.
Tip
If your Managed Service for Kubernetes cluster has more than 50 nodes, use DNS autoscaling.
Warning
If the Managed Service for Kubernetes cluster uses the Cilium network policy controller, the setup differs. Use this guide.
NodeLocal DNS is a Managed Service for Kubernetes cluster system component which acts as a local DNS cache on each node.
NodeLocal DNS is deployed in a cluster as a DaemonSet that runs node-local-dns pods in the kube-system namespace. NodeLocal DNS configures iptables rules that redirect DNS queries addressed to kube-dns to the node-local-dns pod on the same node (the local cache):
- If there is a valid entry in the cache that has not yet expired, the response is returned without accessing the cluster’s main DNS service.
- If no entry exists in the cache or the entry has expired, the request goes to the main DNS service, kube-dns.
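This lookup flow amounts to a small TTL cache sitting in front of the upstream resolver. The following Python sketch illustrates only the decision logic; the names `DnsCache` and `kube_dns` are hypothetical and not part of NodeLocal DNS itself:

```python
import time

class DnsCache:
    """Minimal sketch of the NodeLocal DNS lookup flow (illustrative only)."""

    def __init__(self, upstream_resolve, ttl=30):
        self.upstream_resolve = upstream_resolve  # stands in for kube-dns
        self.ttl = ttl
        self.entries = {}  # name -> (address, expires_at)

    def resolve(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get(name)
        if entry and entry[1] > now:
            return entry[0]  # valid cached entry: answered without going upstream
        address = self.upstream_resolve(name)  # cache miss or expired: ask kube-dns
        self.entries[name] = (address, now + self.ttl)
        return address

calls = []
def kube_dns(name):
    calls.append(name)
    return "10.96.128.1"

cache = DnsCache(kube_dns, ttl=30)
cache.resolve("kubernetes.default", now=0.0)   # miss: forwarded upstream
cache.resolve("kubernetes.default", now=10.0)  # hit: served from the cache
cache.resolve("kubernetes.default", now=40.0)  # expired: forwarded upstream again
print(len(calls))  # 2
```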
Note
Redirects of DNS requests to the local cache are transparent to the pods: you do not need to modify a pod's /etc/resolv.conf file or restart the pod. Disabling NodeLocal DNS does not require these actions either.
Using NodeLocal DNS in a Managed Service for Kubernetes cluster offers the following benefits:
- Reduced DNS request processing time.
- Reduced internal network traffic, which helps avoid limits on the number of connections.
- Reduced risk of conntrack failure due to fewer UDP requests to the DNS service.
- Improved resilience and scalability of the cluster DNS subsystem.
Follow this guide to install NodeLocal DNS in a Yandex Managed Service for Kubernetes cluster and test it using the dnsutils package.
If you no longer need the resources you created, delete them.
Required paid resources
- Managed Service for Kubernetes master (see Managed Service for Kubernetes pricing).
- Managed Service for Kubernetes cluster nodes: Use of computing resources and storage (see Compute Cloud pricing).
- Public IP addresses for Managed Service for Kubernetes cluster nodes (see Virtual Private Cloud pricing).
Getting started
Create your infrastructure
Manually

- Create a cloud network and subnet.
- Create a service account with the `k8s.clusters.agent` and `vpc.publicAdmin` roles.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.

  Warning

  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

- Create a Managed Service for Kubernetes cluster and a node group with public internet access and the preconfigured security groups.
Terraform

- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-node-local-dns.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. This file describes:

  - The Managed Service for Kubernetes cluster.
  - The service account for the Managed Service for Kubernetes cluster and node group.
  - Security groups with the rules required for the Managed Service for Kubernetes cluster and its node groups.

  Warning

  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Specify the following in the configuration file:

  - Folder ID.
  - Kubernetes versions for the Managed Service for Kubernetes cluster and node groups.
  - Managed Service for Kubernetes cluster CIDR.
  - Name of the Managed Service for Kubernetes cluster service account.

- Validate your Terraform configuration files using this command:

  ```
  terraform validate
  ```

  Terraform will display any configuration errors detected in your files.
- Create the required infrastructure:

  - Run this command to view the planned changes:

    ```
    terraform plan
    ```

    If you described the configuration correctly, the terminal will display a list of the resources to be created and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:

    - Run this command:

      ```
      terraform apply
      ```

    - Confirm creating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check the resources and their settings in the management console.
Set up your environment

- If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

  By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also set a different folder for any specific command using the `--folder-name` or `--folder-id` parameter.

- Install kubectl and configure it to work with the new cluster.
Install NodeLocal DNS
Install NodeLocal DNS using Cloud Marketplace as described in this guide.
Manually

- Get the `kube-dns` service IP address:

  ```
  kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
  ```

- Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns IP address:
  ```yaml
  # Copyright 2018 The Kubernetes Authors.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  # Modified for Yandex Cloud Usage
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: node-local-dns
    namespace: kube-system
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: kube-dns-upstream
    namespace: kube-system
    labels:
      k8s-app: kube-dns
      kubernetes.io/name: "KubeDNSUpstream"
  spec:
    ports:
      - name: dns
        port: 53
        protocol: UDP
        targetPort: 53
      - name: dns-tcp
        port: 53
        protocol: TCP
        targetPort: 53
    selector:
      k8s-app: kube-dns
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: node-local-dns
    namespace: kube-system
  data:
    Corefile: |
      cluster.local:53 {
        errors
        cache {
          success 9984 30
          denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
          prefer_udp
        }
        prometheus :9253
        health 169.254.20.10:8080
      }
      in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
          prefer_udp
        }
        prometheus :9253
      }
      ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
          prefer_udp
        }
        prometheus :9253
      }
      .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__UPSTREAM__SERVERS__ {
          prefer_udp
        }
        prometheus :9253
      }
  ---
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-local-dns
    namespace: kube-system
    labels:
      k8s-app: node-local-dns
  spec:
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
    selector:
      matchLabels:
        k8s-app: node-local-dns
    template:
      metadata:
        labels:
          k8s-app: node-local-dns
        annotations:
          prometheus.io/port: "9253"
          prometheus.io/scrape: "true"
      spec:
        priorityClassName: system-node-critical
        serviceAccountName: node-local-dns
        hostNetwork: true
        dnsPolicy: Default # Don't use cluster DNS.
        tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          - effect: "NoExecute"
            operator: "Exists"
          - effect: "NoSchedule"
            operator: "Exists"
        containers:
          - name: node-cache
            image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
            resources:
              requests:
                cpu: 25m
                memory: 5Mi
            args: ["-localip", "169.254.20.10,<kube-dns_IP_address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream"]
            securityContext:
              privileged: true
            ports:
              - containerPort: 53
                name: dns
                protocol: UDP
              - containerPort: 53
                name: dns-tcp
                protocol: TCP
              - containerPort: 9253
                name: metrics
                protocol: TCP
            livenessProbe:
              httpGet:
                host: 169.254.20.10
                path: /health
                port: 8080
              initialDelaySeconds: 60
              timeoutSeconds: 5
            volumeMounts:
              - mountPath: /run/xtables.lock
                name: xtables-lock
                readOnly: false
              - name: config-volume
                mountPath: /etc/coredns
              - name: kube-dns-config
                mountPath: /etc/kube-dns
        volumes:
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
          - name: kube-dns-config
            configMap:
              name: kube-dns
              optional: true
          - name: config-volume
            configMap:
              name: node-local-dns
              items:
                - key: Corefile
                  path: Corefile.base
  ---
  # Headless Service has no ClusterIP and returns Pod IPs via DNS.
  # Used for Prometheus service discovery of node-local-dns metrics.
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/port: "9253"
      prometheus.io/scrape: "true"
    labels:
      k8s-app: node-local-dns
    name: node-local-dns
    namespace: kube-system
  spec:
    clusterIP: None
    ports:
      - name: metrics
        port: 9253
        targetPort: 9253
    selector:
      k8s-app: node-local-dns
  ```

  Warning

  The application works correctly only in the `kube-system` namespace.
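Substituting the service IP for the `<kube-dns_IP_address>` placeholder can be scripted. A minimal Python sketch (the address 10.96.128.2 is an example stand-in for the value returned by the earlier `kubectl get svc kube-dns` command):

```python
# Replace the <kube-dns_IP_address> placeholder with the kube-dns service IP.
# The address below is an example; use the value from `kubectl get svc kube-dns`.
kube_dns_ip = "10.96.128.2"

# Demonstrated on a single Corefile line; apply the same replace to the whole
# node-local-dns.yaml file, e.g. via pathlib.Path.read_text()/write_text().
corefile_line = "bind 169.254.20.10 <kube-dns_IP_address>"
patched = corefile_line.replace("<kube-dns_IP_address>", kube_dns_ip)
print(patched)  # bind 169.254.20.10 10.96.128.2
```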
- Create resources for NodeLocal DNS:

  ```
  kubectl apply -f node-local-dns.yaml
  ```

  Result:

  ```
  serviceaccount/node-local-dns created
  service/kube-dns-upstream created
  configmap/node-local-dns created
  daemonset.apps/node-local-dns created
  service/node-local-dns created
  ```
- Make sure the DaemonSet is successfully deployed and running:

  ```
  kubectl get ds -l k8s-app=node-local-dns -n kube-system
  ```

  Result:

  ```
  NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  node-local-dns   3         3         3       3            3           <none>          24m
  ```
Create a test environment
To test the local DNS, the nettool pod containing the dnsutils network utility suite will be launched in your Managed Service for Kubernetes cluster.
- Run the `nettool` pod:

  ```
  kubectl run nettool --image cr.yandex/yc/demo/network-multitool -- sleep infinity
  ```

- Make sure the pod has switched to the `Running` state:

  ```
  kubectl get pods
  ```
- Find out which Managed Service for Kubernetes cluster node is hosting the `nettool` pod:

  ```
  kubectl get pod nettool -o wide
  ```

  You can find the node name in the `NODE` column, for example:

  ```
  NAME      READY   STATUS    RESTARTS   AGE   IP          NODE          NOMINATED NODE   READINESS GATES
  nettool   1/1     Running   0          23h   10.1.0.68   <node_name>   <none>           <none>
  ```
- Get the IP address of the pod running NodeLocal DNS on this node:

  ```
  kubectl get pod -o wide -n kube-system | grep 'node-local.*<node_name>'
  ```

  Result:

  ```
  node-local-dns-gv68c   1/1   Running   0   26m   <pod_IP_address>   <node_name>   <none>   <none>
  ```
Check the NodeLocal DNS functionality
To test the local DNS, several DNS requests will be made from the nettool pod. This will change the metrics for the number of DNS requests on the pod servicing NodeLocal DNS.
- Get the values of the DNS request metrics before testing:

  ```
  kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
  ```

  Result:

  ```
  # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
  # TYPE coredns_dns_requests_total counter
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 18
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="cluster.local."} 18
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="."} 18
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="cluster.local."} 18
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="cluster.local."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="in-addr.arpa."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="ip6.arpa."} 1
  ```

  The result shows that NodeLocal DNS receives DNS requests on two IP addresses:
  - The address matching the `kube-dns` cluster IP. Here, this is 10.96.128.2:53; the actual value may differ.

    This is the main address. NodeLocal DNS configures iptables to redirect requests addressed to `kube-dns` to the `node-local-dns` pod on the same node.

  - The NodeLocal DNS local address (169.254.20.10).

    This is a fallback address. You can use it to access the `node-local-dns` pod directly.
- Run these DNS requests:

  ```
  kubectl exec -ti nettool -- nslookup kubernetes && \
  kubectl exec -ti nettool -- nslookup kubernetes.default && \
  kubectl exec -ti nettool -- nslookup ya.ru
  ```

  Result (IP addresses may differ):

  ```
  Server:         10.96.128.2
  Address:        10.96.128.2#53

  Name:   kubernetes.default.svc.cluster.local
  Address: 10.96.128.1

  Server:         10.96.128.2
  Address:        10.96.128.2#53

  Name:   kubernetes.default.svc.cluster.local
  Address: 10.96.128.1

  Server:         10.96.128.2
  Address:        10.96.128.2#53

  Non-authoritative answer:
  Name:   ya.ru
  Address: 5.255.255.242
  Name:   ya.ru
  Address: 77.88.44.242
  Name:   ya.ru
  Address: 77.88.55.242
  Name:   ya.ru
  Address: 2a02:6b8::2:242
  ```
- Get the DNS request metric values again:

  ```
  kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
  ```

  Result:

  ```
  # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
  # TYPE coredns_dns_requests_total counter
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 27
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="cluster.local."} 30
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="."} 25
  coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="AAAA",zone="cluster.local."} 26
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="cluster.local."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="in-addr.arpa."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="ip6.arpa."} 1
  ```

  The result shows that the metric values have increased for the `kube-dns` address but remain unchanged for the NodeLocal DNS local address. This means the pods continue to send DNS requests to the `kube-dns` address, and those requests are now handled by NodeLocal DNS.
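To compare the two metric dumps programmatically, you can aggregate `coredns_dns_requests_total` by its `server` label. A rough Python sketch: `requests_per_server` is a hypothetical helper, and the sample lines are abridged from the outputs above (your counter values will differ):

```python
import re

def requests_per_server(metrics_text):
    """Sum coredns_dns_requests_total samples by their server label."""
    pattern = re.compile(
        r'coredns_dns_requests_total\{[^}]*server="([^"]+)"[^}]*\}\s+(\d+)')
    totals = {}
    for server, value in pattern.findall(metrics_text):
        totals[server] = totals.get(server, 0) + int(value)
    return totals

# Abridged sample lines from the metric dumps before and after the test queries.
before = '''\
coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 18
coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1'''
after = '''\
coredns_dns_requests_total{family="1",proto="udp",server="dns://10.96.128.2:53",type="A",zone="."} 27
coredns_dns_requests_total{family="1",proto="udp",server="dns://169.254.20.10:53",type="other",zone="."} 1'''

# Per-server growth: only the kube-dns address should grow.
growth = {server: count - requests_per_server(before).get(server, 0)
          for server, count in requests_per_server(after).items()}
print(growth)  # {'dns://10.96.128.2:53': 9, 'dns://169.254.20.10:53': 0}
```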
Delete NodeLocal DNS
Delete the NodeLocal DNS application as described in this guide.
Run this command:

```
kubectl delete -f node-local-dns.yaml
```

Result:

```
serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted
```
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the resources depending on how you created them: manually, or using Terraform.

  Terraform

  - In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
  - Delete the resources:

    - Run this command:

      ```
      terraform destroy
      ```

    - Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
- If you used static public IP addresses to access your Managed Service for Kubernetes cluster or nodes, release and delete them.