Setting up NodeLocal DNS Cache
To reduce the number of DNS queries on a Managed Service for Kubernetes cluster, enable NodeLocal DNS Cache.
Tip
If a Managed Service for Kubernetes cluster is made up of over 50 nodes, use automatic DNS scaling.
By default, pods send queries to the kube-dns service: the nameserver field in the /etc/resolv.conf file is set to the ClusterIP of kube-dns, and connections to that ClusterIP are established using iptables.
When NodeLocal DNS Cache is enabled, a caching agent DaemonSet (node-local-dns) is deployed in the cluster, and user pods send their queries to the agent running on their Managed Service for Kubernetes node. If a query is in the agent's cache, the agent responds directly. Otherwise, the agent opens a TCP connection to the kube-dns ClusterIP. By default, the caching agent sends cache-miss requests to kube-dns for the cluster.local DNS zone of the Managed Service for Kubernetes cluster.
This helps avoid DNAT rules and connection tracking and reduces the number of queries reaching kube-dns.
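The switch is visible in each pod's /etc/resolv.conf. As a self-contained sketch (10.96.128.2 is a made-up kube-dns ClusterIP, not a value from any real cluster), this is how you might check which nameserver a pod is using:

```shell
# Write a sample resolv.conf such as a pod might have with plain kube-dns
# (10.96.128.2 is a hypothetical ClusterIP used only for this illustration).
cat > /tmp/pod-resolv.conf <<'EOF'
nameserver 10.96.128.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF

# Extract the nameserver; with NodeLocal DNS enabled and traffic rerouted,
# you would expect 169.254.20.10 here instead.
nameserver=$(awk '/^nameserver/ { print $2 }' /tmp/pod-resolv.conf)
echo "$nameserver"
```

On a live cluster, the equivalent check is `kubectl exec <pod> -- cat /etc/resolv.conf`.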
To set up DNS query caching:
- Install NodeLocal DNS
- Change the NodeLocal DNS Cache configuration
- Run DNS queries
- Set up traffic through NodeLocal DNS
- Check logs
Getting started
Create an infrastructure
-
Create a cloud network and subnet.
-
Create a service account with the editor role.
-
Create security groups for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
-
Create a Managed Service for Kubernetes cluster and a node group with public internet access and the security groups prepared earlier.
-
If you do not have Terraform yet, install it.
-
Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
-
Configure and initialize a provider. There is no need to create a provider configuration file manually; you can download it.
-
Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
-
Download the k8s-node-local-dns.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. The file describes:
-
Managed Service for Kubernetes cluster.
-
Service account required for the cluster and Managed Service for Kubernetes node group to operate.
-
Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
-
Specify the following in the configuration file:
- Folder ID.
- Kubernetes versions for the cluster and Managed Service for Kubernetes node groups.
- Managed Service for Kubernetes cluster CIDR.
- Name of the Managed Service for Kubernetes cluster service account.
-
Make sure the Terraform configuration files are correct using this command:
terraform validate
If there are any errors in the configuration files, Terraform will point them out.
-
Create the required infrastructure:
-
Run the command to view planned changes:
terraform plan
If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
-
If you are happy with the planned changes, apply them:
-
Run the command:
terraform apply
-
Confirm the update of resources.
-
Wait for the operation to complete.
-
All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Prepare the environment
-
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
-
Install kubectl
and configure it to work with the created cluster.
Install NodeLocal DNS
Install NodeLocal DNS using Cloud Marketplace as described in this guide, or follow the steps below to install it manually:
-
Retrieve the service IP address for kube-dns:

kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
-
Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns service IP address:
```yaml
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Modified for Yandex Cloud Usage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-upstream
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "KubeDNSUpstream"
spec:
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    k8s-app: kube-dns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
        }
        prometheus :9253
        health 169.254.20.10:8080
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__CLUSTER__DNS__ {
            prefer_udp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.20.10 <kube-dns_IP_address>
        forward . __PILLAR__UPSTREAM__SERVERS__ {
            prefer_udp
        }
        prometheus :9253
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    k8s-app: node-local-dns
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
      annotations:
        prometheus.io/port: "9253"
        prometheus.io/scrape: "true"
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: node-local-dns
      hostNetwork: true
      dnsPolicy: Default # Don't use cluster DNS.
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - effect: "NoExecute"
          operator: "Exists"
        - effect: "NoSchedule"
          operator: "Exists"
      containers:
        - name: node-cache
          image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
          resources:
            requests:
              cpu: 25m
              memory: 5Mi
          args: [ "-localip", "169.254.20.10,<kube-dns_IP_address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
          securityContext:
            privileged: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9253
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              host: 169.254.20.10
              path: /health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: config-volume
              mountPath: /etc/coredns
            - name: kube-dns-config
              mountPath: /etc/kube-dns
      volumes:
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
        - name: config-volume
          configMap:
            name: node-local-dns
            items:
              - key: Corefile
                path: Corefile.base
---
# A headless service is a service with a service IP, but instead of load-balancing
# it will return the IPs of our associated Pods.
# We use this to expose metrics to Prometheus.
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9253"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-local-dns
  name: node-local-dns
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9253
      targetPort: 9253
  selector:
    k8s-app: node-local-dns
```
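Substituting the retrieved IP into the manifest can be scripted. A minimal sketch, assuming the placeholder text `<kube-dns_IP_address>` is used verbatim in the file and using a hypothetical ClusterIP of 10.96.128.2:

```shell
# Hypothetical ClusterIP; on a real cluster, use the value returned by
# `kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`.
KUBE_DNS_IP="10.96.128.2"

# A one-line stand-in for node-local-dns.yaml containing the placeholder.
printf 'bind 169.254.20.10 <kube-dns_IP_address>\n' > /tmp/node-local-dns.yaml

# Replace every occurrence of the placeholder with the actual IP.
sed -i "s/<kube-dns_IP_address>/${KUBE_DNS_IP}/g" /tmp/node-local-dns.yaml
cat /tmp/node-local-dns.yaml
```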
-
Create resources for NodeLocal DNS:
kubectl apply -f node-local-dns.yaml
Result:
```
serviceaccount/node-local-dns created
service/kube-dns-upstream created
configmap/node-local-dns created
daemonset.apps/node-local-dns created
service/node-local-dns created
```
-
Make sure the DaemonSet has been deployed successfully and is running:
kubectl get ds -l k8s-app=node-local-dns -n kube-system
Result:
```
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   3         3         3       3            3           <none>          24m
```
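A scripted readiness check can compare the DESIRED and READY columns. A sketch over a hard-coded copy of the sample output (on a live cluster, pipe the `kubectl get ds` command instead):

```shell
# Sample output, hard-coded so the sketch is self-contained.
sample='NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   3         3         3       3            3           <none>          24m'

# Column 2 is DESIRED, column 4 is READY; the DaemonSet is healthy when equal.
status=$(echo "$sample" | awk 'NR == 2 { print ($2 == $4) ? "ready" : "not-ready" }')
echo "$status"
```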
Change the NodeLocal DNS Cache configuration
To change the configuration, edit the appropriate configmap. For example, to enable DNS request logging for the cluster.local zone:
-
Run this command:
kubectl -n kube-system edit configmap node-local-dns
-
Add the log line to the cluster.local zone configuration:

```yaml
...
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        log
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
...
```
-
Save your changes.
Result:
configmap/node-local-dns edited
It may take several minutes to update the configuration.
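Once the configuration propagates, you can confirm that the log directive is in place. A self-contained sketch that greps a local copy of the Corefile fragment (on a cluster, you would pipe `kubectl -n kube-system get configmap node-local-dns -o yaml` instead):

```shell
# Local stand-in for the updated cluster.local block of the Corefile.
cat > /tmp/Corefile <<'EOF'
cluster.local:53 {
    log
    errors
}
EOF

# Count occurrences of the bare `log` directive.
log_lines=$(grep -c '^[[:space:]]*log$' /tmp/Corefile)
echo "$log_lines"
```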
Run DNS queries
To run test queries:
-
Run the pod:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
Result:
pod/dnsutils created
-
Make sure the pod status changed to Running:

kubectl get pods dnsutils
Result:
```
NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   0          26m
```
-
Connect to a pod:
kubectl exec -i -t dnsutils -- sh
-
Get the IP address of the local DNS cache:
nslookup kubernetes.default
Result:
```
Server:    <kube-dns_IP_address>
Address:   <kube-dns_IP_address>#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.128.1
```
-
Run the following queries:
```
dig +short @169.254.20.10 www.com
dig +short @<kube-dns_IP_address> example.com
```
Result:
```
# dig +short @169.254.20.10 www.com
52.128.23.153
# dig +short @<kube-dns_IP_address> example.com
93.184.216.34
```
After node-local-dns launches, iptables rules are configured so that the local DNS responds at both addresses: <kube-dns_IP_address>:53 and 169.254.20.10:53.
The kube-dns service can be accessed at the ClusterIP of kube-dns-upstream. You may need this address to configure request forwarding.
Set up traffic through NodeLocal DNS
-
Create a pod for network traffic setup:
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnschange
  namespace: default
spec:
  priorityClassName: system-node-critical
  hostNetwork: true
  dnsPolicy: Default
  hostPID: true
  tolerations:
    - key: "CriticalAddonsOnly"
      operator: "Exists"
    - effect: "NoExecute"
      operator: "Exists"
    - effect: "NoSchedule"
      operator: "Exists"
  containers:
    - name: dnschange
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      tty: true
      stdin: true
      securityContext:
        privileged: true
      command:
        - nsenter
        - --target
        - "1"
        - --mount
        - --uts
        - --ipc
        - --net
        - --pid
        - --
        - sleep
        - "infinity"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
```
-
Connect to the dnschange pod you created:

kubectl exec -it dnschange -- sh
-
Open the /etc/default/kubelet file in the container for editing:

vi /etc/default/kubelet
-
In the file, add the --cluster-dns=169.254.20.10 parameter (the NodeLocal DNS cache address) to the KUBELET_OPTS variable value:

```
KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubeconfig.conf --cert-dir=/var/lib/kubelet/pki/ --cloud-provider=external --config=/home/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet-kubeconfig.conf --resolv-conf=/run/systemd/resolve/resolv.conf --v=2 --cluster-dns=169.254.20.10"
```
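The edit can also be scripted with sed. A sketch against a stripped-down copy of the file (the real file carries many more flags than the abbreviated `--v=2` used here):

```shell
# Minimal stand-in for /etc/default/kubelet with an abbreviated flag list.
cat > /tmp/kubelet <<'EOF'
KUBELET_OPTS="--v=2"
EOF

# Append --cluster-dns=169.254.20.10 inside the quoted KUBELET_OPTS value.
sed -i 's|^KUBELET_OPTS="\(.*\)"$|KUBELET_OPTS="\1 --cluster-dns=169.254.20.10"|' /tmp/kubelet
cat /tmp/kubelet
```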
-
Save the file and run the kubelet restart command:

systemctl daemon-reload && systemctl restart kubelet
Next, exit the container by running the exit command.
-
Delete the dnschange pod:

kubectl delete pod dnschange
-
To make sure all pods start running through NodeLocal DNS, restart them, e.g., using the command below:
```
kubectl get deployments --all-namespaces | \
  tail +2 | \
  awk '{ cmd=sprintf("kubectl rollout restart deployment -n %s %s", $1, $2); system(cmd) }'
```
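Before running the restart loop for real, it can help to preview the commands it would execute. A dry-run sketch over made-up sample output (namespaces and deployment names here are placeholders; `tail -n +2` is the portable spelling of `tail +2`):

```shell
# Sample `kubectl get deployments --all-namespaces` output, hard-coded
# so the sketch is self-contained.
sample='NAMESPACE   NAME    READY   UP-TO-DATE   AVAILABLE   AGE
default     myapp   1/1     1            1           5d
web         front   2/2     2            2           9d'

# Print the restart commands instead of executing them.
commands=$(echo "$sample" | tail -n +2 | \
  awk '{ printf("kubectl rollout restart deployment -n %s %s\n", $1, $2) }')
echo "$commands"
```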
-
Run this command:
kubectl edit deployment <pod_deployment_name>
-
In the pod specification, replace the dnsPolicy: ClusterFirst setting in the spec.template.spec key with the following section:

```yaml
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 169.254.20.10
  searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    - ru-central1.internal
    - internal
    - my.dns.search.suffix
  options:
    - name: ndots
      value: "5"
```
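For context, here is a minimal, hypothetical Deployment (the name and image are placeholders, not from this guide) with the dnsPolicy and dnsConfig section placed under spec.template.spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 169.254.20.10      # NodeLocal DNS cache address
        options:
          - name: ndots
            value: "5"
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # placeholder image
```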
Check logs
Run this command:
kubectl logs --namespace=kube-system -l k8s-app=node-local-dns -f
To stop displaying a log, press Ctrl + C.
Result:
...
[INFO] 10.112.128.7:50527 - 41658 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097538s
[INFO] 10.112.128.7:44256 - 26847 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.057075876s
...
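The trailing field of each log line is the response time. A self-contained sketch that pulls the slowest latency out of the sample lines above with awk:

```shell
# Two sample log lines, copied from the output above.
logs='[INFO] 10.112.128.7:50527 - 41658 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097538s
[INFO] 10.112.128.7:44256 - 26847 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.057075876s'

# Strip the trailing "s" from the last field and track the maximum latency.
max_latency=$(echo "$logs" | \
  awk '{ gsub(/s$/, "", $NF); if ($NF + 0 > max) max = $NF + 0 } END { printf "%.4f\n", max }')
echo "$max_latency"
```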
Stop the DaemonSet
To disable NodeLocal DNS Cache, delete the DaemonSet and the related resources:
kubectl delete -f node-local-dns.yaml
Result:
serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster.
- If static public IP addresses were used for Managed Service for Kubernetes cluster and node access, release and delete them.
-
In the command line, go to the directory containing the current Terraform configuration file with the infrastructure plan.
-
Delete the k8s-node-local-dns.tf configuration file.
-
Make sure the Terraform configuration files are correct using this command:
terraform validate
If there are any errors in the configuration files, Terraform will point them out.
-
Run the command to view planned changes:
terraform plan
If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step. No resources are updated.
-
If you are happy with the planned changes, apply them:
-
Run the command:
terraform apply
-
Confirm the update of resources.
-
Wait for the operation to complete.
-
All the resources described in the k8s-node-local-dns.tf configuration file will be deleted.
-
If static public IP addresses were used for Managed Service for Kubernetes cluster and node access, release and delete them.