Setting up NodeLocal DNS Cache in Yandex Managed Service for Kubernetes
To reduce the number of DNS requests to a Managed Service for Kubernetes cluster, enable NodeLocal DNS Cache.
Tip
If a Managed Service for Kubernetes cluster contains more than 50 nodes, use automatic DNS scaling.
Warning
If the Managed Service for Kubernetes cluster uses the Cilium network policy controller, the setup has certain specifics. Use the dedicated guide.
By default, pods send requests to the kube-dns service: in /etc/resolv.conf, the nameserver field is set to the ClusterIP of the kube-dns service. Connections to this ClusterIP are established via iptables rules configured by kube-proxy.
Enabling NodeLocal DNS Cache in a Managed Service for Kubernetes cluster deploys a DaemonSet: a caching agent (the node-local-dns pod) runs on each Managed Service for Kubernetes node. User pods then send their DNS requests to the agent running on their node.
If the request is in the agent's cache, the agent responds directly. Otherwise, the agent opens a TCP connection to the kube-dns ClusterIP. By default, the caching agent makes cache-miss requests to kube-dns for the cluster.local DNS zone of the Managed Service for Kubernetes cluster.
This helps avoid DNAT rules, connection tracking, and the associated limits on the number of connections.
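To see this mapping for yourself, you can compare the nameserver a pod actually uses with the kube-dns service address. A minimal check, assuming the dnsutils test pod created later in this tutorial:

  # DNS server written into the pod's /etc/resolv.conf:
  kubectl exec dnsutils -- cat /etc/resolv.conf
  # ClusterIP of the kube-dns service, for comparison:
  kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'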
To set up DNS request caching:
- Install NodeLocal DNS.
- Change the NodeLocal DNS Cache configuration.
- Run DNS requests.
- Set up traffic routing through NodeLocal DNS.
- Check logs.
If you no longer need the resources you created, delete them.
Required paid resources
The support cost for this solution includes:
- Fee for using the master and outgoing traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
- Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
- Fee for a public IP address for cluster nodes (see Virtual Private Cloud pricing).
Getting started
Create your infrastructure
Manually

- Create a cloud network and subnet.
- Create a service account with the k8s.clusters.agent and vpc.publicAdmin roles.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a Managed Service for Kubernetes cluster and node group with public internet access and the preconfigured security groups.
Terraform

- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-node-local-dns.tf configuration file of the Managed Service for Kubernetes cluster to the same working directory. This file describes:
  - The Managed Service for Kubernetes cluster.
  - The service account for the Managed Service for Kubernetes cluster and node group.
  - Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Specify the following in the configuration file:
- Folder ID.
- Kubernetes versions for the Managed Service for Kubernetes cluster and node groups.
- Managed Service for Kubernetes cluster CIDR.
- Name of the Managed Service for Kubernetes cluster service account.
- Make sure the Terraform configuration files are correct using this command:

  terraform validate

  Terraform will show any errors found in your configuration files.
- Create the required infrastructure:
  - Run this command to view the planned changes:

    terraform plan

    If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.
  - If everything looks correct, apply the changes:
    - Run this command:

      terraform apply

    - Confirm updating the resources.
    - Wait for the operation to complete.
- All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
Set up your environment
- If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

  By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the yc config set folder-id <folder_ID> command. You can also set a different folder for any specific command using the --folder-name or --folder-id parameter.
- Install kubectl and configure it to work with the new cluster.
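For example, you can fetch the cluster credentials with the CLI. A short sketch, assuming your cluster has a public endpoint (the <cluster_name> placeholder is whatever name you gave the cluster):

  # List clusters in the folder to find the name:
  yc managed-kubernetes cluster list
  # Add the cluster credentials to the kubectl configuration:
  yc managed-kubernetes cluster get-credentials <cluster_name> --external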
Install NodeLocal DNS
Install NodeLocal DNS using Cloud Marketplace as described in this guide, or install it manually:
- Get the kube-dns service IP address:

  kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
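If you prefer, you can substitute this address into the manifest automatically once the file from the next step exists. A small sketch, assuming the file keeps the literal <kube-dns_IP_address> placeholder:

  # Capture the kube-dns ClusterIP and patch the manifest in place:
  KUBE_DNS_IP=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
  sed -i "s/<kube-dns_IP_address>/${KUBE_DNS_IP}/g" node-local-dns.yaml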
- Create a file named node-local-dns.yaml. In the node-local-dns DaemonSet settings, specify the kube-dns IP address:
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Modified for Yandex Cloud Usage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-upstream
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "KubeDNSUpstream"
spec:
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    k8s-app: kube-dns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
data:
  Corefile: |
    cluster.local:53 {
      errors
      cache {
        success 9984 30
        denial 9984 5
      }
      reload
      loop
      bind 169.254.20.10 <kube-dns_IP_address>
      forward . __PILLAR__CLUSTER__DNS__ {
        prefer_udp
      }
      prometheus :9253
      health 169.254.20.10:8080
    }
    in-addr.arpa:53 {
      errors
      cache 30
      reload
      loop
      bind 169.254.20.10 <kube-dns_IP_address>
      forward . __PILLAR__CLUSTER__DNS__ {
        prefer_udp
      }
      prometheus :9253
    }
    ip6.arpa:53 {
      errors
      cache 30
      reload
      loop
      bind 169.254.20.10 <kube-dns_IP_address>
      forward . __PILLAR__CLUSTER__DNS__ {
        prefer_udp
      }
      prometheus :9253
    }
    .:53 {
      errors
      cache 30
      reload
      loop
      bind 169.254.20.10 <kube-dns_IP_address>
      forward . __PILLAR__UPSTREAM__SERVERS__ {
        prefer_udp
      }
      prometheus :9253
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    k8s-app: node-local-dns
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
      annotations:
        prometheus.io/port: "9253"
        prometheus.io/scrape: "true"
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: node-local-dns
      hostNetwork: true
      dnsPolicy: Default  # Don't use cluster DNS.
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - effect: "NoExecute"
          operator: "Exists"
        - effect: "NoSchedule"
          operator: "Exists"
      containers:
        - name: node-cache
          image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
          resources:
            requests:
              cpu: 25m
              memory: 5Mi
          args: [ "-localip", "169.254.20.10,<kube-dns_IP_address>", "-conf", "/etc/Corefile", "-upstreamsvc", "kube-dns-upstream" ]
          securityContext:
            privileged: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9253
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              host: 169.254.20.10
              path: /health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: config-volume
              mountPath: /etc/coredns
            - name: kube-dns-config
              mountPath: /etc/kube-dns
      volumes:
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
        - name: config-volume
          configMap:
            name: node-local-dns
            items:
              - key: Corefile
                path: Corefile.base
---
# A headless service is a service with a service IP but instead of load-balancing it will return the IPs of our associated Pods.
# We use this to expose metrics to Prometheus.
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9253"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-local-dns
  name: node-local-dns
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9253
      targetPort: 9253
  selector:
    k8s-app: node-local-dns

Warning

The application works correctly only with the kube-system namespace.
- Create resources for NodeLocal DNS:

  kubectl apply -f node-local-dns.yaml

  Result:

  serviceaccount/node-local-dns created
  service/kube-dns-upstream created
  configmap/node-local-dns created
  daemonset.apps/node-local-dns created
  service/node-local-dns created
- Make sure the DaemonSet is successfully deployed and running:

  kubectl get ds -l k8s-app=node-local-dns -n kube-system

  Result:

  NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  node-local-dns   3         3         3       3            3           <none>          24m
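To additionally confirm that exactly one caching agent is running on each Managed Service for Kubernetes node, list the pods together with their node assignments:

  # One node-local-dns pod per node is expected:
  kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide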
Change the NodeLocal DNS Cache configuration
To change the configuration, edit the relevant ConfigMap. For example, to enable DNS request logging for the cluster.local zone:
- Run this command:

  kubectl -n kube-system edit configmap node-local-dns
- Add the log line to the cluster.local zone configuration:

  ...
  apiVersion: v1
  data:
    Corefile: |
      cluster.local:53 {
        log
        errors
        cache {
          success 9984 30
          denial 9984 5
        }
  ...
- Save your changes.
Result:
configmap/node-local-dns edited
It may take several minutes to update the configuration.
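To confirm the edit was applied, you can re-read the ConfigMap; the log directive should now appear at the top of the cluster.local block:

  kubectl -n kube-system get configmap node-local-dns -o yaml | grep -A 2 'cluster.local:53'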
Run DNS requests
To run test requests:
- Run the pod:

  kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

  Result:

  pod/dnsutils created
- Make sure the pod status changed to Running:

  kubectl get pods dnsutils

  Result:

  NAME       READY   STATUS    RESTARTS   AGE
  dnsutils   1/1     Running   0          26m
- Connect to the pod:

  kubectl exec -i -t dnsutils -- sh
- Get the IP address of the local DNS cache:

  nslookup kubernetes.default

  Result:

  Server:    <kube-dns_IP_address>
  Address:   <kube-dns_IP_address>#53

  Name:      kubernetes.default.svc.cluster.local
  Address:   10.96.128.1
- Run the following requests:

  dig +short @169.254.20.10 www.com
  dig +short @<kube-dns_IP_address> example.com

  Result:

  # dig +short @169.254.20.10 www.com
  52.128.23.153
  # dig +short @<kube-dns_IP_address> example.com
  93.184.216.34

  After node-local-dns starts, the iptables rules will be configured so that the local DNS cache responds at both addresses: <kube-dns_IP_address>:53 and 169.254.20.10:53.

  You can access kube-dns using the ClusterIP of the kube-dns-upstream service. You may need this address to configure request forwarding, as shown below.
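For example, you can look up that upstream address with kubectl and watch the cache at work from inside the dnsutils pod: when the same name is queried twice, the second response should report a query time close to zero.

  # On your workstation: ClusterIP of the kube-dns-upstream service:
  kubectl get svc kube-dns-upstream -n kube-system -o jsonpath='{.spec.clusterIP}'
  # Inside the dnsutils pod: repeat a query and compare the reported timings:
  dig @169.254.20.10 example.com | grep "Query time"
  dig @169.254.20.10 example.com | grep "Query time"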
Set up traffic routing through NodeLocal DNS
- Create a pod for network traffic configuration:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: dnschange
    namespace: default
  spec:
    priorityClassName: system-node-critical
    hostNetwork: true
    dnsPolicy: Default
    hostPID: true
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - effect: "NoExecute"
        operator: "Exists"
      - effect: "NoSchedule"
        operator: "Exists"
    containers:
      - name: dnschange
        image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
        tty: true
        stdin: true
        securityContext:
          privileged: true
        command:
          - nsenter
          - --target
          - "1"
          - --mount
          - --uts
          - --ipc
          - --net
          - --pid
          - --
          - sleep
          - "infinity"
        imagePullPolicy: IfNotPresent
    restartPolicy: Always
  EOF
- Connect to the dnschange pod you created:

  kubectl exec -it dnschange -- sh
- Open the /etc/default/kubelet file in the container to edit it:

  vi /etc/default/kubelet
- In the file, add the --cluster-dns=169.254.20.10 parameter (the NodeLocal DNS cache address) to the KUBELET_OPTS variable value:

  KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubeconfig.conf --cert-dir=/var/lib/kubelet/pki/ --cloud-provider=external --config=/home/kubernetes/kubelet-config.yaml --kubeconfig=/etc/kubernetes/kubelet-kubeconfig.conf --resolv-conf=/run/systemd/resolve/resolv.conf --v=2 --cluster-dns=169.254.20.10"
Save the file and run the
kubeletrestart command:systemctl daemon-reload && systemctl restart kubeletThen, exit container mode by running the
exitcommand. -
Delete the
dnschangepod:kubectl delete pod dnschange -
- To make all pods use NodeLocal DNS, restart them, e.g., using this command:

  kubectl get deployments --all-namespaces | \
    tail +2 | \
    awk '{ cmd=sprintf("kubectl rollout restart deployment -n %s %s", $1, $2); system(cmd) }'
Alternatively, you can explicitly configure NodeLocal DNS for individual deployments:

- Run this command:

  kubectl edit deployment <pod_deployment_name>
- In the pod specification, replace the dnsPolicy: ClusterFirst setting in the spec.template.spec key with the following section:

  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 169.254.20.10
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
      - ru-central1.internal
      - internal
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "5"
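To spot-check that a pod picked up the new settings, inspect its /etc/resolv.conf; with the configuration above it should list the link-local cache address. A sketch using the dnsutils test pod as an example, assuming it was recreated after the change:

  kubectl exec dnsutils -- cat /etc/resolv.conf
  # Expected to contain a line like: nameserver 169.254.20.10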
Check logs
Run this command:
kubectl logs --namespace=kube-system -l k8s-app=node-local-dns -f
To stop displaying logs, press Ctrl + C.
Result:
...
[INFO] 10.112.128.7:50527 - 41658 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097538s
[INFO] 10.112.128.7:44256 - 26847 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.057075876s
...
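For example, to check whether queries for a specific name reach the cache (with query logging enabled as described above), you can filter the log stream:

  kubectl logs --namespace=kube-system -l k8s-app=node-local-dns --tail=200 | grep example.com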
Stop the DaemonSet
To disable NodeLocal DNS Cache, i.e., stop the DaemonSet, run this command:
kubectl delete -f node-local-dns.yaml
Result:
serviceaccount "node-local-dns" deleted
service "kube-dns-upstream" deleted
configmap "node-local-dns" deleted
daemonset.apps "node-local-dns" deleted
service "node-local-dns" deleted
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the resources depending on how you created them. If you used Terraform:
  - In the terminal window, go to the directory containing the infrastructure plan.
Warning
Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.
  - Delete the resources:
    - Run this command:

      terraform destroy
    - Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
- If you used static public IP addresses to access your Managed Service for Kubernetes cluster or nodes, release and delete them.