Configuring NodeLocal DNS for the Cilium network policy controller
In this article, you will learn how to configure a local DNS for the Cilium network policy controller using the Local Redirect Policy.
To set up a local DNS in a Managed Service for Kubernetes cluster:
- Create specifications for NodeLocal DNS and Local Redirect Policy.
- Create a test environment.
- Check the NodeLocal DNS functionality.
Getting started
- Create a service account and assign it the `k8s.tunnelClusters.agent` and `vpc.publicAdmin` roles (a CLI sketch follows this list).
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.

  Warning

  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a Managed Service for Kubernetes cluster with any suitable configuration. When creating it, specify the service account and security groups you prepared in advance. Under Cluster network settings, select Enable tunnel mode.
- Create a node group with any suitable configuration. When creating it, specify the preconfigured security groups.
- Install kubectl and configure it to work with the new cluster.
- Get the `kube-dns` service IP address:

  ```bash
  kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}
  ```
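If you prefer the command line, the service account step above can be done with the yc CLI. This is a minimal sketch, assuming the yc CLI is already configured; the service account name `node-local-dns-sa` and the `<folder_ID>` and `<service_account_ID>` placeholders are examples, substitute your own values:

```bash
# Create a service account (the name is an example).
yc iam service-account create --name node-local-dns-sa

# Assign the required roles to the service account at the folder level.
yc resource-manager folder add-access-binding <folder_ID> \
  --role k8s.tunnelClusters.agent \
  --subject serviceAccount:<service_account_ID>

yc resource-manager folder add-access-binding <folder_ID> \
  --role vpc.publicAdmin \
  --subject serviceAccount:<service_account_ID>
```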
Create specifications for NodeLocal DNS and Local Redirect Policy
- Create a file named `node-local-dns.yaml`. In the `node-local-dns` DaemonSet settings, specify the `kube-dns` IP address you obtained earlier (a substitution sketch follows this list):

  ```yaml
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: node-local-dns
    namespace: kube-system
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: kube-dns-upstream
    namespace: kube-system
    labels:
      k8s-app: kube-dns
      kubernetes.io/name: "KubeDNSUpstream"
  spec:
    ports:
      - name: dns
        port: 53
        protocol: UDP
        targetPort: 53
      - name: dns-tcp
        port: 53
        protocol: TCP
        targetPort: 53
    selector:
      k8s-app: kube-dns
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: node-local-dns
    namespace: kube-system
  data:
    Corefile: |
      cluster.local:53 {
          errors
          cache {
              success 9984 30
              denial 9984 5
          }
          reload
          loop
          bind 0.0.0.0
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
          health
      }
      in-addr.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 0.0.0.0
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
      }
      ip6.arpa:53 {
          errors
          cache 30
          reload
          loop
          bind 0.0.0.0
          forward . __PILLAR__CLUSTER__DNS__ {
              prefer_udp
          }
          prometheus :9253
      }
      .:53 {
          errors
          cache 30
          reload
          loop
          bind 0.0.0.0
          forward . __PILLAR__UPSTREAM__SERVERS__ {
              prefer_udp
          }
          prometheus :9253
      }
  ---
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-local-dns
    namespace: kube-system
    labels:
      k8s-app: node-local-dns
  spec:
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
    selector:
      matchLabels:
        k8s-app: node-local-dns
    template:
      metadata:
        labels:
          k8s-app: node-local-dns
        annotations:
          prometheus.io/port: "9253"
          prometheus.io/scrape: "true"
      spec:
        priorityClassName: system-node-critical
        serviceAccountName: node-local-dns
        dnsPolicy: Default # Don't use cluster DNS.
        tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          - effect: "NoExecute"
            operator: "Exists"
          - effect: "NoSchedule"
            operator: "Exists"
        containers:
          - name: node-cache
            image: registry.k8s.io/dns/k8s-dns-node-cache:1.17.0
            resources:
              requests:
                cpu: 25m
                memory: 5Mi
            args:
              [
                "-localip",
                "169.254.20.10,<kube-dns_IP_address>",
                "-conf",
                "/etc/Corefile",
                "-upstreamsvc",
                "kube-dns-upstream",
                "-skipteardown=true",
                "-setupinterface=false",
                "-setupiptables=false"
              ]
            securityContext:
              privileged: true
            ports:
              - containerPort: 53
                name: dns
                protocol: UDP
              - containerPort: 53
                name: dns-tcp
                protocol: TCP
              - containerPort: 9253
                name: metrics
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
              initialDelaySeconds: 60
              timeoutSeconds: 5
            volumeMounts:
              - mountPath: /run/xtables.lock
                name: xtables-lock
                readOnly: false
              - name: config-volume
                mountPath: /etc/coredns
              - name: kube-dns-config
                mountPath: /etc/kube-dns
        volumes:
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
          - name: kube-dns-config
            configMap:
              name: kube-dns
              optional: true
          - name: config-volume
            configMap:
              name: node-local-dns
              items:
                - key: Corefile
                  path: Corefile.base
  ```

  Warning
  The application works correctly only with the `kube-system` namespace.
- Create the `node-local-dns-lrp.yaml` file:

  ```yaml
  ---
  apiVersion: "cilium.io/v2"
  kind: CiliumLocalRedirectPolicy
  metadata:
    name: "nodelocaldns"
    namespace: kube-system
  spec:
    redirectFrontend:
      serviceMatcher:
        serviceName: kube-dns
        namespace: kube-system
    redirectBackend:
      localEndpointSelector:
        matchLabels:
          k8s-app: node-local-dns
      toPorts:
        - port: "53"
          name: dns
          protocol: UDP
        - port: "53"
          name: dns-tcp
          protocol: TCP
  ```
- Create resources for NodeLocal DNS:

  ```bash
  kubectl apply -f node-local-dns.yaml
  ```

  Result:

  ```text
  serviceaccount/node-local-dns created
  service/kube-dns-upstream created
  configmap/node-local-dns created
  daemonset.apps/node-local-dns created
  ```
- Create resources for the Local Redirect Policy:

  ```bash
  kubectl apply -f node-local-dns-lrp.yaml
  ```

  Result:

  ```text
  ciliumlocalredirectpolicy.cilium.io/nodelocaldns created
  ```
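Before applying `node-local-dns.yaml`, replace the `<kube-dns_IP_address>` placeholder with the address you obtained earlier. Below is a minimal shell sketch, assuming GNU sed and the file names used above; the `kube_dns_ip` variable name is arbitrary:

```bash
# Get the kube-dns service IP and substitute it into the DaemonSet manifest.
kube_dns_ip=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
sed -i "s/<kube-dns_IP_address>/${kube_dns_ip}/" node-local-dns.yaml

# After applying both manifests, check that the DaemonSet pods are running
# and that the CiliumLocalRedirectPolicy object was created.
kubectl get daemonset node-local-dns -n kube-system
kubectl get ciliumlocalredirectpolicies -n kube-system
```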
Create a test environment
To test the local DNS, you will launch a `nettool` pod with the dnsutils network utility suite in your Managed Service for Kubernetes cluster.
- Run the `nettool` pod:

  ```bash
  kubectl run nettool --image cr.yandex/yc/demo/network-multitool -- sleep infinity
  ```
- Make sure the pod has switched to `Running`:

  ```bash
  kubectl get pods
  ```
- Find out which Managed Service for Kubernetes cluster node is hosting the `nettool` pod:

  ```bash
  kubectl get pod nettool -o wide
  ```

  You can find the node name in the `NODE` column, for example:

  ```text
  NAME      READY   STATUS    RESTARTS   AGE   IP          NODE          NOMINATED NODE   READINESS GATES
  nettool   1/1     Running   0          23h   10.1.0.68   <node_name>   <none>           <none>
  ```
- Get the IP address of the pod running NodeLocal DNS (see the sketch after this list):

  ```bash
  kubectl get pod -o wide -n kube-system | grep 'node-local.*<node_name>'
  ```

  Result:

  ```text
  node-local-dns-gv68c   1/1   Running   0   26m   <pod_IP_address>   <node_name>   <none>   <none>
  ```
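If you prefer not to copy values by hand, you can capture the node name and the NodeLocal DNS pod IP into shell variables. This is a minimal sketch, assuming the pod and label names from the manifests above; the variable names are arbitrary:

```bash
# Node that hosts the nettool pod.
node_name=$(kubectl get pod nettool -o jsonpath='{.spec.nodeName}')

# IP address of the node-local-dns pod running on that node.
node_local_dns_ip=$(kubectl get pod -n kube-system \
  -l k8s-app=node-local-dns \
  --field-selector spec.nodeName="${node_name}" \
  -o jsonpath='{.items[0].status.podIP}')

echo "${node_name} ${node_local_dns_ip}"
```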
Check the NodeLocal DNS functionality
To test the local DNS, you will make several DNS requests from the `nettool` pod. This will increase the DNS request metrics on the pod serving NodeLocal DNS. A script that automates the before-and-after comparison follows these steps.
- Get the values of the metrics for DNS requests before testing:

  ```bash
  kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
  ```

  Result:

  ```text
  # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
  # TYPE coredns_dns_requests_total counter
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="other",zone="."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="other",zone="cluster.local."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="other",zone="in-addr.arpa."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="other",zone="ip6.arpa."} 1
  ```
- Run these DNS requests:

  ```bash
  kubectl exec -ti nettool -- nslookup kubernetes && \
  kubectl exec -ti nettool -- nslookup kubernetes.default && \
  kubectl exec -ti nettool -- nslookup ya.ru
  ```

  Result (IP addresses may differ):

  ```text
  Name:      kubernetes.default.svc.cluster.local
  Address:   10.2.0.1

  Server:    10.2.0.2
  Address:   10.2.0.2#53

  Name:      kubernetes.default.svc.cluster.local
  Address:   10.2.0.1

  Server:    10.2.0.2
  Address:   10.2.0.2#53

  Non-authoritative answer:
  Name:      ya.ru
  Address:   87.250.250.242
  Name:      ya.ru
  Address:   2a02:6b8::2:242
  ```
- Make sure the metric values have increased:

  ```bash
  kubectl exec -ti nettool -- curl http://<pod_IP_address>:9253/metrics | grep coredns_dns_requests_total
  ```

  Result:

  ```text
  # HELP coredns_dns_requests_total Counter of DNS requests made per zone, protocol and family.
  # TYPE coredns_dns_requests_total counter
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="A",zone="."} 3
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="A",zone="cluster.local."} 6
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="AAAA",zone="."} 1
  coredns_dns_requests_total{family="1",proto="udp",server="dns://0.0.0.0:53",type="AAAA",zone="cluster.local."} 2
  ...
  ```
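The check above can be automated. This is a minimal sketch that sums the `coredns_dns_requests_total` counters before and after a lookup; `<pod_IP_address>` is the NodeLocal DNS pod address from the previous section, and the helper function name is arbitrary:

```bash
#!/usr/bin/env bash
# Sum all coredns_dns_requests_total counters exposed by the NodeLocal DNS pod.
total_dns_requests() {
  kubectl exec nettool -- curl -s "http://<pod_IP_address>:9253/metrics" \
    | grep '^coredns_dns_requests_total' \
    | awk '{sum += $NF} END {print sum}'
}

before=$(total_dns_requests)
kubectl exec nettool -- nslookup kubernetes.default > /dev/null
after=$(total_dns_requests)

# The counter should grow if kube-dns traffic is redirected to NodeLocal DNS.
echo "before=${before} after=${after}"
```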
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster (see the CLI sketch after this list).
- If you used static public IP addresses to access your Managed Service for Kubernetes cluster or nodes, release and delete them.
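A minimal sketch for deleting the cluster with the yc CLI; `<cluster_name>` is a placeholder for your cluster name or ID:

```bash
# Delete the Managed Service for Kubernetes cluster.
yc managed-kubernetes cluster delete <cluster_name>
```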