Configuring the Cilium network policy controller
This tutorial shows how to implement L3/L4 and L7 network policies.
To use the Cilium network policy controller in a cluster:
- Install and configure Hubble UI, a network activity monitoring tool.
- Create a test environment.
- Create an L3/L4 network policy.
- Create an L7 network policy.
Getting started
Set up the infrastructure
Manually
- Create a service account and assign the k8s.tunnelClusters.agent and vpc.publicAdmin roles to it.
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.

  Warning
  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a cluster with any suitable configuration.
  - In the Service account for resources and Service account for nodes fields, select From list, then select the service account you created from the drop-down list.
  - Under Master configuration, select the following values:
    - Public address: Auto.
    - Security groups: From list. Specify security groups for the cluster.
  - Under Cluster network settings, select Enable tunnel mode.
- Create a node group for the cluster with any suitable configuration. Under Network settings, select the following values:
  - Public address: Auto.
  - Security groups: From list. Specify security groups for the node groups.

Using Terraform
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables (one way is sketched at the end of this section) or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-cilium.tf configuration file to the same working directory. You will need this file to create the following resources:
  - Managed Service for Kubernetes cluster.
  - Node group for the cluster.
  - Service account for the cluster and its node group.
  - Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

  Warning
  The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- In k8s-cilium.tf, specify the following:
  - Folder ID.
  - Kubernetes version for the cluster and node groups.
  - Service account name.
- Make sure the Terraform configuration files are correct using this command:

terraform validate

If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run this command to view the planned changes:

terraform plan

If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

  - If everything looks correct, apply the changes:
    - Run this command:

terraform apply

    - Confirm updating the resources.
    - Wait for the operation to complete.

  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  Timeouts
The Terraform provider sets time limits for operations on a Managed Service for Kubernetes cluster and its node groups:
- Creating and editing a cluster: 30 minutes.
- Creating and updating a node group: 60 minutes.
- Deleting a node group: 20 minutes.
Operations exceeding these limits will be interrupted.
How do I modify these limits?
Add a timeouts section to the cluster and node group descriptions (the yandex_kubernetes_cluster and yandex_kubernetes_node_group resources, respectively). Here is an example:

resource "yandex_kubernetes_node_group" "<node_group_name>" {
  ...
  timeouts {
    create = "1h30m"
    update = "1h30m"
    delete = "30m"
  }
}
Get ready to use the cluster
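The kubectl and cilium commands below assume your kubeconfig already points at the new cluster. A minimal sketch for fetching the credentials, assuming the yc CLI is configured; the cluster name cilium-cluster is a hypothetical placeholder:

# Add the cluster credentials to the local kubeconfig (cluster name is hypothetical).
yc managed-kubernetes cluster get-credentials cilium-cluster --external
# Check that kubectl now talks to the right cluster.
kubectl cluster-info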
Install and configure Hubble UI
- Check the current status of Cilium in the cluster:

cilium status

Cilium, Operator, and Hubble Relay should have the OK status.

Example of a command result:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 1
                       cilium-operator    Running: 1
                       hubble-relay       Running: 1
Cluster Pods:          5/5 managed by Cilium
Helm chart version:
Image versions         cilium             cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1
                       cilium-operator    cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1
                       hubble-relay       cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1
- Get a list of the cluster nodes running Cilium:

kubectl get cn
- Create a file named hubble-ui.yaml containing specifications for the resources required for Hubble UI:

hubble-ui.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "hubble-ui"
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hubble-ui-nginx
  namespace: kube-system
data:
  nginx.conf: |
    server {
        listen       8081;
        listen       [::]:8081;
        server_name  localhost;
        root /app;
        index index.html;
        client_max_body_size 1G;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;

            # CORS
            add_header Access-Control-Allow-Methods 'GET, POST, PUT, HEAD, DELETE, OPTIONS';
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Max-Age 1728000;
            add_header Access-Control-Expose-Headers content-length,grpc-status,grpc-message;
            add_header Access-Control-Allow-Headers range,keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout;
            if ($request_method = OPTIONS) {
                return 204;
            }
            # /CORS

            location /api {
                proxy_http_version 1.1;
                proxy_pass_request_headers on;
                proxy_hide_header Access-Control-Allow-Origin;
                proxy_pass http://127.0.0.1:8090;
            }
            location / {
                # double `/index.html` is required here
                try_files $uri $uri/ /index.html /index.html;
            }

            # Liveness probe
            location /healthz {
                access_log off;
                add_header Content-Type text/plain;
                return 200 'ok';
            }
        }
    }
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hubble-ui
  labels:
    app.kubernetes.io/part-of: cilium
rules:
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - componentstatuses
      - endpoints
      - namespaces
      - nodes
      - pods
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - cilium.io
    resources:
      - "*"
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hubble-ui
  labels:
    app.kubernetes.io/part-of: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hubble-ui
subjects:
  - kind: ServiceAccount
    name: "hubble-ui"
    namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
  name: hubble-ui
  namespace: kube-system
  labels:
    k8s-app: hubble-ui
    app.kubernetes.io/name: hubble-ui
    app.kubernetes.io/part-of: cilium
spec:
  type: "ClusterIP"
  selector:
    k8s-app: hubble-ui
  ports:
    - name: http
      port: 80
      targetPort: 8081
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hubble-ui
  namespace: kube-system
  labels:
    k8s-app: hubble-ui
    app.kubernetes.io/name: hubble-ui
    app.kubernetes.io/part-of: cilium
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: hubble-ui
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        k8s-app: hubble-ui
        app.kubernetes.io/name: hubble-ui
        app.kubernetes.io/part-of: cilium
    spec:
      priorityClassName:
      serviceAccount: "hubble-ui"
      serviceAccountName: "hubble-ui"
      automountServiceAccountToken: true
      containers:
        - name: frontend
          image: "quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8081
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
          readinessProbe:
            httpGet:
              path: /
              port: 8081
          volumeMounts:
            - name: hubble-ui-nginx-conf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: nginx.conf
            - name: tmp-dir
              mountPath: /tmp
          terminationMessagePolicy: FallbackToLogsOnError
        - name: backend
          image: "quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803"
          imagePullPolicy: IfNotPresent
          env:
            - name: EVENTS_SERVER_PORT
              value: "8090"
            - name: FLOWS_API_ADDR
              value: "hubble-relay:80"
          ports:
            - name: grpc
              containerPort: 8090
          volumeMounts:
          terminationMessagePolicy: FallbackToLogsOnError
      nodeSelector:
        kubernetes.io/os: linux
      volumes:
        - configMap:
            defaultMode: 420
            name: hubble-ui-nginx
          name: hubble-ui-nginx-conf
        - emptyDir: {}
          name: tmp-dir
- Create the resources:

kubectl apply -f hubble-ui.yaml

Result:

serviceaccount/hubble-ui created
configmap/hubble-ui-nginx created
clusterrole.rbac.authorization.k8s.io/hubble-ui created
clusterrolebinding.rbac.authorization.k8s.io/hubble-ui created
service/hubble-ui created
deployment.apps/hubble-ui created
- Check the Cilium status after installing Hubble UI:

cilium status

Cilium, Operator, and Hubble Relay should have the OK status. The hubble-ui container must be in the Running: 1 state.

Example of a command result:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 1
                       hubble-relay       Running: 1
                       cilium-operator    Running: 1
                       hubble-ui          Running: 1
Cluster Pods:          6/6 managed by Cilium
Helm chart version:
Image versions         cilium             cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1
                       hubble-relay       cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1
                       cilium-operator    cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:******: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.0@sha256:******: 1
- Check the states of Cilium system pods in your cluster:

for p in $(kubectl get po -o name -n kube-system -l k8s-app=cilium)
do
  echo "\n"$p
  kubectl exec $p -n kube-system -c cilium-agent -- cilium status | tail -5
done

Example of a command result:

pod/cilium-fwpg6
Proxy Status:            OK, ip 172.16.0.1, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 5.29   Metrics: Ok
Encryption:              Disabled
Cluster health:          3/3 reachable   (2025-05-14T09:50:51Z)

pod/cilium-ph5dx
Proxy Status:            OK, ip 172.16.0.37, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 5.72   Metrics: Ok
Encryption:              Disabled
Cluster health:          3/3 reachable   (2025-05-14T09:50:06Z)
- To access the Hubble UI web interface, run this command:

cilium hubble ui

Your browser will open and redirect you to the Hubble UI web interface.
Note
If you close the terminal session running the command, you will lose access to the web interface.
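If the cilium hubble ui session is interrupted, you can reach the same web interface by forwarding the hubble-ui service port yourself; a minimal sketch (the local port 12000 is an arbitrary choice):

# Forward a local port to the hubble-ui service created above,
# then open http://localhost:12000 in a browser. Press Ctrl + C to stop forwarding.
kubectl port-forward -n kube-system svc/hubble-ui 12000:80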
Create a test environment
- Create a file named http-sw-app.yaml with a specification of resources for test applications:

http-sw-app.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: deathstar
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    org: empire
    class: deathstar
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
spec:
  replicas: 2
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
    spec:
      containers:
        - name: deathstar
          image: docker.io/cilium/starwars
---
apiVersion: v1
kind: Pod
metadata:
  name: tiefighter
  labels:
    org: empire
    class: tiefighter
spec:
  containers:
    - name: spaceship
      image: docker.io/tgraf/netperf
---
apiVersion: v1
kind: Pod
metadata:
  name: xwing
  labels:
    org: alliance
    class: xwing
spec:
  containers:
    - name: spaceship
      image: docker.io/tgraf/netperf
- Create the applications:

kubectl apply -f http-sw-app.yaml

Result:

service/deathstar created
deployment.apps/deathstar created
pod/tiefighter created
pod/xwing created
- Make sure the pods and services you created are working:

kubectl get pods,svc

Example of a command result:

NAME                            READY   STATUS    RESTARTS   AGE
pod/deathstar-c74d84667-6x4gx   1/1     Running   1          7d
pod/deathstar-c74d84667-jrdsp   1/1     Running   0          7d
pod/tiefighter                  1/1     Running   0          7d
pod/xwing                       1/1     Running   0          7d

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/deathstar    ClusterIP   10.96.18.169   <none>        80/TCP    7d
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   8d
- View the current status of Cilium endpoints:

kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list

Make sure the network policies are disabled for all endpoints: their status under POLICY (ingress) ENFORCEMENT and POLICY (egress) ENFORCEMENT should be set to Disabled.

Example of a part of the command result:

Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init)
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                   IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
51         Disabled           Disabled          2204       k8s:app.kubernetes.io/name=hubble-ui                                                 10.112.0.97    ready
                                                           k8s:app.kubernetes.io/part-of=cilium
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=hubble-ui
274        Disabled           Disabled          23449      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system           10.112.0.224   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns-autoscaler
...
- Make sure the tiefighter and xwing applications have access to the deathstar API and return the Ship landed string, because the network policies are not activated:

kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing && \
kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing

The output of both commands must be the same:

Ship landed
Ship landed
- Go to the Hubble UI web interface and view data streams for your pods and services in the default namespace.

The verdict for all data streams should be forwarded.
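Hubble UI only shows flows for traffic it has actually seen, so if the default namespace looks empty, generate a few requests first; a small sketch using the test pods created above:

# Send a landing request from each test ship so the flows show up in Hubble UI.
for ship in tiefighter xwing; do
  kubectl exec $ship -- curl --silent --request POST \
    deathstar.default.svc.cluster.local/v1/request-landing
done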
Create an L3/L4 network policy
Apply an L3/L4 network policy to deny the xwing pod access to deathstar. Access rules for the tiefighter pod remain unchanged.
For access control, the following Kubernetes labels are assigned to pods when creating them:
- org: empire for the tiefighter pod.
- org: alliance for the xwing pod.
The L3/L4 network policy only allows pods with the org: empire label to access deathstar.
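To confirm which label each test pod carries before applying the policy, you can list them with kubectl; the -L columns below are just one convenient way to display the org and class labels:

# Show the org and class labels of the test pods in the default namespace.
kubectl get pods -n default -L org,class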
- Create a file named sw_l3_l4_policy.yaml with the policy specification:

sw_l3_l4_policy.yaml

---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
- Create the rule1 policy:

kubectl apply -f sw_l3_l4_policy.yaml

Result:

ciliumnetworkpolicy.cilium.io/rule1 created
- View the current status of Cilium endpoints again:

kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list

Make sure the ingress policy is enabled for the endpoint associated with the k8s:class=deathstar label: its status under POLICY (ingress) ENFORCEMENT should be Enabled.

Example of a part of the command result:

Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init)
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                   IPv6   IPv4          STATUS
           ENFORCEMENT        ENFORCEMENT
...
3509       Enabled            Disabled          52725      k8s:class=deathstar                                                                  10.112.0.43   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
...
- Check the availability of deathstar for the tiefighter pod:

kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing

Result:

Ship landed
- Make sure the xwing pod has no access to deathstar:

kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing

The command hangs because the network policy has denied this pod access to the service; press Ctrl + C to abort it.
- Check how the policy works:
  - To view the policy specification and status, run this command:

kubectl describe cnp rule1

  - Go to the Hubble UI web interface and view data streams for your pods and services in the default namespace (a command-line alternative is sketched after this list).
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
    - The verdict for streams from xwing to deathstar.default.svc.cluster.local/v1/request-landing should be dropped.
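The same checks can be done without the web interface. The sketch below assumes the standalone hubble CLI is installed in addition to the cilium CLI; the 5-second curl limit is an arbitrary choice that makes the blocked request fail fast instead of hanging:

# A time-bounded request from xwing: with rule1 applied it should time out,
# because the L3/L4 policy drops traffic from pods without the org: empire label.
kubectl exec xwing -- curl --silent --max-time 5 --request POST \
  deathstar.default.svc.cluster.local/v1/request-landing || echo "request dropped"

# Read flow verdicts from the command line: forward the Hubble Relay port,
# then filter the latest flows coming from the xwing pod.
cilium hubble port-forward &
hubble observe --from-pod default/xwing --verdict DROPPED --last 10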
Create an L7 network policy
In this part of the tutorial, we will change the access policy for the tiefighter pod:
- Access to the deathstar.default.svc.cluster.local/v1/exhaust-port API method will be denied.
- Access to the deathstar.default.svc.cluster.local/v1/request-landing API method will remain unchanged.
Access for the xwing pod will remain unchanged. This pod cannot access deathstar.
- Make sure the tiefighter pod has access to the deathstar.default.svc.cluster.local/v1/exhaust-port method when using the existing rule1 policy:

kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port

Result:

Panic: deathstar exploded

goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/
        temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/
        temp/main.go:5 +0x85
- Create a file named sw_l3_l4_l7_policy.yaml with the updated policy specification:

sw_l3_l4_l7_policy.yaml

---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "POST"
                path: "/v1/request-landing"
- Update the existing rule1 policy:

kubectl apply -f sw_l3_l4_l7_policy.yaml

Result:

ciliumnetworkpolicy.cilium.io/rule1 configured
- Make sure the tiefighter pod can access the deathstar.default.svc.cluster.local/v1/request-landing method:

kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing

Result:

Ship landed
- Make sure access to the deathstar.default.svc.cluster.local/v1/exhaust-port method is denied for the tiefighter pod:

kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port

Result:

Access denied

See the note after this list on how the L7 proxy reports this denial.
- Make sure the xwing pod cannot access deathstar:

kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing

Press Ctrl + C to abort the command.
- Check how the policy works:
  - To view the policy specification and status, run this command:

kubectl describe cnp rule1

  - Go to the Hubble UI web interface and view data streams for the pods and services in the default namespace:
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/exhaust-port should be dropped.
    - The verdict for streams from xwing to deathstar.default.svc.cluster.local should be dropped.
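Unlike the L3/L4 drop for xwing, the L7 rule is enforced by Cilium's HTTP proxy, so the denied request gets an immediate HTTP response instead of hanging. A sketch for inspecting the status code with curl; the 403 expectation reflects the proxy's usual behavior for requests rejected by an L7 rule:

# Print only the HTTP status code of the denied request; an L7 denial is
# typically reported as 403 by the proxy, while the allowed landing call returns 200.
kubectl exec tiefighter -- curl --silent --output /dev/null --write-out '%{http_code}\n' \
  --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port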
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
Manually
- Delete the Managed Service for Kubernetes cluster.
- If you used static public IP addresses to access your cluster or nodes, release and delete them.

Using Terraform
- In the terminal window, go to the directory containing the infrastructure plan.

  Warning
  Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

- Delete the resources:
  - Run this command:

terraform destroy

  - Confirm deleting the resources and wait for the operation to complete.

  All the resources described in the Terraform manifests will be deleted.