Configuring the Cilium network policy controller
This scenario shows how to implement L3/L4 and L7 network policies with the Cilium network policy controller.
To use the Cilium network policy controller in a cluster:
- Install and configure Hubble UI, a network activity monitoring tool.
- Create a test environment.
- Create an L3/L4 network policy.
- Create an L7 network policy.
Getting started
Prepare the infrastructure
- Create a service account and assign it the k8s.tunnelClusters.agent and vpc.publicAdmin roles (for an optional CLI variant of these steps, see the sketch after this list).
- Create security groups for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Create a cluster with any suitable configuration.
  - In the Service account for resources and Service account for nodes fields, select From list, then select the service account you created from the drop-down list.
  - Under Master configuration, select the following values:
    - Public address: Auto.
    - Security groups: From list. Specify security groups for the cluster.
  - Under Cluster network settings, select Enable tunnel mode.
- Create a node group for the cluster in any suitable configuration. Under Network settings, select the following values:
  - Public address: Auto.
  - Security groups: From list. Specify security groups for the node groups.
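If you prefer the command line for the service account steps, the same account and role assignments can be made with the YC CLI. This is a minimal sketch, not part of the original instructions: the service account name is arbitrary, and <folder_ID> and <service_account_ID> are placeholders you must substitute.

# Create the service account and grant it the two required roles at the folder level.
yc iam service-account create --name cilium-sa
yc resource-manager folder add-access-binding <folder_ID> \
  --role k8s.tunnelClusters.agent \
  --subject serviceAccount:<service_account_ID>
yc resource-manager folder add-access-binding <folder_ID> \
  --role vpc.publicAdmin \
  --subject serviceAccount:<service_account_ID>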
- If you do not have Terraform yet, install it.
- Get the authentication credentials. You can add them to environment variables (see the sketch after this list) or specify them later in the provider configuration file.
- Configure and initialize a provider. There is no need to create a provider configuration file manually: you can download it.
- Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.
- Download the k8s-cilium.tf configuration file to the same working directory. This file will be used to create the following resources:
  - Managed Service for Kubernetes cluster.
  - Node group for the cluster.
  - Service account the cluster and its node group need to operate.
  - Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.
Warning
The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.
- Specify the following in the k8s-cilium.tf file:
  - Folder ID.
  - Kubernetes version for the cluster and node groups.
  - Name of the service account.
- Make sure the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Create the required infrastructure:
  - Run the command to view the planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to be created and their parameters. This is a test step; no resources are created yet.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the creation of the resources.
    - Wait for the operation to complete.
  All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.
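If you decided to pass the authentication credentials through environment variables, that step might look as follows. This is a sketch that assumes the YC CLI is installed and configured; the variable names are the ones the Terraform provider for Yandex Cloud reads from the environment.

# Export the credentials for the provider, then initialize the working directory
# that contains the provider configuration and k8s-cilium.tf.
export YC_TOKEN=$(yc iam create-token)
export YC_CLOUD_ID=$(yc config get cloud-id)
export YC_FOLDER_ID=$(yc config get folder-id)
terraform init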
Before you start working with the cluster
Install and configure Hubble UI
- Check the current status of Cilium in the cluster:
  cilium status
  Cilium, Operator, and Hubble Relay should have the OK status.
  Command result example:
/¯¯\ /¯¯\__/¯¯\ Cilium: OK \__/¯¯\__/ Operator: OK /¯¯\__/¯¯\ Envoy DaemonSet: disabled (using embedded mode) \__/¯¯\__/ Hubble Relay: OK \__/ ClusterMesh: disabled DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1 Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1 Containers: cilium Running: 1 cilium-operator Running: 1 hubble-relay Running: 1 Cluster Pods: 5/5 managed by Cilium Helm chart version: Image versions cilium cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1 cilium-operator cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1 hubble-relay cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1
- Create a file named hubble-ui.yaml containing specifications for the resources required for Hubble UI:
  hubble-ui.yaml
--- apiVersion: v1 kind: ServiceAccount metadata: name: "hubble-ui" namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: hubble-ui-nginx namespace: kube-system data: nginx.conf: | server { listen 8081; listen [::]:8081; server_name localhost; root /app; index index.html; client_max_body_size 1G; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; # CORS add_header Access-Control-Allow-Methods 'GET, POST, PUT, HEAD, DELETE, OPTIONS'; add_header Access-Control-Allow-Origin *; add_header Access-Control-Max-Age 1728000; add_header Access-Control-Expose-Headers content-length,grpc-status,grpc-message; add_header Access-Control-Allow-Headers range,keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout; if ($request_method = OPTIONS) { return 204; } # /CORS location /api { proxy_http_version 1.1; proxy_pass_request_headers on; proxy_hide_header Access-Control-Allow-Origin; proxy_pass http://127.0.0.1:8090; } location / { # double `/index.html` is required here try_files $uri $uri/ /index.html /index.html; } # Liveness probe location /healthz { access_log off; add_header Content-Type text/plain; return 200 'ok'; } } } --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: hubble-ui labels: app.kubernetes.io/part-of: cilium rules: - apiGroups: - networking.k8s.io resources: - networkpolicies verbs: - get - list - watch - apiGroups: - "" resources: - componentstatuses - endpoints - namespaces - nodes - pods - services verbs: - get - list - watch - apiGroups: - apiextensions.k8s.io resources: - customresourcedefinitions verbs: - get - list - watch - apiGroups: - cilium.io resources: - "*" verbs: - get - list - watch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: hubble-ui labels: app.kubernetes.io/part-of: cilium roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: hubble-ui subjects: - kind: ServiceAccount name: "hubble-ui" namespace: kube-system --- kind: Service apiVersion: v1 metadata: name: hubble-ui namespace: kube-system labels: k8s-app: hubble-ui app.kubernetes.io/name: hubble-ui app.kubernetes.io/part-of: cilium spec: type: "ClusterIP" selector: k8s-app: hubble-ui ports: - name: http port: 80 targetPort: 8081 --- kind: Deployment apiVersion: apps/v1 metadata: name: hubble-ui namespace: kube-system labels: k8s-app: hubble-ui app.kubernetes.io/name: hubble-ui app.kubernetes.io/part-of: cilium spec: replicas: 1 selector: matchLabels: k8s-app: hubble-ui strategy: rollingUpdate: maxUnavailable: 1 type: RollingUpdate template: metadata: annotations: labels: k8s-app: hubble-ui app.kubernetes.io/name: hubble-ui app.kubernetes.io/part-of: cilium spec: priorityClassName: serviceAccount: "hubble-ui" serviceAccountName: "hubble-ui" automountServiceAccountToken: true containers: - name: frontend image: "quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666" imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8081 livenessProbe: httpGet: path: /healthz port: 8081 readinessProbe: httpGet: path: / port: 8081 volumeMounts: - name: hubble-ui-nginx-conf mountPath: /etc/nginx/conf.d/default.conf subPath: nginx.conf - name: tmp-dir mountPath: /tmp terminationMessagePolicy: FallbackToLogsOnError - name: backend image: 
"quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803" imagePullPolicy: IfNotPresent env: - name: EVENTS_SERVER_PORT value: "8090" - name: FLOWS_API_ADDR value: "hubble-relay:80" ports: - name: grpc containerPort: 8090 volumeMounts: terminationMessagePolicy: FallbackToLogsOnError nodeSelector: kubernetes.io/os: linux volumes: - configMap: defaultMode: 420 name: hubble-ui-nginx name: hubble-ui-nginx-conf - emptyDir: {} name: tmp-dir
- Create resources:
  kubectl apply -f hubble-ui.yaml
  Command result:
  serviceaccount/hubble-ui created
  configmap/hubble-ui-nginx created
  clusterrole.rbac.authorization.k8s.io/hubble-ui created
  clusterrolebinding.rbac.authorization.k8s.io/hubble-ui created
  service/hubble-ui created
  deployment.apps/hubble-ui created
- Check the Cilium status after installing Hubble UI:
  cilium status
  Cilium, Operator, and Hubble Relay should have the OK status. The hubble-ui container must be in the Running: 1 state.
  Command result example:
/¯¯\ /¯¯\__/¯¯\ Cilium: OK \__/¯¯\__/ Operator: OK /¯¯\__/¯¯\ Envoy DaemonSet: disabled (using embedded mode) \__/¯¯\__/ Hubble Relay: OK \__/ ClusterMesh: disabled Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1 Containers: cilium Running: 1 hubble-relay Running: 1 cilium-operator Running: 1 hubble-ui Running: 1 Cluster Pods: 6/6 managed by Cilium Helm chart version: Image versions cilium cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1 hubble-relay cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1 cilium-operator cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1 hubble-ui quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:******: 1 hubble-ui quay.io/cilium/hubble-ui:v0.13.0@sha256:******: 1
- To access the Hubble UI web interface, run this command:
  cilium hubble ui
  Your browser will open and redirect you to the Hubble UI web interface.
  Note
  If you close the terminal session running the command, you will lose access to the web interface. For an alternative, see the port-forwarding sketch after this list.
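As an alternative that does not depend on keeping the cilium hubble ui session open, you can forward the hubble-ui service yourself. A minimal sketch, assuming the manifest above (service hubble-ui on port 80 in the kube-system namespace) and an arbitrary local port 8081:

# Forward the Hubble UI service to a local port, then open http://localhost:8081
# in a browser. Stop the forwarding with Ctrl+C when you are done.
kubectl --namespace kube-system port-forward service/hubble-ui 8081:80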
Create a test environment
- Create a file named http-sw-app.yaml with a specification of resources for test applications:
  http-sw-app.yaml
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: deathstar
  spec:
    type: ClusterIP
    ports:
      - port: 80
    selector:
      org: empire
      class: deathstar
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: deathstar
  spec:
    replicas: 2
    selector:
      matchLabels:
        org: empire
        class: deathstar
    template:
      metadata:
        labels:
          org: empire
          class: deathstar
      spec:
        containers:
          - name: deathstar
            image: docker.io/cilium/starwars
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: tiefighter
    labels:
      org: empire
      class: tiefighter
  spec:
    containers:
      - name: spaceship
        image: docker.io/tgraf/netperf
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: xwing
    labels:
      org: alliance
      class: xwing
  spec:
    containers:
      - name: spaceship
        image: docker.io/tgraf/netperf
- Create applications:
  kubectl apply -f http-sw-app.yaml
  Command result:
  service/deathstar created
  deployment.apps/deathstar created
  pod/tiefighter created
  pod/xwing created
- Make sure the pods and services you created are working:
  kubectl get pods,svc
  Command result example:
  NAME                            READY   STATUS    RESTARTS   AGE
  pod/deathstar-c74d84667-6x4gx   1/1     Running   1          7d
  pod/deathstar-c74d84667-jrdsp   1/1     Running   0          7d
  pod/tiefighter                  1/1     Running   0          7d
  pod/xwing                       1/1     Running   0          7d

  NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
  service/deathstar    ClusterIP   10.96.18.169   <none>        80/TCP    7d
  service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   8d
- View the current status of the Cilium endpoints:
  kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list
  Make sure network policies are disabled for all endpoints: their status under POLICY (ingress) ENFORCEMENT and POLICY (egress) ENFORCEMENT should be set to Disabled.
  Example of partial command result:
Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init) ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 51 Disabled Disabled 2204 k8s:app.kubernetes.io/name=hubble-ui 10.112.0.97 ready k8s:app.kubernetes.io/part-of=cilium k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=hubble-ui 274 Disabled Disabled 23449 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system 10.112.0.224 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns-autoscaler ...
- Make sure the tiefighter and xwing applications have access to the deathstar API and return the Ship landed string, since no network policies are active yet:
  kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing && \
  kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
  The output of both commands must be the same:
  Ship landed
  Ship landed
- Go to the Hubble UI web interface and view the data streams for pods and services in the default namespace. The verdict for all data streams should be forwarded. To make the streams easier to spot, you can generate test traffic as shown in the optional sketch after this list.
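If the Hubble UI map looks empty, there may simply be no recent traffic. The following optional sketch, which is not part of the original scenario, keeps sending test requests so the flows stay visible; stop it with Ctrl+C.

# Generate periodic test traffic from both pods so the data streams are easy to
# spot in Hubble UI. The 5-second timeout keeps the loop going even if a request
# is later blocked by a network policy.
while true; do
  kubectl exec tiefighter -- curl --silent --max-time 5 --request POST deathstar.default.svc.cluster.local/v1/request-landing
  kubectl exec xwing -- curl --silent --max-time 5 --request POST deathstar.default.svc.cluster.local/v1/request-landing
  sleep 2
done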
Create an L3/L4 network policy
Apply an L3/L4 network policy to disable the xwing pod's access to deathstar. Access rules for the tiefighter pod remain unchanged.
For access differentiation, the following Kubernetes labels are assigned to the pods when creating them:
- org: empire for the tiefighter pod.
- org: alliance for the xwing pod.
The L3/L4 network policy only allows pods labeled org: empire to access deathstar. You can check the pod labels as shown in the optional check below.
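To see which pods carry which labels, and therefore which side of the policy they will fall on, you can list them by label. This optional check uses only the labels defined in http-sw-app.yaml:

# Show the test pods with their labels, then list only the pods labeled org=empire,
# i.e. the ones the policy below will allow to reach deathstar.
kubectl get pods --show-labels
kubectl get pods -l org=empire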
- Create a file named sw_l3_l4_policy.yaml with the policy specification:
  sw_l3_l4_policy.yaml
  ---
  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "rule1"
  spec:
    description: "L3-L4 policy to restrict deathstar access to empire ships only"
    endpointSelector:
      matchLabels:
        org: empire
        class: deathstar
    ingress:
      - fromEndpoints:
          - matchLabels:
              org: empire
        toPorts:
          - ports:
              - port: "80"
                protocol: TCP
- Create the rule1 policy:
  kubectl apply -f sw_l3_l4_policy.yaml
  Command result:
  ciliumnetworkpolicy.cilium.io/rule1 created
- View the current status of the Cilium endpoints again:
  kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list
  Make sure policy enforcement for inbound traffic is enabled for the endpoint associated with the k8s:class=deathstar label: its status under POLICY (ingress) ENFORCEMENT should be Enabled.
  Example of partial command result:
Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init) ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT ... 3509 Enabled Disabled 52725 k8s:class=deathstar 10.112.0.43 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire ...
- Check the availability of deathstar for the tiefighter pod:
  kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
  Command result:
  Ship landed
- Make sure the xwing pod has no access to deathstar:
  kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
  The command hangs because the network policy has denied this pod access to the service. Press Ctrl + C to abort it, or add a timeout to curl as shown in the sketch after this list.
- Learn how the policy works:
  - To view the policy specification and status, run this command:
    kubectl describe cnp rule1
  - Go to the Hubble UI web interface and view data streams for pods and services in the default namespace:
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
    - The verdict for streams from xwing to deathstar.default.svc.cluster.local/v1/request-landing should be dropped.
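To avoid having to interrupt the blocked request by hand, you can give curl a timeout. This is a convenience sketch rather than part of the original scenario:

# The same check as above, but curl gives up after 10 seconds instead of hanging.
# Exit code 28 means the request timed out, i.e. the traffic was dropped.
kubectl exec xwing -- curl --silent --max-time 10 --request POST deathstar.default.svc.cluster.local/v1/request-landing
echo "curl exit code: $?"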
Create an L7 network policy
In this part of the scenario, we will change the access policy for the tiefighter pod:
- Access to the deathstar.default.svc.cluster.local/v1/exhaust-port API method will be disabled.
- Access to the deathstar.default.svc.cluster.local/v1/request-landing API method will remain unchanged.
Access for the xwing pod will remain unchanged: this pod cannot access deathstar. You can inspect the currently applied policy before changing it, as shown in the optional check below.
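Before changing the policy, you may want to look at the rule1 object that is currently applied; an optional check:

# Print the currently applied L3/L4 policy so you can compare it with the L7
# version you are about to apply.
kubectl get cnp rule1 --output yaml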
- Make sure the tiefighter pod has access to the deathstar.default.svc.cluster.local/v1/exhaust-port method when using the existing rule1 policy:
  kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port
  Command result:
  Panic: deathstar exploded

  goroutine 1 [running]:
  main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
          /code/src/github.com/empire/deathstar/temp/main.go:9 +0x64
  main.main()
          /code/src/github.com/empire/deathstar/temp/main.go:5 +0x85
- Create a file named sw_l3_l4_l7_policy.yaml with the updated policy specification:
  sw_l3_l4_l7_policy.yaml
  ---
  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "rule1"
  spec:
    description: "L7 policy to restrict access to specific HTTP call"
    endpointSelector:
      matchLabels:
        org: empire
        class: deathstar
    ingress:
      - fromEndpoints:
          - matchLabels:
              org: empire
        toPorts:
          - ports:
              - port: "80"
                protocol: TCP
            rules:
              http:
                - method: "POST"
                  path: "/v1/request-landing"
- Update the existing rule1 policy:
  kubectl apply -f sw_l3_l4_l7_policy.yaml
  Command result:
  ciliumnetworkpolicy.cilium.io/rule1 configured
- Make sure the tiefighter pod can access the deathstar.default.svc.cluster.local/v1/request-landing method:
  kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
  Command result:
  Ship landed
- Make sure access to the deathstar.default.svc.cluster.local/v1/exhaust-port method is now disabled for the tiefighter pod:
  kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port
  Command result:
  Access denied
- Make sure the xwing pod still cannot access deathstar:
  kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
  Press Ctrl + C to abort the command.
- Learn how the policy works:
  - To view the policy specification and status, run this command:
    kubectl describe cnp rule1
  - Go to the Hubble UI web interface and view data streams for pods and services in the default namespace:
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
    - The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/exhaust-port should be dropped.
    - The verdict for streams from xwing to deathstar.default.svc.cluster.local should be dropped.
  You can also inspect the same verdicts from the command line, as shown in the sketch after this list.
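For a command-line view of the same verdicts, you can query Hubble Relay directly. This optional sketch assumes the Hubble CLI (hubble) is installed on your machine; it is not required for the rest of the scenario.

# Expose Hubble Relay on localhost (port 4245 by default), then list recent
# dropped flows in the default namespace. Stop the port-forward when done.
cilium hubble port-forward &
hubble observe --namespace default --verdict DROPPED --last 20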
Delete the resources you created
Delete the resources you no longer need to avoid paying for them:
- Delete the Managed Service for Kubernetes cluster (for an optional CLI variant, see the sketch after this list).
- If static public IP addresses were used for cluster and node access, release and delete them.
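If you created the infrastructure manually, the cluster can also be deleted with the YC CLI. A sketch with a placeholder cluster name:

# Delete the Managed Service for Kubernetes cluster; substitute your cluster name.
yc managed-kubernetes cluster delete <cluster_name>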
- In the command line, go to the directory containing the current Terraform configuration file with the infrastructure plan.
- Delete the k8s-cilium.tf configuration file.
- Make sure the Terraform configuration files are correct using this command:
  terraform validate
  If there are any errors in the configuration files, Terraform will point them out.
- Confirm updating the resources:
  - Run the command to view the planned changes:
    terraform plan
    If the resource configuration descriptions are correct, the terminal will display a list of the resources to modify and their parameters. This is a test step; no resources are updated yet.
  - If you are happy with the planned changes, apply them:
    - Run the command:
      terraform apply
    - Confirm the update of resources.
    - Wait for the operation to complete.
  All the resources described in the k8s-cilium.tf configuration file will be deleted.