

Configuring the Cilium network policy controller

Written by Yandex Cloud. Updated May 5, 2025.
  • Getting started
    • Prepare the infrastructure
    • Get ready to use the cluster
  • Install and configure Hubble UI
  • Create a test environment
  • Create an L3/L4 network policy
  • Create an L7 network policy
  • Delete the resources you created

This scenario shows how to implement L3/L4 and L7 network policies managed by the Cilium network policy controller.

To use the Cilium network policy controller in a cluster:

  • Install and configure Hubble UI, a network activity monitoring tool.
  • Create a test environment.
  • Create an L3/L4 network policy.
  • Create an L7 network policy.

Getting started

Prepare the infrastructure

Manually
Terraform
  1. Create a service account and assign it the k8s.tunnelClusters.agent and vpc.publicAdmin roles (see the CLI sketch after this list).

  2. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

    Warning

    The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  3. Create a cluster with any suitable configuration.

    • In the Service account for resources and Service account for nodes fields, select From list, then select the service account you created from the drop-down list.

    • Under Master configuration, select the following values:

      • Public address: Auto.
      • Security groups: From list. Specify security groups for the cluster.
    • Under Cluster network settings, select Enable tunnel mode.

  4. Create a node group for the cluster in any suitable configuration.

    Under Network settings, select the following values:

    • Public address: Auto.
    • Security groups: From list. Specify security groups for the node groups.
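
If you are following the manual steps above and prefer the command line, below is a minimal sketch of step 1: creating the service account with the YC CLI and assigning it the required roles. The service account name cilium-k8s-sa is a placeholder, and jq is assumed to be installed; adjust to your environment.

    # Create the service account (the name is a placeholder)
    yc iam service-account create --name cilium-k8s-sa

    # Look up the folder ID and the service account ID
    FOLDER_ID=$(yc config get folder-id)
    SA_ID=$(yc iam service-account get cilium-k8s-sa --format json | jq -r .id)

    # Assign the roles this scenario requires
    for role in k8s.tunnelClusters.agent vpc.publicAdmin; do
      yc resource-manager folder add-access-binding "$FOLDER_ID" \
        --role "$role" \
        --subject "serviceAccount:$SA_ID"
    done
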
  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize a provider. You do not need to create the provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the k8s-cilium.tf configuration file to the same working directory. This file will be used to create the following resources:

    • Network.

    • Subnet.

    • Managed Service for Kubernetes cluster.

    • Node group for the cluster.

    • Service account required for the cluster and its node group to operate.

    • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

  6. Specify the following in the k8s-cilium.tf file:

    • Folder ID.
    • Kubernetes version for the cluster and node groups.
    • Name of the service account.
  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    If there are any errors in the configuration files, Terraform will point them out.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If the configuration is described correctly, the terminal will display a list of the resources to be created and their parameters. This is a verification step: no changes are applied to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

Get ready to use the cluster

  1. Install kubectl and configure it to work with the new cluster.

  2. Install Cilium CLI (cilium).
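
For reference, here is a minimal sketch of configuring kubectl for the new cluster and verifying both tools, assuming the YC CLI is set up and the cluster is named cilium-demo (a placeholder):

    # Fetch kubeconfig credentials for the cluster via its public endpoint
    yc managed-kubernetes cluster get-credentials cilium-demo --external

    # Verify that kubectl reaches the cluster and the Cilium CLI is available
    kubectl cluster-info
    cilium version
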

Install and configure Hubble UI

  1. Check the current status of Cilium in the cluster:

    cilium status
    

    Cilium, Operator, and Hubble Relay should have the OK status.

    Command result example
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
     \__/¯¯\__/    Hubble Relay:       OK
        \__/       ClusterMesh:        disabled
    
    DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
    Containers:            cilium             Running: 1
                           cilium-operator    Running: 1
                           hubble-relay       Running: 1
    Cluster Pods:          5/5 managed by Cilium
    Helm chart version:
    Image versions         cilium             cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1
                           cilium-operator    cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1
                           hubble-relay       cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1
    
  2. Create a file named hubble-ui.yaml containing specifications for the resources required for Hubble UI:

    hubble-ui.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: "hubble-ui"
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: hubble-ui-nginx
      namespace: kube-system
    data:
      nginx.conf: |
        server
        {
          listen 8081;
          listen [::]:8081;
          server_name localhost;
          root /app;
          index index.html;
          client_max_body_size 1G;
    
          location /
          {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
    
            # CORS
            add_header Access-Control-Allow-Methods 'GET, POST, PUT, HEAD, DELETE, OPTIONS';
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Max-Age 1728000;
            add_header Access-Control-Expose-Headers content-length,grpc-status,grpc-message;
            add_header Access-Control-Allow-Headers range,keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout;
            if ($request_method = OPTIONS)
            {
              return 204;
            }
            # /CORS
    
            location /api
            {
              proxy_http_version 1.1;
              proxy_pass_request_headers on;
              proxy_hide_header Access-Control-Allow-Origin;
              proxy_pass http://127.0.0.1:8090;
            }
    
            location /
            {
              # double `/index.html` is required here
              try_files $uri $uri/ /index.html /index.html;
            }
    
            # Liveness probe
            location /healthz
            {
              access_log off;
              add_header Content-Type text/plain;
              return 200 'ok';
            }
          }
        }
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: hubble-ui
      labels:
        app.kubernetes.io/part-of: cilium
    rules:
    - apiGroups:
      - networking.k8s.io
      resources:
      - networkpolicies
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - componentstatuses
      - endpoints
      - namespaces
      - nodes
      - pods
      - services
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - apiextensions.k8s.io
      resources:
      - customresourcedefinitions
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - cilium.io
      resources:
      - "*"
      verbs:
      - get
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: hubble-ui
      labels:
        app.kubernetes.io/part-of: cilium
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: hubble-ui
    subjects:
    - kind: ServiceAccount
      name: "hubble-ui"
      namespace: kube-system
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: hubble-ui
      namespace: kube-system
      labels:
        k8s-app: hubble-ui
        app.kubernetes.io/name: hubble-ui
        app.kubernetes.io/part-of: cilium
    spec:
      type: "ClusterIP"
      selector:
        k8s-app: hubble-ui
      ports:
        - name: http
          port: 80
          targetPort: 8081
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: hubble-ui
      namespace: kube-system
      labels:
        k8s-app: hubble-ui
        app.kubernetes.io/name: hubble-ui
        app.kubernetes.io/part-of: cilium
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: hubble-ui
      strategy:
        rollingUpdate:
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
          annotations:
          labels:
            k8s-app: hubble-ui
            app.kubernetes.io/name: hubble-ui
            app.kubernetes.io/part-of: cilium
        spec:
          priorityClassName:
          serviceAccount: "hubble-ui"
          serviceAccountName: "hubble-ui"
          automountServiceAccountToken: true
          containers:
          - name: frontend
            image: "quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666"
            imagePullPolicy: IfNotPresent
            ports:
            - name: http
              containerPort: 8081
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8081
            readinessProbe:
              httpGet:
                path: /
                port: 8081
            volumeMounts:
            - name: hubble-ui-nginx-conf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: nginx.conf
            - name: tmp-dir
              mountPath: /tmp
            terminationMessagePolicy: FallbackToLogsOnError
          - name: backend
            image: "quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803"
            imagePullPolicy: IfNotPresent
            env:
            - name: EVENTS_SERVER_PORT
              value: "8090"
            - name: FLOWS_API_ADDR
              value: "hubble-relay:80"
            ports:
            - name: grpc
              containerPort: 8090
            volumeMounts:
            terminationMessagePolicy: FallbackToLogsOnError
          nodeSelector:
            kubernetes.io/os: linux
          volumes:
          - configMap:
              defaultMode: 420
              name: hubble-ui-nginx
            name: hubble-ui-nginx-conf
          - emptyDir: {}
            name: tmp-dir
    
  3. Create resources:

    kubectl apply -f hubble-ui.yaml
    
    Command result
    serviceaccount/hubble-ui created
    configmap/hubble-ui-nginx created
    clusterrole.rbac.authorization.k8s.io/hubble-ui created
    clusterrolebinding.rbac.authorization.k8s.io/hubble-ui created
    service/hubble-ui created
    deployment.apps/hubble-ui created
    
  4. Check Cilium status after installing Hubble UI:

    cilium status
    

    Cilium, Operator, and Hubble Relay should have the OK status. The hubble-ui container must be in the Running: 1 state.

    Command result example
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
     \__/¯¯\__/    Hubble Relay:       OK
        \__/       ClusterMesh:        disabled
    
    Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
    DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
    Containers:            cilium             Running: 1
                           hubble-relay       Running: 1
                           cilium-operator    Running: 1
                           hubble-ui          Running: 1
    Cluster Pods:          6/6 managed by Cilium
    Helm chart version:
    Image versions         cilium             cr.yandex/******/k8s-addons/cilium/cilium:v1.12.9: 1
                           hubble-relay       cr.yandex/******/k8s-addons/cilium/hubble-relay:v1.12.9: 1
                           cilium-operator    cr.yandex/******/k8s-addons/cilium/operator-generic:v1.12.9: 1
                           hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:******: 1
                           hubble-ui          quay.io/cilium/hubble-ui:v0.13.0@sha256:******: 1
    
  5. To access the Hubble UI web interface, run this command:

    cilium hubble ui
    

    Your browser will open and redirect you to the Hubble UI web interface.

    Note

    If you close the terminal session running the command, you will lose access to the web interface.
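
As an alternative that does not depend on keeping the cilium CLI session open, you can forward the hubble-ui service port directly with kubectl. This is a sketch based on the Service defined above, which listens on port 80; the local port 12000 is an arbitrary choice:

    kubectl --namespace kube-system port-forward service/hubble-ui 12000:80

Then open http://localhost:12000 in your browser.
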

Create a test environment

  1. Create a file named http-sw-app.yaml with a specification of resources for test applications:

    http-sw-app.yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: deathstar
    spec:
      type: ClusterIP
      ports:
      - port: 80
      selector:
        org: empire
        class: deathstar
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deathstar
    spec:
      replicas: 2
      selector:
        matchLabels:
          org: empire
          class: deathstar
      template:
        metadata:
          labels:
            org: empire
            class: deathstar
        spec:
          containers:
          - name: deathstar
            image: docker.io/cilium/starwars
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: tiefighter
      labels:
        org: empire
        class: tiefighter
    spec:
      containers:
      - name: spaceship
        image: docker.io/tgraf/netperf
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: xwing
      labels:
        org: alliance
        class: xwing
    spec:
      containers:
      - name: spaceship
        image: docker.io/tgraf/netperf
    
  2. Create applications:

    kubectl apply -f http-sw-app.yaml
    
    Command result
    service/deathstar created
    deployment.apps/deathstar created
    pod/tiefighter created
    pod/xwing created
    
  3. Make sure the pods and services you created are working:

    kubectl get pods,svc
    
    Command result example
    NAME                            READY   STATUS    RESTARTS   AGE
    pod/deathstar-c74d84667-6x4gx   1/1     Running   1          7d
    pod/deathstar-c74d84667-jrdsp   1/1     Running   0          7d
    pod/tiefighter                  1/1     Running   0          7d
    pod/xwing                       1/1     Running   0          7d
    
    NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    service/deathstar    ClusterIP   10.96.18.169  <none>        80/TCP    7d
    service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   8d
    
  4. View the current status of Cilium endpoints:

    kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list
    

    Make sure network policies are disabled for all endpoints: their status under POLICY (ingress) ENFORCEMENT and POLICY (egress) ENFORCEMENT should be set to Disabled.

    Example of partial command result
    Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init)
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4          STATUS
               ENFORCEMENT        ENFORCEMENT
    51         Disabled           Disabled          2204       k8s:app.kubernetes.io/name=hubble-ui                                                10.112.0.97   ready
                                                               k8s:app.kubernetes.io/part-of=cilium
                                                               k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=hubble-ui
    274        Disabled           Disabled          23449      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          10.112.0.224  ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns-autoscaler
    
    ...
    
  5. Make sure the tiefighter and xwing applications can access the deathstar API and both return the Ship landed string, since no network policies are active yet:

    kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing && \
    kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    

    The output of both commands must be the same:

    Ship landed
    Ship landed
    
  6. Go to the Hubble UI web interface and view data streams for pods and services in the default namespace.

    The verdict for all data streams should be forwarded.
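
To generate more flows to inspect in Hubble UI, you can repeat the landing requests from the previous step in a loop (a sketch reusing the same commands):

    for i in $(seq 1 5); do
      kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
      kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    done
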

Create an L3/L4 network policy

Apply an L3/L4 network policy to deny the xwing pod access to deathstar. Access rules for the tiefighter pod remain unchanged.

To differentiate access, the pods are assigned the following Kubernetes labels at creation:

  • org: empire for the tiefighter pod.
  • org: alliance for the xwing pod.

The L3/L4 network policy allows only pods labeled org: empire to access deathstar.

  1. Create a file named sw_l3_l4_policy.yaml with the policy specification:

    sw_l3_l4_policy.yaml
    ---
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
    spec:
      description: "L3-L4 policy to restrict deathstar access to empire ships only"
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
    
  2. Create the rule1 policy:

    kubectl apply -f sw_l3_l4_policy.yaml
    

    Command result:

    ciliumnetworkpolicy.cilium.io/rule1 created
    
  3. View the current status of Cilium endpoints again:

    kubectl -n kube-system exec daemonset/cilium -- cilium endpoint list
    

    Make sure ingress policy enforcement is enabled for the endpoint with the k8s:class=deathstar label: its status under POLICY (ingress) ENFORCEMENT should be Enabled.

    Example of partial command result
    Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init), install-cni-binaries (init)
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4          STATUS
               ENFORCEMENT        ENFORCEMENT
    
    ...
    
    3509       Enabled            Disabled          52725      k8s:class=deathstar                                                                 10.112.0.43   ready
                                                               k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    
    ...
    
  4. Check the availability of deathstar for the tiefighter pod:

    kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    

    The result will be as follows:

    Ship landed
    
  5. Make sure the xwing pod has no access to deathstar:

    kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    

    The command will hang because the network policy denies this pod access to the service. Press Ctrl + C to abort it.

  6. Learn how the policy works:

    • To view the policy specification and status, run this command:

      kubectl describe cnp rule1
      
    • Go to the Hubble UI web interface and view data streams for pods and services in the default namespace.

      • The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
      • The verdict for streams from xwing to deathstar.default.svc.cluster.local/v1/request-landing should be dropped.
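
You can also inspect the verdicts from the command line. The sketch below assumes the hubble binary bundled in the cilium agent image (present in recent Cilium releases), so no extra tooling is needed on your machine:

    # Show the most recent flows dropped by the policy
    kubectl --namespace kube-system exec daemonset/cilium -- \
      hubble observe --verdict DROPPED --last 20
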

Create an L7 network policy

In this part of the scenario, we will change the access policy for the tiefighter pod:

  • Access to the deathstar.default.svc.cluster.local/v1/exhaust-port API method will be disabled.
  • Access to the deathstar.default.svc.cluster.local/v1/request-landing API method will remain unchanged.

Access for the xwing pod remains unchanged: it still cannot access deathstar.

  1. Make sure the tiefighter pod has access to the deathstar.default.svc.cluster.local/v1/exhaust-port method when using the existing rule1 policy:

    kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port
    

    The result will be as follows:

    Panic: deathstar exploded
    
    goroutine 1 [running]:
    main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
      /code/src/github.com/empire/deathstar/
      temp/main.go:9 +0x64
    main.main()
      /code/src/github.com/empire/deathstar/
      temp/main.go:5 +0x85
    
  2. Create a file named sw_l3_l4_l7_policy.yaml with the updated policy specification:

    sw_l3_l4_l7_policy.yaml
    ---
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
    spec:
      description: "L7 policy to restrict access to specific HTTP call"
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "POST"
              path: "/v1/request-landing"
    
  3. Update the existing rule1 policy:

    kubectl apply -f sw_l3_l4_l7_policy.yaml
    

    The result will be as follows:

    ciliumnetworkpolicy.cilium.io/rule1 configured
    
  4. Make sure the tiefighter pod can access the deathstar.default.svc.cluster.local/v1/request-landing method:

    kubectl exec tiefighter -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    

    The result will be as follows:

    Ship landed
    
  5. Make sure access to the deathstar.default.svc.cluster.local/v1/exhaust-port method is disabled for the tiefighter pod:

    kubectl exec tiefighter -- curl --silent --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port
    

    The result will be as follows:

    Access denied
    
  6. Make sure the xwing pod cannot access deathstar:

    kubectl exec xwing -- curl --silent --request POST deathstar.default.svc.cluster.local/v1/request-landing
    

    The command will hang because the policy still denies this pod access. Press Ctrl + C to abort it.

  7. Learn how the policy works:

    • To view the policy specification and status, run this command:

      kubectl describe cnp rule1
      
    • Go to the Hubble UI web interface and view data streams for pods and services in the default namespace:

      • The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/request-landing should be forwarded.
      • The verdict for streams from tiefighter to deathstar.default.svc.cluster.local/v1/exhaust-port should be dropped.
      • The verdict for streams from xwing to deathstar.default.svc.cluster.local should be dropped.
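
If you prefer a command-line check of the L7 deny, you can print the HTTP status code the Cilium proxy returns for the blocked call. This is a sketch; the expected 403 status reflects Cilium's usual behavior for requests rejected by an L7 rule:

    kubectl exec tiefighter -- curl --silent --output /dev/null --write-out "%{http_code}\n" \
      --request PUT deathstar.default.svc.cluster.local/v1/exhaust-port

The command should print 403 instead of the application's Panic output.
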

Delete the resources you created

Delete the resources you no longer need to avoid paying for them:

Manually
Terraform
  1. Delete the Managed Service for Kubernetes cluster.
  2. If static public IP addresses were used for cluster and node access, release and delete them.
  1. In the terminal window, go to the directory containing the infrastructure plan.

    Warning

    Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

  2. Delete resources:

    1. Run this command:

      terraform destroy
      
    2. Confirm deleting the resources and wait for the operation to complete.

    All the resources described in the Terraform manifests will be deleted.
