
Creating an L7 load balancer with a Smart Web Security security profile through an Application Load Balancer Ingress controller

Written by
Yandex Cloud
Updated at May 5, 2025
  • Required paid resources
  • Getting started
  • Install the Application Load Balancer Ingress controller
  • Create a test application
  • Create a security profile
  • Create an Ingress resource
    • Create a DNS record for the domain
  • Check the result
  • Delete the resources you created

With Yandex Smart Web Security, you can protect apps in a Yandex Managed Service for Kubernetes cluster from DDoS attacks and bots. To do this, publish your apps through an Ingress resource that has an assigned security profile in Smart Web Security and uses the Application Load Balancer Ingress controller.

Based on the Ingress resource, an L7 load balancer will be deployed, with the security profile attached to the load balancer's virtual hosts. Smart Web Security will protect the application backends specified in the Ingress resource: all HTTP requests to those backends will be processed according to the security profile rules.

To create an L7 load balancer with a connected security profile using an Ingress:

  1. Install the Application Load Balancer Ingress controller.
  2. Create a test application.
  3. Create a security profile.
  4. Create an Ingress resource.
  5. Create a DNS record for the domain.
  6. Check the result.

If you no longer need the resources you created, delete them.

Required paid resources

The cost of supporting this infrastructure includes:

  • Fee for a DNS zone and DNS requests (see Cloud DNS pricing).
  • Fee for the Managed Service for Kubernetes cluster: using the master and outgoing traffic (see Managed Service for Kubernetes pricing).
  • Cluster nodes (VM) fee: using computing resources, operating system, and storage (see Compute Cloud pricing).
  • Fee for using the computing resources of the L7 load balancer (see Application Load Balancer pricing).
  • Fee for public IP addresses for cluster nodes and L7 load balancer (see Virtual Private Cloud pricing).
  • Fee for the number of requests to Smart Web Security (see Smart Web Security pricing).

Getting started

  1. Prepare the required infrastructure:

    Manually
    Terraform
    1. Create a service account for the Application Load Balancer Ingress controller to use.

      Assign the following roles to the account in the folder where you will create the cluster (a CLI sketch is provided at the end of this section):

      • alb.editor

      • vpc.publicAdmin

      • compute.viewer

      • smart-web-security.editor

        Warning

        You will need this role to correctly integrate the Application Load Balancer L7 load balancer with the security profile.

    2. Create a service account for the cluster and node group to use.

      Assign the following roles to the account in the folder where you will create the cluster:

      • k8s.clusters.agent
      • vpc.publicAdmin
    3. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Also configure the security groups required for Application Load Balancer.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    4. Create a cluster. When creating a cluster, select:

      • Service account you created earlier to use for resources and nodes.
      • Security groups you created earlier to assign to the cluster.
      • Option for assigning a public address to the cluster. This address enables using the Kubernetes API from the internet.
    5. Create a node group in the cluster. When creating the node group, select:

      • Security groups you created earlier to assign to the node group.
      • Option for assigning a public address to the nodes. This address enables downloading images from the internet.
    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize the provider. You do not need to create the provider configuration file manually: you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the alb-ready-k8s-cluster.tf cluster configuration file to the same working directory. This file describes:

      • Network.

      • Subnet.

      • Kubernetes cluster.

      • Service account required for the Managed Service for Kubernetes cluster and node group.

      • Service account required for the Application Load Balancer Ingress controller.

      • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Some rules are required for Application Load Balancer to work correctly.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

      • Security profile in Smart Web Security with a Smart Protection rule and a simple rule to test the profile; this rule will only allow traffic from a specific IP address.

        The default basic rule is not specified in the manifest and is created automatically.

    6. Specify the following in the configuration file:

      • Folder ID.
      • Kubernetes version for the Kubernetes cluster and node groups.
      • Kubernetes cluster CIDR; CIDR of services.
      • Name of the Managed Service for Kubernetes cluster service account.
      • Application Load Balancer service account name.
      • Smart Web Security security profile name.
      • IP address to allow traffic from.
    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      If there are any errors in the configuration files, Terraform will point them out.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

    Note

    If you deployed the infrastructure with Terraform, skip the Create a security profile step.

  2. Make sure you have a domain and you can manage resource records in the DNS zone for that domain. Your test app will be available through Ingress on this domain’s subdomain.

    If you do not have a domain yet, register one with any domain name registrar. To manage your domain’s resource records with Yandex Cloud DNS, create a public DNS zone and delegate the domain.

    Note

    In this example, we will use example.com as a domain and demo.example.com as its subdomain.

    Use your own domains as you go through this tutorial.

  3. Install kubectl and configure it to work with the new cluster.
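
The service account and role assignments for the Ingress controller (step 1 of the manual setup above) can also be created with the YC CLI. This is a minimal sketch under a few assumptions: the CLI is installed and initialized, jq is available, and the account name alb-ingress-sa and <folder_ID> are placeholders.

    # Create the service account for the Application Load Balancer Ingress controller (example name).
    yc iam service-account create --name alb-ingress-sa

    # Look up the service account ID.
    SA_ID=$(yc iam service-account get --name alb-ingress-sa --format json | jq -r .id)

    # Assign the roles the controller needs in the folder where the cluster will be created.
    for ROLE in alb.editor vpc.publicAdmin compute.viewer smart-web-security.editor; do
      yc resource-manager folder add-access-binding <folder_ID> \
        --role "$ROLE" \
        --subject "serviceAccount:$SA_ID"
    done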

Install the Application Load Balancer Ingress controller

  1. Install the Application Load Balancer Ingress controller to the yc-alb namespace.

    When installing it, specify the service account you created earlier for the controller.

    The separate yc-alb namespace isolates the controller's resources from those of your test application and Ingress resource.

  2. Make sure you successfully installed the controller:

    kubectl logs deployment.apps/yc-alb-ingress-controller -n yc-alb
    

    The logs should contain messages indicating that the Ingress controller started successfully.

    Example of partial command result
    ...    INFO    Starting EventSource    {"controller": "ingressgroup", ...}
    ...    INFO    Starting Controller     {"controller": "ingressgroup"}
    ...    INFO    Starting EventSource    {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting Controller     {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting EventSource    {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting Controller     {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
    
    ...
    
    ...    INFO    Starting workers        {"controller": "ingressgroup", ...}
    ...    INFO    Starting workers        {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting workers        {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
    
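
Besides the logs, you can check that the controller's deployment and pod are healthy; an optional quick check:

    # The yc-alb-ingress-controller deployment should report 1/1 ready replicas,
    # and its pod should be in the Running state.
    kubectl get deployment,pods -n yc-alb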

Create a test application

Create an application and an associated service for Ingress to expose:

  1. Create a demo-app1.yaml manifest for deploying your application:

    demo-app1.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-app1
      labels:
        tutorial: sws
    data:
      nginx.conf: |
        worker_processes auto;
        events {
        }
        http {
          server {
            listen 80;
            location = /_healthz {
              add_header Content-Type text/plain;
              return 200 'ok';
            }
            location / {
              add_header Content-Type text/plain;
              return 200 'Index';
            }
            location = /app1 {
              add_header Content-Type text/plain;
              return 200 'This is APP#1';
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app1
      labels:
        app: demo-app1
        tutorial: sws
        version: v1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: demo-app1
            version: v1
        spec:
          terminationGracePeriodSeconds: 5
          volumes:
            - name: demo-app1
              configMap:
                name: demo-app1
          containers:
            - name: demo-app1
              image: nginx:latest
              ports:
                - name: http
                  containerPort: 80
              livenessProbe:
                httpGet:
                  path: /_healthz
                  port: 80
                initialDelaySeconds: 3
                timeoutSeconds: 2
                failureThreshold: 2
              volumeMounts:
                - name: demo-app1
                  mountPath: /etc/nginx
                  readOnly: true
              resources:
                limits:
                  cpu: 250m
                  memory: 128Mi
                requests:
                  cpu: 100m
                  memory: 64Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app1
      labels:
        tutorial: sws
    spec:
      selector:
        app: demo-app1
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
    
  2. Deploy the application:

    kubectl apply -f demo-app1.yaml
    

    This will create the ConfigMap, Deployment, and Service objects for the demo-app1 app.

  3. Make sure all objects were successfully created:

    kubectl get configmap,deployment,svc -l tutorial=sws
    
    Command result example
    NAME                  DATA   AGE
    configmap/demo-app1   1      ...
    
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/demo-app1   2/2     2            2           ...
    
    NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)      AGE
    service/demo-app1   NodePort   ...          <none>        80:.../TCP   ...
    
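
Before publishing the application through the load balancer, you can optionally check it from your machine using a port forward; a minimal sketch:

    # Forward local port 8080 to the demo-app1 service inside the cluster.
    kubectl port-forward service/demo-app1 8080:80 &

    # /app1 should return "This is APP#1", and /_healthz should return "ok".
    curl http://localhost:8080/app1
    curl http://localhost:8080/_healthz

    # Stop the port forward when done.
    kill %1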

Create a security profile

Create a security profile with a simple rule so you can easily test the profile. The rules in the profile will only allow traffic from a specific IP address.

Create a security profile:

  1. In the management console, select the folder you want to create a profile in.

  2. In the list of services, select Smart Web Security.

  3. Click Create profile and select From a preset template.

    The profile will contain a number of preconfigured security rules:

    • Smart Protection rule providing full protection for all traffic. This rule takes priority over the default basic rule.

    • Default basic rule denying all traffic that does not satisfy higher-priority rules.

      Tip

      We recommend creating a preconfigured profile with full Smart Protection: it provides the highest level of security for the protected resource.

  4. Set up the profile:

    • Name: Profile name, e.g., test-sp1.

    • Action for the default base rule: Action the basic rule will apply.

      Leave Deny for the basic rule to deny all traffic.

  5. Add a security rule:

    1. Click Add rule.

    2. Specify the main rule settings:

      • Name: Name for the rule, e.g., test-rule1.

      • Priority: Specify a value to give the rule priority over the preconfigured rules, e.g., 999800.

        Note

        The smaller the value, the higher the rule priority. The priorities of the preconfigured rules are as follows:

        • Basic default rule: 1000000.
        • Smart Protection rule providing full protection: 999900.
      • Rule type: Select Base.

      • Action: Select Allow.

    3. Under Conditions, configure the conditions to only allow traffic from a specific IP address:

      1. Select the traffic scope for the rule: On condition.
      2. Select the IP condition.
      3. For IP, select the condition: Matches or falls within the range.
      4. Specify a public IP address, e.g., 203.0.113.200.
    4. Click Add.

    The new rule will appear in the list of security rules.

  6. Click Create.

The new profile will appear in the list of security profiles. Write down the ID of your new security profile as you will need it later.
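
If you prefer the CLI, you can also look up the profile ID there. This assumes your YC CLI version includes the smartwebsecurity command group; if the command is missing, copy the ID from the management console instead.

    # List Smart Web Security profiles in the current folder and note the ID of test-sp1.
    yc smartwebsecurity security-profile list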

Create an Ingress resource

This Ingress resource will describe the Application Load Balancer parameters. The Ingress controller you installed earlier will deploy the load balancer with the specified parameters after the Ingress resource is created.

According to Ingress rules, traffic to the demo.example.com virtual host at the /app1 path will be routed to the service/demo-app1 backend. The security profile you created earlier will be used to protect this backend.

To create an Ingress resource:

  1. Create a file named demo-ingress.yaml with the Ingress resource description:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
      annotations:
        ingress.alb.yc.io/subnets: "<list_of_subnet_IDs>"
        ingress.alb.yc.io/security-groups: "<security_group_ID>"
        ingress.alb.yc.io/external-ipv4-address: "auto"
        ingress.alb.yc.io/group-name: "demo-sws"
        ingress.alb.yc.io/security-profile-id: "<security_profile_ID>"
    spec:
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /app1
                pathType: Exact
                backend:
                  service:
                    name: demo-app1
                    port:
                      number: 80
    

    Where:

    • ingress.alb.yc.io/subnets: List of IDs for subnets where the load balancer will reside.

      If you have created the infrastructure using Terraform, use the ID of the subnet named subnet-a.

    • ingress.alb.yc.io/security-groups: ID of the group you created for the load balancer.

      If you have created the infrastructure with Terraform, specify the ID of the group named alb-traffic.

    • ingress.alb.yc.io/security-profile-id: ID of the previously created security profile from Smart Web Security.

      Note

      The security profile will only apply to the virtual hosts of the Ingress resource in which the annotation is configured. For the Ingress resource described above, the profile will apply to a single virtual host, demo.example.com.

      This is the only Ingress resource in the demo-sws Ingress group. The security profile will not apply to virtual hosts of other Ingress resources if you add such resources to the group later.

To learn more about annotations, see Ingress resource fields and annotations.

  2. Create the Ingress resource:

    kubectl apply -f demo-ingress.yaml
    

    The Application Load Balancer Ingress controller will start creating target groups, backend groups, HTTP routers, and the load balancer.

  3. Check the status of the Ingress resource periodically until the ADDRESS column shows the load balancer's IP address:

    kubectl get ingress demo-ingress
    

    This means the load balancer has been successfully created and can accept traffic.

    Command result example
    NAME             CLASS    HOSTS              ADDRESS         PORTS   AGE
    demo-ingress     <none>   demo.example.com   <IP_address>      80      ...
    
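
Once the address appears, you can optionally inspect the resources the Ingress controller created on the Application Load Balancer side; for example:

    # List the load balancer, HTTP routers, and backend groups created by the controller.
    yc alb load-balancer list
    yc alb http-router list
    yc alb backend-group list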
Create a DNS record for the domain

    1. Create an A record for the demo.example.com domain in the example.com zone. As its value, specify the IP address of the load balancer you created earlier (a CLI sketch follows these steps).

    2. Wait for DNS propagation to finish.

      To check that propagation has completed, use an online DNS propagation checker or query different DNS servers manually:

      nslookup -type=a demo.example.com <DNS_server_IP_address>
      
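
    If the example.com zone is hosted in Cloud DNS, you can also create the A record from the CLI; a sketch, where <zone_name> and <load_balancer_IP_address> are placeholders:

      # Add an A record for demo.example.com pointing at the load balancer (TTL of 600 seconds).
      yc dns zone add-records --name <zone_name> \
        --record "demo.example.com. 600 A <load_balancer_IP_address>"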

Check the result

    Requests to the application deployed in the Kubernetes cluster go through an Application Load Balancer. The virtual hosts to which those requests are directed are protected using the security profile. The profile configuration only allows traffic from a specific IP address, e.g., 203.0.113.200.

    Check that the load balancer works correctly given the security profile settings.

    1. Use a host with an allowed IP address (203.0.113.200) to check that traffic is routed according to the rule defined in the Ingress resource:

      curl http://demo.example.com/app1
      

      Expected result:

      This is APP#1
      
    2. Use a host with an IP address that is not on the list of allowed ones (e.g., 203.0.113.100) to check that traffic is not routed:

      curl http://demo.example.com/app1
      

      The load balancer should return the HTTP 403 Forbidden code and a message saying access to the resource is restricted.
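
    If DNS propagation is not complete yet, you can run the same checks against the load balancer directly by pinning the host name to its IP address in curl; for example:

      # Resolve demo.example.com to the load balancer IP for this request only.
      curl --resolve demo.example.com:80:<load_balancer_IP_address> http://demo.example.com/app1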

    If traffic routing does not work as expected, make sure everything is configured correctly:

    • Make sure the service account for the Ingress controller has the required roles, including the smart-web-security.editor role for working with Smart Web Security (see the CLI sketch below).
    • Make sure the security groups for the Managed Service for Kubernetes cluster and its node groups are configured correctly. If a rule is missing, add it.
    • Make sure the security profile is configured to allow traffic from the relevant IP address.
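
    The first check, the controller service account's role bindings, can also be done from the CLI; a sketch, with <folder_ID> as a placeholder:

      # List access bindings in the folder and make sure the controller's service account
      # has alb.editor, vpc.publicAdmin, compute.viewer, and smart-web-security.editor.
      yc resource-manager folder list-access-bindings <folder_ID>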

    Tip

    After confirming the profile works properly, add more rules if required.

Delete the resources you created

    Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

    1. Delete the Ingress resource you created:

      kubectl delete ingress demo-ingress
      

      This will delete the load balancer and the associated HTTP router.

      The Smart Web Security security profile will be disconnected from the virtual hosts specified in the Ingress resource.

    2. Delete the Managed Service for Kubernetes cluster and its associated infrastructure:

      Manually
      Terraform

      Delete the Managed Service for Kubernetes cluster.

      If required, also delete the service account and security groups you created earlier.

      1. In the terminal window, go to the directory containing the infrastructure plan.

        Warning

        Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

      2. Delete resources:

        1. Run this command:

          terraform destroy
          
        2. Confirm deleting the resources and wait for the operation to complete.

        All the resources described in the Terraform manifests will be deleted.
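
    To confirm that the billable resources are gone, you can list them from the CLI; for example:

      # These lists should no longer contain the tutorial's load balancer and cluster.
      yc alb load-balancer list
      yc managed-kubernetes cluster list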
