Creating an L7 load balancer with a Smart Web Security profile through an Application Load Balancer ingress controller

Written by
Yandex Cloud
Updated at July 23, 2025
  • Required paid resources
  • Getting started
  • Install the Application Load Balancer ingress controller
  • Create a test application
  • Create a security profile
  • Create an ingress resource
    • Create a DNS record for the domain
  • Check the result
  • Delete the resources you created

With Yandex Smart Web Security, you can protect apps in a Yandex Managed Service for Kubernetes cluster against DDoS attacks and bots. To do this, publish your apps through an ingress resource managed by the Application Load Balancer ingress controller and associate it with a Smart Web Security profile.

Based on the ingress resource, an L7 load balancer will be deployed with the security profile associated with the load balancer’s virtual hosts. Smart Web Security will protect the application backends specified in the ingress resource: all HTTP requests to the backends will be processed according to the security profile rules.

To create an L7 load balancer with an associated security profile using ingress:

  1. Install the Application Load Balancer ingress controller.
  2. Create a test application.
  3. Create a security profile.
  4. Create an ingress resource.
  5. Create a DNS record for the domain.
  6. Check the result.

If you no longer need the resources you created, delete them.

Required paid resources

The support cost includes:

  • Fee for a DNS zone and DNS requests (see Cloud DNS pricing).
  • Fee for using the master and outbound traffic in a Managed Service for Kubernetes cluster (see Managed Service for Kubernetes pricing).
  • Fee for using computing resources, OS, and storage in cluster nodes (VMs) (see Compute Cloud pricing).
  • Fee for using an L7 load balancer’s computing resources (see Application Load Balancer pricing).
  • Fee for public IP addresses for cluster nodes and the L7 load balancer (see Virtual Private Cloud pricing).
  • Fee for the number of requests to Smart Web Security (see Smart Web Security pricing).

Getting started

  1. Set up the required infrastructure:

    Manually
    Terraform
    1. Create a service account for the Application Load Balancer ingress controller to use.

      Assign the following roles to the account in the folder where you will create the cluster (for a yc CLI sketch of this step, see the example after this list):

      • alb.editor

      • vpc.publicAdmin

      • compute.viewer

      • smart-web-security.editor

        Warning

        You will need this role to correctly integrate the L7 Application Load Balancer with the security profile.

    2. Create a service account for the cluster and node group to use.

      Assign the following roles to the account in the folder where you will create the cluster:

      • k8s.clusters.agent
      • vpc.publicAdmin
    3. Create security groups for the Managed Service for Kubernetes cluster and its node groups.

      Also configure the security groups required for Application Load Balancer.

      Warning

      The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

    4. Create a cluster. When creating a cluster, select:

      • Service account you created earlier to use for resources and nodes.
      • Security groups you created earlier to assign to the cluster.
      • Option for assigning a public address to the cluster. This address enables using the Kubernetes API from the internet.
    5. Create a node group in the cluster. When creating the node group, select:

      • Security groups you created earlier to assign to the node group.
      • Option for assigning a public address to the nodes. This address enables downloading images from the internet.
    1. If you do not have Terraform yet, install it.

    2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

    3. Configure and initialize the provider. You do not need to create the provider configuration file manually: you can download it.

    4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

    5. Download the alb-ready-k8s-cluster.tf cluster configuration file to the same working directory. This file describes:

      • Network.

      • Subnet.

      • Kubernetes cluster.

      • Service account required for the Managed Service for Kubernetes cluster and node group.

      • Service account required for the Application Load Balancer ingress controller.

      • Security groups which contain rules required for the Managed Service for Kubernetes cluster and its node groups.

        Some rules are required for Application Load Balancer to work correctly.

        Warning

        The configuration of security groups determines the performance and availability of the cluster and the services and applications running in it.

      • Security profile in Smart Web Security with a smart protection rule and a simple rule to test the profile; this rule will only allow traffic from a specific IP address.

        The default basic rule is not specified in the manifest and is created automatically.

    6. Specify the following in the configuration file:

      • Folder ID.
      • Kubernetes version for the Kubernetes cluster and node groups.
      • Kubernetes cluster CIDR; CIDR of the services.
      • Name of the Managed Service for Kubernetes cluster’s service account.
      • Application Load Balancer service account name.
      • Smart Web Security profile name.
      • IP address to allow traffic from.
    7. Make sure the Terraform configuration files are correct using this command:

      terraform validate
      

      Terraform will show any errors found in your configuration files.

    8. Create the required infrastructure:

      1. Run this command to view the planned changes:

        terraform plan
        

        If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

      2. If everything looks correct, apply the changes:

        1. Run this command:

          terraform apply
          
        2. Confirm updating the resources.

        3. Wait for the operation to complete.

      All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

    Note

    If you deployed the infrastructure with Terraform, skip the Create a security profile step.

  2. Make sure you have a domain and can manage resource records in the DNS zone for that domain. Your test app will be available through ingress on this domain’s subdomain.

    If you do not have a domain yet, register one with any domain name registrar. To manage your domain’s resource records with Yandex Cloud DNS, create a public DNS zone and delegate the domain.

    Note

    In this tutorial, we will use example.com as a domain and demo.example.com as its subdomain.

    Use your own domains as you go through this tutorial.

  3. Install kubectl and configure it to work with the new cluster.
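
If you set up the infrastructure manually and prefer the command line, the service account for the Application Load Balancer ingress controller from step 1 can also be created with the yc CLI. This is a minimal sketch rather than part of the original steps: the account name alb-ingress-sa is an example, it assumes the yc CLI is installed and pointed at the target folder, and it uses jq to parse JSON output.

  # Example account name; replace with your own
  yc iam service-account create --name alb-ingress-sa

  # Look up the IDs needed for the role bindings (requires jq)
  SA_ID=$(yc iam service-account get --name alb-ingress-sa --format json | jq -r .id)
  FOLDER_ID=$(yc config get folder-id)

  # Assign the roles listed in step 1 at the folder level
  for role in alb.editor vpc.publicAdmin compute.viewer smart-web-security.editor; do
    yc resource-manager folder add-access-binding "$FOLDER_ID" \
      --role "$role" \
      --subject "serviceAccount:$SA_ID"
  done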

Install the Application Load Balancer ingress controller

  1. Install the Application Load Balancer ingress controller to the yc-alb namespace.

    Specify the service account you created earlier for the controller.

    By using the separate yc-alb namespace, you isolate the controller’s resources from those of your test application and ingress.

  2. Make sure you successfully installed the controller:

    kubectl logs deployment.apps/yc-alb-ingress-controller -n yc-alb
    

    Logs should contain messages saying the ingress controller successfully started.

    Example of a partial command result
    ...    INFO    Starting EventSource    {"controller": "ingressgroup", ...}
    ...    INFO    Starting Controller     {"controller": "ingressgroup"}
    ...    INFO    Starting EventSource    {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting Controller     {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting EventSource    {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting Controller     {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
    
    ...
    
    ...    INFO    Starting workers        {"controller": "ingressgroup", ...}
    ...    INFO    Starting workers        {"controller": "grpcbackendgroup", "controllerGroup": "alb.yc.io", ...}
    ...    INFO    Starting workers        {"controller": "httpbackendgroup", "controllerGroup": "alb.yc.io", ...}
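
    You can also confirm that the controller deployment has fully rolled out and that its pods are running. These are standard kubectl commands; the deployment name matches the one used in the log command above:

    # Wait until all controller replicas report ready
    kubectl rollout status deployment/yc-alb-ingress-controller -n yc-alb

    # The controller pods should be in the Running state
    kubectl get pods -n yc-alb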
    

Create a test application

Create an application and an associated service for ingress to expose:

  1. Create a manifest named demo-app1.yaml for deploying your application:

    demo-app1.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-app1
      labels:
        tutorial: sws
    data:
      nginx.conf: |
        worker_processes auto;
        events {
        }
        http {
          server {
            listen 80;
            location = /_healthz {
              add_header Content-Type text/plain;
              return 200 'ok';
            }
            location / {
              add_header Content-Type text/plain;
              return 200 'Index';
            }
            location = /app1 {
              add_header Content-Type text/plain;
              return 200 'This is APP#1';
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app1
      labels:
        app: demo-app1
        tutorial: sws
        version: v1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: demo-app1
            version: v1
        spec:
          terminationGracePeriodSeconds: 5
          volumes:
            - name: demo-app1
              configMap:
                name: demo-app1
          containers:
            - name: demo-app1
              image: nginx:latest
              ports:
                - name: http
                  containerPort: 80
              livenessProbe:
                httpGet:
                  path: /_healthz
                  port: 80
                initialDelaySeconds: 3
                timeoutSeconds: 2
                failureThreshold: 2
              volumeMounts:
                - name: demo-app1
                  mountPath: /etc/nginx
                  readOnly: true
              resources:
                limits:
                  cpu: 250m
                  memory: 128Mi
                requests:
                  cpu: 100m
                  memory: 64Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app1
      labels:
        tutorial: sws
    spec:
      selector:
        app: demo-app1
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
    
  2. Deploy the application:

    kubectl apply -f demo-app1.yaml
    

    This will create the ConfigMap, Deployment, and Service objects for the demo-app1 app.

  3. Make sure all objects were successfully created:

    kubectl get configmap,deployment,svc -l tutorial=sws
    
    Example of a command result
    NAME                  DATA   AGE
    configmap/demo-app1   1      ...
    
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/demo-app1   2/2     2            2           ...
    
    NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)      AGE
    service/demo-app1   NodePort   ...          <none>        80:.../TCP   ...
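
  4. Optionally, check the application responses before exposing it through the load balancer. This is a quick local sanity check, not part of the original tutorial, using kubectl port-forward; 8080 is an arbitrary local port:

    # Forward the service port to the local machine in the background
    kubectl port-forward service/demo-app1 8080:80 &

    curl http://localhost:8080/_healthz   # expected: ok
    curl http://localhost:8080/app1       # expected: This is APP#1

    # Stop the background port-forward
    kill %1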
    

Create a security profile

Create a security profile with a simple rule so you can easily test the profile. The rules in the profile will only allow traffic from a specific IP address.

Create a security profile:

  1. In the management console, select the folder where you want to create a profile.

  2. In the list of services, select Smart Web Security.

  3. Click Create profile and select From a preset template.

    The profile will contain a number of preconfigured security rules:

    • Smart protection rule providing full protection for all traffic. This rule takes priority over the default basic rule.

    • Default basic rule denying all traffic that does not satisfy higher-priority rules.

      Tip

      Creating a pre-configured profile with full Smart Protection is preferable. This will ensure the highest level of security for your resource.

  4. Set up the profile:

    • Name: Profile name, e.g., test-sp1.

    • Action for the default base rule: Action the basic rule will apply.

      Leave Deny for the basic rule to deny all traffic.

  5. Add a security rule:

    1. Click Add rule.

    2. Specify the main rule settings:

      • Name: Name for the rule, e.g., test-rule1.

      • Priority: Specify a value to give the rule priority over the preconfigured rules, e.g., 999800.

        Note

        The smaller the value, the higher the rule priority. The priorities of the preconfigured rules are as follows:

        • Basic default rule: 1000000.
        • Smart Protection rule providing full protection: 999900.
      • Rule type: Select Base.

      • Action: Select Allow.

    3. Under Conditions, configure the conditions to only allow traffic from a specific IP address:

      1. Select the traffic scope for the rule: On condition.
      2. Select the IP condition.
      3. For IP, select the condition: Matches or falls within the range.
      4. Specify a public IP address, e.g., 203.0.113.200.
    4. Click Add.

    The new rule will appear in the list of security rules.

  6. Click Create.

The new profile will appear in the list of security profiles. Write down the ID of your new security profile, as you will need it later.
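
If your yc CLI version includes the Smart Web Security commands (an assumption you can verify with yc smartwebsecurity --help), you can also look up the profile ID from the command line:

  # List Smart Web Security profiles in the current folder and note the ID of test-sp1
  yc smartwebsecurity security-profile list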

Create an ingress resource

This ingress resource will describe the Application Load Balancer properties. The ingress controller you installed earlier will deploy the load balancer with the specified properties after the ingress resource is created.

As per the ingress rules, traffic to the demo.example.com virtual host at the /app1 path will be routed to the service/demo-app1 backend. The security profile you created earlier will protect this backend.

To create an ingress resource:

  1. Create a file named demo-ingress.yaml with the ingress resource description:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
      annotations:
        ingress.alb.yc.io/subnets: "<list_of_subnet_IDs>"
        ingress.alb.yc.io/security-groups: "<security_group_ID>"
        ingress.alb.yc.io/external-ipv4-address: "auto"
        ingress.alb.yc.io/group-name: "demo-sws"
        ingress.alb.yc.io/security-profile-id: "<security_profile_ID>"
    spec:
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /app1
                pathType: Exact
                backend:
                  service:
                    name: demo-app1
                    port:
                      number: 80
    

    Where:

    • ingress.alb.yc.io/subnets: List of IDs for subnets where the load balancer will reside.

      If you created the infrastructure using Terraform, use the ID of the subnet named subnet-a.

    • ingress.alb.yc.io/security-groups: ID of the group you created for the load balancer.

      If you created the infrastructure with Terraform, specify the ID of the group named alb-traffic.

    • ingress.alb.yc.io/security-profile-id: ID of the security profile you created earlier in Smart Web Security.

      Note

      The security profile will only apply to the virtual hosts of the ingress resource with this annotation configured. For the ingress resource described above, the profile will apply to a single virtual host, demo.example.com.

      This is the only ingress resource in the demo-sws ingress group. The security profile will not apply to the virtual hosts of other ingress resources if you add such resources to the group later.

Learn more about annotations in Ingress resource fields and annotations.

  2. Create the ingress resource:

    kubectl apply -f demo-ingress.yaml
    

    The Application Load Balancer ingress controller will start creating target groups, backend groups, HTTP routers, and the load balancer.

  3. Check the ingress resource status periodically until the ADDRESS column displays the load balancer’s IP address:

    kubectl get ingress demo-ingress
    

    This means the load balancer has been successfully created and can accept traffic.

    Example of a command result
    NAME             CLASS    HOSTS              ADDRESS         PORTS   AGE
    demo-ingress     <none>   demo.example.com   <IP_address>      80      ...
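
    Once the address appears, you can also extract it directly, e.g., to use in the DNS record in the next step. This is a standard kubectl query; it assumes the controller publishes an IP address in the usual status field:

    kubectl get ingress demo-ingress \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'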
    
Create a DNS record for the domain

    1. Create an A record for the demo.example.com domain in the example.com zone. In its value, specify the IP address of the load balancer you created earlier.

    2. Wait until the DNS propagation is finished.

      To make sure the record has propagated, use online DNS lookup tools or query different DNS servers manually:

      nslookup -type=a demo.example.com <DNS_server_IP_address>
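
    If the example.com zone is hosted in Yandex Cloud DNS, you can also create the A record from step 1 with the yc CLI. A minimal sketch: the zone name example-zone is a hypothetical placeholder, and <IP_address> is the load balancer address you obtained earlier:

      yc dns zone add-records --name example-zone \
        --record "demo.example.com. 600 A <IP_address>"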
      

Check the result

    Requests to the application deployed in the Kubernetes cluster go through the L7 Application Load Balancer. The virtual hosts these requests are routed to are protected by the security profile, which only allows traffic from a specific IP address, e.g., 203.0.113.200.

    Make sure the load balancer works correctly as per the security profile settings.

    1. Use a host with an allowed IP address (203.0.113.200) to make sure traffic is routed as per the rule defined in the ingress resource:

      curl http://demo.example.com/app1
      

      Expected result:

      This is APP#1
      
    2. Use a host with an IP address not on the list of allowed ones (e.g., 203.0.113.100) to make sure traffic is not routed:

      curl http://demo.example.com/app1
      

      The load balancer should return the HTTP 403 Forbidden code and a message saying access to the resource is restricted.
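
    A compact way to compare the two cases is to print only the HTTP status code. If DNS has not yet propagated for the host you are testing from, you can also pin the domain to the load balancer address with --resolve. Both are standard curl options; <IP_address> is the load balancer address:

      # Expected: 200 from the allowed address, 403 from any other address
      curl -s -o /dev/null -w '%{http_code}\n' http://demo.example.com/app1

      # Bypass DNS and send the request straight to the load balancer
      curl -s -o /dev/null -w '%{http_code}\n' \
        --resolve demo.example.com:80:<IP_address> http://demo.example.com/app1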

    If traffic routing does not work as expected, check the following:

    • The service account for the ingress controller has all the required roles, including the one for Smart Web Security.
    • The security groups for the Managed Service for Kubernetes cluster and its node groups are configured correctly. If a rule is missing, add it.
    • The security profile allows traffic from the relevant IP address.

    Tip

    After confirming the profile works properly, add more rules if required.

Delete the resources you created

    Some resources are not free of charge. To avoid paying for them, delete the resources you no longer need:

    1. Delete the ingress resource you created:

      kubectl delete ingress demo-ingress
      

      This will delete the load balancer and the associated HTTP router.

      The Smart Web Security profile will be disassociated from the virtual hosts specified in the ingress resource.

    2. Delete the Managed Service for Kubernetes cluster and its associated infrastructure:

      Manually
      Terraform

      Delete the Managed Service for Kubernetes cluster.

      If needed, also delete the service accounts and security groups you created earlier.

      1. In the terminal window, go to the directory containing the infrastructure plan.

        Warning

        Make sure the directory has no Terraform manifests with the resources you want to keep. Terraform deletes all resources that were created using the manifests in the current directory.

      2. Delete resources:

        1. Run this command:

          terraform destroy
          
        2. Confirm deleting the resources and wait for the operation to complete.

        All the resources described in the Terraform manifests will be deleted.
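
    If you deleted only the ingress resource and are keeping the cluster, you can also remove the test application objects created from demo-app1.yaml earlier (the ConfigMap, Deployment, and Service):

      kubectl delete -f demo-app1.yaml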
