

Migrating services from an external network load balancer (NLB) to an L7 Application Load Balancer (ALB) with an internal network load balancer as a target, using the management console

Written by
Yandex Cloud
Updated at May 13, 2025
  • Service migration recommendations
  • Getting started
  • Create a Smart Web Security security profile
  • Create an internal network load balancer for the NGINX Ingress controller
  • Create an L7 load balancer
  • Migrate user load from the external load balancer to the L7 load balancer
    • Keep public IP address for your service
    • Do not keep public IP address for your service

To migrate a service from an external network load balancer to an L7 load balancer:

  1. See recommendations for service migration.
  2. Complete the prerequisite steps.
  3. Create a Smart Web Security profile.
  4. Create an internal network load balancer for the NGINX Ingress controller.
  5. Create an L7 load balancer. At this step, you will connect the Smart Web Security security profile to a virtual host of an L7 load balancer.
  6. Migrate user load from the external network load balancer to the L7 load balancer.

Service migration recommendations

  1. In addition to DDoS protection at OSI L7 using Yandex Smart Web Security, we recommend enabling DDoS protection at L3-L4. To do this, reserve a public static IP address with DDoS protection in advance and use this address for the L7 load balancer's listener.

    If the network load balancer's listener already uses a public IP address with DDoS protection, you can keep it and use it for the L7 load balancer.

    If the network load balancer's listener uses a public IP address without DDoS protection, the only way to get L3-L4 DDoS protection when migrating to an L7 load balancer is to change your service's public IP address.

    When using L3-L4 DDoS protection, configure a trigger threshold for the L3-L4 protection mechanisms aligned with the amount of legitimate traffic to the protected resource. To set up this threshold, contact support.

    Also, set the MTU value to 1450 for the targets downstream of the load balancer. For more information, see Setting up MTU when enabling DDoS protection.

  2. We recommend migrating during the hours when the user load is at its lowest. If you plan to keep your public IP address, bear in mind that migration involves moving this IP address from the network load balancer to the L7 load balancer, and your service will be unavailable while the address is being moved. Under normal conditions, this takes several minutes.

  3. When using an L7 load balancer, requests to backends come with the source IP address from the range of internal IP addresses of the subnets specified when creating the L7 load balancer. The original IP address of the request source (user) is specified in the X-Forwarded-For header. If you want to log public IP addresses of users on the web server, reconfigure it.

  4. Review autoscaling and resource units in the L7 load balancer.
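
Recommendation 3 above can be sketched in code: the backend sees the load balancer's internal address as the connection source, so the user's address must be read from the X-Forwarded-For header. The following is a minimal illustration, not part of the official guide, assuming the common comma-separated header form where the left-most entry is the original client:

```python
def client_ip(x_forwarded_for: str) -> str:
    """Return the original client IP from an X-Forwarded-For header.

    The header is a comma-separated chain: the left-most entry is the
    address of the original request source (the user), and later entries
    are intermediate proxies such as the L7 load balancer.
    """
    return x_forwarded_for.split(",")[0].strip()


print(client_ip("203.0.113.5, 10.128.0.17"))  # → 203.0.113.5
```

A web server such as NGINX achieves the same with its own configuration directives, which is usually preferable to application-level parsing.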

Getting started

  1. Create subnets in three availability zones. These will be used for the L7 load balancer.

  2. Create security groups that allow the L7 load balancer to receive incoming traffic and send it to the targets, and that allow the targets to receive incoming traffic from the load balancer.

  3. When using HTTPS, add your service's TLS certificate to Yandex Certificate Manager.

  4. Reserve a static public IP address with DDoS protection at L3-L4 for the L7 load balancer. See the service migration recommendations.

Create a Smart Web Security security profile

Create a Smart Web Security security profile by selecting From a preset template.

Use these settings when creating the profile:

  • In the Action for the default base rule field, select Allow.
  • For the Smart Protection rule, enable Only logging (dry run).

With these settings, the profile only logs information about the traffic without applying any actions to it. This reduces the risk of cutting users off due to profile configuration issues. Later on, you can disable Only logging (dry run) and configure blocking rules for your scenario in the security profile.

Create an internal network load balancer for the NGINX Ingress controller

  1. Create an internal network load balancer for the NGINX Ingress controller. Select the option matching the method you initially used to deploy the NGINX Ingress controller:

    Using a Helm chart
    1. Add the configuration parameters for the internal network load balancer to the values.yaml file you used to initially configure the NGINX Ingress controller. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: true
          internal:
            enabled: true
            annotations:
              yandex.cloud/load-balancer-type: internal
              yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
            loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
            externalTrafficPolicy: Local
      
    2. Use this command to apply the NGINX Ingress controller configuration changes:

      helm upgrade <NGINX_Ingress_controller_name> -f values.yaml <chart_for_NGINX_Ingress_controller> -n <namespace>
      
    Using a manifest

    1. Create a YAML file and describe the Service resource in it:

      apiVersion: v1
      kind: Service
      metadata:
        name: <resource_name>
        namespace: <namespace>
        annotations:
          yandex.cloud/load-balancer-type: internal
          yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
        ports:
        - port: <HTTP_port_number_e.g._80>
          targetPort: <HTTP_port_number_of_NGINX_Ingress_controller_pod_e.g._80>
          protocol: TCP
          name: http
        - port: <HTTPS_port_number_e.g._443>
          targetPort: <HTTPS_port_number_of_NGINX_Ingress_controller_pod_e.g._443>
          protocol: TCP
          name: https
        selector:
          <NGINX_Ingress_controller_pod_selectors>
      
    2. Apply the changes using this command:

    kubectl apply -f <Service_resource_file>
    
  2. Wait until the internal network load balancer is created and a matching Service object appears. You can use this command to view information about the services:

    kubectl get service
    

Create an L7 load balancer

  1. Create a target group for the L7 load balancer. Under Targets, select Outside VPC and specify the internal IP address of the internal network load balancer. Click Add target resource and then Create.

  2. Create a group of backends with the following parameters:

    1. Select HTTP as the backend group type.

    2. Under Backends, click Add and set up the backend:

      • Type: Target group.
      • Target groups: Target group you created earlier.
      • Port: TCP port configured for your internal network load balancer's listener. Usually, this is port 80 for HTTP and port 443 for HTTPS.
      • Under Protocol settings, select a protocol, HTTP or HTTPS, based on your service.
      • Under HTTP health check, delete the health check. Do not add it, as the network load balancer used as the target is a fault-tolerant service.
  3. Create an HTTP router.

    Under Virtual hosts, click Add virtual host and specify the virtual host settings:

    • Authority: Your service domain name.

    • Security profile: Smart Web Security profile you created earlier.

      Warning

      Linking your security profile to a virtual host of the L7 load balancer is the key step to connecting Smart Web Security.

    • Click Add route and specify the route settings:

      • Path: Starts with /.
      • Action: Routing.
      • Backend group: Backend group you created earlier.

    You can add multiple domains by clicking Add virtual host.

  4. Create an L7 load balancer by selecting the Manual creation method:

    • Specify the previously created security group.

      Warning

      The Managed Service for Kubernetes cluster node groups must have security group rules that allow incoming connections from the L7 load balancer to a range of ports (30000-32767) from the subnets hosting the L7 load balancer or from its security group.

    • Under Allocation, select the subnets in three availability zones for the load balancer's nodes. Enable traffic in these subnets.

    • Under Autoscaling settings, specify the minimum number of resource units per availability zone based on expected load.

      We recommend selecting the number of resource units based on load expressed in:

      • Number of requests per second (RPS)
      • Number of concurrent active connections
      • Number of new connections per second
      • Traffic processed per second
    • Under Listeners, click Add listener and set up the listener:

      • Under Public IP address, specify:

        • Port: TCP port configured for your internal network load balancer's listener. Usually, this is port 80 for HTTP and port 443 for HTTPS.
        • Type: List. From the list, select a public IP address with DDoS protection at L3-L4. For more information, see the service migration recommendations.
      • Under Receiving and processing traffic, specify:

        • Listener type: HTTP.
        • Protocol: Depending on your service, select HTTP or HTTPS.
        • If you select HTTPS, specify the TLS certificate you added to Certificate Manager earlier in the Certificates field.
        • HTTP router: HTTP router you created earlier.
  5. Wait until the L7 load balancer's status changes to Active.

  6. Go to the new L7 load balancer and select Health checks on the left. Make sure all the L7 load balancer's health checks return HEALTHY.

  7. Run a test request to the service through the L7 load balancer, for example, using one of these methods:

    • Add this record to the hosts file on your workstation: <L7_load_balancer_public_IP_address> <service_domain_name>. Delete the record after the test.

    • Execute the request using cURL, depending on the protocol type:

      For HTTP:

      curl http://<service_domain_name> \
          --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>

      For HTTPS:

      curl https://<service_domain_name> \
          --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>
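
The curl commands above can also be reproduced programmatically. The sketch below is an HTTP-only illustration (HTTPS would additionally require TLS with SNI): it connects to a given IP address while presenting the service domain name in the Host header, demonstrated here against a throwaway local server rather than a real load balancer:

```python
import http.client
import http.server
import threading

def request_via_ip(ip: str, port: int, host_header: str, path: str = "/") -> int:
    """HTTP-only analogue of `curl --resolve`: connect to a specific IP
    while presenting the service domain name in the Host header."""
    conn = http.client.HTTPConnection(ip, port, timeout=5)
    try:
        conn.request("GET", path, headers={"Host": host_header})
        return conn.getresponse().status
    finally:
        conn.close()

# Demo against a throwaway local server; in practice you would pass the
# L7 load balancer's public IP address and your real service domain name.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status = request_via_ip("127.0.0.1", server.server_address[1], "example.com")
print(status)  # 200 if the server answered
server.shutdown()
```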
      

Migrate user load from the external load balancer to the L7 load balancer

Select one of the migration options:

  • Keep the public IP address for your service.
  • Do not keep the public IP address for your service.

Keep public IP address for your service

  1. If your external network load balancer is using a dynamic public IP address, convert it to a static one.

  2. Delete the external network load balancer. Select the option matching the method you initially used to deploy the NGINX Ingress controller:

    Using a Helm chart
    1. In the values.yaml file you used to initially configure the NGINX Ingress controller, under controller.service.external, set enabled: false. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: false
          ...
      
    2. Use this command to apply the NGINX Ingress controller configuration changes:

      helm upgrade <NGINX_Ingress_controller_name> -f values.yaml <chart_for_NGINX_Ingress_controller> -n <namespace>
      

    Using a manifest

    Delete the Service resource for the external network load balancer using this command:

    kubectl delete service <Service_resource_name_for_external_network_load_balancer>
    
  3. Wait until the external network load balancer for the NGINX Ingress controller and its Service object are deleted. You can use this command to view information about the services:

    kubectl get service
    

    This will make your service unavailable through the external network load balancer.

  4. In the L7 load balancer, assign to the listener the public IP address previously assigned to the external network load balancer.

    CLI

    If you do not have the Yandex Cloud command line interface (CLI) yet, install and initialize it.

    The folder specified when creating the CLI profile is used by default. To change the default folder, use the yc config set folder-id <folder_ID> command. You can specify a different folder using the --folder-name or --folder-id parameter.

    To change a public IP address, run this command:

    yc application-load-balancer load-balancer update-listener <load_balancer_name> \
       --listener-name <listener_name> \
       --external-ipv4-endpoint address=<service_public_IP_address>,port=<service_port>
    

    Where address is the public IP address previously assigned to the external network load balancer.

  5. After the IP address changes, your service will again be available through the L7 load balancer. Monitor the L7 load balancer's user load on the load balancer statistics charts.

  6. Delete the now-unused static public IP address you reserved when creating the L7 load balancer.
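
While waiting for the moved IP address to come back into service (step 5), a simple availability poll can complement the statistics charts. This helper is an illustration, not part of the official guide:

```python
import time
import urllib.error
import urllib.request

def wait_until_available(url: str, timeout_s: float = 300,
                         interval_s: float = 5) -> bool:
    """Poll `url` until it answers with a non-5xx status; False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as err:
            if err.code < 500:   # 4xx still means the service is reachable
                return True
        except urllib.error.URLError:
            pass                 # connection refused / DNS failure: keep polling
        time.sleep(interval_s)
    return False

# Example: an address that refuses connections times out quickly.
print(wait_until_available("http://127.0.0.1:1/", timeout_s=1, interval_s=0.2))  # → False
```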

Do not keep public IP address for your service

  1. To migrate user load from an external network load balancer to an L7 load balancer, in the DNS service of your domain's public zone, change the A record value for the service domain name to the public IP address of the L7 load balancer. If the public domain zone was created in Yandex Cloud DNS, change the record using this guide.

    Note

    The propagation of DNS record updates depends on the time-to-live (TTL) value and the number of links in the DNS request chain. This process can take a long time.

  2. As the DNS record updates propagate, watch requests to the L7 load balancer increase on the load balancer statistics charts.

  3. Watch the load on the external network load balancer decrease using its processed_bytes and processed_packets metrics. You can also create a dashboard to visualize these metrics. A prolonged absence of load on the external network load balancer indicates that the user load has been transferred to the L7 load balancer.

  4. (Optional) Delete the external network load balancer after migrating user load to the L7 load balancer. Select the option matching the method you initially used to deploy the NGINX Ingress controller:

    Using a Helm chart
    1. In the values.yaml file you used to initially configure the NGINX Ingress controller, under controller.service.external, set enabled: false. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: false
          ...
      
    2. Use this command to apply the NGINX Ingress controller configuration changes:

      helm upgrade <NGINX_Ingress_controller_name> -f values.yaml <chart_for_NGINX_Ingress_controller> -n <namespace>
      

    Warning

    When you make changes to the NGINX Ingress controller configuration, your service will be temporarily unavailable.

    Using a manifest

    Delete the Service resource for the external network load balancer using this command:

    kubectl delete service <Service_resource_name_for_external_network_load_balancer>
    
  5. Wait until the external network load balancer for the NGINX Ingress controller and its Service object are deleted. You can use this command to view information about the services:

    kubectl get service
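
The DNS cutover described in step 1 can be monitored with a short resolution check. This sketch is illustrative; the domain and IP below are placeholders for <service_domain_name> and the L7 load balancer's public IP address:

```python
import socket

def resolved_ips(hostname: str) -> set:
    """Return the set of addresses the local resolver currently returns."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None)}

# Placeholders: substitute your service domain name and the public IP
# address of the L7 load balancer.
domain = "localhost"
expected_ip = "127.0.0.1"
print(expected_ip in resolved_ips(domain))
```

Note that the local resolver may keep serving the old address until the record's TTL expires, so a False result does not necessarily mean the record change failed.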
    

© 2025 Direct Cursus Technology L.L.C.