Yandex Network Load Balancer

Migrating services from an external NLB to L7 ALB, with an internal NLB as a target, using Terraform

Written by
Yandex Cloud
Updated at October 20, 2025
  • Service migration recommendations
  • Create an internal network load balancer for the NGINX Ingress Controller
  • Create an infrastructure for the L7 load balancer
  • Test the L7 load balancer
  • Migrate user traffic from the external network load balancer to the L7 load balancer
    • Keep the public IP address for your service
    • Do not keep the public IP address for your service

To migrate a service from a network load balancer to an L7 load balancer:

  1. See the service migration recommendations.
  2. Create an internal network load balancer for the NGINX Ingress Controller.
  3. Create your infrastructure. At this step, you will associate the Smart Web Security profile with a virtual host of the L7 load balancer.
  4. Test the L7 load balancer.
  5. Migrate user traffic from the network load balancer to the L7 load balancer.

Service migration recommendations

  1. Optionally, enable L3-L4 DDoS protection (OSI layers 3 and 4). It will complement the L7 protection provided by Yandex Smart Web Security after the migration.

    To enable L3-L4 protection:

    1. Before the migration, reserve a public static IP address with DDoS protection and use this address for the L7 load balancer's listener. If you already have a protected public IP address for the load balancer, you can keep this address during migration. Otherwise, you will have to change the IP address to a protected one.

    2. Configure a trigger threshold for the protection mechanisms, consistent with the amount of legitimate traffic to the protected resource. To set up this threshold, contact support.

    3. Set the MTU value to 1450 for the targets downstream of the load balancer. For more information, see MTU and TCP MSS.

  2. Perform the migration during the hours when user load is at its lowest. If you decide to keep your public IP address, your service will be unavailable while this IP address is being moved from the network load balancer to the L7 load balancer. This usually takes a few minutes.

  3. When you use an L7 load balancer, requests to backends arrive with a source IP address from the range of internal IP addresses of the subnets specified when creating the L7 load balancer. The original IP address of the request source (the user) is provided in the X-Forwarded-For header. If you want to log users' public IP addresses on the web server, reconfigure it accordingly.

  4. Before the migration, define the minimum number of resource units in the L7 load balancer's autoscaling settings:

    Select the number of resource units based on an analysis of your service load, expressed as:

    • Number of requests per second (RPS).
    • Number of concurrent active connections.
    • Number of new connections per second.
    • Traffic processed per second.
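
For the X-Forwarded-For recommendation above: if you deployed the NGINX Ingress Controller with the ingress-nginx Helm chart, you can have the controller trust the forwarded headers via its ConfigMap settings. A minimal sketch of the relevant values.yaml fragment, assuming the ingress-nginx chart's `controller.config` keys (verify the option names against your chart version):

```yaml
controller:
  config:
    # Trust the X-Forwarded-For header set by the L7 load balancer
    use-forwarded-headers: "true"
    # Keep the full chain of client IPs when restoring the source address
    compute-full-forwarded-for: "true"
```

With these options enabled, the controller's access logs and upstream requests reflect the original client IP rather than the balancer's internal address.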
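
The sizing analysis in the last recommendation reduces to taking the maximum over the four load dimensions. A minimal sketch of that calculation; the per-unit capacities below are hypothetical placeholders, not published Yandex Cloud figures, so substitute the values applicable to your configuration:

```python
import math

# Hypothetical capacity of a single resource unit; substitute real figures.
UNIT_CAPACITY = {
    "rps": 1000,                 # requests per second
    "active_connections": 4000,  # concurrent active connections
    "new_connections_per_s": 200,
    "bandwidth_mbps": 300,       # traffic processed per second
}

def required_units(load: dict) -> int:
    """Smallest unit count that covers every load dimension."""
    return max(math.ceil(load[k] / UNIT_CAPACITY[k]) for k in load)

peak_load = {"rps": 3500, "active_connections": 9000,
             "new_connections_per_s": 150, "bandwidth_mbps": 450}
print(required_units(peak_load))  # 4: RPS needs ceil(3500/1000) = 4 units
```

The dimension with the highest unit demand (here, RPS) determines the minimum.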

Create an internal network load balancer for the NGINX Ingress Controller

  1. Create an internal network load balancer for the NGINX Ingress controller. Select the option matching the method you originally used to deploy the controller:

    Using a Helm chart
    1. Add the configuration parameters for the internal network load balancer to the values.yaml file you used to initially configure the NGINX Ingress controller. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: true
          internal:
            enabled: true
            annotations:
              yandex.cloud/load-balancer-type: internal
              yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
            loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
            externalTrafficPolicy: Local
      
    2. Use this command to apply the NGINX Ingress controller configuration changes:

      helm upgrade <NGINX_Ingress_controller_name> -f values.yaml <chart_for_NGINX_Ingress_controller> -n <namespace>
      
    Using a manifest

    1. Create a YAML file and describe the Service resource in it:

      apiVersion: v1
      kind: Service
      metadata:
        name: <resource_name>
        namespace: <namespace>
        annotations:
          yandex.cloud/load-balancer-type: internal
          yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local
        loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
        ports:
        - port: <80_or_another_port_number_for_HTTP>
          targetPort: <80_or_another_port_number_for_NGINX_Ingress_controller_pod_for_HTTP>
          protocol: TCP
          name: http
        - port: <443_or_another_port_number_for_HTTPS>
          targetPort: <443_or_another_port_number_for_NGINX_Ingress_controller_pod_for_HTTPS>
          protocol: TCP
          name: https
        selector:
          <NGINX_Ingress_controller_pod_selectors>
      
    2. Apply the changes using this command:

    kubectl apply -f <Service_resource_file>
    
  2. Wait until the internal network load balancer is created and a matching Service object appears. You can use this command to view information about the services:

    kubectl get service
    

Create an infrastructure for the L7 load balancer

  1. If you do not have Terraform yet, install it.

  2. Get the authentication credentials. You can add them to environment variables or specify them later in the provider configuration file.

  3. Configure and initialize the provider. You do not need to create the provider configuration file manually: you can download it.

  4. Place the configuration file in a separate working directory and specify the parameter values. If you did not add the authentication credentials to environment variables, specify them in the configuration file.

  5. Download the configuration file to the same working directory based on the protocol you are using:

    • HTTP: alb-int-nlb-http.tf configuration file
    • HTTPS: alb-int-nlb-https.tf configuration file

    These files describe:

    • Subnets for the L7 load balancer.
    • Security group for the L7 load balancer.
    • Static address for the L7 load balancer.
    • Importing a TLS certificate to Certificate Manager (if using HTTPS).
    • Smart Web Security profile.
    • Target group, backend group, and HTTP router for the L7 load balancer.
    • L7 load balancer.
  6. Specify the following variables in the configuration file:

    • domain_name: Your service domain name.
    • network_id: ID of the network containing the VMs from the network load balancer's target group.
    • ip_address_int_nlb: Internal IP address of the internal load balancer you created earlier.
    • certificate (for HTTPS): Path to the self-signed custom certificate.
    • private_key (for HTTPS): Path to the private key file.
  7. Make sure the Terraform configuration files are correct using this command:

    terraform validate
    

    Terraform will display any configuration errors detected in your files.

  8. Create the required infrastructure:

    1. Run this command to view the planned changes:

      terraform plan
      

      If you described the configuration correctly, the terminal will display a list of the resources to update and their parameters. This is a verification step that does not apply changes to your resources.

    2. If everything looks correct, apply the changes:

      1. Run this command:

        terraform apply
        
      2. Confirm updating the resources.

      3. Wait for the operation to complete.

    All the required resources will be created in the specified folder. You can check resource availability and their settings in the management console.

  9. In the management console, select the folder where you created the L7 load balancer.

  10. Select Application Load Balancer.

  11. Wait until the L7 load balancer's status changes to Active.

  12. Specify the autoscaling settings in the L7 load balancer:

    1. In the management console, click the load balancer's name.
    2. Click and select Edit.
    3. Under Autoscaling settings, set the resource unit limit.
    4. Click Save.
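
The autoscaling limit from the last step can also be kept in the Terraform configuration rather than set through the management console. A minimal sketch, assuming the `yandex_alb_load_balancer` resource in the downloaded configuration file and the provider's `auto_scale_policy` block (check the argument names against your provider version):

```hcl
resource "yandex_alb_load_balancer" "<load_balancer_name>" {
  # ... other parameters from alb-int-nlb-http.tf or alb-int-nlb-https.tf ...

  auto_scale_policy {
    min_zone_size = 2  # minimum resource units per availability zone
    max_size      = 10 # upper limit across all zones
  }
}
```

Keeping the limit in code means a later `terraform apply` will not silently revert a value set by hand in the console.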

Test the L7 load balancer

  1. In the management console, navigate to the new L7 load balancer and select Health checks on the left. Make sure all of the L7 load balancer's health checks return HEALTHY.

  2. Run a test request to the service through the L7 load balancer, for example, using one of these methods:

    • Add this record to the hosts file on your workstation: <L7_load_balancer_public_IP_address> <service_domain_name>. Delete the record after the test.

    • Execute the request using cURL, depending on the protocol type:

      HTTP:

      curl http://<service_domain_name> \
          --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>

      HTTPS:

      curl https://<service_domain_name> \
          --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>
      
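
Both the hosts-file entry and curl's `--resolve` flag pin the domain name to the balancer's IP address without touching public DNS. A minimal self-contained Python sketch of the same idea; the domain name and the local HTTP server are hypothetical stand-ins for your service and the L7 load balancer:

```python
import socket
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for <service_domain_name>; any name works here,
# since resolution is pinned below rather than looked up in DNS.
DOMAIN = "service.example.test"

# Tiny local backend standing in for the L7 load balancer.
class EchoHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok from " + self.headers["Host"].encode())

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHostHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Pin DOMAIN to 127.0.0.1, bypassing DNS -- the same idea as
# curl's --resolve flag or a temporary hosts-file entry.
_real_getaddrinfo = socket.getaddrinfo
def pinned_getaddrinfo(host, *args, **kwargs):
    return _real_getaddrinfo("127.0.0.1" if host == DOMAIN else host,
                             *args, **kwargs)
socket.getaddrinfo = pinned_getaddrinfo

body = urllib.request.urlopen(f"http://{DOMAIN}:{port}/").read()
print(body.decode())
server.shutdown()
```

The request reaches the pinned address while still carrying the real domain name in the Host header, which is exactly what the L7 load balancer's virtual host matching needs during a pre-migration test.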

Migrate user traffic from the external network load balancer to the L7 load balancer

Select one of these migration options:

  • Keep the public IP address for your service.
  • Do not keep the public IP address for your service.

Keep the public IP address for your service

  1. If your external network load balancer is using a dynamic public IP address, convert it to a static one.

  2. Delete the external network load balancer. Select the option matching the method you originally used to deploy the NGINX Ingress Controller:

    Using a Helm chart
    1. In the values.yaml file you used to initially configure the NGINX Ingress Controller, under controller.service.external, set enabled: false. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: false
          ...
      
    2. Use this command to apply the configuration changes for the NGINX Ingress Controller:

      helm upgrade <NGINX_Ingress_Controller_name> -f values.yaml <chart_for_NGINX_Ingress_Controller> -n <namespace>
      

    Using a manifest

    Delete the Service resource for the external network load balancer using this command:

    kubectl delete service <name_of_Service_resource_for_external_network_load_balancer>
    
  3. Wait until the external network load balancer for the NGINX Ingress Controller and its respective Service object are deleted. You can use this command to view information about the services:

    kubectl get service
    

    This will make your service unavailable through the external network load balancer.

  4. Assign the public IP address previously used by the external network load balancer to the L7 load balancer's listener.

    1. Open the configuration file you used to create the L7 load balancer (alb-int-nlb-http.tf or alb-int-nlb-https.tf).

    2. In the load balancer description, update the address parameter under listener.endpoint.address.external_ipv4_address:

      resource "yandex_alb_load_balancer" "<load_balancer_name>" {
        ...
        listener {
          ...
          endpoint {
            address {
              external_ipv4_address {
                address = "<service_public_IP_address>"
              }
            }
            ports = [ <service_port> ]
          }
        }
      }
      

      Where address is the public IP address the network load balancer used previously.

    3. Apply the changes:

      1. In the terminal, go to the directory where you edited the configuration file.

      2. Make sure the configuration file is correct using this command:

        terraform validate
        

        If the configuration is correct, you will get this message:

        Success! The configuration is valid.
        
      3. Run this command:

        terraform plan
        

        You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.

      4. Apply the changes:

        terraform apply
        
      5. Type yes and press Enter to confirm the changes.
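
As an alternative to hardcoding the address literal in step 2 above, a reserved address can be referenced by ID. A minimal sketch, assuming the Yandex provider's `yandex_vpc_address` data source; the address ID is a placeholder, and the exact attribute path may differ by provider version:

```hcl
data "yandex_vpc_address" "service_address" {
  address_id = "<reserved_address_ID>"
}

resource "yandex_alb_load_balancer" "<load_balancer_name>" {
  # ...
  listener {
    # ...
    endpoint {
      address {
        external_ipv4_address {
          address = data.yandex_vpc_address.service_address.external_ipv4_address[0].address
        }
      }
      ports = [ <service_port> ]
    }
  }
}
```

Referencing the address by ID avoids drift if the literal IP is ever re-reserved under a different value.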

  5. After the IP address changes, your service will again be available through the L7 load balancer. Monitor the L7 load balancer's user traffic on the load balancer statistics charts.

  6. Delete the static public IP address you reserved when creating the L7 load balancer, which is now no longer in use.

    1. Open the configuration file you used to create the L7 load balancer (alb-int-nlb-http.tf or alb-int-nlb-https.tf).

    2. Delete the yandex_vpc_address resource description from the file:

      resource "yandex_vpc_address" "static-address" {
        description = "Static public IP address for the Application Load Balancer"
        name        = "alb-static-address"
        external_ipv4_address {
          zone_id                  = "ru-central1-a"
          ddos_protection_provider = "qrator"
        }
      }
      
    3. Apply the changes:

      1. In the terminal, go to the directory where you edited the configuration file.

      2. Make sure the configuration file is correct using this command:

        terraform validate
        

        If the configuration is correct, you will get this message:

        Success! The configuration is valid.
        
      3. Run this command:

        terraform plan
        

        You will see a detailed list of resources. No changes will be made at this step. If the configuration contains any errors, Terraform will show them.

      4. Apply the changes:

        terraform apply
        
      5. Type yes and press Enter to confirm the changes.

Do not keep the public IP address for your service

  1. To migrate user traffic from an external network load balancer to an L7 load balancer, in the DNS service of your domain's public zone, update the A record value for the service domain name to point to the L7 load balancer's public IP address. If the public domain zone was created in Yandex Cloud DNS, update the record using this guide.

    Note

    The migration may take a while: propagation of DNS record updates depends on the records' time-to-live (TTL) and the number of links in the DNS resolution chain.

  2. As the DNS record updates propagate, monitor the increase in requests to the L7 load balancer on the load balancer statistics charts.

  3. Monitor the decrease in traffic on the external network load balancer using the processed_bytes and processed_packets load balancer metrics. You can create a dashboard to visualize these metrics. If there is no load on the network load balancer for a long time, the migration to the L7 load balancer is complete.

  4. Optionally, once the migration is complete, delete the external network load balancer. Select the option matching the method you originally used to deploy the NGINX Ingress Controller:

    Using a Helm chart
    1. In the values.yaml file you used to initially configure the NGINX Ingress Controller, under controller.service.external, set enabled: false. Leave the other parameters in the file unchanged.

      controller:
        service:
          external:
            enabled: false
          ...
      
    2. Use this command to apply the configuration changes for the NGINX Ingress Controller:

      helm upgrade <NGINX_Ingress_Controller_name> -f values.yaml <chart_for_NGINX_Ingress_Controller> -n <namespace>
      

    Warning

    When you update the NGINX Ingress Controller configuration, your service will be temporarily unavailable.

    Using a manifest

    Delete the Service resource for the external network load balancer using this command:

    kubectl delete service <name_of_Service_resource_for_external_network_load_balancer>
    
  5. If you deleted the load balancer, wait until the external network load balancer for the NGINX Ingress Controller and its respective Service object are deleted. You can use this command to view information about the services:

    kubectl get service
    
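
The DNS propagation in steps 1-2 above can also be monitored programmatically. A minimal sketch that polls a resolver until the domain reports the L7 load balancer's address; the `resolve` callable is injected so the same helper works with `socket.gethostbyname` or a stub, and all names and addresses below are hypothetical:

```python
import socket
import time

def wait_for_dns(domain, expected_ip, resolve=socket.gethostbyname,
                 interval=30.0, timeout=3600.0):
    """Return True once resolve(domain) reports expected_ip, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            if resolve(domain) == expected_ip:
                return True
        except OSError:
            pass  # transient resolver error: keep polling
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Stub resolver that "switches over" to the new address on the third poll.
answers = iter(["192.0.2.10", "192.0.2.10", "198.51.100.20"])
switched = wait_for_dns("service.example.test", "198.51.100.20",
                        resolve=lambda _domain: next(answers), interval=0.0)
print(switched)  # True
```

Note that a local resolver answering with the new address only shows your own cache has expired; traffic charts on the two load balancers remain the authoritative signal that users worldwide have moved over.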
