Migrating services from an NLB with a Yandex Managed Service for Kubernetes cluster as a target to an L7 ALB using the management console

Written by
Yandex Cloud
Updated at November 12, 2025

In this article:

  • Service migration recommendations
  • Create your infrastructure
  • Create a Smart Web Security profile
  • Install an Application Load Balancer ingress controller and create resources in your Managed Service for Kubernetes cluster
  • Test the L7 load balancer
  • Migrate user traffic from the network load balancer to the L7 load balancer
    • Keep the public IP address for your service
    • Do not keep the public IP address for your service

To migrate a service from a network load balancer to an L7 load balancer:

  1. See the service migration recommendations.
  2. Create a migration infrastructure.
  3. Create a Smart Web Security profile.
  4. Install an Application Load Balancer ingress controller and create resources in your Managed Service for Kubernetes cluster. At this step, you will associate your Smart Web Security profile with the L7 load balancer.
  5. Test the L7 load balancer.
  6. Migrate user traffic from the network load balancer to the L7 load balancer.

Service migration recommendations

  1. Optionally, enable DDoS protection at L3 and L4 of the OSI model. It will complement the L7 protection provided by Yandex Smart Web Security after migration.

    To enable L3-L4 protection:

    1. Before the migration, reserve a DDoS-protected static public IP address and use it for the L7 load balancer's listener. If your network load balancer already uses a protected public IP address, you can keep it during the migration. Otherwise, you will have to switch to a protected IP address.

    2. Configure a trigger threshold for the protection mechanisms, consistent with the amount of legitimate traffic to the protected resource. To set up this threshold, contact support.

    3. Set the MTU value to 1450 for the targets downstream of the load balancer. For more information, see MTU and TCP MSS. A command sketch follows this list.

  2. Perform the migration during the hours when the user load is at its lowest. If you decide to keep your public IP address, your service will be unavailable during the migration while this IP address is moved from the network load balancer to the L7 load balancer. This usually takes a few minutes.

  3. With an L7 load balancer, requests reach the backends with a source IP address from the internal IP address ranges of the subnets specified when creating the L7 load balancer. The original IP address of the request source (the user) is passed in the X-Forwarded-For header. If you want to log users' public IP addresses on the web server, reconfigure it accordingly.

  4. Before the migration, define the minimum number of resource units for the autoscaling settings in the L7 load balancer:

    Select the number of resource units based on an analysis of your service load, expressed as:

    • Number of requests per second (RPS).
    • Number of concurrent active connections.
    • Number of new connections per second.
    • Traffic processed per second.
  5. The features of Application Load Balancer may differ from those of the load balancer currently deployed in your Managed Service for Kubernetes cluster. See the Application Load Balancer ingress controller description and operating principles.

  6. Set up backend health checks in Application Load Balancer. With health checks, the load balancer detects unavailable backends in a timely manner and diverts traffic to healthy ones; once a backend becomes available again, for example after an application update completes, traffic is again distributed across all backends.

    For more information, see Tips for configuring Yandex Application Load Balancer health checks and Annotations (metadata.annotations).
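
For the MTU recommendation above (step 3 of the L3-L4 protection setup), a minimal shell sketch for a target node is shown below. The interface name eth0 is an assumption and may differ on your nodes, and the change is not persistent across reboots, so also apply it through your node configuration mechanism.

  # Check the current MTU of the node's network interface (interface name is an assumption)
  ip link show eth0

  # Temporarily set the MTU to 1450; this setting does not survive a reboot
  sudo ip link set dev eth0 mtu 1450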

Create your infrastructure

  1. Create subnets in three availability zones for the L7 load balancer.

  2. Create security groups that allow the L7 load balancer to receive incoming traffic and forward it to the targets, and allow the targets to receive inbound traffic from the load balancer.

  3. When using HTTPS, add the TLS certificate of your service to Yandex Certificate Manager.

  4. Optionally, reserve an L3-L4 DDoS-protected static public IP address for the L7 load balancer.

  5. The Managed Service for Kubernetes services used as backends must be of the NodePort type. If your service is of a different type, change it to NodePort. For more information about this type, see this Kubernetes article. A minimal manifest example is sketched after this list.
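
A minimal sketch of a Service of the NodePort type, assuming a hypothetical application named my-app that listens on container port 8080 (all names and ports are placeholders):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app              # placeholder name, referenced from the Ingress backend
  spec:
    type: NodePort            # required for services used as Application Load Balancer backends
    selector:
      app: my-app             # must match the labels of your application pods
    ports:
      - port: 80              # port the Service exposes inside the cluster
        targetPort: 8080      # container port of the application pods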

Create a Smart Web Security profile

Create a Smart Web Security profile by selecting From a preset template.

Use these settings when creating the profile:

  • In the Action for the default base rule field, select Allow.
  • For the Smart Protection rule, enable Only logging (dry run).

These settings enable logging of traffic information, but no actions will be applied to the traffic. This reduces the risk of cutting off legitimate users due to profile configuration issues. Later on, you can disable Only logging (dry run) and configure deny rules for your use case in the security profile.

Install an Application Load Balancer ingress controller and create resources in your Managed Service for Kubernetes cluster

Tip

We recommend using the new Yandex Cloud Gwin controller instead of the Application Load Balancer ingress controller.

  1. Install the Yandex Application Load Balancer ingress controller.

  2. Create an IngressClass resource for the L7 load balancer's Ingress controller:

    1. Create a YAML file and describe the IngressClass resource in it.

      IngressClass resource example:

      apiVersion: networking.k8s.io/v1
      kind: IngressClass
      metadata:
        labels:
          app.kubernetes.io/component: controller
        name: ingress-alb
      spec:
        controller: ingress.alb.yc.io/yc-alb-ingress-controller
      
    2. Use the following command to create the IngressClass resource:

      kubectl apply -f <IngressClass_resource_file>
      
  3. Create an Ingress resource:

    1. Read the descriptions of the Ingress resource fields and annotations and see the example.

    2. Create a YAML file and describe the Ingress resource in it:

      1. Complete the annotations section for the L7 load balancer settings:

        • ingress.alb.yc.io/subnets: IDs of the subnets in the three availability zones for the L7 load balancer nodes. Specify the IDs separated by commas with no spaces.

        • ingress.alb.yc.io/security-groups: ID of one or more security groups for the L7 load balancer. For multiple groups, specify their IDs separated by commas with no spaces.

        • ingress.alb.yc.io/external-ipv4-address: Previously reserved static public IP address.

        • ingress.alb.yc.io/group-name: Name of the Ingress resource group. Ingress resources are grouped together, each group served by a separate Application Load Balancer instance with a dedicated public IP address.

        • ingress.alb.yc.io/security-profile-id: ID of the previously created Smart Web Security security profile.

          Warning

          The security profile will be linked to the virtual host of the L7 load balancer. Smart Web Security will not protect your service unless a security profile is linked to the L7 load balancer's virtual host.

        • ingress.alb.yc.io/autoscale-min-zone-size: Minimum number of resource units per availability zone, based on expected load.

      2. For the ingressClassName field, enter the name of the IngressClass resource you created earlier.

      3. When using HTTPS, complete the tls section:

        • hosts: Domain name of your service that the TLS certificate was issued for.
        • secretName: TLS certificate of your service in Yandex Certificate Manager, in yc-certmgr-cert-id-<certificate_ID> format.
      4. Complete the rules section according to how you want incoming traffic distributed among backends, based on the domain name (host field) and the requested resource (http.paths field).

        • host: Your service domain name.

        • pathType: Type of reference to the requested resource:

          • Exact: Request URI path must match the path field value.
          • Prefix: Request URI path must start with the path field value.
        • path: Incoming request URI path (if Exact) or its prefix (if Prefix).

        • backend: Reference to a backend or group of backends to process the requests with the specified domain name and URI path. Specify either a service backend (service) or a backend group (resource) but not both.

          • service: Managed Service for Kubernetes backend service for processing requests:

            • name: Managed Service for Kubernetes service name. The Service resource this field refers to must be described in line with this configuration.
            • port: Service port Ingress is going to address. For the service port, specify either a number (number) or a name (name) but not both.

            Warning

            The Managed Service for Kubernetes services used as backends must be of the NodePort type.

          • resource: Reference to an HttpBackendGroup backend group that will process the requests. A backend group can route traffic either to Managed Service for Kubernetes services or to Yandex Object Storage buckets, and it gives you access to advanced Application Load Balancer functionality. You can also assign relative weights to the backends to distribute traffic among them proportionally; a hedged manifest sketch is provided at the end of this section.

            • kind: HttpBackendGroup
            • name: Backend group name. The name must match the value specified in the metadata.name field of the HttpBackendGroup resource. The HttpBackendGroup resource this field refers to must be described in line with this configuration.
            • apiGroup: alb.yc.io

      Ingress resource example:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: <resource_name>
        annotations:
          ingress.alb.yc.io/subnets: <IDs_of_subnets_in_three_availability_zones>
          ingress.alb.yc.io/security-groups: <L7_load_balancer_security_group_ID>
          ingress.alb.yc.io/external-ipv4-address: <static_public_IP_address>
          ingress.alb.yc.io/group-name: <resource_group_name>
          ingress.alb.yc.io/security-profile-id: <Smart_Web_Security_security_profile_ID>
          ingress.alb.yc.io/autoscale-min-zone-size: <minimum_number_of_L7_load_balancer_resource_units_per_zone>
      spec:
        ingressClassName: <IngressClass_resource_name>
        tls:
          - hosts:
              - <service_domain_name>
            secretName: yc-certmgr-cert-id-<certificate_ID>
        rules:
          - host: <service_domain_name>
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: <Kubernetes_service_name>
                    port:
                      number: <443_or_another_port_number>
      
    3. Use the following command to create the Ingress resource:

      kubectl apply -f <Ingress_resource_file>
      
  4. An L7 load balancer will be deployed based on the Ingress resource configuration. Wait until its creation is complete and the Ingress resource has a public IP address assigned; you will need this IP address to run test requests. You can view the resource information using this command:

    kubectl get ingress <Ingress_resource_name> -w
    
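
As mentioned in the backend group description above, an Ingress rule can reference an HttpBackendGroup resource instead of a single service. The sketch below is a hedged example: it assumes two hypothetical NodePort services, my-app-v1 and my-app-v2, with a 70/30 traffic split; verify the field layout against the Application Load Balancer ingress controller documentation for your controller version.

  apiVersion: alb.yc.io/v1alpha1      # API group matches the apiGroup value used in the Ingress
  kind: HttpBackendGroup
  metadata:
    name: my-backend-group            # must match backend.resource.name in the Ingress rules
  spec:
    backends:
      - name: my-app-v1               # hypothetical NodePort service
        weight: 70                    # relative weight: about 70% of traffic
        service:
          name: my-app-v1
          port:
            number: 80
      - name: my-app-v2               # hypothetical NodePort service
        weight: 30                    # relative weight: about 30% of traffic
        service:
          name: my-app-v2
          port:
            number: 80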

Test the L7 load balancer

Run a test request to the service through the L7 load balancer, for example, using one of these methods:

  • Add this record to the hosts file on your workstation: <L7_load_balancer_public_IP_address> <service_domain_name>. Delete the record after the test.

  • Execute the request using cURL, depending on the protocol type:

    For HTTP:

    curl http://<service_domain_name> \
        --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>

    For HTTPS:

    curl https://<service_domain_name> \
        --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>

Migrate user traffic from the network load balancer to the L7 load balancer

Select one of these migration options:

  • Keep the public IP address for your service.
  • Do not keep the public IP address for your service.

Keep the public IP address for your service

  1. If your external network load balancer uses a dynamic public IP address, convert it to a static one.

  2. Delete all listeners in the network load balancer to release the static public IP address. This will make your service unavailable through the network load balancer.

  3. In the L7 load balancer, assign to the listener the public IP address previously used by the network load balancer:

    1. Open the YAML file that describes the Ingress resource.

    2. Under annotations, in the ingress.alb.yc.io/external-ipv4-address field, specify the public IP address previously assigned to the network load balancer. Alternatively, you can update the annotation with kubectl, as sketched after this list.

    3. Apply the changes using this command:

      kubectl apply -f <Ingress_resource_file>
      
  4. Wait for the Ingress resource to finish switching its public IP address. You can view resource information using this command:

    kubectl get ingress <Ingress_resource_name> -w
    

    After the IP address changes, your service will again be available through the L7 load balancer.

  5. Navigate to the L7 load balancer:

    1. In the management console, navigate to the folder with the Managed Service for Kubernetes cluster.
    2. Select Managed Service for Kubernetes.
    3. Select the cluster.
    4. Select Network on the left and then the Ingress tab on the right. For your Ingress resource, follow the L7 load balancer link in the Load balancer column.
    5. Monitor the L7 load balancer's user traffic on the load balancer statistics charts.
  6. Delete the released static public IP address previously reserved for the L7 load balancer.

  7. Optionally, delete the network load balancer after migrating user traffic to the L7 load balancer.
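
As an alternative to editing the manifest in step 3, you can update the annotation in place with kubectl. This is a hedged sketch: the annotation key is the one used throughout this guide, while the resource name and IP address are placeholders. Keep your Ingress manifest in sync afterwards so that a later kubectl apply does not revert the change.

  kubectl annotate ingress <Ingress_resource_name> \
      ingress.alb.yc.io/external-ipv4-address=<network_load_balancer_public_IP_address> \
      --overwrite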

Do not keep the public IP address for your service

  1. To migrate user traffic from the network load balancer to the L7 load balancer, in the DNS service hosting your domain's public zone, change the A record for the service domain name to point to the L7 load balancer's public IP address. If the public domain zone was created in Yandex Cloud DNS, update the record using this guide; a hedged CLI sketch also appears after this list.

    Note

    The migration may take a while because the propagation of DNS record updates depends on the record's time-to-live (TTL) and the number of links in the DNS request chain.

  2. As the DNS record updates propagate, monitor the increase in requests to the L7 load balancer:

    1. In the management console, navigate to the folder with the Managed Service for Kubernetes cluster.
    2. Select Managed Service for Kubernetes.
    3. Select the cluster in question.
    4. Select Network on the left and then the Ingress tab on the right. For your Ingress resource, follow the L7 load balancer link in the Load balancer column.
    5. Monitor the L7 load balancer's user traffic on the load balancer statistics charts.
  3. Monitor the decrease in traffic on the network load balancer using its processed_bytes and processed_packets metrics. You can create a dashboard to visualize these metrics. If the network load balancer receives no traffic for an extended period, user traffic has been fully migrated to the L7 load balancer.

  4. Optionally, delete the network load balancer after migrating user traffic to the L7 load balancer.
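
If your public zone is hosted in Cloud DNS, the A record update from step 1 can also be performed with the yc CLI. This is a hedged sketch: the zone name, TTL, and domain name are placeholders, and the exact syntax should be checked against yc dns zone --help.

  yc dns zone replace-records --name <DNS_zone_name> \
      --record "<service_domain_name>. 300 A <public_IP_address_of_L7_load_balancer>"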
