Migrating services from an external NLB to an L7 ALB, with an internal NLB as a target, using the management console
- Service migration recommendations
- Create your infrastructure
- Create a Smart Web Security profile
- Create an internal network load balancer for the NGINX Ingress Controller
- Create an L7 load balancer
- Test the L7 load balancer
- Migrate user traffic from the external network load balancer to the L7 load balancer
To migrate a service from an external network load balancer to an L7 load balancer:
- See the service migration recommendations.
- Create a migration infrastructure.
- Create a Smart Web Security profile.
- Create an internal network load balancer for the NGINX Ingress Controller.
- Create an L7 load balancer. At this step, you will associate the Smart Web Security profile with a virtual host of the L7 load balancer.
- Test the L7 load balancer.
- Migrate user traffic from the external network load balancer to the L7 load balancer.
Service migration recommendations
- Optionally, enable L3-L4 DDoS protection (levels of the OSI model). It will enhance the L7 protection provided by Yandex Smart Web Security after migration. To enable L3-L4 protection:
  - Before the migration, reserve a public static IP address with DDoS protection and use this address for the L7 load balancer's listener. If you already have a protected public IP address for the load balancer, you can keep this address during migration. Otherwise, you will have to change the IP address to a protected one.
  - Configure a trigger threshold for the protection mechanisms consistent with the amount of legitimate traffic to the protected resource. To set up this threshold, contact support.
  - Set the MTU value to `1450` for the targets downstream of the load balancer. For more information, see MTU and TCP MSS.
- Perform the migration during the hours when the user load is at its lowest. If you decide to keep your public IP address, your service will be unavailable during the migration while this IP address is moved from the network load balancer to the L7 load balancer. This usually takes a few minutes.
- When using an L7 load balancer, requests to backends come with a source IP address from the range of internal IP addresses of the subnets specified when creating the L7 load balancer. The original IP address of the request source (user) is provided in the `X-Forwarded-For` header. If you want to log the public IP addresses of users on the web server, reconfigure it (see the sketch after this list).
- Before the migration, define the minimum number of resource units for the autoscaling settings in the L7 load balancer. Select the number of resource units based on the analysis of your service load, expressed in:
  - Number of requests per second (RPS).
  - Number of concurrent active connections.
  - Number of new connections per second.
  - Traffic processed per second.
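If your backends sit behind the NGINX Ingress Controller, one way to log the users' public IP addresses is to make the controller trust the `X-Forwarded-For` header added by the L7 load balancer. Below is a minimal sketch, assuming the community `ingress-nginx` Helm chart, where `controller.config` keys are rendered into the controller's ConfigMap; the release name, chart, and namespace are placeholders from this tutorial:

```bash
# Sketch: make the NGINX Ingress Controller use X-Forwarded-For for the client IP.
# Assumes the community ingress-nginx Helm chart; names and namespace are placeholders.
helm upgrade <NGINX_Ingress_Controller_name> <chart_for_NGINX_Ingress_Controller> \
  -n <namespace> \
  --reuse-values \
  --set-string controller.config.use-forwarded-headers="true" \
  --set-string controller.config.compute-full-forwarded-for="true"
```

If you deployed the controller with a manifest instead of Helm, set the same `use-forwarded-headers` and `compute-full-forwarded-for` keys in the controller's ConfigMap.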
Create your infrastructure
- Create subnets in three availability zones for the L7 load balancer.
- Create security groups that allow the L7 load balancer to receive inbound traffic and send it to the targets, and that allow the targets to receive inbound traffic from the load balancer.
- When using HTTPS, add the TLS certificate of your service to Yandex Certificate Manager.
- Optionally, reserve an L3-L4 DDoS-protected static public IP address for the L7 load balancer (see the CLI sketch after this list).
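For the last step, a minimal CLI sketch for reserving a protected address is shown below. The address name and availability zone are placeholders, and the `ddos-protection-provider` option of the address specification is an assumption, so verify the exact syntax with `yc vpc address create --help`:

```bash
# Sketch: reserve a static public IP address with L3-L4 DDoS protection.
# The ddos-protection-provider option is an assumption; check `yc vpc address create --help`.
yc vpc address create \
  --name l7-alb-address \
  --external-ipv4 zone=ru-central1-a,ddos-protection-provider=qrator
```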
Create a Smart Web Security profile
Create a Smart Web Security profile by selecting From a preset template.
Use these settings when creating the profile:
- In the Action for the default base rule field, select `Allow`.
- For the Smart Protection rule, enable Only logging (dry run).

These settings enable logging of traffic information without applying any actions to the traffic. This reduces the risk of dropping user connections due to profile configuration issues. Later on, you can disable Only logging (dry run) and configure deny rules for your use case in the security profile.
Create an internal network load balancer for the NGINX Ingress Controller
- Create an internal network load balancer for the NGINX Ingress Controller. Select the option matching the method you initially used to deploy your NGINX Ingress Controller:

  Using a Helm chart

  - Add the configuration parameters for the internal network load balancer to the `values.yaml` file you used to initially configure the NGINX Ingress Controller. Leave the other parameters in the file unchanged.

    ```yaml
    controller:
      service:
        external:
          enabled: true
        internal:
          enabled: true
          annotations:
            yandex.cloud/load-balancer-type: internal
            yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
          loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
          externalTrafficPolicy: Local
    ```

  - Use this command to apply the NGINX Ingress Controller configuration changes:

    ```bash
    helm upgrade <NGINX_Ingress_controller_name> -f values.yaml <chart_for_NGINX_Ingress_controller> -n <namespace>
    ```

  Using a manifest

  - Create a YAML file and describe the `Service` resource in it:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: <resource_name>
      namespace: <namespace>
      annotations:
        yandex.cloud/load-balancer-type: internal
        yandex.cloud/subnet-id: <subnet_ID_for_internal_network_load_balancer_IP_address>
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      loadBalancerIP: <IP_address_of_internal_network_load_balancer_listener>
      ports:
        - port: <80_or_another_port_number_for_HTTP>
          targetPort: <80_or_another_port_number_for_NGINX_Ingress_controller_pod_for_HTTP>
          protocol: TCP
          name: http
        - port: <443_or_another_port_number_for_HTTPS>
          targetPort: <443_or_another_port_number_for_NGINX_Ingress_controller_pod_for_HTTPS>
          protocol: TCP
          name: https
      selector:
        <NGINX_Ingress_controller_pod_selectors>
    ```

  - Apply the changes using this command:

    ```bash
    kubectl apply -f <Service_resource_file>
    ```

- Wait until the internal network load balancer is created and a matching `Service` object appears. You can use this command to view information about the services:

  ```bash
  kubectl get service
  ```
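To additionally check that the internal network load balancer's listener received the IP address you specified, you can read it from the `Service` status. A minimal sketch, assuming the resource name and namespace used above:

```bash
# Print the IP address assigned to the internal network load balancer's listener.
kubectl get service <resource_name> -n <namespace> \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

The output should match the `loadBalancerIP` value you specified; an empty result means the load balancer is still being created.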
Create an L7 load balancer
- Create a target group for the L7 load balancer. Under Targets, select Outside VPC and specify the internal IP address of the internal network load balancer. Click Add target resource and then Create.

- Create a backend group with the following settings:

  - Select `HTTP` as the backend group type.
  - Under Backends, click Add and set up the backend:
    - Type: `Target group`.
    - Target groups: the target group you created earlier.
    - Port: the TCP port configured for your internal network load balancer's listener. Usually, this is port `80` for HTTP and port `443` for HTTPS.
    - Under Protocol settings, select `HTTP` or `HTTPS` depending on the protocol used by your service.
    - Under HTTP health check, delete the health check. Do not add a new one, as the network load balancer used as the target is a fault-tolerant service.

- Create an HTTP router. Under Virtual hosts, click Add virtual host and configure the virtual host:

  - Authority: your service domain name.

  - Security profile: the Smart Web Security profile you created earlier.

    Warning

    For Smart Web Security to protect your service, you must link a security profile to the L7 load balancer's virtual host.

  - Click Add route and configure the route:
    - Path: `Starts with` `/`.
    - Action: `Routing`.
    - Backend group: the backend group you created earlier.

  You can add multiple domains by clicking Add virtual host.

- Create an L7 load balancer by selecting Manual:

  - Specify the security group you created earlier.

    Warning

    The node groups in the Managed Service for Kubernetes cluster must have inbound security group rules allowing traffic from the L7 load balancer on ports 30000-32767, coming either from its subnets or from its security group (see the CLI sketch after this list).

  - Under Allocation, select subnets in three availability zones for the load balancer nodes. Enable traffic in these subnets.

  - Under Autoscaling settings, specify the minimum number of resource units per availability zone based on the expected load.

  - Under Listeners, click Add listener and set up the listener:

    - Under Public IP address, specify:
      - Port: the TCP port configured for your internal network load balancer's listener. Usually, this is port `80` for HTTP and port `443` for HTTPS.
      - Type: `List`. Select a public IP address from the list. If you plan to enable L3-L4 DDoS protection, select a static public IP address with DDoS protection enabled.
    - Under Receiving and processing traffic, specify:
      - Listener type: `HTTP`.
      - Protocol: select `HTTP` or `HTTPS` depending on the protocol your service uses.
      - If you select `HTTPS`, specify the TLS certificate you added to Certificate Manager earlier in the Certificates field.
      - HTTP router: select the HTTP router you created earlier.
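Regarding the warning about the Kubernetes node groups above: if you manage security groups with the CLI, a rule for the NodePort range can be added roughly as shown below. This is a sketch only: the security group name and CIDRs are placeholders, and the `--add-rule` specification keys are assumptions, so verify them with `yc vpc security-group update-rules --help`.

```bash
# Sketch: allow traffic from the L7 load balancer's subnets to the NodePort range (30000-32767)
# on the security group attached to the Kubernetes node groups.
# The rule specification keys are assumptions; check `yc vpc security-group update-rules --help`.
yc vpc security-group update-rules <node_group_security_group_name> \
  --add-rule "direction=ingress,protocol=tcp,from-port=30000,to-port=32767,v4-cidrs=[<L7_load_balancer_subnet_CIDRs>]"
```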
Test the L7 load balancer
- Wait until the L7 load balancer's status changes to `Active`.

- Navigate to the new L7 load balancer and select Health checks on the left. Make sure all health checks return `HEALTHY`.

- Run a test request to the service through the L7 load balancer, for example, using one of these methods:

  - Add this record to the `hosts` file on your workstation: `<L7_load_balancer_public_IP_address> <service_domain_name>`. Delete the record after the test.

  - Run the request with cURL, depending on the protocol type:

    ```bash
    curl http://<service_domain_name> \
      --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>
    ```

    ```bash
    curl https://<service_domain_name> \
      --resolve <service_domain_name>:<service_port>:<public_IP_address_of_L7_load_balancer>
    ```
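For an HTTPS listener, you can additionally check which TLS certificate the L7 load balancer serves for your domain before switching any traffic. A minimal sketch using `openssl`, with the same placeholders as in the examples above:

```bash
# Show the subject and validity dates of the certificate the L7 load balancer serves
# for the given domain name (SNI).
openssl s_client -connect <public_IP_address_of_L7_load_balancer>:443 \
  -servername <service_domain_name> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```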
Migrate user traffic from the external network load balancer to the L7 load balancer
Select one of these migration options:
Keep the public IP address for your service
- If your external network load balancer is using a dynamic public IP address, convert it to a static one.

- Delete the external network load balancer. Select the option matching the method you initially used to deploy your NGINX Ingress Controller:

  Using a Helm chart

  - In the `values.yaml` file you used to initially configure the NGINX Ingress Controller, set `enabled: false` under `controller.service.external`. Leave the other parameters in the file unchanged.

    ```yaml
    controller:
      service:
        external:
          enabled: false
      ...
    ```

  - Use this command to apply the configuration changes for the NGINX Ingress Controller:

    ```bash
    helm upgrade <NGINX_Ingress_Controller_name> -f values.yaml <chart_for_NGINX_Ingress_Controller> -n <namespace>
    ```

  Using a manifest

  - Delete the `Service` resource for the external network load balancer using this command:

    ```bash
    kubectl delete service <name_of_Service_resource_for_external_network_load_balancer>
    ```

- Wait until the external network load balancer for the NGINX Ingress Controller and its respective `Service` object are deleted. You can use this command to view information about the services:

  ```bash
  kubectl get service
  ```

  After that, your service will no longer be available through the external network load balancer.

- In the L7 load balancer, assign the public IP address previously used by the external network load balancer to the listener:

  CLI

  If you do not have the Yandex Cloud CLI installed yet, install and initialize it.

  By default, the CLI uses the folder specified when creating the profile. To change the default folder, use the `yc config set folder-id <folder_ID>` command. You can also set a different folder for any specific command using the `--folder-name` or `--folder-id` parameter.

  To change the public IP address, run this command:

  ```bash
  yc application-load-balancer load-balancer update-listener <load_balancer_name> \
    --listener-name <listener_name> \
    --external-ipv4-endpoint address=<service_public_IP_address>,port=<service_port>
  ```

  Where `address` is the public IP address previously used by the external network load balancer.

- After the IP address is changed, your service will again be available through the L7 load balancer. Monitor the user traffic to the L7 load balancer on the load balancer statistics charts.

- Delete the now unused static public IP address you selected when creating the L7 load balancer (see the CLI sketch after this list).
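If you prefer the CLI for this cleanup step, a minimal sketch is shown below; the address name is a placeholder:

```bash
# List reserved addresses to find the one that is no longer attached, then delete it.
yc vpc address list
yc vpc address delete <static_public_IP_address_name_or_ID>
```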
Do not keep the public IP address for your service
- To migrate user traffic from the external network load balancer to the L7 load balancer, in the DNS service hosting your domain's public zone, update the `A` record value for the service domain name to point to the L7 load balancer's public IP address. If the public domain zone was created in Yandex Cloud DNS, update the record using this guide (a CLI sketch for Cloud DNS is also provided at the end of this section).

  Note

  The migration may take a while because the propagation of DNS record updates depends on the record's time to live (TTL) and the number of links in the DNS request chain.

- As the DNS record updates propagate, monitor the increase in requests to the L7 load balancer on the load balancer statistics charts.

- Monitor the decrease in traffic on the external network load balancer using the `processed_bytes` and `processed_packets` load balancer metrics. You can create a dashboard to visualize these metrics. If there has been no load on the network load balancer for a long time, the migration to the L7 load balancer is complete.

- Optionally, once the migration is complete, delete the external network load balancer. Select the option matching the method you initially used to deploy your NGINX Ingress Controller:

  Using a Helm chart

  - In the `values.yaml` file you used to initially configure the NGINX Ingress Controller, set `enabled: false` under `controller.service.external`. Leave the other parameters in the file unchanged.

    ```yaml
    controller:
      service:
        external:
          enabled: false
      ...
    ```

  - Use this command to apply the configuration changes for the NGINX Ingress Controller:

    ```bash
    helm upgrade <NGINX_Ingress_Controller_name> -f values.yaml <chart_for_NGINX_Ingress_Controller> -n <namespace>
    ```

    Warning

    When you update the NGINX Ingress Controller configuration, your service will be temporarily unavailable.

  Using a manifest

  - Delete the `Service` resource for the external network load balancer using this command:

    ```bash
    kubectl delete service <name_of_Service_resource_for_external_network_load_balancer>
    ```

- Optionally, wait until the external network load balancer for the NGINX Ingress Controller and its respective `Service` object are deleted. You can use this command to view information about the services:

  ```bash
  kubectl get service
  ```
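For the first step of this option, if your public zone is hosted in Cloud DNS, the `A` record can also be updated from the CLI. This is a sketch only: the zone name, TTL, and record format `"<name> <TTL> <type> <value>"` are assumptions, so verify the exact syntax with `yc dns zone replace-records --help`.

```bash
# Sketch: point the service domain name at the L7 load balancer's public IP address.
# The record specification format is an assumption; check `yc dns zone replace-records --help`.
yc dns zone replace-records --name <DNS_zone_name> \
  --record "<service_domain_name>. 300 A <L7_load_balancer_public_IP_address>"
```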
Serviceobject are deleted. You can use this command to view information about the services:kubectl get service