Implementation specifics

Written by Yandex Cloud
Updated at June 10, 2025
  • Listener in all availability zones
    • Use cases
  • Traffic flows
  • Processing UDP traffic
    • Use cases
  • Ensuring locality in traffic processing by the internal load balancer
    • Use cases
  • Achieving routing convergence in the availability zone
  • Network load balancer and Cloud Interconnect
    • Use cases
  • Routing traffic via the internal balancer
    • Use cases
    • Use cases

Listener in all availability zones

The IP address of the load balancer’s traffic listener is externally announced as a /32 prefix from all Yandex Cloud availability zones. If one of the availability zones goes down, the network equipment redirects incoming traffic to the listener's IP address in the running availability zones.

Use cases

  • Updating an instance group under load
  • Architecture and protection of a basic internet service
  • Deploying Microsoft Exchange
  • Deploying an Always On availability group with an internal network load balancer

Traffic flows

The workflow of the external load balancer is as follows:

  1. The listener receives the traffic from the Yandex Cloud border router for the IP address and port it is configured for.
  2. The listener calculates a 5-tuple hash from the parameters of the received IP packet (see the sketch after this list). The data provided to the hash function includes:
    • Transport protocol (TCP or UDP)
    • Sender's public IP address
    • Sender's port (TCP or UDP)
    • Load balancer listener's public IP address
    • Load balancer listener's port (TCP or UDP)
  3. The listener routes traffic to one of the operational resources in the target group based on the result of the hash function calculation.
  4. The resource in the target group processes the received traffic and sends the result back to the network load balancer.
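
To make the hash-based distribution in step 2 more concrete, here is a minimal Python sketch of how a 5-tuple hash can pin every packet of a flow to the same healthy target. The hash function (SHA-256) and the string encoding of the tuple are assumptions chosen for illustration; they are not the load balancer's actual implementation.

    # Illustrative sketch only: a 5-tuple hash pins every packet of a flow to
    # the same healthy target. SHA-256 and the string encoding of the tuple are
    # assumptions for illustration, not the balancer's real hash function.
    import hashlib

    def pick_target(proto, src_ip, src_port, dst_ip, dst_port, healthy_targets):
        """Return the same target for every packet of a given 5-tuple."""
        if not healthy_targets:
            raise RuntimeError("no healthy targets in the target group")
        key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        return healthy_targets[digest % len(healthy_targets)]

    # All packets of one TCP connection map to the same target resource.
    targets = ["10.0.1.1:8443", "10.0.2.1:8443"]
    print(pick_target("TCP", "1.2.3.4", 30325, "158.160.0.x", 443, targets))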

Below is a diagram showing an example of an external client application using a web service in Yandex Cloud.

Traffic path from a client application to the web service:

  1. Traffic from the 1.2.3.4:30325 client application (any socket/port number can be used) is sent as a sequence of IP packets to the load balancer, and the 158.160.0.x:443 traffic listener receives it.
  2. The listener calculates the hash function with 5-tuple addressing based on the parameters of the received IP packet and routes the traffic to vm-a1 in the target group. At the same time, the virtual network retains the information that the traffic bound for the 158.160.0.x:443 listener was sent to the 10.0.1.1:8443 resource.
  3. vm-a1 processes the received request and sends the response back to the client application using its IP address, 10.0.1.1.
  4. The virtual network is aware (see step 2) that the traffic from the client application was previously received by the load balancer's listener and sent for processing to vm-a1. This information allows the virtual network to change the sender's address and port (perform source NAT) on all reply packets: their source, 10.0.1.1:8443, is rewritten to 158.160.0.x:443 (see the connection-tracking sketch after the note below).
  5. Traffic goes to the destination address according to routing policies and reaches the client application.

Note

The dashed line in the diagram above shows the backup path to the vm-b1 VM, which the listener would have chosen if the availability check for the vm-a1 VM had failed.
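
The following minimal sketch illustrates steps 2 and 4 of the example above: the virtual network records which target received the client's flow and later rewrites the source of the reply so the client sees the listener's address. The dictionary-based state table is an assumption made for illustration, not the virtual network's actual mechanism.

    # Sketch of steps 2 and 4: remember the listener-to-target mapping, then
    # source NAT the reply so it appears to come from the listener's address.
    # The state table is an assumption for illustration only.
    conntrack = {}  # (client, listener) -> target

    def forward_to_target(client, listener, target):
        # Step 2: the virtual network retains which target got this flow.
        conntrack[(client, listener)] = target
        print(f"{client} -> {listener} forwarded to {target}")

    def reply_from_target(target, client):
        # Step 4: look up the original mapping and rewrite the reply's source.
        for (c, listener), t in conntrack.items():
            if c == client and t == target:
                print(f"{target} -> {client} rewritten as {listener} -> {client}")
                return
        print(f"no state for {target} -> {client}: the reply would be dropped")

    forward_to_target("1.2.3.4:30325", "158.160.0.x:443", "10.0.1.1:8443")
    reply_from_target("10.0.1.1:8443", "1.2.3.4:30325")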

Processing UDP traffic

By default, UDP traffic processing is disabled on the network load balancer because it cannot guarantee that UDP packets with the same 5-tuple will be consistently delivered to the same resource in the target group. However, you can still use the network load balancer for traffic that does not require maintaining connection state, such as DNS.

To enable UDP traffic, contact our support.

Use cases

  • Integrating Cloud DNS and a corporate DNS service
  • Architecture and protection of a basic internet service

Ensuring locality in traffic processing by the internal load balancer

If a client located inside the VPC network sends traffic to the internal network load balancer, the listener distributes this traffic only to the target resources located in the same availability zone as the client.

If there are no target resources running in the availability zone where the client is located, traffic will be evenly distributed among target resources in other zones.
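
A minimal sketch of this locality rule, assuming a simple in-memory list of targets: healthy targets in the client's zone are preferred, and the other zones are used only when that zone has none. The Target structure and the zone names are illustrative assumptions.

    # Sketch of the locality rule for the internal load balancer: prefer healthy
    # targets in the client's availability zone, fall back to the other zones
    # only if that zone has no healthy targets. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Target:
        address: str
        zone: str
        healthy: bool

    def eligible_targets(client_zone, targets):
        healthy = [t for t in targets if t.healthy]
        local = [t for t in healthy if t.zone == client_zone]
        return local if local else healthy  # even distribution across other zones

    targets = [
        Target("10.0.1.1:8443", "ru-central1-a", True),
        Target("10.0.2.1:8443", "ru-central1-b", True),
    ]
    print([t.address for t in eligible_targets("ru-central1-a", targets)])  # local only
    print([t.address for t in eligible_targets("ru-central1-d", targets)])  # all zones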

Use cases

  • Implementing fault-tolerant scenarios for NAT VMs
  • Connecting to Object Storage from Virtual Private Cloud
  • Connecting to Container Registry from Virtual Private Cloud
  • Deploying an Always On availability group with an internal network load balancer

Achieving routing convergence in the availability zone

If the last target resource in the availability zone is disabled (or its health check fails), this zone is excluded from traffic routing via the load balancer. The process of routing protocol convergence can take up to two minutes. During this convergence interval, the traffic bound for this target resource will be dropped.

If the first target resource in the availability zone becomes available after a successful health check, the actual return of the resource to traffic processing will also occur after a convergence interval required to announce the resource prefix from this availability zone.
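
The rule can be summarized as follows: an availability zone keeps receiving traffic only while it has at least one healthy target, and both the withdrawal and the re-announcement of the zone take effect only after the convergence interval. The sketch below encodes this reasoning; the timing model is an assumption for illustration, not the balancer's internal logic.

    # Sketch of the convergence behavior: a zone is announced while it has at
    # least one healthy target; crossing the zero-target boundary in either
    # direction takes effect only after a convergence interval of up to ~2 min.
    CONVERGENCE_SECONDS = 120  # upper bound stated above

    def zone_announced(healthy_targets_in_zone: int) -> bool:
        return healthy_targets_in_zone > 0

    def worst_case_delay(previous: int, current: int) -> int:
        """Seconds before a change in the zone's healthy-target count takes effect."""
        if zone_announced(previous) != zone_announced(current):
            return CONVERGENCE_SECONDS  # last target went down or first came back
        return 0                        # announcement unchanged, no convergence wait

    print(worst_case_delay(previous=1, current=0))  # 120: zone withdrawn
    print(worst_case_delay(previous=0, current=1))  # 120: zone re-announced
    print(worst_case_delay(previous=2, current=1))  # 0: zone stays announced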

Network load balancer and Cloud Interconnect

The internal network load balancer supports traffic exchange between its listener IP address and on-premises resources connected over Cloud Interconnect.

You cannot add on-premises resources to the load balancer's target groups because the network load balancer and the resources in its target groups must be in the same cloud network.

Use cases

  • Configuring Cloud Interconnect access to cloud networks behind NGFWs

Routing traffic via the internal balancer

An internal network load balancer uses routes of all subnets in the selected Virtual Private Cloud network. These include dynamic routes from Cloud Interconnect and static routes from VPC routing tables.

If the route table contains multiple routes with the same destination prefix but different next hop addresses, outgoing traffic from the load balancer's target resources will be distributed across these next hops. Keep this in mind when traffic reaches the load balancer through network appliances (for example, firewalls) that track incoming and outgoing flows and do not tolerate traffic asymmetry.

If the response traffic from a target resource reaches a network VM that did not see the corresponding incoming traffic, that VM may discard the response (see the asymmetry sketch below). To avoid traffic loss, configure routing in one of the following ways, depending on your setup:

  • Route tables contain static routes with identical prefixes.
  • Source NAT configured on network VMs.

The scenario where routing tables have static routes with identical prefixes and different next hop addresses of network VMs is not supported.
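
The following sketch shows why this scenario breaks: with identical prefixes and different next hops, the return traffic is hashed across the network VMs, so a reply can reach a VM that never saw the original request, and a stateful firewall on that VM drops it. The hash-based next-hop choice and the addresses are assumptions for illustration.

    # Sketch of the unsupported scenario: identical prefixes, two different next
    # hops (network VMs). Return traffic is spread across the next hops, so the
    # reply may bypass the VM that forwarded the request (traffic asymmetry).
    # Hash choice and addresses are illustrative assumptions.
    import hashlib

    NEXT_HOPS = ["10.0.1.10", "10.0.2.10"]  # two network VMs, same destination prefix

    def next_hop_for_flow(flow_id: str) -> str:
        digest = int.from_bytes(hashlib.sha256(flow_id.encode()).digest()[:8], "big")
        return NEXT_HOPS[digest % len(NEXT_HOPS)]

    inbound_vm = "10.0.1.10"  # the VM that actually forwarded the request
    outbound_vm = next_hop_for_flow("1.2.3.4:30325->10.0.1.1:8443")
    print("asymmetric: reply dropped by stateful firewall"
          if outbound_vm != inbound_vm else "symmetric: reply passes")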

Use cases

  • Deploying an Always On availability group with an internal network load balancer

Route tables contain static routes with identical prefixes

Routes must have the next hop IP of one of the network VMs. Network VMs run in Active/Standby mode. To ensure fault tolerance of outgoing traffic, set up traffic forwarding, e.g., using route-switcher.

Use cases

  • Architecture and protection of a basic internet service

Source NAT configured on network VMs

Make sure you set up Source NAT to network VM addresses. Network VMs run in Active/Active mode. To set up Source NAT, refer to the documentation for software deployed on your network VM. View an example of how to set up Source NAT on a Check Point NGFW.

Route tables contain static routes with identical prefixes and different next hop IPs of network VMs

Alert

This use case is not supported. Use one of the options described above.
