Routing
- Best practices for routing in Cloud Interconnect
- Load balancing (Active-Active)
- Direction-based traffic prioritization (Active-Standby)
- VPN gateway traffic failover
- Static route priority
- 0.0.0.0/0 route load balancing
- Direction-based traffic prioritization for 0.0.0.0/0
- Working with security groups
- Use cases
When connecting a customer infrastructure via Yandex Cloud Interconnect, you will typically need to set up traffic routing between the cloud resources and customer infrastructure resources.
By routing, we mean the tools used to manage traffic in Yandex Cloud.
Best practices for routing in Cloud Interconnect
- Prior to deploying cloud resources, make sure you have a well-planned IP addressing scheme. Subnet IP address ranges in Yandex Cloud must not overlap with those in the client infrastructure.
- Always set up two communication circuits through two points of presence.
- To simplify the setup of fault-tolerant BGP routing, consider using the same BGP ASN for multiple customer edge routers connecting to Cloud Interconnect. You can use different BGP ASNs, e.g., when setting up connections via telecom providers. Keep in mind that Yandex Cloud is not responsible for configuring the customer and telecom provider network hardware.
- Each customer edge router that establishes eBGP peering with Yandex Cloud hardware should also establish iBGP peering with other customer edge routers.
- Use prefixes of different lengths for BGP announcements on customer edge routers to distribute outgoing traffic from cloud subnets across communication circuits:
  - Short prefixes, such as /8, have the lowest route priority.
  - Long prefixes, such as /32, have the highest route priority.
- When selecting a communication circuit for outgoing traffic from the customer infrastructure to cloud networks, consider using the Local Preference BGP attribute on the customer edge router.
- You can use Cloud Interconnect along with a NAT gateway if customer edge routers do not announce the 0.0.0.0/0 default route over BGP to Yandex Cloud (see the sketch below). If customer edge routers do announce the 0.0.0.0/0 default route over BGP to Yandex Cloud, you cannot use a NAT gateway.
- Currently, Yandex Cloud does not support routing of outgoing traffic from cloud subnets to the customer infrastructure using BGP community attributes.
Alert
You cannot use matching prefixes in VPC route tables and customer edge router announcements at the same time.
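For reference, below is a minimal Terraform sketch of the NAT gateway setup mentioned above. It assumes the yandex provider's yandex_vpc_gateway and yandex_vpc_route_table resources; the resource names and the network reference are hypothetical. This setup only applies when customer edge routers do not announce the 0.0.0.0/0 default route over BGP.
# Shared egress (NAT) gateway for internet access from cloud subnets
resource "yandex_vpc_gateway" "egress" {
  name = "egress-gateway"
  shared_egress_gateway {}
}

# Route table sending internet-bound traffic to the NAT gateway;
# prefixes from the customer infrastructure are still learned over BGP.
# Attach this table to the relevant subnets via their route_table_id.
resource "yandex_vpc_route_table" "internet" {
  name       = "internet-via-nat"
  network_id = yandex_vpc_network.default.id

  static_route {
    destination_prefix = "0.0.0.0/0"
    gateway_id         = yandex_vpc_gateway.egress.id
  }
}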
Load balancing (Active-Active)
The example below shows load balancing using two private connections set up through two points of presence.
Customer edge routers announce the 10.0.0.0/8 prefix from the customer infrastructure over BGP through two points of presence towards Yandex Cloud. Yandex Cloud will then use ECMP load balancing to distribute traffic between those points of presence.
Note that this balancing mode can create traffic asymmetry. For example, a request originating from the customer infrastructure to cloud resources could arrive through the M9 point of presence, while the response will be sent through NORD.
While Yandex Cloud hardware allows and correctly handles traffic asymmetry, specific types of equipment within the customer infrastructure, such as firewalls, may experience issues with asymmetric traffic patterns.
To allow asymmetric traffic from the Yandex Cloud side to pass, disable the RPF (Reverse Path Forwarding) check on the customer equipment.
Direction-based traffic prioritization (Active-Standby)
To prioritize traffic by direction in Cloud Interconnect, you can use the following methods:
- Longest prefix match (LPM)
- BGP AS path prepend
Longest prefix match takes precedence over BGP AS path prepending in the best route selection algorithm on routers. We recommend choosing only one of these methods rather than using both at the same time.
Longest prefix match (LPM)
Below, you can see an example of prioritizing traffic through two private connections set up via two points of presence using the longest prefix match method.
The customer edge router (R2) uses the NORD PoP to announce the short prefix from the customer infrastructure, 10.0.0.0/8, over BGP towards Yandex Cloud.
Another customer edge router (R1) uses the M9 PoP to announce two long (more specific) prefixes from the customer infrastructure, 10.0.0.0/9 and 10.128.0.0/9, over BGP towards Yandex Cloud.
Yandex Cloud will treat announcements via M9 as more specific ones, i.e., of higher priority.
This way, all traffic from the 172.16.1.0/24, 172.16.2.0/24, and 172.16.3.0/24 cloud subnets to the customer infrastructure will be routed through the private connection to M9. If this connection fails, the traffic will automatically fail over to the private connection to NORD.
BGP AS path prepend
Below, you can see an example of prioritizing traffic through two private connections set up via two points of presence with the BGP AS path prepend method.
You can learn more about BGP AS path prepending in the BGP documentation.
The customer edge router (R1) uses the M9 PoP to announce the 10.0.0.0/8 prefix from the customer infrastructure over BGP towards Yandex Cloud. The BGP AS_PATH attribute will default to 65001, and the AS path length (the number of autonomous system numbers in the path) will be 1.
Another customer edge router (R2) announces the same prefix (10.0.0.0/8) from the customer infrastructure over BGP through the NORD PoP towards Yandex Cloud.
Before announcing the prefix, the BGP routing policy on the R2 router adds the customer's autonomous system number (BGP ASN) to the AS_PATH attribute value, so that it becomes 65001 65001 and the AS path length becomes 2. This longer AS path makes the prefix less preferable for external BGP routers.
This way, for the 10.0.0.0/8 traffic, Yandex Cloud will select the best route via the M9 PoP, while the route via the NORD PoP will act as a failover due to its longer AS path.
All traffic from the 172.16.1.0/24, 172.16.2.0/24, and 172.16.3.0/24 cloud subnets to the customer infrastructure will be routed through the private connection to M9. If this connection fails, the traffic will automatically fail over to the private connection to NORD.
VPN gateway traffic failover
You can use a VPN gateway to provide a backup path for your Cloud Interconnect connection. This might be an option, for example, when you cannot set up two physical circuits via two points of presence to ensure a fault-tolerant connection between the customer infrastructure and Yandex Cloud.
The customer edge router (R1) uses the M9 PoP to announce two long prefixes from the customer infrastructure, 10.0.0.0/9 and 10.128.0.0/9, over BGP towards Yandex Cloud.
Setting up a backup connection from Yandex Cloud to the customer infrastructure involves deploying an IPsec VPN gateway in the ru-central1-b availability zone and configuring static routing within the VPC.
Cloud resource subnets in all three availability zones share a single route table with the 10.0.0.0/8 via 172.16.2.10 static route. Since this /8 prefix is shorter than the /9 prefixes announced over BGP, it will have a lower priority while the Cloud Interconnect connection is up.
If the Cloud Interconnect connection fails, the longer /9 prefixes will be withdrawn from the cloud network, and all traffic towards the customer infrastructure will automatically be routed via the shorter /8 prefix using the static route to the VPN gateway.
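As a reference, here is a minimal Terraform sketch of such a route table, assuming the yandex provider's yandex_vpc_route_table and yandex_vpc_subnet resources; the resource names, network reference, and the subnet-to-zone mapping are illustrative.
# Route table with a backup static route towards the VPN gateway
resource "yandex_vpc_route_table" "backup_via_vpn" {
  name       = "backup-via-vpn"
  network_id = yandex_vpc_network.default.id

  # /8 static route: less specific than the /9 prefixes announced over BGP,
  # so it only carries traffic when the Cloud Interconnect announcements disappear
  static_route {
    destination_prefix = "10.0.0.0/8"
    next_hop_address   = "172.16.2.10"
  }
}

# The same route table is attached to the subnets in all three availability zones
resource "yandex_vpc_subnet" "a" {
  name           = "subnet-a"
  zone           = "ru-central1-a"
  network_id     = yandex_vpc_network.default.id
  v4_cidr_blocks = ["172.16.1.0/24"]
  route_table_id = yandex_vpc_route_table.backup_via_vpn.id
}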
Static route priority
The following flowchart shows how to set up traffic routing from a cloud network for a specific prefix via a VPN gateway, while sending all other traffic over a Cloud Interconnect connection:
The customer edge router uses the M9 PoP to announce the short prefix from the customer infrastructure, 10.0.0.0/8, over BGP towards Yandex Cloud.
The cloud network's static route table is used to set up traffic routing for the long prefix from the customer infrastructure, 10.10.10.0/24, through a VPN gateway with the 172.16.2.10 IP address, which is deployed in the ru-central1-b availability zone.
This way, all traffic from the cloud network to the 10.0.0.0/8 customer infrastructure networks will be transmitted via the Cloud Interconnect connection, while the traffic heading to the 10.10.10.0/24 subnet will go through the VPN gateway.
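The corresponding static route might look as follows in Terraform (a sketch under the same assumptions as above; only the destination prefix differs from the previous example):
resource "yandex_vpc_route_table" "vpn_for_specific_prefix" {
  name       = "vpn-for-specific-prefix"
  network_id = yandex_vpc_network.default.id

  # Only this /24 is routed via the VPN gateway; all other destinations,
  # including the rest of 10.0.0.0/8, follow the routes learned over Cloud Interconnect
  static_route {
    destination_prefix = "10.10.10.0/24"
    next_hop_address   = "172.16.2.10"
  }
}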
0.0.0.0/0 route load balancing
In some cases, for example, to connect cloud resources to the internet via the customer infrastructure, you need to set up 0.0.0.0/0 route announcement over BGP towards Yandex Cloud.
The flowchart above shows how the traffic from cloud subnets connected to Cloud Interconnect is unconditionally routed to customer edge routers via both points of presence.
Security groups cannot be assigned to resources outside Yandex Cloud; therefore, the correct way to filter traffic is to use IPv4 prefixes rather than links to other security groups.
In this case, the customer can filter traffic on the customer edge routers before sending it to the internet through their own NAT gateway, without using the Yandex Cloud infrastructure.
Direction-based traffic prioritization for 0.0.0.0/0
To prioritize traffic by direction in Cloud Interconnect, you can use the following methods:
- Longest Prefix Match (LPM)
- BGP AS path prepend (available as of 03/07/2023)
Longest prefix match takes precedence over BGP AS path prepending in the best route selection algorithm on routers. We recommend choosing only one of these methods rather than using both at the same time.
Longest prefix match (LPM)
Below, you can see an example of prioritizing traffic through two private connections set up via two points of presence.
The customer edge router (R2) uses the NORD PoP to announce the default route from the customer infrastructure, 0.0.0.0/0, over BGP towards Yandex Cloud.
Another customer edge router (R1) uses the M9 PoP to announce two long (more specific) prefixes from the customer infrastructure, 0.0.0.0/1 and 128.0.0.0/1, over BGP towards Yandex Cloud.
Yandex Cloud will treat announcements via M9 as more specific ones, i.e., of higher priority.
This way, all traffic from the cloud subnets will be routed through the private connection to M9. If this connection fails, the traffic will automatically fail over to the private connection to NORD.
BGP AS path prepend
Below, you can see an example of prioritizing traffic through two private connections set up via two points of presence with the BGP AS path prepend method.
You can learn more about BGP AS path prepending in the BGP documentation.
The customer edge router (R1) uses the M9 PoP to announce the default route from the customer infrastructure, 0.0.0.0/0, over BGP towards Yandex Cloud. The AS_PATH attribute will default to 65001, and the AS path length (the number of autonomous system numbers in the path) will be 1.
Another customer edge router (R2) announces the same prefix (0.0.0.0/0) from the customer infrastructure over BGP through the NORD PoP towards Yandex Cloud.
Before announcing the prefix, the BGP routing policy on the R2 router adds the customer's autonomous system number (BGP ASN) to the AS_PATH attribute value, so that it becomes 65001 65001 and the AS path length becomes 2. This longer AS path makes the prefix less preferable for external BGP routers.
This way, for the 0.0.0.0/0 traffic, Yandex Cloud will select the best route via the M9 PoP, while the route via the NORD PoP will act as a failover due to its longer AS path.
All traffic from the cloud subnets to the customer infrastructure will be routed through the private connection to M9. If this connection fails, the traffic will automatically fail over to the private connection to NORD.
Working with security groups
Security groups are used to protect Yandex Cloud resources and cannot be used for filtering traffic outside Yandex Cloud.
Security group rules should be set up for the prefixes announced by client routers to Yandex Cloud. For example, to allow access from the client infrastructure to a web application (port 443) deployed in Yandex Cloud, set up a security group as follows:
ingress {
protocol = "TCP"
port = 443
description = "Allow ingress traffic from Interconnect to Web server"
v4_cidr_blocks = ["172.16.1.5/32"]
}
egress {
protocol = "ANY"
description = "We allow any egress traffic"
v4_cidr_blocks = ["10.0.0.0/8"]
}
The Egress security group rule allows any cloud resources to access customer infrastructure resources on any port without any restriction.
If required, you can use more granular rules to only allow access to specific IP addresses or subnets and ports:
ingress {
protocol = "TCP"
port = 443
description = "Allow ingress traffic from Interconnect to Web server"
v4_cidr_blocks = ["172.16.1.5/32"]
}
egress {
protocol = "TCP"
port = 3389
description = "Allow RDP traffic to server behind Interconnect"
v4_cidr_blocks = ["10.10.10.10/32"]
}