Configuring Cloud Interconnect access to cloud networks behind NGFWs
In this tutorial, we will create a secure high-availability Yandex Cloud network infrastructure that uses a next-generation firewall (NGFW) to segment it into security zones. Each network segment will contain single-purpose resources isolated from the others. For example, we will place public-facing services, such as frontend applications, in the DMZ segment. Each segment will have its own cloud folder and a dedicated VPC cloud network. To connect these segments, we will use NGFW VMs deployed in two availability zones to ensure fault tolerance.
In Yandex Cloud tutorials, you can find the following NGFW-based implementations of a fault-tolerant network infrastructure:
To establish IP network connectivity between your on-premise resources and Yandex Cloud resources, you can use Yandex Cloud Interconnect.
In this tutorial, you will set up routing for your cloud network and configure a Cloud Interconnect private connection to enable network connectivity between your on-premise infrastructure and the segments hosted behind the NGFW.
You can see this solution in the diagram below.
Name | Description |
---|---|
FW-A | Primary NGFW in zone A |
FW-B | Standby NGFW in zone B |
VPC `interconnect` | VPC connecting your on-premise infrastructure over Cloud Interconnect |
VPC `dmz` | VPC hosting frontend internet-facing applications |
VPC `app` | VPC hosting backend applications |
`A.A.A.0/24` | `interconnect` VPC subnet hosting FW-A |
`B.B.B.0/24` | `interconnect` VPC subnet hosting FW-B |
`C.C.0.0/16` | Aggregated prefix of the `dmz` VPC subnets you want to access from your on-premise infrastructure |
`D.D.0.0/16` | Aggregated prefix of the `app` VPC subnets you want to access from your on-premise infrastructure |
`dmz` and `app` resources | Resources hosted in the `dmz` and `app` VPCs |
Traffic routing from your on-premise infrastructure to the `dmz` and `app` VPC subnets
If the prerequisites are met and your cloud route tables and Cloud Interconnect are configured according to the steps below:
- Your on-premise infrastructure traffic will arrive at the primary NGFW, which will route it to the relevant VPC: `dmz` or `app`.
- If the primary NGFW fails, the `route-switcher` module will redirect the traffic arriving in the primary zone to the standby NGFW in the other availability zone.
- If the availability zone hosting the primary NGFW fails, the `route-switcher` module will redirect traffic to the standby NGFW, which will then route it to the relevant VPC: `dmz` or `app`.
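The failover behavior above can be sketched as a minimal simulation. This is an illustration only: the health check and next-hop update are simplified stand-ins for what the `route-switcher` module actually does, and the IP addresses are hypothetical placeholders.

```python
# Illustrative next-hop addresses for the two NGFWs in the interconnect VPC
FW_A = "10.160.1.10"  # primary NGFW, zone A (placeholder address)
FW_B = "10.160.2.10"  # standby NGFW, zone B (placeholder address)

# Route tables applied to the dmz and app VPC subnets:
# default route via the primary NGFW
route_tables = {
    "dmz": {"0.0.0.0/0": FW_A},
    "app": {"0.0.0.0/0": FW_A},
}

def switch_routes(route_tables, failed_next_hop, standby_next_hop):
    """Replace every route pointing at the failed NGFW with the standby NGFW,
    mimicking what route-switcher does when its health checks fail."""
    for table in route_tables.values():
        for prefix, next_hop in table.items():
            if next_hop == failed_next_hop:
                table[prefix] = standby_next_hop

# Primary NGFW health check fails: traffic is redirected to the standby NGFW
switch_routes(route_tables, FW_A, FW_B)
print(route_tables["dmz"]["0.0.0.0/0"])  # -> 10.160.2.10
```

When the primary NGFW recovers, the module performs the reverse switch, so the default routes point back at FW-A.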
Prerequisites
- Use `route-switcher` to switch `interconnect`, `dmz`, and `app` VPC-directed traffic from the primary to the standby NGFW if the primary one fails. You can read about `route-switcher` in the UserGate NGFW and Check Point NGFW tutorials.
- Network prefixes in the route tables must not overlap your on-premise network prefixes.
- Routes announced from the on-premise infrastructure through Cloud Interconnect must not overlap with the address spaces of the `interconnect`, `dmz`, or `app` VPC subnets.
- Prefixes in the Cloud Interconnect private connection announcements, i.e., `A.A.A.0/24`, `B.B.B.0/24`, `C.C.0.0/16`, and `D.D.0.0/16` in our example, must not overlap.
- The NGFW must have security policies allowing access from the `interconnect` VPC to the `dmz` and `app` VPCs, based on your organization's security requirements.
- Route tables for the `dmz` and `app` VPC subnets must include routes to your on-premise infrastructure networks. A common practice is to use the default route, `0.0.0.0/0`. These routes must use the primary NGFW as the next hop.
- The NGFW must have static routes to your on-premise infrastructure networks. These routes must use the gateway address as the next hop, i.e., the first address in the `interconnect` VPC subnet hosting the NGFW, e.g., `x.x.x.1` for the `x.x.x.0/24` subnet.
- We recommend planning the `dmz` and `app` VPC network address space so that you can use aggregated prefixes, `C.C.0.0/16` and `D.D.0.0/16` in our example. With aggregated prefixes, you will only need to configure the `interconnect` VPC route tables and the prefix announcements in the private Cloud Interconnect connections once; you will not have to change them when adding new subnets to the `dmz` and `app` VPCs.
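A quick way to sanity-check the non-overlap prerequisites is the `ipaddress` module from the Python standard library. The concrete prefixes below are hypothetical placeholders standing in for `A.A.A.0/24`, `B.B.B.0/24`, `C.C.0.0/16`, `D.D.0.0/16`, and your on-premise routes:

```python
import ipaddress
from itertools import combinations

# Placeholder values for the example prefixes announced over the private connections
announced = {
    "A.A.A.0/24": "172.16.1.0/24",   # interconnect subnet hosting FW-A
    "B.B.B.0/24": "172.16.2.0/24",   # interconnect subnet hosting FW-B
    "C.C.0.0/16": "10.1.0.0/16",     # aggregated dmz prefix
    "D.D.0.0/16": "10.2.0.0/16",     # aggregated app prefix
}
# Routes announced from the on-premise infrastructure (placeholder)
on_prem = [ipaddress.ip_network("192.168.0.0/16")]

nets = {name: ipaddress.ip_network(p) for name, p in announced.items()}

# The announced prefixes must not overlap each other...
for (n1, a), (n2, b) in combinations(nets.items(), 2):
    assert not a.overlaps(b), f"{n1} overlaps {n2}"

# ...and the on-premise routes must not overlap any cloud subnet
for net in nets.values():
    for route in on_prem:
        assert not route.overlaps(net)
print("no overlaps")
```

Running this with your real address plan before submitting the support ticket catches overlap mistakes early.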
Configuring `interconnect` VPC route tables
Configure route tables in the `interconnect` VPC according to the tables below and apply them to the primary and standby NGFW-hosting subnets.
Create a route table containing more specific routes (with the `/17` network prefix) to the `dmz` and `app` VPC subnets and apply it to the primary NGFW-hosting subnet, i.e., `A.A.A.0/24` in zone A. Remember to add the prefixes from that table to the announcement settings of the private connections in the primary NGFW availability zone.
Destination prefix | Next hop |
---|---|
`C.C.0.0/17` | FW-A IP address in the `interconnect` VPC |
`C.C.128.0/17` | FW-A IP address in the `interconnect` VPC |
`D.D.0.0/17` | FW-A IP address in the `interconnect` VPC |
`D.D.128.0/17` | FW-A IP address in the `interconnect` VPC |
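Each pair of `/17` routes is simply the two halves of the corresponding aggregated `/16`, which you can derive programmatically. The prefix below is a hypothetical placeholder for `C.C.0.0/16`:

```python
import ipaddress

# Placeholder for the aggregated dmz prefix C.C.0.0/16
dmz_aggregate = ipaddress.ip_network("10.1.0.0/16")

# Split the /16 into its two /17 halves, as used in the primary zone's route table
halves = list(dmz_aggregate.subnets(new_prefix=17))
print(halves)  # [IPv4Network('10.1.0.0/17'), IPv4Network('10.1.128.0/17')]

# Collapsing them back yields the original /16, i.e., the standby zone's route
assert list(ipaddress.collapse_addresses(halves)) == [dmz_aggregate]
```

Do the same for the `app` aggregate (`D.D.0.0/16`) to get its two `/17` routes.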
Create a route table containing less specific routes (with the `/16` network prefix) to the `dmz` and `app` VPC subnets and apply it to the standby NGFW-hosting subnet, i.e., `B.B.B.0/24` in zone B. Remember to add the prefixes from that table to the announcement settings of the private connections in the standby NGFW availability zone.
Destination prefix | Next hop |
---|---|
`C.C.0.0/16` | FW-A IP address in the `interconnect` VPC |
`D.D.0.0/16` | FW-A IP address in the `interconnect` VPC |
These settings ensure that traffic to the `dmz` and `app` VPC subnets is routed to the primary NGFW. If the primary NGFW fails, the `route-switcher` module will update the route tables to use the standby NGFW as the next hop.
By using more specific and less specific prefixes in the route tables, you can configure the Cloud Interconnect private connection announcements so that your on-premise infrastructure traffic bound for the `dmz` and `app` VPC subnets goes to the primary NGFW availability zone and, if that zone fails, is redirected to the standby NGFW availability zone.
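The more specific announcements win because of standard longest-prefix matching. Here is a minimal illustration with hypothetical placeholder prefixes (the `/17` halves announced via zone A, the `/16` aggregate via zone B):

```python
import ipaddress

# Placeholder announcements: zone A announces the /17 halves, zone B the /16 aggregate
announcements = {
    ipaddress.ip_network("10.1.0.0/17"): "zone-a (primary NGFW)",
    ipaddress.ip_network("10.1.128.0/17"): "zone-a (primary NGFW)",
    ipaddress.ip_network("10.1.0.0/16"): "zone-b (standby NGFW)",
}

def best_route(dst, announcements):
    """Pick the announcement with the longest matching prefix, as a router would."""
    matches = [net for net in announcements if dst in net]
    return announcements[max(matches, key=lambda net: net.prefixlen)]

dst = ipaddress.ip_address("10.1.200.5")
print(best_route(dst, announcements))  # -> zone-a (primary NGFW)

# If zone A's announcements are withdrawn, the /16 takes over
del announcements[ipaddress.ip_network("10.1.0.0/17")]
del announcements[ipaddress.ip_network("10.1.128.0/17")]
print(best_route(dst, announcements))  # -> zone-b (standby NGFW)
```

As long as the zone A prefixes are announced, they always match more bits than the zone B aggregate, so traffic prefers the primary zone without any extra configuration.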
Creating private Cloud Interconnect connections
You can find the Cloud Interconnect deployment options in the documentation. For a fault-tolerant connection to the service, we recommend setting up multiple trunks, one per point of presence.
Follow this guide to set up a private connection depending on how you connect to Cloud Interconnect.
When submitting a ticket to Yandex Cloud support, specify the following subnet announcements for each private connection, per availability zone:
- For the primary NGFW availability zone:
  - Prefixes from the route table applied to the `interconnect` VPC subnet hosting the primary NGFW. In our example, these are the more specific `/17` prefixes for the `ru-central1-a` zone. These announcements ensure that your on-premise traffic bound for the `dmz` and `app` VPC subnets goes to the primary NGFW availability zone.
  - Prefix of the `interconnect` VPC subnet hosting the primary NGFW, along with any other `interconnect` VPC subnet prefixes in the same zone that you need to announce to your on-premise infrastructure.
- For the standby NGFW availability zone:
  - Prefixes from the route table applied to the `interconnect` VPC subnet hosting the standby NGFW. In our example, these are the less specific `/16` prefixes for the `ru-central1-b` zone. These announcements ensure that your on-premise traffic bound for the `dmz` and `app` VPC subnets goes to the standby NGFW availability zone if the primary zone fails.
  - Prefix of the `interconnect` VPC subnet hosting the standby NGFW, along with any other `interconnect` VPC subnet prefixes in the same zone that you need to announce to your on-premise infrastructure.
For our tutorial example, specify the following details under `vpc`:

```yaml
vpc:
  vpc_net_id: <VPC_interconnect_ID>
  vpc_subnets:
    ru-central1-a: [A.A.A.0/24, C.C.0.0/17, C.C.128.0/17, D.D.0.0/17, D.D.128.0/17]
    ru-central1-b: [B.B.B.0/24, C.C.0.0/16, D.D.0.0/16]
```
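As a final sanity check, you can verify that the zone-specific announcements are consistent with each other, i.e., that the more specific zone A prefixes aggregate exactly to the less specific zone B prefixes. The values below are hypothetical placeholders mirroring the `vpc_subnets` structure above:

```python
import ipaddress

# Placeholder announcements mirroring the vpc_subnets structure
# (first entry per zone is the interconnect NGFW-hosting subnet)
vpc_subnets = {
    "ru-central1-a": ["172.16.1.0/24", "10.1.0.0/17", "10.1.128.0/17",
                      "10.2.0.0/17", "10.2.128.0/17"],
    "ru-central1-b": ["172.16.2.0/24", "10.1.0.0/16", "10.2.0.0/16"],
}

def dmz_app_prefixes(zone):
    """Drop the interconnect subnet (first entry) and parse the rest."""
    return [ipaddress.ip_network(p) for p in vpc_subnets[zone][1:]]

# The /17s announced from zone A must collapse exactly to the /16s from zone B
collapsed = sorted(ipaddress.collapse_addresses(dmz_app_prefixes("ru-central1-a")))
assert collapsed == sorted(dmz_app_prefixes("ru-central1-b"))
print("zone A announcements aggregate to zone B announcements")
```

If this check fails for your real address plan, failover would leave some `dmz` or `app` subnets unreachable from on-premise, since the standby zone's aggregates would not cover them.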