Implementing a secure high-availability network infrastructure with a dedicated DMZ based on the Check Point NGFW
Follow this tutorial to deploy a high-availability fail-safe network infrastructure with a dedicated DMZ
The infrastructure elements are hosted in two availability zones and grouped by purpose into individual folders. This solution enables you to publish generally available web resources, such as front-end applications, in a DMZ that is isolated from the internal infrastructure and ensures security and high availability of the entire perimeter.
The solution has the following basic segments (folders):
- The public folder contains the Application Load Balancer that enables public access from the internet to the applications published in the DMZ.
- The mgmt folder is designed for hosting NGFWs and cloud infrastructure management resources. It includes two VMs with firewalls (fw-a and fw-b), a VM of the centralized firewall management server (mgmt-server), and a VM for accessing the VPN-based control segment (jump-vm).
- The dmz folder enables you to publish applications with public access from the internet.
- The app and database folders can be used to host the business logic of applications (in this tutorial, no VMs are placed there).
For more information, see the project repository.
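Once you deploy the resources later in this tutorial, you can quickly confirm that the segment folders exist using the Yandex Cloud CLI (an optional check; the folder names are defined by the Terraform configuration in the repository):
yc resource-manager folder list
The output should include the public, mgmt, dmz, app, and database folders described above.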
To deploy a secure high-availability network infrastructure with a dedicated DMZ based on the Check Point next-generation firewall:
- Prepare your cloud.
- Prepare the environment.
- Deploy your resources.
- Set up firewall gateways.
- Enable the route-switcher module.
- Test the solution for performance and fault tolerance.
If you no longer need the resources you created, delete them.
Next-Generation Firewall
A next-generation firewall is used for cloud network protection and segmentation with a dedicated DMZ for public-facing applications. Yandex Cloud Marketplace offers multiple NGFW solutions.
This scenario deploys the Check Point CloudGuard IaaS solution offering the following features:
- Firewalling
- NAT
- Intrusion prevention
- Antivirus
- Bot protection
- Application layer granular traffic control
- Session logging
- Centralized management with Check Point Security Management
In this guide, the Check Point CloudGuard IaaS solution is configured with basic access control and NAT policies.
Prepare your cloud
Sign up for Yandex Cloud and create a billing account:
- Go to the management console and log in to Yandex Cloud or create an account if you do not have one yet.
- On the Yandex Cloud Billing page, make sure you have a billing account linked and it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.
If you have an active billing account, you can go to the cloud page.
Learn more about clouds and folders.
Required paid resources
The infrastructure support cost includes:
- Fee for continuously running VMs (see Yandex Compute Cloud pricing).
- Fee for using Application Load Balancer (see Yandex Application Load Balancer pricing).
- Fee for using Network Load Balancer (see Yandex Network Load Balancer pricing).
- Fee for using public IP addresses and outgoing traffic (see Yandex Virtual Private Cloud pricing).
- Fee for using functions (see Yandex Cloud Functions pricing).
- Fee for using the Check Point NGFW.
Required quotas
Warning
The tutorial involves deploying a resource-intensive infrastructure.
Make sure your cloud has enough free quota that is not already taken up by resources for other tasks.
Amount of resources used by the tutorial
Resource | Amount |
---|---|
Folders | 7 |
Instance groups | 1 |
Virtual machines | 6 |
VM vCPUs | 18 |
VM RAM | 30 GB |
Disks | 6 |
SSD size | 360 GB |
HDD size | 30 GB |
Cloud networks | 7 |
Subnets | 14 |
Route tables | 4 |
Security groups | 10 |
Static public IP addresses | 2 |
Public IP addresses | 2 |
Static routes | 17 |
Buckets | 1 |
Cloud functions | 1 |
Triggers for cloud functions | 1 |
Total RAM for all running functions | 128 MB |
Network load balancers (NLB) | 2 |
NLB target groups | 2 |
Application load balancers (ALB) | 1 |
ALB backend groups | 1 |
ALB target groups | 1 |
Prepare the environment
This tutorial uses Windows software and the Windows Subsystem for Linux (WSL).
The infrastructure is deployed using Terraform.
Configure WSL
- Check whether WSL is installed on your PC. To do this, run the following command in the CLI terminal:
  wsl -l
  If WSL is installed, the terminal will display a list of available distributions, for example:
  Windows Subsystem for Linux Distributions:
  docker-desktop (Default)
  docker-desktop-data
  Ubuntu
- If WSL is not installed, install it and repeat the previous step.
- In addition, you can install a familiar Linux distribution, e.g., Ubuntu, on top of WSL.
- To make the installed distribution the default one, run:
  wsl --setdefault ubuntu
- To switch the terminal to the Linux subsystem mode, run:
  wsl ~
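Optionally, you can also check which distribution is currently the default and which WSL version it runs; a quick check in the Windows terminal:
wsl -l -v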
Note
All steps described below are completed in the Linux terminal.
Create a service account with admin privileges for the cloud
- In the management console, select the folder where you want to create a service account.
- In the list of services, select Identity and Access Management.
- Click Create service account.
- Enter a name for the service account, e.g., sa-terraform.
  The name format requirements are as follows:
  - The name must be from 3 to 63 characters long.
  - It may contain lowercase Latin letters, numbers, and hyphens.
  - The first character must be a letter and the last character cannot be a hyphen.
  Make sure the service account name is unique within your cloud.
- Click Create.
- Assign the admin role to the service account:
  - On the management console home page, select the cloud.
  - Go to the Access bindings tab.
  - Click Configure access.
  - In the window that opens, select Service accounts and then select the sa-terraform service account.
  - Click Add role and select the admin role.
  - Click Save.
If you do not have the Yandex Cloud command line interface yet, install and initialize it.
The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
- Create a service account:
  yc iam service-account create --name sa-terraform
  Where name is the service account name. The naming requirements are as follows:
  - The name must be from 3 to 63 characters long.
  - It may contain lowercase Latin letters, numbers, and hyphens.
  - The first character must be a letter and the last character cannot be a hyphen.
  Result:
  id: ajehr0to1g8bh0la8c8r
  folder_id: b1gv87ssvu497lpgjh5o
  created_at: "2023-03-04T09:03:11.665153755Z"
  name: sa-terraform
- Assign the account the admin role:
  yc resource-manager cloud add-access-binding <cloud_ID> \
    --role admin \
    --subject serviceAccount:<service_account_ID>
  Result:
  done (1s)
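To double-check the assignment, you can list the cloud's access bindings (an optional verification step, not part of the original procedure):
yc resource-manager cloud list-access-bindings <cloud_ID>
The sa-terraform service account should appear in the list with the admin role.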
To create a service account, use the create REST API method for the ServiceAccount resource or the ServiceAccountService/Create gRPC API call.
To assign the service account a role for a cloud or folder, use the updateAccessBindings REST API method for the Cloud or Folder resource:
- Select the role to assign to the service account. For role descriptions, see the Yandex Cloud role reference in the Yandex Identity and Access Management documentation.
- Get the ID of the folder the service account belongs to.
- Get an IAM token required for authorization in the Yandex Cloud API.
- Get a list of the folder's service accounts to find out their IDs:
  export FOLDER_ID=b1gvmob95yys********
  export IAM_TOKEN=CggaATEVAgA...
  curl \
    --header "Authorization: Bearer ${IAM_TOKEN}" \
    "https://iam.api.cloud.yandex.net/iam/v1/serviceAccounts?folderId=${FOLDER_ID}"
  Result:
  {
    "serviceAccounts": [
      {
        "id": "ajebqtreob2d********",
        "folderId": "b1gvmob95yys********",
        "createdAt": "2018-10-18T13:42:40Z",
        "name": "my-robot",
        "description": "my description"
      }
    ]
  }
- Create the request body, e.g., in a file named body.json. Set the action property to ADD, set roleId to the appropriate role, such as editor, and specify the serviceAccount type and the service account ID in the subject property:
  body.json:
  {
    "accessBindingDeltas": [{
      "action": "ADD",
      "accessBinding": {
        "roleId": "editor",
        "subject": {
          "id": "ajebqtreob2d********",
          "type": "serviceAccount"
        }
      }
    }]
  }
- Assign the role to the service account. For example, for a folder with the b1gvmob95yys******** ID:
  export FOLDER_ID=b1gvmob95yys********
  export IAM_TOKEN=CggaAT********
  curl \
    --request POST \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${IAM_TOKEN}" \
    --data '@body.json' \
    "https://resource-manager.api.cloud.yandex.net/resource-manager/v1/folders/${FOLDER_ID}:updateAccessBindings"
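To verify the binding via the API as well, you can call the listAccessBindings REST method for the same folder (a minimal sketch reusing the FOLDER_ID and IAM_TOKEN variables set above):
curl \
  --header "Authorization: Bearer ${IAM_TOKEN}" \
  "https://resource-manager.api.cloud.yandex.net/resource-manager/v1/folders/${FOLDER_ID}:listAccessBindings"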
Install the required utilities
- Install Git using the following command:
  sudo apt install git
- Install Terraform:
  - Go to your home directory:
    cd ~
  - Create a folder named terraform and open it:
    mkdir terraform
    cd terraform
  - Download the terraform_1.3.9_linux_amd64.zip file:
    curl \
      --location \
      --remote-name \
      https://hashicorp-releases.yandexcloud.net/terraform/1.3.9/terraform_1.3.9_linux_amd64.zip
  - Install the zip utility and unpack the ZIP archive:
    apt install zip
    unzip terraform_1.3.9_linux_amd64.zip
  - Add the path to the folder with the executable to the PATH variable:
    export PATH=$PATH:~/terraform
  - Make sure Terraform is installed by running this command:
    terraform -help
- Create a configuration file specifying the provider source for Terraform:
  - Create a file named .terraformrc using the nano editor:
    cd ~
    nano .terraformrc
  - Add the following section to the file:
    provider_installation {
      network_mirror {
        url     = "https://terraform-mirror.yandexcloud.net/"
        include = ["registry.terraform.io/*/*"]
      }
      direct {
        exclude = ["registry.terraform.io/*/*"]
      }
    }
  For more information about setting up mirrors, see the Terraform documentation.
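The PATH export above only applies to the current shell session. Optionally (assuming bash with a ~/.bashrc profile), you can persist the change and confirm the installed version:
echo 'export PATH=$PATH:~/terraform' >> ~/.bashrc
terraform version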
Deploy your resources
- Clone the yandex-cloud-examples/yc-dmz-with-high-available-ngfw GitHub repository and go to the yc-dmz-with-high-available-ngfw folder:
  git clone https://github.com/yandex-cloud-examples/yc-dmz-with-high-available-ngfw.git
  cd yc-dmz-with-high-available-ngfw
- Set up the CLI profile to run operations on behalf of the service account:
  CLI
  If you do not have the Yandex Cloud command line interface yet, install and initialize it.
  The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.
  - Create an authorized key for your service account and save it to a file:
    yc iam key create \
      --service-account-id <service_account_ID> \
      --folder-id <ID_of_folder_with_service_account> \
      --output key.json
    Where:
    - service-account-id: Service account ID.
    - folder-id: ID of the folder in which the service account was created.
    - output: Name of the file with the authorized key.
    Result:
    id: aje8nn871qo4********
    service_account_id: ajehr0to1g8b********
    created_at: "2023-03-04T09:16:43.479156798Z"
    key_algorithm: RSA_2048
  - Create a CLI profile to run operations on behalf of the service account:
    yc config profile create sa-terraform
    Result:
    Profile 'sa-terraform' created and activated
  - Set the profile configuration:
    yc config set service-account-key key.json
    yc config set cloud-id <cloud_ID>
    yc config set folder-id <folder_ID>
    Where:
    - service-account-key: File with the authorized key of the service account.
    - cloud-id: Cloud ID.
    - folder-id: Folder ID.
  - Add the credentials to the environment variables:
    export YC_TOKEN=$(yc iam create-token)
    export YC_CLOUD_ID=$(yc config get cloud-id)
    export YC_FOLDER_ID=$(yc config get folder-id)
- Get your PC's public IP address:
  curl 2ip.ru
  Result:
  192.2**.**.**
- Open the terraform.tfvars file in the nano editor and edit the following:
  - The line with the cloud ID:
    cloud_id = "<cloud_ID>"
  - The line with the list of public IP addresses allowed to access jump-vm:
    trusted_ip_for_access_jump-vm = ["<PC_external_IP_address>/32"]
- Deploy the resources in the cloud using Terraform:
  - Initialize Terraform:
    terraform init
  - Check that the Terraform configuration files are valid:
    terraform validate
  - Check the list of cloud resources you are about to create:
    terraform plan
  - Create the resources:
    terraform apply
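After terraform apply completes, you can list every output value the configuration exposes; the output names referenced later in this tutorial, such as fw-alb_public_ip_address, fw_smartconsole_mgmt-server_password, and fw_sic-password, come from the repository's configuration:
terraform output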
Set up firewall gateways
As an example, this tutorial describes how to configure the FW-A and FW-B firewalls with the basic access control and NAT policies that are required to test performance and fault tolerance but are not sufficient for deploying the infrastructure in a production environment.
Connect to the control segment via a VPN
After you deploy the infrastructure, the mgmt folder will contain a VM named jump-vm based on an Ubuntu image with WireGuard VPN configured. Set up a VPN tunnel to jump-vm on your PC to access the subnets of the mgmt, dmz, app, and database segments.
To set up the VPN tunnel:
- Get your username in the Linux subsystem:
  whoami
- Install WireGuard on your PC.
- Open WireGuard and click Add Tunnel.
- In the dialog box that opens, select the jump-vm-wg.conf file in the yc-dmz-with-high-available-ngfw folder.
  To find the directory created in the Linux subsystem, e.g., Ubuntu, type the file path in the dialog box address bar:
  \\wsl$\Ubuntu\home\<Ubuntu_user_name>\yc-dmz-with-high-available-ngfw
  Where <Ubuntu_user_name> is the previously obtained name of the current Linux distribution user.
- Click Activate to activate the tunnel.
- Check network connectivity to the management server through the WireGuard VPN tunnel by running the following command in the terminal:
  ping 192.168.1.100
  Warning
  If the packets fail to reach the management server, make sure the mgmt-jump-vm-sg security group rules for incoming traffic have your PC's external IP address specified correctly.
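If in doubt, re-check the public IP address your PC currently uses; if it has changed since you filled in terraform.tfvars, update trusted_ip_for_access_jump-vm and run terraform apply again:
curl 2ip.ru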
Run SmartConsole
To manage and configure the Check Point NGFW, use the SmartConsole GUI client:
- Connect to the NGFW management server. To do so, go to https://192.168.1.100 in your browser.
- Sign in using admin as both the username and password.
- In the Gaia Portal interface that opens, download the SmartConsole GUI client. To do this, click Manage Software Blades using SmartConsole. Download Now!.
- Install SmartConsole on your PC.
- Get the password for access to SmartConsole by running the following command in the terminal window:
  terraform output fw_smartconsole_mgmt-server_password
- Open SmartConsole and sign in with admin as the username, 192.168.1.100 as the management server IP address, and the SmartConsole password you got in the previous step.
Add the firewall gateways
Add an FW-A firewall gateway to the management server using the Wizard:
- In the Objects drop-down list at the top left, select More object types → Network Object → Gateways and Servers → New Gateway....
- Click Wizard Mode.
- In the dialog box that opens, enter the following:
  - Gateway name: FW-A
  - Gateway platform: CloudGuard IaaS
  - IPv4: 192.168.1.10
- Click Next.
- Get the password for access to the firewalls by running the following command in the terminal window:
  terraform output fw_sic-password
- In the One-time password field, enter the password you obtained in the previous step.
- Click Next, and then Finish.
Similarly, add the FW-B firewall gateway to the management server with the following values:
- Gateway name: FW-B
- IPv4: 192.168.2.10
Configure the FW-A gateway network interfaces
Configure the eth0 network interface of the FW-A gateway:
- In the Gateways & Servers tab, open the FW-A gateway setup dialog.
- In the Network Management tab, in the Topology table, select the eth0 interface and click Modify....
- Under Leads To, select Override.
- Next to the Specific option, hover over the FW-A-eth0 interface name and click the edit icon in the window that opens.
- In the dialog box that opens, rename FW-A-eth0 to mgmt.
- Under Security Zone, activate Specify Security Zone and select InternalZone.
Similarly, configure the eth1, eth2, eth3, and eth4 network interfaces:
- For the eth1 interface, specify ExternalZone under Security Zone. Do not rename this interface.
- Rename the eth2 interface to dmz, activate Interface leads to DMZ, and specify DMZZone.
  Set up Automatic Hide NAT to hide the addresses of the VMs hosted in the DMZ segment when they access the internet. To do this:
  - In the dmz interface editing dialog box, click Net_10.160.1.0 and go to the NAT tab.
  - Activate Add automatic address translation rules, select Hide from the drop-down list, and enable Hide behind gateway.
  - Repeat the same steps for Net_10.160.2.0.
- Rename the eth3 interface to app and specify InternalZone.
- Rename the eth4 interface to database and specify InternalZone.
Configure the FW-B gateway network interfaces
Configure the FW-B gateway network interfaces the same way as those of the FW-A gateway. When naming the interfaces, select existing names from the list.
To select an interface name from the existing ones:
- Under Leads To, select Override.
- Find the relevant name in the drop-down list next to the Specific option.
Warning
Renaming the interfaces again will cause a network object name replication error when you set security policies.
Create network objects
- In the Objects drop-down list at the top left, select New Network... to create networks named public - a and public - b with the following parameters:
Name | Network address | Net mask |
---|---|---|
public - a | 172.16.1.0 | 255.255.255.0 |
public - b | 172.16.2.0 | 255.255.255.0 |
- Select New Network Group... to create a group named public and add the public - a and public - b networks to it.
- Select New Host... to create hosts with the following parameters:
Name | IPv4 address |
---|---|
dmz-web-server | 10.160.1.100 |
FW-a-dmz-IP | 10.160.1.10 |
FW-a-public-IP | 172.16.1.10 |
FW-b-dmz-IP | 10.160.2.10 |
FW-b-public-IP | 172.16.2.10 |
- Select More object types → Network Object → Service → New TCP... to create a TCP service for the application deployed in the DMZ segment; specify TCP_8080 as its name and 8080 as the port.
Set security policy rules
To add a security rule:
- In the Security policies tab, select Policy under Access Control.
- In the rule table, right-click a rule and, in the context menu that opens, select New Rule → Above or Below.
- In the new line:
  - In the Name column, enter Web-server port forwarding on FW-a.
  - In the Source column, click + and select the public object.
  - In the Destination column, select the FW-a-public-IP object.
  - In the Services & Applications column, select the TCP_8080 object.
  - In the Action column, select Accept.
  - In the Track column, select Log.
  - In the Install On column, select the FW-a object.
In the same way, add other rules from the basic rule table below to test the firewall policies, run NLB health checks, publish a test application from the DMZ segment, and test its fault tolerance.
No | Name | Source | Destination | VPN | Services & Applications | Action | Track | Install On |
---|---|---|---|---|---|---|---|---|
1 | Web-server port forwarding on FW-a | public | FW-a-public-IP | Any | TCP_8080 | Accept | Log | FW-a |
2 | Web-server port forwarding on FW-b | public | FW-b-public-IP | Any | TCP_8080 | Accept | Log | FW-b |
3 | FW management & NLB healthcheck | mgmt | FW-a, FW-b, mgmt-server | Any | https, ssh | Accept | Log | Policy Targets (All gateways) |
4 | Stealth | Any | FW-a, FW-b, mgmt-server | Any | Any | Drop | Log | Policy Targets (All gateways) |
5 | mgmt to DMZ | mgmt | dmz | Any | Any | Accept | Log | Policy Targets (All gateways) |
6 | mgmt to app | mgmt | app | Any | Any | Accept | Log | Policy Targets (All gateways) |
7 | mgmt to database | mgmt | database | Any | Any | Accept | Log | Policy Targets (All gateways) |
8 | ping from dmz to internet | dmz | ExternalZone | Any | icmp-requests (Group) | Accept | Log | Policy Targets (All gateways) |
9 | Cleanup rule | Any | Any | Any | Any | Drop | Log | Policy Targets (All gateways) |
Set up a static NAT table
Source NAT ensures that an application's response passes through the same firewall as the user's request. Destination NAT routes user requests to the network load balancer that fronts the application's group of web servers.
In the headers of packets arriving from the Application Load Balancer with user requests to the application published in the DMZ, the Source IP is translated to the address of the firewall's DMZ interface and the Destination IP to the address of the web servers' network load balancer.
To set up the NAT tables of the FW-A gateway:
- Go to the NAT subsection of the Access Control section.
- In the rule table, right-click a rule and, in the context menu that opens, select New Rule → Above or Below.
- In the new line:
  - In the Original Source column, click + and select the public object.
  - In the Original Destination column, select the FW-a-public-IP object.
  - In the Original Services column, select the TCP_8080 object.
  - In the Translated Source column, select the FW-a-dmz-IP object.
  - In the Translated Destination column, select the dmz-web-server object.
  - In the Install On column, select the FW-a object.
- Make sure to change the NAT method for FW-a-dmz-IP. To do this, right-click the FW-a-dmz-IP object in the table and select NAT Method → Hide.
In the same way, set up the static NAT table for the FW-B gateway based on the table below:
No | Original Source | Original Destination | Original Services | Translated Source | Translated Destination | Translated Services | Install On |
---|---|---|---|---|---|---|---|
1 | public | FW-a-public-IP | TCP_8080 | FW-a-dmz-IP (Hide) | dmz-web-server | Original | FW-a |
2 | public | FW-b-public-IP | TCP_8080 | FW-b-dmz-IP (Hide) | dmz-web-server | Original | FW-b |
Apply the security policy rules
- Click Install Policy at the top left of the screen.
- In the dialog box that opens, click Push & Install.
- In the next dialog, click Install and wait for the process to complete.
Enable the route-switcher module
After you complete the NGFW setup, make sure the FW-A and FW-B health checks return Healthy. To do this, in the Yandex Cloud management console, in the mgmt folder, select Network Load Balancer and go to the route-switcher-lb-... network load balancer page. Expand the target group and make sure the target resources are Healthy. If they are Unhealthy, check that FW-A and FW-B are up and running and configured.
Once the FW-A and FW-B status changes to Healthy, open the route-switcher.tf file and change the start_module parameter of the route-switcher module to true. To enable the module, run these commands:
terraform plan
terraform apply
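Optionally, instead of editing the file by hand, you can flip the flag from the shell before running the commands above (a minimal sketch, assuming the parameter appears as start_module = false on its own line in route-switcher.tf):
sed -i 's/start_module[[:space:]]*=[[:space:]]*false/start_module = true/' route-switcher.tf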
Within 5 minutes, the route-switcher module starts providing fault tolerance of outgoing traffic across the segments.
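The switching itself happens at the route table level and can also be observed from the CLI (a quick check, assuming the route table keeps the dmz-rt name in the dmz folder, as in this tutorial); the next hop address in the static routes shows which firewall currently forwards outgoing DMZ traffic:
yc vpc route-table get dmz-rt --folder-name dmz --format yaml
This check is used again in the fault tolerance test below.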
Test the solution for performance and fault tolerance
Test the system performance
- To find out the public IP address of the load balancer, run this command in the terminal:
  terraform output fw-alb_public_ip_address
- Make sure the network infrastructure is accessible from the outside. To do so, go to the following address in your browser:
  http://<ALB_load_balancer_public_IP_address>
  If the system is accessible from the outside, you will see the Welcome to nginx! page.
- Make sure the firewall security policy rules that allow traffic are active. To do this, go to the yc-dmz-with-high-available-ngfw folder on your PC and connect to a VM in the DMZ segment over SSH:
  cd ~/yc-dmz-with-high-available-ngfw
  ssh -i pt_key.pem admin@<VM_internal_IP_address_in_DMZ_segment>
- To check that the VM in the DMZ segment can access a public resource on the internet, run this command:
  ping ya.ru
  The command must succeed according to the ping from dmz to internet rule that allows traffic.
- Make sure the security policy rules that prohibit traffic are applied. To check that the Jump VM in the mgmt segment cannot be accessed from the dmz segment, run this command:
  ping 192.168.1.101
  The command must fail according to the Cleanup rule that prohibits traffic.
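If you prefer the terminal to a browser for the external check, you can run the same test with curl; a 200 response with the nginx server header indicates the application published in the DMZ is reachable through the ALB:
curl -I http://<ALB_load_balancer_public_IP_address>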
Test fault tolerance
- Install httping on your PC to make regular HTTP requests:
  sudo apt-get install httping
- To find out the public IP address of the load balancer, run this command in the terminal:
  terraform output fw-alb_public_ip_address
- Generate incoming traffic to the application published in the DMZ segment by making the following request to the ALB public IP address:
  httping http://<ALB_load_balancer_public_IP_address>
- Open another terminal window and connect to a VM in the DMZ segment over SSH:
  ssh -i pt_key.pem admin@<VM_internal_IP_address_in_DMZ_segment>
- Set a password for the admin user:
  sudo passwd admin
- In the Yandex Cloud management console, change the parameters of this VM:
  - In the list of services, select Compute Cloud.
  - In the left-hand panel, select Virtual machines.
  - In the line with the appropriate VM, click the options menu and select Edit.
  - In the window that opens, under Additional, enable Access to serial console.
  - Click Save changes.
- Connect to the VM's serial console and enter the admin username and the password you set earlier.
- Generate outgoing traffic from the VM in the DMZ segment to a resource on the internet using the ping command:
  ping ya.ru
- In the Yandex Cloud management console, in the mgmt folder, stop the fw-a VM to emulate a failure of the main firewall.
- Monitor the loss of packets sent by httping and ping. After FW-A fails, there may be traffic loss for approximately 1 minute, after which traffic recovers.
- Make sure the FW-B address is now used as the next hop in the dmz-rt route table in the dmz folder.
- In the Yandex Cloud management console, start the fw-a VM to emulate recovery of the main firewall.
- Monitor the loss of packets sent by httping and ping. After FW-A is restored, there may be traffic loss for approximately 1 minute, after which traffic recovers.
- Make sure the FW-A address is again used as the next hop in the dmz-rt route table in the dmz folder.
How to delete the resources you created
To stop paying for the resources you created, run this command:
terraform destroy
Terraform will permanently delete all the resources: networks, subnets, VMs, load balancers, folders, etc.
As the resources you created reside in folders, a faster way to delete all resources is to delete all the folders using the Yandex Cloud console and then delete the terraform.tfstate file from the yc-dmz-with-high-available-ngfw folder on your PC.