Configuring IPSec gateways to connect external nodes to a cluster
With Yandex Managed Service for Kubernetes, you can connect servers from outside Yandex Cloud as Kubernetes cluster nodes. Before connecting a server, set up network connectivity between the remote network hosting the external server and the cloud network hosting your Managed Service for Kubernetes cluster. You can do this using a VPN.
Below is an example of establishing network connectivity over the IPSec protocol. Here, the external server is a VM residing in another Yandex Cloud cloud network.
Getting started
- Create your main cloud network with three subnets in different availability zones (a CLI sketch for creating the resources in this list follows the list).
- In the main network, create a Managed Service for Kubernetes cluster with a highly available master.
  To create an external node group, the Managed Service for Kubernetes cluster must operate in tunnel mode. You can enable this mode only when creating the cluster.
- Install kubectl and configure it to work with the new cluster.
- In the main network, create a Compute Cloud VM with a public IP address and name it VM-1. You will set up the main IPSec gateway on this VM.
- Create an additional cloud network with one subnet.
- In the additional network, create a Compute Cloud VM with a public IP address and name it VM-2. You will set up the additional IPSec gateway on this VM.
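If you prefer the command line to the management console, the networks, subnets, and VMs above can be created with the yc CLI roughly as shown below. This is only a sketch: the resource names, zones, CIDRs, and image family are example values chosen here, the cluster and kubectl setup are omitted, and you should confirm the exact flags with `yc <service> <command> --help` before running it.

# Sketch only: names, zones, CIDRs, and the image family are example values.
# Main network with three subnets in different availability zones.
yc vpc network create --name main-net
yc vpc subnet create --name main-subnet-a --network-name main-net \
  --zone ru-central1-a --range 10.128.1.0/24
yc vpc subnet create --name main-subnet-b --network-name main-net \
  --zone ru-central1-b --range 10.128.2.0/24
yc vpc subnet create --name main-subnet-d --network-name main-net \
  --zone ru-central1-d --range 10.128.3.0/24

# Additional network with one subnet.
yc vpc network create --name extra-net
yc vpc subnet create --name extra-subnet --network-name extra-net \
  --zone ru-central1-a --range 172.16.1.0/24

# Example VM with a public IP address (repeat for VM-2 in extra-subnet);
# verify the instance-creation flags with `yc compute instance create --help`.
yc compute instance create --name vm-1 --zone ru-central1-a \
  --network-interface subnet-name=main-subnet-a,nat-ip-version=ipv4 \
  --create-boot-disk image-folder-id=standard-images,image-family=ubuntu-2204-lts \
  --ssh-key ~/.ssh/id_ed25519.pub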
Configuring security groups
- In the main network, create a security group and assign it to VM-1 (a CLI sketch for creating the security groups follows this list). Add the following rules to the group:

  Outgoing traffic

  | Description | Port range | Protocol | Destination name | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | any | 0-65535 | Any | CIDR | 0.0.0.0/0 |

  Incoming traffic

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | icmp | 0-65535 | ICMP | CIDR | 0.0.0.0/0 |
  | ssh | 22 | TCP | CIDR | 0.0.0.0/0 |
  | ipsec-udp-500 | 500 | UDP | CIDR | <VM_2_public_address>/32 |
  | ipsec-udp-4500 | 4500 | UDP | CIDR | <VM_2_public_address>/32 |
  | VM-2-subnet | 0-65535 | Any | CIDR | <VM_2_subnet_CIDR> |
- In the additional network, create a security group and assign it to VM-2. Add the following rules to the group:

  Outgoing traffic

  | Description | Port range | Protocol | Destination name | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | any | 0-65535 | Any | CIDR | 0.0.0.0/0 |

  Incoming traffic

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | icmp | 0-65535 | ICMP | CIDR | 0.0.0.0/0 |
  | ssh | 22 | TCP | CIDR | 0.0.0.0/0 |
  | ipsec-udp-500 | 500 | UDP | CIDR | <VM_1_public_address>/32 |
  | ipsec-udp-4500 | 4500 | UDP | CIDR | <VM_1_public_address>/32 |
  | k8s-VM-1-subnets | 0-65535 | Any | CIDR | <main_subnet1_CIDR>, <main_subnet2_CIDR>, <main_subnet3_CIDR> |
  | cluster&services | 0-65535 | Any | CIDR | <cluster_CIDR>, <CIDRs_of_services> |
- Add the following rule to the security group of the Managed Service for Kubernetes cluster and node groups:

  Incoming traffic

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | VM-2-subnet | 0-65535 | Any | CIDR | <VM_2_subnet_CIDR> |
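You can also create these security groups from the command line. The sketch below covers the VM-1 group only; the rule-spec keys used here (direction, port, from-port, to-port, protocol, v4-cidrs) are assumptions to be verified with `yc vpc security-group create --help`, and the same approach applies to the VM-2 and cluster groups.

# Sketch only: verify the rule-spec keys with `yc vpc security-group create --help`.
# Security group for VM-1 in the main network.
yc vpc security-group create --name vm-1-sg --network-name main-net \
  --rule "direction=egress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,protocol=icmp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=22,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=500,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
  --rule "direction=ingress,port=4500,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
  --rule "direction=ingress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[<VM_2_subnet_CIDR>]"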
Configuring routing
- Configure routing for the main IPSec gateway (a CLI sketch for both route tables follows this list):
  - In the main network, create a route table and add a static route to it:
    - Destination prefix: Specify the CIDR of VM-2's subnet.
    - IP address: Specify VM-1's internal IP address.
  - Associate the route table with all subnets in your main network.
- Configure routing for the additional IPSec gateway:
  - In the additional network, create a route table.
  - Add a static route to the route table:
    - Destination prefix: Specify the CIDR of VM-1's subnet. Repeat this step for each subnet in your main network.
    - IP address: Specify VM-2's internal IP address.
  - Associate the route table with VM-2's subnet.
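The same routing setup can be sketched with the yc CLI. The `--route` spec keys (destination, next-hop) and the `--route-table-id` flag are assumptions based on the CLI help; confirm them with `yc vpc route-table create --help` and `yc vpc subnet update --help` before use.

# Sketch only: replace the placeholders and verify flag names before running.
# Main network: route traffic destined for VM-2's subnet through VM-1.
yc vpc route-table create --name to-vm-2 --network-name main-net \
  --route "destination=<VM_2_subnet_CIDR>,next-hop=<VM_1_internal_address>"

# Additional network: route traffic destined for the main subnets through VM-2.
yc vpc route-table create --name to-main --network-name extra-net \
  --route "destination=<main_subnet1_CIDR>,next-hop=<VM_2_internal_address>" \
  --route "destination=<main_subnet2_CIDR>,next-hop=<VM_2_internal_address>" \
  --route "destination=<main_subnet3_CIDR>,next-hop=<VM_2_internal_address>"

# Associate the tables with the subnets (repeat for every subnet in the main network).
yc vpc subnet update main-subnet-a --route-table-id <to-vm-2_route_table_ID>
yc vpc subnet update extra-subnet --route-table-id <to-main_route_table_ID>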
Setting up IPSec gateways
- Set up the main IPSec gateway:
  - Connect to VM-1 over SSH.
  - Install strongSwan:

    sudo apt update && sudo apt install strongswan

  - Open the ipsec.conf configuration file:

    sudo nano /etc/ipsec.conf

  - Replace the file contents with this text:

    # basic configuration
    config setup
        charondebug="all"
        uniqueids=yes

    conn VM-1
        type=tunnel
        auto=route
        keyexchange=ikev2
        ike=aes256-sha2_256-modp2048!
        esp=aes256-sha2_256!
        authby=secret
        left=<VM_1_internal_address>
        leftsubnet=<main_subnet1_CIDR>,<main_subnet2_CIDR>,<main_subnet3_CIDR>
        leftsourceip=<VM_1_internal_address>
        leftid=<VM_1_public_address>
        right=<VM_2_public_address>
        rightsubnet=<VM_2_subnet_CIDR>
        aggressive=no
        keyingtries=%forever
        ikelifetime=86400s

    For more information about the parameters, see the strongSwan documentation.

  - Open the ipsec.secrets file used for authentication:

    sudo nano /etc/ipsec.secrets

  - Replace the file contents with this text:

    <VM_1_public_address> <VM_2_public_address> : PSK "<password>"

    To learn more about the ipsec.secrets file format, see the strongSwan documentation.
- Set up the additional IPSec gateway:
  - Connect to VM-2 over SSH.
  - Install strongSwan:

    sudo apt update && sudo apt install strongswan

  - Open the ipsec.conf configuration file:

    sudo nano /etc/ipsec.conf

  - Replace the file contents with this text:

    # basic configuration
    config setup
        charondebug="all"

    conn VM-2
        type=tunnel
        auto=route
        keyexchange=ikev2
        ike=aes256-sha2_256-modp2048!
        esp=aes256-sha2_256!
        authby=secret
        left=<VM_2_internal_address>
        leftid=<VM_2_public_address>
        leftsubnet=<VM_2_subnet_CIDR>
        right=<VM_1_public_address>
        rightsubnet=<main_subnet1_CIDR>,<main_subnet2_CIDR>,<main_subnet3_CIDR>
        rightsourceip=<VM_1_internal_address>
        aggressive=no
        keyingtries=%forever
        ikelifetime=86400s
        lifetime=43200s
        lifebytes=576000000
        dpddelay=30s

    For more information about the parameters, see the strongSwan documentation.

  - Open the ipsec.secrets file used for authentication:

    sudo nano /etc/ipsec.secrets

  - Replace the file contents with this text:

    <VM_2_public_address> <VM_1_public_address> : PSK "<password>"

    The password must be the same on both VMs.

    To learn more about the ipsec.secrets file format, see the strongSwan documentation.
- Restart strongSwan on both VMs:

  sudo ipsec restart

- Check the connection status on both VMs:

  sudo ipsec statusall

  You should see ESTABLISHED in the command output, indicating a successfully established connection:

  ...
  Security Associations (1 up, 0 connecting):
        VM-1[1]: ESTABLISHED 5 seconds ago, 10.128.*.**[46.21.***.***]...84.201.***.***[84.201.***.***]
  ...

  If the connection was not established, try establishing it manually by running the following command on VM-1:

  sudo ipsec up VM-1

  You only need to run this command on one of the VMs.
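  To additionally confirm that traffic actually passes through the tunnel, you can ping VM-2's internal address from VM-1; the ICMP rule added to VM-2's security group earlier allows this test. `<VM_2_internal_address>` is a placeholder for VM-2's internal IP address.

  # Run on VM-1: the destination falls into rightsubnet, so the packets go through the tunnel.
  ping -c 3 <VM_2_internal_address>

  # The traffic counters reported for the tunnel should grow after the ping:
  sudo ipsec statusall | grep bytes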
- Configure MTU on both VMs:

  ETH_NIC=eth0
  sudo iptables -t mangle -A FORWARD -i ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
  sudo iptables -t mangle -A FORWARD -o ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
  echo "net.ipv4.ip_no_pmtu_disc = 1" | sudo tee -a /etc/sysctl.conf
  sudo sysctl -p /etc/sysctl.conf

  Warning

  If you keep the default MTU value, network traffic may be lost.
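  To make sure the MSS clamp and the sysctl setting took effect, you can run the following standard checks on either VM:

  # Verify that the TCPMSS rules are installed and matching packets.
  sudo iptables -t mangle -L FORWARD -v -n | grep TCPMSS

  # Verify that the sysctl setting was applied.
  sysctl net.ipv4.ip_no_pmtu_disc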
- Connect VM-2 to the Managed Service for Kubernetes cluster as its external node.
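  Once VM-2 has been connected, you can verify from the host where kubectl is configured that the external node registered with the cluster; the new node should appear in the list and eventually reach the Ready status.

  kubectl get nodes -o wide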
Troubleshooting
Errors when using the docker-ce and containerd packages on an external node
To diagnose and fix this error:
- View the list of services that are not functioning properly:

  sudo systemctl --failed

  Result:

  UNIT          LOAD   ACTIVE SUB    DESCRIPTION
  docker.socket loaded failed failed Docker Socket for the API

  LOAD   = Reflects whether the unit definition was properly loaded.
  ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
  SUB    = The low-level unit activation state, values depend on unit type.

  1 loaded units listed.
- Check the docker.socket status:

  sudo systemctl status docker.socket

  Result:

  docker.socket - Docker Socket for the API
       Loaded: loaded (/lib/systemd/system/docker.socket; disabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Tue 2024-02-10 09:53:37 UTC; 6s ago
     Triggers: ● docker.service
       Listen: /run/docker.sock (Stream)
          CPU: 1ms

  Feb 10 09:53:37 ext-node systemd[1]: Starting Docker Socket for the API...
  Feb 10 09:53:37 ext-node systemd[7052]: docker.socket: Failed to resolve group docker: No such process
  Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Control process exited, code=exited, status=216/GROUP
  Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Failed with result 'exit-code'.
  Feb 10 09:53:37 ext-node systemd[1]: Failed to listen on Docker Socket for the API.
- Look up errors in the system logs:

  sudo journalctl -xe

  Result:

  ...
  Feb 10 09:56:40 ext-node maintainer[19298]: E: Sub-process /usr/bin/dpkg returned an error code (1)
  ...
- Reinstall the packages and fix the errors:

  sudo apt install -f

- When the installer asks what to do with the config.toml file, enter N to keep the current version of the file.
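After the reinstall completes, it is worth rechecking that no units remain in the failed state; the commands below are standard systemd checks.

sudo systemctl --failed
sudo systemctl status containerd docker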