Configuring WireGuard gateways to connect external nodes to a cluster
With Yandex Managed Service for Kubernetes, you can connect servers from outside Yandex Cloud as Kubernetes cluster nodes. To connect one, first set up network connectivity between the remote network hosting the external server and the cloud network hosting your Managed Service for Kubernetes cluster. You can do this using a VPN.

Below is an example of establishing network connectivity over the WireGuard VPN protocol.
Getting started
- Create your main cloud network with three subnets in different availability zones.
- In the main network, create a Managed Service for Kubernetes cluster with a highly available master.

  To create an external node group, the Managed Service for Kubernetes cluster must operate in tunnel mode. This mode can be enabled only when creating the cluster.
- Install kubectl and configure it to work with the new cluster.
- In the main network, create a Compute Cloud VM with a public IP address and name it `VM-1`. On this VM, you will set up the main WireGuard gateway.
- Create an additional cloud network with one subnet.
- In the additional network, create a Compute Cloud VM with a public IP address and name it `VM-2`. On this VM, you will set up the additional WireGuard gateway.
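For illustration, here is a minimal YC CLI sketch of creating the main network, one of its subnets, and `VM-1`. The resource names (`main-network`, `main-subnet-a`), zone, IP range, and image family are assumptions for this example; substitute your own values, and repeat the subnet command for each of the three availability zones.

```bash
# Create the main cloud network (names here are example assumptions).
yc vpc network create --name main-network

# Create one of the three subnets; repeat with a different --zone and --range
# for the other two availability zones.
yc vpc subnet create \
  --name main-subnet-a \
  --network-name main-network \
  --zone ru-central1-a \
  --range 10.128.10.0/24

# Create VM-1 with a public IP address (nat-ip-version=ipv4);
# the Ubuntu image family is an assumption for the example.
yc compute instance create \
  --name vm-1 \
  --zone ru-central1-a \
  --network-interface subnet-name=main-subnet-a,nat-ip-version=ipv4 \
  --create-boot-disk image-folder-id=standard-images,image-family=ubuntu-2204-lts \
  --ssh-key ~/.ssh/id_ed25519.pub
```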
Configuring security groups
- In the main network, create a security group and assign it to `VM-1`. Add the following rules to the group:

  Outgoing traffic:

  | Description | Port range | Protocol | Destination name | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | `any` | `0-65535` | `Any` | `CIDR` | `0.0.0.0/0` |

  Incoming traffic:

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | `icmp` | `0-65535` | `ICMP` | `CIDR` | `0.0.0.0/0` |
  | `ssh` | `22` | `TCP` | `CIDR` | `0.0.0.0/0` |
  | `wireguard` | `51821` | `UDP` | `CIDR` | `<VM_2_public_address>/32` |
  | `VM-2-subnet` | `0-65535` | `Any` | `CIDR` | `<VM_2_subnet_CIDR>` |
- In the additional network, create a security group and assign it to `VM-2`. Add the following rules to the group:

  Outgoing traffic:

  | Description | Port range | Protocol | Destination name | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | `any` | `0-65535` | `Any` | `CIDR` | `0.0.0.0/0` |

  Incoming traffic:

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | `icmp` | `0-65535` | `ICMP` | `CIDR` | `0.0.0.0/0` |
  | `ssh` | `22` | `TCP` | `CIDR` | `0.0.0.0/0` |
  | `wireguard` | `51822` | `UDP` | `CIDR` | `<VM_1_public_address>/32` |
  | `k8s-VM-1-subnets` | `0-65535` | `Any` | `CIDR` | `<main_subnet1_CIDR>`, `<main_subnet2_CIDR>`, `<main_subnet3_CIDR>` |
  | `cluster&services` | `0-65535` | `Any` | `CIDR` | `<cluster_CIDR>`, `<CIDRs_of_services>` |
- Add the following rule to the security group of the Managed Service for Kubernetes cluster and node group:

  Incoming traffic:

  | Description | Port range | Protocol | Source | CIDR blocks |
  | --- | --- | --- | --- | --- |
  | `VM-2-subnet` | `0-65535` | `Any` | `CIDR` | `<VM_2_subnet_CIDR>` |
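As an illustration, the `VM-1` security group from the first table could be created with the YC CLI roughly as follows. The group name is an assumption for the example, and the exact `--rule` syntax (particularly for the ICMP rule) should be checked against the current CLI reference before relying on it.

```bash
# Sketch: create the VM-1 security group with the rules from the first table.
# The group name is an example assumption.
yc vpc security-group create \
  --name vm-1-sg \
  --network-name main-network \
  --rule "direction=egress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,protocol=icmp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=22,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=51821,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
  --rule "direction=ingress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[<VM_2_subnet_CIDR>]"
```

The `VM-2` security group can be built the same way from the second table.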
Configuring routing
- Configure routing for the main WireGuard gateway:

  - In the main network, create a route table and add a static route to it:

    - Destination prefix: Specify the CIDR of the `VM-2` subnet.
    - IP address: Specify the internal IP address of `VM-1`.

  - Associate the route table with all subnets in your main network.

- Configure routing for the additional WireGuard gateway:

  - In the additional network, create a route table.

  - Add a static route to the route table:

    - Destination prefix: Specify the CIDR of one of the main network's subnets.
    - IP address: Specify the internal IP address of `VM-2`.

    Repeat this step for each subnet in your main network.

  - Associate the route table with the `VM-2` subnet.
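For example, the main network's route table could be created with the YC CLI along these lines. The table name `vpn-rt` is an assumption for the example:

```bash
# Sketch: route traffic destined for the VM-2 subnet through VM-1.
yc vpc route-table create \
  --name vpn-rt \
  --network-name main-network \
  --route "destination=<VM_2_subnet_CIDR>,next-hop=<VM_1_internal_IP>"

# Associate the table with each subnet in the main network.
yc vpc subnet update main-subnet-a --route-table-id <route_table_ID>
```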
Setting up WireGuard gateways
- Set up the main WireGuard gateway:

  - Connect to `VM-1` over SSH.

  - Install WireGuard:

    ```bash
    sudo apt update && sudo apt install wireguard
    ```

  - Generate and save the encryption keys:

    ```bash
    wg genkey | sudo tee vm1_private.key | wg pubkey | sudo tee vm1_public.key > /dev/null
    wg genkey | sudo tee vm2_private.key | wg pubkey | sudo tee vm2_public.key > /dev/null
    ```

    In the current directory, the system will create these four files:

    - `vm1_private.key`: Contains the private encryption key for `VM-1`.
    - `vm1_public.key`: Contains the public encryption key for `VM-1`.
    - `vm2_private.key`: Contains the private encryption key for `VM-2`.
    - `vm2_public.key`: Contains the public encryption key for `VM-2`.

  - Create a configuration file named `wg0.conf`:

    ```bash
    sudo nano /etc/wireguard/wg0.conf
    ```

  - Add the following configuration to it:

    ```ini
    [Interface]
    PrivateKey = <vm1_private.key_file_contents>
    Address = 10.0.0.1/32
    ListenPort = 51821
    PreUp = sysctl -w net.ipv4.ip_forward=1

    [Peer]
    PublicKey = <vm2_public.key_file_contents>
    Endpoint = <VM_2_public_address>:51822
    AllowedIPs = <VM_2_subnet_CIDR>, 10.0.0.2/32
    PersistentKeepalive = 15
    ```

    Learn more about the configuration parameters.

    Save the changes and close the file.

  - Apply the configuration:

    ```bash
    sudo systemctl restart wg-quick@wg0
    ```
- Set up the additional WireGuard gateway:

  - Connect to `VM-2` over SSH.

  - Install WireGuard:

    ```bash
    sudo apt update && sudo apt install wireguard
    ```

  - Create a configuration file named `wg0.conf`:

    ```bash
    sudo nano /etc/wireguard/wg0.conf
    ```

  - Add the following configuration to it. The key files were generated on `VM-1`, so copy the required values from there over a secure channel:

    ```ini
    [Interface]
    PrivateKey = <vm2_private.key_file_contents>
    Address = 10.0.0.2/32
    ListenPort = 51822
    PreUp = sysctl -w net.ipv4.ip_forward=1

    [Peer]
    PublicKey = <vm1_public.key_file_contents>
    Endpoint = <VM_1_public_address>:51821
    AllowedIPs = <main_subnet1_CIDR>, <main_subnet2_CIDR>, <main_subnet3_CIDR>, 10.0.0.1/32
    PersistentKeepalive = 15
    ```

    Learn more about the configuration parameters.

    Save the changes and close the file.

  - Apply the configuration:

    ```bash
    sudo systemctl restart wg-quick@wg0
    ```
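Restarting `wg-quick@wg0` only brings the interface up for the current boot. If you also want the tunnel to come up automatically after a reboot (a common follow-up, not part of the steps above), enable the unit on both VMs:

```bash
# Start the WireGuard interface automatically at boot.
sudo systemctl enable wg-quick@wg0
```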
- Check the connection status on both VMs:

  ```bash
  sudo wg show
  ```

  You should see `latest handshake` in the command output, indicating a successfully established connection:

  ```
  ...
  latest handshake: 3 seconds ago
  ...
  ```
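You can additionally verify end-to-end connectivity by pinging the peer's tunnel address; the `10.0.0.1` and `10.0.0.2` addresses come from the `Address` lines in the configurations above:

```bash
# From VM-1: ping VM-2's tunnel address.
ping -c 3 10.0.0.2

# From VM-2: ping VM-1's tunnel address.
ping -c 3 10.0.0.1
```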
- Configure MTU on both VMs:

  ```bash
  ETH_NIC=eth0
  sudo iptables -t mangle -A FORWARD -i ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
  sudo iptables -t mangle -A FORWARD -o ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
  echo "net.ipv4.ip_no_pmtu_disc = 1" | sudo tee -a /etc/sysctl.conf
  sudo sysctl -p /etc/sysctl.conf
  ```

  Warning

  If you keep the default MTU value, network packets may be dropped.
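The clamped MSS of 1360 bytes leaves headroom for WireGuard's encapsulation overhead within the standard 1500-byte MTU. Note also that iptables rules added this way do not survive a reboot; one common way to persist them on Ubuntu (an addition to the steps above, so adapt it to your setup) is the `iptables-persistent` package:

```bash
# Save the current iptables rules so they are restored at boot.
sudo apt install iptables-persistent
sudo netfilter-persistent save
```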
- Connect `VM-2` to the Managed Service for Kubernetes cluster as its external node.
Troubleshooting

Errors when using the `docker-ce` and `containerd` packages on an external node

To diagnose and fix this error:
- View the list of services that are not functioning properly:

  ```bash
  sudo systemctl --failed
  ```

  Result:

  ```
  UNIT          LOAD   ACTIVE SUB    DESCRIPTION
  docker.socket loaded failed failed Docker Socket for the API

  LOAD   = Reflects whether the unit definition was properly loaded.
  ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
  SUB    = The low-level unit activation state, values depend on unit type.

  1 loaded units listed.
  ```
- Check the `docker.socket` status:

  ```bash
  sudo systemctl status docker.socket
  ```

  Result:

  ```
  docker.socket - Docker Socket for the API
       Loaded: loaded (/lib/systemd/system/docker.socket; disabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Tue 2024-02-10 09:53:37 UTC; 6s ago
     Triggers: ● docker.service
       Listen: /run/docker.sock (Stream)
          CPU: 1ms

  Feb 10 09:53:37 ext-node systemd[1]: Starting Docker Socket for the API...
  Feb 10 09:53:37 ext-node systemd[7052]: docker.socket: Failed to resolve group docker: No such process
  Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Control process exited, code=exited, status=216/GROUP
  Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Failed with result 'exit-code'.
  Feb 10 09:53:37 ext-node systemd[1]: Failed to listen on Docker Socket for the API.
  ```
- Look up errors in the system logs:

  ```bash
  sudo journalctl -xe
  ```

  Result:

  ```
  ...
  Feb 10 09:56:40 ext-node maintainer[19298]: E: Sub-process /usr/bin/dpkg returned an error code (1)
  ...
  ```
- Reinstall the packages and fix the errors:

  ```bash
  sudo apt install -f
  ```

- When the installer asks what to do with the `config.toml` file, enter `N` to keep the current version of the file.
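The `Failed to resolve group docker` message in the `docker.socket` log above usually means the `docker` group is missing on the node. As a possible additional fix (an assumption based on that log line, not part of the original steps), you can recreate the group and restart the socket unit:

```bash
# Recreate the docker group if it is missing, then restart the socket unit.
getent group docker || sudo groupadd docker
sudo systemctl restart docker.socket
```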