

Configuring WireGuard gateways to connect external nodes to a cluster

Written by
Yandex Cloud
Updated on August 6, 2025
  • Getting started
  • Configuring security groups
  • Configuring routing
  • Setting up WireGuard gateways
  • Troubleshooting
    • Errors when using the docker-ce and containerd packages on an external node

With Yandex Managed Service for Kubernetes, you can connect servers from outside Yandex Cloud as Kubernetes cluster nodes. To connect one, first set up network connectivity between the remote network hosting the external server and the cloud network hosting your Managed Service for Kubernetes cluster. You can do this using a VPN.

Below is an example of establishing network connectivity over the WireGuard protocol. Here, the external server is a VM residing in another Yandex Cloud cloud network.

Getting started

  1. Create your main cloud network with three subnets in different availability zones.

  2. In the main network, create a Managed Service for Kubernetes cluster with a highly available master.

    To create an external node group, the Managed Service for Kubernetes cluster must operate in tunnel mode. This mode can be enabled only when creating the cluster.

  3. Install kubectl and configure it to work with the new cluster.

  4. In the main network, create a Compute Cloud VM with a public IP address; name it VM-1. On this VM, you will set up the main WireGuard gateway.

  5. Create an additional cloud network with one subnet.

  6. In the additional network, create a Compute Cloud VM with a public IP address; name it VM-2. On this VM, you will set up the additional WireGuard gateway.
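
The network and subnet setup above can also be scripted with the yc CLI; a minimal sketch, in which all names, zones, and CIDR ranges are assumptions to substitute with your own:

```shell
# Hypothetical names and ranges -- adjust to your environment.
# Create the main cloud network:
yc vpc network create --name main-network

# Create three subnets in different availability zones:
yc vpc subnet create --name main-subnet-a --zone ru-central1-a \
  --network-name main-network --range 10.1.1.0/24
yc vpc subnet create --name main-subnet-b --zone ru-central1-b \
  --network-name main-network --range 10.1.2.0/24
yc vpc subnet create --name main-subnet-d --zone ru-central1-d \
  --network-name main-network --range 10.1.3.0/24
```

The cluster, VMs, and the additional network are created the same way with the corresponding `yc` commands or in the management console.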

Configuring security groups

  1. In the main network, create a security group and assign it to VM-1. Add the following rules to the group:

    Outgoing traffic

    Description | Port range | Protocol | Destination name | CIDR blocks
    any         | 0-65535    | Any      | CIDR             | 0.0.0.0/0

    Incoming traffic

    Description | Port range | Protocol | Source | CIDR blocks
    icmp        | 0-65535    | ICMP     | CIDR   | 0.0.0.0/0
    ssh         | 22         | TCP      | CIDR   | 0.0.0.0/0
    wireguard   | 51821      | UDP      | CIDR   | <VM_2_public_address>/32
    VM-2-subnet | 0-65535    | Any      | CIDR   | <VM_2_subnet_CIDR>
  2. In the additional network, create a security group and assign it to VM-2. Add the following rules to the group:

    Outgoing traffic

    Description | Port range | Protocol | Destination name | CIDR blocks
    any         | 0-65535    | Any      | CIDR             | 0.0.0.0/0

    Incoming traffic

    Description      | Port range | Protocol | Source | CIDR blocks
    icmp             | 0-65535    | ICMP     | CIDR   | 0.0.0.0/0
    ssh              | 22         | TCP      | CIDR   | 0.0.0.0/0
    wireguard        | 51822      | UDP      | CIDR   | <VM_1_public_address>/32
    k8s-VM-1-subnets | 0-65535    | Any      | CIDR   | <main_subnet1_CIDR>, <main_subnet2_CIDR>, <main_subnet3_CIDR>
    cluster&services | 0-65535    | Any      | CIDR   | <cluster_CIDR>, <CIDRs_of_services>
  3. Add the following rule to the security group of the Managed Service for Kubernetes cluster and node group:

    Incoming traffic

    Description | Port range | Protocol | Source | CIDR blocks
    VM-2-subnet | 0-65535    | Any      | CIDR   | <VM_2_subnet_CIDR>
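
The rules above can also be created with the yc CLI; a hedged sketch for the VM-1 group, where the group and network names are assumptions and the placeholder CIDRs are yours to fill in:

```shell
# Hypothetical group/network names; placeholders must be replaced
# with your real addresses before running.
yc vpc security-group create \
  --name vm1-sg \
  --network-name main-network \
  --rule "direction=egress,port=any,protocol=any,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,protocol=icmp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=22,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
  --rule "direction=ingress,port=51821,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
  --rule "direction=ingress,port=any,protocol=any,v4-cidrs=[<VM_2_subnet_CIDR>]"
```

The VM-2 group and the cluster group rules follow the same pattern with their respective ports and CIDRs.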

Configuring routing

  1. Configure routing for the main WireGuard gateway:

    1. In the main network, create a route table and add a static route to it:

      • Destination prefix: Specify the CIDR of VM-2's subnet.
      • IP address: Specify VM-1's internal IP address.
    2. Associate the route table with all subnets in your main network.

  2. Configure routing for the additional WireGuard gateway:

    1. In the additional network, create a route table.

    2. Add a static route for the route table:

      • Destination prefix: Specify the CIDR of VM-1's subnet.
      • IP address: Specify VM-2's internal IP address.

      Repeat this step for each subnet in your main network.

    3. Associate the route table with VM-2's subnet.
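
The routing for the main gateway can be sketched with the yc CLI as well; the table and subnet names here are assumptions, and the placeholders stand for your real CIDR and address:

```shell
# Hypothetical names; replace the placeholders with real values.
# Create a route table with a static route toward VM-2's subnet
# via VM-1's internal address:
yc vpc route-table create \
  --name vm2-subnet-route \
  --network-name main-network \
  --route "destination=<VM_2_subnet_CIDR>,next-hop=<VM_1_internal_address>"

# Associate the table with one of the main network's subnets
# (repeat for each subnet):
yc vpc subnet update main-subnet-a --route-table-name vm2-subnet-route
```

The additional gateway's table is built the same way in the additional network, with one route per main-network subnet.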

Setting up WireGuard gateways

  1. Set up the main WireGuard gateway:

    1. Connect to VM-1 over SSH.

    2. Install WireGuard:

      sudo apt update && sudo apt install wireguard
      
    3. Generate and save the encryption keys:

      wg genkey | sudo tee vm1_private.key | wg pubkey | sudo tee vm1_public.key > /dev/null
      wg genkey | sudo tee vm2_private.key | wg pubkey | sudo tee vm2_public.key > /dev/null
      

      This creates four files in the current directory:

      • vm1_private.key: Contains the private encryption key for VM-1.
      • vm1_public.key: Contains the public encryption key for VM-1.
      • vm2_private.key: Contains the private encryption key for VM-2.
      • vm2_public.key: Contains the public encryption key for VM-2.
    4. Create a configuration file named wg0.conf:

      sudo nano /etc/wireguard/wg0.conf
      
    5. Add the following configuration to it:

      [Interface]
      PrivateKey = <vm1_private.key_file_contents>
      Address = 10.0.0.1/32
      ListenPort = 51821
      
      PreUp = sysctl -w net.ipv4.ip_forward=1
      
      [Peer]
      PublicKey = <vm2_public.key_file_contents>
      Endpoint = <VM_2_public_address>:51822
      AllowedIPs = <VM_2_subnet_CIDR>, 10.0.0.2/32
      PersistentKeepalive = 15
      

      Learn more about the configuration parameters.

      Save the changes and close the file.

    6. Apply the configuration:

      sudo systemctl restart wg-quick@wg0
      
  2. Set up the additional WireGuard gateway:

    1. Connect to VM-2 over SSH.

    2. Install WireGuard:

      sudo apt update && sudo apt install wireguard
      
    3. Create a configuration file named wg0.conf:

      sudo nano /etc/wireguard/wg0.conf
      
    4. Add the following configuration to it:

      [Interface]
      PrivateKey = <vm2_private.key_file_contents>
      Address = 10.0.0.2/32
      ListenPort = 51822
      
      PreUp = sysctl -w net.ipv4.ip_forward=1
      
      [Peer]
      PublicKey = <vm1_public.key_file_contents>
      Endpoint = <VM_1_public_address>:51821
      AllowedIPs = <main_subnet1_CIDR>, <main_subnet2_CIDR>, <main_subnet3_CIDR>, 10.0.0.1/32
      PersistentKeepalive = 15
      

      Learn more about the configuration parameters.

      Save the changes and close the file.

    5. Apply the configuration:

      sudo systemctl restart wg-quick@wg0
      
  3. Check the connection status on both VMs:

    sudo wg show
    

    You should see latest handshake in the command output, indicating a successfully established connection:

    ...
    latest handshake: 3 seconds ago
    ...
    
  4. Configure MTU on both VMs:

    ETH_NIC=eth0
    sudo iptables -t mangle -A FORWARD -i ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
    sudo iptables -t mangle -A FORWARD -o ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
    echo "net.ipv4.ip_no_pmtu_disc = 1" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p /etc/sysctl.conf
    

    Warning

    If you keep the default MTU value, network traffic may be lost.

  5. Connect VM-2 to the Managed Service for Kubernetes cluster as its external node.
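
Instead of editing `/etc/wireguard/wg0.conf` by hand, the configuration from the steps above can be templated from shell variables. A minimal sketch for the VM-1 side; the key and address values are the same placeholders as above, and the file is written locally here for illustration (the real destination is `/etc/wireguard/wg0.conf`):

```shell
# Placeholder values -- substitute the real keys and addresses.
PRIVATE_KEY="<vm1_private.key_file_contents>"
PEER_PUBLIC_KEY="<vm2_public.key_file_contents>"
PEER_ENDPOINT="<VM_2_public_address>:51822"
PEER_ALLOWED_IPS="<VM_2_subnet_CIDR>, 10.0.0.2/32"

# Render the config (local path for illustration):
cat > wg0.conf <<EOF
[Interface]
PrivateKey = ${PRIVATE_KEY}
Address = 10.0.0.1/32
ListenPort = 51821

PreUp = sysctl -w net.ipv4.ip_forward=1

[Peer]
PublicKey = ${PEER_PUBLIC_KEY}
Endpoint = ${PEER_ENDPOINT}
AllowedIPs = ${PEER_ALLOWED_IPS}
PersistentKeepalive = 15
EOF

# Sanity check: both sections and the listen port are present.
grep -q "^\[Interface\]" wg0.conf && grep -q "^\[Peer\]" wg0.conf \
  && grep -q "ListenPort = 51821" wg0.conf && echo "wg0.conf looks complete"
# prints "wg0.conf looks complete"
```

The VM-2 side is generated the same way with its own keys, `ListenPort = 51822`, and the main network's subnets in `AllowedIPs`.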

Troubleshooting

Errors when using the docker-ce and containerd packages on an external node

To diagnose and fix this error:

  1. View the list of services that are not functioning properly:

    sudo systemctl --failed
    

    Result:

    UNIT LOAD ACTIVE SUB DESCRIPTION
    docker.socket loaded failed failed Docker Socket for the API
    LOAD = Reflects whether the unit definition was properly loaded.
    ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
    SUB = The low-level unit activation state, values depend on unit type.
    1 loaded units listed.
    
  2. Check the docker.socket status:

    sudo systemctl status docker.socket
    

    Result:

    docker.socket - Docker Socket for the API
    Loaded: loaded (/lib/systemd/system/docker.socket; disabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Tue 2024-02-10 09:53:37 UTC; 6s ago
    Triggers: ● docker.service
    Listen: /run/docker.sock (Stream)
    CPU: 1ms
    Feb 10 09:53:37 ext-node systemd[1]: Starting Docker Socket for the API...
    Feb 10 09:53:37 ext-node systemd[7052]: docker.socket: Failed to resolve group docker: No such process
    Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Control process exited, code=exited, status=216/GROUP
    Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Failed with result 'exit-code'.
    Feb 10 09:53:37 ext-node systemd[1]: Failed to listen on Docker Socket for the API.
    
  3. Look up errors in system logs:

    sudo journalctl -xe
    

    Result:

    ...
    Feb 10 09:56:40 ext-node maintainer[19298]: E: Sub-process /usr/bin/dpkg returned an error code (1)
    ...
    
  4. Reinstall the packages and fix the errors:

    sudo apt install -f
    
  5. When the installer prompts you for action with the config.toml file, enter N to keep the current version of the file.
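
The `Failed to resolve group docker` line in the log above indicates the `docker` system group is missing; reinstalling the packages normally recreates it. A hedged check you can run before and after the fix:

```shell
# Check for the docker system group; its absence is what produces
# "Failed to resolve group docker" in the docker.socket log.
if getent group docker > /dev/null; then
  echo "docker group present"
else
  echo "docker group missing (recreated by reinstalling docker-ce, or: sudo groupadd docker)"
fi
```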

© 2025 Direct Cursus Technology L.L.C.