Configuring IPSec gateways to connect external nodes to a cluster

Written by Yandex Cloud. Updated on August 6, 2025.
  • Getting started
  • Configuring security groups
  • Configuring routing
  • Setting up IPSec gateways
  • Troubleshooting
    • Errors when using the docker-ce and containerd packages on an external node

With Yandex Managed Service for Kubernetes, you can connect servers from outside Yandex Cloud as Kubernetes cluster nodes. To connect one, first set up network connectivity between the remote network hosting the external server and the cloud network hosting your Managed Service for Kubernetes cluster. You can do this using a VPN.

Below is an example of establishing network connectivity over the IPSec protocol. Here, the external server is a VM residing in another Yandex Cloud cloud network.

Getting started

  1. Create your main cloud network with three subnets in different availability zones (a CLI sketch for the network resources follows this list).

  2. In the main network, create a Managed Service for Kubernetes cluster with a highly available master.

    To create an external node group, the Managed Service for Kubernetes cluster must operate in tunnel mode. This mode can be enabled only when creating the cluster.

  3. Install kubectl and configure it to work with the new cluster.

  4. In the main network, create a Compute Cloud VM with a public IP address; name it VM-1. On this VM, you will set up the main IPSec gateway.

  5. Create an additional cloud network with one subnet.

  6. In the additional network, create a Compute Cloud VM with a public IP address; name it VM-2. You will use this VM to set up the additional IPSec gateway.
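For reference, the cloud networks and subnets from the steps above can also be created with the YC CLI. This is a minimal sketch: the resource names, zones, and CIDR ranges are illustrative assumptions, and the cluster and VMs themselves are still created as described above.

  # create the main network and its three subnets (names/CIDRs are examples)
  yc vpc network create --name main-network
  yc vpc subnet create --name main-subnet-a --zone ru-central1-a \
    --network-name main-network --range 10.128.0.0/24
  yc vpc subnet create --name main-subnet-b --zone ru-central1-b \
    --network-name main-network --range 10.129.0.0/24
  yc vpc subnet create --name main-subnet-d --zone ru-central1-d \
    --network-name main-network --range 10.130.0.0/24

  # create the additional network with one subnet
  yc vpc network create --name additional-network
  yc vpc subnet create --name additional-subnet --zone ru-central1-a \
    --network-name additional-network --range 172.16.0.0/24

  # after the cluster exists, point kubectl at it (step 3);
  # the cluster name my-k8s-cluster is a placeholder
  yc managed-kubernetes cluster get-credentials my-k8s-cluster --external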

Configuring security groups

  1. In the main network, create a security group and assign it to VM-1. Add the following rules to the group (a CLI sketch for creating such a group follows this list):

    Outgoing traffic:

    Description | Port range | Protocol | Destination name | CIDR blocks
    any         | 0-65535    | Any      | CIDR             | 0.0.0.0/0

    Incoming traffic:

    Description    | Port range | Protocol | Source | CIDR blocks
    icmp           | 0-65535    | ICMP     | CIDR   | 0.0.0.0/0
    ssh            | 22         | TCP      | CIDR   | 0.0.0.0/0
    ipsec-udp-500  | 500        | UDP      | CIDR   | <VM_2_public_address>/32
    ipsec-udp-4500 | 4500       | UDP      | CIDR   | <VM_2_public_address>/32
    VM-2-subnet    | 0-65535    | Any      | CIDR   | <VM_2_subnet_CIDR>
  2. In the additional network, create a security group and assign it to VM-2. Add the following rules to the group:

    Outgoing traffic:

    Description | Port range | Protocol | Destination name | CIDR blocks
    any         | 0-65535    | Any      | CIDR             | 0.0.0.0/0

    Incoming traffic:

    Description      | Port range | Protocol | Source | CIDR blocks
    icmp             | 0-65535    | ICMP     | CIDR   | 0.0.0.0/0
    ssh              | 22         | TCP      | CIDR   | 0.0.0.0/0
    ipsec-udp-500    | 500        | UDP      | CIDR   | <VM_1_public_address>/32
    ipsec-udp-4500   | 4500       | UDP      | CIDR   | <VM_1_public_address>/32
    k8s-VM-1-subnets | 0-65535    | Any      | CIDR   | <main_subnet1_CIDR>, <main_subnet2_CIDR>, <main_subnet3_CIDR>
    cluster&services | 0-65535    | Any      | CIDR   | <cluster_CIDR>, <CIDRs_of_services>
  3. Add the following rule to the security groups of the Managed Service for Kubernetes cluster and its node groups:

    Incoming traffic:

    Description | Port range | Protocol | Source | CIDR blocks
    VM-2-subnet | 0-65535    | Any      | CIDR   | <VM_2_subnet_CIDR>
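
For reference, VM-1's security group from step 1 could be created with the YC CLI roughly as follows. This is a sketch, not the documented procedure: the group and network names are illustrative, and the rule-spec syntax should be checked against your CLI version.

  yc vpc security-group create --name ipsec-vm1-sg --network-name main-network \
    --rule "direction=egress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[0.0.0.0/0]" \
    --rule "direction=ingress,protocol=icmp,v4-cidrs=[0.0.0.0/0]" \
    --rule "direction=ingress,port=22,protocol=tcp,v4-cidrs=[0.0.0.0/0]" \
    --rule "direction=ingress,port=500,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
    --rule "direction=ingress,port=4500,protocol=udp,v4-cidrs=[<VM_2_public_address>/32]" \
    --rule "direction=ingress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[<VM_2_subnet_CIDR>]"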

Configuring routing

  1. Configure routing for the main IPSec gateway (a CLI sketch for this section follows the list):

    1. In the main network, create a route table and add a static route to it:

      • Destination prefix: Specify the CIDR of VM-2's subnet.
      • IP address: Specify VM-1's internal IP address.
    2. Associate the route table with all subnets in your main network.

  2. Configure routing for the additional IPSec gateway:

    1. In the additional network, create a route table.

    2. Add a static route to the route table:

      • Destination prefix: Specify the CIDR of the subnet hosting VM-1.
      • IP address: Specify VM-2's internal IP address.

      Repeat this step for each subnet in your main network.

    3. Associate the route table with VM-2's subnet.
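
The route tables from this section can likewise be sketched with the YC CLI. The resource names are illustrative; replace the placeholders with your actual CIDRs, addresses, and IDs.

  # main network: send traffic for VM-2's subnet via VM-1
  yc vpc route-table create --name main-rt --network-name main-network \
    --route "destination=<VM_2_subnet_CIDR>,next-hop=<VM_1_internal_address>"
  # associate the table with each of the three main subnets
  yc vpc subnet update main-subnet-a --route-table-id <main_rt_ID>

  # additional network: send traffic for every main subnet via VM-2
  yc vpc route-table create --name additional-rt --network-name additional-network \
    --route "destination=<main_subnet1_CIDR>,next-hop=<VM_2_internal_address>" \
    --route "destination=<main_subnet2_CIDR>,next-hop=<VM_2_internal_address>" \
    --route "destination=<main_subnet3_CIDR>,next-hop=<VM_2_internal_address>"
  yc vpc subnet update additional-subnet --route-table-id <additional_rt_ID>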

Setting up IPSec gateways

  1. Set up the main IPSec gateway:

    1. Connect to VM-1 over SSH.

    2. Install strongSwan:

      sudo apt update && sudo apt install strongswan
      
    3. Open the ipsec.conf configuration file:

      sudo nano /etc/ipsec.conf
      
    4. Replace the file contents with this text:

      # basic configuration
      
      config setup
        # log all IKE daemon activity; reduce verbosity in production
        charondebug="all"
        uniqueids=yes
      
      conn VM-1
        type=tunnel
        # install trap policies: the tunnel is negotiated on the first matching packet
        auto=route
        keyexchange=ikev2
        ike=aes256-sha2_256-modp2048!
        esp=aes256-sha2_256!
        # authenticate with the pre-shared key from /etc/ipsec.secrets
        authby=secret
        left=<VM_1_internal_address>
        leftsubnet=<main_subnet1_CIDR>,<main_subnet2_CIDR>,<main_subnet3_CIDR>
        leftsourceip=<VM_1_internal_address>
        leftid=<VM_1_public_address>
        right=<VM_2_public_address>
        rightsubnet=<VM_2_subnet_CIDR>
        aggressive=no
        keyingtries=%forever
        ikelifetime=86400s
      

      For more information about parameters, see the strongSwan documentation.

    5. Open the ipsec.secrets file that is used for authentication:

      sudo nano /etc/ipsec.secrets
      
    6. Replace the file contents with this text:

      <VM_1_public_address> <VM_2_public_address> : PSK "<password>"
      

      To learn more about the ipsec.secrets file format, see the strongSwan documentation.
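
      A strong pre-shared key can be generated, for example, with standard tools; any sufficiently long random string works:

      head -c 32 /dev/urandom | base64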

  2. Set up the additional IPSec gateway:

    1. Connect to VM-2 over SSH.

    2. Install strongSwan:

      sudo apt update && sudo apt install strongswan
      
    3. Open the ipsec.conf configuration file:

      sudo nano /etc/ipsec.conf
      
    4. Replace the file contents with this text:

      # basic configuration
      
      config setup
        charondebug="all"
      
      conn VM-2
        type=tunnel
        auto=route
        keyexchange=ikev2
        ike=aes256-sha2_256-modp2048!
        esp=aes256-sha2_256!
        authby=secret
        left=<VM_2_internal_address>
        leftid=<VM_2_public_address>
        leftsubnet=<VM_2_subnet_CIDR>
        right=<VM_1_public_address>
        rightsubnet=<main_subnet1_CIDR>,<main_subnet2_CIDR>,<main_subnet3_CIDR>
        rightsourceip=<VM_1_internal_address>
        aggressive=no
        keyingtries=%forever
        ikelifetime=86400s
        # rekey the IPsec SA every 12 hours or after ~576 MB of traffic
        lifetime=43200s
        lifebytes=576000000
        # send dead peer detection probes every 30 seconds
        dpddelay=30s
      

      For more information about parameters, see the strongSwan documentation.

    5. Open the ipsec.secrets file required for authentication:

      sudo nano /etc/ipsec.secrets
      
    6. Replace the file contents with this text:

      <VM_2_public_address> <VM_1_public_address> : PSK "<password>"
      

      The pre-shared key (password) must be the same on both VMs.

      To learn more about the ipsec.secrets file format, see the strongSwan documentation.

  3. Restart strongSwan on both VMs:

    sudo ipsec restart
    
  4. Check the connection status on both VMs:

    sudo ipsec statusall
    

    You should see ESTABLISHED in the command output, indicating a successfully established connection:

    ...
    Security Associations (1 up, 0 connecting):
         VM-1[1]: ESTABLISHED 5 seconds ago, 10.128.*.**[46.21.***.***]...84.201.***.***[84.201.***.***]
    ...
    

    If the connection was not established, try establishing it manually by running the following command on VM-1:

    sudo ipsec up VM-1
    

    Running this command on just one of the VMs is enough.
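
    If the tunnel still fails to come up, you can inspect the IPsec state that strongSwan has installed in the kernel with standard iproute2 commands (a diagnostic sketch):

    # negotiated security associations (keys, byte counters)
    sudo ip xfrm state
    # kernel policies selecting which traffic enters the tunnel
    sudo ip xfrm policy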

  5. Configure MTU on both VMs:

    ETH_NIC=eth0
    # clamp TCP MSS to 1360 bytes for traffic forwarded in both directions,
    # so that packets fit into the tunnel MTU
    sudo iptables -t mangle -A FORWARD -i ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
    sudo iptables -t mangle -A FORWARD -o ${ETH_NIC} -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
    # disable path MTU discovery and make the setting persistent
    echo "net.ipv4.ip_no_pmtu_disc = 1" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p /etc/sysctl.conf
    

    Warning

    If you keep the default MTU value, network traffic may be lost.
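
    To verify that packets of the clamped size traverse the tunnel, send a non-fragmentable ICMP packet between the gateways. The 1372-byte payload is an assumption derived from the 1360-byte MSS clamp: 1360 + 40 bytes of TCP/IP headers gives a 1400-byte MTU, and 1400 − 28 bytes of ICMP/IP headers leaves 1372 bytes of payload:

    # -M do forbids fragmentation; the ping fails if the path MTU is smaller
    ping -M do -s 1372 -c 3 <remote_VM_internal_address>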

  6. Connect VM-2 to the Managed Service for Kubernetes cluster as its external node.
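
    Once connected, you can confirm that the external node has registered with the cluster and reached the Ready status:

    kubectl get nodes -o wide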

Troubleshooting

Errors when using the docker-ce and containerd packages on an external node

To diagnose and fix this error:

  1. View the list of services that are not functioning properly:

    sudo systemctl --failed
    

    Result:

    UNIT LOAD ACTIVE SUB DESCRIPTION
    docker.socket loaded failed failed Docker Socket for the API
    LOAD = Reflects whether the unit definition was properly loaded.
    ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
    SUB = The low-level unit activation state, values depend on unit type.
    1 loaded units listed.
    
  2. Check the docker.socket status:

    sudo systemctl status docker.socket
    

    Result:

    docker.socket - Docker Socket for the API
    Loaded: loaded (/lib/systemd/system/docker.socket; disabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Tue 2024-02-10 09:53:37 UTC; 6s ago
    Triggers: ● docker.service
    Listen: /run/docker.sock (Stream)
    CPU: 1ms
    Feb 10 09:53:37 ext-node systemd[1]: Starting Docker Socket for the API...
    Feb 10 09:53:37 ext-node systemd[7052]: docker.socket: Failed to resolve group docker: No such process
    Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Control process exited, code=exited, status=216/GROUP
    Feb 10 09:53:37 ext-node systemd[1]: docker.socket: Failed with result 'exit-code'.
    Feb 10 09:53:37 ext-node systemd[1]: Failed to listen on Docker Socket for the API.
    
  3. Look up errors in system logs:

    sudo journalctl -xe
    

    Result:

    ...
    Feb 10 09:56:40 ext-node maintainer[19298]: E: Sub-process /usr/bin/dpkg returned an error code (1)
    ...
    
  4. Reinstall the packages and fix the errors:

    sudo apt install -f
    
  5. When the installer prompts you for action with the config.toml file, enter N to keep the current version of the file.
