Setting up an MC-LAG aggregation group

Written by
Yandex Cloud
Updated on December 1, 2025

Servers with MC-LAG support connect to each network (public and private) through two network adapters simultaneously. To ensure fault tolerance, each pair of network interfaces connected to a network must form an aggregation group on the server side. For more information, see Configuring aggregation groups and network interfaces.

Note

When setting up aggregation groups, do not connect to the server via a network interface you are going to include in a group: the connection will be lost when you create the group. The KVM console is the most reliable way to configure MC-LAG groups.

Currently, you can set up MC-LAG groups on Ubuntu 20.04, 22.04, and 24.04, and on Debian 11. As an example, this guide uses a server with two pairs of network adapters, each with a connection speed of 25 Gbps.
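
If you are not sure which OS release the server is running, you can check it before you start (this reads the standard os-release file present on Ubuntu and Debian):

    # Print the distribution name and version to confirm it is a supported release
    grep -E '^(NAME|VERSION)=' /etc/os-release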

To set up a link aggregation group:

Ubuntu/Debian (Netplan)
  1. Install ethtool:

    apt install ethtool
    
  2. Make sure the required network interfaces are installed in the system and active:

    ip link
    

    Result:

    ...
    2: etx3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d6 brd ff:ff:ff:ff:ff:ff
    3: etx4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d7 brd ff:ff:ff:ff:ff:ff
    4: etx1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 58:a2:e1:ad:38:2a brd ff:ff:ff:ff:ff:ff
    5: etx2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 58:a2:e1:ad:38:2b brd ff:ff:ff:ff:ff:ff
    

    As you can see from the output, the server has four active network interfaces:

    • etx3: With the b8:ce:f6:40:12:d6 MAC address.
    • etx4: With the b8:ce:f6:40:12:d7 MAC address.
    • etx1: With the 58:a2:e1:ad:38:2a MAC address.
    • etx2: With the 58:a2:e1:ad:38:2b MAC address.
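
    Optionally, you can also confirm the link speed of the individual adapters with ethtool, which you installed in step 1. The interface name below is from this example; substitute your own:

    # Show the negotiated speed and link state of one adapter (expected: 25000Mb/s in this example)
    ethtool etx3 | grep -E 'Speed|Link detected'
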
  3. Find out which of the interfaces belong to the public network, and which to the private one:

    1. In the management console, select the folder the server belongs to.

    2. Go to BareMetal and select the server in the list of servers.

      On the page that opens, in the MAC address field under Public network and Private network, you can see the MAC addresses of interfaces connected to the public and private networks, respectively.

    3. Use the information obtained in the two previous steps to identify the server interface pairs connected to the public and private networks. In the example above, the pairs are as follows:

      Public network:

      • etx3: With the b8:ce:f6:40:12:d6 MAC address.
      • etx1: With the 58:a2:e1:ad:38:2a MAC address.

      Private network:

      • etx4: With the b8:ce:f6:40:12:d7 MAC address.
      • etx2: With the 58:a2:e1:ad:38:2b MAC address.
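
      To double-check the mapping, you can print a compact interface-to-MAC listing on the server and compare it with the MAC addresses shown in the management console:

      # One line per interface: name, state, MAC address (brief iproute2 output)
      ip -br link show
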
  4. Find out the name of the Netplan configuration file:

    ls /etc/netplan/
    

    Result:

    50-cloud-init.yaml
    
  5. Open the Netplan configuration file with a text editor. In this guide, we use nano:

    nano /etc/netplan/50-cloud-init.yaml
    
  6. Edit the Netplan configuration by adding aggregation groups (the bonds section):

    network:
        bonds:
            bond1:
                dhcp4: true
                interfaces:
                - <public_interface_1_name>
                - <public_interface_2_name>
                macaddress: <public_interface_1_or_2_MAC_address>
                parameters:
                    lacp-rate: fast
                    mode: 802.3ad
                    transmit-hash-policy: layer3+4
            bond2:
                dhcp4: true
                interfaces:
                - <private_interface_1_name>
                - <private_interface_2_name>
                macaddress: <private_interface_1_or_2_MAC_address>
                parameters:
                    lacp-rate: fast
                    mode: 802.3ad
                    transmit-hash-policy: layer3+4
        ethernets:
            etx1:
                dhcp4: false
                match:
                    macaddress: 58:a2:e1:ad:38:2a
                set-name: etx1
            etx2:
                dhcp4: false
                match:
                    macaddress: 58:a2:e1:ad:38:2b
                set-name: etx2
            etx3:
                dhcp4: false
                match:
                    macaddress: b8:ce:f6:40:12:d6
                set-name: etx3
            etx4:
                dhcp4: false
                match:
                    macaddress: b8:ce:f6:40:12:d7
                set-name: etx4
        version: 2
    

    Where:

    • <public_interface_1_name>, <public_interface_2_name>: Names of the interfaces which belong to the public network, as you found out earlier.
    • <public_interface_1_or_2_MAC_address>: MAC address of an interface which belongs to the public network, as you found out earlier.
    • <private_interface_1_name>, <private_interface_2_name>: Names of the interfaces which belong to the private network, as you found out earlier.
    • <private_interface_1_or_2_MAC_address>: MAC address of an interface which belongs to the private network, as you found out earlier.

    Warning

    Note that DHCP must be:

    • Enabled (dhcp4: true) for aggregation groups (the bonds section).
    • Disabled (dhcp4: false) for individual interfaces (the ethernets section).
    Netplan configuration example
    network:
        bonds:
            bond1:
                dhcp4: true
                interfaces:
                - etx3
                - etx1
                macaddress: b8:ce:f6:40:12:d6
                parameters:
                    lacp-rate: fast
                    mode: 802.3ad
                    transmit-hash-policy: layer3+4
            bond2:
                dhcp4: true
                interfaces:
                - etx4
                - etx2
                macaddress: b8:ce:f6:40:12:d7
                parameters:
                    lacp-rate: fast
                    mode: 802.3ad
                    transmit-hash-policy: layer3+4
        ethernets:
            etx1:
                dhcp4: false
                match:
                    macaddress: 58:a2:e1:ad:38:2a
                set-name: etx1
            etx2:
                dhcp4: false
                match:
                    macaddress: 58:a2:e1:ad:38:2b
                set-name: etx2
            etx3:
                dhcp4: false
                match:
                    macaddress: b8:ce:f6:40:12:d6
                set-name: etx3
            etx4:
                dhcp4: false
                match:
                    macaddress: b8:ce:f6:40:12:d7
                set-name: etx4
        version: 2
    
  7. Apply the new Netplan configuration:

    netplan apply
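
    If you are applying the configuration over a network connection instead of the KVM console, you can reduce the risk of locking yourself out by using netplan's try mode, which applies the configuration and rolls it back automatically (after 120 seconds by default) unless you confirm it in the same session:

    # Apply the configuration temporarily; press Enter to keep it or wait for the automatic rollback
    netplan try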
    
  8. Make sure the list of network interfaces displays the aggregation groups:

    ip link
    

    Result:

    ...
    2: etx3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d6 brd ff:ff:ff:ff:ff:ff
    3: etx4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d7 brd ff:ff:ff:ff:ff:ff
    4: etx1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d6 brd ff:ff:ff:ff:ff:ff
    5: etx2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d7 brd ff:ff:ff:ff:ff:ff
    6: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d6 brd ff:ff:ff:ff:ff:ff
    7: bond2: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether b8:ce:f6:40:12:d7 brd ff:ff:ff:ff:ff:ff
    

    As you can see from the output, the server now has two MC-LAG aggregation groups, bond1 and bond2.
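
    Since DHCP is enabled for the bonds, you can also check that both groups obtained IP addresses (the addresses themselves will differ on your server):

    # Brief per-interface address listing for the two aggregation groups
    ip -br addr show bond1
    ip -br addr show bond2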

    Note

    If aggregation groups are inactive (DOWN), activate them:

    ip link set bond1 up
    ip link set bond2 up
    
  9. View the information about the groups you created. As an example, let’s use the aggregation group connected to the public network:

    ethtool bond1
    

    Result:

    Settings for bond1:
      Supported ports: [ ]
      Supported link modes:   Not reported
      Supported pause frame use: No
      Supports auto-negotiation: No
      Supported FEC modes: Not reported
      Advertised link modes:  Not reported
      Advertised pause frame use: No
      Advertised auto-negotiation: No
      Advertised FEC modes: Not reported
      Speed: 50000Mb/s
      Duplex: Full
      Port: Other
      PHYAD: 0
      Transceiver: internal
      Auto-negotiation: off
      Link detected: yes
    

    As you can see from the output, the connection speed for the bond1 group is 50 Gbps.
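
    ethtool reports the aggregate link speed only. For LACP-specific details, such as the bonding mode, LACP rate, and the MII status of each member interface, you can also read the status file exposed by the Linux bonding driver (a standard kernel interface, not specific to this guide):

    # Show bonding mode, LACP settings, and per-slave link status for bond1
    cat /proc/net/bonding/bond1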

  10. Simulate an incident where a link in the bond1 public network aggregation group fails. To do this, disable one of the group’s network interfaces:

    ip link set etx3 down
    
  11. View the group information again:

    ethtool bond1
    

    Result:

    Settings for bond1:
      Supported ports: [ ]
      Supported link modes:   Not reported
      Supported pause frame use: No
      Supports auto-negotiation: No
      Supported FEC modes: Not reported
      Advertised link modes:  Not reported
      Advertised pause frame use: No
      Advertised auto-negotiation: No
      Advertised FEC modes: Not reported
      Speed: 25000Mb/s
      Duplex: Full
      Port: Other
      PHYAD: 0
      Transceiver: internal
      Auto-negotiation: off
      Link detected: yes
    

    As you can see from the output, the connection speed for the bond1 group has dropped to 25 Gbps, but network connectivity is maintained. To verify this, connect to the server over SSH.
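
    If you want to watch how the bonding driver reacts while the interface is down, you can monitor the per-slave status in the same kernel status file (assuming the watch utility is available, as it is on standard Ubuntu and Debian images):

    # Refresh the per-slave interface status every second during the failover test
    watch -n 1 'grep -A 1 "Slave Interface" /proc/net/bonding/bond1'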

  12. Activate the interface you disabled earlier and make sure the aggregation group is running again at the maximum speed:

    ip link set etx3 up
    ethtool bond1
    

    Result:

    Settings for bond1:
      Supported ports: [ ]
      Supported link modes:   Not reported
      Supported pause frame use: No
      Supports auto-negotiation: No
      Supported FEC modes: Not reported
      Advertised link modes:  Not reported
      Advertised pause frame use: No
      Advertised auto-negotiation: No
      Advertised FEC modes: Not reported
      Speed: 50000Mb/s
      Duplex: Full
      Port: Other
      PHYAD: 0
      Transceiver: internal
      Auto-negotiation: off
      Link detected: yes
    

See also

  • Reserving a BareMetal network connection using MC-LAG
  • Network
  • Restrictions in BareMetal networks
