How to do link aggregation (bonding and team) in Linux

Link aggregation refers to various methods of combining (aggregating) multiple network connections in parallel in order to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links fails.

Bonding

The Linux bonding driver provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface made up of two or more so-called slave NICs. The behavior of the bonded interface depends on the bonding driver mode it is configured with; the default mode is balance-rr (a quick runtime sketch using iproute2 follows the list below):

  • Round-robin balance-rr/0: Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.
  • Active-backup active-backup/1: Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid confusing the network switch. This mode provides fault tolerance.
  • … see Linux Ethernet Bonding Driver HOWTO for all options.
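A bond can also be created at runtime with iproute2, without touching any distribution-specific configuration files. This is only a sketch for quick testing: the interface names and address are illustrative, and the setup does not survive a reboot.

## bonding using iproute2 (runtime only, non-persistent)
# create the bond and pick a mode
$ ip link add bond0 type bond mode active-backup miimon 100
# slaves must be down before they are enslaved
$ ip link set eth1 down; ip link set eth1 master bond0
$ ip link set eth2 down; ip link set eth2 master bond0
# assign an address and bring the bond up
$ ip addr add 192.168.246.130/24 dev bond0
$ ip link set bond0 up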
## bonding using ifcfg/rhel
# create ethernet channel bonding
$ cat /etc/sysconfig/network-scripts/ifcfg-{eth1,eth2}
....
MASTER=bond0
SLAVE=yes

# configure channel bonding interface mode=0 (balance-rr) or mode=1 (active-backup)
$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
# balance-rr
#BONDING_OPTS="mode=0 miimon=100"
# active-backup
BONDING_OPTS="mode=1 miimon=100"

# if your version of initscripts doesn't support 'BONDING_OPTS', use '/etc/modprobe.d/bonding.conf'
$ cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=balance-rr miimon=100
# use '/etc/modprobe.d/bonding.conf' for non-interface specific options
$ cat /etc/modprobe.d/bonding.conf
options bonding max_bonds=1

$ modprobe -r bonding; modprobe bonding; service network restart
# if you get an error, make sure the bonding module is loaded: 'modprobe bonding'

# manually take a slave down/up to check that bonding is working
$ watch -n .1 cat /proc/net/bonding/bond0
$ ifconfig eth1 down
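# for orientation, an abridged /proc/net/bonding/bond0 for an active-backup
# bond looks roughly like this (exact fields vary by kernel/driver version):
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth1
MII Status: down
...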

from bond@rhel6

## bonding in debian
$ sudo apt-get install ifenslave-2.6

# configure network interfaces
$ cat /etc/network/interfaces
auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0 eth1
auto eth1
iface eth1 inet manual
    bond-master bond0
    bond-primary eth0 eth1
auto bond0
iface bond0 inet dhcp
    bond-mode active-backup # or balance-rr
    bond-slaves none # we already defined the interfaces above with bond-master
    bond-miimon 100 # how often the link state is inspected for failures
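# the bond0 stanza above uses DHCP; for a static address it could instead
# look like this (addresses are illustrative):
auto bond0
iface bond0 inet static
    address 192.168.3.10
    netmask 255.255.255.0
    gateway 192.168.3.1
    bond-mode active-backup
    bond-slaves none
    bond-miimon 100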

# to verify bonding
$ watch -n .1 cat /proc/net/bonding/bond0
$ tail -f /var/log/messages
$ /etc/init.d/networking start

# if you get 'bonding: Warning: either miimon or arp_interval ...'
$ cat /etc/modprobe.d/aliases-bond.conf
alias bond0 bonding
options bonding mode=1 arp_interval=2000 arp_ip_target=192.168.3.1

from bonding@ubuntu and bonding@debian

## bonding using nmcli
# create 'bond0' connection
$ nmcli con add type bond con-name mybond0 ifname mybond0 mode active-backup
# add slave adaptors
$ nmcli con add type bond-slave ifname ens7 master mybond0
$ nmcli con add type bond-slave ifname ens3 master mybond0
# bring up slaves first, then bond
$ nmcli con up bond-slave-ens7
$ nmcli con up bond-slave-ens3
$ nmcli con up mybond0
# view status
$ nmcli -p con show active
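# to give the bond a static address instead of DHCP (address is illustrative),
# modify the bond connection before bringing it up
$ nmcli con mod mybond0 ipv4.addresses 192.168.246.130/24 ipv4.method manual
$ nmcli con up mybond0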

from bond@rhel7

Team

The Linux team driver provides an alternative to the bonding driver. The main difference is that the team driver's kernel part contains only the essential code, while the rest (link validation, LACP implementation, decision making, etc.) runs in userspace as part of the teamd daemon. See libteam. The team daemon teamd supports a number of modes, called runners (a sample runner configuration follows the list below):

  • broadcast: all packets are sent via all available ports
  • roundrobin: data is transmitted over all ports in turn
  • random: same as roundrobin, but the transmit port is selected randomly for each outgoing packet
  • activebackup: one port or link is used while others are kept as a backup
  • loadbalance: uses a hash function to try to reach a perfect balance when selecting ports for packet transmission
  • … see libteam for all options
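The runner is chosen in teamd's JSON configuration. A minimal sketch for an active-backup team watched with ethtool (device and port names are illustrative; the packaged example configs used below are similar):

{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {"name": "ethtool"},
    "ports": {
        "eth1": {"prio": -10, "sticky": true},
        "eth2": {"prio": 100}
    }
}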
## installation
$ sudo yum install teamd

# convert bond to team
$ /usr/bin/bond2team --master bond0 --rename team0

## teaming using teamd
$ cp /usr/share/doc/teamd-*/example_configs/activebackup_ethtool_1.conf ~/activebackup_ethtool_1.conf
$ teamd -g -f activebackup_ethtool_1.conf -d
$ teamdctl team0 state
# add address to team interface
$ ip addr add 192.168.23.2/24 dev team0
$ ip addr show team0
$ ip link set dev team0 up
# terminate daemon and remove team0
$ teamd -t team0 -k

## teaming using teamdctl (teamd client)
# add 'em1' port to 'team0'
$ teamdctl team0 port add em1
# remove 'em1' port from 'team0'
$ teamdctl team0 port remove em1
# apply JSON config
$ teamdctl team0 port config update em1 '{"prio": -10, "sticky": true}'
# dump/view port config
$ teamdctl team0 port config dump em1
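# dump the whole team config and the runtime state as JSON (see 'man teamdctl')
$ teamdctl team0 config dump
$ teamdctl team0 state dump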

## teaming using ifcfg
$ cat /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.11.1
PREFIX=24
TEAM_CONFIG='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
$ cat /etc/sysconfig/network-scripts/ifcfg-eno{1,2}
...
DEVICETYPE=TeamPort
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio": 100}' # -MAXINT..MAXINT, defaults to 0
$ ifup team0
$ ip link show

## teaming using teamnl/iproute2
# add port/enslave
$ ip link set dev em1 down
$ ip link set dev em1 master team0
# set team options
$ teamnl team0 setoption mode activebackup
# show team ports
$ teamnl team0 ports
# add team address
$ ip addr add 192.168.252.2/24 dev team0
# bring up team
$ ip link set team0 up
# show active ports
$ teamnl team0 getoption activeport
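# 'activeport' is reported and set by interface index (ifindex), not by name;
# to force a port active, look up its ifindex first (ifindex 3 is illustrative)
$ ip link show em1
$ teamnl team0 setoption activeport 3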

## teaming using nmcli
# create team interface and add ports/slaves
$ nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'
$ nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0
$ nmcli con add type team-slave con-name team0-port2 ifname eno2 master team0
$ nmcli con show
# assign IP address to team and enable connection
$ nmcli con mod team0 ipv4.addresses "192.168.1.24/24 192.168.1.1"
$ nmcli con mod team0 ipv4.method manual
$ nmcli con up team0
$ teamdctl team0 state
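# the runner of an existing team connection can be changed later via the
# connection's team.config property, e.g. switching to loadbalance:
$ nmcli con mod team0 team.config '{"runner": {"name": "loadbalance"}}'
$ nmcli con up team0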

from teaming@rhel7
