In this part, we will focus on configuration tasks required to enable multihoming in our fabric.
EVPN-MH configuration touches Ethernet Segment (ES) peers. An ES peer is a leaf that has links to a multihomed host. In our case, leaf1 and leaf2 are ES peers, because CE1 is connected to both of them.
The following items need to be configured on ES Peers:
- A LAG and member interfaces
- Ethernet segment
- MAC-VRF interface mapping
For all-active multihoming, the SR Linux nodes need to be configured with a LAG interface facing the CE. The following configuration snippet can be pasted into the CLI of leaf1 and leaf2 to create a logical LAG interface lag1 with LACP support.
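A minimal sketch of such a LAG configuration in SR Linux CLI is shown below. The exact LACP parameter values (admin-key, system-id-mac, interval) are illustrative assumptions, not taken from the lab files:

```
# illustrative SR Linux CLI sketch - LACP parameter values are assumptions
enter candidate
set / interface lag1 admin-state enable
set / interface lag1 vlan-tagging true
set / interface lag1 lag lag-type lacp
set / interface lag1 lag lacp interval FAST
set / interface lag1 lag lacp admin-key 11
set / interface lag1 lag lacp system-id-mac 00:00:00:00:00:11
commit now
```

Whatever values are chosen, the LACP parameters must be identical on both ES peers so that the CE sees a single LACP partner.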
The lag1 interface was created with vlan-tagging enabled, which allows multiple subinterfaces with different VLAN tags to use it. This way each subinterface can be connected to a different MAC-VRF. A subinterface with index 0 has been added to lag1.
The LAG type can be LACP or static. For this lab we chose LACP for our LAG, so the LACP parameters must match on all ES peer nodes - leaf1 and leaf2.
And finally, we bind the physical interface(s) to the logical LAG interface to complete the LAG configuration. As shown in the config snippet above, the physical interface ethernet-1/1 will be part of the lag1 interface on both leaf1 and leaf2 nodes.
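In SR Linux, a physical port joins a LAG via its `aggregate-id` setting. A minimal sketch, assuming the port and LAG names used in this lab:

```
# illustrative SR Linux CLI sketch
enter candidate
set / interface ethernet-1/1 admin-state enable
set / interface ethernet-1/1 ethernet aggregate-id lag1
commit now
```

Once committed, ethernet-1/1 becomes a member of lag1 and is no longer configured with its own subinterfaces.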
All PEs that provide multihoming to a CE must be configured with similar LAG and interface configurations.
When a CE device is connected to one or more PEs via a set of Ethernet links, then this set of Ethernet links constitutes an "Ethernet segment". This is a key concept of EVPN Multihoming.
In SR Linux, ethernet segments are configured under the system network-instance protocols context. An ethernet-segment is created with the name ES-1 under bgp-instance 1.
For a multihomed site, each Ethernet segment (ES) is identified by a unique non-zero identifier called an Ethernet Segment Identifier (ESI).
An ESI is encoded as a 10-octet integer in line format with the most significant octet sent first.
The multi-homing-mode must match on all ES peers. Finally, we assign the interface lag1 to ES-1.
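The steps above can be sketched in SR Linux CLI as follows. The ESI value here is a hypothetical example, not the one used in the lab:

```
# illustrative SR Linux CLI sketch - the ESI value is a made-up example
enter candidate
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 admin-state enable
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 esi 00:11:22:33:44:55:66:77:88:99
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 multi-homing-mode all-active
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 interface lag1
commit now
```

The same ethernet-segment configuration, with an identical ESI and multi-homing-mode, must be applied on both leaf1 and leaf2.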
Besides the ethernet segments, bgp-vpn is also configured with bgp-instance 1 to use the BGP information (RT/RD) for the ES routes exchanged in EVPN to enable multihoming.
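A sketch of this piece, assuming the RT/RD values are auto-derived rather than set explicitly:

```
# illustrative SR Linux CLI sketch - RT/RD left to auto-derivation
enter candidate
set / system network-instance protocols bgp-vpn bgp-instance 1
commit now
```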
Typically, an L2 multi-homed LAG subinterface needs to be associated with a MAC-VRF.
To provide load balancing for all-active multihoming segments, set ecmp to the expected number of leaves (PEs) serving CE1. Since we have two leaves connected to CE1, we set ecmp to 2.
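A minimal sketch of the MAC-VRF side, assuming a hypothetical MAC-VRF named mac-vrf-1 (the actual name is defined in the full MAC-VRF configuration):

```
# illustrative SR Linux CLI sketch - mac-vrf-1 is an assumed name
enter candidate
set / network-instance mac-vrf-1 interface lag1.0
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 ecmp 2
commit now
```

With ecmp 2, remote leaves can load-balance traffic destined to the ethernet segment across both ES peers.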
The entire MAC-VRF with VXLAN configuration is covered here.
This completes an all-active EVPN-MH configuration. Now let's have a look at the multihomed CE1 host and its configuration.
Customer Edge Device
To create a multihomed connection, our CE1 emulated host has a bond0 interface configured with interfaces eth1 and eth2 underneath. Similar to the SR Linux side, it is configured with LACP (802.3ad).
The single-homed CE2 has multiple interfaces to a single leaf3 switch. These interfaces are placed in different VRFs so that CE2 can simulate multiple remote endpoints.
Below are the CE interface configurations that are executed by containerlab during the deployment.
```bash
#!/bin/bash
# creating bond interface w/ LACP
ip link add bond0 type bond mode 802.3ad
ip link set address 00:c1:ab:00:00:11 dev bond0
ip addr add 192.168.0.11/24 dev bond0
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set eth1 up
ip link set eth2 up
ip link set bond0 up
```
```bash
#!/bin/bash
# setting three isolated (w/ vrfs) interfaces
# with IPs from the same subnet
# to simulate multiple remote clients in one container
ip link set address 00:c1:ab:00:00:21 dev eth1
ip link set address 00:c1:ab:00:00:22 dev eth2
ip link set address 00:c1:ab:00:00:23 dev eth3
ip link add dev vrf-1 type vrf table 1
ip link set dev vrf-1 up
ip link set dev eth1 master vrf-1
ip link add dev vrf-2 type vrf table 2
ip link set dev vrf-2 up
ip link set dev eth2 master vrf-2
ip link add dev vrf-3 type vrf table 3
ip link set dev vrf-3 up
ip link set dev eth3 master vrf-3
ip addr add 192.168.0.21/24 dev eth1
ip addr add 192.168.0.22/24 dev eth2
ip addr add 192.168.0.23/24 dev eth3
```
This is primarily to get better entropy for load balancing, so you can observe CE1 sending/receiving packets to/from both PEs, as shown below.
Now, let's see how the EVPN-MH control plane works and which commands you can use to verify the configuration.
An Ethernet segment can span up to four provider edge (PE) routers. ↩