
L2 EVPN Multihoming with SR Linux

Tutorial name: EVPN L2 Multihoming with SR Linux
Lab components: 4 SR Linux nodes, 2 Linux nodes
Resource requirements: 4 vCPU / 8 GB
Lab: srl-labs/srl-evpn-mh-lab
Main ref documents: RFC 7432 - BGP MPLS-Based Ethernet VPN; RFC 8365 - A Network Virtualization Overlay Solution Using Ethernet VPN (EVPN); Nokia 7220 SR Linux Advanced Solutions Guide; Nokia 7220 SR Linux EVPN-VXLAN Guide
Version information: containerlab:0.44.0, srlinux:23.7.1, docker-ce:23.0.3
Authors: Alperen Akpinar

Multihoming is a common networking feature that allows a customer edge (CE) device to be connected to two or more provider edge (PE) devices in a network. It provides redundant connectivity and efficient link utilization, and allows the network to continue providing services even if one of the PE devices or links fails.

In the pre-EVPN era, multihoming was achieved with Multi-chassis LAG (MC-LAG) or Virtual Port Channel (vPC) technologies. These are still used in many networks, but they have limitations and have started to show their age.
For example, MC-LAG and vPC are proprietary, non-standardized technologies, which makes it hard to build DC fabrics with multivendor gear. They are also not well suited for large-scale deployments and have further limitations that we covered in the Single Tier Datacenters - Evolving Away From Multi-chassis LAG blog post.

EVPN has a built-in multihoming (MH) capability, defined in RFC 7432 and RFC 8365. EVPN MH can be used to improve the reliability, performance, and manageability of networks. It is particularly well suited for data center networks, where high availability and performance are critical.

In this tutorial, you will learn about L2 multihoming with EVPN and how to configure it in an SR Linux-based fabric.

EVPN provides multihoming by means of Ethernet segments (ES), which may be a new concept for some readers; the terminology is therefore discussed in the chapters that follow.

Lab#

To familiarize ourselves with EVPN multihoming and get some hands-on experience, we will use the evpn-multihoming lab that consists of one spine, three leaf (PE, provider edge) switches, and two Linux hosts (CE, customer edge). One multi-homed CE is connected to leaf1 and leaf2, and the other is connected to leaf3 alone, with three links.

EVPN multihoming lab topology

As usual, this lab is deployed by containerlab and can be used on any Linux VM with the resources listed in the table at the beginning.

The lab comes with startup configuration files provided for the SR Linux leaf and spine switches. These files contain the basic L2 EVPN configuration explained in the L2 EVPN Basics tutorial. It is recommended to read the basics tutorial first if you have not yet worked with SR Linux or EVPN.

Besides the SR Linux startup configurations, the config directory also contains interface configurations for the CE hosts (Linux containers).

name: evpn-mh

topology:
  kinds:
    srl:
      image: ghcr.io/nokia/srlinux:23.7.1
    linux:
      image: ghcr.io/srl-labs/alpine:latest

  nodes:
    # srl nodes with startup configs
    leaf1:
      kind: srl
      type: ixrd2
      startup-config: configs/leaf1.cfg
    leaf2:
      kind: srl
      type: ixrd2
      startup-config: configs/leaf2.cfg
    leaf3:
      kind: srl
      type: ixrd2
      startup-config: configs/leaf3.cfg
    spine1:
      kind: srl
      type: ixrd3
      startup-config: configs/spine1.cfg
    # alpine linux nodes with interface config bind and execution
    ce1:
      kind: linux
      binds:  
        - configs/ce1-config.sh:/ce1-config.sh
      exec:
        - bash /ce1-config.sh    
    ce2:
      kind: linux
      binds:
        - configs/ce2-config.sh:/ce2-config.sh
      exec:
        - bash /ce2-config.sh

  links:
    # inter-switch links
    - endpoints: ["leaf1:e1-49", "spine1:e1-1"]
    - endpoints: ["leaf2:e1-49", "spine1:e1-2"]
    - endpoints: ["leaf3:e1-49", "spine1:e1-3"]
    # ce links
    - endpoints: ["ce1:eth1", "leaf1:e1-1"]
    - endpoints: ["ce1:eth2", "leaf2:e1-1"]
    - endpoints: ["ce2:eth1", "leaf3:e1-1"]
    - endpoints: ["ce2:eth2", "leaf3:e1-2"]
    - endpoints: ["ce2:eth3", "leaf3:e1-3"]
Configurations#

Below are the startup configuration files used by the fabric switches and CE hosts.

# spine1.cfg
# configuration of the physical interfaces and their subinterfaces
set / interface ethernet-1/1 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/1 subinterface 0 ipv4 address 192.168.11.2/30
set / interface ethernet-1/2 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/2 subinterface 0 ipv4 address 192.168.12.2/30
set / interface ethernet-1/3 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/3 subinterface 0 ipv4 address 192.168.13.2/30

# system interface configuration
set / interface system0 admin-state enable
set / interface system0 subinterface 0 ipv4 admin-state enable
set / interface system0 subinterface 0 ipv4 address 10.0.1.1/32

# associating interfaces with net-ins default
set / network-instance default interface ethernet-1/1.0
set / network-instance default interface ethernet-1/2.0
set / network-instance default interface ethernet-1/3.0
set / network-instance default interface system0.0

# routing policy
set / routing-policy policy all default-action
set / routing-policy policy all default-action policy-result accept

# BGP configuration
set / network-instance default protocols bgp autonomous-system 201
set / network-instance default protocols bgp router-id 10.0.1.1
set / network-instance default protocols bgp group eBGP-underlay export-policy all
set / network-instance default protocols bgp group eBGP-underlay import-policy all
set / network-instance default protocols bgp afi-safi ipv4-unicast admin-state enable
set / network-instance default protocols bgp neighbor 192.168.11.1 peer-as 101
set / network-instance default protocols bgp neighbor 192.168.11.1 peer-group eBGP-underlay
set / network-instance default protocols bgp neighbor 192.168.12.1 peer-as 102
set / network-instance default protocols bgp neighbor 192.168.12.1 peer-group eBGP-underlay
set / network-instance default protocols bgp neighbor 192.168.13.1 peer-as 103
set / network-instance default protocols bgp neighbor 192.168.13.1 peer-group eBGP-underlay
# leaf1.cfg
# uplink interface to spine
set / interface ethernet-1/49 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/49 subinterface 0 ipv4 address 192.168.11.1/30

# system interface configuration
set / interface system0 admin-state enable
set / interface system0 subinterface 0 ipv4 admin-state enable
set / interface system0 subinterface 0 ipv4 address 10.0.0.1/32

# associating interfaces with net-ins default
set / network-instance default interface ethernet-1/49.0
set / network-instance default interface system0.0

# routing policy
set / routing-policy policy all default-action
set / routing-policy policy all default-action policy-result accept

# BGP configuration
set / network-instance default protocols bgp autonomous-system 101
set / network-instance default protocols bgp router-id 10.0.0.1
set / network-instance default protocols bgp group eBGP-underlay export-policy all
set / network-instance default protocols bgp group eBGP-underlay import-policy all
set / network-instance default protocols bgp group eBGP-underlay peer-as 201
set / network-instance default protocols bgp group iBGP-overlay export-policy all
set / network-instance default protocols bgp group iBGP-overlay import-policy all
set / network-instance default protocols bgp group iBGP-overlay peer-as 100
set / network-instance default protocols bgp group iBGP-overlay afi-safi evpn admin-state enable
set / network-instance default protocols bgp group iBGP-overlay afi-safi ipv4-unicast
set / network-instance default protocols bgp group iBGP-overlay afi-safi ipv4-unicast admin-state disable
set / network-instance default protocols bgp group iBGP-overlay local-as as-number 100
set / network-instance default protocols bgp group iBGP-overlay timers minimum-advertisement-interval 1
set / network-instance default protocols bgp afi-safi ipv4-unicast admin-state enable
set / network-instance default protocols bgp neighbor 10.0.0.2 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.2 transport local-address 10.0.0.1
set / network-instance default protocols bgp neighbor 10.0.0.3 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.3 transport local-address 10.0.0.1
set / network-instance default protocols bgp neighbor 192.168.11.2 peer-group eBGP-underlay

# MAC-VRF
set / network-instance mac-vrf-1 type mac-vrf
set / network-instance mac-vrf-1 admin-state enable
set / network-instance mac-vrf-1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 admin-state enable
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 evi 111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target export-rt target:100:111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target import-rt target:100:111

# VXLAN tunnel interface
set / tunnel-interface vxlan1 vxlan-interface 1 type bridged
set / tunnel-interface vxlan1 vxlan-interface 1 ingress vni 1
# leaf2.cfg
# uplink interface to spine
set / interface ethernet-1/49 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/49 subinterface 0 ipv4 address 192.168.12.1/30

# system interface configuration
set / interface system0 admin-state enable
set / interface system0 subinterface 0 ipv4 admin-state enable
set / interface system0 subinterface 0 ipv4 address 10.0.0.2/32

# associating interfaces with net-ins default
set / network-instance default interface ethernet-1/49.0
set / network-instance default interface system0.0

# routing policy
set / routing-policy policy all default-action
set / routing-policy policy all default-action policy-result accept

# BGP configuration
set / network-instance default protocols bgp autonomous-system 102
set / network-instance default protocols bgp router-id 10.0.0.2
set / network-instance default protocols bgp group eBGP-underlay export-policy all
set / network-instance default protocols bgp group eBGP-underlay import-policy all
set / network-instance default protocols bgp group eBGP-underlay peer-as 201
set / network-instance default protocols bgp group iBGP-overlay export-policy all
set / network-instance default protocols bgp group iBGP-overlay import-policy all
set / network-instance default protocols bgp group iBGP-overlay peer-as 100
set / network-instance default protocols bgp group iBGP-overlay afi-safi evpn admin-state enable
set / network-instance default protocols bgp group iBGP-overlay afi-safi ipv4-unicast admin-state disable
set / network-instance default protocols bgp group iBGP-overlay local-as as-number 100
set / network-instance default protocols bgp group iBGP-overlay timers minimum-advertisement-interval 1
set / network-instance default protocols bgp afi-safi ipv4-unicast admin-state enable
set / network-instance default protocols bgp neighbor 10.0.0.1 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.1 transport local-address 10.0.0.2
set / network-instance default protocols bgp neighbor 10.0.0.3 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.3 transport local-address 10.0.0.2
set / network-instance default protocols bgp neighbor 192.168.12.2 peer-group eBGP-underlay

# MAC-VRF
set / network-instance mac-vrf-1 type mac-vrf
set / network-instance mac-vrf-1 admin-state enable
set / network-instance mac-vrf-1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 admin-state enable
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 evi 111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target export-rt target:100:111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target import-rt target:100:111

# VXLAN tunnel interface
set / tunnel-interface vxlan1 vxlan-interface 1 type bridged
set / tunnel-interface vxlan1 vxlan-interface 1 ingress vni 1
# leaf3.cfg
# interface configuration towards the CE
set / interface ethernet-1/1 vlan-tagging true
set / interface ethernet-1/1 subinterface 0 type bridged
set / interface ethernet-1/1 subinterface 0 admin-state enable
set / interface ethernet-1/1 subinterface 0 vlan encap untagged
set / interface ethernet-1/2 vlan-tagging true
set / interface ethernet-1/2 subinterface 0 type bridged
set / interface ethernet-1/2 subinterface 0 admin-state enable
set / interface ethernet-1/2 subinterface 0 vlan encap untagged
set / interface ethernet-1/3 vlan-tagging true
set / interface ethernet-1/3 subinterface 0 type bridged
set / interface ethernet-1/3 subinterface 0 admin-state enable
set / interface ethernet-1/3 subinterface 0 vlan encap untagged

# uplink interface to spine
set / interface ethernet-1/49 subinterface 0 ipv4 admin-state enable
set / interface ethernet-1/49 subinterface 0 ipv4 address 192.168.13.1/30

# system interface configuration
set / interface system0 admin-state enable
set / interface system0 subinterface 0 ipv4 admin-state enable
set / interface system0 subinterface 0 ipv4 address 10.0.0.3/32

# associating interfaces with net-ins default
set / network-instance default interface ethernet-1/49.0
set / network-instance default interface system0.0

# routing policy
set / routing-policy policy all default-action policy-result accept

# BGP configuration
set / network-instance default protocols bgp autonomous-system 103
set / network-instance default protocols bgp router-id 10.0.0.3
set / network-instance default protocols bgp group eBGP-underlay export-policy all
set / network-instance default protocols bgp group eBGP-underlay import-policy all
set / network-instance default protocols bgp group eBGP-underlay peer-as 201
set / network-instance default protocols bgp group iBGP-overlay export-policy all
set / network-instance default protocols bgp group iBGP-overlay import-policy all
set / network-instance default protocols bgp group iBGP-overlay peer-as 100
set / network-instance default protocols bgp group iBGP-overlay afi-safi evpn admin-state enable
set / network-instance default protocols bgp group iBGP-overlay afi-safi ipv4-unicast admin-state disable
set / network-instance default protocols bgp group iBGP-overlay local-as as-number 100
set / network-instance default protocols bgp group iBGP-overlay timers minimum-advertisement-interval 1
set / network-instance default protocols bgp afi-safi ipv4-unicast admin-state enable
set / network-instance default protocols bgp neighbor 10.0.0.1 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.1 transport local-address 10.0.0.3
set / network-instance default protocols bgp neighbor 10.0.0.2 peer-group iBGP-overlay
set / network-instance default protocols bgp neighbor 10.0.0.2 transport local-address 10.0.0.3
set / network-instance default protocols bgp neighbor 192.168.13.2 peer-group eBGP-underlay

# MAC-VRF
set / network-instance mac-vrf-1 type mac-vrf
set / network-instance mac-vrf-1 admin-state enable
set / network-instance mac-vrf-1 interface ethernet-1/1.0
set / network-instance mac-vrf-1 interface ethernet-1/2.0
set / network-instance mac-vrf-1 interface ethernet-1/3.0
set / network-instance mac-vrf-1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 admin-state enable
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.1
set / network-instance mac-vrf-1 protocols bgp-evpn bgp-instance 1 evi 111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target export-rt target:100:111
set / network-instance mac-vrf-1 protocols bgp-vpn bgp-instance 1 route-target import-rt target:100:111

# VXLAN tunnel interface
set / tunnel-interface vxlan1 vxlan-interface 1 type bridged
set / tunnel-interface vxlan1 vxlan-interface 1 ingress vni 1
#!/bin/bash
# ce1-config.sh: creating a bond interface w/LACP (802.3ad)
ip link add bond0 type bond mode 802.3ad
ip link set address 00:c1:ab:00:00:11 dev bond0
ip addr add 192.168.0.11/24 dev bond0
ip link set eth1 down 
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set eth1 up 
ip link set eth2 up  
ip link set bond0 up
#!/bin/bash
# ce2-config.sh: setting up three isolated (w/VRFs) interfaces
# with IPs from the same subnet
# to simulate multiple remote clients in one container.
ip link set address 00:c1:ab:00:00:21 dev eth1
ip link set address 00:c1:ab:00:00:22 dev eth2
ip link set address 00:c1:ab:00:00:23 dev eth3
ip link add dev vrf-1 type vrf table 1
ip link set dev vrf-1 up
ip link set dev eth1 master vrf-1
ip link add dev vrf-2 type vrf table 2
ip link set dev vrf-2 up
ip link set dev eth2 master vrf-2
ip link add dev vrf-3 type vrf table 3
ip link set dev vrf-3 up
ip link set dev eth3 master vrf-3
ip addr add 192.168.0.21/24 dev eth1
ip addr add 192.168.0.22/24 dev eth2
ip addr add 192.168.0.23/24 dev eth3
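
After the lab is deployed (next section), you can sanity-check the CE-side plumbing; a couple of quick looks, with the container names taken from the deployment output below:

# ce1: LACP bond state; eth1 and eth2 should be listed as slaves
docker exec clab-evpn-mh-ce1 cat /proc/net/bonding/bond0
# ce2: per-VRF enslavement; eth1 should sit in vrf-1, and so on
docker exec clab-evpn-mh-ce2 ip -br link show master vrf-1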

Deployment#

Courtesy of containerlab, lab deployment is just one command away:

git clone https://github.com/srl-labs/srl-evpn-mh-lab.git && \
cd srl-evpn-mh-lab && \
sudo containerlab deploy
INFO[0000] Containerlab v0.44.0 started
INFO[0000] Parsing & checking topology file: evpn-mh01.clab.yml
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="2001:172:20:20::/64", MTU="1500"
INFO[0000] Creating container: "ce2"
INFO[0000] Creating container: "ce1"
INFO[0000] Creating container: "spine1"
INFO[0000] Creating container: "leaf3"
INFO[0000] Creating container: "leaf1"
INFO[0000] Creating container: "leaf2"
# -- snip --
+---+---------------------+--------------+--------------------------------+-------+---------+----------------+----------------------+
| # |        Name         | Container ID |             Image              | Kind  |  State  |  IPv4 Address  |     IPv6 Address     |
+---+---------------------+--------------+--------------------------------+-------+---------+----------------+----------------------+
| 1 | clab-evpn-mh-ce1    | 459de7e146a1 | ghcr.io/srl-labs/alpine:latest | linux | running | 172.20.20.6/24 | 2001:172:20:20::6/64 |
| 2 | clab-evpn-mh-ce2    | 64fb2845aa60 | ghcr.io/srl-labs/alpine:latest | linux | running | 172.20.20.7/24 | 2001:172:20:20::7/64 |
| 3 | clab-evpn-mh-leaf1  | 73ef7e76ef36 | ghcr.io/nokia/srlinux:23.7.1   | srl   | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 4 | clab-evpn-mh-leaf2  | 549668d25122 | ghcr.io/nokia/srlinux:23.7.1   | srl   | running | 172.20.20.5/24 | 2001:172:20:20::5/64 |
| 5 | clab-evpn-mh-leaf3  | 9d67b788a7a2 | ghcr.io/nokia/srlinux:23.7.1   | srl   | running | 172.20.20.4/24 | 2001:172:20:20::4/64 |
| 6 | clab-evpn-mh-spine1 | 244a0dd574a2 | ghcr.io/nokia/srlinux:23.7.1   | srl   | running | 172.20.20.8/24 | 2001:172:20:20::8/64 |
+---+---------------------+--------------+--------------------------------+-------+---------+----------------+----------------------+

When containerlab completes the deployment, it prints a summary table with the connection details of the deployed nodes. The "Name" column holds the names of the deployed containers; you can use these names to reach the nodes, e.g. to connect to leaf1 via SSH:

ssh admin@clab-evpn-mh-leaf1   # default credentials admin:NokiaSrl1!

To connect to the Linux hosts (CEs):

ssh admin@clab-evpn-mh-ce1   # credentials admin:srllabs@123

or

docker exec -it clab-evpn-mh-ce1 bash
docker exec -it clab-evpn-mh-ce2 bash

The fabric comes up with L2 EVPN service deployed and operational. You can check the status of the EVPN service using verification commands listed in the L2 EVPN Basics tutorial.
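
For example, to see the overlay BGP sessions and the MAC addresses learned in the broadcast domain, you can run the following on any leaf (a quick sketch, output omitted):

# EVPN address family sessions and the MAC-VRF bridge table
show network-instance default protocols bgp neighbor
show network-instance mac-vrf-1 bridge-table mac-table all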

EVPN Multihoming Terminology#

Before we dive into the practicalities, let's look at some terms specific to EVPN multihoming.

Ethernet Segment (ES)#

Defines the set of CE links associated with multiple PEs (up to 4). An ES is configured on all PEs that the multi-homed CE is connected to and is identified by a unique Ethernet Segment Identifier (ESI) that is advertised via EVPN.

Ethernet segments
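
For orientation, this is roughly how an ES is defined on SR Linux; a minimal sketch, assuming a LAG named lag1 (the ES name and the ESI value are illustrative; see the EVPN-VXLAN guide for the exact model):

# an Ethernet segment bound to lag1 (name, ESI and lag1 are illustrative)
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 admin-state enable
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 esi 00:11:22:33:44:55:66:77:88:01
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 interface lag1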

Multihoming Modes#

The standard defines two multihoming modes: single-active and all-active. In single-active mode, the CE device uses only one uplink towards the leaves; in all-active mode, all links are used and traffic is load-balanced across them. This tutorial covers the all-active multihoming scenario.

EVPN multihoming modes

A LAG is a logical bundle of individual interfaces/ports; it is required for all-active mode but optional for single-active mode.
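
To sketch how these pieces fit together on SR Linux (interface and object names again illustrative), member ports are bound to an LACP-enabled LAG, and the ES from the sketch above is put in all-active mode:

# LACP-enabled LAG with a member port (illustrative)
set / interface lag1 admin-state enable
set / interface lag1 lag lag-type lacp
set / interface ethernet-1/1 ethernet aggregate-id lag1
# the multihoming mode is a property of the Ethernet segment
set / system network-instance protocols evpn ethernet-segments bgp-instance 1 ethernet-segment ES-1 multi-homing-mode all-active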

MAC-VRF#

An L2 network instance, essentially a broadcast domain in SR Linux. An interface or LAG must be attached to a MAC-VRF for L2 multihoming.
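
Continuing the sketch above, attaching the multi-homed LAG to the MAC-VRF is then a matter of adding its bridged subinterface:

# bridged subinterface on the LAG, added to the MAC-VRF (illustrative)
set / interface lag1 subinterface 0 type bridged
set / network-instance mac-vrf-1 interface lag1.0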

Advanced Multihoming Procedures#

The following procedures are essential for EVPN multihoming, but aren't typical configuration items:

  • Designated Forwarder (DF): The leaf elected to forward BUM traffic towards the multi-homed CE. The election is based on the route-type 4 (RT4) exchange, known as the ES routes of EVPN.
  • Split-horizon (local bias): A mechanism that prevents BUM traffic originated by a CE from being looped back to it by a peer leaf. Local bias is used in all-active mode and is also based on the RT4 exchange.
  • Aliasing: Allows a remote leaf to load-balance traffic across all the peer leaves that advertise the same ESI in route-type 1 (RT1).
  • Fast convergence: Ensures that traffic is quickly rerouted in the event of a failure. With RT1 updates, a remote leaf can quickly remove a failed destination from the ESI, without depending on individual RT2 withdrawals.

EVPN route types 1 and 4 are used to implement the multihoming procedures.
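
Once Ethernet segments are configured, these procedures can be observed from the CLI; for instance (a sketch, output omitted):

# ES state, interfaces and DF election results, per leaf
show system network-instance ethernet-segments
# EVPN ES (RT4) and auto-discovery (RT1) routes received via BGP
show network-instance default protocols bgp routes evpn route-type 4 summary
show network-instance default protocols bgp routes evpn route-type 1 summary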

For more information about EVPN multihoming procedures and route types, consult the EVPN-VXLAN Guide.

Let's now move on to the configuration part.


