Project structure#
The Ansible Inventory#
In this project, we use the native file-based Ansible inventory. It lists the hosts that are part of the fabric and groups them in a way that reflects the fabric topology. The inventory file - `ansible-inventory.yml` - is located in the `inv` directory; `group_vars` contains connectivity parameters for specific device groups, like `srl` for SR Linux.

Ansible is instructed to use this inventory file by setting `inventory = inv` in the `ansible.cfg` configuration file.
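A minimal `ansible.cfg` carrying this setting might look as follows (a sketch; the actual file may contain additional options):

```ini
# ansible.cfg (sketch) - points Ansible at the file-based inventory
[defaults]
inventory = inv
```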
The `ansible-inventory.yml` defines four groups:

- `srl` - for all SR Linux nodes
- `spine` - for the spine nodes
- `leaf` - for the leaf nodes
- `hosts` - for emulated hosts
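For orientation, a file-based inventory with these four groups might look like the sketch below. The group nesting (`spine` and `leaf` as children of `srl`) and the hostnames are assumptions for illustration, not the actual file:

```yaml
# ansible-inventory.yml - illustrative sketch; hostnames are hypothetical
all:
  children:
    srl:            # all SR Linux nodes
      children:
        spine:
          hosts:
            spine1:
        leaf:
          hosts:
            leaf1:
    hosts:          # emulated hosts
      hosts:
        h1:
```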
Intents#
Intents describe the desired state of the fabric via structured data in YAML files. The files are stored in an intent directory that is specified as an extra-vars option to the `ansible-playbook` command, e.g. `ansible-playbook -e intent_dir=<absolute path to the intent dir>`.
There are 2 types of intents:
Level-1 intents#
Level-1 intents are infrastructure-level intents and describe per-device configuration following an abstracted device model. Each top-level resource has a custom data model that is close to, but distinct from, the SR Linux data model. This device abstraction layer makes it possible to support multiple NOS types (like SROS) and also shields the defined intents from device-model changes across releases.
The data models for these intents are defined per level-1 resource (e.g. `network_instance`, `interface`, `system`, ...) in JSON Schema format in the directory `playbooks/roles/infra/criteria`.
Level-1 intents can be defined at host and group level (as defined in the Ansible inventory). Host-level intent filenames need to start with `host_infra`, e.g. `host_infra.yml`, and group-level intent filenames have to start with `group_infra`. An example of a group-level infra intent is:
group_infra.yml (partial)

```yaml
leaf:
  interfaces:
    ethernet-1/{1..4,10,49..50}:
      admin_state: enable
    ethernet-1/{1..4,10}:
      vlan_tagging: yes
    ethernet-1/{49..50}:
    irb1:
    system0:
      admin_state: enable
...
```
`leaf` references a group in the Ansible inventory; the intent applies to all nodes in that group. Intent files support ranges, as shown in the example above.

Node-level intents follow the same device model and may have definitions that overlap with the group-level intents. Host-level intents always take precedence over group-level intents.
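The range notation can be illustrated with a small Python helper. This is an illustrative re-implementation only; the project's actual expansion logic lives in its roles and may differ:

```python
import re

def expand_ranges(pattern: str) -> list[str]:
    """Expand a name like 'ethernet-1/{1..4,10}' into individual names.

    Illustrative sketch of the range notation used in intent files.
    """
    m = re.search(r"\{([^}]*)\}", pattern)
    if not m:
        return [pattern]  # no range expression present
    numbers: list[int] = []
    for part in m.group(1).split(","):
        if ".." in part:  # inclusive range, e.g. '49..50'
            lo, hi = part.split("..")
            numbers.extend(range(int(lo), int(hi) + 1))
        else:             # single value, e.g. '10'
            numbers.append(int(part))
    prefix, suffix = pattern[: m.start()], pattern[m.end():]
    return [f"{prefix}{n}{suffix}" for n in numbers]

print(expand_ranges("ethernet-1/{1..4,10}"))
# ['ethernet-1/1', 'ethernet-1/2', 'ethernet-1/3', 'ethernet-1/4', 'ethernet-1/10']
```

So `ethernet-1/{1..4,10,49..50}` in the example above addresses seven interfaces with one key.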
Level-2 intents#
Level-2 intents sit at a higher abstraction layer and describe fabric-wide intent: fabric intents that describe high-level underlay parameters, service intents such as bridge domains (L2VPN) and inter-subnet routing (L3VPN), and multi-homing intents (LAGs and Ethernet segments).

The data model for each level-2 intent type is defined in the respective role that transforms the level-2 intent into level-1 intent:
- FABRIC intent schema is defined in `playbooks/roles/fabric/criteria/fabric_intent.json`. The intent file must have `fabric` in its name
- L2VPN intent schema is defined in `playbooks/roles/l2vpn/criteria/l2vpn.json`. Intent files must start with `l2vpn`, e.g. `l2vpn_acme.yml`
- L3VPN intent schema is defined in `playbooks/roles/l3vpn/criteria/l3vpn.json`. Intent files must start with `l3vpn`
- Multi-homing schema is defined in `playbooks/roles/mh_access/criteria/mh_access.json`. Intent files must start with `mh_access`
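Putting these naming rules together, an intent directory could look like the hypothetical listing below. Only `l2vpn_acme.yml` is named in the text above; the other filenames merely follow the stated conventions:

```
intent/
├── fabric.yml          # 'fabric' in its name → FABRIC intent
├── group_infra.yml     # group-level level-1 intent
├── host_infra.yml      # host-level level-1 intent
├── l2vpn_acme.yml      # starts with 'l2vpn'
├── l3vpn_acme.yml      # starts with 'l3vpn'
└── mh_access.yml       # starts with 'mh_access'
```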
We'll discuss these in more detail later in this tutorial when we configure the fabric.
The Ansible Playbook#
The Ansible playbook `cf_fabric.yml` is the main entry point for the project. Its first play applies a sequence of roles to all fabric nodes (the `leaf`, `spine`, `borderleaf`, `superspine` and `dcgw` groups):
cf_fabric.yml

```yaml
- name: Configure fabric
  gather_facts: false
  hosts:
    - leaf
    - spine
    - borderleaf
    - superspine
    - dcgw
  roles:
    ## INIT - Set device facts
    - role: initialize
      tags: [always]
    ## INFRA - Load infrastructure-related intent
    - role: fabric
      vars:
        intent_dir: "{{ intent_dir }}"
      tags:
        - infra
    - role: infra
      vars:
        intent_dir: "{{ intent_dir }}"
      tags: [infra]
    ## SERVICES - Load/generate service-related intent
    - role: services # Loads l2vpn and l3vpn intents from ./intent dir
      vars:
        intent_dir: "{{ intent_dir }}"
      tags: [services]
    - role: mh_access # Generates low-level intent from 'mh_access' intent
      vars:
        mh_access: mh_access # make input explicit, 'mh_access' is generated by role 'services' (redundant)
      tags: [services, mh_access]
    - role: l2vpn # Generates low-level intent from 'l2vpn' intent
      vars:
        l2vpn: l2vpn # make input explicit, 'l2vpn' is generated by role 'services' (redundant)
      tags: [services, l2vpn]
    - role: l3vpn # Generates low-level intent from 'l3vpn' intent
      vars:
        l3vpn: l3vpn # make input explicit, 'l3vpn' is generated by role 'services' (redundant)
      tags: [services, l3vpn]
    ## CONFIG PUSH - Generate low-level JSON-RPC data from low-level intent and set device config
    - role: configure
      vars:
        purge: true # purge resources from device not in intent, set with --extra-vars "purge=false"
        save_startup: false # save config to startup-config, override with --extra-vars "save_startup=true"
        commit_confirm_timeout: "{{ confirm_timeout | default(0) | int }}" # confirm timeout in seconds
        purgeable:
          - interface
          - subinterface
          - network-instance
          - tunnel-interface
          - bfd
          - es
      tags: [always]

- name: Commit changes when confirm_timeout is set
  gather_facts: false
  hosts:
    - leaf
    - spine
    - borderleaf
    - superspine
  tasks:
    - name: Commit changes
      when: confirm_timeout | default(0) | int > 0
      block:
        - name: Pausing playbook before confirming commits
          ansible.builtin.pause:
            seconds: "{{ confirm_timeout | default(0) | int - 5 }}" # 5 seconds less than confirm_timeout
            prompt: "Abort and allow commits to revert in {{ confirm_timeout | int }} secs.\nContinue or wait to go ahead and confirm commits"
        - name: Get commits
          nokia.srlinux.get:
            paths:
              - path: /system/configuration/commit
                datastore: state
          register: commits
        # - ansible.builtin.debug:
        #     var: commits
        - name: Check for commits requiring confirmation
          ansible.builtin.set_fact:
            unconfirmed_commits: "{{ commits.result[0].commit | selectattr('status', 'equalto', 'unconfirmed') | list }}"
        - ansible.builtin.debug:
            var: unconfirmed_commits
        - name: Confirm commits
          nokia.srlinux.config:
            datastore: tools
            update:
              - path: /system/configuration/confirmed-accept
          when: unconfirmed_commits | length > 0
```
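The `selectattr` filter used in the "Check for commits requiring confirmation" task can be illustrated in plain Python. The commit entries below are hypothetical examples shaped like `/system/configuration/commit` state data, for illustration only:

```python
# Hypothetical commit entries mimicking /system/configuration/commit state
commits = [
    {"id": 1, "status": "applied"},
    {"id": 2, "status": "unconfirmed"},
    {"id": 3, "status": "unconfirmed"},
]

# Equivalent of: commits | selectattr('status', 'equalto', 'unconfirmed') | list
unconfirmed_commits = [c for c in commits if c.get("status") == "unconfirmed"]
print(len(unconfirmed_commits))  # → 2
```

Only when this list is non-empty does the final task issue the `confirmed-accept` via the tools datastore.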
The playbook is structured as follows:

- the `hosts` variable at play level defines the hosts that are part of the fabric - in this case, all hosts in the groups listed (`leaf`, `spine`, `borderleaf`, `superspine` and `dcgw`). Group definition and membership are defined in the inventory file.
- the `roles` variable defines the roles that are applied to the hosts defined in the `hosts` section. The roles are applied in the order they are defined in the playbook and are grouped in 4 sections: INIT, INFRA, SERVICES and CONFIG-PUSH.
    - INIT: This section initializes some extra global variables or Ansible facts that are used by other roles. These facts include:
        - the current 'running config' of the device
        - the SR Linux software version
        - the LLDP neighborship states
    - INFRA: This section has 2 roles:
        - `fabric`: this role generates level-1 intents based on a fabric intent defined in the intent directory. If no fabric intent file is present, this role has no effect (it is skipped)
        - `infra`: this role validates and merges group- and host-level infra intents to form a level-1 per-device infra intent.
    - SERVICES: This section validates the level-2 intents (`services` role); each of the remaining roles in this section transforms a level-2 intent into a per-device level-1 intent.
    - CONFIG PUSH: This section applies the configuration to the nodes. This is where the level-1 intent is transformed into actual device configuration. It also has the capability to prune resources that exist on the device but have no matching intent; this requires the `purge` variable of the `configure` role to be set to `true`. The list of purgeable resources is also configurable, via the role's `purgeable` variable.
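Since every role carries tags, partial runs are possible. The commands below are illustrative; the intent-directory paths are placeholders:

```
# Full run: generate intents and push config
ansible-playbook cf_fabric.yml -e intent_dir=/path/to/intent

# Run only the infra-tagged roles (roles tagged 'always' still run)
ansible-playbook cf_fabric.yml -e intent_dir=/path/to/intent --tags infra

# Keep resources on the device that have no matching intent
ansible-playbook cf_fabric.yml -e intent_dir=/path/to/intent -e purge=false
```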
- INIT: This section initializes some extra global variables or Ansible facts that are used by other roles. These facts include:
The following diagram gives an overview of how the low-level device intent is constructed by the various roles: