
Project structure#

The Ansible Inventory#

In this project, we use the native file-based Ansible inventory. It lists the hosts that are part of the fabric and groups them in a way that reflects the fabric topology. The inventory file, ansible-inventory.yml, is located in the inv directory; the group_vars directory contains connectivity parameters for specific device groups, like srl for SR Linux.

β”œβ”€β”€ ansible-inventory.yml
└── group_vars
    └── srl.yml

Ansible is instructed to use this inventory file by setting inventory = inv in the ansible.cfg configuration file.
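
In ansible.cfg this looks roughly as follows (a minimal sketch; any other settings the project's ansible.cfg may contain are omitted):

```ini
[defaults]
inventory = inv
```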

The ansible-inventory.yml defines four groups:

  • srl - for all SR Linux nodes
  • spine - for the spine nodes
  • leaf - for the leaf nodes
  • hosts - for emulated hosts
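
A minimal sketch of such an inventory is shown below. The host names and the exact group nesting are illustrative, not taken from the project; the sketch assumes spine and leaf are children of srl, so that group_vars/srl.yml applies to all SR Linux nodes:

```yaml
# inv/ansible-inventory.yml (illustrative)
all:
  children:
    srl:                # all SR Linux nodes
      children:
        spine:
          hosts:
            spine1:
            spine2:
        leaf:
          hosts:
            leaf1:
            leaf2:
    hosts:              # emulated hosts attached to the fabric
      hosts:
        h1:
        h2:
```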


Intents#

Intents describe the desired state of the fabric via structured data in YAML files. The files are stored in an intent directory that is passed as an extra-var to the ansible-playbook command, e.g. ansible-playbook -e intent_dir=<absolute path to the intent dir>
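
For example, assuming the intents live in /home/user/fabric/intents (an illustrative path):

```shell
ansible-playbook cf_fabric.yml -e intent_dir=/home/user/fabric/intents
```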

There are two types of intents:

Level-1 intents#

Level-1 intents are infrastructure-level intents that describe per-device configuration following an abstracted device model. Each top-level resource has a custom data model that is close to, but distinct from, the SR Linux data model. This device-abstraction layer makes it possible to support multiple NOS types (like SROS) and also shields the defined intents from device-model changes across releases.

The data models for these intents are defined per level-1 resource (e.g. network_instance, interface, system, ...) in json-schema format in the directory playbooks/roles/infra/criteria.

Level-1 intents can be defined at host- and group-level (as defined in the Ansible inventory). Host-level intent file names must start with host_infra (e.g. host_infra.yml) and group-level intent file names must start with group_infra. An example of a group-level infra intent is:

group_infra.yml (partial)

leaf:
  interfaces:
    ethernet-1/{1..4}:
      admin_state: enable
      vlan_tagging: yes
    irb1:
      admin_state: enable

leaf references a group in the Ansible inventory; the configuration applies to all nodes in that group. Intent files support ranges, as shown in the example above.

Host-level intents follow the same device model and may have definitions that overlap with the group-level intents. In that case, host-level intents always take precedence over group-level intents.
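
As an illustration of this precedence, assuming host-level intents are keyed by inventory hostname and using made-up node and interface names:

```yaml
# host_infra.yml (illustrative)
leaf1:
  interfaces:
    ethernet-1/1:
      admin_state: disable   # overrides a group-level 'enable' for node leaf1 only
```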

Level-2 intents#

Level-2 intents sit at a higher abstraction layer and describe fabric-wide intent. They include fabric intents, which describe high-level underlay parameters, and service intents such as bridge domains (l2vpn), inter-subnet routing (l3vpn) and multi-homing (lags and ethernet-segments).
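
To give a feel for the abstraction level, a hypothetical bridge-domain (l2vpn) intent might look as follows. The actual data model is defined by the json-schema of the corresponding role, and all names below are made up:

```yaml
# Illustrative only, not the project's actual l2vpn data model
l2vpn:
  bd-100:                 # a fabric-wide bridge domain
    id: 100               # service identifier
    interface_list:       # member sub-interfaces per node
      leaf1:
        - ethernet-1/10.100
      leaf2:
        - ethernet-1/10.100
```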

The data model for each level-2 intent type is defined in the respective role that transforms level-2 intent into level-1 intent.

We'll discuss these in more detail later in this tutorial when we configure the fabric.

The Ansible Playbook#

The Ansible playbook cf_fabric.yml is the main entry point for the project. Its first play applies a sequence of roles to the fabric nodes; a second play confirms pending commits when a commit-confirm timeout is set:

- name: Configure fabric
  gather_facts: false
  hosts:
    - leaf
    - spine
    - borderleaf
    - superspine
    - dcgw
  roles:
    ## INIT - Set device facts
    - role: initialize
      tags: [always]
    ## INFRA - Load infrastructure-related intent
    - role: fabric
      vars:
        intent_dir: "{{ intent_dir }}"
      tags:
        - infra
    - role: infra
      vars:
        intent_dir: "{{ intent_dir }}"
      tags: [infra]
    ## SERVICES - Load/generate service-related intent
    - role: services          # Loads l2vpn and l3vpn intents from ./intent dir
      vars:
        intent_dir: "{{ intent_dir }}"
      tags: [services]
    - role: mh_access         # Generates low-level intent from 'mh_access' intent
      vars:
        mh_access: mh_access  # make input explicit, 'mh_access' is generated by role 'services' (redundant)
      tags: [services, mh_access]
    - role: l2vpn             # Generates low-level intent from 'l2vpn' intent
      vars:
        l2vpn: l2vpn          # make input explicit, 'l2vpn' is generated by role 'services' (redundant)
      tags: [services, l2vpn]
    - role: l3vpn             # Generates low-level intent from 'l3vpn' intent
      vars:
        l3vpn: l3vpn          # make input explicit, 'l3vpn' is generated by role 'services' (redundant)
      tags: [services, l3vpn]
    ## CONFIG PUSH - Generate low-level JSON-RPC data from low-level intent and set device config
    - role: configure
      vars:
        purge: true           # purge resources from device not in intent, set with --extra-vars "purge=false"
        save_startup: false   # save config to startup-config, override with --extra-vars "save_startup=true" to ansible-playbook
        commit_confirm_timeout: "{{ confirm_timeout | default(0) | int }}"   # confirm timeout in seconds
        purgeable:
          - interface
          - subinterface
          - network-instance
          - tunnel-interface
          - bfd
          - es
      tags: [always]

- name: Commit changes when confirm_timeout is set
  gather_facts: false
  hosts:
    - leaf
    - spine
    - borderleaf
    - superspine
  tasks:
    - name: Commit changes
      when: confirm_timeout | default(0) | int > 0
      block:
        - name: Pausing playbook before confirming commits
          ansible.builtin.pause:
            seconds: "{{ confirm_timeout | default(0) | int - 5 }}"  # 5 seconds less than confirm_timeout
            prompt: "Abort and allow commits to revert in {{ confirm_timeout | int }} secs.\nContinue or wait to go ahead and confirm commits"
        - name: Get commits
          nokia.srlinux.get:
            paths:
              - path: /system/configuration/commit
                datastore: state
          register: commits
#        - ansible.builtin.debug:
#            var: commits
        - name: Check for commits requiring confirmation
          ansible.builtin.set_fact:
            unconfirmed_commits: "{{ commits.result[0].commit | selectattr('status', 'equalto', 'unconfirmed') | list }}"
        - ansible.builtin.debug:
            var: unconfirmed_commits
        - name: Confirm commits
          nokia.srlinux.config:
            datastore: tools
            update:
              - path: /system/configuration/confirmed-accept
          when: unconfirmed_commits | length > 0
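
The tags in the playbook make it possible to run only part of the flow, and several role variables can be overridden from the command line. For instance (paths are illustrative):

```shell
# Apply only the infra-related roles (roles tagged 'always' still run)
ansible-playbook cf_fabric.yml -e intent_dir=/home/user/fabric/intents -t infra

# Full run, but keep device resources that have no matching intent
ansible-playbook cf_fabric.yml -e intent_dir=/home/user/fabric/intents -e purge=false
```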

The playbook is structured as follows:

  1. the hosts variable at play-level defines the hosts that are part of the fabric. In this case, all hosts in the leaf and spine groups. Group definitions and membership are defined in the inventory file.
  2. the roles variable defines the roles that are applied, in order, to the hosts defined in the hosts section. The roles are grouped in 4 sections: INIT, INFRA, SERVICES and CONFIG PUSH.
    • INIT: This section initializes some extra global variables or Ansible facts that are used by other roles. These facts include:
      • the current 'running config' of the device
      • the SR Linux software version
      • the LLDP neighborship states
    • INFRA: This section has 2 roles:
      • fabric: this role generates level-1 intents based on a fabric intent defined in the intent directory. If no fabric intent file is present, this role has no effect (it is skipped)
      • infra: this role validates and merges group- and host infra intents to form a level-1 per-device infra intent.
    • SERVICES: This section validates the level-2 intents (services role); each of the remaining roles in this section transforms a level-2 intent into a per-device level-1 intent.
    • CONFIG PUSH: This section applies configuration to the nodes. This is where the level-1 intent is transformed into actual device configuration. It can also prune resources that exist on the device but have no matching intent; this requires the purge variable of the configure role to be set to true. The list of purgeable resources is configurable via the purgeable variable of the same role.
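
The commit-confirm behaviour implemented by the second play is triggered by passing a non-zero confirm_timeout (path illustrative):

```shell
# Push config with a 60-second commit-confirm window; the second play
# then pauses and confirms any unconfirmed commits before they revert
ansible-playbook cf_fabric.yml -e intent_dir=/home/user/fabric/intents -e confirm_timeout=60
```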

The following diagram gives an overview of how the low-level device intent is constructed by the various roles:

Transforming high-level intent to device configuration