To start deploying labs orchestrated by KNE, a user needs to install the `kne` command line utility and have a k8s cluster available. Follow the KNE setup instructions to install `kne` and its dependencies.
We used the following components and their versions in this tutorial:
By following the setup instructions, you should have the following utilities successfully installed:
Once the necessary utilities are installed, proceed with the KNE cluster installation. A KNE cluster consists of the following high-level components:
- Kind cluster: A kind-based k8s cluster to allow automated deployment.
- Load balancer service: a load balancer service used in the KNE cluster to allow external access to the nodes. Supported LB services: MetalLB.
- CNI: configuration of a CNI plugin used in the KNE cluster to lay out L2 links between the network nodes deployed in a cluster. Supported CNI plugins: meshnet-cni.
- External controllers: an optional list of external controllers that manage custom resources.
KNE provides a cluster manifest file (aka "deployment file") along with the `kne deploy` command to install cluster components.[^3]
The deployment file contains a `controllers` section that enables automated installation of external controllers, such as srl-controller. KNE pins particular versions of external controllers to guarantee compatibility between the KNE and controller layers. For example, KNE v0.1.9 deploys srl-controller v0.5.0. If a user wants to use a different version of a controller, they need to remove the controller from the `controllers` list and install it manually.
With the deployment file passed to `kne deploy` and following the cluster deployment instructions, cluster installation boils down to a single command:
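The exact deployment file name varies between KNE releases; assuming the kind-based manifest shipped in the KNE repository, the invocation could look like this:

```shell
# deploy all KNE cluster components (kind cluster, MetalLB, meshnet-cni,
# external controllers) using a deployment file from the kne repository;
# the file path below is an assumption - check your kne checkout for the exact name
kne deploy deploy/kne/kind-bridge.yaml
```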
The deployment process should finish without errors, stating that every component of a KNE cluster has been deployed successfully. At this point, it is helpful to check that the cluster and its components are healthy.
Ensure that a kind cluster named `kne` is active.
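For instance, listing the kind clusters should show the `kne` cluster (output depends on your environment):

```shell
# list kind clusters; the output should include a cluster named "kne"
kind get clusters
```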
Check that `kubectl` is configured to work with the cluster:
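One way to verify this, assuming the default context name `kind-kne` that kind creates for a cluster named `kne`:

```shell
# confirm kubectl can reach the control plane of the kind-kne cluster
kubectl cluster-info --context kind-kne
```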
Verify that the `meshnet` CNI is running as a daemonset:
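Assuming meshnet-cni is deployed in its default `meshnet` namespace, the daemonset status can be checked with:

```shell
# check the meshnet daemonset; DESIRED/READY counts should match the node count
kubectl get daemonset -n meshnet
```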
## SR Linux controller
SR Linux controller manages SR Linux containers deployment on top of the KNE clusters and provides the necessary APIs for KNE to deploy SR Linux nodes as part of the network topology. It is automatically installed by the KNE CLI tool.
Installing SR Linux controller manually
SR Linux controller is an open-source project hosted in the srl-labs/srl-controller repository and can be easily installed on a k8s cluster as per its installation instructions, for example, to test a version that has not yet been released or adopted by KNE:
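Per the srl-controller installation instructions, a kustomize-based install can be applied straight from the repository (pinning a specific ref is omitted here for brevity):

```shell
# install srl-controller into the cluster using kubectl's remote kustomize build
kubectl apply -k https://github.com/srl-labs/srl-controller/config/default
```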
Additional controllers can be installed by following the respective installation instructions provided in the KNE documentation.
Once srl-controller is installed successfully, it can be seen in its namespace as a deployment:
```
❯ kubectl get deployments -n srlinux-controller
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
srlinux-controller-controller-manager   1/1     1            1           12m
```
If a user intends to run a topology with chassis-based SR Linux nodes[^4], they must install a valid license.
The same lab can be used with unlicensed IXR-D/H variants; to adapt the lab to unlicensed SR Linux variants, users need to:

- remove the `model: "ixr6e"` string from the KNE topology file
- remove the openconfig configuration blob from the startup-config file
In the case of a kind cluster, it is advised to load container images into the kind cluster preemptively. Doing so ensures that the necessary images are present in the cluster when KNE creates network topologies.
To load the SR Linux container image into the kind cluster:
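Assuming the SR Linux image is already present in the local Docker image store, it can be side-loaded with kind's image loader (the image name and tag below are placeholders; use the version you pulled):

```shell
# copy the locally pulled SR Linux image into the kind cluster named "kne"
kind load docker-image ghcr.io/nokia/srlinux:latest --name kne
```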
[^1]: The tutorial is based on this particular release, but newer releases might work as well.
[^2]: For this tutorial, we leverage kind (Kubernetes in Docker) to stand up a personal k8s installation. Using kind is not a hard requirement but merely an easy and quick way to get a personal k8s cluster.
[^3]: Users are free to install cluster components manually. `kne deploy` aims to automate the prerequisites installation using the tested configurations.
[^4]: Hardware types ixr6/10, ixr-6e/10e.