I use Containerlab as my go-to tool for creating virtual labs, and I love it so far! After considering whether I should write a dedicated blog post about what Containerlab is and how it differs from tools like EVE-NG and GNS3, I decided not to, because there are already great blog posts about that, like the one written by Suresh Vina: https://www.packetswitch.co.uk/containerlabs-intro/
I don't think it would add much value if I wrote another article about something someone else has already done a great job on. That's why in this blog post I will focus on one specific topic within Containerlab.
Sometimes, pushing a base configuration to the nodes can be a very tedious task, especially when you are running a lot of nodes in one topology. One approach to pushing configuration to a batch of nodes is to use handy tools like Netmiko and Nornir. With those tools you can create powerful scripts that push configuration to multiple devices simultaneously.
In this blog post however, I will focus on a feature that Containerlab provides out of the box - the startup-config statement. I will use vJunos-Switch to demonstrate this.
P.S.: If you got excited reading about Netmiko and Nornir and are now a bit disappointed, don't worry! We will cover these in upcoming posts as well.
Now, let's dive into the actual topic!
Defining the startup-config
In this example, I will show you how to define a startup-config on a vJunos-Switch.
The startup-config can be defined in a simple .txt file just like the following one:
# ./vjunos_switch_cfg.txt
vlans {
    MGMT {
        vlan-id 100;
    }
    CORP {
        vlan-id 200;
    }
    VOIP {
        vlan-id 300;
    }
    GUEST {
        vlan-id 400;
    }
}
protocols {
    lldp {
        interface all;
    }
}
system {
    name-server {
        8.8.8.8;
        8.8.4.4;
    }
    ntp {
        server 8.8.8.8;
    }
}
In this example, I defined a couple of VLANs, enabled LLDP on all interfaces, and set DNS servers and an NTP server.
This file is located in the same directory as the topology file:
.
├── lab_with_startup-config.yml
└── vjunos_switch_cfg.txt
You then have to use the startup-config statement in the topology file to tell the switch where it can find the config. This is what the topology file looks like:
name: with_startup-config
topology:
  nodes:
    vswitch1:
      kind: juniper_vjunosswitch
      image: vrnetlab/juniper_vjunos-switch:23.4R2-S2.1
      startup-config: vjunos_switch_cfg.txt
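With both files in place, deploying is a single command. The sketch below recreates the topology file from this post and shows the deploy invocation; the deploy line is commented out since it assumes Containerlab is installed and the vJunos image is available locally:

```shell
# Recreate the topology file from this post (run from an empty directory).
cat > lab_with_startup-config.yml <<'EOF'
name: with_startup-config
topology:
  nodes:
    vswitch1:
      kind: juniper_vjunosswitch
      image: vrnetlab/juniper_vjunos-switch:23.4R2-S2.1
      startup-config: vjunos_switch_cfg.txt
EOF
# Deploy the lab (requires Containerlab; sudo may be needed on your setup):
# sudo containerlab deploy -t lab_with_startup-config.yml
```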
If you now deploy the topology, a new directory named after your topology will be created, which is the usual behaviour when deploying a topology for the first time. Inside it you will find subdirectories named after the nodes defined in that topology. Under our vswitch1 we can now find a config directory containing our startup-config:
.
├── clab-with_startup-config
│   ├── ansible-inventory.yml
│   ├── authorized_keys
│   ├── topology-data.json
│   └── vswitch1
│       └── config
│           └── startup-config.cfg
├── lab_with_startup-config.yml
└── vjunos_switch_cfg.txt
The .cfg file looks exactly like our .txt file:
# ./clab-with_startup-config/vswitch1/config/startup-config.cfg
vlans {
    MGMT {
        vlan-id 100;
    }
    CORP {
        vlan-id 200;
    }
    VOIP {
        vlan-id 300;
    }
    GUEST {
        vlan-id 400;
    }
}
protocols {
    lldp {
        interface all;
    }
}
system {
    name-server {
        8.8.8.8;
        8.8.4.4;
    }
    ntp {
        server 8.8.8.8;
    }
}
Checking the Device Config
Now, let's check if the startup-config really got applied to our switch.
The default credentials for logging in to a vJunos-Switch node in Containerlab are admin:admin@123.
$ ssh admin@clab-with_startup-config-vswitch1
Warning: Permanently added 'clab-with_startup-config-vswitch1' (ED25519) to the list of known hosts.
(admin@clab-with_startup-config-vswitch1) Password:
--- JUNOS 23.4R2-S2.1 Kernel 64-bit JNPR-12.1-20240604.39c9257_buil
admin@vswitch1>
Let's check the configured VLANs on the device:
admin@vswitch1> show configuration vlans
CORP {
    vlan-id 200;
}
GUEST {
    vlan-id 400;
}
MGMT {
    vlan-id 100;
}
VOIP {
    vlan-id 300;
}
Looks good. How about the DNS and NTP servers?
admin@vswitch1> show configuration system
host-name vswitch1;
root-authentication {
    encrypted-password "$6$k8mfAQas$KUvysXOkgCb7.xF3GCBpXiE8Mc19hUew8lytGlH7NRGJ1SuEWvm8yTaQFtUh3B50veAevI88B0kghIV9LBJ.W."; ## SECRET-DATA
}
login {
    user admin {
        uid 2000;
        class super-user;
        authentication {
            encrypted-password "$6$8t/TGmYX$WDgbrY6c0NPTxVxlsdDRjPfoqZsxU4bKrN.UnTyKWV3rsGzpm3RJ1H9.4mmMsB7F7e6TLHaxRyQmmvTJvIQRx."; ## SECRET-DATA
        }
    }
}
services {
    netconf {
        ssh;
    }
    ssh {
        root-login allow;
    }
}
management-instance;
name-server {
    8.8.8.8;
    8.8.4.4;
}
ntp {
    server 8.8.8.8;
}
They are also there. So far so good. Last check - LLDP:
admin@vswitch1> show configuration protocols
lldp {
    interface all;
}
Great, our startup-config got applied correctly!
Remote startup-config
It is also possible to define a remote HTTPS location for a startup-config file. That means you could, for example, reference the actual config files of your real network devices, as long as your Containerlab instance can reach those remote locations. That way, you can replicate your real network with live configs in a virtual environment and test things with minimal configuration overhead. Isn't that cool?
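A topology using a remote startup-config could look like the following sketch. The URL is a made-up placeholder; replace it with a location your Containerlab host can actually reach:

```yaml
name: with_remote_startup-config
topology:
  nodes:
    vswitch1:
      kind: juniper_vjunosswitch
      image: vrnetlab/juniper_vjunos-switch:23.4R2-S2.1
      # Hypothetical URL - point this at a reachable config file
      startup-config: https://configs.example.com/vswitch1.cfg
```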
Cleanup of startup-config
If you destroy a lab, your startup-configs won't be deleted. That means that if you remove the startup-config attribute from the topology file and redeploy the topology, the devices will still load the previously defined startup-config: the files are downloaded during the deployment process and are not removed automatically, so you have to remove them manually. They will, however, be overwritten if you make a change to your defined startup-config file.
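The manual cleanup is just removing the cached file (or the whole generated lab directory). The sketch below recreates the layout from this post and then removes it; if I remember correctly, containerlab destroy also accepts a --cleanup flag that removes the lab directory for you, so check your version's docs:

```shell
# Recreate the directory layout Containerlab generated for this lab ...
mkdir -p clab-with_startup-config/vswitch1/config
touch clab-with_startup-config/vswitch1/config/startup-config.cfg
# ... and remove it manually, cached startup-config included:
rm -rf clab-with_startup-config
```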
You can find additional information about this topic in the Containerlab nodes manual.