
Creating a Juniper Mist Lab with Containerlab

I have been working with Juniper Mist for almost two years now and I absolutely love it! I helped a lot of our customers implement Juniper Mist and migrate from traditionally maintained networks to this excellent cloud-hosted solution. It is pretty intuitive and once you play around with it a bit, you will quickly get the hang of it.

Speaking of playing around and getting familiar with Juniper Mist, today I have a really interesting topic for everyone out there who wants to get into it and learn how to manage a switching network with Mist. Stay tuned till the end, you won't regret it, I promise! In this post, I assume that you already have basic knowledge of how to navigate the Mist dashboard and how to do simple tasks like renaming a switch and assigning it to a site.


Prerequisites


You don't really need much to run a lab you can play around with in Mist. If you want to follow along with this blog post, here is what you will need:


  1. A Juniper Mist account

  2. A Containerlab instance with vJunos-Switch installed


That's it!

Creating a Juniper Mist account is very simple. Navigate to manage.mist.com and follow the instructions for creating a new account. Pay attention to the cloud instance you register in, though. Choose the one that suits you best.

If you need help installing vJunos-Switch in Containerlab, I covered that in one of my previous blog posts. Check it out if you missed that one!


Creating a New Mist Organization and Site


For the purpose of this demonstration, I created a fresh organization in Juniper Mist. Inside the new organization, I created a site called "Mist Switching". I would advise you to create a new one too. You could also use the default "Primary Site", but usually the first thing I do in a new organization is remove this default site and create the sites I need myself.

Now that we have our new organization and a site, we can move forward and have a look at the virtual lab setup.


Lab Topology


For this demonstration, we will be working with a very basic setup:


Two switches connected to each other with two interfaces, which we are going to configure into an ether-channel via Juniper Mist management.

The topology file in Containerlab looks like this:

# mist_switching.yml

name: mist_switching

topology:
  nodes:
    vswitch1:
      kind: juniper_vjunosswitch
      image: vrnetlab/juniper_vjunos-switch:23.4R2-S2.1
      startup-config: vjunosswitch_cfg.txt
    vswitch2:
      kind: juniper_vjunosswitch
      image: vrnetlab/juniper_vjunos-switch:23.4R2-S2.1
      startup-config: vjunosswitch_cfg.txt


  links:
    - endpoints: ["vswitch1:ge-0/0/0", "vswitch2:ge-0/0/0"]
    - endpoints: ["vswitch1:ge-0/0/1", "vswitch2:ge-0/0/1"]

Make sure to change the version tag if you installed a different version of vJunos-Switch. Some of you may wonder about the startup-config attribute. We will come to that in a second, just don't deploy the lab yet.
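If you are not sure which image tag you have available locally, a quick check with plain Docker (assuming you imported the image under the vrnetlab repository name shown above) looks like this:

$ docker images vrnetlab/juniper_vjunos-switch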

In order to connect a vJunos-Switch to Mist, you need to adopt it into the Mist platform. To do that, navigate to the inventory in Juniper Mist and select the "Switches" tab. There, click the "Adopt Switches" button. You will be presented with a set of commands you have to copy and paste onto the virtual switches:
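For reference, the adoption snippet looks roughly like the following set commands (shortened and redacted here; your dialog will contain a real SSH key, a unique device-id, and a secret):

set system login user mist class super-user
set system login user mist authentication encrypted-password "some-password"
set system login user mist authentication ssh-rsa "ssh-rsa some-ssh-key"
set system services ssh protocol-version v2
set system services outbound-ssh client mist device-id some-device-id
set system services outbound-ssh client mist secret "some-secret"
set system services outbound-ssh client mist keep-alive retry 12
set system services outbound-ssh client mist keep-alive timeout 5
set system services outbound-ssh client mist services netconf
set system services outbound-ssh client mist oc-term.eu.mist.com port 2200
set system services outbound-ssh client mist oc-term.eu.mist.com retry 1000
set system services outbound-ssh client mist oc-term.eu.mist.com timeout 60
set system authentication-order password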

Since we have to do that on every switch, this is where the startup-config attribute in the Containerlab topology file comes in handy. We can put the config snippet that is needed to connect to Mist into a pre-defined startup-config file.

Check out my blog post about the startup-config attribute in Containerlab if you want to learn a bit more about it.

Since the config snippet presented by Mist comes in "set" format, we can't use it as-is in the startup-config file. I did the work and translated it into the hierarchical (curly-brace) format so Containerlab can push it to the devices correctly:

# vjunosswitch_cfg.txt

system {
    login {
        user mist {
            class super-user;
            authentication {
                encrypted-password "some-password"; ## SECRET-DATA
                ssh-rsa "ssh-rsa some-ssh-key"; ## SECRET-DATA
            }
        }
    }
    services {
        ssh {
            protocol-version v2;
        }
        outbound-ssh {
            client mist {
                device-id some-device-id;
                secret "some-secret"; ## SECRET-DATA
                keep-alive {
                    retry 12;
                    timeout 5;
                }
                services netconf;
                oc-term.eu.mist.com {
                    port 2200;
                    retry 1000;
                    timeout 60;
                }
            }
        }
    }
    name-server {
        8.8.8.8;
    }            
    authentication-order password;
}
routing-options {
    static {
        route 0.0.0.0/0 next-hop 10.0.0.2;
    }
}

Make sure to replace the secret data and device-id with the unique values you are presented with.

I also added a DNS server because, by default, vJunos-Switches don't come with one when deployed in Containerlab, and we need name resolution so the switch can reach the Mist cloud.

The static route needs a bit more explanation. A vJunos-Switch deployed in Containerlab always comes with the dedicated management routing-instance enabled, and the management port fxp0 is mapped to it. This routing-instance already contains the exact same default route I added to the startup-config file, but only inside the management routing-instance, and its pre-defined next-hop is the gateway of the management network that Containerlab ships by default. If you want to learn more about the internal network wiring in Containerlab, check out their documentation.

By default, the switch will try to establish the outbound-ssh session to the Mist cloud via the default routing-instance. I experimented with changing this so the switch would use the management routing-instance to connect to the cloud instead. While this works perfectly fine on real switches, it didn't work on the vJunos-Switches. I assume this is a limitation of the virtual platform, so we need a different approach: the route defined in the startup-config is added to the default routing-instance. We also need to remove the management routing-instance to make this work, but since we can't delete parts of the configuration with a startup-config file, we have to remove that config object manually after the virtual switches boot up.
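For reference, the pre-provisioned management setup described above roughly corresponds to these statements (a sketch; mgmt_junos is the name Junos uses for the dedicated management routing-instance):

set system management-instance
set routing-instances mgmt_junos routing-options static route 0.0.0.0/0 next-hop 10.0.0.2

The first statement is exactly the config object we will delete after boot, so the outbound-ssh session can use the default routing-instance together with the static route from the startup-config.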

Ensure that the startup-config .txt file is located in the same directory as the topology file or adjust the path in the topology file accordingly.

So, let's now deploy the lab in Containerlab.
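Assuming the file names from above, deploying the lab looks like this (depending on your installation you may need sudo):

$ containerlab deploy -t mist_switching.yml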


Connecting the Switches to the Juniper Mist Cloud


First, we connect to our switches and disable the management routing-instance. The default credentials for a vJunos-Switch in Containerlab are admin:admin@123.
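With the lab name mist_switching from the topology file, Containerlab makes the nodes reachable under their clab-<lab>-<node> names, so connecting to the first switch looks like this:

$ ssh admin@clab-mist_switching-vswitch1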

admin@0200041cf5ee> configure
Entering configuration mode
[edit]
admin@0200041cf5ee# delete system management-instance
[edit]
admin@0200041cf5ee# commit and-quit

After a short time, the switch should connect to the Juniper Mist cloud. You can verify this using the "show system connections | grep 2200" command. vJunos-Switches connect to the cloud via TCP port 2200, hence "grep 2200".

admin@0200041cf5ee> show system connections | grep 2200
tcp4       0      0  10.0.0.15.57082                               52.58.151.124.2200                            ESTABLISHED

Make sure to disable the management routing-instance on all switches.

When both switches have an established connection towards the Juniper Mist cloud, you should be able to see the switches in your inventory in the Mist platform:

They will be in "Unassigned" status. Assign them to the site you created. Give it a minute and you should have something like this:

You might notice that the switches show up with the same IP address. That is indeed true for the management interfaces. Since each device runs in its own container with its own isolated network, the overlapping addresses are not a problem.

We have now successfully onboarded our virtual switches. Great!

Now let's move on to the actual configuration part in Mist. From here, you can basically play around with the switches and try all the switching features. Just keep in mind that building a Virtual Chassis (VC) with vJunos-Switches is not supported, so unfortunately we can't lab that. Besides that, we can do pretty much anything.

In the next and final section of this post, we will have a look at an example configuration I created for this purpose.


Example Configuration


I created a Switch Template for the purpose of this blog post. I named it "Basic Switch Template" and added a few things like DNS and NTP servers, a new user, a login banner, a handful of VLANs, and a port profile called "interconnect", which we will apply to the ports that connect the switches to each other:

In the "Select Switches Configuration" section, I added a new rule called vjunosswitch that applies configuration only to switches of the model VJUNOS:

Ports ge-0/0/0 and ge-0/0/1 will be configured with the port profile "interconnect" and bundled into an ether-channel with each other:
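On the Junos side, the config Mist pushes for this roughly boils down to a standard LACP bundle. Here is a minimal sketch in set format (the exact statements, group names, and VLAN membership depend on the port profile and on how Mist renders the config):

set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-0/0/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast
set interfaces ae0 unit 0 family ethernet-switching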

The remaining ports will be configured with the port profile "disabled" and hence actually be disabled. They will also get the description "Disabled" configured on them:

The only thing that's left is to assign this template to our site. And since the port config will be applied to all switches of the model VJUNOS, both our switches should be configured accordingly.

All right, time to check the results!


Config Results


When checking the switch ports in the GUI, we can already see that our links connecting the switches with each other are configured in an ether-channel with the right port profile:


Here is also some proof from the CLI:

admin@vJunos-Switch_1> show lacp interfaces
Aggregated interface: ae0
    LACP state:           Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
      ge-0/0/0           Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-0/0/0         Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-0/0/1           Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      ge-0/0/1         Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
    LACP protocol:        Receive State  Transmit State          Mux State 
      ge-0/0/0                  Current   Fast periodic Collecting distributing
      ge-0/0/1                  Current   Fast periodic Collecting distributing

When trying to establish a fresh SSH connection to the switches, we are presented with our configured login banner:

$ ssh admin@clab-mist_switching-vswitch1
Warning: Permanently added 'clab-mist_switching-vswitch1' (ED25519) to the list of known hosts.
This vJunos-Switch is managed by Mist!
Configuration pushed via CLI might be overwritten by cloud config!
(admin@clab-mist_switching-vswitch1) Password:

All the unused ports, ge-0/0/2 to ge-0/0/9, are administratively disabled and have the description we set in the Switch Template:

admin@vJunos-Switch_1> show interfaces descriptions 
Interface       Admin Link Description
ge-0/0/2        down  down Disabled
ge-0/0/3        down  down Disabled
ge-0/0/4        down  down Disabled
ge-0/0/5        down  down Disabled
ge-0/0/6        down  down Disabled
ge-0/0/7        down  down Disabled
ge-0/0/8        down  down Disabled
ge-0/0/9        down  down Disabled
irb.0           up    down default

The user I added in the template is also configured on the switches:

admin@vJunos-Switch_1> show configuration groups top system login user vithu 
uid 2001;
class super-user;
authentication {
    encrypted-password "some-password"; ## SECRET-DATA
}

DNS and NTP servers are also configured correctly:

admin@vJunos-Switch_1> show configuration groups top system name-server         
8.8.8.8;
admin@vJunos-Switch_1> show configuration groups top system ntp            
server 8.8.8.8;

I think that was it. Everything we defined in the Mist platform got pushed to the virtual switches we spun up in Containerlab!


Closing Words


I think it's amazing that we now have the ability to create virtual labs to practice managing switch configuration via Mist. What do you think? Of course, you can use any other network virtualization or emulation tool like EVE-NG or GNS3 instead. I picked Containerlab because that is what I use most frequently, and since I also covered it in one of my previous posts, it just made sense to deploy the virtual lab in Containerlab.

With all that said - happy labbing!
