Ansible Limit When Using Netbox as Inventory

3 03 2023

I’m currently using Ansible to template a large and growing number of devices for an ISP I’m working for. The last part of the process uses Netbox as a source of truth to write the configs using Jinja2 templates. The work is done as part of a CI/CD pipeline and runs on a specific Gitlab Runner instance – finally, the config is pre-staged onto the device’s filesystem to be checked by an engineer before deployment.

I’ve been finding the growing list of hosts a bit of hard work, and how to put a site-specific limit on a playbook run seems to be undocumented in the Netbox docs. This is easily done in regular Ansible by using .ini-style host file groups like this:

[siteA]
sitea-router001
sitea-router002

[siteB]
siteb-router001
siteb-router002

You can then do ‘ansible-playbook -l siteB’ to restrict what gets generated. How you do this when Netbox is the source of inventory is less clear.

It turns out that site group names generated by the Netbox inventory plugin are prefixed with the string ‘sites_’. So, in your dynamic inventory file (in my case, called nb-inventory.yml) you need to tell it to group hosts by site by including the sites keyword under the group_by section:

plugin: netbox.netbox.nb_inventory
api_endpoint: https://YOURURLHERE
token: YOURTOKENHERE
validate_certs: False
config_context: False
group_by:
  - device_roles
  - sites

query_filters:
  - has_primary_ip: 'true'

compose:
  ansible_network_os: platform.slug

Then, to run the playbook against a single site only, the command is:

ansible-playbook main.yml -i nb-inventory.yml -l sites_sitea

You can verify the grouping with ‘ansible-inventory -i nb-inventory.yml --list’, which should include something like the following:

    "sites_sitea": {
        "hosts": [
            "sitea-router001",
            "sitea-router002"
        ]
    }
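The -l option takes Ansible’s usual host patterns, so you can also target more than one site group at once, or preview what a limit matches before generating anything. Both of these use standard ansible-playbook flags:

ansible-playbook main.yml -i nb-inventory.yml -l 'sites_sitea:sites_siteb'
ansible-playbook main.yml -i nb-inventory.yml -l sites_sitea --list-hosts

The colon acts as an OR between patterns, and --list-hosts just prints the matched hosts without running any tasks.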

Et voila…





Installing AWX 19 on MicroK8s in AWS

22 04 2021

AWX is now deployed on Kubernetes (since AWX release 18), which is great – the only thing is, what do you do if this is the only application you need Kubernetes for? It is a bit of a hassle setting up the K8s master and worker nodes just for a single application.

The documentation suggests you use Minikube for this, but that seems to be designed for local / testing use only. There’s no middle ground between these two options, so I decided to work it out on MicroK8s.

MicroK8s is Canonical’s minimal production Kubernetes environment. It installs on one host, but can be set up for high availability and even run on a Raspberry Pi!
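For context, a bare MicroK8s node on Ubuntu is a single snap install away (I’ve omitted channel selection here for simplicity):

sudo snap install microk8s --classic
microk8s status --wait-ready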

Here are the instructions if you want to do the same.

Read the rest of this entry »




Notes on Pushing Ansible-generated FortiOS Configs

5 02 2021

I’m working on a project to push out configuration files to Fortigates using the ‘configuration restore’ capability in FortiOS. The configs are generated using Jinja2 templates and then restored to the remote device via SCP. This post is to collect together a few of the pitfalls and things I learned in the process. Hopefully it will help someone else out of a hole.


Why use SCP in the first place?

I had every intention of using the FortiOS Ansible modules for this process, specifically fortinet.fortios.fortios_system_config_backup_restore. The problem is that the module operates over the REST API, and to use the API you have to go onto the box and generate a token. You only see the token in cleartext at the point of creation, after which it is stored cryptographically in the config. This means that on the script host you need to keep a vault with both versions – cleartext to push to the API, and cryptotext to insert into the config file you are pushing.

Instead, it is easier to enable SCP on the devices, put an admin PKI user’s public key in every config and restore over SCP instead. To do this:

config system global
  set admin-scp enable
end
config system admin
  edit scripthost
    set accprofile prof_admin
    set ssh-public-key1 "ssh-rsa YOUR PUBLIC KEY HERE"
  next
end
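Before automating against it, it’s worth checking that key-based access works – a plain SSH login with the same key should drop you straight at the FortiGate CLI:

ssh -i <private key> scripthost@<device IP>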




SCP Backups Pitfall

Backup over SCP using the PKI key is achieved as follows – make sure you are backing up the file sys_config from the remote Fortigate:

scp -i <private key> scripthost@<device IP>:sys_config <local filename> 

The pitfall here is that (at least in FortiOS 6.4.3) only the admin user that is doing the backing up is included in the file that is retrieved. So if you restore that file, all the other admin users are missing! If you do a backup/restore via the GUI or the API, it seems as though all admin users are backed up.
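Wrapped into a playbook, the backup step can be as simple as running that same scp command from the control node. A minimal sketch – the variable names (private_key_path, backup_dir) are illustrative, not from the original setup:

- name: Back up FortiGate config over SCP
  ansible.builtin.command: >
    scp -i {{ private_key_path }}
    scripthost@{{ ansible_host }}:sys_config
    {{ backup_dir }}/{{ inventory_hostname }}.conf
  delegate_to: localhost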

Ungraceful SCP Connections

By default, when you restore via SCP to FortiOS (this is 6.4.3), the device immediately reboots when it gets its config. This leaves the client SCP session hanging until the SCP program times out – not particularly nice, because in an Ansible playbook it takes ages to time out and the result appears as a failure, even though the restore was successful.

You can work around this by messing with timeouts on the SCP client but ultimately, the SCP client still sees the graceless disconnection as a failure.

Instead, a CLI-only command in FortiOS exists to prevent the device rebooting after a config restore. Use this and the SCP session is terminated gracefully after restoration:

config system global
  set reboot-upon-config-restore disable
end
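You can confirm the setting stuck afterwards – FortiOS lets you pipe show output through grep:

show system global | grep reboot-upon-config-restore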

Important Things in the Configuration

There are two lines that need to be in the configuration file that you create using Jinja. If they’re not there, the Fortigate will tell you that the configuration is invalid.

At the very top of the configuration, you need to make sure that Jinja generates this line. In the example below, the model of Fortigate is FGVM64 (a virtual Fortigate). If this does not match the device type you are sending a config to, it will be rejected as invalid. Other models exist of course – e.g. FGT60E, FGT60F – just make sure the model matches. The other fields here have to exist, but do not appear to have to match anything:

#config-version=FGVM64-6.4.3-FW-build1778-201021:opmode=0:vdom=0:user=andy

Immediately below that there needs to be a config file version line:

#conf_file_ver=UNKNOWN

This line needs to be there but, as you can see, I put a dummy value in it. The actual value will be written by FortiOS when the file is saved to the device.
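Putting both together, the top of the Jinja2 template might look something like this – the variable names are purely illustrative, not from the original templates; the key point is that the model token must match the target device:

#config-version={{ device_model }}-{{ fortios_version }}-FW-build{{ build_number }}-{{ build_date }}:opmode=0:vdom=0:user={{ admin_user }}
#conf_file_ver=UNKNOWN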





Public/private key SSH access to Fortigate

7 10 2020

To save having to enter usernames and passwords for your devices, it is a lot more convenient to use public/private key authentication. When SSHing to the device, you simply specify the username and authentication using the keys is automatic.

Windows users can use puttygen to make key pairs, and PuTTY as an SSH client to connect to devices. This process is quite well described here: https://www.ssh.com/ssh/putty/windows/puttygen

By default, keys (on a Linux or macOS host) are in your home directory, under ~/.ssh/. A keypair is generated using ssh-keygen like so:

andrew@host % ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/andrew/.ssh/id_rsa): andrew_test
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in andrew_test.
Your public key has been saved in andrew_test.pub.
The key fingerprint is:
SHA256:nx4REDACTEDGN69tY andrew@host
The key's randomart image is:
[randomart image redacted]
andrew@host %

In the example above, I created it as ‘andrew_test’ – this will make two files – andrew_test and andrew_test.pub. The first one is your PRIVATE key and should remain secure on your system. The second is your PUBLIC key, which you can distribute. If you don’t specify a name, by default it will create files called id_rsa and id_rsa.pub.

You can do a ‘more andrew_test.pub’ to see the contents of this file. Copy it to the clipboard because you need it in the next step.

Note: For extra security, if I had specified a passphrase in the section above, I would have to enter that phrase every time the key is used. In this example, I did not set a passphrase.

Log into the Fortigate you wish to administer and create a new user like so, pasting the cryptotext you found in the .pub file between quotation marks:

config system admin
  edit <new username>
    set ssh-public-key1 "<PASTE CRYPTOTEXT HERE>"
    set accprofile super_admin
    set password <password>
  next
end

NOTE: Make sure you add a password for the user – otherwise, when logging on via the serial port (which does not support public/private key authentication), no password will be required!

Then exit from that login session, and log in again as the user you defined.

If you specified a custom name for your keypair, you need to point SSH at the private key:

ssh -i ~/.ssh/<key-name> <new username>@<fortigate IP>

If you didn’t specify a name, SSH tries the default id_rsa key automatically, so you can simply type:

ssh <new username>@<fortigate IP>

Here is a working example of this last case – as you can see, there’s no prompt for a password:

andrew@host % ssh andrew@10.0.0.25
Fortigate-Test #
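If you administer a lot of devices this way, an entry in ~/.ssh/config saves typing the username and key path every time – the host alias, IP and key name below are just illustrative:

Host fortigate-test
    HostName 10.0.0.25
    User andrew
    IdentityFile ~/.ssh/andrew_test

With that in place, ‘ssh fortigate-test’ is all you need.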





Cable testing on Juniper EX switches

22 09 2020

Does anyone use the cable test feature on EX switches? Did you even know you could do this kind of thing?
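(A spoiler for the impatient: it’s a time-domain reflectometry test, run from the CLI – something like the following, though check support for your platform and Junos version:)

request diagnostics tdr start interface ge-0/0/0
show diagnostics tdr interface ge-0/0/0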

Read the rest of this entry »




fpc1 vlan-id(32768) to bd-id mapping doesn’t exist in itable

21 09 2020

If you are getting this message appearing repeatedly on a Juniper switch (e.g. an EX4300), check you don’t have an IRB interface that is not attached to a VLAN. Alternatively, check your IRBs all have IP addresses.
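For example, on an ELS platform like the EX4300, attaching a stray IRB to its VLAN and giving it an address looks like this (names and addresses are illustrative):

set vlans vlan100 l3-interface irb.100
set interfaces irb unit 100 family inet address 192.0.2.1/24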





Restoring data to Netbox Docker

16 09 2020

Having just shot myself in the foot by deleting docker and losing a container I had been working on, here is the command to restore data to netbox-docker’s Postgres database:

sudo docker exec -i netbox-docker_postgres_1 psql --username netbox netbox < /path/to/backup/file.sql
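For completeness, the matching backup (assuming the same container name) is just pg_dump in the other direction:

sudo docker exec netbox-docker_postgres_1 pg_dump --username netbox netbox > /path/to/backup/file.sql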

Phew…





DHCP Relay Issues With Microsoft Surface Pro Docks and Junos

7 09 2020

After deploying some new Juniper EX4600 core switches, my customer complained that he was experiencing about 45 seconds of delay in getting an IP address on a Surface Pro connected to a dock. The second time of connecting, it took about 8 seconds which was more acceptable. The 45 second delay came back every time they moved the Surface Pro to a new dock.

Read the rest of this entry »




Migration Strategy: Moving From MPLS/LDP to Segment Routing

21 03 2019

MPLS cores that use Label Distribution Protocol (LDP) are common in SP networks and have served us well. So, the thought of pulling the guts out of the core is pretty daunting, and invites the question of why you would want to perform open-heart surgery on such critical infrastructure. This article attempts to explain the benefits that would accrue from such a move and gives a high-level view of a migration strategy.

Why Do I Need Segment Routing?

  • Simplicity: LDP was invented as a label distribution protocol for MPLS because nobody wanted to go back to the standards bodies to re-invent OSPF or IS-IS so that they could carry labels. A pragmatic decision, but one that results in networks having to run two protocols – and two protocols means twice the complexity.
    Segment Routing simplifies things by allowing you to turn off LDP. Instead, it carries label (or Segment ID) information in extensions to the IGP, leaving you with only IS-IS or OSPF to troubleshoot (a flavour of which is sketched below). As Da Vinci reportedly said, ‘simplicity is the ultimate sophistication’.
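To make that concrete, here is roughly what enabling SR-MPLS under IS-IS looks like on IOS XR – purely illustrative; the migration detail follows in the rest of the article:

router isis CORE
 address-family ipv4 unicast
  segment-routing mpls
 !
 interface Loopback0
  address-family ipv4 unicast
   prefix-sid absolute 16001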

Read the rest of this entry »




3 Challenges for Network Engineers Adopting Ansible

14 03 2019

Ansible, ansible, ansible seems to be all we hear these days. There are lots of resources out there, all trying to convince us this is the new way to get stuff done. The reality is quite different – adoption of tools like this is slow in the networking world, and making the move is hard for command-line devotees.

Here are the three main problems I encountered in my adoption of Ansible as a modern way to manage devices:

Read the rest of this entry »