Ansible and Junos Notes

18 10 2016

I’m working on a project to push out configs to Juniper devices and upgrade them if necessary; ultimately it will be vendor-independent. My first instinct was to write it all in Python, but there’s really no need, because a lot of the legwork has already been done for you in the form of ‘PyEZ’ and the Junos Ansible core modules.

Juniper give you a few examples to get you started, but don’t really explain what each line in the YAML file does – I guess they expect you to figure that out. Below are a few notes on things I discovered – perhaps obvious to some, but they might help someone else.

Ansible’s agentless nature

Ansible has the upper hand over Chef and Puppet in that it is agentless.  All you need is SSH.  Or so they tell you.

Ansible actually needs the system it is making changes on to be running Python – so really, *that’s* the agent that Ansible talks to. Since most network devices don’t have Python (it is roadmapped for Junos soon), that means you’ve got a problem: you can’t use most of Ansible’s modules to execute commands remotely.

With Junos you have two choices:

  1. Use the ‘raw’ module to send SSH commands to Junos: this opens up a channel, sends the command and closes it again. No error-checking, no frills (see the sketch below).
  2. Use the Juniper Ansible modules: used in a playbook with ‘connection: local’, the Python part runs on the controlling node, and the module then uses Netconf to connect to the Junos device and issue commands.

There is no other way – since there’s no Python on Junos and no way to get into a shell over SSH, these are your only options.
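To illustrate option 1, here’s a minimal sketch of a ‘raw’ playbook – untested in this exact form, and it assumes the SSH user lands in the Junos CLI rather than a shell:

---
- name: Run a command over plain SSH   # option 1: the 'raw' module
  hosts: all
  gather_facts: no                     # fact gathering needs Python on the device, so skip it
  tasks:
    - name: Show the Junos version
      raw: show version                # sent down the SSH channel as-is, no error-checking
      register: output

    - name: Print the result
      debug: msg="{{ output.stdout }}"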

‘No module named jnpr.junos’ When Running Ansible

In the examples Juniper give, they don’t tell you that the Ansible role ‘Juniper.junos’ relies on a Python module called ‘jnpr.junos’. (It is mentioned elsewhere if you look for it.)

So if you’ve done an ‘ansible-galaxy install Juniper.junos’ you could be forgiven for thinking that you’ve downloaded everything you need. You then gaily go on to have a crack at the example given above, but get this error:

$ ansible-playbook juniper-test.yml

PLAY [Get info] *********************************************************

TASK [Checking Netconf connection] **************************************
ok: [192.168.30.12]

TASK [Get info from Junos device] ***************************************
fatal: [192.168.30.12]: FAILED! => {"changed": false, "failed": true, "msg": "ImportError: No module named jnpr.junos"}

NO MORE HOSTS LEFT ******************************************************
 to retry, use: --limit @/Users/amulheirn/Documents/scripts/softpush/juniper-test.retry

PLAY RECAP **************************************************************
192.168.30.12 : ok=1 changed=0 unreachable=0 failed=1
$

To resolve this, you need to install the PyEZ library for Python. On my Mac, I did this using ‘sudo pip install junos-eznc’.
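A quick way to confirm the module is now importable – assuming ‘python’ on your PATH is the interpreter Ansible uses – is:

$ python -c "from jnpr.junos import Device"   # no output means the import worked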

 

Authenticating

I thought I’d be able to issue a command like ‘ansible-playbook juniper-test.yml -u UserName -k’. Ansible would then use the specified username and ask for the password interactively before it began – which it does. However I was persistently getting the following authentication error:

TASK [Get info from Junos device] ***************************************
fatal: [192.168.30.12]: FAILED! => {"changed": false, "failed": true, "msg": "unable to connect to 192.168.30.12: ConnectAuthError(192.168.30.12)"}

This was a bit confusing for a while. It seems you can’t use command-line switches to pass a username and password to these modules. Instead, the Juniper.junos module wants you to make a variables section in the YAML file, and then pass the variable contents to the tasks that are specified later in the file. The result is that you are still asked interactively for a username and password, and can successfully authenticate.

 

My YAML file

My YAML file looks like this – it successfully retrieves the version running on the device. I have commented down the right-hand side to add explanations; since YAML treats everything after a ‘#’ as a comment, these can safely be left in place:

---
- name: Get info        # The name of the 'play'
  hosts: all            # Run on all hosts in the inventory file
  roles:
    - Juniper.junos     # Invokes the Junos Ansible modules
  connection: local     # Run the Python parts on the controlling node
  gather_facts: no

  vars_prompt:                # Variable section
    - name: USERNAME          # Name of the variable
      prompt: User name       # Text presented to the user
      private: no             # Obscure input with stars = no
    - name: DEVICE_PASSWORD
      prompt: Device password
      private: yes            # Obscure input with stars = yes

  tasks:                      # A series of tasks
    # Check a Netconf connection can be made on port 830
    - name: Checking Netconf connection
      wait_for: host={{ inventory_hostname }} port=830 timeout=5

    # Retrieve the facts from the device using the user/pass specified earlier
    - name: Get info from Junos device
      junos_get_facts:
        host: "{{ inventory_hostname }}"
        user: "{{ USERNAME }}"
        passwd: "{{ DEVICE_PASSWORD }}"
        savedir: ./
      register: junos

    # Three tasks to print results to screen
    - name: Print model
      debug: msg="{{ junos.facts.model }}"
    - name: Print version
      debug: msg="{{ junos.facts.version }}"
    - name: Print serial
      debug: msg="{{ junos.facts.serialnumber }}"


The above seems to work ok.  If you want to see all the facts that can be retrieved, change one of the last lines to just:

debug: msg="{{ junos.facts }}"

In doing this, you will get back all possible sub-facts that can be inspected – here’s the output I receive from an EX2200 in our lab:

 "msg": {
 "HOME": "/var/home/axians-prestage",
 "RE0": {
 "last_reboot_reason": "Router rebooted after a normal shutdown.",
 "mastership_state": "master",
 "model": "EX2200-24T-4G",
 "status": "OK",
 "up_time": "3 days, 23 hours, 38 seconds"
 },
 "domain": null,
 "fqdn": "",
 "has_2RE": false,
 "hostname": "",
 "ifd_style": "SWITCH",
 "master": "RE0",
 "model": "EX2200-24T-4G",
 "personality": "SWITCH",
 "serialnumber": "CW02122xxx91",
 "switch_style": "VLAN",
 "vc_capable": true,
 "vc_mode": "Enabled",
 "version": "12.2R9.3",
 "version_RE0": "12.2R9.3",
 "version_info": {
     "build": 3,
     "major": [
         12,
         2
     ],
     "minor": "9",
     "type": "R"
    }
  }
}

As you can see from the previous YAML file, you can retrieve junos.facts.version in order to get just that one ‘sub-fact’. Replace ‘version’ with any of the facts above and see what you get – e.g. junos.facts.hostname

 

Ansible files

I wanted my work to be portable to a colleague’s computer with minimal fuss. Ansible’s system-wide config file lives at /etc/ansible/ansible.cfg, but it looks for an ansible.cfg in the current directory first (then ~/.ansible.cfg, then the system-wide file). That is nice – it means you can override the system’s settings on a per-playbook basis.

So here is my ansible.cfg file:

$ more ansible.cfg
# Ansible config file for softpush
[defaults]
inventory = ./hosts
log_path = ./ansible.log

 

My hosts file is pretty basic – it only contains a single IP address at the moment.  According to docs.ansible.com you should be able to alias a host using this:

jumper ansible_port=5555 ansible_host=192.0.2.50

‘jumper’ in this example is a host alias – apparently it doesn’t even have to be a real hostname.   However I found that this did not work for me – it tried to use the alias to connect to the host, rather than the IP address specified by ansible_host.  The error was:

$ ansible-playbook juniper-test.yml
User name: prestage
Device password:

PLAY [Get info] *********************************************************

TASK [Checking Netconf connection] **************************************
fatal: [test]: FAILED! => {"changed": false, "elapsed": 5, "failed": true, "msg": "Timeout when waiting for test:830"}

NO MORE HOSTS LEFT ******************************************************
 to retry, use: --limit @/Users/username/Documents/scripts/softpush/juniper-test.retry

PLAY RECAP **************************************************************
test : ok=0 changed=0 unreachable=0 failed=1

I edited the /etc/hosts file on my machine to include the same alias, and that now works fine. Not sure this is intended behaviour – why specify an ansible_host value in the Ansible hosts file if your /etc/hosts file has to contain the IP address as well? (With hindsight, the likely cause is that my tasks pass {{ inventory_hostname }} – i.e. the alias – to wait_for and junos_get_facts, so the alias has to resolve on the control machine; using {{ ansible_host }} in the task arguments would pass the IP address instead.)

Update on 27th Oct:  I’ve discovered the above does actually work – not entirely sure what I was doing wrong last time, but with this in the inventory it works fine:

[Devices]
line2001 ansible_host=192.168.30.20

 

Authentication

SSH Public Key Authentication

There are a few ways that authentication can be achieved. The preferred way is to use an SSH public key, so no password is required. This means generating a public/private key pair for the user and then adding the public key to the Junos host’s config. Since my script is for pre-staging lots of devices, I felt that was overkill – if I were going to use Ansible for daily management of these devices it would be worthwhile, but that won’t be the case here.
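For reference, a minimal sketch of what that involves – generate a key pair on the control machine, then load the public key into the device config (user ‘prestage’ is just an example, and is assumed to exist already):

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/junos_lab    # key pair on the control machine

# Then on the Junos device, paste the contents of junos_lab.pub into the config:
set system login user prestage authentication ssh-rsa "ssh-rsa AAAA... user@host"
commit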

Interactive Authentication

An alternative is the method described above, where the credentials are requested interactively. The ‘vars_prompt:’ section of the YAML file is what makes this happen.

The Juniper.junos module pays no attention to the ‘-u <USERNAME>’ command-line argument, nor does it observe the ‘-k’ argument, which prompts for a password. I’m not sure why that is, but it is documented behaviour. If you put those command-line switches in, they are accepted and then ignored – instead, the $USER environment variable from your computer is sent, resulting (probably) in an auth failure.

 

‘Insecure’ Authentication

Instead of using ‘vars_prompt:’, you can write the username and password into a ‘vars:’ section of the YAML file.  Obviously this isn’t secure, but since my script is for lab purposes, security of this information isn’t a concern.   Just replace the ‘vars_prompt:’ section shown above with something like this:

  vars:                       # Hard-coded variable section
    USERNAME: someusername
    DEVICE_PASSWORD: yourpassword

 

An alternative is to put the username/password in the hosts file, though again this is not recommended:

$ more hosts
[Devices]
192.168.30.20 ansible_user=MyUsername ansible_ssh_pass=MyPassword

Vault Authentication

The proper way to store usernames and passwords is in an Ansible vault. This is a file encrypted with AES-256; you can pull in just the password, or a variety of variables, as per the example on Juniper’s site. Quite cool, but too complex for my basic lab setup.
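For the curious, the basic workflow looks something like this (the file name ‘secrets.yml’ is just an example):

$ ansible-vault create secrets.yml              # prompts for a vault password, then opens an editor
$ ansible-playbook juniper-test.yml --ask-vault-pass

The playbook then pulls the encrypted variables in with a ‘vars_files:’ entry pointing at secrets.yml.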

 

Formatting The Output

Instead of calling each fact one after the other, you can combine them into a single comma-separated message:

    - name: Print model
      debug: msg="{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}"

The result is:

TASK [Print model] *************************************************************
ok: [test] => {
 "msg": "CW0212xxx591,EX2200-24T-4G,12.2R9.3"
}

 

That’s all very well, but maybe you want to write this to a text file in that format?  Simply create another task that uses the copy module to write the output to a file.  Here’s the ‘Print model’ task again, followed by a new task called ‘Write details’:

    # Print results to screen, then write a CSV file
    - name: Print model
      debug: msg="{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}"

    - name: Write details
      copy: content="{{ junos.facts.serialnumber }},{{ junos.facts.model }},{{ junos.facts.version }}" dest=./{{ inventory_hostname }}.txt

This results in a file in the current directory in the format <hostname>.txt:

$ more 192.168.30.12.txt
CW0212286591,EX2200-24T-4G,12.2R9.3

$

Issues

When running this playbook, the Juniper.junos module is supposed to write output to a file in the location specified by ‘savedir=’ in the YAML file. It does do this, but fails to prepend the hostname to the filename, so you get a file called ‘-facts.json’. This is a problem because the filename begins with a ‘-’, and it is therefore interpreted as a command-line switch by vi, cat and more.
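You can work around the leading dash by giving the file a path prefix, or by telling the tool to stop parsing options:

$ more ./-facts.json     # the ./ prefix stops 'more' treating it as a switch
$ cat -- -facts.json     # '--' marks the end of command-line options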

Opening this file in the GUI reveals it to be a JSON formatted file containing all of the ‘facts’, just missing the hostname:

{"domain": null, "hostname": "", "ifd_style": "SWITCH", "version_info": {"major": [12, 2], "type": "R", "build": 3, "minor": "9"}, "version_RE0": "12.2R9.3", "serialnumber": "CW0212xxx591", "fqdn": "", "RE0": {"status": "OK", "last_reboot_reason": "Router rebooted after a normal shutdown.", "model": "EX2200-24T-4G", "up_time": "4 days, 18 minutes, 55 seconds", "mastership_state": "master"}, "has_2RE": false, "switch_style": "VLAN", "version": "12.2R9.3", "master": "RE0", "HOME": "/var/home/axians-prestage", "vc_mode": "Enabled", "model": "EX2200-24T-4G", "vc_capable": true, "personality": "SWITCH"}

It occurred to me that it isn’t the hostname on my Ansible control machine that should be in the filename – it should be the hostname of the device I am configuring.  As you can see above, the hostname is an empty string.  That’s because my device is new – all it has on it is an IP address and username/password.

I used the ‘set system host-name <name>’ command to give the device a name, re-ran the playbook, and this time got the hostname on the file as expected:

$ more line1-facts.json
{"domain": null, "hostname": "line1", "ifd_style": "SWITCH", "version_info": {"major": [12, 2], "type": "R", "build": 3, "minor": "9"}, "version_RE0": "12.2R9.3", "serialnumber": "CW0212xxx591", "fqdn": "line1", "RE0": {"status": "OK", "last_reboot_reason": "Router rebooted after a normal shutdown.", "model": "EX2200-24T-4G", "up_time": "4 days, 29 minutes, 33 seconds", "mastership_state": "master"}, "has_2RE": false, "switch_style": "VLAN", "version": "12.2R9.3", "master": "RE0", "HOME": "/var/home/axians-prestage", "vc_mode": "Enabled", "model": "EX2200-24T-4G", "vc_capable": true, "personality": "SWITCH"}
$
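For completeness, the hostname change has to be committed before it takes effect – on the device the sequence looks like this (with ‘line1’ as used above):

user@switch> configure
user@switch# set system host-name line1
user@switch# commit and-quit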

 

 

2 responses

19 10 2016
Bren Gun

An excellent piece. I would recommend the key-based authentication for SSH though – I have found it makes things much quicker and easier. (Most of my issues have been authentication-based.)

19 10 2016
DataPlumber

Thanks! Yes – the public key auth looked very sensible. I might change to that next…
