Public/private key SSH access to Fortigate

7 10 2020

To save having to enter usernames and passwords for your devices, it is a lot more convenient to use public/private key authentication. When SSHing to the device, you simply specify the username, and authentication using the keys is automatic.

Windows users can use puttygen to make key pairs, and PuTTY as an SSH client to connect to devices. This process is quite well described here: https://www.ssh.com/ssh/putty/windows/puttygen

By default, keys (on a Linux or macOS host) live in the ~/.ssh/ directory under your home directory. A keypair is generated using ssh-keygen like so:

andrew@host % ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/andrew/.ssh/id_rsa): andrew_test
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in andrew_test.
Your public key has been saved in andrew_test.pub.
The key fingerprint is:
SHA256:nx4REDACTEDGN69tY andrew@host
The key's randomart image is:
(randomart image omitted)
andrew@host %

In the example above, I named the key ‘andrew_test’, which creates two files: andrew_test and andrew_test.pub. The first is your PRIVATE key and should remain secure on your system. The second is your PUBLIC key, which you can distribute. If you don’t specify a name, by default it will create files called id_rsa and id_rsa.pub.

Run ‘more andrew_test.pub’ (or ‘cat andrew_test.pub’) to see the contents of this file, and copy the text to the clipboard because you need it in the next step.
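On macOS you can send it straight to the clipboard instead of selecting it by hand (on Linux, xclip or xsel does the same job):

andrew@host % pbcopy < andrew_test.pub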

Note: For extra security, if I had specified a passphrase in the section above, I would have to enter that phrase every time the key is used. In this example, I did not set a passphrase.
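If you do choose a passphrase, you can avoid typing it on every connection by loading the key into ssh-agent once per session, something like this (an agent is usually already running on macOS and most Linux desktops):

andrew@host % ssh-add andrew_test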

Log into the Fortigate you wish to administer and create a new user like so, pasting the public key text you found in the .pub file between the quotation marks:

config system admin
    edit <new username>
        set ssh-public-key1 "<PASTE PUBLIC KEY HERE>"
        set accprofile super_admin
        set password <password>
    next
end

NOTE: Make sure you add a password for the user – otherwise, when logging on via the serial port (which does not support public/private key authentication), no password will be required!
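Before logging out, it is worth checking that the key was stored; the exact output varies by FortiOS version, but showing the user’s configuration should include a populated ssh-public-key1 line:

config system admin
    edit <new username>
        show
    end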

Then exit from that login session, and log in again as the user you defined.

If you specified a custom name for your keypair, point ssh at the private key with the -i option:

ssh -i ~/.ssh/<key-name> <new username>@<fortigate IP>

If you didn’t specify a name, ssh tries the default identity files (such as ~/.ssh/id_rsa) automatically, so you can simply type:

ssh <new username>@<fortigate IP>

Here is a working example of this last case – as you can see, there’s no prompt for a password:

andrew@host % ssh andrew@10.0.0.25
Fortigate-Test #
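If you would rather not type -i or the username every time, you can also add an entry to ~/.ssh/config; the host alias below is just an example:

Host fortigate-test
    HostName 10.0.0.25
    User andrew
    IdentityFile ~/.ssh/andrew_test

After that, ‘ssh fortigate-test’ is all that is needed.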





Cable testing on Juniper EX switches

22 09 2020

Does anyone use the cable test feature on EX switches? Did you even know you could do this kind of thing?

Read the rest of this entry »




fpc1 vlan-id(32768) to bd-id mapping doesn’t exist in itable

21 09 2020

If you are getting this message appearing repeatedly on a Juniper switch (e.g. an EX4300), check that you don’t have an IRB interface that is not attached to a VLAN. Also check that your IRBs all have IP addresses.
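For reference, attaching an IRB unit to a VLAN (and giving it an address) on an EX looks something like this in set form; the VLAN name, unit number and address are only examples:

set vlans VLAN10 vlan-id 10
set vlans VLAN10 l3-interface irb.10
set interfaces irb unit 10 family inet address 192.0.2.1/24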





Restoring data to Netbox Docker

16 09 2020

Having just shot myself in the foot by deleting Docker and losing a container I had been working on, here is the command to restore data to netbox-docker’s Postgres database:

sudo docker exec -i netbox-docker_postgres_1 psql --username netbox netbox < /path/to/backup/file.sql

Phew…
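For next time: assuming the same container and database names as the restore above, a matching backup can be taken with something like this:

sudo docker exec netbox-docker_postgres_1 pg_dump --username netbox netbox > /path/to/backup/file.sql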





DHCP Relay Issues With Microsoft Surface Pro Docks and Junos

7 09 2020

After deploying some new Juniper EX4600 core switches, my customer complained that he was experiencing about 45 seconds of delay in getting an IP address on a Surface Pro connected to a dock. The second time of connecting, it took about 8 seconds, which was more acceptable. The 45-second delay came back every time they moved the Surface Pro to a new dock.

Read the rest of this entry »




Migration Strategy: Moving From MPLS/LDP to Segment Routing

21 03 2019

MPLS cores that use Label Distribution Protocol (LDP) are common in service provider networks and have served us well. So the thought of pulling the guts out of the core is pretty daunting and invites the question of why you would want to perform open-heart surgery on such critical infrastructure. This article attempts to explain the benefits that would accrue from such a move and gives a high-level view of a migration strategy.

Why Do I Need Segment Routing?

  • Simplicity: LDP was invented as a label distribution protocol for MPLS because nobody wanted to go back to the standards bodies to re-invent OSPF or IS-IS so that they could carry labels. A pragmatic decision, but one that results in networks having to run two protocols. Two protocols means twice the complexity.
    Segment Routing simplifies things by allowing you to turn off LDP. Instead, it carries label (or Segment ID) information in extensions to the IGP. This leaves you with only IS-IS or OSPF to troubleshoot. As Da Vinci reportedly said, ‘simplicity is the ultimate sophistication’.

Read the rest of this entry »




3 Challenges for Network Engineers Adopting Ansible

14 03 2019

Ansible, ansible, ansible seems to be all we hear these days. There are lots of resources out there, all trying to convince us this is the new way to get stuff done. The reality is quite different: adoption of tools like this is slow in the networking world, and making the move is hard for command-line devotees.

Here are the three main problems I encountered in my adoption of Ansible as a modern way to manage devices:

Read the rest of this entry »




Pulling Configs from Cisco NSO using curl and json2yaml.py

14 03 2019

We’re using Cisco NSO in our lab at the moment to provision L3VPNs across multi-vendor environments as part of a demo. Just noting down a few things here for future reference:

Read the rest of this entry »




Testing notes: simulating link failure by filtering BFD packets

28 12 2018

In some testing I am doing, I need to prove that BFD can be used with iBGP to tell the BGP protocol when there is an interruption.  This will enable BGP to be brought down much faster than if regular BGP timers are used.

To make this easier to do, I used a firewall filter on one of the two routers to filter out BFD but accept all other packets. Single-hop BFD (i.e. across a link) uses UDP port 3784, while multi-hop BFD uses UDP port 4784. Since my BFD sessions are configured between loopbacks, it is this latter type I need to filter.

In the example below, CORE1 is a BGP client of CORE2, which is the route-reflector.

The following was configured on the routers to bring up the BFD session (I am only showing one side – you can figure out the mirror of this yourself I think):

[edit protocols bgp group CORE neighbor 10.0.0.6]
      bfd-liveness-detection {
          minimum-receive-interval 300;
          multiplier 3;
          transmit-interval {
              minimum-interval 100;
          }
      }

When the remote side was done, the session came up:


axians@CORE1> show bfd session
Dec 28 17:17:10
                                                  Detect   Transmit
Address                  State     Interface      Time     Interval  Multiplier
10.0.0.6                 Up                       0.900     0.300        3


To bring down the BFD session, apply the following filter outbound on the core-facing interface(s):


axians@CORE1# show | compare rollback 2
Dec 28 17:23:33
[edit interfaces ae1 unit 0 family inet]
  filter {
    output BLOCK-BFD;
  }
[edit firewall family inet]
  filter BLOCK-BFD {
    term T1 {
      from {
        protocol udp;
        port 4784;
      }
      then {
        discard;
      }
    }
    term T2 {
      then accept;
    }
  }
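For anyone who prefers set-style configuration, the equivalent filter and its application would look something like this (interface name taken from the example above):

set firewall family inet filter BLOCK-BFD term T1 from protocol udp
set firewall family inet filter BLOCK-BFD term T1 from port 4784
set firewall family inet filter BLOCK-BFD term T1 then discard
set firewall family inet filter BLOCK-BFD term T2 then accept
set interfaces ae1 unit 0 family inet filter output BLOCK-BFD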


As soon as the filter is applied, BFD times out and brings down the BGP session:

Dec 28 17:39:13 CORE2 bfdd[1935]: %DAEMON-4: BFD Session 10.0.0.2 (IFL 0) state Up -> Down LD/RD(16/23) Up time:00:06:07 Local diag: CtlExpire Remote diag: None Reason: Detect Timer Expiry.

Dec 28 17:39:13 CORE2 bfdd[1935]: %DAEMON-4-BFDD_TRAP_MHOP_STATE_DOWN: local discriminator: 16, new state: down, peer addr: 10.0.0.2




Juniper RADIUS-delivered switching filters

6 11 2018

I’ve been experimenting with getting RADIUS to deploy switching filters to Juniper switches recently, as part of a reference architecture demo.  The concept is called REACH2020 and combines network virtualisation with the ability to identify network users and devices so that categories of user can be put into different virtual networks.   This leaves the firewall that connects the virtual networks together as a convenient single point of control.

Anyway, back to the matter in hand. It turns out there’s a limit to the length of switching filter you can send to a Juniper EX.

Read the rest of this entry »