Trunking and sub-interfaces on the same switchport

7 10 2008

For some reason, I never knew that you could trunk and use a sub-interface on the same port of a Catalyst 6500, so I’m recording it here for personal reference.

What I wanted to achieve was to connect two 7600 routers over an Ethernet pseudowire (E-Line, EoMPLS circuit, AToM circuit, Martini circuit – whatever it’s called these days). The reason I needed to do so was that the intervening 6500 routers were only getting a default route via BGP from the 7600s.

The network looks a bit like this:

Peering over EoMPLS circuit


The routers R1 and R2 have a full internet routing table, and they send only a default route to the 6500s (because the supervisor on these switches can’t handle all the routes).

Configure a full iBGP mesh and everyone’s happy, right?   That’s what I thought, but it didn’t work out that way.

The customer implemented this and found that when the transit peerings on R1 were down, there was a routing loop. The reason is that although R1 still has the full routing table from R2 over their iBGP peering, when R1 forwards packets towards the left-hand 6500, that switch only has a default route. And that default route was learned from R1 – so it just sends the packets straight back to R1!
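As a sketch of how that default route ends up on the 6500 (the addresses and AS number here are illustrative, not from the original setup), R1 might be originating it like this:

```
router bgp 65000
 ! iBGP session to the left-hand 6500
 neighbor 10.0.0.2 remote-as 65000
 ! send only a default route, filter everything else
 neighbor 10.0.0.2 default-originate
 neighbor 10.0.0.2 prefix-list DEFAULT-ONLY out
!
ip prefix-list DEFAULT-ONLY permit 0.0.0.0/0
```

With only that 0.0.0.0/0 in its table, the 6500 has no better information than "send everything back to R1", hence the loop when R1's transit is down.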

So, short of investing in a new circuit to connect the two routers up directly, there were several options available:

1. Configure the site inter-link as a layer-2 one, and run VLAN tags across it

2. Configure an EoMPLS circuit between new sub-interfaces on R1 and R2

We didn’t like the sound of option 1, because making the VLAN resilient would require spanning-tree. So we went for option 2 instead.

The interfaces on the 6500s were trunk links, so I thought that to build this new circuit I’d need to configure a VLAN and an SVI, then put the MPLS “xconnect” command on the SVI. This didn’t work though – “show mpls l2transport vc detail” was coming up with “Invalid NEXT HOP” or something like that, so I thought the plan was dead in the water. However, the following 6500 config worked very nicely:

interface GigabitEthernet2/1
 switchport trunk allowed vlan 101
 switchport mode trunk
interface GigabitEthernet2/1.100
 encapsulation dot1Q 100
 no snmp trap link-status
 xconnect <peer-PE-loopback> 100 encapsulation mpls

This is the configuration of the 6500 port facing R1. VLAN 101 is the one that carries OSPF for the iBGP peerings between R1 and the two 6500s. Sub-interface Gig2/1.100 is the start of an EoMPLS circuit that runs across to the other 6500. Packets received with VLAN tag 100 are encapsulated in MPLS and transported across the network. R1 and R2 now have a direct connection.
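For completeness, the router end of the pseudowire is just an ordinary dot1Q sub-interface. Something like this on R1 (the interface name and addressing are my assumptions, not the original config):

```
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 ip address 192.168.100.1 255.255.255.252
!
! R2 mirrors this with .2 on the same /30. Because the
! EoMPLS pseudowire carries the VLAN-100 frames end to
! end, the two sub-interfaces appear directly connected,
! and the R1-R2 iBGP session can run over this subnet.
```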

Don’t forget what most Cisco MPLS books don’t tell you – if you’re running MPLS and configuring EoMPLS, the MTU on the MPLS-enabled interfaces needs to be increased by 8 bytes.  This is to cope with the TWO MPLS labels that are applied to the packets.  The outer label is used for hop-by-hop forwarding of the packet to the egress PE, while the inner label is used to tell the egress PE which VPN the packet belongs to (based in this case on the VC ID).   The VC ID (in this case the number 100 on the xconnect command) needs to agree at both ends and be unique.
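On the MPLS-enabled core interfaces, that works out to something like the following (1500 + 2 × 4-byte labels = 1508; the exact MTU commands available vary by platform and line card, so treat this as a sketch):

```
interface GigabitEthernet3/1
 description MPLS core link towards the other 6500
 mtu 1508
 mpls ip
```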




5 responses

19 01 2009

What IOS did you use in that test?

20 01 2009

This feature is in 12.2SXH – I was using 12.2(33)SXH.



13 08 2012

Why not simply use iBGP between R1 & R2?

19 08 2012
rick ross

^^ because they aren’t directly connected

23 08 2012

I mean why not simply connect R1 & R2
