ECMP next hop on Juniper M, T and SRX series routers

If, like me, you have to jump around customer requirements, you may one day find yourself in a situation where you need to utilise capacity on 2 or more links between locations. My preference is to bond my uplinks with 802.1AX/802.3ad LACP and let the upstream provider deal with the rest. Sometimes the providers let you down and can do nothing: they can't run LACP from their edge device to you, and they can't transit your LACP frames so that you can perform your own LACP between locations. Sometimes you also have multiple links from different providers.

In this situation your last resort is Equal Cost Multi Path (ECMP) next hop. If you have two or more routes in your routing table with exactly the same metrics and none that are more preferred, an ECMP decision is triggered. On Juniper routing platforms this is quite rudimentary: for a particular prefix, one of the next hops is chosen (at random or based on src/dst hashes) and installed in the FIB (the hardware forwarding engine). This means that the effectiveness of the traffic spread is limited to the number of routes in your table in a particular direction.

My typical implementation involves running OSPF between routers on each link with identical metrics. From the "remote" end of the network I do not aggregate the advertised prefixes, as this would reduce the pool of routes, and instead advertise all prefixes individually. This is often a whole bunch of /32 point-to-point customer IPs, which is also partially why I choose to use OSPF for this.
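
As a rough illustration of the OSPF side, enabling the two parallel links with matching metrics is all that is needed; the interface names and metric value below are just examples:

set protocols ospf area 0.0.0.0 interface ge-0/1/0.0 metric 10
set protocols ospf area 0.0.0.0 interface ge-0/2/0.0 metric 10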

Advertising from the core, however, is a bit more of a problem. Here you are typically advertising mainly the default route. There may be some peering routes on either end that you could include too, but typically you do not want to be sending a full table to a remote end of the network; usually the reason you are here in the first place is that you are resource constrained.

The practical upshot is that traffic will balance OK in the direction towards the "remote" node, but very little, or not at all, in the direction inbound from the "remote" node. The former is typically the "download" direction and usually the direction most of the load is in anyway, but the situation is still not ideal.

To achieve a better spread, and to not have to worry too much about how many routes you are using, you need to implement a policy on the forwarding table. I know it sounds like I made that up, but yes, that's a real thing. If you do not do this then your traffic spread/diversity will be constrained by the points discussed above.

So we create the policy..

set policy-options policy-statement my-default-balancing-policy then load-balance consistent-hash

And then apply it to the forwarding table..

set routing-options forwarding-table export my-default-balancing-policy

This will now let your traffic use all of the equal-cost next hops instead of just the one selected next hop.
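
You can confirm that multiple next hops are now installed by checking the forwarding table for a prefix you expect to balance (the prefix below is just an example); with the policy applied you should see more than one next hop listed against the entry:

show route forwarding-table destination 203.0.113.0/24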

Your two balancing options are consistent-hash and per-packet. Per-packet will send packets down each link in a round-robin fashion and will result in nearly perfect load spread. However, it will cause out-of-order packet delivery between the sites, as there will always be performance differences between the links, which is why I never use it. The performance impact of out-of-order packets, on TCP specifically, is significant. Consistent-hash looks at the traffic's IP source, destination and protocol fields and uses those values to calculate which link to use. This is good at keeping traffic flows on one path and packet delivery consistent.

ECMP algorithm choice on the MX series platform is performed quite differently, but many of the points discussed above are still valid. This is to be expected, as the MX is a routing and switching platform, so hashing at multiple layers (L2/L3/L4) is possible. There are many more options to consider and we will leave those for another time.

A final note: the above hash uses L3 information as the hashing key, and on an MPLS enabled network this may not be enough. You can also set ECMP options for MPLS with the following statement.

set chassis maximum-ecmp 16

Options are 16/32/64 and allow for up to that many alternate LSPs to load balance across (that's if you have multiple LSPs to your destination).

Monitoring HP G5 server hardware RAID on Debian

Personally I prefer to use Linux MDADM software RAID because of the following factors:

  • Homogeneous set of utilities, always the same, unlike all the different custom utils from the many hardware vendors.
  • Long term support for the platform.
  • Proven performance and stability.
  • Cheaper RAID cards use the CPU in any case, and the MDADM implementation will blow them out of the water for features/performance.
  • Ability to run any RAID level, unlike most hardware which usually only supports 0, 1 and 0+1.

But sometimes you get a system with decent dedicated controllers with cache and battery backup, and you want to be able to offload to it. This is what was in my G5 system:

lspci -nn
06:00.0 RAID bus controller [0104]: Hewlett-Packard Company Smart Array Controller [103c:3230] (rev 04)

Googling "pciid 103c:3230" quickly yielded that I was dealing with an "HP Smart Array P400i" card.

Now while the card is supported by the OS out of the box and I can see any array that I created in the BIOS, the problem I sit with is that I need to be able to monitor the disks for failure and issue rebuild commands without taking the system down. Trying to get this right with the vendor provided tools is usually near impossible, as the vendor has abandoned support and usually only ever supported one or two commercial Linux distros in any case. Enter the good folks at the HWraid project.

Just add their repository and install the tools for your card (in this case the HP tools):

echo deb http://hwraid.le-vert.net/debian squeeze main >> /etc/apt/sources.list
apt-get update
apt-get install hpacucli

Now we test the tools.

hpacucli controller slot=0 physicaldrive all show
Smart Array P400i in Slot 0 (Embedded)
   
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 500 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 500 GB, OK)

Success. Now that I have tools that can interrogate the controller, I need to build some monitoring, so I add a script which I schedule to run every hour in cron.

#!/bin/bash
# Send an alert mail and log to syslog if the HP Smart Array controller
# reports any drive as Failed or Rebuilding.
MAIL=noc@acme.com
HPACUCLI=$(which hpacucli)
HPACUCLI_TMP=/tmp/hpacucli.log

# Count lines in the controller status that mention Failed or Rebuilding
if [ "$($HPACUCLI ctrl all show config | grep -cE 'Failed|Rebuilding')" -gt 0 ]
then
    msg="RAID Controller Errors"
    # Record the event in syslog
    logger -p syslog.error -t RAID "$msg"
    # Capture the full controller status and mail it out
    $HPACUCLI ctrl all show config > $HPACUCLI_TMP
    mail -s "$HOSTNAME [ERROR] - $msg" "$MAIL" < $HPACUCLI_TMP
    echo "$msg"
    cat $HPACUCLI_TMP
    rm -f $HPACUCLI_TMP
fi
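
For completeness, a matching hourly cron entry could look something like this; the script path and file name are just examples, put the script wherever you keep local tooling:

# /etc/cron.d/raid-check
0 * * * * root /usr/local/sbin/check-hpraid.sh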

Configure your mail subsystem and ensure your system is actually able to send mail.

dpkg-reconfigure exim4-config

The script is very basic but it gets the job done, and yes, you will generate alerts every hour until the issue is resolved; think of it as a feature. The script sends a mail to the hardcoded email address as well as logging to your syslog. If you are performing syslog monitoring and alerting with something like SolarWinds, Splunk or Graylog, then you could rather depend on those systems for alerts by checking for the alert message in syslog, and scrap the emailing bit of the script.

Juniper M10 value proposition

The Juniper M series of routers has been obsoleted but is a really good value proposition if all you need is a few gig of reliable routing capability. The M10/M10i is a redundant H/A solution that is readily available on the refurbished market for sub-2000 USD, and is the most cost effective way to bootstrap a small enterprise with a robust core.

What You Get

  • 5U Chassis
  • 2 x C-FEB Forwarding Boards (active + standby)
  • 8 x PIC (Physical Interface Card) slots
  • 2 x Routing Engines, 400 MHz CPU, 256 MB RAM (higher spec available for more $)
  • 4 x 300W PSU (2 required for operation, 3/4 for redundancy)
  • In-service replaceable fan tray

Pros

  • Fully redundant PSU, routing engine and switch fabric.
  • High availability features
  • Cheap and available
  • Enterprise grade
  • MPLS and IPv6 support
  • Dedicated out of band ethernet and RS232 management ports
  • Plenty of SONET/SDH/ATM PIC options at reasonable pricing

Cons

  • Limited capacity (1G per PIC/slot)
  • Relatively inefficient (power and size vs throughput)
  • End Of Life
  • No layer-2 capability (kind of.. see below)
  • Only vlan-tagging support, no stacked-vlan-tagging or flexible-vlan-tagging (i.e. no L3 support for Q-in-Q).

The 1G limit per port is the hard limit on this device, and I would not want to try to use it anywhere beyond a total of 4G of capacity, as balancing evenly across ports starts to become a factor. The M series routers are IP/MPLS routers supporting all the standard BGP/IS-IS/OSPF and MPLS protocols, as well as allowing for multiple routing instances.

Stateful firewalling and VPN are possible but would require a services PIC, or two for redundancy. If this is what you need then you are generally better off looking at something else, as they can be expensive and are limited to 1G per module.

Now, the M series was made in an era when layer-2 and layer-3 functionality was typically serviced by separate devices, so the M series routers are just that, routers: they have no switching capability at all. Well, kind of. They support 802.3ad LACP link bonding between PICs, and with the progression of technology and standards, Junos and the M platform received upgrades and features which included MPLS and VPLS functionality, the latter technically being a layer-2 technology.

Because of the lack of switching support, VPLS is limited in what it can do, and you can run into issues if you are not aware of these limitations. How we typically implement it is as follows:

  • 4 x 1G ethernet PICs
  • 2 x 2 ports bonded with LACP into 2 aggregated ethernet ports on the switches

(these numbers can be doubled for more capacity)

We use the 2 aggregated ports to provide a network facing port and a services facing port. They also provide link redundancy into the network; they typically all uplink into the same switch stack and are used to provide a pair of interfaces/VLANs for the M10, which can also be used to loop traffic where required. You could do this all with one AE port with all the physicals in it and just use VLANs, but I like to split my roles across interfaces for easier visibility and troubleshooting.

So on one VLAN on the network AE we set up MPLS capability with all the layer-3 stuff that's required to make MPLS work. On the services port we set up a customer or service facing VLAN that we want to tunnel using VPLS. This is done by setting the VLAN port encapsulation to vlan-vpls and then creating a routing instance of instance-type vpls and adding the interface to that instance. This is technically a switching function being performed on a routing-only device.
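
A rough sketch of the services side for a BGP-signalled VPLS, assuming the services AE is ae1, the customer VLAN is 600 and the instance is called CUST-A (interface names, VLAN ID, route distinguisher and target are all placeholders, and BGP with family l2vpn signaling plus MPLS on the network side is assumed to already be in place):

set interfaces ae1 vlan-tagging
set interfaces ae1 encapsulation vlan-vpls
set interfaces ae1 unit 600 encapsulation vlan-vpls
set interfaces ae1 unit 600 vlan-id 600
set routing-instances CUST-A instance-type vpls
set routing-instances CUST-A interface ae1.600
set routing-instances CUST-A route-distinguisher 65000:600
set routing-instances CUST-A vrf-target target:65000:600
set routing-instances CUST-A protocols vpls site-range 8
set routing-instances CUST-A protocols vpls site CUST-A-SITE site-identifier 1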

The caveat is that you need to create a separate routing instance for every layer-2 service that you want to use. You CANNOT use VLANs within the VPLS service, because you will run into MAC learning issues due to the fact that the M10 is not layer-2 aware and cannot differentiate between the broadcast domains of multiple VLANs. It will work for a bit, but you will run into random dropped packets as the MAC learning tables on your endpoint devices get polluted.

The flexible-vlan-tagging or stacked-vlan-tagging option on interfaces is allowed but ultimately not supported. On commit the device spits warnings into the messages log, and when you try to configure the inner and outer tag the router will not accept the configuration. You should configure the vlan-tagging option instead.

A simpler, supported L2 feature is the l2circuit using MPLS. It is a point-to-point only tunnel that does not perform any MAC learning whatsoever; it just takes the frame in on one side and spits it out on the other. This can be configured on VLANs on ethernet ports if the encapsulation type on the VLAN is set to vlan-ccc. The port will accept further tags if they are present, as well as "L2 local" frames such as LACP, LLDP and STP BPDUs. The service is really only limited by overall MTU. This is because the M10 is not involved in any L2 learning, so it will transparently pass the frame from one endpoint to the other. This is also why you can only have a single remote endpoint: the M10 cannot make a path determination with no address information.
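
A minimal l2circuit sketch, assuming a port ge-0/3/0, customer VLAN 610 and a far-end loopback of 10.255.0.2 (all placeholders), with LDP already running between the loopbacks for signalling:

set interfaces ge-0/3/0 vlan-tagging
set interfaces ge-0/3/0 encapsulation vlan-ccc
set interfaces ge-0/3/0 unit 610 encapsulation vlan-ccc
set interfaces ge-0/3/0 unit 610 vlan-id 610
set protocols l2circuit neighbor 10.255.0.2 interface ge-0/3/0.610 virtual-circuit-id 610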

The downside is that troubleshooting can be a bit harder, in that you cannot see any learned MACs, but the upside is that you do not need to worry about memory and MAC learning limits.

As mentioned above, the lack of L2 support means that we usually pair an M10 with 2 EX4200s in VC mode. QFX would be better, but we are looking at a budget solution here so they don't make sense. This gives you a certain amount of L2 flexibility that will cover most use cases. Be aware that EX series switches only support VLAN swap and push functions, NOT VLAN pop, which can be somewhat limiting in this environment. One final note regarding the EX configuration for l2circuits: you can configure dot1q-tunneling with "layer2-protocol-tunneling all" on the EX4200s, which will ensure you can transparently take all frames from a customer facing VLAN to an l2circuit on the M10. This is also where we can look at MAC learning for troubleshooting, as the switch will learn customer MACs, and where we set MAC learning limits to prevent possible issues introduced on the EX by customer networks.
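
As a rough sketch on the EX side, assuming a customer VLAN named CUST-A with VLAN ID 300 and a customer port ge-0/0/10 (names, IDs and the MAC limit are placeholders, and exact syntax and prerequisites vary by Junos release):

set vlans CUST-A vlan-id 300
set vlans CUST-A dot1q-tunneling layer2-protocol-tunneling all
set ethernet-switching-options secure-access-port interface ge-0/0/10.0 mac-limit 64 action drop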

Making your Debian server networking redundant

You will need at least the following…

  1. A pair of stacked switches that support creating an LACP bonded port across the stack on 2 different nodes. This gives you the best of all worlds, being able to provide redundancy and increase your bandwidth.
  2. Or alternatively, 2 ports on the same switch or on different unstacked switches. This is the bare minimum you can do to mitigate link failure. Note that this setup has no polling mechanism, so if the physical ethernet link stays up but is not operational because of a switching failure on the device, or a failure on another port of the device that provides the uplink, then this won't help you.

On your server you will need 2 (or more) network cards and some "simple" setup.

Install the packages that you will need in case you don’t have them already.

  • apt-get install ifenslave vlan bridge-utils

The example sets up the following:

  • eth0 and eth1 bonded together into bond0
  • create 2 bridges, br8 and br9
  • create 2 VLANs, bond0.8 and bond0.9
  • place them in each bridge respectively
  • add IP details on br9
  • br8 has no L3 config on it and in this specific case is used by KVM to bridge virtual machines into as they come online

For option 1, edit your /etc/network/interfaces to look something like this:


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo bond0 bond0.8 bond0.9 br8 br9
iface lo inet loopback

iface bond0 inet manual
 bond-slaves eth0 eth1
 bond-mode 802.3ad
 bond-miimon 100
 bond-use-carrier 1
 bond-lacp-rate 1
 bond-min-links 1
 # send traffic over the available links based on src/dst MAC address
 bond-xmit-hash-policy layer2
 mtu 1600

iface bond0.8 inet manual
iface bond0.9 inet manual

iface br8 inet manual
 bridge_stp off
 bridge_ports bond0.8

iface br9 inet static
 address 192.168.0.2
 netmask 255.255.255.0
 gateway 192.168.0.1
 bridge_ports bond0.9
 bridge_stp off

For option 2, edit your /etc/network/interfaces to look something like this (only the bond0 config changes):


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo bond0 bond0.8 bond0.9 br8 br9
iface lo inet loopback

iface bond0 inet manual
 bond-slaves eth0 eth1
 bond-mode active-backup
 bond-miimon 100
 bond-downdelay 200
 bond-updelay 200

iface bond0.8 inet manual
iface bond0.9 inet manual

iface br8 inet manual
 bridge_stp off
 bridge_ports bond0.8

iface br9 inet static
 address 192.168.0.2
 netmask 255.255.255.0
 gateway 192.168.0.1
 bridge_ports bond0.9
 bridge_stp off

Most use cases will probably not require bridging or VLANs, but I thought it best to provide examples of the entire feature set; you can always reduce it to what you need.
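
Once the bond is up you can sanity-check it from the kernel's point of view; /proc/net/bonding/bond0 shows the bond mode, the state of each slave and, for 802.3ad, the LACP partner details:

cat /proc/net/bonding/bond0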

Broken web pages/downloads – AAAaaarrrrgh!

Broken pages nowadays account for half of my nightmare support scenarios, you know the ones: half loaded, broken or endlessly loading pages. This is usually accompanied by a comment about how it's fine on the user's 3G/DSL/whatever network. In the past this was more likely to be an MTU related issue, but with the proliferation of CDN hosting nowadays, your more likely suspect is a broken path to a CDN.

The situation in South Africa lends itself to this, as some of the larger ISPs have private CDN deployments alongside openly peered deployments. Often a user is trying to get content from a CDN they should not have access to, or a CDN that is not optimal. Sometimes this is because of bad CDN configuration, but more often than not it's because your users are not using the correct DNS servers for resolution.

CDNs rely heavily on DNS to determine the origin AS and location of a request, and reply based on that information accordingly. Google and OpenDNS are often culprits here, as well-intentioned users love to use these (I blame Google for making it so easy with 8.8.8.8). While extensions to DNS have helped with identifying the source of a request, the issues are not completely gone and will continue to rear their head for some time still. I have also seen scenarios where domain controllers are set up to use one network provider (DNS settings included) while LAN users use a different provider/gateway (aka you), meaning the domain controller gives DNS responses to your clients from a server on a different network altogether.

The tools I usually use for troubleshooting these kinds of issues are:

  • dig/nslookup to check resolution discrepancies between you and the client network (see the example after this list).
  • The browser's developer view. Just head over to the sources tab and see what resources the page is actually loading; gone are the days of simple sites, all sites now include content from all over the show for advertising, tracking and load balancing.
  • http://www.cdnplanet.com/tools/cdnfinder/ . It lets you point it at a web site and it reports what external resources the site uses and what CDNs those resources are on.
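
For the DNS check, comparing what the client network's resolver returns with what a public resolver returns for the same CDN hostname is usually enough to spot a discrepancy (the hostname and resolver addresses below are just examples):

dig +short cdn.example.com @192.0.2.53
dig +short cdn.example.com @8.8.8.8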

If this doesn't resolve things, then your problem possibly lies with your upstream providers. They either have a broken transparent application cache/accelerator (good luck finding someone there who knows something about them) or you are running on a seriously messed up bonded link. I have dealt with both of these before, and maybe I will share more on that another day.