Monitoring HP G5 server hardware RAID on Debian

Personally I prefer to use Linux mdadm software RAID for the following reasons:

  • Homogeneous set of utilities that is always the same, unlike the assortment of custom utilities from the many different hardware vendors.
  • Long term support for the platform.
  • Proven performance and stability.
  • Cheaper RAID cards use the host CPU in any case, and the mdadm implementation blows them out of the water on features and performance.
  • Ability to run any RAID level, unlike most hardware controllers which usually only support 0, 1 and 0+1 (see the example below).
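
As an example of that last point, a minimal sketch of creating a RAID 6 array with mdadm, something most entry-level hardware controllers cannot do (the device names are hypothetical):

# create a 4-disk RAID 6 array from four partitions
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[bcde]1
# verify array state and watch the initial sync progress
mdadm --detail /dev/md0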

But sometimes you get a system with decent dedicated controllers with cache and battery backup, and you want to be able to offload to them. This is what was in my G5 system:

lspci -nn
06:00.0 RAID bus controller [0104]: Hewlett-Packard Company Smart Array Controller [103c:3230] (rev 04)

Googling “pciid 103c:3230” quickly revealed that I was dealing with an “HP Smart Array P400i” card.

Now while the card is supported by the OS out of the box and I can see any array that I created in the BIOS, the problem I am left with is that I need to be able to monitor the disks for failure and issue rebuild commands without taking the system down. Getting this right with the vendor-provided tools is usually near impossible, as the vendor has abandoned support and usually only supported one or two commercial Linux distros in any case. Enter the good folks at the HWraid project.

Just add their repository and install the tools for your card (in this case the HP tools)

echo deb http://hwraid.le-vert.net/debian squeeze main >> /etc/apt/sources.list
apt-get update
apt-get install hpacucli

Now we test the tools.

hpacucli controller slot=0 physicaldrive all show
Smart Array P400i in Slot 0 (Embedded)
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 500 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 500 GB, OK)
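
The logical drive state can be queried in the same way:

hpacucli controller slot=0 logicaldrive all show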

Success. Now that I have tools that can interrogate the controller, I need to build some monitoring, so I add a script which I schedule to run every hour in cron.

#!/bin/bash
# Check all Smart Array controllers for failed or rebuilding drives
HPACUCLI=$(which hpacucli)
HPACUCLI_TMP=/tmp/hpacucli-status.txt
MAIL=admin@example.com  # hardcoded alert address, change to suit

if [ "$($HPACUCLI ctrl all show config | grep -cE 'Failed|Rebuilding')" -gt 0 ]; then
    msg="RAID Controller Errors"
    logger -p syslog.error -t RAID "$msg"
    $HPACUCLI ctrl all show config > "$HPACUCLI_TMP"
    mail -s "$HOSTNAME [ERROR] - $msg" "$MAIL" < "$HPACUCLI_TMP"
    echo "$msg"
fi
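
A sketch of the hourly cron schedule, assuming the script was saved as /usr/local/sbin/check_raid.sh (the path is illustrative):

# /etc/cron.d/check_raid
0 * * * * root /usr/local/sbin/check_raid.sh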

Configure your mail subsystem and ensure your system is actually able to send mail.

dpkg-reconfigure exim4-config
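
A quick way to confirm the system can actually deliver (the address is illustrative):

echo "mail subsystem test" | mail -s "$HOSTNAME mail test" admin@example.com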

The script is very basic but it gets the job done, and yes, you will generate alerts every hour until the issue is resolved; think of it as a feature. The script sends a mail to the hardcoded email address as well as logging to syslog. If you are performing syslog monitoring and alerting with something like SolarWinds, Splunk or Graylog, then you could rather depend on those systems for alerts by checking for the alert message in syslog, and scrap the emailing bit of the script.

Juniper M10 value proposition

The Juniper M series routers have been obsoleted, but they are a really good value proposition if all you need is a few gigabits of reliable routing capacity. The M10/M10i is a redundant H/A solution that is readily available on the refurbished market for under 2000 USD, and it is the most cost-effective way to bootstrap a small enterprise with a robust core.

What You Get

  • 5U Chassis
  • 2 x C-FEB Forwarding Boards (active + standby)
  • 8 x PIC Physical Interface Card slots
  • 2 x Routing Engines, 400 MHz CPU, 256 MB RAM (higher spec available for more $)
  • 4 x 300W PSU (2 required for operation, 3 or 4 for redundancy)
  • In-service replaceable fan tray


Pros

  • Fully redundant PSUs, routing engines and switch fabric.
  • High availability features
  • Cheap and available
  • Enterprise grade
  • MPLS and IPv6 support
  • Dedicated out-of-band Ethernet and RS232 management ports
  • Plenty of SONET/SDH/ATM PIC options at reasonable pricing


Cons

  • Limited capacity (1G per PIC/slot)
  • Relatively inefficient (power and size vs throughput)
  • End Of Life
  • No layer-2 capability (kind of... see below)
  • Only vlan-tagging support, no stacked-vlan-tagging or flexible-vlan-tagging (i.e. no L3 support for Q-in-Q).

The 1G per-port limitation is the hard limit on this device, and I would not want to try to use it anywhere beyond a total of 4G of capacity, as balancing traffic evenly across ports starts to become a factor. The M series routers are IP/MPLS routers supporting all the standard BGP/IS-IS/OSPF and MPLS protocols, as well as allowing for multiple routing instances.

Stateful firewalling and VPN are possible but would require a services PIC, or two for redundancy. If this is what you need then you are generally better off looking at something else, as these PICs can be expensive and are limited to 1G per module.

Now, the M series was made in an era when layer-2 and layer-3 functionality was typically serviced by separate devices, so the M series routers are just that, routers; they have no switching capability at all. Well, kind of: they support 802.3ad LACP link bonding between PICs, and with the progression of technology and standards, Junos and the M platform received upgrades which included MPLS and VPLS functionality, the latter being technically a layer-2 technology.

Because of the lack of switching support, VPLS is limited in what it can do, and you can run into issues if you are not aware of these limitations. How we typically implement it is as follows:

  • 4 x 1G Ethernet PICs
  • 2 x 2 ports bonded with LACP into 2 aggregated Ethernet ports on the switches (see the sketch below)

(these numbers can be doubled for more capacity)
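
A minimal sketch of the aggregated Ethernet side of this in Junos; the interface and AE numbers are illustrative:

chassis {
    aggregated-devices {
        ethernet {
            device-count 2;    # ae0 = network facing, ae1 = services facing
        }
    }
}
interfaces {
    ge-0/0/0 {
        gigether-options {
            802.3ad ae0;       # first member of the network-facing bundle
        }
    }
    ge-0/1/0 {
        gigether-options {
            802.3ad ae0;       # second member, on a different PIC for redundancy
        }
    }
    ae0 {
        vlan-tagging;
        aggregated-ether-options {
            lacp {
                active;
            }
        }
    }
}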

We use the two aggregated ports to provide a network-facing port and a services-facing port. They also provide link redundancy into the network; they typically all uplink into the same switch stack and are used to provide pairs of interfaces/VLANs for the M10, which can also be used to loop traffic where required. You could do this all with one AE port containing all the physical links and just use VLANs, but I like to split my roles across interfaces for easier visibility and troubleshooting.

So on one VLAN on the network AE we set up MPLS capability with all the layer-3 configuration that is required to make MPLS work. On the services port we set up a customer- or service-facing VLAN that we want to tunnel using VPLS. This is done by setting the VLAN port encapsulation to vlan-vpls and then creating a routing instance of instance-type vpls and adding the interface to that instance. Now, this is technically a switching function being performed on a routing-only device.
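
A rough sketch of that configuration, assuming the MPLS and BGP l2vpn signalling groundwork is already in place on the network side; the interface, IDs and AS number are hypothetical:

interfaces {
    ae1 {
        vlan-tagging;
        encapsulation vlan-vpls;
        unit 100 {
            encapsulation vlan-vpls;
            vlan-id 100;    # the customer-facing VLAN to be tunnelled
        }
    }
}
routing-instances {
    customer-a {
        instance-type vpls;
        interface ae1.100;
        route-distinguisher 65000:100;
        vrf-target target:65000:100;
        protocols {
            vpls {
                site-range 10;
                site customer-a-site {
                    site-identifier 1;
                }
            }
        }
    }
}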

The caveat is that you need to create a separate routing instance for every layer-2 service that you want to carry. You CANNOT run multiple VLANs over a single VPLS service, because you will run into MAC learning issues: the M10 is not layer-2 aware and cannot differentiate between the broadcast domains of the different VLANs. It will work for a bit, but you will see random dropped packets as the MAC learning tables on your endpoint devices get polluted.

The flexible-vlan-tagging and stacked-vlan-tagging options on interfaces are allowed but ultimately not supported. On commit the device spits warnings into the messages log, and when you try to configure the inner and outer tags the router will not accept the configuration. You should configure the vlan-tagging option instead.

A simpler, supported L2 feature is the l2circuit using MPLS. It is a point-to-point only tunnel that does not perform any MAC learning whatsoever; it just takes the frame in on one side and spits it out on the other. This can be configured on VLANs on Ethernet ports if the encapsulation type on the VLAN is set to vlan-ccc. The port will accept further tags if they are present, as well as “L2 local” frames such as LACP, LLDP and STP BPDUs. The service is really only limited by overall MTU. This is because the M10 is not involved in any L2 learning, so it transparently passes frames from one endpoint to the other. It is also why you can only have one remote endpoint: with no address information, the M10 cannot make a path determination.
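
A minimal sketch of such an l2circuit; l2circuits are LDP-signalled, so protocols ldp is assumed to be running towards the far end, and the interface, VLAN, neighbour address and circuit ID are hypothetical:

interfaces {
    ge-0/2/0 {
        vlan-tagging;
        encapsulation vlan-ccc;
        unit 200 {
            encapsulation vlan-ccc;
            vlan-id 200;
        }
    }
}
protocols {
    l2circuit {
        neighbor 10.255.0.2 {    # loopback address of the far-end router
            interface ge-0/2/0.200 {
                virtual-circuit-id 200;
            }
        }
    }
}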

The downside is that troubleshooting can be a bit harder, in that you cannot see any learned MACs, but the upside is that you do not need to worry about memory and MAC learning limits.

As mentioned above, the lack of L2 support means that we usually pair an M10 with two EX4200s in VC mode. QFX would be better, but we are looking at a budget solution here, so they don't make sense. This gives you a certain amount of L2 flexibility that will cover most use cases. Be aware that EX series switches only support VLAN swap and push operations, NOT VLAN pop, which can be somewhat limiting in this environment. One final note regarding the EX configuration for l2circuits: you can configure “dot1q-tunneling layer2-protocol-tunneling all” on the EX4200s, which will ensure you can transparently take all frames from a customer-facing VLAN to an l2circuit on the M10. This is also where we can look at MAC learning for troubleshooting, as the switch will learn customer MACs, and where we set MAC learning limits to prevent possible issues introduced on the EX by customer networks.
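
A sketch of the EX side of that, with a hypothetical VLAN name and ID:

vlans {
    customer-a {
        vlan-id 200;
        dot1q-tunneling {
            layer2-protocol-tunneling {
                all;    # tunnel LACP, LLDP, STP and friends transparently
            }
        }
    }
}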

Making your Debian server networking redundant

You will need at least the following…

  1. A pair of stacked switches that support creating an LACP bonded port across the stack on 2 different nodes. This gives you the best of all worlds: redundancy plus increased bandwidth.
  2. Or alternatively, 2 ports on the same switch or on different unstacked switches. This is the bare minimum you can do to mitigate link failure. Note that this setup has no polling mechanism, so if the physical Ethernet link stays up but is not operational, because of a switching failure in the device or a failure on another port that provides its uplink, this won't help you.

On your server you will need 2 (or more) network cards and some “simple” setup.

Install the packages that you will need in case you don’t have them already.

  • apt-get install ifenslave vlan bridge-utils

The example sets up the following:

  • eth0 and eth1 bonded together into bond0
  • create 2 bridges br8 and br9
  • create 2 vlans bond0.8 and bond0.9
  • place them in each bridge respectively
  • add IP details on br9
  • br8 has no L3 config on it; in this specific case it is used by KVM as the bridge that virtual machines attach to as they come online

For option 1, edit your /etc/network/interfaces to look something like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo bond0 bond0.8 bond0.9 br8 br9
iface lo inet loopback

iface bond0 inet manual
 bond-slaves eth0 eth1
 bond-mode 802.3ad
 bond-miimon 100
 bond-use-carrier 1
 bond-lacp-rate 1
 bond-min-links 1
 # send traffic over the available links based on src/dst MAC address
 bond-xmit-hash-policy layer2
 mtu 1600

iface bond0.8 inet manual
iface bond0.9 inet manual

iface br8 inet manual
 bridge_stp off
 bridge_ports bond0.8

iface br9 inet static
 bridge_ports bond0.9
 bridge_stp off
 # example address, adjust to suit your network
 address 192.0.2.10
 netmask 255.255.255.0

For option 2, edit your /etc/network/interfaces to look something like this (only the bond0 config changes):

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo bond0 bond0.8 bond0.9 br8 br9
iface lo inet loopback

iface bond0 inet manual
 bond-slaves eth0 eth1
 bond-mode active-backup
 bond-miimon 100
 bond-downdelay 200
 bond-updelay 200

iface bond0.8 inet manual
iface bond0.9 inet manual

iface br8 inet manual
 bridge_stp off
 bridge_ports bond0.8

iface br9 inet static
 bridge_ports bond0.9
 bridge_stp off
 # example address, adjust to suit your network
 address 192.0.2.10
 netmask 255.255.255.0
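
Once the interfaces are up, the bond and bridges can be verified with a couple of standard commands:

# shows the bonding mode, slave link states and LACP details
cat /proc/net/bonding/bond0
# lists the bridges and their member ports
brctl show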

Most use cases will probably not require bridging or VLANs, but I thought it best to provide examples of the entire feature set; you can always reduce it to what you need.