OpenBSD PPPoE and RFC 4638

January 20th, 2012

I upgraded my Internet connection from ADSL 2+ to FTTC a while ago. I’m with Eclipse as an ISP, but it’s basically the same product as BT Infinity, right down to the Openreach-branded modem, (a Huawei Echolife HG612 to be exact).

With this modem, you need a router or some software that can do RFC 2516 PPPoE, so I simply kept using OpenBSD on my trusty Soekris net4501 and set up a pppoe(4) interface, job done. However, it soon became apparent that the 133 MHz AMD Elan CPU couldn’t fully utilise the 40 Mb/s of bandwidth I now had; at best I could get 16-20 Mb/s with a favourable wind. An upgrade was needed.

Given I’d had around 8 years of flawless service from the net4501, another Soekris board was the way to go. Enter the net6501, with loads more CPU grunt, more RAM and, interestingly, Gigabit NIC chips; not necessarily for the faster speed, but because they can naturally handle a larger MTU.

The reason for this was that I had read that the Huawei modem and BT FTTC network fully support RFC 4638, which means you can have an MTU of 1,500 bytes on your PPPoE connection, matching what you’ll have on your internal network. Traditionally a PPPoE connection only allowed 1,492 bytes, on account of the 8 bytes of PPPoE headers carried in every Ethernet frame payload. Because of this it was almost mandatory to perform MSS clamping on traffic to prevent path MTU problems. Having an MTU of 1,500 bytes should avoid the need for any clamping trickery, but it means your Ethernet interface needs to cope with an MTU of 1,508 bytes, hence the Gigabit NICs, (which can accommodate an MTU of 9,000 bytes with no problems).
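Spelling out the arithmetic, (a quick sketch using shell arithmetic; the numbers are the standard header sizes):

```shell
# PPPoE carries 8 bytes of headers inside each Ethernet frame payload
ethernet_mtu=1500
pppoe_overhead=8
pppoe_mtu=$((ethernet_mtu - pppoe_overhead))
echo "$pppoe_mtu"            # 1492, the traditional PPPoE MTU

# TCP MSS is the MTU minus 40 bytes (20-byte IP + 20-byte TCP headers),
# which is the value MSS clamping has to target on a 1,492-byte link
echo $((pppoe_mtu - 40))     # 1452
```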

One small problem remained: pppoe(4) on OpenBSD 5.0 didn’t support RFC 4638. When I sat down and started to add support I noticed someone had already added it to the NetBSD driver, (which is where the OpenBSD driver originated from), so I created a similar patch based on their changes and, with some necessary improvements based on feedback from OpenBSD developers, it has now been committed to CVS in time for the 5.1 release.

Making use of the larger MTU is fairly simple: explicitly set the MTU on both the Ethernet and PPPoE interfaces to 8 bytes higher than their defaults. As an example, my /etc/hostname.em0 now contains:

mtu 1508 up

And similarly my /etc/hostname.pppoe0 contains:

inet NONE mtu 1500 \
        pppoedev em0 authproto chap \
        authname 'user' authkey 'secret' up
!/sbin/route add default -ifp $if

I also added support to tcpdump(8) to display the additional PPPoE tag used to negotiate the larger MTU, so when you bring the interface up, watch for PPP-Max-Payload tags going back and forth during the discovery phase.

With that done, the remaining step is to remove any scrub max-mss rules from pf.conf(5), as with any luck they should no longer be required.
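For reference, the sort of rule that should now be redundant looks something like this, (a sketch in the pf.conf syntax of that era; 1452 assumes the old 1,492-byte PPPoE MTU):

```
match on pppoe0 scrub (max-mss 1452)
```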

PPPoE fixes for natpmpd

December 16th, 2011

I recently started using the pppoe(4) driver on OpenBSD, and with it found a few small bugs in how natpmpd handles these sorts of dynamic interfaces.

One simple bug was that it refused to start up if the interface didn’t already exist; another was that it treated the placeholder address a pppoe(4) interface carries before negotiation as a valid IP address and would broadcast that to any clients on the network. This happened due to the way pppoe(4) interfaces are initially set up and would correct itself quickly once the PPPoE session was negotiated.

Both of these bugs are now fixed, so natpmpd should correctly deny any request until the interface gets created and negotiates a proper IP address, (normal Ethernet interface behaviour is unchanged).

Forcing GUID Partition Table on a disk with CentOS 6

August 6th, 2011

CentOS 6 is now out so I can finally build up a new HP ProLiant Microserver that I purchased for a new home server. Amongst many new features, CentOS 6 ships a GRUB bootloader that can boot from a disk with a GUID Partition Table (GPT for short). Despite not having EFI, the HP BIOS can boot from GPT disks so there’s no limitation with disk sizes that I can use.

However, before I tore down my current home server I wanted to test all of this really did work, so I popped a “small” 500 GB disk in the HP. The trouble is the CentOS installer won’t use GPT if the disk is smaller than 2 TB. Normally this wouldn’t be a problem with a single disk, apart from the fact that I specifically want to test GPT functionality, but it can cause complications if your disk is in fact a hardware RAID volume, because most RAID controllers allow the following scenario:

  1. You have a RAID 5 volume comprising 4x 500 GB disks, so the OS sees ~ 1.5 TB raw space.
  2. Pull one 500 GB disk, replace with a 1 TB disk, wait for RAID volume to rebuild.
  3. Rinse and repeat until all disks are upgraded.
  4. Grow RAID volume to use the extra space, OS now sees ~ 3 TB raw space.
  5. Resize partitions on the volume, grow filesystems, etc.

You’ll find your volume has a traditional MBR partition scheme, as it originally fell below the 2 TB limit, so you can’t resize the partitions to fully use the new space. While it might be possible to non-destructively convert MBR to GPT, I wouldn’t want to risk it with terabytes of data. It’s far better to just use GPT from the start and save painting myself into a corner later.
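The 2 TB limit itself falls straight out of the MBR format: partition entries store 32-bit sector counts, and sectors are traditionally 512 bytes:

```shell
# Maximum MBR partition size: 2^32 sectors of 512 bytes each
max_bytes=$((4294967296 * 512))
echo "$max_bytes"                     # 2199023255552 bytes
echo $((max_bytes / 1099511627776))   # 2 TiB (1099511627776 = 2^40)
```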

If you’re doing a normal install, start the installer until you get to what’s known as stage 2 such that on (ctrl+)alt+F2 you have a working shell. From here, you can use parted to create an empty GPT on the disk:

# /usr/sbin/parted -s /dev/sda mklabel gpt

Return to the installer and keep going until you reach the partitioning choices. Pick the “use empty space” option and the installer will create new partitions within the new GPT. If you choose the “delete everything” option, the installer will replace the GPT with MBR again.

If like me you’re using kickstart to automate the install process, you can do something similar in your kickstart file with the %pre section, something like the following:

/usr/sbin/parted -s /dev/sda mklabel gpt
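Wrapped up as a complete kickstart fragment it might look like the following, (a sketch assuming CentOS 6 syntax, where %pre wants a matching %end; /dev/sda is whatever your target disk is):

```
%pre
/usr/sbin/parted -s /dev/sda mklabel gpt
%end
```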

Then make sure your kickstart contains no clearpart instructions so it defaults to just using the empty space. The only small nit I found after the install is that the minimal install option only includes fdisk and not parted, so if you want to manage the disk partitions you’ll need to add parted either at install time or afterwards, as fdisk doesn’t support GPT.

Storage controller probe order & Kickstart

June 14th, 2011

I’ve been configuring some Dell servers today that have one of Dell’s MD3220 storage arrays attached. The servers were completely blank so they needed kickstarting with CentOS 5.x. Not a problem, just use the standard one that’s used for 99% of the servers.

The problem is that the standard kickstart assumes the first disk it sees is one of the disks internal to the server and partitions accordingly, but annoyingly the installer loads the mpt2sas driver for the HBAs used to hook up the storage array first, and then loads the megaraid_sas driver for the built-in RAID controller used for the internal disks. This means the internal disks could be anywhere from /dev/sdc onwards, depending on what state the array is in.

An undocumented boot option is driverload= which seems to force the supplied list of drivers to be loaded first before anything else that’s detected. I found reference to this option in this post along with use of the nostorage boot option. However when I used the following options:

nostorage driverload=sd_mod:megaraid_sas

The installer still loaded the mpt2sas driver, but it did force the megaraid_sas driver to be loaded first. This could just be a bug where mpt2sas isn’t on the list of “storage” drivers, but the net result is acceptable in that /dev/sda now pointed at the internal disks. After the server rebooted, /etc/modprobe.conf also had the correct scsi_hostadapterX aliases to keep things in the right order.
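For illustration, (hypothetical contents, but matching the ordering described above), the relevant aliases in /etc/modprobe.conf would read something like:

```
alias scsi_hostadapter megaraid_sas
alias scsi_hostadapter1 mpt2sas
```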

Using SNMP to upgrade IOS on Cisco devices

April 29th, 2011

Following on from my previous post regarding using SNMP to initiate TFTP backups, it’s also possible to use SNMP to remotely upgrade IOS on a Cisco device.

I’ll reuse and extend the setup and configuration from the previous article so apply that first. The IOS device then needs the following additions:

router(config)#snmp-server view myview ciscoFlashOps included
router(config)#snmp-server view myview lts.9 included
router(config)#snmp-server system-shutdown

The first command grants SNMP write access to allow us to manipulate the flash storage in the device. The remaining commands allow us to trigger a reload/reboot of the device remotely via SNMP.

On our “management station” we’ll need a few more MIB files installed under ~/mibs or wherever you put them; for the commands below that means CISCO-FLASH-MIB and OLD-CISCO-TS-MIB, along with anything they import that you don’t already have, (such as CISCO-SMI).

Update the MIBS environment variable to include them; you’ll need something like the following:

# export MIBS=+CISCO-FLASH-MIB:OLD-CISCO-TS-MIB
Now, my devices don’t have enough flash storage to hold two IOS images so I normally have to delete the current version (which is loaded into memory, so it doesn’t affect the operation of the device) to free up space and then upload the new version, reload, and pray it works.

To find the filename of the current IOS image we can use the SNMP equivalent of dir flash:/:

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser -Cb -Ci -Cw 80 ciscoFlashFileTable
SNMP table: CISCO-FLASH-MIB::ciscoFlashFileTable
 index           Size     Checksum Status                                Name
 1.1.1      576 bytes        "0x0"  valid                                   /
 1.1.2 16459360 bytes "0xDEADBEEF"  valid c870-advsecurityk9-mz.124-15.T9.bin
SNMP table CISCO-FLASH-MIB::ciscoFlashFileTable, part 2
 index      Type Date
 1.1.1 directory    ?
 1.1.2   unknown    ?

The index is a composite of the flash device, partition and file, so 1.1.2 is the second file on the first partition of the first flash device. Unless you have multiple flash devices and/or partitions you can largely ignore these details, take a look at ciscoFlashDeviceTable & ciscoFlashPartitionTable if you want more information.

With the filename, we now need to construct a request to delete it. This sort of operation, along with tasks such as erasing the flash, uses the ciscoFlashMiscOpTable SNMP table. Using the same technique I demonstrated in the previous article, insert a new row and activate it:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
ciscoFlashMiscOpCommand.23 i delete \
ciscoFlashMiscOpDestinationName.23 s c870-advsecurityk9-mz.124-15.T9.bin \
ciscoFlashMiscOpEntryStatus.23 i createAndGo
CISCO-FLASH-MIB::ciscoFlashMiscOpCommand.23 = INTEGER: delete(3)
CISCO-FLASH-MIB::ciscoFlashMiscOpDestinationName.23 = STRING: c870-advsecurityk9-mz.124-15.T9.bin
CISCO-FLASH-MIB::ciscoFlashMiscOpEntryStatus.23 = INTEGER: createAndGo(4)

Note that I’m using createAndGo instead of active for the row status because it’s part of the same initial set request. Now check the table to see if the operation was successful:

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser -Cb -Ci -Cw 80 ciscoFlashMiscOpTable
SNMP table: CISCO-FLASH-MIB::ciscoFlashMiscOpTable
 index Command                     DestinationName                 Status
    23  delete c870-advsecurityk9-mz.124-15.T9.bin miscOpOperationSuccess
SNMP table CISCO-FLASH-MIB::ciscoFlashMiscOpTable, part 2
 index NotifyOnCompletion         Time EntryStatus
    23              false 0:0:00:56.24      active

It worked, so checking the flash storage should confirm the file is now absent. Now we can upload the new IOS image; you’ll need to have copied it to /tftpboot on your TFTP server first. This uses another SNMP table, but the procedure is exactly the same: insert a row describing the operation we want to perform:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
ciscoFlashCopyCommand.23 i copyToFlashWithoutErase \
ciscoFlashCopyProtocol.23 i tftp \
ciscoFlashCopyServerAddress.23 a \
ciscoFlashCopySourceName.23 s c870-advsecurityk9-mz.124-24.T4.bin \
ciscoFlashCopyEntryStatus.23 i createAndGo
CISCO-FLASH-MIB::ciscoFlashCopyCommand.23 = INTEGER: copyToFlashWithoutErase(2)
CISCO-FLASH-MIB::ciscoFlashCopyProtocol.23 = INTEGER: tftp(1)
CISCO-FLASH-MIB::ciscoFlashCopyServerAddress.23 = IpAddress:
CISCO-FLASH-MIB::ciscoFlashCopySourceName.23 = STRING: c870-advsecurityk9-mz.124-24.T4.bin
CISCO-FLASH-MIB::ciscoFlashCopyEntryStatus.23 = INTEGER: createAndGo(4)

Chances are this operation will take a minute or two to complete; keep checking ciscoFlashCopyTable until it does:
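The waiting can also be scripted rather than re-running snmptable by hand. Here’s a sketch of a small POSIX-shell helper that repeats a command until its output stops matching a pattern:

```shell
# Repeat a command until its output no longer matches a pattern.
# Usage: poll_until_gone PATTERN COMMAND [ARGS...]
poll_until_gone() {
  pattern=$1
  shift
  while "$@" | grep -q "$pattern"; do
    sleep 1   # a longer interval is kinder to the device
  done
}
```

Pointed at ciscoFlashCopyStatus.23, (a real column of ciscoFlashCopyTable whose in-progress value is copyInProgress), something like `poll_until_gone copyInProgress snmpget -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser ciscoFlashCopyStatus.23` should return once the transfer finishes.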

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser -Cb -Ci -Cw 80 ciscoFlashCopyTable
SNMP table: CISCO-FLASH-MIB::ciscoFlashCopyTable
 index                 Command Protocol ServerAddress
    23 copyToFlashWithoutErase     tftp
SNMP table CISCO-FLASH-MIB::ciscoFlashCopyTable, part 2
 index                          SourceName DestinationName RemoteUserName
    23 c870-advsecurityk9-mz.124-24.T4.bin                               
SNMP table CISCO-FLASH-MIB::ciscoFlashCopyTable, part 3
 index               Status NotifyOnCompletion         Time EntryStatus Verify
    23 copyOperationSuccess              false 0:0:05:03.71      active  false
SNMP table CISCO-FLASH-MIB::ciscoFlashCopyTable, part 4
 index ServerAddrType ServerAddrRev1 RemotePassword
    23           ipv4    ""              ?

With the new image uploaded it’s time for the coup de grâce, reloading the device:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
tsMsgSend.0 i reload
OLD-CISCO-TS-MIB::tsMsgSend.0 = INTEGER: reload(2)

Your device should now reboot and after a short while come back up running the updated IOS image. You can confirm the source of the reboot request with the following:

router#show version | i reason
Last reload reason: snmp shutdown request

Powerful stuff. Needless to say, don’t make this accessible to the world with an SNMPv2 community of “private”.

Using SNMP to trigger Cisco TFTP backups

April 24th, 2011

I recently discovered you can use SNMP to trigger a Cisco device to write its configuration to a remote TFTP server. Apparently this feature has existed in Cisco devices in one form or another for over a decade, but somehow I’d missed it completely. This seems a far better alternative to getting in a muddle with Expect-style scripts to telnet or SSH into the device. One or more Access Control Lists can also be utilised to restrict access.

First of all, let’s configure the IOS device, which will need to be running at least IOS 12.0. The first thing to do is create an Access Control List which matches the IP address of the “management station” that will be issuing the SNMP commands and, for simplicity’s sake, is also the TFTP server:

router(config)#access-list 20 permit host
router(config)#access-list 20 deny any

Next we define an SNMP view that restricts how much of the tree is writable, allowing write access to everything is both unnecessary and inadvisable:

router(config)#snmp-server view myview ccCopyTable included

Now we create an SNMP group that ties the ACL and view together:

router(config)#snmp-server group mygroup v3 priv write myview access 20

Note that I’ve specified SNMPv3 along with the authPriv security level, not all devices support that so adjust accordingly. I’ve also not specified a read view so it will use the default v1default view which can read pretty much everything. Next create a new user as a member of this group:

router(config)#snmp-server user myuser mygroup v3 auth md5 authsecret priv des privsecret

Finally, restrict which TFTP servers can be written to, triggered by SNMP. This stops someone from making your device write files to an arbitrary TFTP server somewhere:

router(config)#snmp-server tftp-server-list 20

That should be all of the configuration needed for the Cisco device.

Now, on our “management station”, which can be anything provided it has a recent version of Net-SNMP installed, (CentOS 5.x works fine), first make sure a TFTP server is installed and running. Normally the TFTP server cannot create new files, so create an empty file ready:

# touch /tftpboot/router.conf
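One gotcha worth noting, (an assumption about typical setups, so check your own tftpd): the TFTP daemon usually runs as an unprivileged user such as nobody, so the file must be writable by it, not merely exist. A small sketch:

```shell
# Pre-create the target file and make it world-writable so an unprivileged
# tftpd can overwrite it; pass the TFTP root directory as the argument.
prepare_tftp_target() {
  touch "$1/router.conf"
  chmod 666 "$1/router.conf"
}
```

For example: `prepare_tftp_target /tftpboot`.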

Now install the various MIB files we’ll need. You don’t strictly need these, but for the sake of readability it’s easier to use the symbolic names rather than raw OID strings. You’ll need to download CISCO-CONFIG-COPY-MIB plus CISCO-SMI, which it imports definitions from, (its other imports come from the standard MIBs Net-SNMP already ships with).

Put them in a directory such as ~/mibs and then configure Net-SNMP with the following two environment variables:

# export MIBDIRS=+${HOME}/mibs
# export MIBS=+CISCO-CONFIG-COPY-MIB

You should be able to test that everything is configured correctly:

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser ccCopyTable
CISCO-CONFIG-COPY-MIB::ccCopyTable: No entries

Now we need to create a row in this table describing the copy operation we’d like to perform. Without getting too heavy into how SNMP tables work, we basically need to use a unique index number to refer to the row. As we’ve already shown, the table is empty so pick any random number, umm, 23! Create a new row with the following column values:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
> ccCopyProtocol.23 i tftp \
> ccCopySourceFileType.23 i runningConfig \
> ccCopyDestFileType.23 i networkFile \
> ccCopyServerAddress.23 a \
> ccCopyFileName.23 s router.conf
CISCO-CONFIG-COPY-MIB::ccCopyProtocol.23 = INTEGER: tftp(1)
CISCO-CONFIG-COPY-MIB::ccCopySourceFileType.23 = INTEGER: runningConfig(4)
CISCO-CONFIG-COPY-MIB::ccCopyDestFileType.23 = INTEGER: networkFile(1)
CISCO-CONFIG-COPY-MIB::ccCopyServerAddress.23 = IpAddress:
CISCO-CONFIG-COPY-MIB::ccCopyFileName.23 = STRING: router.conf

Now if you re-run the snmptable command (with some extra flagsoup to help readability) you’ll see there’s now a row:

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser -Cb -Ci -Cw 80 ccCopyTable
SNMP table: CISCO-CONFIG-COPY-MIB::ccCopyTable
 index Protocol SourceFileType DestFileType ServerAddress    FileName UserName
    23     tftp  runningConfig  networkFile router.conf        ?
SNMP table CISCO-CONFIG-COPY-MIB::ccCopyTable, part 2
 index UserPassword NotificationOnCompletion State TimeStarted TimeCompleted
    23            ?                    false     ?           ?             ?
SNMP table CISCO-CONFIG-COPY-MIB::ccCopyTable, part 3
 index FailCause EntryRowStatus ServerAddressType ServerAddressRev1
    23         ?              ?              ipv4       ""

This won’t initiate the operation until we set one more column to activate the table row:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
> ccCopyEntryRowStatus.23 i active
CISCO-CONFIG-COPY-MIB::ccCopyEntryRowStatus.23 = INTEGER: active(1)

Now check the table again:

# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser -Cb -Ci -Cw 80 ccCopyTable
SNMP table: CISCO-CONFIG-COPY-MIB::ccCopyTable
 index Protocol SourceFileType DestFileType ServerAddress    FileName UserName
    23     tftp  runningConfig  networkFile router.conf        ?
SNMP table CISCO-CONFIG-COPY-MIB::ccCopyTable, part 2
 index UserPassword NotificationOnCompletion      State  TimeStarted
    23            ?                    false successful 5:0:20:49.92
SNMP table CISCO-CONFIG-COPY-MIB::ccCopyTable, part 3
 index TimeCompleted FailCause EntryRowStatus ServerAddressType
    23  5:0:20:51.39         ?         active              ipv4
SNMP table CISCO-CONFIG-COPY-MIB::ccCopyTable, part 4
 index ServerAddressRev1
    23       ""

You can see the State column now reads successful, so check the empty file you created under /tftpboot; it should now contain the current configuration of your device.

The row in the table will be automatically removed after a few minutes but you can also explicitly remove it:

# snmpset -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser \
> ccCopyEntryRowStatus.23 i destroy
CISCO-CONFIG-COPY-MIB::ccCopyEntryRowStatus.23 = INTEGER: destroy(6)
# snmptable -v 3 -l authPriv -a MD5 -A authsecret -x DES -X privsecret -u myuser ccCopyTable
CISCO-CONFIG-COPY-MIB::ccCopyTable: No entries

There’s still a fair bit of functionality that I’ve skipped over, you might be able to use something other than TFTP or generate an SNMP trap on completion. Experiment with changing the Access Control Lists and stopping the TFTP server to see how the table output changes.

Setting the date on a Muvi Micro Camcorder

February 27th, 2011

I recently got a little Veho Muvi Micro Camcorder after seeing my brother using one on his bike. I appear to have the Pro model which on the positive side can handle more frames per second, but on the negative side puts a bright green timestamp on your recordings and there’s no way to disable it.

So the best you can do is at least make sure the timestamp is correct; it will reset to some known epoch if the battery goes flat, and it might lose accuracy over time, although I can’t say I’ve paid attention to that. There’s a utility for Windows that ships with the device, but that’s no good on a Mac or Linux host, so here’s how to do it manually, for the benefit of the lazyweb and myself, because it’s fairly obscure…

Create a file called time.txt in the root of the Muvi filesystem (alongside the DCIM directory) with the following contents:


Unmount the device and turn it off and on again; the file should be consumed and the time should now be set.

Here’s a video shot in my 1955 VW Beetle which demonstrates the annoying timestamp. It also shows the Muvi just about copes with the electromagnetic interference from the magneto ignition and the noise levels from the exhaust, although it doesn’t cope with the lack of daylight; blame the black paint for that.

Holy Exploding Batteries Batman

February 27th, 2011

If you have a UPS, it’s probably worth paying attention to the temperature reading if it reports such a thing. My APC Smart-UPS 1000VA suffered a failed battery which wasn’t totally unexpected given I had last replaced it about two years ago.

While I was awaiting the replacement RBC6 battery, I had a poke about with apcupsd and noticed that it reckoned the internal temperature of the UPS was around 39-40 ℃ which seemed a bit high.

The new battery arrived, so I pulled the front panel off the UPS and tried to use that stupid plastic flap APC put on the battery to help you pull it out. Surprise: it tore off, like it’s done on every damn battery. Okay, so I’d have to reach in and get my hand behind the battery to push it out, which, given the nature of what a UPS does, felt like that bit in the Flash Gordon movie when Peter Duncan sticks his arm in the tree stump and gets fatally stung. Nope, still not coming out, and I could feel the battery was quite hot, so I pulled the power and left it to cool down for an hour or two. With a bit of help from gravity it became apparent why the battery was being so stubborn:

Bursting Battery

The photo probably doesn’t illustrate just how distorted each side of the battery was. According to apcupsd the temperature is now around 25 ℃ with the new battery so it’s probably worth sticking some monitoring on that. Grabbing the temperature is easy if apcupsd(8) is running:

# /usr/sbin/apcaccess | grep -i temp
ITEMP    : 25.9 C Internal

It should be easy enough to wrap that in a simple Nagios test and/or graph it with Cacti.
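As a starting point, here’s a sketch of such a Nagios-style check; the thresholds are my own guesses, and check_itemp simply parses an ITEMP line like the one shown above:

```shell
# Sketch of a Nagios-style UPS temperature check around apcaccess(8) output.
WARN=35   # degrees C; guessed thresholds, tune for your UPS
CRIT=40

check_itemp() {
  # $1 is a line like "ITEMP    : 25.9 C Internal"
  temp=$(printf '%s\n' "$1" | awk -F: '{ print int($2) }')
  if [ "$temp" -ge "$CRIT" ]; then
    echo "CRITICAL - internal temperature ${temp}C"
    return 2
  elif [ "$temp" -ge "$WARN" ]; then
    echo "WARNING - internal temperature ${temp}C"
    return 1
  fi
  echo "OK - internal temperature ${temp}C"
  return 0
}
```

Wiring it up would be something like `check_itemp "$(/usr/sbin/apcaccess | grep -i '^ITEMP')"`, with the return code mapping straight onto Nagios exit statuses.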

PlayStation 3 HDD Upgrade

December 30th, 2010

I recently got Gran Turismo 5 for my PS3 and it helpfully offered to install bits of itself to the hard disk to speed things up. I think it claimed about 10GB was needed and at the time I just accepted it. Then I got thinking, along with this I’ve blindly installed a fair bit of downloadable content for GTA IV and Dragon Age: Origins and so far nothing has told me the hard disk is full but I wondered how close I was.

A quick spot of archaeology found the box and apparently I had been a tight-arse and only purchased the 40GB model and the system settings indicated I had just over 3GB left so the next decent-sized DLC would’ve bitten me.

Thankfully I had recently upgraded my MacBook Pro with a whopping 750GB hard disk, so I had the original 160GB gathering dust, plenty big enough. Sony make it really easy to upgrade the hard disk, with clear instructions; all I needed to do first was perform a full backup to some external USB storage prior to swapping the hard disks over. I warn you now, this takes ages. The backup process claimed to back up 27GB (where’s my other 10GB gone?) and that took around an hour. Maybe I’m just expecting too much of USB2?

Most of this is obvious stuff, but I remembered there was some brouhaha about some content such as game save data being non-transferable, and I knew Dragon Age was one such game. I assume this is to prevent trading save games to unlock trophies as part of some sort of willy-waving contest, but it has the knock-on effect of being unable to transfer the data if you replace your PS3 for some reason. While I was waiting for the backup to finish I considered what might be happening:

  • Does the process skip non-transferable data?
  • Does it back it up and then not restore it?
  • Does it back it up, restore it and then prevent you from loading it within the game?
  • Does a PS3 with a different hard disk count as a different system?

I assumed it wouldn’t be so silly to include the hard disk in the system profile but upon restoring the first thing I would do is check which save game data was restored and if it still worked or not, keeping the original 40GB hard disk safe so I could always downgrade again and reconsider.

Upon swapping the hardware, the PS3 correctly reformatted the hard disk, which still had the OS X partitioning present, and then I started the restore process, which took even longer than the initial backup; I think it took about an hour and a half.

The process seems to have worked and restored everything; my Dragon Age game saves still load with no problems. Weirdly the PS3 now reports that I’ve used 47GB on the new hard disk, whereas I had only used around 34GB on the old one, even accounting for the stupid capacity rounding hard disk manufacturers love to use, so either there’s quite the rounding error going on or the PS3 reserves a percentage of the disk for its own purposes.
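For what it’s worth, the capacity rounding alone is easy to quantify, since the console reports sizes in binary gigabytes while disk vendors use decimal:

```shell
# 160 decimal GB expressed in binary GiB (1 GiB = 1073741824 bytes)
gib=$((160000000000 / 1073741824))
echo "$gib"   # 149
```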

This blog is IPv6-enabled

December 5th, 2010

I’ve been quite keen on getting IPv6 connectivity to my hosts, although I’m currently lacking either a home internet package or hosted server where there is native connectivity. Thankfully there are enough free tunnel providers available which work fine as a workaround.

I’m using SixXS for two tunnels; one to my home network and another to the VPS provided by Linode which amongst other things hosts this blog.

SixXS operate a system of credits whereby tasks such as creating tunnels, obtaining subnets and setting up reverse DNS delegation cost varying amounts of credits, and the only way to earn credits is to keep your tunnel up and running on a week-by-week basis. In the interests of promoting reliable and stable connectivity this is a Good Thing.

Now, what must have happened about a year ago was I created the two tunnels, but because SixXS deliberately don’t give you enough credits to be able to request subnets at the same time, you have to wait a week or two to earn sufficient credits to be able to do that. So I got my home network sorted first as that was more important and then likely got distracted by something shiny and forgot to sort out a subnet for the server.

Now, you don’t necessarily need a subnet when you have just the one host at the end of a tunnel; you can just use the IPv6 address allocated to the local endpoint of the tunnel. However, you can’t change the reverse DNS of that address, meaning it will be stuck with whatever generic name the tunnel provider assigns. If you’re planning on running a service which can be picky about reverse DNS, (SMTP is a good example), then it’s probably best to get a subnet so you have full control over it.

By this point, requesting a subnet isn’t a problem credits-wise as the tunnels have been up for over a year. My existing tunnel configuration stored under /etc/sysconfig/network-scripts/ifcfg-sit1 looked something like this:
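A representative sketch, (RHEL/CentOS initscripts syntax; all addresses here are documentation placeholders rather than my real ones):

```
DEVICE=sit1
ONBOOT=yes
IPV6INIT=yes
IPV6TUNNELIPV4=      # IPv4 address of the SixXS PoP
IPV6ADDR=2001:db8:0:7a::2/64       # local endpoint of the tunnel
```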


Once the subnet has been allocated and routed to your tunnel, it’s simply a case of picking an address from that and altering the configuration like so:
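Again as a representative sketch with placeholder addresses, the key line being IPV6ADDR_SECONDARIES:

```
DEVICE=sit1
ONBOOT=yes
IPV6INIT=yes
IPV6TUNNELIPV4=
IPV6ADDR=2001:db8:0:7a::2/64
IPV6ADDR_SECONDARIES="2001:db8:1234::1/64"   # picked from the routed subnet
```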


Note the IPV6ADDR_SECONDARIES addition. Now your host should be accessible through both addresses and thanks to #199862 any outbound connections initiated from your host should use the subnet address rather than the tunnel address. You can test this with ip(8) and some known IPv6-connected hosts.

# host -t aaaa is an alias for has IPv6 address 2001:1af8:1:f006::6 has IPv6 address 2001:838:2:1::30:67 has IPv6 address 2001:960:800::2
# ip -6 route get 2001:1af8:1:f006::6
2001:1af8:1:f006::6 via 2001:1af8:1:f006::6 dev sit1  src 2001:feed:beef::1  metric 0 
    cache  mtu 1280 advmss 1220 hoplimit 4294967295
# host -t aaaa is an alias for has IPv6 address 2001:4860:800f::63
# ip -6 route get 2001:4860:800f::63
2001:4860:800f::63 via 2001:4860:800f::63 dev sit1  src 2001:feed:beef::1  metric 0 
    cache  mtu 1280 advmss 1220 hoplimit 4294967295

Trying again for the remote endpoint address of the tunnel gets a different result:

# ip -6 route get 2001:dead:beef:7a::1
2001:dead:beef:7a::1 via :: dev sit1  src 2001:dead:beef:7a::2  metric 256  expires 21315606sec mtu 1280 advmss 1220 hoplimit 4294967295

The kernel decides that in this case using the local address of the tunnel is a better choice, which I think is due to RFC 3484 and how Linux chooses a source address. If that expiry counter ever hits zero and the behaviour changes, I’ll let you know…