Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all; after poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So started the deep dive into i2c devices and Device Tree. At first it all seemed so easy, just a short little overlay to set the device’s address and to set a GPIO pin to act as an interrupt, e.g. something like this:

/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard-wired i2c device attached to a permanent system i2c bus, and this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.

I tried all kinds of different overlays, but no joy. The Raspberry Pi does already have a Device Tree entry for a USB device, because the onboard Ethernet is actually a permanently wired USB device. I tried copying this pattern, adding an entry for the i2c_tiny_usb adapter and then the i2c device under it, but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the i2c-tiny-usb GitHub page that hopefully somebody will eventually answer.

Userspace

Having wasted a week and got nowhere, this morning I decided to take a different approach (mainly for the sake of my sanity). The touchscreen is a basic i2c device with a single GPIO pin that acts as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

GPIO.setmode(GPIO.BCM)
# interrupt line from the touchscreen on BCM pin 27
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# bus 3 is where the USB i2c adapter shows up on this machine
bus = smbus.SMBus(3)

def callback(c):
  # read the current touch coordinates from the device at 0x38
  ev = bus.read_i2c_block_data(0x38, 0x12, 2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))

GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()

This is a good start but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module, which exposes an API especially for userspace input devices, and the python-uinput module provides Python bindings for it.

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus = smbus.SMBus(3)

# create a virtual input device that can report touch events
# and absolute X/Y positions
device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X,
  uinput.ABS_Y
])

def callback(c):
  ev = bus.read_i2c_block_data(0x38, 0x12, 3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    # touch down - send the position as a single atomic update
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
  else:
    # touch up
    device.emit(uinput.BTN_TOUCH, 0)

GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()

This injects touchscreen coordinates directly into the /dev/input system. The syn=False on the X axis event tells the uinput code to batch it up with the following Y axis value so the position shows up as an atomic update.
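
To check it is working you can run the script (it needs root to open /dev/uinput; here I'm assuming it has been saved as touch.py) and then confirm the new virtual device shows up in the kernel's list of input devices, or watch its events with a tool like evtest:

$ sudo python touch.py &
$ cat /proc/bus/input/devices
$ sudo evtest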

This is a bit of a hack, but it should be more than good enough for what I need. I'm still tempted to keep chipping away at the Device Tree stuff though, as I'm sure it will come in handy someday.

Basic traffic shaping

So, I thought this would be a lot harder than it ended up being[1].

Over the last few posts I've been walking through the parts needed to build a super simple miniature ISP, and one of the last bits we need, I think (I know I'll have forgotten something), is a way to limit the bandwidth available to the end users.

Normally this is mainly done by a step in the chain we have been missing out: the actual DSL link between the user's house and the exchange. The length of the telephone line and the encoding scheme used by the DSL modems impose technical limits on the speed. In the setup I've been talking about we don't have any of that, as it's all running directly over Gigabit Ethernet.

Limiting bandwidth is called traffic shaping. One of the reasons to apply traffic shaping is to make sure all the users get a consistent experience, e.g. to stop one user maxing out all the backhaul bandwidth (streaming many 4K Netflix episodes) and preventing all the other users from even being able to browse basic pages.

Home broadband connections tend to have an asymmetric bandwidth profile. This is because what home users do is dominated by downloads rather than uploads, e.g. requesting a web page consists of a request (a small upload) followed by a much larger download (the content of the page). So as a starting point I will assume the backhaul for our ISP is configured in a similar way and set each user up with a similarly asymmetric profile of 10Mbit/s down and 5Mbit/s up.

Initially I thought it might just be a case of setting a couple of variables in the RADIUS response. While looking at the dictionaries for the RADIUS client I came across the dictionary.roaringpenguin file, which includes the following two attribute types:

  • RP-Upstream-Speed-Limit
  • RP-Downstream-Speed-Limit

Since Roaring Penguin is the name of the package that provided the pppoe-server I wondered if this meant it had bandwidth control built in. I updated the RADIUS configuration files to include these alongside where I’d set Acct-Interim-Interval so they are sent for every user.

post-auth {

	update reply {
		Acct-Interim-Interval = 300
		RP-Upstream-Speed-Limit = 5120
		RP-Downstream-Speed-Limit = 10240
	}
        ...
}

Unfortunately this didn’t have any noticeable effect so it was time to have a bit of a wider look.

Linux has a traffic shaping tool called tc. The definitive guide is a document called the Linux Advanced Routing and Traffic Control HOWTO, and tc is incredibly powerful. Luckily for me what I want is relatively trivial, so there is no need to dig into all of its intricacies.

Traffic shaping is normally applied to outbound traffic so we will deal with that first. In this case outbound is relative to the machine running the pppoe-server so we will be setting the limits for the user’s download speed. Section 9.2.2.2 has an example we can use.

# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540

This limits the outgoing traffic on device ppp0 to 220kbit. We can adjust the rate to 10240kbit (or 10mbit) to get the right speed.
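
Applied to our first PPP connection with the 10Mbit/s download limit, that becomes something like:

# tc qdisc add dev ppp0 root tbf rate 10mbit latency 50ms burst 1540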

Traffic coming into the device is controlled with ingress rules and is called policing. The tc-policing man page has an example for limiting incoming traffic.

 # tc qdisc add dev eth0 handle ffff: ingress
 # tc filter add dev eth0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 1mbit burst 100k

We can change the device to ppp0 and the rate to 5mbit and we have what we are looking for.
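
Which, adjusted for our connection, looks something like this:

 # tc qdisc add dev ppp0 handle ffff: ingress
 # tc filter add dev ppp0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 5mbit burst 100k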

Automation

Setting this up on the command line once the connection is up and running is easy enough, but it really needs to be done automatically whenever a user connects. The pppd daemon that gets started for each connection has a hook that can be used to do this: the /etc/ppp/ip-up.sh script is called when the link comes up and in turn runs all the scripts in /etc/ppp/ip-up.d, so we can drop a script in there to do the work.

The next trick is where to find the settings. When setting up the pppoe-server we added the plugin radattr.so line to the /etc/ppp/options file; this causes all the RADIUS attributes to be written to a file when the connection is created. The file is /var/run/radattr.ppp0 (with the suffix changing for each connection).

Framed-Protocol PPP
Framed-Compression Van-Jacobson-TCP-IP
Reply-Message Hello World
Framed-IP-Address 192.168.5.2
Framed-IP-Netmask 255.255.255.0
Acct-Interim-Interval 300
RP-Upstream-Speed-Limit 5120
RP-Downstream-Speed-Limit 10240

With a little bit of sed and awk magic we can tidy that up (environment variable names can't contain a -, and string values need wrapping in "), turn it into environment variables and use them in a script that sets up the traffic shaping.

#!/bin/sh

eval "$(sed 's/-/_/g; s/ /=/' /var/run/radattr.$PPP_IFACE | awk -F = '{if ($0  ~ /(.*)=(.* .*)/) {print $1 "=\"" $2  "\""} else {print $0}}')"

if [ -n "$RP_Upstream_Speed_Limit" ];
then

#down - shape the traffic we send out to the user
tc qdisc add dev $PPP_IFACE root tbf rate ${RP_Downstream_Speed_Limit}kbit latency 50ms burst 1540

#up - police the traffic coming in from the user
tc qdisc add dev $PPP_IFACE handle ffff: ingress
tc filter add dev $PPP_IFACE parent ffff: u32 \
          match u32 0 0 \
          police rate ${RP_Upstream_Speed_Limit}kbit burst 100k

else
	echo "no rate info"
fi

Now when we test the bandwidth with iperf we see the speeds limited to what we are looking for.
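
For reference, the test is just an iperf server on the machine running pppoe-server (192.168.5.1 on the PPP link) and a client on the user's side; running it one way measures the upload limit, and swapping the roles round measures the download:

$ iperf -s                  # on the machine running pppoe-server
$ iperf -c 192.168.5.1      # on the client, this direction hits the upload limit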

Advanced

[1] This is a super simple version that probably has lots of problems I've not yet discovered. It would also be good to set up something that allows a single user to burst above a simple (total bandwidth / number of users) share when nobody else wants to use it. So it's back to reading the LARTC guide to dig out some of the more advanced options.

Static IP Addresses and Accounting

Over the last few posts I’ve talked about how to set up the basic parts needed to run a small ISP.

In this post I'm going to cover adding a few extra features such as static IP addresses, bandwidth accounting and bandwidth limiting/shaping.

Static IP Addresses

We can add a static IP address by adding a field to the user's LDAP entry. To do this we first need to add the FreeRADIUS schema to the set of attributes the LDAP server understands. The FreeRADIUS schema files can be found in /usr/share/doc/freeradius/schemas/ldap/openldap/ and are gzipped. I unzipped them, copied them to /etc/ldap/schema, then imported the schema with

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/freeradius.ldif

Now we have the schema imported we can add the radiusprofile objectClass to the user, along with a radiusFramedIPAddress attribute, with the following ldif file.

dn: uid=isp1,ou=users,dc=hardill,dc=me,dc=uk
changetype: modify
add: objectClass
objectClass: radiusprofile
-
add: radiusFramedIPAddress
radiusFramedIPAddress: 192.168.5.2

We then use ldapmodify to update the isp1 user's record

$ ldapmodify -f addIPAddress.ldif -D cn=admin,dc=hardill,dc=me,dc=uk -w password
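
A quick ldapsearch, binding as the same admin user, confirms the new attribute is there:

$ ldapsearch -x -D cn=admin,dc=hardill,dc=me,dc=uk -w password \
    -b uid=isp1,ou=users,dc=hardill,dc=me,dc=uk radiusFramedIPAddress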

Now we have the static IP address stored against the user, we have to get the RADIUS server to pass that information back to the PPPoE server after it has authenticated the user. To do this we need to edit the /etc/freeradius/3.0/mods-enabled/ldap file. Look for the `update` section and add the following

update {
  ...
  reply:Framed-IP-Address     := 'radiusFramedIPAddress'
}

Running radtest will now show Framed-IP-Address in the response message and when pppoe-server receives the authentication response it will use this as the IP address for the client end of the connection.
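
For example, assuming the isp1 user's password is password1 and the RADIUS shared secret is still the testing123 used when setting up the server, a quick check looks like:

$ radtest isp1 password1 localhost 0 testing123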

Accounting

Out of the box pppoe-server will send accounting messages to the RADIUS server at the start and end of the session.

Sat Aug 24 21:35:17 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Start
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 21:35:17 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566678917

Sat Aug 24 23:08:53 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Stop
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 5616
	Acct-Output-Octets = 2328
	Acct-Input-Octets = 18228
	Acct-Output-Packets = 32
	Acct-Input-Packets = 297
	NAS-Port-Type = Virtual
	Acct-Terminate-Cause = User-Request
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 23:08:53 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566684533

The Stop message includes the session length (Acct-Session-Time) in seconds and the number of bytes downloaded (Acct-Output-Octets) and uploaded (Acct-Input-Octets).

Historically, in the days of dial-up, that would probably have been sufficient as sessions would only last for hours at a time, not the weeks/months of a DSL connection. pppoe-server can be told to send updates at regular intervals; this setting is also controlled by a field in the RADIUS authentication response. While we could add it to each user, it can be applied to all users with a simple update to the post-auth section of the /etc/freeradius/3.0/sites-enabled/default file.

post-auth {
   update reply {
      Acct-Interim-Interval = 300
   }
   ...
}

This sets the update interval to 5 minutes, and the log now also contains entries like this.

Wed Aug 28 08:38:56 2019
	Acct-Session-Id = "5D62ACB7070100"
	User-Name = "isp1"
	Acct-Status-Type = Interim-Update
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 230105
	Acct-Output-Octets = 10915239
	Acct-Input-Octets = 17625977
	Acct-Output-Packets = 25918
	Acct-Input-Packets = 31438
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 28 2019 08:38:56 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "f36693e4792eafa961a477492ad83f8c"
	Timestamp = 1566977936

Having this data written to a log file is useful, but if you want to trigger events based on it (e.g. create a rolling usage graph or restrict speed once a certain allowance has been passed) then something a little more dynamic is needed. FreeRADIUS has a native plugin interface, and it also has plugins that let you write Perl and Python functions which are triggered at particular points. I'm going to use the Python plugin to publish the data to an MQTT broker.

To enable the Python plugin you need to install the freeradius-python package

$ sudo apt-get install freeradius-python

Then we need to symlink mods-available/python into mods-enabled and edit the file. First we set the path the plugin will use to find Python modules and files, then enable the events we want passed to the module.

python {
    python_path = "/etc/freeradius/3.0/mods-config/python:/usr/lib/python2.7:/usr/local/lib/python/2.7/dist-packages"
    module = example

    mod_instantiate = ${.module}
    func_instantiate = instantiate

    mod_accounting = ${.module}
    func_accounting = accounting
}

The actual code follows; it publishes the number of bytes used in the session to the topic isp/[username]/usage. Each callback gets passed a tuple of attribute/value pairs containing all the values available.

import radiusd
import paho.mqtt.publish as publish

def instantiate(p):
  print "*** instantiate ***"
  print p
  # return 0 for success or -1 for failure
  return 0

def accounting(p):
  print "*** accounting ***"
  radiusd.radlog(radiusd.L_INFO, '*** radlog call in accounting (0) ***')
  print
  print p
  d = dict(p)
  if d['Acct-Status-Type'] == 'Interim-Update':
      topic = "isp/" + d['User-Name'] + "/usage"
      usage = d['Acct-Output-Octets']
      print "publishing data to " + topic
      publish.single(topic, usage, hostname="hardill.me.uk", retain=True)
      print "published"
  return radiusd.RLM_MODULE_OK

def detach():
  print "*** goodbye from example.py ***"
  return radiusd.RLM_MODULE_OK
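
With that in place the usage numbers can be watched with any MQTT client, e.g. subscribing with mosquitto_sub and a wildcard for the user name (assuming the same broker as in the code above):

$ mosquitto_sub -h hardill.me.uk -v -t 'isp/+/usage'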

I was going to talk about traffic shaping next, but that turns out to be real deep magic and I need to spend some more time playing before I have something to share.

PPPoE Server

With the working RADIUS authentication server set up in the last post, it's time to install and set up the PPPoE server for the users to connect to. As well as the pppoe package we will need libradcli4, which provides the RADIUS client library.

$ sudo apt-get install pppoe libradcli4

First we need to stop the dhcpcd daemon from trying to allocate an IP address for the interface we are going to use for PPPoE. As I'm running this on a Raspberry Pi 4 I'll be using the eth0 port and then using wlan0 for the backhaul. To get dhcpcd to ignore eth0 we add the following to /etc/dhcpcd.conf

denyinterfaces eth0

With that out of the way we can start setting things up for the pppoe-server. We will start by editing the /etc/ppp/options file. We need to add the plugins to link it to the RADIUS server and tweak a couple of settings.

mtu 1492
proxyarp
...
plugin radius.so
plugin radattr.so
radius-config-file /etc/radcli/radiusclient.conf

Next up, create /etc/ppp/pppoe-server-options and make sure it outputs logs

# PPP options for the PPPoE server
# LIC: GPL
require-pap
login
lcp-echo-interval 10
lcp-echo-failure 2
debug
logfile /var/log/pppoe/pppoe-server.log

and finally in /etc/ppp/pap-secrets we need to add the following:

# INBOUND connections

# Every regular user can use PPP and has to use passwords from /etc/passwd
#*	hostname	""	*
* * "" *

That's it for the PPP options; we just need to finish setting up radcli. First we need to add the shared secret for the RADIUS server to the /etc/radcli/servers file

localhost/localhost				testing123

and then we can update /etc/radcli/radiusclient.conf to point to the RADIUS server on localhost

authserver 	localhost
acctserver 	localhost

The current version of PPP available with Raspbian Buster has been built against an older version of the RADIUS client library, so to get things to work we also have to add the following two lines and run touch /etc/ppp/radius-port-id-map

seqfile /var/run/radius.seq
mapfile /etc/ppp/radius-port-id-map

And we need to edit the /etc/radcli/dictionary file to comment out all the lines that include ipv6addr and also change all instances of ipv4addr to ipaddr. There is a patch which fixes some of this but requires a rebuild of all of PPP. I’m going to give that a go later to get IPv6 working properly.
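
Rather than making those dictionary edits by hand, a sed one-liner along these lines should do it (keeping a backup copy of the original first):

$ sudo sed -i.bak -e '/ipv6addr/ s/^/#/' -e 's/ipv4addr/ipaddr/g' /etc/radcli/dictionary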

We should now be able to start the pppoe-server.

# pppoe-server -I eth0 -T 60 -N 127 -C PPPoE -S PPPoE -L 192.168.5.1 -R 192.168.5.128 -F
  • -I sets the interface to listen on
  • -T sets the timeout for a connection
  • -N sets the maximum number of connections
  • -C sets the “name” of the server instance
  • -S sets the “name” of the PPP Service
  • -L sets the IP address for the server
  • -R sets the first address of the range for the remote device
  • -F tells pppoe-server to run in the foreground (only used for testing)

If we make sure the server is set to masquerade and forward IP packets then any client that connects should now be able to reach the internet via the server.
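
On this setup, with wlan0 as the backhaul, that just means enabling IP forwarding and adding a masquerade rule, roughly:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE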

In the next post I'll cover how to customise connections for different users by adding data to their LDAP entry, how to do traffic shaping to ensure fair use of the available bandwidth, and basic accounting so we know what to bill each user.

Building an ISP

I've had this idea in the back of my head for ages; it's centred round buying a building (something like an old Yorkshire mill, or better yet a private island) and dividing it up into a number of homes/offices/co-working spaces.

To go with this fantasy I've been working out how to build a small scale boutique ISP (most of this would probably work for a small town community fibre or wireless mesh system) to share out the hugely expensive high bandwidth symmetric dedicated fibre.

Over the next few posts I'm going to walk through building the PoC for this (which is likely to be where it stays unless I win the lottery).

To work out what I'd need, let's first have a rough look at how home internet connections work.

At the advent of home Internet there were two methods of delivering IP packets over a telephone/serial line: SLIP and PPP. PPP became the dominant player and was extended to encapsulate PPP packets in both ATM (PPPoA) and Ethernet (PPPoE) frames in order to facilitate the move to DSL home broadband connections. PPPoE became the standard for the next evolution, FTTx (where x can be B for building, P for premises, or H for home). Modern home routers include a modem that converts the DSL signal back to Ethernet frames and a PPPoE client to unpack the PPP connection back into IP packets to forward on to the local network.

This means we need a PPPoE server for the user's router to connect to; Linux has PPPoE support both as a client and as a server. I'd already used the PPPoE client when the router for my FTTC broadband service was late arriving.

Now we have the basic connection between the user's equipment and the ISP's network, we need to be able to authenticate each user so we know who is actually trying to connect. You can hard-code credentials and details into the PPPoE configuration files, but this doesn't scale and means you need to restart everything whenever something changes.

The better solution is something called a RADIUS server. RADIUS is an AAA (Authentication, Authorisation and Accounting) service that can be used not only to authenticate users, but also to supply information about that user to the PPPoE server, e.g. a static IP address allocation. RADIUS can also be used for accounting, to record how much bandwidth each user has consumed.

A Raspberry Pi and an Acer Revo hooked up to an Ethernet switch
Initial testing

RADIUS servers can be backed by a number of different databases but the usual approach is to use LDAP.

In the next post I’ll cover installing the LDAP and RADIUS servers, then configuring them.

Deploying a TTN LoRa Gateway

I’ve been meaning to get round to this ever since the Pi Supply Kickstarter delivered my LoRa Gateway HAT and the LoRa Node pHAT.

They have been sat in their boxes waiting until I had some spare time (and I’d finally finished moving a few things around to free up a spare Pi).

LoRa Gateway on a Pi 3

LoRa is a long range, low bandwidth radio system that uses the unlicensed spectrum. When combined with the higher level LoRaWAN protocol it makes a great IoT platform for low power devices that want to send small volumes of data in places where there is no WiFi coverage and the cost of a cellular connection can't be justified.

LoRaWAN allows you to deploy a collection of gateway devices that act as receivers for a large number of deployed nodes. These gateways then forward messages on to a central point for processing.

The Things Network

A group called The Things Network run a LoRaWAN deployment. They are aiming for as large a coverage area as possible. To do this they allow users to deploy their own gateways and join these to the network. By joining the network you get to use everybody else's gateways in exchange for letting other people use yours.

Setting up the Gateway

This was particularly easy. I just had to download an image, flash it to an SD card, and stick that into the Pi along with an Ethernet cable and some power.

After the Pi boots up you point your browser at http://iotloragateway.local and fill in a couple of values generated when registering the gateway on the TTN site, and that's it. The gateway is now up and running and ready to send/receive packets for any devices in range.

Testing

In order to test the gateway I needed to set up a Pi Zero with the LoRa Node pHAT. This was a little trickier, but not much.

First I had to disable the Linux serial console, which can be done using the raspi-config command. I also had to add dtoverlay=pi3-miniuart-bt to /boot/config.txt.
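
The config.txt change is just a one-liner followed by a reboot (the serial console itself can be turned off from raspi-config's Interfacing Options menu):

$ echo "dtoverlay=pi3-miniuart-bt" | sudo tee -a /boot/config.txt
$ sudo reboot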

That was all that was needed to get the hardware configured. As for the software, there is a rak811 Python package that supplies the API and utilities to work with the pHAT.
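
The rak811 package installs from PyPI in the usual way (assuming pip for Python 3 is already set up on the Pi Zero):

$ sudo pip3 install rak811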

I now needed to declare an application on The Things Network site; this is how messages get routed to be processed. Taking the values for this application I could then write the following helloWorld.py

#!/usr/bin/env python3
from rak811 import Mode, Rak811

lora = Rak811()
lora.hard_reset()
lora.mode = Mode.LoRaWan
lora.band = 'EU868'
lora.set_config(app_eui='xxxxxxxxx',
                app_key='xxxxxxxxxxxx')
lora.join_otaa()
lora.dr = 5
lora.send('Hello world')
lora.close()

Which can then be seen arriving in The Things Network console.

Data arriving and being displayed in The Things Network console.

And I can subscribe directly to that data feed via MQTT:

$ mosquitto_sub -h eu.thethings.network -u 'lora-app1-hardill-me-uk' -P 'xxxxxxxxx' -v -t '+/devices/+/up'
{
  "app_id": "lora-app1-hardill-me-uk",
  "dev_id": "test-lora-pi-zero",
  "hardware_serial": "323833356E387901",
  "port": 1,
  "counter": 0,
  "is_retry": true,
  "payload_raw": "SGVsbG8gd29ybGQ=",
  "metadata": {
    "time": "2019-08-10T15:45:07.568449769Z",
    "frequency": 867.5,
    "modulation": "LORA",
    "data_rate": "SF7BW125",
    "airtime": 61696000,
    "coding_rate": "4/5",
    "gateways": [
      {
        "gtw_id": "lora-gw1-hardill-me-uk",
        "gtw_trusted": true,
        "timestamp": 910757708,
        "time": "2019-08-10T15:45:07Z",
        "channel": 5,
        "rssi": -91,
        "snr": 7.75,
        "rf_chain": 0,
        "latitude": 51.678905,
        "longitude": -2.3549008,
        "location_source": "registry"
      }
    ]
  }
}

Next Steps

First up will be to get a better antenna for the gateway and to move the whole thing up into the attic; from there it should get a good view north out towards the River Severn. After that I want to get a small battery powered LoRa/GPS board, like a TTGO T-Beam, and ride round on my bike to get a feel for what the range/coverage actually is.

I'll also be keeping an eye on the stats from the gateway to see if anybody else nearby is deploying TTN LoRaWAN devices.

Tracks2Miles and Tracks2TitanXT removed from Play Store

I have removed both of the apps in the title from the play store.

This is for a number of reasons:

  1. MyTracks removed the API required to get at the recorded tracks' metadata, which vastly reduced the capability of the app.
  2. DailyMile shut down, which rendered Tracks2Miles useless.
  3. Google flagged both apps as potentially vulnerable to SQL injection (while theoretically possible, I couldn't find a directly exploitable use case).
  4. Even before the shutdown I had completely moved all my activity tracking to dedicated Garmin devices and Strava.

Dirty Kanza 100 2019

I’ve just got back from a pretty great trip.

As I mentioned in a previous post, this year I was going to take part in one of the premier gravel riding events, the Dirty Kanza.

The Trip

I flew out to Denver and spent the first 4 days in Boulder. This was for a few reasons. Firstly it’s somewhere I’ve been wanting to visit for a while as it was one of the options I listed for my year at uni in the US (I suspect that the powers that be knew I’d have spent the whole year snowboarding which is why I ended up in North Carolina…) and I was sure there would be some good riding to shake the flight out of my legs. It’s also at altitude (about a mile above sea level) which I was hoping would help as a final little push (I know you have to spend more than a week at altitude before it actually starts to have a positive effect).

While in town I went on 2 rides led out of the Full Cycle bike shop. The first was a road ride on Saturday morning at 09:00 which covered about 70km in 2 and a bit hours.

It was a nice pace to shake the flight out of my legs and to make sure I’d put the bike back together properly after the flight. The ride was pretty flat and I didn’t have any trouble keeping up with the pace.

The second ride was billed as a gravel ride but had aspects that were closer to a mountain bike single track ride. And this time I really started to feel the altitude, as there was a bunch more climbing, some of it pretty steep.

Both rides left from the Pearl Street store with at least 2 ride leaders. If you are in town and want somebody to show you some good riding I suggest you head along and see these folk.

I stuck in a repeat of Saturday's ride on my own on the Monday just to keep the legs turning over, as I didn't fancy doing any major climbing. I also tried to stick a short run in, but the altitude really kicked in and I managed about 4k before deciding that enough was enough while gasping for air.

On Wednesday I set off on the monumental drive across most of Kansas. I had booked a place to stop in a small town called Hays to break up the nearly 600 mile drive.

The drive was an experience. I've driven round different parts of the US in the past, but either on the East Coast or round the US National Parks in the South West; this was just hour after hour of nearly totally straight, flat road. I can totally see the appeal of a self driving (totally, definitely not the current systems that require human oversight) vehicle for this kind of driving.

While stretching my legs when I arrived I happened to catch a train crossing Main Street.

Then on Thursday I finished off the run in to Emporia. While the first 360 miles had been nearly totally flat the ground did finally start to become a little bit more rolling.

The Event

The Dirty Kanza is based out of the town of Emporia in east Kansas.

The DK takes over pretty much all of Commercial Street, with most of the shops getting involved. There is an area between Commercial Street and Mechanics Street where all the sponsors set up their stands and you can look at all the new bikes and tech.

On the Friday before the actual event there is a short social ride; I rolled round with everybody before heading off to recce the first 10 miles of the course.

The full 200 mile event starts at 6am (with the dawn) and the 100 mile event starts 30 minutes later so I got to see the folk doing the longer distance set off.

Once they had cleared the starting area, we got to line up to set off.

The ride was a tale of two parts: the first 90km to the feed station went really well, cruising along at a steady 23kph and doing just over half the climbing in just under 4 hours. I'd made a decent dent in the two 750ml drink bottles I had with me, so I refilled them both and had a snack to keep the energy up, spending about 15 minutes at the rest station. The temperature for the first leg had topped out at about 25 °C.

The second leg was a lot harder, covering just short of 80km; the temperature topped out at 35 °C and averaged 32 °C, which meant I burnt through a lot more water. Luckily there were a couple of places along the way where I could top up my bottles again. It also kicked off with one of the bigger climbs of the whole route. There had been a 40% chance of rain (with the possibility of thunder) in the forecast for the afternoon, which I had been keeping an eye on all week in the run up to the event, hoping it wouldn't happen. By the time the temperature passed 30 °C I had changed my mind and was scanning the sky for clouds.

I got within 10km of the finish when I had my first and only mechanical issue. My rear tire had nearly fully deflated, but the tubeless sealant had managed to plug the leak so all I needed to do was use a CO2 canister to top it up.

Crossing the DK finish line

The finish line was back down Commercial Street, which had been converted into a street party while we were out on the trail. Lots of people were cheering, which really helped get me over the line.

I would definitely do the DK again, maybe even give the full 200 mile version a try next time, but that would need a bigger training plan and probably finding a way to spend a good few weeks at altitude in the run up. I would also make use of the third bottle cage mounts on the bike.

Mobile IPv6 Workaround

As I’ve previously mentioned I’m in the market for a cellular data plan that supports IPv6.

The only mainstream provider in the UK that offers any IPv6 support is EE, but only for their pay monthly plans and I want something for more occasional usage.

While I wait for the UK mobile operators to catch up, I've been using OpenVPN on my phone to allow it to behave as if it's actually on my local network, at least from an IPv4 point of view.

This just about works, but it does mean that I have to have DNS reply with internal addresses for queries from inside the network and external addresses for queries from outside. This is possible to do with bind9 using views, but it leads to a bunch more administration whenever anything needs changing.

It also doesn't solve the need to access other people's/organisations' resources that are only available via IPv6.

OpenVPN can also route IPv6 over the tunnel and hand out IPv6 addresses to the clients that connect. Instructions for how to set it up can be found on the OpenVPN wiki.

By adding the following to the OpenVPN server.conf file

tun-ipv6
push tun-ipv6
ifconfig-ipv6 2001:8b0:2c1:xxx::1 2001:8b0:2c1:xxx::2
ifconfig-ipv6-pool 2001:8b0:2c1:xxx::4/64
push "route-ipv6 2000::/3"

I was initially trying to work out how to carve a section out of the /64 IPv6 subnet that my ISP had assigned to me. My plan was to take a /112 block (which maps to 65536 addresses), but as a general rule you are not meant to use IPv6 subnets smaller than a /64.

Luckily A&A assign each customer a /48 range that can be split up across multiple sites/lines. Or you can assign extra /64 or /60 blocks to an existing line.

I chose to add a second /64 to my existing line and then configured my Ubiquiti EdgeRouter X to route it.

set protocols static route6 2001:8b0:2c1:xxx::/64 next-hop fe80::92fb:a6ff:fe2e:28a2

Where fe80::92fb:a6ff:fe2e:28a2 is the link local address of the machine running the OpenVPN server.
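
If you need to look it up, the link local address is the fe80:: entry that ip shows for that machine's interface (eth0 here is an assumption, substitute the right interface name):

$ ip -6 addr show dev eth0 scope link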

Android OpenVPN client

The added bonus is that I can now get IPv6 access on both my mobile phone and my laptop when away from home.

Listing AWS Lambda Runtimes

For the last few weeks I've been getting emails from AWS about Node 6.10 going end of life, saying I have Lambda functions deployed using this runtime.

The emails don't list which Lambda or which region they think is at fault, which makes tracking down the culprit difficult. I only really have one live function deployed across multiple regions (and one test instance in a single region).

AWS Lambda region list

Clicking down the list of regions is time consuming and prone to mistakes.

In the email AWS do provide a command to list which Lambda functions are running on Node 6.10:

aws lambda list-functions --query="Functions[?Runtime=='nodejs6.10']"

But what they fail to mention is that this only checks your current default region. I can't find a way to get the aws command line tool to list the Lambda regions; the closest I've found is the list of EC2 regions, which hopefully match up. Pairing this with the command line JSON tool jq and a bit of Bash scripting, I've come up with the following:

for r in `aws ec2 describe-regions --output text | cut -f3`;
do
  echo $r;
  aws --region $r lambda list-functions | jq '.Functions[] | .FunctionName + " - " + .Runtime';
done

This walks over all the regions and prints out all the function names and the runtime they are using.

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
"oAuth-test - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"

In my case it only lists Node.js 8.10, so I have no idea why AWS keep sending me these emails. Also, since I'm only on the basic support plan, I can't even raise a technical help desk query to find out.

Anyway I hope this might be useful to others with the same problem.