IPv6 only network with IPv4 access (DNS64/NAT64)

Continuing the theme of building my own ISP, I started looking at running an IPv6-only network.

As IPv4 addresses become increasingly scarce, it won’t be possible to hand out publicly routable addresses to every user. The alternatives, such as CGNAT, come with a bunch of problems.

On the other hand, the default suggested IPv6 allocation per user is a /48 subnet (which is 65,536 /64 subnets, each containing 18,446,744,073,709,551,616 addresses), which should be more than enough for anybody. And more and more services are slowly making themselves available via IPv6.

So rather than run a dual-stack IPv4/IPv6 network with a double-NAT’d IPv4 address (at the home router and again at the ISP’s CGNAT link to the rest of the internet), we can run a pure IPv6 ISP and offer access to IPv4 via a NAT64 gateway, to allow access to those services that are still IPv4 only.


DNS64

This is the part that converts IPv4 addresses to IPv6 addresses.

The local device looking to connect makes a DNS request for the hostname of the remote device. If there is no native AAAA (IPv6 address) entry for the hostname, the DNS64 server will generate one by converting the IPv4 address to hex and prepending an IPv6 prefix. The prefix can be anything with at least a /96 (leaving enough room for all 32 bits of an IPv4 address), but there is a pre-defined well-known prefix of 64:ff9b::/96.

So if looking up the remote hostname returns 192.168.1.5, then the mapped IPv6 address would be 64:ff9b::c0a8:0105 (192.168.1.5 -> c0.a8.01.05 in hex).
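The mapping is mechanical, so it’s easy to check by hand. A quick shell sketch (192.168.1.5 is just an example address):

```shell
ip4="192.168.1.5"                   # example IPv4 address
set -- $(echo "$ip4" | tr '.' ' ')  # split into the four octets
printf '64:ff9b::%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
# -> 64:ff9b::c0a8:0105
```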

As of version 9.8 of BIND, support for DNS64 is built in and is configured by adding the following:

dns64 64:ff9b::/96 {
  clients { any; };
  mapped { !10/8; any; };
  break-dnssec yes;
};

Running your own means you can control who can access the server (using the clients directive) and control which IP address ranges are mapped or excluded (using the mapped directive).

All this IP address mapping does break DNSSEC, but since most clients rely on their recursive DNS servers to validate DNSSEC records rather than doing it directly, this is less of a problem.

There are also a number of public DNS servers that support DNS64 including one run by Google.

To test, you can use dig pointed at the DNS64 server, e.g. to get an IPv6-mapped address for www.bbc.co.uk (212.58.237.254 & 212.58.233.254):

$ dig @2001:4860:4860::6464 -t AAAA www.bbc.co.uk

; <<>> DiG 9.11.5-P4-5.1-Raspbian <<>> @2001:4860:4860::6464 -t AAAA www.bbc.co.uk
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1043
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.bbc.co.uk.			IN	AAAA

;; ANSWER SECTION:
www.bbc.co.uk.		235	IN	CNAME	www.bbc.net.uk.
www.bbc.net.uk.		285	IN	AAAA	64:ff9b::d43a:edfe
www.bbc.net.uk.		285	IN	AAAA	64:ff9b::d43a:e9fe

;; Query time: 133 msec
;; SERVER: 2001:4860:4860::6464#53(2001:4860:4860::6464)
;; WHEN: Mon Feb 03 21:40:50 GMT 2020
;; MSG SIZE  rcvd: 124

We can see that 212.58.233.254 is d43ae9fe in hex and has been appended to the 64:ff9b::/96 prefix to make 64:ff9b::d43a:e9fe.
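Going the other way is just as mechanical. A rough shell sketch that unpacks the last 32 bits of a mapped address back into dotted-quad form (it assumes the last two hextets are written out in full, as in the dig output above):

```shell
addr="64:ff9b::d43a:e9fe"
hex=$(echo "${addr#64:ff9b::}" | tr -d ':')   # -> d43ae9fe
printf '%d.%d.%d.%d\n' \
  "0x$(echo "$hex" | cut -c1-2)" "0x$(echo "$hex" | cut -c3-4)" \
  "0x$(echo "$hex" | cut -c5-6)" "0x$(echo "$hex" | cut -c7-8)"
# -> 212.58.233.254
```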


NAT64

This is the part that actually does the mapping between the IPv6 address of the initiating device and the IPv4 address of the target device.

There are a few different implementations of NAT64 for Linux.

I decided to give Jool a go first based on a presentation I found.

I had to build Jool from source, but this wasn’t particularly tricky, and once installed I followed the Stateful NAT64 instructions. There were two bits missing from these that caused a few problems.

The first was that because my host machine has multiple IPv4 addresses, I needed to add the right address to the `pool4`. When adding the address you also need to specify a range of ports to use, and these need to be excluded from the ephemeral local port range.

The ephemeral range can sit anywhere between 1024 and 65535, and you can check what the current range is set to with sysctl:

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768	60999

For a proper deployment this range needs reducing so that you can commit enough ports for all the NAT64 connections that will pass through the gateway. You can also add multiple IPv4 addresses.
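As a sketch of what that reduction might look like (the exact range is a deployment choice, not a value from this setup), the ephemeral range can be shrunk so the ports above it are free for pool4:

```shell
# Hypothetical: keep the ephemeral range below 61000 so
# 61000-65535 is free for the NAT64 pool4 entries
sysctl -w net.ipv4.ip_local_port_range="32768 60999"
```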

Quick aside: the whole point of this exercise is to reduce the number of publicly routable IPv4 addresses we need. To make this work we are always going to need some, but we can share a small number at a single point at the edge of the network to act as the NAT64 egress, and as more and more services move over to supporting IPv6 this number will decrease.

Because I’m only playing at the moment, I’m just going to use the 61000-65535 range and leave the ephemeral ports alone. That still leaves room for around 4500 connections.
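That figure comes straight from the size of the reserved block (each protocol gets its own copy of the range):

```shell
# ports available in the 61000-65535 block
echo $((65535 - 61000 + 1))
# -> 4536
```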

To make starting it all easier I wrote a short script that:

  • Loads the module
  • Enables IPv4 and IPv6 routing
  • Sets up jool with the default IPv6 prefix
  • Adds the iptables entries to intercept the packets
  • Adds the IPv4 output address to the jool config with the port range for TCP, UDP and ICMP

modprobe jool

sysctl -w net.ipv4.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.forwarding=1

jool instance add "example" --iptables  --pool6 64:ff9b::/96

ip6tables -t mangle -A PREROUTING -j JOOL --instance "example"
iptables -t mangle -A PREROUTING -j JOOL --instance "example"

jool -i "example" pool4 add -i 61000-65535
jool -i "example" pool4 add -t 61000-65535
jool -i "example" pool4 add -u 61000-65535

Putting it together

To make it all work, I need to set the DNS server handed out to my ISP customers to point at the DNS64 instance, and make sure that 64:ff9b::/96 gets routed via the gateway machine.
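On the access router that can be as simple as a static route for the well-known prefix. A sketch with placeholder values (2001:db8::1 and eth0 are stand-ins, not the real gateway and interface):

```shell
# Send all NAT64 traffic (the mapped IPv4 internet) via the Jool gateway
ip -6 route add 64:ff9b::/96 via 2001:db8::1 dev eth0
```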

To test I pinged www.bbc.co.uk

$ ping6 -c 2 www.bbc.co.uk
PING www.bbc.co.uk (64:ff9b::d43a:e9fe) 56 data bytes
64 bytes from 64:ff9b::d43a:e9fe: icmp_seq=1 ttl=54 time=18.1 ms
64 bytes from 64:ff9b::d43a:e9fe: icmp_seq=2 ttl=54 time=23.0 ms

--- 64:ff9b::d43a:e9fe ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 18.059/20.539/23.019/2.480 ms
Network diagram

Getting out past the firewall

Ahhh, the joys of IT departments that think everybody just uses Word/Excel/Outlook and browses to Facebook at lunchtime.

Networks that transparently proxy HTTP/HTTPS (probably with man-in-the-middle TLS CA certs deployed to all the machines, but that is an entirely different problem) and block everything else really do not work in the modern world, where access to places like GitHub via SSH, or devices connecting out via MQTT, is needed.

One possible solution to the SSH problem is a bastion host. This is a machine that can be reached from the internal network but is also allowed to connect to the outside world. This allows you to use this machine as a jumping off point to reach services blocked by the firewall.

The simple way is to log into the bastion and then, from the shell, connect on to your intended external host. This works for targets you want a shell on, but not for things like cloning/updating git repositories. We also want to automate as much of this as possible.

The first step is to set up public/private key login for the bastion machine. To do this we first generate a key pair with the ssh-keygen command.

$ ssh-keygen -f ~/.ssh/bastion -t ecdsa -b 521
Generating public/private ecdsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in ~/.ssh/bastion.
Your public key has been saved in ~/.ssh/bastion.pub.
The key fingerprint is:
SHA256:3Cfr60QRNbkRHDt6LEUUcemhKFmonqDlgEETgZl+H8A hardillb@tiefighter
The key's randomart image is:
+---[ECDSA 521]---+
|oB+      ..+X*.. |
|= .E    . .o++o  |
|.o  .  . o..+= . |
|....o...o..=o..  |
|  .=.o..S.* +    |
|  . ..o  . *     |
|          o      |
|         o       |
|         .+.     |
+----[SHA256]-----+
In this case we want to leave the passphrase blank because we want to use this key as part of automating other steps; normally you should use a passphrase to protect access should the keys be compromised.

Once generated, you can copy the public key to the `~/.ssh/authorized_keys` file on the bastion machine using the ssh-copy-id command:

$ ssh-copy-id -i ~/.ssh/bastion user@bastion

Once that is in place, we should be able to use the key to log straight into the bastion machine. We can now use the `-J` option to specify the bastion as a jump host to reach a remote machine.

$ ssh -J user@bastion user@remote.machine

We can also add this as an entry in the `.ssh/config` file which is more useful for things like git where it’s harder to get at the actual ssh command line.

Host bastion
 HostName bastion
 User user
 IdentityFile ~/.ssh/bastion

Host github
 Hostname github.com
 User git
 IdentityFile ~/.ssh/github
 ProxyCommand ssh -W %h:%p bastion

This config will proxy all git commands working with remote repositories on github.com via the bastion machine, using the bastion key to authenticate with the bastion machine and the github key to authenticate with github. This is all totally transparent to git.

ssh also supports a bunch of other useful tricks, such as port forwarding from either end of the connection.

It can also proxy other protocols using the SOCKS tunnelling protocol, which means it can be used as a poor man’s VPN in some situations. To enable SOCKS proxying you can use the -D option to give a local port number, or the DynamicForward directive in the ~/.ssh/config file. This option is really useful with web browsers that support SOCKS proxies, as it means you can point the browser at a local port and have it surf the web as if it was the remote machine.
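A sketch of what that looks like in practice (“bastion” here is the Host alias from the ssh config, and the port number is an arbitrary choice):

```shell
# Start a SOCKS proxy on local port 1080, tunnelled through the bastion
ssh -f -N -D 1080 bastion

# Point SOCKS-aware tools at it, e.g. curl
curl --socks5-hostname localhost:1080 https://example.com/
```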

All of this still works if you are using a bastion machine.

Custom hardware

Combining all this really useful SSH capability with the Raspberry Pi gadget tricks makes it possible to carry a bastion host with you. Using a Raspberry Pi Zero W, or even a full-sized Pi 4, configured to join a more open WiFi network (e.g. a visitor or testing network), you can have a device you just plug into a spare USB port that will give you a jumping off point to the outside world while still being connected to the more restricted internal network with all that provides. Just don’t tell IT security ;-).

This works well because even the most locked down machine normally still allows USB network adapters to be used.

DoH Update and DNS over TLS

I’ve been updating my DoH code again. It should now match RFC8484 and can be found on github here.

  • DNS wire format requests are now on /dns-query rather than /query
  • Changed the Content-Type to application/dns-message
  • JSON format requests are now on /resolve
  • Made the dns-to-https-proxy only listen on IPv4 as it was always replying on IPv6

Normally ISPs have rules about running open recursive DNS servers on consumer lines, mainly because they can be subject to UDP source address forgery and used in DDoS attacks. Because DoH is all TCP based it does not pose the same problem, so I’m going to stand up a version publicly so I can set my phone to use it for a while. I’ll be using nginx to proxy, sticking the following config block in the server section that serves my https traffic.

location /dns-query {
}

location /resolve {
}

As well as DoH, I’ve been looking at setting up DoT (RFC 7858) for my DNS server. Since bind doesn’t have native support for TLS, this will again use nginx as a proxy to terminate the TLS connection and then proxy to the bind instance. The following configuration should expose port 853 and forward to port 53.

stream {
    upstream dns {
        zone dns 64k;
    }

    server {
        listen 853 ssl;
        ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        proxy_pass dns;
    }
}

nginx is running on the same machine as bind, which runs different views for internal and external clients based on the IP address the request came from. Requests from the local machine would otherwise match the internal view, which is why the proxy_bind directive is used, to make sure the proxied request appears to come from an external address.

Open Source Rewards

This is a bit of a rambling piece around some things that have been rattling round in my brain for a while. I’ve written it out mainly to just get it out of my head.

There has been a lot of noise around Open Source projects and cloud companies making huge profits from running them as services; to the extent that some projects have even changed their licenses, either to prevent this, to force the cloud providers to publish all the supporting code that allows the projects to be run at scale, or to include new features under non-Open Source licenses.

Most of the cases that have been making the news have been around projects with an organisation behind them, e.g. Elasticsearch, that also sells support and hosted versions of the project. While I’m sympathetic to the arguments of the projects, I’m not sure the license changes work, and the cloud companies do tend to commit developers to the projects (OK, usually with an aim of getting the features they want implemented, but they do fix bugs as well). Steve O’Grady from RedMonk has a good piece about this here.

I’m less interested in the big firms fighting over this sort of thing; I’m more interested in the other end of the scale, the little guys/gals.

There have also been cases around single developers that have built core components that underpin huge amounts of Open Source software. This is especially clear in the NodeJS world, where hundreds of tiny npm modules come together to build thousands of more complex modules that then make up everybody’s applications.

A lot of OSS developers do it for the love of it, or to practice their skills on things they enjoy, but when a project becomes popular the expectations start to stack up. There are literally too many stories of entitled users expecting the same levels of support/service they would get from a large enterprise paying for a support contract.

The problem comes when the developer at the bottom of the stack gets fed up with everybody that depends on their module raising bugs and not contributing fixes, or just gets bored and walks away. We end up with what happened to the event-stream module: the dev got bored and handed ownership over to somebody else (the first person who asked), and that somebody later injected code that would steal cryptocurrency private keys.

So what are the options to allow a lone developer to work on their projects and get the support they deserve?

Get employed by somebody that depends on your code

If your project is of value to a medium/large organisation it can be in their interests to bring support for that project in house. I’ve seen this happen with projects I’ve been involved with (if only on the periphery) and it can work really well.

The thing that can be tricky is balancing the needs of the community that has grown up around a project with the developer’s new “master”, who may well have their own ideas about what direction the project should take.

I’ve also seen work projects be open sourced and their developers continuing to drive them and get paid to do so.

Set up your own business around the project

This is sort of the ultimate version of the previous one.

It can be done by selling support/services around the project, but it can also make some of the problems I mentioned earlier worse, as some users will expect even more now they are paying for it.

PayPal donations or GitHub Sponsors/Patreon/Ko-Fi

Adding a link on the project’s About page to a PayPal account or a Patreon/GitHub sponsorship page can let people show their appreciation for a developer’s work.

PayPal links work well for one-off payments, whereas the Patreon/GitHub/Ko-Fi sponsorship model is a little bit more of a commitment, but can be a good way to cover ongoing costs without needing to charge for a service directly. With a little work the developer can make use of the APIs these platforms provide to offer bespoke content/services to users who choose to donate.

I have included a PayPal link on the About page of some of my projects; I have set the default amount to £5 with the suggestion that I will probably use it to buy myself a beer from time to time.

I have also recently signed up to the GitHub Sponsors program to see how it works. GitHub lets you set different monthly amounts between $1 and $20,000; at this time I only have one level, set to $1.

Adverts/Affiliate links in projects

If you are building a mobile app or run a website then there is always the option of including adverts in the page/app. With this approach the user of the project doesn’t have to do anything apart from put up with some hopefully relevant adverts.

There is a balance that has to be struck with this as too many adverts or for irrelevant things can annoy users. I do occasionally post Amazon affiliate links in blog posts and I keep track of how much I’ve earned on the about page.

This is not just a valid model for open source projects; many (most) mobile games have adopted this sort of model, even if it is just as a starting tier before either allowing users to pay a fee to remove the ads or to buy in-game content.

Amazon Wishlists

A slightly different approach is to publish a link to something like an Amazon wishlist. This allows users to buy the developer a gift as a token of appreciation. The list lets the developer pick things they actually want, with a range of items at different price points.

Back when Amazon was closer to its roots as an online book store (and people still read books to learn new things) it was a great way to get a book about a new subject to start a new project.

Other random thoughts

For another very interesting take on some of this, please watch this video from Tom Scott for the Royal Institution about science communication in the world of YouTube and social media. It has a section in the middle about parasocial relationships which is really interesting in this context (as is the rest of the video).


Conclusion

I don’t really have one at the moment; as I said at the start, this is a bit of a stream of consciousness post.

I do think that there isn’t a one-size-fits-all model, nor are the options I’ve listed above all of them; they were just the ones that came to mind as I typed.

If I come up with anything meaningful I’ll do a follow-up post. Also, if somebody wants to sponsor me $20,000 a month on GitHub to come up with something, drop me a line ;-).

Alternate PPPoE Server (Access Concentrator)

Earlier this year I had a short series of posts where I walked through building a tiny (fantasy) ISP.

I’ve been using the Roaring Penguin version of the PPPoE server that is available by default in Raspbian, as I am running all of this on a Raspberry Pi 4. It worked pretty well, but I had to add the traffic shaping manually; at the time this was useful, as it gave me an excuse to finally get round to learning how to do some of those things.

I’ve been looking for a better accounting solution, one where I can reset the data counters at a regular interval without forcing the connection to drop. While digging around I found an alternative PPPoE implementation called accel-ppp.

accel-ppp supports a whole host of tunnelling protocols such as pptp, l2tp, sstp and ipoe as well as pppoe. It also has a built-in traffic shaper module. It builds easily enough on Raspbian so I thought I’d give it a try.

The documentation for accel-ppp isn’t great; it’s incomplete and a mix of English and Russian, which is not the most helpful. But it is possible to patch enough together to get things working.

#leaf-qdisc=sfq perturb 10
#leaf-qdisc=fq_codel [limit PACKETS] [flows NUMBER] [target TIME] [interval TIME] [quantum BYTES] [[no]ecn]
To use the same Radius attributes as before I had to copy the Roaring Penguin dictionary to /usr/local/share/accel-ppp/radius/ and edit it to add in BEGIN-VENDOR and END-VENDOR tags.

With this configuration it is a straight drop-in replacement for the Roaring Penguin version: no need to change anything in LDAP or RADIUS, and it doesn’t need the `ip-up` script to set up the traffic shaping.

I’m still playing with some of the extra features, like SNMP support and the command line/telnet support for sending management commands.

Pi4 USB-C Gadget

Pi4 Gadget

I’ve previously blogged about using Pi Zero (and Zero W) devices as USB gadgets. This allows them to be powered and accessed via one of the micro USB sockets, and they show up as both a CD drive and an ethernet device.

A recent update to the Raspberry Pi 4 bootloader not only enables the low-power mode for the USB hardware and allows network boot to be enabled, but also enables data over the USB-C port. The lower power draw means it should run (without any hats) with the power supplied from a laptop.

Details of how to check/update the bootloader can be found here.
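On Raspbian the check itself is just a couple of commands (a sketch; the exact output depends on the firmware packages installed):

```shell
# Show whether a newer bootloader is available
sudo rpi-eeprom-update

# Install any pending update (takes effect after a reboot)
sudo rpi-eeprom-update -a
```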

Given that the Pi 4 has a Gigabit Ethernet adapter, WiFi and 4 USB sockets (you need to keep the power draw low to be safe), and up to 4GB RAM to go with its 4 x 1.5GHz cores, it makes for a very attractive plug-in compute device.

With this enabled, all the same scripts from the Pi Zeros should just work, but here is the updated version for Raspbian Buster.

  • Add dtoverlay=dwc2 to the /boot/config.txt
  • Add modules-load=dwc2 to the end of /boot/cmdline.txt
  • If you have not already enabled ssh then create an empty file called ssh in /boot
  • Add libcomposite to /etc/modules
  • Add denyinterfaces usb0 to /etc/dhcpcd.conf
  • Install dnsmasq with sudo apt-get install dnsmasq
  • Create /etc/dnsmasq.d/usb with the following content
  • Create /etc/network/interfaces.d/usb0 with the following content
auto usb0
allow-hotplug usb0
iface usb0 inet static
  • Create /root/usb.sh
cd /sys/kernel/config/usb_gadget/
mkdir -p pi4
cd pi4
echo 0x1d6b > idVendor # Linux Foundation
echo 0x0104 > idProduct # Multifunction Composite Gadget
echo 0x0100 > bcdDevice # v1.0.0
echo 0x0200 > bcdUSB # USB2
echo 0xEF > bDeviceClass
echo 0x02 > bDeviceSubClass
echo 0x01 > bDeviceProtocol
mkdir -p strings/0x409
echo "fedcba9876543211" > strings/0x409/serialnumber
echo "Ben Hardill" > strings/0x409/manufacturer
echo "PI4 USB Device" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
# Add functions here
# see gadget configurations below
# End functions
mkdir -p functions/ecm.usb0
HOST="00:dc:c8:f7:75:14" # "HostPC"
SELF="00:dd:dc:eb:6d:a1" # "BadUSB"
echo $HOST > functions/ecm.usb0/host_addr
echo $SELF > functions/ecm.usb0/dev_addr
ln -s functions/ecm.usb0 configs/c.1/
udevadm settle -t 5 || :
ls /sys/class/udc > UDC
ifup usb0
service dnsmasq restart
  • Make /root/usb.sh executable with chmod +x /root/usb.sh
  • Add /root/usb.sh to /etc/rc.local before exit 0 (I really should add a systemd startup script here at some point)

With this setup the Pi 4 will show up as an ethernet device with an IP address of and will assign the device you plug it into an IP address via DHCP. This means you can just ssh to pi@ to start using it.


Quick note: not all USB-C cables are equal, it seems. I’ve been using this one from Amazon and it works fine.

The latest revision (as of late Feb 2020) of the Pi 4 boards should work with any cable.

There is also now a script to create pre-modified Raspbian images here with a description here and a copy of the modified image here.

Updated AWS Lambda NodeJS Version checker

I got another of those emails from Amazon this morning that told me that the version of the NodeJS runtime I’m using in the Lambda for my Node-RED Alexa Smart Home Skill is going End Of Life.

I’ve previously talked about wanting a way to automate checking what version of NodeJS was in use across all the different AWS Availability Regions. But when I tried my old script it didn’t work.

for r in `aws ec2 describe-regions --output text | cut -f3`; do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .FunctionName + " - " + .Runtime';
done

This is most likely because the output from awscli tool has changed slightly.

The first change looks to be in the listing of the available regions

$ aws ec2 describe-regions --output text
REGIONS	ec2.eu-north-1.amazonaws.com	opt-in-not-required	eu-north-1
REGIONS	ec2.ap-south-1.amazonaws.com	opt-in-not-required	ap-south-1
REGIONS	ec2.eu-west-3.amazonaws.com	opt-in-not-required	eu-west-3
REGIONS	ec2.eu-west-2.amazonaws.com	opt-in-not-required	eu-west-2
REGIONS	ec2.eu-west-1.amazonaws.com	opt-in-not-required	eu-west-1
REGIONS	ec2.ap-northeast-2.amazonaws.com	opt-in-not-required	ap-northeast-2
REGIONS	ec2.ap-northeast-1.amazonaws.com	opt-in-not-required	ap-northeast-1
REGIONS	ec2.sa-east-1.amazonaws.com	opt-in-not-required	sa-east-1
REGIONS	ec2.ca-central-1.amazonaws.com	opt-in-not-required	ca-central-1
REGIONS	ec2.ap-southeast-1.amazonaws.com	opt-in-not-required	ap-southeast-1
REGIONS	ec2.ap-southeast-2.amazonaws.com	opt-in-not-required	ap-southeast-2
REGIONS	ec2.eu-central-1.amazonaws.com	opt-in-not-required	eu-central-1
REGIONS	ec2.us-east-1.amazonaws.com	opt-in-not-required	us-east-1
REGIONS	ec2.us-east-2.amazonaws.com	opt-in-not-required	us-east-2
REGIONS	ec2.us-west-1.amazonaws.com	opt-in-not-required	us-west-1
REGIONS	ec2.us-west-2.amazonaws.com	opt-in-not-required	us-west-2

This looks to have added something extra to the start of each line, so I need to change which field I select with the cut command, changing -f3 to -f4.
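The change is easy to sanity-check with a sample line from the new output (the fields are tab-separated):

```shell
# One line of the new describe-regions output; field 4 is the region name
printf 'REGIONS\tec2.eu-west-1.amazonaws.com\topt-in-not-required\teu-west-1\n' | cut -f4
# -> eu-west-1
```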

The next problem looks to be with the JSON that is output for the list of functions in each region.

$ aws --region $r lambda list-functions
{
    "Functions": [
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "Version": "$LATEST",
            "CodeSha256": "wUnNlCihqWLXrcA5/5fZ9uN1DLdz1cyVpJV8xalNySs=",
            "FunctionName": "Node-RED",
            "VpcConfig": {
                "SubnetIds": [],
                "VpcId": "",
                "SecurityGroupIds": []
            },
            "MemorySize": 256,
            "RevisionId": "4f5bdf6e-0019-4b78-a679-12638412177a",
            "CodeSize": 1080463,
            "FunctionArn": "arn:aws:lambda:eu-west-1:434836428939:function:Node-RED",
            "Handler": "index.handler",
            "Role": "arn:aws:iam::434836428939:role/service-role/home-skill",
            "Timeout": 10,
            "LastModified": "2018-05-11T16:20:01.400+0000",
            "Runtime": "nodejs8.10",
            "Description": "Provides the basic framework for a skill adapter for a smart home skill."
        }
    ]
}

This time it looks like there is an extra level of array in the output; this can be fixed with a minor change to the jq filter:

$ aws lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'
"Node-RED - nodejs8.10"

Putting it all back together to get

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime';
done

"Node-RED - nodejs8.10"
"Node-RED - nodejs8.10"
"Node-RED - nodejs8.10"

Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all; after poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So started the deep dive into i2c devices and the Device Tree. At first it all seemed so easy: just a short overlay to set the device’s address and a GPIO pin to act as an interrupt, e.g. something like this:


/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard-wired i2c device attached to a permanent system i2c bus, and this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.

I tried all kinds of different overlays, but no joy. The Raspberry Pi even already has a Device Tree entry for a USB device, because the onboard Ethernet is actually a permanently wired USB device with an entry in the Device Tree. I tried copying this pattern, adding an entry for the i2c_tiny_usb device and then the i2c device, but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the tiny-i2c-usb github page that hopefully somebody will eventually answer.


Userspace driver

Having wasted a week and got nowhere, this morning I decided to take a different approach (mainly for the sake of my sanity). This is a basic i2c device with a single GPIO pin that acts as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

bus = smbus.SMBus(1)  # bus number of the attached i2c adapter

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)


def callback(c):
  ev = bus.read_i2c_block_data(0x38, 0x12, 2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))


GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()


This is a good start but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module that exposes an API especially for userspace input devices and there is the python-uinput module.

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

bus = smbus.SMBus(1)  # bus number of the attached i2c adapter

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)


device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X,
  uinput.ABS_Y,
])


def callback(c):
  ev = bus.read_i2c_block_data(0x38, 0x12, 3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
    device.emit(uinput.BTN_TOUCH, 0)


GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()


This injects touchscreen coordinates directly into the /dev/input system, the syn=False in the X axis value tells the uinput code to batch the value up with the Y axis value so it shows up as an atomic update.

This is a bit of a hack, but it should be more than good enough for what I need it for, but I’m tempted to keep chipping away at the Device Tree stuff as I’m sure it will come in handy someday.

Basic traffic shaping

So, I thought this would be a lot harder than it ended up being.

Over the last few posts I’ve been walking through the parts needed to build a super simple miniature ISP, and one of the last bits I think we need (I know I’ll have forgotten something) is a way to limit the bandwidth available to the end users.

Normally this is mostly handled by a step in the chain we have been missing out: the actual DSL link between the user’s house and the exchange. The length of the telephone line imposes some technical restrictions, as does the encoding scheme used by the DSL modems. In the case I’ve been talking about we don’t have any of that, as it’s all running directly over Gigabit Ethernet.

Limiting bandwidth is called traffic shaping. One of the reasons to apply traffic shaping is to make sure all the users get a consistent experience, e.g. to stop one user maxing out all the backhaul bandwidth (streaming many 4K Netflix episodes) and preventing all the other users from even browsing basic pages.

Home broadband connections tend to have an asymmetric bandwidth profile. This is because most of what home users do is dominated by information being downloaded rather than uploaded, e.g. requesting a web page consists of a request (a small upload) followed by a much larger download (the content of the page). So as a starting point I will assume the backhaul for our ISP is configured in a similar way, and set each user up with a similarly asymmetric profile of 10Mb down and 5Mb up.
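Those per-user targets map directly onto the kbit values that turn up in the RADIUS attributes later (assuming, as the 10240/5120 values suggest, a 1024 kbit-per-Mb conversion):

```python
# Per-user bandwidth targets in Mb, converted to the kbit values
# carried in the RADIUS reply (1024 kbit per Mb, matching the
# 10240/5120 values used in the RADIUS configuration)
down_mb = 10
up_mb = 5

down_kbit = down_mb * 1024  # RP-Downstream-Speed-Limit
up_kbit = up_mb * 1024      # RP-Upstream-Speed-Limit

print(down_kbit, up_kbit)  # 10240 5120
```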

Initially I thought it might just be a case of setting a couple of variables in the RADIUS response. While looking at the dictionary for the RADIUS client I came across the dictionary.roaringpenguin file, which includes the following two attribute types:

  • RP-Upstream-Speed-Limit
  • RP-Downstream-Speed-Limit
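For reference, the entries in dictionary.roaringpenguin look roughly like this — the vendor ID is Roaring Penguin’s registered enterprise number, but the attribute numbers here are illustrative, so check your local copy:

VENDOR        Roaring-Penguin           10055
BEGIN-VENDOR  Roaring-Penguin
# Speeds are in kbit/s; attribute numbers illustrative
ATTRIBUTE     RP-Upstream-Speed-Limit   1   integer
ATTRIBUTE     RP-Downstream-Speed-Limit 2   integer
END-VENDOR    Roaring-Penguin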

Since Roaring Penguin is the name of the package that provided the pppoe-server, I wondered if this meant it had bandwidth control built in. I updated the RADIUS configuration files to include these alongside the Acct-Interim-Interval I’d already set, so they are sent for every user.

post-auth {

	update reply {
		Acct-Interim-Interval = 300
		RP-Upstream-Speed-Limit = 5120
		RP-Downstream-Speed-Limit = 10240
	}
}
Unfortunately this didn’t have any noticeable effect so it was time to have a bit of a wider look.

Linux has a traffic shaping tool called tc. The definitive guide is included in a document called the Linux Advanced Routing and Traffic Control HowTo, and it is incredibly powerful. Luckily for me, what I want is relatively trivial, so there is no need to dig into all of its intricacies.

Traffic shaping is normally applied to outbound traffic, so we will deal with that first. In this case outbound is relative to the machine running the pppoe-server, so we will be setting the limits for the user’s download speed. The HowTo has an example we can use.

# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540

This limits the outgoing connection on device ppp0 to 220kbit. We can adjust the rate to 10240kbit (or 10mbit) to get the speed we want.

Traffic coming into the device is controlled with ingress rules, which is called policing. The tc-policing man page has an example for limiting incoming traffic.

 # tc qdisc add dev eth0 handle ffff: ingress
 # tc filter add dev eth0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 1mbit burst 100k

We can change the device to ppp0 and the rate to 5mbit and we have what we are looking for.


Setting this up on the command line once the connection is up and running is easy enough, but it really needs to be done automatically whenever a user connects. The pppd daemon that gets started for each connection has a hook for this: the /etc/ppp/ip-up.sh script is called, which in turn calls all the scripts in /etc/ppp/ip-up.d, so we can include a script there to do the work.

The next trick is where to find the settings. When setting up the pppoe-server we added the plugin radattr.so line to the /etc/ppp/options file; this causes all the RADIUS attributes to be written to a file when the connection is created. The file is /var/run/radattr.ppp0 (with the interface name changing for each connection).

Framed-Protocol PPP
Framed-Compression Van-Jacobson-TCP-IP
Reply-Message Hello World
Acct-Interim-Interval 300
RP-Upstream-Speed-Limit 5120
RP-Downstream-Speed-Limit 10240

With a little bit of sed and awk magic we can tidy that up (environment variable names can’t contain -, and string values with spaces need wrapping in ") and turn it into environment variables for a script that sets up the traffic shaping.
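To see what that transform produces, here is the same pipeline run over a couple of sample lines (sample data, not a live connection file):

```shell
# Feed two sample radattr lines through the sed/awk pipeline
printf 'Reply-Message Hello World\nRP-Upstream-Speed-Limit 5120\n' |
  sed 's/-/_/g; s/ /=/' |
  awk -F = '{if ($0 ~ /(.*)=(.* .*)/) {print $1 "=\"" $2 "\""} else {print $0}}'
# Reply_Message="Hello World"
# RP_Upstream_Speed_Limit=5120
```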


#!/bin/sh

# Turn the radattr attributes into environment variables
eval "$(sed 's/-/_/g; s/ /=/' /var/run/radattr.$PPP_IFACE | awk -F = '{if ($0 ~ /(.*)=(.* .*)/) {print $1 "=\"" $2 "\""} else {print $0}}')"

if [ -n "$RP_Downstream_Speed_Limit" ]; then

  # Outbound from the server is the user's download
  tc qdisc add dev $PPP_IFACE root tbf rate ${RP_Downstream_Speed_Limit}kbit latency 50ms burst 1540

  # Inbound to the server is the user's upload, policed on ingress
  tc qdisc add dev $PPP_IFACE handle ffff: ingress
  tc filter add dev $PPP_IFACE parent ffff: u32 \
            match u32 0 0 \
            police rate ${RP_Upstream_Speed_Limit}kbit burst 100k

else
  echo "no rate info"
fi

Now when we test the bandwidth with iperf we see the speeds limited to what we are looking for.


1 This is a super simple version that probably has lots of problems I’ve not yet discovered. It would also be good to set up something that allows a single user to get bursts of speed above a simple total/number-of-users share of the bandwidth when nobody else wants to use it. So it’s back to reading the LARTC guide to dig out some of the more advanced options.