Contributing to Accel-PPP

In a previous post about my desktop ISP project I mentioned I was using the accel-ppp access concentrator. This provides the main components for allowing my “users” to connect to my network; it handles all the authentication and setting up of PPPoE connections.

While playing with it I wanted to look at setting different DNS servers for different users. For IPv4 this was already built in: you could put entries in the LDAP that get passed on via the Radius lookup when a user connects. But with my messing about with IPv6 only networks, I also wanted to set custom IPv6 DNS servers on a per user basis.

You can set a global set of DNS servers in the config file:

...
[ipv6-dns]
dns=fd12:3456:789a::1
dns=fd12:3456:789a::2
...

The Radius spec has an entry for holding IPv6 DNS server values (DNS-Server-IPv6-Address from RFC6911), but accel-ppp didn’t support using it. I mentioned it on the accel-ppp forum as a feature request and the developers seemed to think it would be a good idea, so I decided to have a go at implementing it.

First up was adding the required bits to the LDAP for the test user; here I have one IPv4 DNS server and one IPv6 DNS server.

displayName: Ben Hardill
cn: Ben
sn: Hardill
mail: isp1@hardill.me.uk
uid: isp1
radiusReplyAttribute: MS-Primary-DNS-Server := "192.168.5.1"
radiusReplyAttribute: Reply-Message := "Hello World"
radiusReplyAttribute: Delegated-IPv6-Prefix := "fd12:3456:789a:2::/64"
radiusReplyAttribute: Framed-IPv6-Prefix := "fd12:3456:789a:0:192:168:5:2/128"
radiusReplyAttribute: DNS-Server-IPv6-Address := "fd12:3456:789a:ff64::1"
radiusFramedIPAddress: 192.168.5.2
radiusFramedIPNetmask: 255.255.255.0

My first pass was a little naive in that it ended up changing the DNS servers for all subsequent users by overwriting the global settings. It also only supported providing one DNS server, when DHCPv6 and RAs both support up to three. So it was back to the drawing board.

First up is to generate a linked list containing the DNS server addresses from the Radius response.

...
			case Framed_IPv6_Route:
				rad_add_framed_ipv6_route(attr->val.string, rpd);
				break;
			case DNS_Server_IPv6_Address:
				a = _malloc(sizeof(*a));
				memset(a, 0, sizeof(*a));
				a->addr = attr->val.ipv6addr;
				list_add_tail(&a->entry, &rpd->ipv6_dns.addr_list);
				break;
		}
	}
...

This gets bound to the user’s session object so it can be retrieved later, in this case when responding to a DHCPv6 request.

...
	if (!list_empty(&ses->ipv6_dns->addr_list)) {
		list_for_each_entry(dns, &ses->ipv6_dns->addr_list, entry) {
			j++;
		}
		if (j >= 3) {
			j = 3;
		}
		opt1 = dhcpv6_option_alloc(reply, D6_OPTION_DNS_SERVERS, j * sizeof(addr));
		addr_ptr = (struct in6_addr *)opt1->hdr->data;
		list_for_each_entry(dns, &ses->ipv6_dns->addr_list, entry) {
			if (k < j) {
				memcpy(addr_ptr, &dns->addr, sizeof(addr));
				k++;
				addr_ptr++;
			} else {
				break;
			}
		}
...

Now this is not as clean as I would like, since we have to walk the list once to know how many DNS servers there are (in order to allocate the right sized chunk of memory to hold the addresses) and then walk it again to copy the addresses into the response. I might go back and add the list length to the session structure, updating it when adding items to the list, so we can avoid this step.

Testing

While working on these changes I wanted something a bit quicker than having to deploy them to my physical test rig (a collection of Raspberry Pis and switches hidden under my sofa). To help with this I spun up a VM with a copy of CORE. CORE comes from the NRL and allows you to build virtual networks and run code on nodes within the network.

CORE emulating a simple network

This means I can start and stop the access concentrator really quickly and spin up as many clients as I want. I can also easily drop in L2TP clients (which use the same Radius and PPP infrastructure in accel-ppp) to check it works there as well. It also lets me attach tools like tcpdump and wireshark to capture packets at any point in the network, which can be difficult with home grade switches (enterprise switches tend to have admin interfaces and the ability to nominate ports as taps that can see all traffic).

I have submitted a pull request against the project, so now fingers crossed it gets accepted.

Building Custom Raspberry Pi SD Card Images

After my post about using a Raspberry Pi 4 as a USB gadget was linked to by a YouTuber who worked out it also works with the iPad Pro, it has been getting a lot of traffic.

Pi4 Gadget

Along with the traffic came a number of comments from people wanting help setting things up. While the instructions are reasonably complete, they do assume a certain amount of existing knowledge and access to a monitor/keyboard/network to complete everything. Also, given the majority of the readers were Apple users, they couldn’t mount the main partition of the SDCard as it is an ext4 file system.

The quick and dirty solution is for me to modify a standard image and host it somewhere. This is OK, but it means I have to:

  • Find somewhere to host a 500MB file
  • Keep it up to date when new versions are released
  • Provide some way for people to trust what changes I’ve made to the image

I can probably get round the first one pretty easily, bandwidth is a lot cheaper than it used to be. The second and third items are a little harder.

I started to look at a way to script the modifications to a standard Raspbian image; that way I could just host the script and people could provide their own starting image. On a Linux machine I could mount both partitions on the card and modify or add the required config files, but the problem was installing dnsmasq. This needs the image to actually be running for apt-get to work, which gets back to the chicken/egg problem of needing to boot the Pi in order to make the changes. That, and it would only run on Linux, not an OSX or Windows machine.

What I need is a way to “run” a virtual Raspberry Pi and a way to run commands in that virtual machine. I know from working with services like Balena.io‘s build system that it is possible to emulate the ARM processor found in a Raspberry Pi on Intel based hardware. I found a couple of examples of people using Qemu to run virtual Pis and I was just about to set up the same when I came across dockerpi, which is a docker image with everything preconfigured; you just mount an SD card image into the container.

Virtual Raspberry Pi in a terminal window

When started you end up with what is basically the virtual machine console as a command line app. You can interact with it just like any other console application. I logged in as pi and then used sudo to run apt-get update and apt-get install dnsmasq.

That works for doing it manually, but I need to script this, so it’s time to break out some old school Linux foo and use expect.

Expect is a scripting tool that reads from stdin and writes to stdout, but will wait for a known output before sending a reply. It was used in the early days of the Internet to script dial-up internet connections.

#!/usr/bin/expect -f
set timeout -1
set imageName [lindex $argv 0]
if {[string trimleft $imageName] eq ""} {
  puts "No Image file provided"
  exit
}
set cwd [file normalize [file dirname $argv0]]
set imagePath [file join $cwd $imageName]
spawn docker run -i --rm -v $imagePath:/sdcard/filesystem.img lukechilds/dockerpi:vm
expect "login: "
send "pi\n"
expect "Password: "
send "raspberry\n"
interact

This expect script takes the name of the SD Card image as an argument, starts the Docker container and then logs in with the default pi/raspberry username & password.

With a bit more work we can get all the changes done, including creating the extra files.

...
proc slurp {file} {
    set fh [open $file r]
    set ret [read $fh]
    close $fh
    return $ret
}
...
set file [slurp "etc/network/interfaces.d/usb0"]
expect "# "
send "cat <<EOF >> /etc/network/interfaces.d/usb0\n"
send "$file\n"
send "EOF\n"
...

I’ve checked all the files into a github repository here. I’ve tested the output with a Pi Zero and things look good. To run it for yourself, clone the repo, copy the Raspbian Lite image into the directory (unzip it first) and run ./create-image <image file name>

There is a version of the output available here.

I’ve still got to get round to getting the RNDIS Ethernet device support working so it will load the right drivers on Windows. And I need to extend the script to build the Personal CA Appliance from my last post.

A Personal Offline Certificate Authority

I had a slight scare in the run up to Christmas. I went to use the VPN on my phone to connect into my home network and discovered that the certificates that identify both my phone and the server had expired the day before.

This wouldn’t have been a problem except I couldn’t find where I’d stashed the files that represent the CA I had used to create the certificates. There was a short panic until I got home that evening and found them on an old decommissioned server that, luckily, I hadn’t got round to scrapping properly yet.

This led me to think about a better place to store these files. I wanted a (relatively) secure offline place to store them, but also somewhere that could handle the actual signing of certificates and the rest of the admin (I normally end up Googling the instructions for openssl each time I need to do this).

A simple approach would be to just store the files on an encrypted USB Mass Storage device, but I wanted something a little bit more automated.

Hardware

Recycling my ever useful Raspberry Pi Zero as a USB Ethernet Gadget instructions, again with a Raspberry Pi Zero (note: not a Zero W), gets me a device that has no direct internet connection, but that can be plugged into nearly any machine and accessed via a local network connection.

RTC attached to a Raspberry Pi Zero in a Pimoroni case

One little niggle is that working with certificates requires an accurate clock on the device. Raspbian by default sets its clock via NTP over the network, since there is no persistent clock on the Pi. The fix for this is an i2c battery-backed hardware clock. You can pick these up from a number of places, but I grabbed one of these from Amazon.

To enable the RTC you need to add the following to /boot/config.txt

dtoverlay=i2c-rtc,ds3231

And comment out the first if block in /lib/udev/hwclock-set

...
dev=$1
#if [ -e /run/systemd/system ] ; then
#    exit 0
#fi
...

Now that we have a reliable system clock, we can go about setting up the CA.

Software

The first version just required me to ssh into the Pi and use openssl on the command line. This was enough to get me started again, but I was looking for something a bit more user friendly.

I had a look round for a web interface to openssl and found a few different options.

But they all required a whole bunch of other things like OpenLDAP, MySQL and Apache, which is all a bit too heavyweight for a Pi Zero.

A web form collecting data for a certificate

So I decided to write my own. A bit of poking around turned up the node-openssl-cert module on npm, which looked like it should be able to handle everything I need.

Combined with express, I now have a form I can fill in with the subject details and hit submit.

The page then downloads a PKCS12 format bundle which contains the CA cert, Client cert and Client key all protected by a supplied password. I can then use openssl to break out the parts I need or just import the whole thing into Android.

At the moment I’ve just copied the existing CA key & cert out of the existing CA directory structure to get this to work. I intend to update the code to make use of the serial number and index tracking so that, if needed, I can generate a Certificate Revocation List, and potentially allow the downloading of previously generated certs.

You can find the project on github here and I hope to find some time to write up some end to end instructions for setting it all up.

The interesting bit was how to download a file from an XMLHttpRequest; you can see that trick here.

Aside

I originally titled this as “A Secure Offline Certificate Authority”. I changed it because this isn’t really any more secure than a USB key you just keep the CA key & cert on, and probably less secure than if you encrypted that drive. It is true that the CA key & cert are not accessible from the host machine without SSHing to the device, but they are still just stored on the Pi’s SDCard, so if anybody has physical access to it then it’s game over.

I could look at i2c or SPI secure elements that could be used to store the private key, but the real solution to this is an ASIC or FPGA combined with a secure element, and that is all overkill for what I needed here.

Problems with an IPv6 only network

In my last post I talked about running a pure IPv6 network, as part of my ISP building project, but still allowing access to resources on the internet currently only available via IPv4.

This works well assuming all the clients on the local network are IPv6 capable, unfortunately this is not always the case. There are legacy devices that do not understand IPv6.

This is a real problem with IoT devices that are either no longer being maintained or just have hardware that is incapable of using anything other than IPv4. There is also a small problem that an IP cam with an IPv6 address is probably reachable by the whole world without some firewall rules or an ACL limiting access to the local /64, but those are problems for another day…

Another issue is hard coded IPv4 addresses in legacy applications; this is a problem even if the OS/device supports both IPv4 & IPv6 but is only connected via IPv6.

There are a few solutions to both of these problems.

Dual Stack networks

The simplest is to just run a dual stack network supplying both IPv4 & IPv6 all the way from the end device to the edge of the ISP’s network. While this works, it means using lots of IPv4 addresses in the ISP’s internal network and probably 2 layers of NAT (one at the customer’s router and then CGNAT at the edge of the ISP’s network), assuming the ISP is not handing out publicly routed IPv4 addresses directly to customers.

464XLAT

464XLAT stands for 4 to 6 to 4 X transLATion, where the X can be either a Client (CLAT) or Provider (PLAT).

464CLAT comes in 2 main types

464CLAT – on host

464CLAT works by doing the conversion from IPv4 to IPv6 on the client. This works for sites that are only available via IPv4 and for hard coded IPv4 addresses. It all happens in the TCP/IP stack on the host OS. For this to work the OS needs a way to work out what the DNS64 prefix is.

This paper discusses methods for a client to determine the NAT64 prefix. It suggests the simple method of looking up the IPv6 address of a hostname known to only have an IPv4 entry (ipv4only.arpa) and using the returned address (this behaviour is described in RFC7050).

There is an important step to validate the returned address, and because DNS64 breaks DNSSEC it takes a few extra lookups.

  • Make an IPv6 (AAAA) lookup for ipv4only.arpa with DNSSEC checking disabled
  • Make a reverse (PTR) lookup of the IPv6 address 64:ff9b::c000:ab
  • This should return a pointer to a fully qualified domain name for a host in the ISP’s domain (nat64.hardill.me.uk).
  • Finally make an IPv6 (AAAA) lookup for this hostname, which should return the same IPv6 address and a valid DNSSEC signature (RRSIG AAAA) record signed by the ISP’s domain keys.
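To make the last part of that validation concrete, here is a quick Python sketch (Python is just for illustration, it is not part of any of the tools above) of recovering the prefix from the synthesised answer. It assumes the common /96 prefix layout; RFC6052 also allows shorter prefixes, which this sketch does not handle.

```python
import ipaddress

# ipv4only.arpa only has these two IPv4 (A) records (RFC 7050)
WELL_KNOWN = {ipaddress.IPv4Address("192.0.0.170"),
              ipaddress.IPv4Address("192.0.0.171")}

def nat64_prefix(synthesised_aaaa):
    # The synthesised AAAA answer carries the IPv4 address in its low 32 bits
    packed = ipaddress.IPv6Address(synthesised_aaaa).packed
    embedded = ipaddress.IPv4Address(packed[12:16])
    if embedded not in WELL_KNOWN:
        raise ValueError("not a synthesised ipv4only.arpa answer")
    # Clear the low 32 bits to recover the /96 prefix
    prefix = ipaddress.IPv6Address(packed[:12] + b"\x00" * 4)
    return ipaddress.IPv6Network(f"{prefix}/96")

print(nat64_prefix("64:ff9b::c000:ab"))  # 64:ff9b::/96
```

Given the example answer from the steps above (64:ff9b::c000:ab, i.e. 192.0.0.171 embedded in the well-known prefix) this recovers 64:ff9b::/96.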

All of the required parts for this are running on my test network, I just need to attach some client devices to see if it works.

464CLAT – on router (Stateless translation)

This is a form of dual stack network, but it limits the dual stack to just the customer side of their router.

In this case the client network runs both native IPv6 and an RFC1918 range of IPv4, and the client’s router does the conversion by adding the IPv6 prefix. As well as running DHCPv6/SLAAC for IPv6 addressing, it also runs DHCPv4 and hands out IPv4 addresses.

This works for devices that can’t support IPv6 (e.g. cheap IoT devices) and you still get to run an IPv6 only network in the ISP network.

The router generates its own prefix from the publicly routed IPv6 range it has been allocated for its client side network. It then uses this prefix to generate one-to-one IPv6 addresses for each IPv4 address it hands out.

Jool can also be used to handle this in SIIT mode, you have to tell it what the input IPv4 range is and the IPv6 prefix it can work with.

#!/bin/sh
# Load the SIIT kernel module
modprobe jool_siit
# Create an iptables-hooked SIIT instance
jool_siit instance add "example" --iptables
# Map the IPv6 /96 prefix 1:1 onto the IPv4 /24
jool_siit -i "example" eamt add fd12:3456:789a:ff46:2::/96 10.99.0.0/24
# Enable forwarding (and keep accepting RAs on ppp0 while forwarding)
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.ppp0.accept_ra=2
sysctl -w net.ipv6.conf.all.forwarding=1
# Hand matching packets to the jool instance
ip6tables -t mangle -A PREROUTING -j JOOL_SIIT --instance "example"
iptables  -t mangle -A PREROUTING -j JOOL_SIIT --instance "example"

Here I’ve hardcoded the IPv6 prefix, but for a real deployment we would need to use a delegated IPv6 prefix that is routed to the gateway. This should be run using the script option for dhcp6c when the prefixes are delegated.

interface ppp0 {
	send ia-na 0;
	send ia-pd 0;
	script "/etc/dhcp6c/reply.sh";
};
id-assoc na {
};
id-assoc pd 0 {
  prefix-interface eth1 {
    sla-len 0;
    sla-id 1;
  };
};

This config takes the /64 prefix and assigns it to eth1 for clients to use as part of SLAAC. I’m still working out how to extract the second /96 prefix and pass it to the script.

We can then update the script that starts jool as follows and point to it in the dhcp6c.conf file so it gets run once the network is up.

#!/bin/sh
#IPv6=`ip -o -6 a show dev eth1 | awk '/global/ { print $4 }'`
#PREFIX=`subnetcalc $IPv6 -n | awk '/Network/ { print $3}'`
IPv4=`ip -4 -o a show dev eth1 | awk '/global/ {print $4}'`
SUBNET=`subnetcalc $IPv4 -n | awk '/Network/ {print $3}'`
#need to make sure this only runs once
modprobe jool_siit
jool_siit instance add "example" --iptables
jool_siit -i "example" eamt add fd12:3456:789a:ff46:2::/96 $SUBNET/24
sysctl -w net.ipv4.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.forwarding=1
ip6tables -t mangle -A PREROUTING -j JOOL_SIIT --instance "example"
iptables  -t mangle -A PREROUTING -j JOOL_SIIT --instance "example"

464PLAT

This is usually just the NAT64 we talked about in the last post running in the ISP’s network.

IPv6 only network with IPv4 access (DNS64/NAT64)

Continuing the theme of building my own ISP, I started looking at running an IPv6 only network.

As IPv4 addresses become increasingly scarce it won’t be possible to hand out publicly routeable addresses to every user. The alternatives are things like CGNAT, but that has a bunch of problems.

On the other hand, the default suggested IPv6 allocation per user is a /48 subnet (which is 65,536 /64 subnets, each containing 18,446,744,073,709,551,616 addresses), which should be more than enough for anybody. And more and more services are slowly making themselves available via IPv6.
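A two-line Python check of that subnet arithmetic (just for illustration):

```python
# A /48 allocation contains 2^(64-48) /64 subnets,
# each /64 holds 2^(128-64) addresses
print(2 ** (64 - 48))   # 65536
print(2 ** (128 - 64))  # 18446744073709551616
```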

So rather than run a dual stack IPv4/IPv6 network with a double NAT’d (at the home router and again at the ISP’s CGNAT link to the rest of the internet) IPv4 address, we can run a pure IPv6 ISP and offer access to IPv4 via a NAT64 gateway, to allow access to those services that are still IPv4 only.

DNS64

This is the part that converts IPv4 addresses to IPv6 addresses.

The local device looking to connect makes a DNS request for the hostname of the remote device. If there is no native AAAA (IPv6 address) entry for the hostname, the DNS64 server will generate one by converting the IPv4 address to hex and prepending an IPv6 prefix. The prefix can be anything that leaves at least 32 bits free (a /96 or shorter, allowing enough room for all IPv4 addresses), and there is a pre-defined address range of 64:ff9b::/96 for this.

So if looking up the remote hostname returns 192.168.1.5 then the mapped IPv6 address would be 64:ff9b::c0a8:0105 (192.168.1.5 -> c0.a8.01.05 in hex)
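The mapping is easy to sketch; this illustrative Python snippet (not part of bind, just a demonstration of the arithmetic) synthesises an address the same way. Note Python prints the canonical compressed form, 64:ff9b::c0a8:105.

```python
import ipaddress

def dns64_synthesise(ipv4, prefix="64:ff9b::/96"):
    # OR the 32-bit IPv4 value into the low bits of the prefix
    base = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(ipv4)))

print(dns64_synthesise("192.168.1.5"))  # 64:ff9b::c0a8:105
```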

As of version 9.8 of bind, support for DNS64 is built in and configured by adding the following:

...
dns64 64:ff9b::/96 {
  clients { any; };
  mapped { !10/8; any;};
  break-dnssec yes;
};
...

Running your own means you can control who can access the server (using the clients directive) and control which IP address ranges are mapped or excluded.

All this IP address mapping does break DNSSEC, but since most clients rely on their recursive DNS servers to validate DNSSEC records rather than doing it directly, this is less of a problem.

There are also a number of public DNS servers that support DNS64 including one run by Google.

To test you can use dig and point it at the DNS64 server, e.g. to get an IPv6 mapped address for www.bbc.co.uk (212.58.233.254 & 212.58.237.254):

$ dig @2001:4860:4860::6464 -t AAAA www.bbc.co.uk

; <<>> DiG 9.11.5-P4-5.1-Raspbian <<>> @2001:4860:4860::6464 -t AAAA www.bbc.co.uk
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1043
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.bbc.co.uk.			IN	AAAA

;; ANSWER SECTION:
www.bbc.co.uk.		235	IN	CNAME	www.bbc.net.uk.
www.bbc.net.uk.		285	IN	AAAA	64:ff9b::d43a:edfe
www.bbc.net.uk.		285	IN	AAAA	64:ff9b::d43a:e9fe

;; Query time: 133 msec
;; SERVER: 2001:4860:4860::6464#53(2001:4860:4860::6464)
;; WHEN: Mon Feb 03 21:40:50 GMT 2020
;; MSG SIZE  rcvd: 124

We can see that 212.58.233.254 is d43ae9fe in hex and has been added to the 64:ff9b::/96 prefix to make 64:ff9b::d43a:e9fe.
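Going the other way, the original IPv4 address can be read straight back out of the low 32 bits of the synthesised answer; another illustrative Python check:

```python
import ipaddress

def embedded_ipv4(synthesised):
    # The last 4 bytes of the synthesised IPv6 address are the IPv4 address
    return ipaddress.IPv4Address(ipaddress.IPv6Address(synthesised).packed[12:])

print(embedded_ipv4("64:ff9b::d43a:e9fe"))  # 212.58.233.254
```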

NAT64

This is the part that actually does the mapping between the IPv6 address of the initiating device and the IPv4 address of the target device.

There are a few different implementations of NAT64 for Linux

I decided to give Jool a go first based on a presentation I found.

I had to build Jool from source, but this wasn’t particularly tricky and once installed I followed the Stateful NAT64 instructions. There were 2 bits missing from this that caused a few problems.

The first was because my host machine has multiple IPv4 addresses, so I needed to add the right address to the `pool4`. When adding the address you also need to specify a range of ports to use, and these need to be excluded from the ephemeral local port range.

The ephemeral range can be between 1024 and 65535 and you can check what the current range is set to with sysctl

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768	60999

For a proper deployment this range needs reducing so that you can commit enough ports for all the NAT64 connections that will pass through the gateway. You can also add multiple IPv4 addresses.

Quick aside: the whole point of this exercise is to reduce the number of publicly routable IPv4 addresses that we need. To make this work we are always going to need some, but we will share a small number at one point at the edge of the network to be used as the NAT64 egress point, and as more and more services move over to supporting IPv6 this number will decrease.

Because I’m only playing at the moment, I’m just going to use the 61000-65535 range and leave the ephemeral ports alone. That still gives me enough ports for around 4,500 connections.
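A quick sanity check of the port count for that range (the range is inclusive at both ends):

```python
# Ports committed to NAT64 in pool4, per public IPv4 address
first, last = 61000, 65535
print(last - first + 1)  # 4536
```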

To make starting it all easier I wrote a short script that:

  • Loads the module
  • Enables IPv4 and IPv6 routing
  • Sets up jool with the default IPv6 prefix
  • Adds the iptables entries to intercept the packets
  • Adds the IPv4 output address to the jool config with the port range for TCP, UDP and ICMP
#!/bin/sh
modprobe jool

sysctl -w net.ipv4.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.forwarding=1

jool instance add "example" --iptables  --pool6 64:ff9b::/96

ip6tables -t mangle -A PREROUTING -j JOOL --instance "example"
iptables -t mangle -A PREROUTING -j JOOL --instance "example"

jool -i "example" pool4 add -i 192.168.1.94 61000-65535
jool -i "example" pool4 add -t 192.168.1.94 61000-65535
jool -i "example" pool4 add -u 192.168.1.94 61000-65535

Putting it together

To make it all work I need to set the DNS server handed out to my ISP customers to point to the DNS64 instance, and to make sure that 64:ff9b::/96 gets routed via the gateway machine.

To test I pinged www.bbc.co.uk

$ ping6 -c 2 www.bbc.co.uk
PING www.bbc.co.uk (64:ff9b::d43a:e9fe) 56 data bytes
64 bytes from 64:ff9b::d43a:e9fe: icmp_seq=1 ttl=54 time=18.1 ms
64 bytes from 64:ff9b::d43a:e9fe: icmp_seq=2 ttl=54 time=23.0 ms

--- 64:ff9b::d43a:e9fe ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 18.059/20.539/23.019/2.480 ms
Network diagram

Getting out past the firewall

Ahhh, the joys of IT departments that think everybody just uses Word/Excel/Outlook and browses Facebook at lunchtime.

Networks that transparently proxy HTTP/HTTPS (probably with man-in-the-middle TLS CA certs deployed to all the machines, but that is an entirely different problem) and block everything else really do not work in the modern world, where access to places like GitHub via SSH, or devices connecting out via MQTT, is needed.

One possible solution to the SSH problem is a bastion host. This is a machine that can be reached from the internal network but is also allowed to connect to the outside world. This allows you to use this machine as a jumping off point to reach services blocked by the firewall.

The simple way is to log into the bastion and then, from that shell, connect on to your intended external host. This works for targets you want a shell on, but not for things like cloning/updating git repositories. We also want to automate as much of this as possible.

The first step is to set up public/private key login for the bastion machine. To do this we first generate a key pair with the ssh-keygen command.

$ ssh-keygen -f ~/.ssh/bastion -t ecdsa -b 521
Generating public/private ecdsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in ~/.ssh/bastion.
Your public key has been saved in ~/.ssh/bastion.pub.
The key fingerprint is:
SHA256:3Cfr60QRNbkRHDt6LEUUcemhKFmonqDlgEETgZl+H8A hardillb@tiefighter
The key's randomart image is:
+---[ECDSA 521]---+
|oB+      ..+X*.. |
|= .E    . .o++o  |
|.o  .  . o..+= . |
|....o...o..=o..  |
|  .=.o..S.* +    |
|  . ..o  . *     |
|          o      |
|         o       |
|         .+.     |
+----[SHA256]-----+

In this case we want to leave the passphrase blank because we want to use this key as part of automation of other steps, normally you should use a passphrase to protect access should the keys be compromised.

Once generated you can copy it to the `~/.ssh/authorized_keys` file on the bastion machine using the ssh-copy-id command

$ ssh-copy-id -i ~/.ssh/bastion user@bastion

Once that is in place we should be able to use the key to log straight into the bastion machine. We can now use the `-J` option to specify the bastion as a jump point to reach a remote machine.

$ ssh -J user@bastion user@remote.machine

We can also add this as an entry in the `.ssh/config` file which is more useful for things like git where it’s harder to get at the actual ssh command line.

Host bastion
 HostName bastion
 User user
 IdentityFile ~/.ssh/bastion

Host github
 Hostname github.com
 User git
 IdentityFile ~/.ssh/github
 ProxyCommand ssh -W %h:%p bastion

This config will proxy all git commands working with remote repositories on github.com via the bastion machine, using the bastion key to authenticate with the bastion machine and the github key to authenticate with github. This is all totally transparent to git.

ssh also supports a bunch of other useful tricks, such as port forwarding from either end of the connection.

It can also proxy other protocols using the SOCKS tunnelling protocol, which means it can be used as a poor man’s VPN in some situations. To enable SOCKS proxying you can use the -D option to give a local port number, or the DynamicForward directive in the ~/.ssh/config file. This option is really useful with a web browser that supports SOCKS proxies, as it means you can point the browser at a local port and have it surf the web as if it was the remote machine.
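As a sketch of the config-file route (the host alias and port number here are my own choices, reusing the bastion entry from earlier):

```
Host bastion-socks
 HostName bastion
 User user
 IdentityFile ~/.ssh/bastion
 DynamicForward 1080
```

Running ssh -N bastion-socks then opens a SOCKS listener on localhost:1080 that the browser can be pointed at.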

All of this still works if you are using a bastion machine.

Custom hardware

Combining all this really useful SSH capability with the Raspberry Pi gadgets makes it possible to carry a bastion host with you. Using a Raspberry Pi Zero W, or even a full sized Pi 4, that can be configured to join a more open WiFi network (e.g. a visitor or testing network), you can have a device you just plug into a spare USB port that will give you a jumping off point to the outside world while still being connected to the more restricted internal network with all that provides. Just don’t tell IT security ;-).

This works well because even the most locked down machine normally still allows USB network adapters to be used.

DoH Update and DNS over TLS

I’ve been updating my DoH code again. It should now match RFC8484 and can be found on github here.

  • DNS wire format requests are now on /dns-query rather than /query
  • Change Content-Type to application/dns-message
  • JSON format requests are now on /resolve
  • Made the dns-to-https-proxy only listen on IPv4 as it was always replying on IPv6

Normally ISPs have rules about running open recursive DNS servers on consumer lines; this is mainly because they can be subject to UDP source address forgery and used in DDoS attacks. Because DoH is all TCP based it does not pose the same problem. So I’m going to stand up a version publicly so I can set my phone to use it for a while. I’ll be using nginx to proxy, sticking the following config block in the server block that serves my https traffic.

location /dns-query {
  proxy_pass https://127.0.0.1:3000/dns-query;
}

location /resolve {
  proxy_pass https://127.0.0.1:3000/resolve;
}

As well as DoH I’ve been looking at setting up DoT (RFC7858) for my DNS server. Since bind doesn’t have native support for TLS, this will again use nginx as a proxy to terminate the TLS connection and then pass the queries on to the bind instance. The following configuration listens on port 853 and forwards to port 53.

stream {
    upstream dns {
        zone dns 64k;
        server 127.0.0.1:53;
    }

    server {
        listen 853 ssl;
        ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        proxy_pass dns;
        proxy_bind 127.0.0.2;
    }
}

nginx is running on the same machine as bind, which runs different views for internal and external clients based on the source IP address of the request. The internal view includes 127.0.0.1, which is why the proxy_bind directive is used to make sure the request comes from 127.0.0.2 so it looks like an external address.
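For reference, the view split in bind looks something like this minimal sketch (the client address ranges here are assumptions of mine, not my real config):

```
// named.conf sketch: pick the view by the source address of the query
view "internal" {
    match-clients { 127.0.0.1; 192.168.0.0/16; };
    recursion yes;
    // internal zones go here
};
view "external" {
    match-clients { any; };
    recursion no;
    // public zones go here
};
```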

Alternate PPPoE Server (Access Concentrator)

Earlier this year I had a short series of posts where I walked through building a tiny (fantasy) ISP.

I’ve been using the Roaring Penguin version of the PPPoE Server that was available by default in Raspbian, as I am running all of this on a Raspberry Pi 4. It worked pretty well, but I had to add the traffic shaping manually; at the time this was useful, as it gave me an excuse to finally get round to learning how to do some of those things.

I’ve been looking for a better accounting solution, one where I can reset the data counters at a regular interval without forcing the connection to drop. While digging around I found an alternative PPPoE implementation called accel-ppp.

accel-ppp supports a whole host of tunnelling protocols such as pptp, l2tp, sstp and ipoe as well as pppoe. It also has a built-in traffic shaper module. It builds easily enough on Raspbian so I thought I’d give it a try.

The documentation for accel-ppp isn’t great; it’s incomplete and a mix of English and Russian, which is not the most helpful. But it is possible to piece enough together to get things working.

[modules]
log_file
pppoe
auth_pap
radius
shaper
#net-snmp
#logwtmp
#connlimit
ipv6_nd
#ipv6_dhcp
#ipv6pool

[core]
log-error=/var/log/accel-ppp/core.log
thread-count=4

[common]
#single-session=replace
#sid-case=upper
#sid-source=seq
#max-sessions=1000
check-ip=1

[ppp]
verbose=1
min-mtu=1280
mtu=1400
mru=1400
#accomp=deny
#pcomp=deny
#ccp=0
#mppe=require
ipv4=require
ipv6=allow
ipv6-intf-id=0:0:0:1
ipv6-peer-intf-id=0:0:0:2
ipv6-accept-peer-intf-id=1
lcp-echo-interval=20
#lcp-echo-failure=3
lcp-echo-timeout=120
unit-cache=1
#unit-preallocate=1

[auth]
#any-login=0
#noauth=0

[pppoe]
verbose=1
ac-name=PPPoE7
#service-name=PPPoE7
#pado-delay=0
#pado-delay=0,100:100,200:200,-1:500
called-sid=mac
#tr101=1
#padi-limit=0
#ip-pool=pppoe
#ifname=pppoe%d
#sid-uppercase=0
#vlan-mon=eth0,10-200
#vlan-timeout=60
#vlan-name=%I.%N
interface=eth0

[dns]
#dns1=172.16.0.1
#dns2=172.16.1.1

[radius]
dictionary=/usr/local/share/accel-ppp/radius/dictionary
nas-identifier=accel-ppp
nas-ip-address=127.0.0.1
gw-ip-address=192.168.5.1
server=127.0.0.1,testing123,auth-port=1812,acct-port=1813,req-limit=50,fail-timeout=0,max-fail=10,weight=1
#dae-server=127.0.0.1:3799,testing123
verbose=1
#timeout=3
#max-try=3
#acct-timeout=120
#acct-delay-time=0
#acct-on=0
#attr-tunnel-type=My-Tunnel-Type

[log]
log-file=/var/log/accel-ppp/accel-ppp.log
log-emerg=/var/log/accel-ppp/emerg.log
log-fail-file=/var/log/accel-ppp/auth-fail.log
copy=1
#color=1
#per-user-dir=per_user
#per-session-dir=per_session
#per-session=1
level=3

[shaper]
vendor=RoaringPenguin
attr-up=RP-Upstream-Speed-Limit
attr-down=RP-Downstream-Speed-Limit
#down-burst-factor=0.1
#up-burst-factor=1.0
#latency=50
#mpu=0
#mtu=0
#r2q=10
#quantum=1500
#moderate-quantum=1
#cburst=1534
#ifb=ifb0
up-limiter=police
down-limiter=tbf
#leaf-qdisc=sfq perturb 10
#leaf-qdisc=fq_codel [limit PACKETS] [flows NUMBER] [target TIME] [interval TIME] [quantum BYTES] [[no]ecn]
#rate-multiplier=1
#fwmark=1
#rate-limit=2048/1024
verbose=1

[cli]
verbose=1
telnet=127.0.0.1:2000
tcp=127.0.0.1:2001
#password=123
#sessions-columns=ifname,username,ip,ip6,ip6-dp,type,state,uptime,uptime-raw,calling-sid,called-sid,sid,comp,rx-bytes,tx-bytes,rx-bytes-raw,tx-bytes-raw,rx-pkts,tx-pkts

[snmp]
master=0
agent-name=accel-ppp

[connlimit]
limit=10/min
burst=3
timeout=60

To use the same Radius attributes as before I had to copy the Roaring Penguin dictionary to /usr/local/share/accel-ppp/radius/ and edit it to add in BEGIN-VENDOR and END-VENDOR tags.
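For reference, the edited dictionary ends up looking something like this. This is a sketch based on the FreeRADIUS dictionary.roaringpenguin file (vendor ID 10055 is Roaring Penguin Software); check the attribute numbers against your own copy of the dictionary:

```
VENDOR          RoaringPenguin  10055

BEGIN-VENDOR    RoaringPenguin
ATTRIBUTE       RP-Upstream-Speed-Limit     1   integer
ATTRIBUTE       RP-Downstream-Speed-Limit   2   integer
END-VENDOR      RoaringPenguin
```

The BEGIN-VENDOR/END-VENDOR wrapper is what tells accel-ppp's dictionary parser that these are vendor-specific attributes rather than standard ones.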

With this configuration it is a straight drop-in replacement for the Roaring Penguin version: no need to change anything in the LDAP or Radius, and it doesn’t need the `ip-up` script to set up the traffic shaping.

I’m still playing with some of the extra features, like SNMP support and the command line/telnet support for sending management commands.

Pi4 USB-C Gadget

Pi4 Gadget

I’ve previously blogged about using Pi Zero (and Zero W) devices as USB gadgets. This allows them to be powered and accessed via one of the micro USB sockets, and they show up as both a CD drive and an Ethernet device.

A recent update to the Raspberry Pi 4 bootloader not only enables the low power mode for the USB hardware and allows network boot to be enabled, it also enables data over the USB-C port. The lower power draw means it should run (without any HATs) with the power supplied from a laptop.

Details of how to check/update the bootloader can be found here.

Given that the Pi 4 has a Gigabit Ethernet adapter, WiFi and 4 USB sockets (the power draw needs to be kept low to be safe), plus up to 4GB of RAM to go with its 4 x 1.5GHz cores, it makes for a very attractive plug-in compute device.

With this enabled, all the same scripts from the Pi Zeros should just work, but here is the updated version for Raspbian Buster.

  • Add dtoverlay=dwc2 to the /boot/config.txt
  • Add modules-load=dwc2 to the end of /boot/cmdline.txt
  • If you have not already enabled ssh then create an empty file called ssh in /boot
  • Add libcomposite to /etc/modules
  • Add denyinterfaces usb0 to /etc/dhcpcd.conf
  • Install dnsmasq with sudo apt-get install dnsmasq
  • Create /etc/dnsmasq.d/usb with following content
interface=usb0
dhcp-range=10.55.0.2,10.55.0.6,255.255.255.248,1h
dhcp-option=3
leasefile-ro
  • Create /etc/network/interfaces.d/usb0 with the following content
auto usb0
allow-hotplug usb0
iface usb0 inet static
  address 10.55.0.1
  netmask 255.255.255.248
  • Create /root/usb.sh
#!/bin/bash
cd /sys/kernel/config/usb_gadget/
mkdir -p pi4
cd pi4
echo 0x1d6b > idVendor # Linux Foundation
echo 0x0104 > idProduct # Multifunction Composite Gadget
echo 0x0100 > bcdDevice # v1.0.0
echo 0x0200 > bcdUSB # USB2
echo 0xEF > bDeviceClass
echo 0x02 > bDeviceSubClass
echo 0x01 > bDeviceProtocol
mkdir -p strings/0x409
echo "fedcba9876543211" > strings/0x409/serialnumber
echo "Ben Hardill" > strings/0x409/manufacturer
echo "PI4 USB Device" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
# Add functions here
# see gadget configurations below
# End functions
mkdir -p functions/ecm.usb0
HOST="00:dc:c8:f7:75:14" # "HostPC"
SELF="00:dd:dc:eb:6d:a1" # "BadUSB"
echo $HOST > functions/ecm.usb0/host_addr
echo $SELF > functions/ecm.usb0/dev_addr
ln -s functions/ecm.usb0 configs/c.1/
udevadm settle -t 5 || :
ls /sys/class/udc > UDC
ifup usb0
service dnsmasq restart
  • Make /root/usb.sh executable with chmod +x /root/usb.sh
  • Add /root/usb.sh to /etc/rc.local before exit 0 (I really should add a systemd startup script here at some point)
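For the systemd option mentioned above, a minimal oneshot unit would look something like this (a sketch; the unit name and file path are my own choice). Save it as /etc/systemd/system/usb-gadget.service and enable it with systemctl enable usb-gadget:

```
[Unit]
Description=Pi4 USB gadget setup
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/usb.sh

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes suits a setup script like this: it runs once at boot and systemd considers the unit active afterwards rather than failed or restarting.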

With this setup the Pi4 will show up as an Ethernet device with an IP address of 10.55.0.1 and will assign the device you plug it into an IP address via DHCP. This means you can just ssh to pi@10.55.0.1 to start using it.

Addendum

Quick note, not all USB-C cables are equal it seems. I’ve been using this one from Amazon and it works fine.

The latest revision (as of late Feb 2020) of the Pi 4 boards should work with any cable.

There is also now a script to create pre-modified Raspbian images here with a description here and a copy of the modified image here.

Updated AWS Lambda NodeJS Version checker

I got another of those emails from Amazon this morning that told me that the version of the NodeJS runtime I’m using in the Lambda for my Node-RED Alexa Smart Home Skill is going End Of Life.

I’ve previously talked about wanting a way to automate checking what version of NodeJS was in use across all the different AWS Regions. But when I tried my old script it didn’t work.

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .FunctionName + " - " + .Runtime';
done

This is most likely because the output from the awscli tool has changed slightly.

The first change looks to be in the listing of the available regions

$ aws ec2 describe-regions --output text
REGIONS	ec2.eu-north-1.amazonaws.com	opt-in-not-required	eu-north-1
REGIONS	ec2.ap-south-1.amazonaws.com	opt-in-not-required	ap-south-1
REGIONS	ec2.eu-west-3.amazonaws.com	opt-in-not-required	eu-west-3
REGIONS	ec2.eu-west-2.amazonaws.com	opt-in-not-required	eu-west-2
REGIONS	ec2.eu-west-1.amazonaws.com	opt-in-not-required	eu-west-1
REGIONS	ec2.ap-northeast-2.amazonaws.com	opt-in-not-required	ap-northeast-2
REGIONS	ec2.ap-northeast-1.amazonaws.com	opt-in-not-required	ap-northeast-1
REGIONS	ec2.sa-east-1.amazonaws.com	opt-in-not-required	sa-east-1
REGIONS	ec2.ca-central-1.amazonaws.com	opt-in-not-required	ca-central-1
REGIONS	ec2.ap-southeast-1.amazonaws.com	opt-in-not-required	ap-southeast-1
REGIONS	ec2.ap-southeast-2.amazonaws.com	opt-in-not-required	ap-southeast-2
REGIONS	ec2.eu-central-1.amazonaws.com	opt-in-not-required	eu-central-1
REGIONS	ec2.us-east-1.amazonaws.com	opt-in-not-required	us-east-1
REGIONS	ec2.us-east-2.amazonaws.com	opt-in-not-required	us-east-2
REGIONS	ec2.us-west-1.amazonaws.com	opt-in-not-required	us-west-1
REGIONS	ec2.us-west-2.amazonaws.com	opt-in-not-required	us-west-2

This looks to have added something extra to the start of each line, so I need to change which field I select with the cut command, changing -f3 to -f4.
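The field change is easy to sanity check against a sample line from the new output, without hitting the API at all (the region line below is hard-coded from the listing above):

```shell
# A sample line from the new describe-regions output (tab separated)
line=$'REGIONS\tec2.eu-west-1.amazonaws.com\topt-in-not-required\teu-west-1'

# Field 3 now returns the opt-in column instead of the region name
old=$(printf '%s\n' "$line" | cut -f3)

# Field 4 is where the region name has moved to
new=$(printf '%s\n' "$line" | cut -f4)

echo "$old"   # opt-in-not-required
echo "$new"   # eu-west-1
```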

The next problem looks to be with the JSON that is output for the list of functions in each region.

$ aws --region $r lambda list-functions
{
    "Functions": [
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            }, 
            "Version": "$LATEST", 
            "CodeSha256": "wUnNlCihqWLXrcA5/5fZ9uN1DLdz1cyVpJV8xalNySs=", 
            "FunctionName": "Node-RED", 
            "VpcConfig": {
                "SubnetIds": [], 
                "VpcId": "", 
                "SecurityGroupIds": []
            }, 
            "MemorySize": 256, 
            "RevisionId": "4f5bdf6e-0019-4b78-a679-12638412177a", 
            "CodeSize": 1080463, 
            "FunctionArn": "arn:aws:lambda:eu-west-1:434836428939:function:Node-RED", 
            "Handler": "index.handler", 
            "Role": "arn:aws:iam::434836428939:role/service-role/home-skill", 
            "Timeout": 10, 
            "LastModified": "2018-05-11T16:20:01.400+0000", 
            "Runtime": "nodejs8.10", 
            "Description": "Provides the basic framework for a skill adapter for a smart home skill."
        }
    ]
}

This time it looks like there is an extra level of array in the output; this can be fixed with a minor change to the jq filter.

$ aws lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'
"Node-RED - nodejs8.10"

Putting it all back together gives:

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'; 
done

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"