Raspberry Pi Streaming Camera

I talked earlier about using an ONVIF camera to stream to a Chromecast because these cameras come with an open, well-documented interface for pulling video from them (as well as pan/tilt/zoom control if available).

If you don’t have a camera that supports ONVIF you can build something similar with a Raspberry Pi and the Camera module.

This should work with pretty much all of the currently available Raspberry Pi models (with the exception of the basic Pi Zero, which doesn’t have WiFi).

  1. Flash an SD card with the Raspbian Lite image
  2. Insert the camera ribbon cable into both the camera module and the Pi
  3. Once the Pi has booted use the raspi-config command to enable the Camera interface
  4. Install ffmpeg with sudo apt-get install ffmpeg
  5. Create a script with the following content
#!/bin/sh

v4l2-ctl --set-ctrl video_bitrate=300000

ffmpeg -f video4linux2 -input_format h264 -video_size 640x360 -framerate 30 -i /dev/video0  -vcodec copy -an -f flv rtmp://192.168.1.96/show/pi
  • This script sets the max video bitrate to 300kbps
  • If you need to rotate the video you can insert v4l2-ctl --set-ctrl=rotate=180 before the ffmpeg line to rotate it 180 degrees
  • ffmpeg uses the video4linux2 driver to read from the attached camera (/dev/video0)
  • It takes the h264 encoded feed at 640x360 and 30 frames per second and outputs it to the same nginx instance that I mentioned in my previous post. The feed is called pi (you can check which formats your camera offers with the command below)
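
If you are not sure whether your camera offers a native h264 stream at the size you want, you can list the formats the driver advertises first. This assumes the camera is on /dev/video0 and that v4l2-ctl is available (it comes from the v4l-utils package):

$ v4l2-ctl -d /dev/video0 --list-formats-ext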

ffmpeg uses the on-board hardware support for the video encoding, so even a Pi Zero W runs at about 5% CPU load. This means that if you only have one camera you could probably run nginx on the same device, or you can have multiple cameras all feeding to a central video streaming server.

If you want a kit that comes with the Pi Zero W, Camera and a case to mount it to a window have a look at the Pimoroni OctoCam.

The instructions should also work for pretty much any USB (or built in) camera attached to a Linux machine.

Streaming Camera to Chromecast

I have a little cheap WiFi camera I’ve been meaning to do something with for a while. The on-board web access doesn’t really work any more because it only supports viewing the stream via Flash or a Java applet.

But the camera supports the ONVIF standard, so it offers an rtsp:// feed that I can point Linux apps like mplayer at to see the stream.

The camera is currently sat on my window sill looking out over the valley which is a pretty good view.

View from upstairs window

I thought it would be interesting to stream the view to the TV in my Living room while I’m working from home at the moment. It is also a way to check the weather without having to get up in the morning and open the blind.

I have a Chromecast in the back of both TVs so using one of these seemed like it would be the easiest option.

Chromecast

Chromecasts support a number of different media types, but for video there are two common codecs that will work across all the currently available device types.

  • H264
  • VP8

And we have 3 options to deliver the video stream

  • HLS
  • DASH
  • SmoothStreaming

These all work in basically the same way: they chop the video up into short segments and generate a playlist that points to the segments in order, and the consumer downloads each segment in turn. When it reaches the end of the list it downloads the playlist again, which will by then hold the next set of segments.

There is a plugin for Nginx that supports both HLS and DASH which looked like a good place to start.

NGINX

I’m running this whole stack on a Raspberry Pi 4 running Raspbian Buster.

$ sudo apt-get install nginx libnginx-mod-rtmp ffmpeg

Once the packages are installed the following needs to be added to the end of the /etc/nginx/nginx.conf file. This sets up an RTMP listener that we can stream the video to, which will then be turned into both HLS and DASH streams to be consumed.

...
rtmp {
  server {
    listen 1935; # Listen on standard RTMP port
    chunk_size 4000;

    application show {
      live on;
      # Turn on HLS
      hls on;
      hls_type live;
      hls_path /var/www/html/hls/;
      hls_fragment 5s;
      hls_playlist_length 20s;
      
      # Turn on DASH      
      dash on;
      dash_path /var/www/html/dash/;
      dash_fragment 5s;
      dash_playlist_length 20s;

      # disable consuming the stream from nginx as rtmp
      deny play all;
    }
  }
}

The playlist and video segments get written to /var/www/html/hls and /var/www/html/dash respectively. Because they will be short lived and replaced very regularly it’s a bad idea to write these to an SD card as they will just cause excessive flash wear.

To get round this I’ve mounted tmpfs filesystems at those points with the following entries in /etc/fstab

tmpfs	/var/www/html/dash	tmpfs	defaults,noatime,size=50m
tmpfs	/var/www/html/hls	tmpfs	defaults,noatime,size=50m
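
The mount points need to exist before the tmpfs filesystems can be mounted, so something along these lines should create them and mount everything from /etc/fstab without a reboot:

$ sudo mkdir -p /var/www/html/hls /var/www/html/dash
$ sudo mount -a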

Now that we have the playlists and segments being generated in a sensible way we need to serve them up. I added the following to the /etc/nginx/sites-enabled/default file

server {
  listen 8080;
  listen [::]:8080;

  sendfile off;
  tcp_nopush on;
  directio 512;
  default_type application/octet-stream;

  location / {
    add_header 'Cache-Control' 'no-cache';
    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Expose-Headers' 'Content-Length';

    if ($request_method = 'OPTIONS') {
      add_header 'Access-Control-Allow-Origin' '*';
      add_header 'Access-Control-Allow-Credentials' 'true';
      add_header 'Access-Control-Max-Age' 1728000;
      add_header 'Content-Type' 'text/plain charset=UTF-8';
      add_header 'Content-Length' 0;
      return 204;
    }

    types {
      application/dash+xml mpd;
      application/vnd.apple.mpegurl m3u8;
      video/mp2t ts;
    }

    root /var/www/html/;
  }
}
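
After changing the nginx configuration it is worth checking the syntax and reloading, something like:

$ sudo nginx -t
$ sudo systemctl reload nginx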

Now we have the system to stream the content in an acceptable format we need to get the video from the camera into nginx. We can use ffmpeg to do this.

ffmpeg -re -rtsp_transport tcp -i rtsp://192.168.1.104:554/live/ch1 -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv rtmp://localhost/show/stream

This reads from the RTSP stream rtsp://192.168.1.104:554/live/ch1 and streams it into rtmp://localhost/show/stream. The show part is the name of the application declared in the rtmp section of nginx.conf and stream will be the name of the HLS or DASH stream. In this case that gives the following:

  • HLS -> http://192.168.1.98/hls/stream.m3u8
  • DASH -> http://192.168.1.98/dash/stream.mpd
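
Before pointing a Chromecast at them it is worth checking the streams play locally. Assuming ffmpeg is installed you can use ffplay, or just curl the playlist to see the list of segments:

$ ffplay http://192.168.1.98/hls/stream.m3u8
$ curl http://192.168.1.98/hls/stream.m3u8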

If you change the end of the `rtmp://localhost/show/XXXX` URL you can create multiple streams from different sources with different names (just make sure the tmpfs mounts have enough space for all the streams).

Testing

Node-RED Chromecast flow

I’ve been using the Node-RED Chromecast node to test the streams. The DASH stream is working pretty well, but the HLS one is a bit more fragile for some reason. Latency is currently about 20-30 seconds, which appears mainly to be determined by the size and number of chunks in the playlist, but the stream gets unreliable if I wind the fragment size down any lower than 5s or the playlist length below 20s.

Next

Now it’s basically working, the next steps are to add support for Camera devices to my Google Assistant Node-RED service so I can request the stream via voice and have it show on my Google Home Hub as well. I’m also building a standalone Google Smart Home Action just for Camera feeds using an all-Google stack, just as a learning exercise in Firebase.

At the moment the stream is only available from inside my network; I’ll probably proxy it to my external web server as well and add authentication. The Google Assistant can be given a Bearer Auth Token along with the URL, which means I’ll be able to view the stream on my phone while out. While not important for this stream, it would be for other security-camera-type applications.

Building Custom Raspberry Pi SD Card Images

After my post about using a Raspberry Pi 4 as a USB gadget got linked to by a YouTuber who worked out it also worked with the iPad Pro, it has been getting a lot of traffic.

Pi4 Gadget

Along with the traffic came a number of comments from people wanting help setting things up. While the instructions are reasonably complete, they do assume a certain amount of existing knowledge and access to a monitor/keyboard/network to complete everything. Also, given the majority of the readers were Apple users, they couldn’t mount the main partition of the SD card as it is an ext4 file system.

The quick and dirty solution is for me to modify a standard image and host it somewhere. This is OK, but it means I have to:

  • Find somewhere to host a 500MB file
  • Keep it up to date when new versions are released
  • Provide some way for people to trust what changes I’ve made to the image

I can probably get round the first one pretty easily, bandwidth is a lot cheaper than it used to be. The second and third items are a little harder.

I started to look at a way to script the modifications to a standard Raspbian image; that way I could just host the script and people could provide their own starting image. On a Linux machine I could mount both partitions of the card and modify or add the required config files, but the problem was installing dnsmasq. This needs the image to actually be running for apt-get to work, which gets back to the chicken/egg problem of needing to boot the Pi in order to make the changes. That, and it would only run on Linux, not a macOS or Windows machine.

What I need is a way to “run” a virtual Raspberry Pi and a way to run commands in that virtual machine. I know from working with services like Balena.io’s build system that it is possible to emulate the ARM processor found in a Raspberry Pi on Intel based hardware. I found a couple of examples of people using Qemu to run virtual Pis and I was just about to set up the same when I came across dockerpi, which is a docker image with everything preconfigured. You just mount an SD card image into the container and it boots from it.

Virtual Raspberry Pi in a terminal window

When started you end up with what is basically the virtual machine console as a command line app. You can interact with it just like any other console application. I logged in as pi and then used sudo to run apt-get update and apt-get install dnsmasq.

That works for doing it manually, but I need to script this, so it’s time to break out some old school Linux foo and use expect.

Expect is a scripting tool that reads from stdin, and outputs to stdout, but will wait for a known output before sending a reply. It was used in the early days of the Internet to script dial-up internet connections.

#!/usr/bin/expect -f
# Never time out waiting for output from the virtual machine
set timeout -1
# The SD card image file name is the first argument
set imageName [lindex $argv 0]
if {[string trimleft $imageName] eq ""} {
  puts "No Image file provided"
  exit
}
set cwd [file normalize [file dirname $argv0]]
set imagePath [file join $cwd $imageName]
# Start the emulated Pi with the image mounted as its SD card
spawn docker run -i --rm -v $imagePath:/sdcard/filesystem.img lukechilds/dockerpi:vm
# Wait for the login prompt and log in with the default credentials
expect "login: "
send "pi\n"
expect "Password: "
send "raspberry\n"
# Hand the console back to the user
interact

This expect script takes the name of the SD Card image as an argument, starts the Docker container and then logs in with the default pi/raspberry username & password.

With a bit more work we can get all the changes done, including creating the extra files.

...
proc slurp {file} {
    set fh [open $file r]
    set ret [read $fh]
    close $fh
    return $ret
}
...
set file [slurp "etc/network/interfaces.d/usb0"]
expect "# "
send "cat <<EOF >> /etc/network/interfaces.d/usb0\n"
send "$file\n"
send "EOF\n"
...

I’ve checked all the files into a github repository here. I’ve tested the output with a Pi Zero and things look good. To run it for yourself, clone the repo, copy the Raspbian Lite image (unzip it first) into the directory and run ./create-image <image file name>

There is a version of the output available here.

I’ve still got to get round to getting the RNDIS Ethernet device support working so it will load the right drivers on Windows. And I need to extend the script to build the Personal CA Appliance from my last post.

A Personal Offline Certificate Authority

I had a slight scare in the run-up to Christmas. I went to use the VPN on my phone to connect into my home network and discovered that the certificates identifying both my phone and the server had expired the day before.

This wouldn’t have been a problem except I couldn’t find where I’d stashed the files that represent the CA I had used to create the certificates. There was a short panic until I got home that evening and found them on an old decommissioned server that luckily I hadn’t got round to scrapping properly yet.

This led me to think of a better place to store these files. I wanted to have a (relatively) secure offline place to store them, but also somewhere that could handle the actual signing of certificates and the rest of the admin (I normally end up Googling the instructions for openssl each time I need to do this).

A simple approach would be to just store the files on an encrypted USB Mass Storage device, but I wanted something a little bit more automated.

Hardware

Recycling my ever-useful Raspberry Pi Zero as a USB Ethernet Gadget instructions again with a Raspberry Pi Zero (note: not a Zero W) gets me a device that has no direct internet connection, but that can be plugged into nearly any machine and accessed via a local network connection.

RTC attached to a Raspberry Pi Zero in a Pimoroni case

One little niggle is that working with certificates requires an accurate clock on the device. Raspbian by default sets its clock via NTP over the network since there is no persistent clock on the Pi. The fix for this is an i2c battery-backed hardware clock (RTC). You can pick these up from a number of places, but I grabbed one of these from Amazon.

To enable the RTC you need to add the following to /boot/config.txt

dtoverlay=i2c-rtc,ds3231

And comment out the first if block in /lib/udev/hwclock-set

...
dev=$1
#if [ -e /run/systemd/system ] ; then
#    exit 0
#fi
...
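
Once the overlay is in place and the Pi has rebooted you can check the RTC is being picked up. These commands assume the clock is on i2c bus 1 and that the i2c-tools package is installed:

$ sudo i2cdetect -y 1
$ sudo hwclock -r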

Now we have a reliable system clock we can go about setting up the CA.

Software

The first version just requires me to ssh into the Pi and use openssl on the command line. This was enough to get me started again, but I was looking for something a bit more user-friendly.

I had a look round for a web interface to openssl and found a few different options

But they all require a whole bunch of other things like OpenLDAP, MySQL and Apache, which is all a bit too heavyweight for a Pi Zero.

A web form collecting data for a certificate

So I decided to write my own, a bit of poking around and I found the node-openssl-cert module on npm which looked like it should be able to handle everything I need.

Combined with express, I now have a form I can fill in with the subject details and hit submit.

The page then downloads a PKCS12 format bundle which contains the CA cert, Client cert and Client key all protected by a supplied password. I can then use openssl to break out the parts I need or just import the whole thing into Android.
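
As a rough example (the file names here are just placeholders), the individual parts can be pulled out of the PKCS12 bundle with openssl like this:

$ openssl pkcs12 -in bundle.p12 -clcerts -nokeys -out client.crt
$ openssl pkcs12 -in bundle.p12 -nocerts -nodes -out client.key
$ openssl pkcs12 -in bundle.p12 -cacerts -nokeys -out ca.crt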

At the moment I’ve just copied the existing CA key & cert out of the existing CA directory structure to get this to work. I intend to update the code to make use of the serial number and index tracking so I can generate a Certificate Revocation List if needed, and potentially allow the downloading of previously generated certs.

You can find the project on github here and I hope to find some time to write up some end to end instructions for setting it all up.

The interesting bit was how to download a file from an XMLHttpRequest; you can see that trick here.

Aside

I originally titled this as “A Secure Offline Certificate Authority”. I changed it because this isn’t really any more secure than a USB key you just keep the CA key & cert on, and probably less secure than if you encrypted that drive. It is true that the CA key & cert are not accessible from the host machine without SSHing to the device, but they are still just stored on the Pi’s SD card, so if anybody has physical access to it then it’s game over.

I could look at i2c or SPI secure elements that could be used to store the private key, but the real solution to this is an ASIC or FPGA combined with a secure element, and that is all overkill for what I needed here.

Getting out past the firewall

Ahhh, the joys of IT departments that think everybody just uses Word/Excel/Outlook and browses Facebook at lunchtime.

Networks that transparently proxy HTTP/HTTPS (probably with man in the middle TLS CA certs deployed to all the machines, but that is an entirely different problem) and block everything else really do not work in the modern world where access to places like GitHub via SSH or devices connecting out via MQTT are needed.

One possible solution to the SSH problem is a bastion host. This is a machine that can be reached from the internal network but is also allowed to connect to the outside world. This allows you to use this machine as a jumping off point to reach services blocked by the firewall.

The simple way is to log into the bastion and then from that shell connect on to your intended external host; this works for targets you want a shell on, but not for things like cloning/updating git repositories. We also want to automate as much of this as possible.

The first step is to set up public/private key login for the bastion machine. To do this we first generate a key pair with the ssh-keygen command.

$ ssh-keygen -f ~/.ssh/bastion -t ecdsa -b 521
Generating public/private ecdsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in ~/.ssh/bastion.
Your public key has been saved in ~/.ssh/bastion.pub.
The key fingerprint is:
SHA256:3Cfr60QRNbkRHDt6LEUUcemhKFmonqDlgEETgZl+H8A hardillb@tiefighter
The key's randomart image is:
+---[ECDSA 521]---+
|oB+      ..+X*.. |
|= .E    . .o++o  |
|.o  .  . o..+= . |
|....o...o..=o..  |
|  .=.o..S.* +    |
|  . ..o  . *     |
|          o      |
|         o       |
|         .+.     |
+----[SHA256]-----+

In this case we want to leave the passphrase blank because we want to use this key as part of automating other steps; normally you should use a passphrase so the key can’t be used if the file is compromised.

Once generated you can copy the public key to the `~/.ssh/authorized_keys` file on the bastion machine using the ssh-copy-id command

$ ssh-copy-id -i ~/.ssh/bastion user@bastion

Once that is in place we should be able to use the key to log straight into the bastion machine. We can now use the `-J` option to specify the bastion as a jump point to reach a remote machine.

$ ssh -J user@bastion user@remote.machine

We can also add this as an entry in the `.ssh/config` file which is more useful for things like git where it’s harder to get at the actual ssh command line.

Host bastion
 HostName bastion
 User user
 IdentityFile ~/.ssh/bastion

Host github
 Hostname github.com
 User git
 IdentityFile ~/.ssh/github
 ProxyCommand ssh -W %h:%p bastion

This config will proxy all git commands working with remote repositories on github.com via the bastion machine, using the bastion key to authenticate with the bastion machine and the github key to authenticate with github. This is all totally transparent to git.

ssh also supports a bunch of other useful tricks, such as port forwarding from either end of the connection.

It can also proxy other protocols using the SOCKS tunnelling protocol, which means it can be used as a poor man’s VPN in some situations. To enable SOCKS proxying you can use the -D option to give a local port number, or the DynamicForward directive in the ~/.ssh/config file. This is really useful with a web browser that supports SOCKS proxies, as it means you can point the browser at a local port and have it surf the web as if it was the remote machine.

All of this still works if you are using a bastion machine.
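
For example, something like this starts a SOCKS proxy on a local port (1080 here is just an example) via the bastion entry from the config above; pointing a browser’s SOCKS proxy settings at localhost:1080 then routes its traffic out through the bastion:

$ ssh -D 1080 -N bastion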

Custom hardware

Combining all this really useful SSH capability with the Raspberry Pi gadgets makes it possible to carry a bastion host with you. Using a Raspberry Pi Zero W or even a full sized Pi 4 that can be configured to join a more open WiFi network (e.g. a visitor or testing network) you can have a device you just plug into a spare USB port that will give you a jumping off point to the outside world while still being connected to the more restricted internal network with all that provides. Just don’t tell IT security ;-).

This works well because even the most locked down machine normally still allows USB network adapters to be used.

DoH Update and DNS over TLS

I’ve been updating my DoH code again. It should now match RFC8484 and can be found on github here.

  • DNS wire format requests are now on /dns-query rather than /query
  • Changed the Content-Type to application/dns-message
  • JSON format requests are now on /resolve
  • Made the dns-to-https-proxy only listen on IPv4 as it was always replying on IPv6

Normally ISPs have rules about running open recursive DNS servers on consumer lines; this is mainly because they can be subject to UDP source forgery and used in DDoS attacks. Because DoH is all TCP based it does not pose the same problem, so I’m going to stand up a version publicly so I can set my phone to use it for a while. I’ll be using nginx to proxy it, sticking the following config block in the server block that serves my HTTPS traffic.

location /dns-query {
  proxy_pass https://127.0.0.1:3000/dns-query;
}

location /resolve {
  proxy_pass https://127.0.0.1:3000/resolve;
}
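
A reasonably recent curl (7.62 or later) can be used to check the proxied DoH endpoint is working by telling it to resolve a test request via the new URL (the hostname here is a placeholder for wherever the proxy is exposed):

$ curl -v --doh-url https://example.com/dns-query https://www.example.org/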

As well as DoH I’ve been looking at setting up DoT (RFC7858) for my DNS server. Since bind doesn’t have native support for TLS, this will again be using nginx as a proxy to terminate the TLS connection and then proxy on to the bind instance. The following configuration listens on port 853 and forwards to port 53.

stream {
    upstream dns {
        zone dns 64k;
        server 127.0.0.1:53;
    }

    server {
        listen 853 ssl;
        ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        proxy_pass dns;
        proxy_bind 127.0.0.2;
    }
}
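
The DoT listener can then be tested with kdig from the knot-dnsutils package, which can send queries over TLS (again the server hostname is a placeholder):

$ kdig +tls @example.com www.example.org A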

nginx is running on the same machine as bind, which serves different views to internal and external clients based on the IP address the request came from. The internal view includes 127.0.0.1, which is why the proxy_bind directive is used to make sure the request comes from 127.0.0.2 so it looks like an external address.

Pi4 USB-C Gadget

Pi4 Gadget

I’ve previously blogged about using Pi Zero (and Zero W) devices as USB Gadgets. This allows them to be powered and accessed via one of the micro USB sockets and to show up as both a CD drive and an ethernet device.

A recent update to the Raspberry Pi 4 bootloader not only enables the low power mode for the USB hardware and allows network boot to be enabled, it also enables data over the USB-C port. The lower power draw means it should run (without any HATs) with the power supplied from a laptop.

Details of how to check/update the bootloader can be found here.

Given that the Pi4 has a Gigabit Ethernet adapter, WiFi and 4 USB sockets (though you need to keep the power draw low to be safe) and up to 4GB RAM to go with its 4 x 1.5GHz cores, it makes for a very attractive plug-in compute device.

With this enabled all the same scripts from the Pi Zeros should just work, but here is the updated version for Raspbian Buster.

  • Add dtoverlay=dwc2 to the /boot/config.txt
  • Add modules-load=dwc2 to the end of /boot/cmdline.txt
  • If you have not already enabled ssh then create an empty file called ssh in /boot
  • Add libcomposite to /etc/modules
  • Add denyinterfaces usb0 to /etc/dhcpcd.conf
  • Install dnsmasq with sudo apt-get install dnsmasq
  • Create /etc/dnsmasq.d/usb with following content
interface=usb0
dhcp-range=10.55.0.2,10.55.0.6,255.255.255.248,1h
dhcp-option=3
leasefile-ro
  • Create /etc/network/interfaces.d/usb0 with the following content
auto usb0
allow-hotplug usb0
iface usb0 inet static
  address 10.55.0.1
  netmask 255.255.255.248
  • Create /root/usb.sh
#!/bin/bash
cd /sys/kernel/config/usb_gadget/
mkdir -p pi4
cd pi4
echo 0x1d6b > idVendor # Linux Foundation
echo 0x0104 > idProduct # Multifunction Composite Gadget
echo 0x0100 > bcdDevice # v1.0.0
echo 0x0200 > bcdUSB # USB2
echo 0xEF > bDeviceClass
echo 0x02 > bDeviceSubClass
echo 0x01 > bDeviceProtocol
mkdir -p strings/0x409
echo "fedcba9876543211" > strings/0x409/serialnumber
echo "Ben Hardill" > strings/0x409/manufacturer
echo "PI4 USB Device" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
# Add functions here
# see gadget configurations below
# End functions
mkdir -p functions/ecm.usb0
HOST="00:dc:c8:f7:75:14" # "HostPC"
SELF="00:dd:dc:eb:6d:a1" # "BadUSB"
echo $HOST > functions/ecm.usb0/host_addr
echo $SELF > functions/ecm.usb0/dev_addr
ln -s functions/ecm.usb0 configs/c.1/
udevadm settle -t 5 || :
ls /sys/class/udc > UDC
ifup usb0
service dnsmasq restart
  • Make /root/usb.sh executable with chmod +x /root/usb.sh
  • Add /root/usb.sh to /etc/rc.local before exit 0 (I really should add a systemd startup script here at some point; a rough sketch of one is below)
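
As an untested sketch, a minimal systemd unit to replace the rc.local entry might look something like this, saved as (say) /etc/systemd/system/usb-gadget.service and enabled with sudo systemctl enable usb-gadget:

[Unit]
Description=Pi4 USB gadget setup
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/root/usb.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target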

With this setup the Pi4 will show up as an ethernet device with an IP address of 10.55.0.1 and will assign the device you plug it into an IP address via DHCP. This means you can just ssh to pi@10.55.0.1 to start using it.

Addendum

Quick note, not all USB-C cables are equal it seems. I’ve been using this one from Amazon and it works fine.

The latest revision (as of late Feb 2020) of the Pi 4 boards should work with any cable.

There is also now a script to create pre-modified Raspbian images here with a description here and a copy of the modified image here.

Updated AWS Lambda NodeJS Version checker

I got another of those emails from Amazon this morning that told me that the version of the NodeJS runtime I’m using in the Lambda for my Node-RED Alexa Smart Home Skill is going End Of Life.

I’ve previously talked about wanting a way to automate checking what version of NodeJS was in use across all the different AWS Availability Regions. But when I tried my old script it didn’t work.

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .FunctionName + " - " + .Runtime';
done

This is most likely because the output from the awscli tool has changed slightly.

The first change looks to be in the listing of the available regions

$ aws ec2 describe-regions --output text
REGIONS	ec2.eu-north-1.amazonaws.com	opt-in-not-required	eu-north-1
REGIONS	ec2.ap-south-1.amazonaws.com	opt-in-not-required	ap-south-1
REGIONS	ec2.eu-west-3.amazonaws.com	opt-in-not-required	eu-west-3
REGIONS	ec2.eu-west-2.amazonaws.com	opt-in-not-required	eu-west-2
REGIONS	ec2.eu-west-1.amazonaws.com	opt-in-not-required	eu-west-1
REGIONS	ec2.ap-northeast-2.amazonaws.com	opt-in-not-required	ap-northeast-2
REGIONS	ec2.ap-northeast-1.amazonaws.com	opt-in-not-required	ap-northeast-1
REGIONS	ec2.sa-east-1.amazonaws.com	opt-in-not-required	sa-east-1
REGIONS	ec2.ca-central-1.amazonaws.com	opt-in-not-required	ca-central-1
REGIONS	ec2.ap-southeast-1.amazonaws.com	opt-in-not-required	ap-southeast-1
REGIONS	ec2.ap-southeast-2.amazonaws.com	opt-in-not-required	ap-southeast-2
REGIONS	ec2.eu-central-1.amazonaws.com	opt-in-not-required	eu-central-1
REGIONS	ec2.us-east-1.amazonaws.com	opt-in-not-required	us-east-1
REGIONS	ec2.us-east-2.amazonaws.com	opt-in-not-required	us-east-2
REGIONS	ec2.us-west-1.amazonaws.com	opt-in-not-required	us-west-1
REGIONS	ec2.us-west-2.amazonaws.com	opt-in-not-required	us-west-2

This looks to have added something extra to the start of each line, so I need to change which field I select with the cut command by changing -f3 to -f4.

The next problem looks to be with the JSON that is output for the list of functions in each region.

$ aws --region $r lambda list-functions
{
    "Functions": [
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            }, 
            "Version": "$LATEST", 
            "CodeSha256": "wUnNlCihqWLXrcA5/5fZ9uN1DLdz1cyVpJV8xalNySs=", 
            "FunctionName": "Node-RED", 
            "VpcConfig": {
                "SubnetIds": [], 
                "VpcId": "", 
                "SecurityGroupIds": []
            }, 
            "MemorySize": 256, 
            "RevisionId": "4f5bdf6e-0019-4b78-a679-12638412177a", 
            "CodeSize": 1080463, 
            "FunctionArn": "arn:aws:lambda:eu-west-1:434836428939:function:Node-RED", 
            "Handler": "index.handler", 
            "Role": "arn:aws:iam::434836428939:role/service-role/home-skill", 
            "Timeout": 10, 
            "LastModified": "2018-05-11T16:20:01.400+0000", 
            "Runtime": "nodejs8.10", 
            "Description": "Provides the basic framework for a skill adapter for a smart home skill."
        }
    ]
}

This time it looks like there is an extra level of array in the output; this can be fixed with a minor change to the jq filter

$ aws lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'
"Node-RED - nodejs8.10"

Putting it all back together to get

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'; 
done

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"

Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all; after poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So started the deep dive into i2c devices and Device Tree. At first it all seemed so easy, just a short little overlay to set the device’s address and to set a GPIO pin to act as an interrupt, e.g. something like this:

/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard-wired i2c device attached to a permanent system i2c bus, and this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for the i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.

I tried all kinds of different overlays, but no joy. The Raspberry Pi even already has a Device Tree entry for a USB device, because the onboard Ethernet is actually a permanently wired device and has an entry in the Device Tree. I tried copying this pattern and adding an entry for the i2c_tiny_usb device and then the i2c device, but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the tiny-i2c-usb github page that hopefully somebody will eventually answer.

Userspace

Having wasted a week and got nowhere, this morning I decided to take a different approach (mainly for the sake of my sanity). This is a basic i2c device with a single GPIO pin to act as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()
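
The bus number passed to smbus.SMBus() depends on how many i2c buses are already present, so it may not be 3 on another machine. With the i2c_tiny_usb adapter plugged in you can list the buses (i2cdetect is in the i2c-tools package) and see which number it was given:

$ i2cdetect -l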

This is a good start but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module that exposes an API especially for userspace input devices and there is the python-uinput module.

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X,
  uinput.ABS_Y
])

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
  else:
    device.emit(uinput.BTN_TOUCH, 0)   

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()

This injects touchscreen coordinates directly into the /dev/input system; the syn=False on the X axis value tells the uinput code to batch it up with the Y axis value so they show up as an atomic update.
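
For the uinput version to work the uinput kernel module needs to be loaded, and the script needs permission to write to /dev/uinput (run it as root or add a udev rule). Loading the module now and at every boot looks something like this:

$ sudo modprobe uinput
$ echo uinput | sudo tee -a /etc/modules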

This is a bit of a hack, but it should be more than good enough for what I need. I’m still tempted to keep chipping away at the Device Tree stuff as I’m sure it will come in handy someday.

Static IP Addresses and Accounting

Over the last few posts I’ve talked about how to set up the basic parts needed to run a small ISP.

In this post I’m going to cover adding a few extra features such as static IP addresses, Bandwidth accounting and Bandwidth limiting/shaping.

Static IP Addresses

We can add a static IP address by adding a field to the user’s LDAP entry. To do this we first need to add the Freeradius schema to the list of fields that the LDAP server understands. The Freeradius schema files can be found in /usr/share/doc/freeradius/schemas/ldap/openldap/ and are gzipped. I unzipped them, copied them to /etc/ldap/schema and then imported the schema with

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/freeradius.ldif

Now the schema is imported we can add the radiusprofile objectClass to the user, along with a radiusFramedIPAddress entry, using the following ldif file.

dn: uid=isp1,ou=users,dc=hardill,dc=me,dc=uk
changetype: modify
add: objectClass
objectClass: radiusprofile
-
add: radiusFramedIPAddress
radiusFramedIPAddress: 192.168.5.2

We then use ldapmodify to update the isp1 user’s record

$ ldapmodify -f addIPAddress.ldif -D cn=admin,dc=hardill,dc=me,dc=uk -w password

Now we have the static IP address stored against the user, we have to get the RADIUS server to pass that information back to the PPPoE server after it has authenticated the user. To do this we need to edit the /etc/freeradius/3.0/mods-enabled/ldap file. Look for the `update` section and add the following

update {
  ...
  reply:Framed-IP-Address     := 'radiusFramedIPAddress'
}

Running radtest will now show Framed-IP-Address in the response message and when pppoe-server receives the authentication response it will use this as the IP address for the client end of the connection.
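
radtest ships with the FreeRADIUS utilities and takes the username, password, server, NAS port and shared secret; something like this (the password and secret here are placeholders) should show the Framed-IP-Address attribute coming back in the Access-Accept:

$ radtest isp1 password 127.0.0.1 0 testing123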

Accounting

Out of the box pppoe-server will send accounting messages to the RADIUS server at the start and end of the session.

Sat Aug 24 21:35:17 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Start
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 21:35:17 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566678917

Sat Aug 24 23:08:53 2019
	Acct-Session-Id = "5D619F853DBB00"
	User-Name = "isp1"
	Acct-Status-Type = Stop
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 5616
	Acct-Output-Octets = 2328
	Acct-Input-Octets = 18228
	Acct-Output-Packets = 32
	Acct-Input-Packets = 297
	NAS-Port-Type = Virtual
	Acct-Terminate-Cause = User-Request
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 24 2019 23:08:53 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "290b459406a25d454fcfdf3088a2211c"
	Timestamp = 1566684533

The Stop message includes the session length (Acct-Session-Time) in seconds and the number of bytes downloaded (Acct-Output-Octets) and uploaded (Acct-Input-Octets).

Historically, in the days of dial-up, that probably would have been sufficient as sessions would only last for hours at a time, not the weeks/months of a DSL connection. pppoe-server can be told to send updates at regular intervals; this setting is also controlled by a field in the RADIUS authentication response. While we could add this to each user, it can be added to all users with a simple update to the post-auth section of the /etc/freeradius/3.0/sites-enabled/default file.

post-auth {
   update reply {
      Acct-Interim-Interval = 300
   }
   ...
}

This sets the update interval to 5mins and the log now also contains entries like this.

Wed Aug 28 08:38:56 2019
	Acct-Session-Id = "5D62ACB7070100"
	User-Name = "isp1"
	Acct-Status-Type = Interim-Update
	Service-Type = Framed-User
	Framed-Protocol = PPP
	Acct-Authentic = RADIUS
	Acct-Session-Time = 230105
	Acct-Output-Octets = 10915239
	Acct-Input-Octets = 17625977
	Acct-Output-Packets = 25918
	Acct-Input-Packets = 31438
	NAS-Port-Type = Virtual
	Framed-IP-Address = 192.168.5.2
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 0
	Acct-Delay-Time = 0
	Event-Timestamp = "Aug 28 2019 08:38:56 BST"
	Tmp-String-9 = "ai:"
	Acct-Unique-Session-Id = "f36693e4792eafa961a477492ad83f8c"
	Timestamp = 1566977936

Having this data written to a log file is useful, but if you want to trigger events based on it (e.g. create a rolling usage graph or restrict speed once a certain allowance has been passed) then something a little more dynamic is useful. Freeradius has a native plugin interface, but it also has plugins that let you write Perl and Python functions that are triggered at particular points. I’m going to use the Python plugin to publish the data to an MQTT broker.

To enable the Python plugin you need to install the freeradius-python package

$ sudo apt-get install freeradius-python

We then need to symlink mods-available/python into mods-enabled and edit the file. First we set the path that the plugin will use to find Python modules and files, and then enable the events we want to pass to the module.

python {
    python_path = "/etc/freeradius/3.0/mods-config/python:/usr/lib/python2.7:/usr/local/lib/python/2.7/dist-packages"
    module = example

    mod_instantiate = ${.module}
    func_instantiate = instantiate

    mod_accounting = ${.module}
    func_accounting = accounting
}
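
The symlink mentioned above can be created with something like this (the paths assume the default Debian FreeRADIUS 3.0 layout):

$ cd /etc/freeradius/3.0/mods-enabled
$ sudo ln -s ../mods-available/python .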

The actual code follows; it publishes the number of bytes used in the session to the topic isp/[username]/usage. Each callback gets passed a tuple containing all the values available.

import radiusd
import paho.mqtt.publish as publish

def instantiate(p):
  print "*** instantiate ***"
  print p
  # return 0 for success or -1 for failure
  return 0

def accounting(p):
  print "*** accounting ***"
  radiusd.radlog(radiusd.L_INFO, '*** radlog call in accounting (0) ***')
  print
  print p
  d = dict(p)
  if d['Acct-Status-Type'] == 'Interim-Update':
      topic = "isp/" + d['User-Name'] + "/usage"
      usage = d['Acct-Output-Octets']
      print "publishing data to " + topic
      publish.single(topic, usage, hostname="hardill.me.uk", retain=True)
      print "published"
  return radiusd.RLM_MODULE_OK

def detach():
  print "*** goodbye from example.py ***"
  return radiusd.RLM_MODULE_OK
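
With the module enabled you can watch the usage values arriving with mosquitto_sub (from the mosquitto-clients package), using the same broker hostname as in the script:

$ mosquitto_sub -h hardill.me.uk -t 'isp/+/usage' -v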

I was going to talk about traffic shaping next, but that turns out to be real deep magic and I need to spend some more time playing before I have something to share.