Taking a RISC

As a result of the Chip Crisis, getting hold of Raspberry Pi SBCs has been tricky for the last year or so. If you do manage to find a retailer with stock before the scalpers clean them out, the boards tend to come bundled with a PSU, pre-installed SD card, case and HDMI adapters (I understand why, it makes them more attractive to the original target audience).

At the same time there has been a growing number of new SBCs based on the RISC-V architecture. RISC-V is an open CPU architecture looking to compete with the closed ARM architecture found just about everywhere (phones, new Macs, watches, set top boxes, …)

So it seemed like a good time to see what boards were available and have a play. After a bit of a search I found a Sipeed Nezha RV64 development board on Amazon. It has the same basic layout as a Pi and all the same ports (plus some extras), with a Debian Linux variant to run on it; the only thing lacking is RAM at 1GB, but that should be enough to get started.

Single Board Computer

Having ordered one, the delivery estimate was mid October to early November, but it turned up the third weekend in September.

Unboxing

We get a reasonably nice box which contains the following:

  • Sipeed Nezha 64bit 1GB Development Board
  • 32GB SD card preloaded with Debian Linux
  • 2 USB-A to USB-C cables
  • USB to Serial break out cable
  • American style USB Power supply
  • 4 Stand off feet

I don’t think I’ll be making use of the USB PSU; unbranded Chinese PSUs like this have a bad reputation and I don’t fancy digging out my UK to US step-down transformer. Luckily, when the postman delivered this he also delivered one of the new Google Chromecast HDs, which came with a suitable PSU that I don’t need as the TV has a USB socket I can use to power the Chromecast.

The USB-A to USB-C cables also look a little strange: the USB-A end has a purple insert but doesn’t appear to have the extra USB 3 contacts, and the data tracks look really thin. I’ll give them a go for powering the board, but will probably use one of my existing cables with the USB-C OTG port on the board when I get round to seeing if I can drive it as a USB gadget, like I’ve been doing with Raspberry Pi 4 and Pi Zero boards.

I can’t find a matching case for this board online; while it’s mostly laid out like a Pi, the ports are not quite the same, so most Pi cases would need modifying. I may see if I can find somebody with a 3D printer to run one up for me. In the meantime I’ve screwed in the stand-off feet to hold it up off the desk.

Powering up

As the board only has one USB-A socket I stuck a wireless keyboard/mouse dongle in to get started, along with an HDMI cable, ethernet and power.

When I plugged the power in I got absolutely nothing on the display, but the power LED lit up and the link and data lights on the ethernet socket started to flash. A quick look in the DHCP table on my router showed a new device with a hostname of sipeed, so I fired up SSH to connect as the sipeed user and entered the password printed on the inside of the box.

This got me logged into a bash shell. In the home directory I found a shell script called test_lcd.sh which, when I examined it, hinted that I might be able to switch between a built-in LCD driver and the HDMI output. Running the script as root with an argument of hdmi kicked the hooked-up display into life.
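For the record, the invocation was just this, run from the sipeed user’s home directory (the lcd argument is my assumption from reading the script, I haven’t needed to switch back yet):

$ sudo ./test_lcd.sh hdmi    # switch output to the HDMI port
$ sudo ./test_lcd.sh lcd     # presumably switches back to the built-in LCD driver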

X Windows running on half a screen

It appears (after a little searching) that the default X config is for a tablet-sized screen in portrait mode, which is why it’s only using the left third of the screen.

I’ll try and find some time to set up the display properly later, but for now we’ve proved nothing is broken.
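When I do get to it, the fix will probably be a one-liner with xrandr, something along these lines (the output name and mode here are assumptions, xrandr -q will list the real ones):

$ xrandr -q                                               # list outputs and supported modes
$ xrandr --output HDMI-1 --mode 1920x1080 --rotate normal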

Next I tried checking for any OS updates with sudo apt-get update but it threw errors to do with missing GPG keys.

root@sipeed:~# apt-get update
Get:1 http://ftp.ports.debian.org/debian-ports sid InRelease [69.6 kB]
Err:1 http://ftp.ports.debian.org/debian-ports sid InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E852514F5DF312F6
Fetched 69.6 kB in 2s (34.0 kB/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.ports.debian.org/debian-ports sid InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E852514F5DF312F6
W: Failed to fetch http://ftp.ports.debian.org/debian-ports/dists/sid/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E852514F5DF312F6
W: Some index files failed to download. They have been ignored, or old ones used instead.

I’m pretty sure I can import them, just need to find the right key server to pull them from.

(From Reddit)

$ busybox wget -O- https://www.ports.debian.org/archive_2022.key | sudo apt-key add -

Once I get the updates installed I’ll see if I can get NodeJS on there and give Node-RED and FlowForge a go.

That’s a basic first sniff done. I’ll post some more when I’ve had time to poke round a bit further.

I’ve found some Fedora images for this board here, which I’ll probably boot up next, then give Ubuntu a go from here.

Home MicroK8s Cluster

I started to write about my home test environment for FlowForge a while ago. Having just had to rebuild my K8s cluster due to a node failure, I thought I should come back to this and document how I set it up (as much for next time as to share).

Cluster Nodes

I’m using the following nodes

Base OS

I’m working with Ubuntu 20.04 as this is the default OS of choice for MicroK8s and it’s available for both x86_64 and arm64 for the Raspberry Pi 4.

Installing MicroK8s

$ sudo snap install microk8s --classic --channel=1.24

Once deployed on all 3 nodes, we need to pick one of them to act as the master. In this case I’m using the Intel Celeron machine and will run the following on it:

$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.59:25000/52bfa563603b3018770f88cadf606920/0e6fa3fb9ed3

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.59:25000/52bfa563603b3018770f88cadf606920/0e6fa3fb9ed3 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.59:25000/52bfa563603b3018770f88cadf606920/0e6fa3fb9ed3

And then on the other 2 nodes run the following

$ microk8s join 192.168.1.59:25000/52bfa563603b3018770f88cadf606920/0e6fa3fb9ed3 --worker

You can verify the nodes are joined to the cluster with:

$ microk8s.kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
kube-two     Ready    <none>   137m   v1.24.0-2+f76e51e86eadea
kube-one     Ready    <none>   138m   v1.24.0-2+f76e51e86eadea
kube-three   Ready    <none>   140m   v1.24.0-2+59bbb3530b6769

Once the nodes are added to the cluster we need to enable a handful of add-ons; on the master node run:

$ microk8s enable dns:192.168.1.xx ingress helm helm3

The dns:192.168.1.xx part overrides the default of using Google’s 8.8.8.8 DNS server to resolve names outside the cluster. This is important because I want it to point at my local DNS server, as I have *.flowforge.loc and *.k8s.loc set up there to point to the cluster IP addresses for Ingress.

Install kubectl and helm

By default MicroK8s ships with a bunch of tools baked in; these include kubectl and helm, which can be accessed as microk8s.kubectl and microk8s.helm respectively.
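If you would rather just type kubectl and helm on the nodes themselves, snap aliases should do the trick (assuming the helm3 add-on is enabled):

$ sudo snap alias microk8s.kubectl kubectl
$ sudo snap alias microk8s.helm3 helm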

kubectl

Instructions for installing standalone kubectl can be found here. Once installed you can generate the config by running the following on the master node:

$ microk8s config > ~/.kube/config

This file can then be copied to any other machine you want to be able to administer the cluster from.
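Something along these lines works (the hostname and user are just placeholders, and the ~/.kube directory needs to exist on the target machine first):

$ scp ~/.kube/config user@workstation:~/.kube/config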

helm

Instructions for installing standalone helm can be found here.

This will make use of the same ~/.kube/config credentials file as kubectl.

NFS Persistent Storage

In order to have a consistent persistent storage pool across all 3 nodes I’m using an NFS share from my NAS. This is managed by the nfs-subdir-external-provisioner, which creates a new directory on the NFS share for each volume created.

All the nodes need the NFS client tools installed, which can be done with:

$ sudo apt-get install nfs-common

The provisioner itself is deployed using helm:

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.7 \
    --set nfs.path=/volume1/kube

To set this as the default StorageClass (the chart creates it as nfs-client) run the following:

$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
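A quick way to check the provisioner is wired up correctly is to create a throwaway PersistentVolumeClaim and make sure it gets bound (and that a matching directory appears on the NFS share). This is just a sketch; with the default StorageClass set as above, no storageClassName is needed in the claim:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl get pvc nfs-test
$ kubectl delete pvc nfs-test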

Conclusion

That is enough for the basic Kubernetes cluster setup. There are some FlowForge specific bits still needed (e.g. tagging nodes) but I’ll leave those for the FlowForge on Kubernetes install docs (which I have to finish writing before the next release).

New Daily Driver

I got my first Dell XPS13 back in 2016 and a second one in 2020. I really like them, but with the new job I’ve been using it for both personal and work use, so I decided to grab a second machine to help with keeping things separate; it’s easier to switch off at the end of the day if I can leave the “work” machine in the office.

Lenovo X1 Carbon

I’ve gone for a Lenovo X1 Carbon. It’s a machine I looked at when I got the second XPS13, as it was another machine that could be ordered with Linux pre-installed; Lenovo are now offering both Ubuntu and Fedora as options. In my case I knew I wouldn’t have any problems installing the OS myself, so I ordered a bare machine and installed Fedora 35. Because I got to do a clean install I could also enable LUKS out of the box to encrypt the drive.

Also, running both a deb-based and an rpm-based distro will help me stay current with both and make testing a little easier without running VMs all the time.

I used to run Fedora when I was at IBM and even worked with the internal team that packaged some of the tools we needed on a day to day basis (like Lotus Notes and the IBM JVM). I decided it would be good to give it a try again especially as Fedora releases tend to move a little quicker than Ubuntu LTS and are more aggressive at picking up new versions.

The main hardware differences from the XPS13 are double the RAM at 32GB and double the storage with a 1TB SSD. The screen is the same resolution but slightly larger, and isn’t a touch screen (not something I make a lot of use of anyway). It also comes with the trademark Lenovo TrackPoint as well as a trackpad. The CPU is still a 4 core (with Hyper-Threading) but the base clock speed is better (Dell, Lenovo)

The only niggle I’ve found so far is that the USB-C port layout doesn’t work as well on my desk as the XPS13’s. The XPS13 has USB-C ports on both sides of the case, whereas the X1 Carbon only has 2 on the left-hand edge. But it does have a full-sized HDMI port and 2 USB 3.1 A ports, which means I don’t need the little USB-C to USB-A hub I’d been using. This also makes plugging in an SD card reader a little easier, as the Lenovo doesn’t have one built in.

The keyboard feels a little nicer too (I’ve just got to get used to the Ctrl and Fn keys being swapped, though there is a BIOS setting to flip them).

Determining which Linux Distro you are on to install NodeJS

I’ve recently been working on an install script for a project. As part of the install I need to check if there is a suitable version of NodeJS installed and if not install one.

The problem is that there are 2 main ways in which NodeJS can be installed using the default package management systems of the various Linux distributions, so I needed a way to work out which distro the script was running on.

The first step was to work out if it is actually Linux or if it’s macOS; since I’m using bash as the interpreter for the script there is the OSTYPE environment variable that I can check.

case "$OSTYPE" in
  darwin*) 
    MYOS=darwin
  ;;
  linux*)
    MYOS=$(cat /etc/os-release | grep "^ID=" | cut -d = -f 2 | tr -d '"')
  ;;
  *) 
    # unknown OS
  ;;
esac

Once we are sure we are on Linux we can check the /etc/os-release file and cut out the ID= entry. The tr is there to strip the quotes off (Amazon Linux, I’m looking at you…)
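For example, on a couple of the distros in question the line being extracted looks like this (note Amazon Linux wrapping the value in quotes, hence the tr):

$ grep "^ID=" /etc/os-release    # on Ubuntu
ID=ubuntu
$ grep "^ID=" /etc/os-release    # on Amazon Linux
ID="amzn"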

MYOS then contains one of the following:

  • debian
  • ubuntu
  • raspbian
  • fedora
  • rhel
  • centos
  • amzn

And using this we can then decide how to install NodeJS

if [[ "$MYOS" == "debian" ]] || [[ "$MYOS" == "ubuntu" ]] || [[ "$MYOS" == "raspbian" ]]; then
      curl -sSL "https://deb.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo apt-get install -y nodejs build-essential
elif [[ "$MYOS" == "fedora" ]]; then
      sudo dnf module reset -y nodejs
      sudo dnf module install -y "nodejs:$MIN_NODEJS/default"
      sudo dnf group install -y "C Development Tools and Libraries"
elif [[ "$MYOS" == "rhel" ]] || [[ "$MYOS" == "centos" || "$MYOS" == "amzn" ]]; then
      curl -fsSL "https://rpm.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo yum install -y nodejs
      sudo yum group install -y "Development Tools"
elif [[ "$MYOS" == "darwin" ]]; then
      echo "**************************************************************"
      echo "* On OSx you will need to manually install NodeJS            *"
      echo "* Please install the latest LTS release from:                *"
      echo "* https://nodejs.org/en/download/                            *"
      echo "**************************************************************"
      exit 1
fi

Now that’s out of the way, time to look at how to nicely set up a systemd service…

Adding a TPM to My Offline Certificate Authority

Back at the start of last year, I built an offline Certificate Authority based around a Pi Zero and an RTC module.

The idea was to run the CA on a Pi that can only be accessed when it’s plugged into another machine via a USB cable. This means the CA cert and private key are normally offline and only potentially accessible to an attacker while it’s plugged in.

For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.

TPM

A TPM, or Trusted Platform Module, is a dedicated processor paired with some dedicated NVRAM. The processor is capable of some pretty basic crypto functions and provides a good random number generator, while the NVRAM is used to store private keys.

TPM & RTC on a Raspberry Pi Zero

TPMs also have a feature called PCRs (Platform Configuration Registers) which can be used to validate the hardware and software stack used to boot the machine. This means you can use them to detect if the system has been tampered with at any point, though it does require integration into the system’s bootloader.

You can set access policies for keys protected by the TPM to only allow access if the PCRs match a known pattern; some disk encryption systems, like LUKS on Linux and BitLocker on Windows1, use this to automatically unlock the encrypted drive.

You can get a TPM for the Raspberry Pi from a group called LetsTrust (available online here).

It mounts on to the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.

dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670
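With the overlay in place (and assuming the tpm2-tools package is installed), a quick sanity check that the chip is being picked up is to ask it for some random bytes; the exact output formatting varies a little between tpm2-tools versions:

$ tpm2_getrandom 8 | xxd    # should spit out 8 bytes of TPM-sourced randomness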

Since the Raspberry Pi bootloader is not TPM aware, the PCRs are not initialised in this situation, so we can’t use them to automatically unlock an encrypted volume.

Using the TPM with the CA

Even without the PCRs, the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes a copy of the private key useless if anybody does manage to remotely log into the device and take one.

Of course, since it just pushes onto the Pi header, anybody who manages to get physical access can just take the TPM and SD card together, but as with all security mechanisms, once an attacker has physical access all bets are usually off.

There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, and it comes with a utility called tpm2tss-genkey that can be used to create new keys; an existing key can also be imported.

Generating New Keys

You can generate a new CA key and self-signed certificate with the following commands:

$ tpm2tss-genkey -a rsa -s 2048 ca.tss
$ openssl req -new -x509 -engine tpm2tss -key ca.tss  -keyform engine -out ca.crt

This certificate can now be used to sign CSRs

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Importing Keys

For an existing ca.key private key file.

$ tpm2_createprimary --hierarchy=o --hash-algorithm=sha256 --key-algorithm=rsa --key-context=primary_owner_key.ctx
$ HANDLE=$(tpm2_evictcontrol --hierarchy=o --object-context=primary_owner_key.ctx | cut -d ' ' -f 2 | head -n 1)
$ tpm2_import -C primary_owner_key.ctx -G rsa -i ca.key -u ca-pub.tpm -r ca.tpm
$ tpm2tss-genkey --public ca-pub.tpm --private ca.tpm --parent $HANDLE --password secret ca.tss

And we can then sign new CSRs the same way as with the generated key

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Once the keys have been imported, it’s important to remember to clean up the original key file (ca.key) so an attacker can’t just use that instead of the one protected by the TPM. Any attacker now needs both the password for the key and the TPM device that was used to cloak it.

Web Interface

At the moment the node-openssl-cert node that I’m using to drive the web interface to the CA doesn’t look to support passing in engine arguments, so I’m having to drive it all manually on the command line. I’ll be looking at a way to add support to the library and will try to generate a pull request when I get something working.


1 Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS13 also has one (it was an optional extra on the first version and not included in the Sputnik edition)

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 IdeaPad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn’t go too well, and the screen size and resolution were way too low to do anything serious on it. It ended up with Windows 7 back on it, permanently on power because the battery is toast, and mainly being used to sync my Garmin watch to Strava.

Anyway, it was well past time for something a little more seriously useful. I’d had my eye on the Dell XPS13 for a while, and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops installed with Linux for a while under a program called Project Sputnik. This project ensures that all the built in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of 1Q 2016.

As well as comparing it to the U410, I’ve also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a Lenovo W530 ThinkPad, a little long in the tooth). The table below shows some of the key stats:

         U410 IdeaPad      W530 ThinkPad      XPS13
Weight   2kg               2.6kg (+0.75kg)    1.3kg
CPU      i5-3317U          i7-3740QM          i7-6560U
Memory   6GB               16GB               16GB
Disk     1TB               512GB              512GB (SSD)
Screen   14″ 1366×768      15.6″ 1920×1080    13.3″ 3200×1800

The thing I like best is the weight; lugging the W530 around is a killer, so getting to leave it on the desk at the office a little more will be great.

Dell XPS13

As for actually running the machine, it’s been very smooth so far. It comes with Ubuntu 14.04 with a couple of Dell specific tweaks/backports from upstream. I’m normally a Fedora user, so Ubuntu as my main OS may take a bit of getting used to, and 14.04 is a little old at this point; 16.04 is due to ship soon so I look forward to updating to see how it fares. I’ve swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I may still swap the whole thing for Fedora 24 when it ships to pick up something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the driver support for this on Linux is reported to still be a little brittle. While I wait for it to settle down I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. It’s a DisplayLink device, but things seem to be a lot better than the last time I tried to get a DisplayLink device to work; there is a new driver direct from the DisplayLink guys that seems to just work.

Flic.io Linux library

As I mentioned, I recently got my hands on a set of 4 flic.io buttons. I pretty much immediately paired one of them with my phone and started playing. It soon became obvious that while fun, the use cases for a button paired with a phone were limited to a single user environment and not what I had in mind.

What was needed was a way to hook the flic buttons up to something like a Raspberry Pi and Node-RED. While I was waiting for the buttons to arrive I was poking round the messages posted to the Indiegogo campaign, and one of the guys from Shortcut Labs mentioned that such a library was in the works. I reached out to their developer contact point asking about getting access to the library to build a Node-RED node around, saying I was happy to help test any code they had. Just before Christmas I got hold of an early beta release to have a play with.

From that I was able to spin up a npm module and a Node-RED node.

The Node-RED node will currently listen for any buttons that are paired with the computer and publish a message saying whether it was a single, double or long click.

Flic.io Node-RED node

I said I would sit on these nodes until the library shipped, but it appeared on GitHub yesterday, hence this post. The build includes binaries for Raspberry Pi, i386 and x86_64 and needs the very latest BlueZ packages (5.36+).

Both my nodes need a little bit of cleaning up and a decent README writing; once that is done I’ll push them to npm.

UPDATE:
Both nodes are now on npmjs.org:
https://www.npmjs.com/package/node-flic-buttons
https://www.npmjs.com/package/node-red-contrib-flic-buttons

Securing Node-RED

Node-RED added some new authentication/authorisation code in the 0.10 release that allows for a pluggable scheme. In this post I’m going to talk about how to use this to make Node-RED use an LDAP server to look up users.

HTTPS

First of all, to do this properly we will need to enable HTTPS to ensure the communication channel between the browser and Node-RED is properly protected. This is done by adding an https value to the settings.js file like this:

...
},
https: {
  key: fs.readFileSync('privkey.pem'),
  cert: fs.readFileSync('cert.pem')
},
...

You also need to un-comment the var fs = require('fs'); line at the top of settings.js.

You can generate the privkey.pem and cert.pem with the following commands in your node-red directory:

pi@raspberrypi ~/node-red $ openssl genrsa -out privkey.pem
Generating RSA private key, 1024 bit long modulus
.............................++++++
......................++++++
e is 65537 (0x10001)
pi@raspberrypi ~/node-red $ openssl req -new -x509 -key privkey.pem -out cert.pem -days 1095
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:Hampshire
Locality Name (eg, city) []:Eastleigh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Node-RED
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:raspberrypi.local
Email Address []:

The important bit is the Common Name value; this needs to match either the name or IP address that you will use to access your Node-RED console. In my case I have avahi enabled so I can access my Pi using its host name raspberrypi with .local as the domain, but you may be more used to using an IP address like 192.168.1.12.

Since this is a self signed certificate your browser will reject it the first time you try to connect with a warning like this:

Chrome warning about unsafe cert.

This is because your certificate is not signed by one of the trusted certificate authorities. You can get past this error by clicking on Advanced, then Proceed to raspberrypi.local (unsafe). With Chrome this error will be shown every time you access the page; you can avoid this by copying the cert.pem file to your client machine and importing it into Chrome:

  1. Open Chrome settings page chrome://settings
  2. Scroll to the bottom of page and click on the “+Show advanced settings” link
  3. Scroll to the HTTPS/SSL and click on “Manage certificates…”
  4. Select the Servers tab and select import
  5. Select the cert.pem you copied from your Raspberry Pi

Usernames and Passwords

In previous Node-RED releases you could set a single username and password for the admin interface, for any static content, or one that covered both. This was done by adding an object to the settings.js file containing the user name and password. This was useful but could be a little limited. Since the 0.10 release there is now a pluggable authentication interface that also includes support for things like read-only access to the admin interface. Details of these updates can be found here.

To implement an authentication plugin you need to create a NodeJS module based on this skeleton:

var when = require("when");

module.exports = {
   type: "credentials",
   users: function(username){
      //returns a promise that checks the authorisation for a given user
      return when.promise(function(resolve) {
         if (username == 'foo') {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   authenticate: function(username, password) {
      //returns a promise that completes when the user has been authenticated
      return when.promise(function(resolve) {
         if (username == 'foo' && password == 'bar' ) {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   default: function() {
      // Resolve with the user object for the default user.
      // If no default user exists, resolve with null.
      return when.promise(function(resolve) {
         resolve(null);
      });
   }
};

This comprises 3 functions: one to authenticate a user against the backend, one to check the level of authorisation for a given user (used by Node-RED’s built-in OAuth mechanism once a user has been authenticated), and finally default, which matches unauthenticated users.

For my LDAP example I’m not going to implement separate read-only/write levels of authorisation, to keep things a little easier.

The source for the module can be found here or on npmjs here. The easiest way to install it is with:

npm install -g node-red-contrib-ldap-auth

Then edit the Node-RED settings.js to include the following:

adminAuth: require('node-red-contrib-ldap-auth').setup({
    uri:'ldap://url.to.server', 
    base: 'ou=group,o=company.com', 
    filterTemplate: 'mail={{username}}'
}),

The filterTemplate is a mustache template used to build the LDAP search for the username; in this case the username is a mail address, so a login of joe@company.com ends up searching with a filter of mail=joe@company.com.

Once it’s all up and running Node-RED should present you with something that looks like this:

Node-RED asking for credentials

Emergency FTTC Router

On Monday I moved to a new broadband provider (A&A). The BT Openreach guy turned up and swapped over the face plate on my master socket, dropped off the FTTC modem, then went down to the green box in the street and flipped my connection over. It all would have been very uneventful except for the small problem that the new router I needed to link my kit up to the FTTC modem had not arrived.

This is because BT messed up the address for where they think my line is installed a few years ago and I’ve not been able to get them to fix it. A&A quickly sent me out a replacement router with next day delivery, but it still meant effectively 2 days without any access at home.

The router talks to the FTTC modem using a protocol called PPPoE over normal ethernet. There is a Linux package called rp-pppoe which provides the required support, so to quickly test that the install was working properly I installed this on my laptop and plugged it directly into the FTTC modem. Things looked really good, but it did mean I was only able to get one device online and I was tied to one end of the sofa by the ethernet cable.

PPPoE is configured the same way PPP used to be for dial-up modems; you just need to create an /etc/ppp/pppoe.conf file that looks a bit like this:

ETH=eth0
USER=xxxxxxx
DEMAND=no
DNSTYPE=SERVER
PEERDNS=yes
DEFAULTROUTE=yes
PING="."
CONNECT_POLL=2
CF_BASE=`basename $CONFIG`
PIDFILE="/var/run/$CF_BASE-adsl.pid"
LCP_INTERVAL=20
LCP_FAILURE=3
FIREWALL=NONE
CLAMPMSS=1412
SYNCHRONOUS=no
ACNAME=
SERVICENAME=
CONNECT_TIMEOUT=30
PPPOE_TIMEOUT=80

And include your username and password in /etc/ppp/chap-secrets. Once that’s set up you just need to run pppoe-start as root.
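The chap-secrets entry is the standard four-field pppd format (client, server, secret, allowed IPs), so something along these lines with your own details:

# /etc/ppp/chap-secrets
# client       server   secret           IP addresses
"xxxxxxx"      *        "yourpassword"   *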

In order to get back to something like normal I needed something else. I had a Raspberry Pi in my bag along with a USB ethernet adapter, which looked like it should fit the bill.

I installed rp-pppoe and a DHCP server, then plugged one ethernet adapter into the FTTC modem and the other into an ethernet hub. Into the hub I plugged an old WiFi access point and the rest of my usual machines. After configuring the Pi to masquerade IP traffic from the hub I had everything back up and running. The only downside is that speeds are limited to 10Mbps, as that is as quick as the built-in ethernet adapter on the Pi will do.
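The masquerading itself only needs IP forwarding turned on plus a few iptables rules, roughly the following (ppp0 is the PPPoE session and eth1 here stands in for whichever adapter faces the hub, so adjust the interface names to match):

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth1 -o ppp0 -j ACCEPT
$ sudo iptables -A FORWARD -i ppp0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT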

Playing with touchatag reader

Touchatag reader
We got a bunch of Touchatag NFC readers in the office just after Christmas and I said I would have a play with one to see what we could use them for. I had seen one before (in fact I borrowed one from Andy Piper for a little while) but didn’t get much further than trying to read my work ID card.

To get it to work on Linux you need to use PCSC Lite and download the driver from ACS (the guys that actually make the readers that Touchatag use). You can grab the download here. Once you’ve built these (standard “configure; make; sudo make install”) you will need to update one of the PCSC config files; there is a patch linked to from this page: http://idefix.net/~koos/rfid-touchatag.html

PCSC Lite is packaged for Fedora and Ubuntu, but I couldn’t find libnfc in the standard repos for Ubuntu so I ended up building it myself for one of the machines in the office. Again this was a simple “configure; make; sudo make install”. With that done I was able to use the nfc-mfclassic tool from the libnfc samples to read data from a tag.

$ nfc-mfclassic r a test.out
Connected to NFC reader: ACS ACR122U 00 00 / ACR122U103 - PN532 v1.6 (0x07)
Found MIFARE Classic 4k card with UID: 9e2bcaea
Reading out 256 blocks |.....................................|

That gets me a file with all the data stored on the tag (assuming I know the right keys to access all the blocks), but most of the time just having the tag ID is enough to trigger an event. After a bit more poking around I found nfc-eventd, which seemed to fit the bill perfectly.

This allows you to specify commands to be run when a tag is placed on the reader and when it is removed, and it will pass the tag ID to the command. Here is the config file I used:

nfc-eventd {

  # Run in background? Implies debug=false if true
  daemon = false;

  # show debug messages?
  debug = true;
	
  # polling time in seconds
  polling_time = 1;

  # expire time in seconds
  # default = 0 ( no expire )
  expire_time = 0;
	
  device my_touchatag {
    driver = "ACR122";
    name = "ACS ACR 38U-CCID 01 00";
  }

  # which device to use ? note: if this part is commented out, 
  # nfc-eventd will try to pick up device automagically...
  #nfc_device = "my_touchatag";

  # list of events and actions
  module nem_execute {
    # Tag inserted
    event tag_insert {
      # what to do if an action fail?
      # ignore  : continue to next action
      # return  : end action sequence
      # quit    : end program
      on_error = ignore ;
	
      # You can enter several, comma-separated action entries
      # they will be executed in turn
      action = "publish 'wmqtt://nfc@broker:1883/nfc/tagid?qos=0&retain=n&debug=0' $TAG_UID"
    }
	
    # Tag has been removed
    event tag_remove { 
      on_error = ignore;
      action = "(echo -n 'Tag (uid=$TAG_UID) removed at: ' && date) >> /tmp/nfc-eventd.log";
    }
	
    # Too much time card removed
    event expire_time { 
      on_error = ignore;
      action = "/bin/false";
    }
  }

}

Here I have used the publish command from the IBM IA93 C MQTT package to publish a message with the tag ID to the nfc topic. You can do something similar with mosquitto_pub like this:

mosquitto_pub -h broker -i nfc -t nfc -m $TAG_UID
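And a quick way to check the messages are turning up is to subscribe from another machine:

mosquitto_sub -h broker -t nfc -v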

The plan is to use this now to allow the guys in ETS to log into various demos in the lab with their id badges.

Next on the list is to see if I can get the reader to respond to my Galaxy Nexus when it’s in tag mode.