Category Archives: Tech

Updated Pi Zero Gadgets

Following on from my last post I’ve continued to work on improving my instructions for a USB connectable gadget based on a Raspberry Pi Zero.

Firstly, a slight improvement to the dnsmasq config I mentioned last time. This change removes the dnsmasq.leases file, which can cause issues if you plug the Zero into multiple computers. While I had managed to fix the MAC address for the host computer's end of the connection, the host OS still pushes down its hostname and its own unique ID when it requests a DHCP address from dnsmasq, which causes dnsmasq to cycle through its small pool of addresses. Combined with the fact that the Zero's clock is not battery backed (so it only gets set correctly when the Zero can reach the internet), this can cause leases to expire in strange ways. Anyway, there is a pretty simple fix.

Adding leasefile-ro to the dnsmasq config stops it writing lease information to disk and makes it rely on the dhcp-script to keep track of things. To support this I needed to handle a new option in the script: dnsmasq calls it with "init" at startup when it wants to read the current lease state.
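The relevant bits of the dnsmasq config end up looking something like this (the interface name, address range and script path here are assumptions carried over from the previous post's setup, not the exact values):

```ini
# serve DHCP on the USB gadget interface only
interface=usb0
dhcp-range=10.33.0.2,10.33.0.6,255.255.255.248,1h
# don't write a dnsmasq.leases file
leasefile-ro
# call this script on lease events instead
dhcp-script=/root/dhcp-script
```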


# dnsmasq passes the operation, mac and ip as arguments
op=$1
ip=$3

if [[ $op == "init" ]]; then
  exit 0
fi

if [[ $op == "add" ]] || [[ $op == "old" ]]; then
  route add default gw $ip usb0
fi

Now on to getting things working better with Windows machines.

To do this properly we need to move from the g_ether to the g_multi kernel module; this lets the Zero be a USB Mass Storage device, a network device (and a serial device) at the same time. This is useful because it lets me bundle the .inf files that tell Windows which drivers to use on the device itself, so they can be installed just by plugging it in.

The first order of business is to fix the cmdline.txt to load the right module, after making the changes in the last post it looks like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait modules-load=dwc2,g_ether

The g_ether needs replacing with g_multi so it looks like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait modules-load=dwc2,g_multi

Next we need to fix the options passed to the module. These are in the /etc/modprobe.d directory, probably in a file called g_ether.conf. We don't strictly need to, but to make things obvious when we come back to this a lot later on we'll rename it to g_multi.conf. Now we've renamed it we need to add a few more options.

It currently looks like this:

options g_ether host_addr=00:dc:c8:f7:75:15 dev_addr=00:dd:dc:eb:6d:a1

The g_ether needs changing to g_multi and some options adding to point to a disk image.

options g_multi file=/opt/disk.iso cdrom=y ro=y host_addr=00:dc:c8:f7:75:15 dev_addr=00:dd:dc:eb:6d:a1

Now we have the config all sorted we just need to build the disk image. First we need to create a directory called disk-image to use as the root of the file system we will share from the Zero. Then we need to get hold of the two .inf files; they ship with the Linux kernel documentation, but can also be found online (serial port, RNDIS Ethernet).

Once we have the files, place them in a directory called disk-image/drivers. We should also create a README.txt to explain what's going on and put that in the root of disk-image. Alongside that we want to create a file called Autorun.inf; this tells Windows what's on the “cd” when it's inserted and where it should search for the driver definitions.




Full details of what can go in the Autorun.inf file can be found here, but the interesting bits are DriverPath=drivers, which points to the directory that we put the .inf files in earlier, and open=”documentation\index.html”, which should open documentation/index.html when the device is plugged in, explaining how to install the drivers. I also added an icon file so the drive shows up looking like a clock in the file manager.
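Putting those bits together, the Autorun.inf ends up looking something like this (the icon file name and label are my illustration, not the exact file):

```ini
[autorun]
open="documentation\index.html"
icon=clock.ico
label=Zero Clock

[DeviceInstall]
DriverPath=drivers
```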

That should be the bare minimum that needs to be on the image, but I ran into an issue with the g_multi module complaining the disk image was too small; to get round this I just stuck a 4mb image in the directory as well. To build the iso image run the following command:

mkisofs -o disk.iso -r -J -V "Zero Clock" disk-image

This will output a file called disk.iso that you should copy to /opt/disk.iso on the Zero (I built the image on my laptop as it’s easier to edit files and mkisofs is not installed in the default raspbian image).
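The 4mb padding image mentioned above can be created with dd; something like this (the file name is arbitrary):

```shell
# create a 4MB file of zeros inside the image root
mkdir -p disk-image
dd if=/dev/zero of=disk-image/padding.img bs=1M count=4
```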

This is working well on Linux and Windows, but I'm still getting errors on OS X to do with the file system. It's not helped by the fact I don't have a Mac to test on, so I end up having to find friends that will let me stick a random bit of hardware into the side of their MacBook.

Once I've got the OS X bits sorted out I'll put together a script to build images for anything you want.

So now we have a Pi Zero that should look like a CD-ROM drive and a network adapter when plugged into pretty much any machine going. It brings the driver information with it for Windows, sets up a network connection with a static IP address and an Avahi/Bonjour/mDNS address to access it. I'm planning on using this to set up my Linear Clock on the local WiFi, but there are all manner of interesting things that could be done with a device like this, e.g. an offline Certificate Authority, a 2FA token, a hardware VPN solution or a web controllable display device.


Auto Launch Webpages full screen on Android

While waiting for Amazon to get round to reviewing my Node-RED Alexa Smart Home Skill I’ve needed something to hack on.

I've been keeping an eye on what folk have been doing with Node-RED Dashboard and a common use case keeps coming up: running the UI on an Android tablet mounted on a wall as a way to keep track of and control things around the house. This got me thinking about how to set something like this up.


For the best results you really want to run this totally full screen. There are ways to get this to happen with Chrome, but it's a bit convoluted. Sure, you can add it as a shortcut on the home screen, but I thought there had to be an easier/better way.

So I started to have a bit of a play and came up with a new app. It's basically just a full screen Activity with a WebView, plus a few extra features.

  • Set URL – Pick a URL to load when the phone boots, or the app is launched.
  • Launch on boot – The app can be configured to start up as soon as the phone finishes booting.
  • Take Screen Lock – This prevents the screen from powering off or locking so the page is always visible.

You can change the URL by swiping from the left hand edge of the screen; this will cause the action bar to appear, from which you can select “Settings”.

Set URL to Load

The app is in the Google Play store here, the code is on Github here

Google Home

Having got hold of an Amazon Echo Dot just over a week ago, I was eager to get my hands on a Google Home to compare. Luckily I had a colleague in the US on release day who very kindly brought one back for me.

Google Home

It looks like the developer program will not be opening up until December, which is just as well, as I need to get my Alexa project finished first.

My initial impression from a couple of hours of playing is that it's very similar to the Echo, but knows a lot more about me out of the box (due to Google already knowing all my secrets). I really do like the Chromecast integration for things like YouTube. I need to try the Echo with my FireTV stick to see if it can do anything similar. It's still early days for the Google Home, and it has only been released in the US, so it wouldn't be totally fair to compare them too closely just yet.

I'll keep this brief for now until I can get into the developer tools. It's going to be a fun time working out just what I can get these two devices to do.

Alexa Home Skill for Node-RED

Following on from my last post this time I’m looking at how to implement Alexa Home Skills for use with Node-RED.

Home Skills provide ON/OFF, temperature and percentage control for devices, which should map to most home automation tasks.

To implement a Home Skill there are several parts that need to be created.

Skill Endpoint

Unlike normal skills, which can be implemented as either HTTP or Lambda endpoints, Home Skills can only be implemented as a Lambda function. The Lambda function can be written in one of three languages: JavaScript, Python or Java. I've chosen to implement mine in JavaScript.

For Home Skills the request is passed in a JSON object and can be one of three types of message:

  • Discovery
  • Control
  • System


Discovery

These messages are triggered when you say “Alexa, discover devices”. The reply to this message is where the skill gets the chance to tell the Echo what devices are available to control and what sort of actions they support. Each device entry includes its name and a description to be shown in the Alexa phone/tablet application.
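A discovery reply might look something like this (the field names come from the v2 Smart Home API of the time; the device values are made up):

```json
{
  "header": {
    "namespace": "Alexa.ConnectedHome.Discovery",
    "name": "DiscoverAppliancesResponse",
    "payloadVersion": "2"
  },
  "payload": {
    "discoveredAppliances": [
      {
        "applianceId": "bedroom-light",
        "manufacturerName": "Node-RED",
        "modelName": "virtual-device",
        "version": "1",
        "friendlyName": "Bedroom Light",
        "friendlyDescription": "A Node-RED virtual device",
        "isReachable": true,
        "actions": ["turnOn", "turnOff", "setPercentage"]
      }
    ]
  }
}
```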

The full list of supported actions:

  • setTargetTemperature
  • incrementTargetTemperature
  • decrementTargetTemperature
  • setPercentage
  • incrementPercentage
  • decrementPercentage
  • turnOff
  • turnOn


Control

These are the actual control messages, triggered by something like “Alexa, set the bedroom lights to 20%”. Each contains one of the actions listed above and the on/off state or value for the change.


System

This is the Echo system checking that the skill is healthy.

Linking Accounts

In order for the skill to know whose Echo is connecting, we have to arrange a way to link an Echo to an account in the skill. To do this we have to implement an OAuth 2.0 system. There is a nice tutorial on using Passport to provide OAuth 2.0 services here; I used this to add the required HTTP endpoints.

Since there is a need to set up OAuth and to create accounts in order to authorise the OAuth requests, it makes sense to only do this once and run it as a shared service for everybody (I've just got to work out where to host it and how to pay for it).

A Link to the Device

For this skill the device is actually Node-RED, which is probably going to be running on people's home networks. This means something that can connect out to the skill is best, to allow for traversing NAT routers. This sounds like a good use case for MQTT (come on, you knew it was coming). Rather than just use the built-in MQTT nodes, we have a custom set of nodes that make use of some of the earlier sections.

Firstly, a config node that uses the same authentication details as the account linking system to create an OAuth token, which is used to publish device details to the database and to authenticate with the MQTT broker.

Secondly, an input node that pairs with the config node. In the input node settings a device is defined, and the actions it can handle are listed.

Hooking it all together

At this point the end to end flow looks something like this:

Alexa -> Lambda -> HTTP to Web app -> MQTT to broker -> MQTT to Node-RED

For now I've left the HTTP app in the middle, but I'm looking at adding direct database access to the Lambda function so it can publish control messages directly via MQTT.

Enough talk, how do I get hold of it!

I'm beta testing it with a small group at the moment, but I'll be opening it up to everybody in a few days. In the meantime the code is all on GitHub: the web side can be found here, the Lambda function is here and the Node-RED node is here. I'll put it on npm as soon as the skill is public.

Alexa Skills with Node-RED

I had an Amazon Echo Dot delivered on release day, and I was looking forward to having a voice powered butler around the flat.

Amazon advertise WeMo support, but unfortunately they only support WeMo sockets, and I have a bunch of WeMo bulbs that I'd love to get working.

Alexa supports 2 sorts of skills:

  • Normal Skills
  • Home Skills

Normal Skills are triggered by a key word prefixed by either “Tell” or “Ask”. These are meant for information retrieval type services, e.g. you can say “Alexa, ask Network Rail what time the next train to London leaves Southampton Central”, which would retrieve train timetable information. Implementing these sorts of skills can be as easy as writing a simple REST HTTP endpoint that can handle the JSON format used by Alexa. You can also use AWS Lambda as an endpoint.

Home Skills are a little trickier, these only support Lambda as the endpoint. This is not so bad as you can write Lambda functions in a bunch of languages like Java, Python and JavaScript. As well as the Lambda function you also need to implement a website to link some sort of account for each user to their Echo device and as part of this you need to implement OAuth2.0 authentication and authorisation which can be a lot more challenging.

First Skill

The first step to create a skill is to define it in the Amazon Developer Console. Pick the Alexa tab and then the Alexa Skill Kit.

Defining a new Alexa Skill

I called my skill Jarvis and set the keyword that will trigger this skill also to Jarvis.

On the next tab you define the interaction model for your skill. This is where the language processing gets defined. You outline the entities (Slots in Alexa parlance) that the skill will interact with and also the sentence structure to match for each Intent. The Intents are defined in a simple JSON format e.g.

  "intents": [
      {
          "intent": "TVInput",
          "slots": [
              { "name": "room", "type": "LIST_OF_ROOMS" },
              { "name": "input", "type": "LIST_OF_INPUTS" }
          ]
      },
      {
          "intent": "Lights",
          "slots": [
              { "name": "room", "type": "LIST_OF_ROOMS" },
              { "name": "command", "type": "LIST_OF_COMMANDS" },
              { "name": "level", "type": "AMAZON.NUMBER" }
          ]
      }
  ]

Slots can be custom types that you define on the same page or can make use of some built in types like AMAZON.NUMBER.

Finally on this page you supply the sample sentences prefixed with the Intent name and with the Slots marked.

TVInput set {room} TV to {input}
TVInput use {input} on the {room} TV
TVInput {input} in the {room}
Lights turn {room} lights {command}
Lights {command} {room} lights {level} percent

The next page is used to set up the endpoint that will actually implement this skill. As a first pass I decided to implement a HTTP endpoint, as it should be relatively simple with Node-RED. There is a contrib package called node-red-contrib-alexa that supplies a collection of nodes to act as HTTP endpoints for Alexa skills. Unfortunately it has a really small bug that stops it working on Bluemix, so I couldn't use it straight away. I've submitted a pull request that should fix things, so hopefully I'll be able to give it a go soon.

The reason I want to run this on Bluemix is that endpoints need to have an SSL certificate, and Bluemix has a wildcard certificate for everything deployed on the domain, which makes things a lot easier. (Update: when configuring the SSL section of the skill in the Amazon Developer Console, pick “My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority” when using Bluemix.)

Alexa Skill Flow

The key parts of the flow are:

  • The HTTP-IN & HTTP-Response nodes to handle the HTTP Post from Alexa
  • The first switch node filters the type of request, sending IntentRequests for processing and rejecting other sorts
  • The second switch picks which of the 2 Intents defined earlier (TV or Lights)

    {
      "type": "IntentRequest",
      "requestId": "amzn1.echo-api.request.4ebea9aa-3730-4c28-9076-99c7c1555d26",
      "timestamp": "2016-10-24T19:25:06Z",
      "locale": "en-GB",
      "intent": {
        "name": "TVInput",
        "slots": {
          "input": { "name": "input", "value": "chromecast" },
          "room": { "name": "room", "value": "bedroom" }
        }
      }
    }
  • The Lights Function node parses out the relevant slots to get the command and device to control
  • The commands are output to an MQTT out node which publishes to a shared password-secured broker, where they are picked up by a second instance of Node-RED running on my home network and sent to the correct WeMo node to carry out the action
  • The final function node formats a response for Alexa to say (“OK”) when the Intent has been carried out

    var rep = {};
    rep.version  = '1.0';
    rep.sessionAttributes = {};
    rep.response = {
        outputSpeech: {
            type: "PlainText",
            text: "OK"
        },
        shouldEndSession: true
    };
    msg.payload = rep;
    msg.headers = {
        "Content-Type": "application/json;charset=UTF-8"
    };
    return msg;

It could do with some error handling and the TV Intent needs a similar MQTT output adding.

This works reasonably well, but having to say “Alexa, ask Jarvis to turn the bedroom light on” is a little long winded compared to just “Alexa, turn on the bedroom light”. In order to use the shorter form you need to write a Home Skill.

I’ve started work on a Home Skill service and a matching Node-RED Node to go with it. When I’ve got it working I’ll post a follow up with details.

Provisioning WiFi IoT devices


“How do you get the local WiFi details into a consumer device that doesn’t have a keyboard or screen?”

It's been an ongoing problem since we started to put WiFi in everything. The current solution seems to be to have the device behave like its own access point first, and then connect a phone or laptop to this network in order to enter the details of the real WiFi network. While this works, it's a clunky solution that relies on a user being capable of disconnecting from their current network and connecting to the new device-only network. It also requires the access point mode to be shut down in order to try the supplied WiFi credentials, which causes the phone/laptop to disconnect and normally reconnect to its usual network. This means that if the user makes a typo they have to wait for the device to realise the credentials are wrong and, at best, go back into access point mode, or else manually reset the device to have it start over. This really isn't a good solution.

I've been trying to come up with a better solution, specifically for my linear clock project. You may ask why a wall clock needs to be connected to the internet, but since it's Raspberry Pi powered, with no battery-backed real time clock, it needs a way to set the time in the case of power outages. I also want a way to allow users to change the colours for the hours, minutes and seconds, and to adjust brightness levels.

In order to add WiFi to the Raspberry Pi Zero I'm using the RedBear IoT pHat, which as well as WiFi includes Bluetooth 4.0 support. This opens up a new avenue for configuration. The idea is to have the device start as an Eddystone beacon broadcasting a URL. The URL hosts a page that uses Web Bluetooth to connect to the device and allows the WiFi SSID and password to be entered. Assuming the credentials are correct, the device's IP address can be pushed to the page, allowing a link to be presented to a page served by the device itself to configure the more day to day options.

RedBear IoT pHat

We had a hackday at the office recently and I had a go at implementing all of this.

Firstly the device comes up as an Eddystone Beacon that broadcasts the URL to the configuration page:


When the page loads it shows a button to search for devices with Bluetooth. This is because the Web Bluetooth spec currently requires a user interaction to start a BLE session.


Chrome then displays a list of matching devices to connect to.


When the user selects a given device the form appears to allow the SSID and password to be entered and submitted.


Once the details are submitted the page will wait until the device has bound to the WiFi network, it then displays a link directly to the device so any further configuration can be carried out.

The code for this is all up on GitHub here if anybody wants to play. The web front end needs a little work and I’m planning to add WiFi AP scanning and pre-population soon.

Now we just need Apple to get their finger out and implement Web Bluetooth.

Tinkerforge Node-RED nodes

For a recent project I’ve been working on a collection of different Node-RED nodes. Part of this set is a group of nodes to interact with Tinkerforge Bricks and Bricklets.

Tinkerforge is a platform of stackable devices and sensors/actuators that can be connected to via USB or can be attached directly to the network via Ethernet or Wifi.

Tinkerforge Stack

Collections of sensors/actuators, known as bricklets, are grouped together around a Master Brick. Each Master Brick can host up to 4 bricklets, but multiple Master Bricks can be stacked to add more capacity. The Master Brick also has a Micro USB socket which can be used to connect to a host machine running a daemon called brickd, which handles discovering the attached devices and exposes access to them via an API. There are also Ethernet (with and without PoE) and WiFi bricks which allow you to link the stack directly to the network.

As well as the Master Brick there is a RED Brick, which contains an ARM Cortex-A8 processor, SD card, USB socket and micro HDMI adapter. This runs a small Linux distribution (based on Debian) and hosts a copy of brickd.

There are bindings for the API in a number of languages including:

  • C/C++
  • C#
  • Delphi
  • Java
  • Javascript
  • PHP
  • Perl
  • Python
  • Ruby

For the Node-RED nodes I took the Javascript bindings and wrapped them as Node-RED nodes. So far the following bricklets are supported:

Adding others shouldn’t be hard, but these were the ones I had access to in order to test.

All the nodes share a config node which points at the instance of brickd that the bricklets are attached to; it then provides a filtered list of the available bricklets for each node.


The code for the nodes can be found here and the package is available here.

Physical Web FatBeacons

I've been playing with Eddystone beacons (an open BLE beacon technology backed by Google) for a while. Combined with things like Web Bluetooth they make for a really nice way to interact with physical objects from mobile devices.

Physical Web Logo

Recently there has been a suggested extension to the Physical Web specification for something called a FatBeacon. Rather than advertising a URL to load a web page from, a FatBeacon actually hosts the web page on the device and serves it up from a BLE characteristic. This means it becomes possible to interact with a device when there is no network backhaul to load the page over.

This may sound a little strange in the western world where cellular coverage is ubiquitous, but there are places where coverage is still spotty, and it also allows for totally self contained solutions. For simple things like my Physical Web Light Switch this would work well.

A request went up on the Physical Web GitHub page asking if anybody wanted to have a go at implementing a software stack to allow people to have a play with the concept so I threw my hat into the ring to have a go.

My first thought was to make use of Sandeep Mistry‘s bleno library and I ran up a simple characteristic to serve up the HTML.

var bleno = require('bleno');
var util = require('util');

var html = "<html>" +
			"<head>" +
			 "<title>Fat Beacon Demo</title>" +
			 "<meta name='description' content='FatBeacon Demo'/>" +
			"</head>" +
			"<body>" +
			 "<h1>HelloWorld</h1>" +
			 "This is a test" +
			"</body>" +
			"</html>";

function HTMLCharacteristic() {
	bleno.Characteristic.call(this, {
		uuid: 'd1a517f0249946ca9ccc809bc1c966fa',
		properties: ['read'],
		descriptors: [
			new bleno.Descriptor({
				uuid: '2901',
				value: 'HTML'
			})
		]
	});
}

util.inherits(HTMLCharacteristic, bleno.Characteristic);

HTMLCharacteristic.prototype.onReadRequest = function(offset, callback) {
	console.log("read request");
	var buf = new Buffer(html, 'utf8');
	if (offset < buf.length) {
		// serve the page in chunks as the client reads with increasing offsets
		var slice = buf.slice(offset);
		callback(this.RESULT_SUCCESS, slice);
	} else {
		callback(this.RESULT_INVALID_OFFSET, null);
	}
};

module.exports = HTMLCharacteristic;

I wrapped this in a simple service but I had missed the need to put the right bits in the advertisement packet, luckily Jacob Rosenthal spotted what needed doing and ported my code over to Don Coleman‘s node-eddystone-beacon project.

After a couple of tweaks I now have it all working (on Linux; IIRC there are some limitations on OS X that mean you can't connect to services with bleno). My code is available on GitHub here and I will be submitting a pull request to Don to get it included in the node-eddystone-beacon project soon.

The beacon shows up like this in the Physical Web android app

FatBeacon in Physical Web App

Which when loaded looks like this:

FatBeacon page

There are some issues around security to be looked at, but it's still an interesting area.

Lightbar clock

I've been meaning to get on with this project for ages. It was originally going to be based on the Espruino Pico but I never quite got round to sorting it all out.

I even prototyped it as a little webpage

In the end I've used a Raspberry Pi Zero and Node-RED to drive it all. It uses a 60 segment RGB LED strip I picked up off eBay. The LED strip is driven by the node-red-node-pi-neopixel node from a function node.


// Node-RED function node: build one message per LED for the neopixel node
var date = new Date();
var red = [];
var green = [];
var blue = [];
var b = 125;

// start with all 60 LEDs off
for (var i=0; i<60; i++) {
    red[i] = 0;
    green[i] = 0;
    blue[i] = 0;
}

var array = [[]];

// light a block of 5 LEDs in red for the hour
var hours = date.getHours();
var hs = ((hours + 12) % 12) * 5;
for (var i=0; i<5; i++) {
    red[hs + i] = b;
}

// one green LED for the minute
var m = date.getMinutes();
green[m] = b;

// one blue LED for the second
var s = date.getSeconds();
blue[s] = b;

// payload format is "led,r,g,b"
for (var i=0; i<60; i++) {
    array[0].push({
        payload: i + "," +
            red[i] + "," +
            green[i] + "," +
            blue[i]
    });
}
return array;

It just needs a nice mount sorting to hang it on the wall.

Further work may include adding a light sensor to vary the brightness depending on the ambient light levels, and a nice way to configure a WiFi dongle so it can join the network for NTP time updates and so a user could set the timezone.

OwnTracks Encrypted Location Node-RED Node

OwnTracks Logo

At the weekend I ran the London Marathon, while I’m glad I did it, I have no desire to do it again (ask me again in 6 months).

My folks came down to watch me, and to help them work out where on the course I was, I ran with a phone strapped to my arm running OwnTracks. This was pointing at the semi public broker running on my machine on the end of my broadband. In order to keep some degree of privacy I had enabled the symmetric encryption facility.

As well as my family using the data, I had run up a very simple logging script with the mosquitto_sub command to record my progress (I was also tracking it with my Garmin watch, but wanted to see how OwnTracks did in comparison).

# mosquitto_sub -t 'owntracks/Ben/#' > track.log

Before I started I hadn't looked at what sort of encryption was being used, but a little bit of digging in the source and a pointer in the right direction from Jan-Piet Mens got me to the libsodium library. I found a Node.js implementation on npm and hacked up a quick little script to decode the messages.

var readline = require('readline').createInterface({
  input: require('fs').createReadStream('track.json')
});

var sodium = require('libsodium-wrappers');

var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

// the secret key is the passphrase padded out to 32 bytes
var key = new Buffer(32);
key.fill(0);
key.write('my-secret-passphrase');

readline.on('line', function (line) {
  var msg = JSON.parse(line);
  if (msg._type == 'encrypted') {
    // the first 24 bytes of the payload are the nonce
    var cypherText = new Buffer(msg.data, 'base64');
    var nonce = cypherText.slice(0,24);
    var clearText = sodium.crypto_secretbox_open_easy(cypherText.slice(24), nonce, key, "text");
    console.log(clearText);
  }
});

Now I had the method worked out, it made sense to turn it into a Node-RED node so encrypted location streams can easily be consumed. I also added a little extra functionality: it copies the lat & lon values into a msg.location object so they can easily be consumed by my node-red-node-geofence node and Dave's worldmap node. The original decrypted location is left in the msg.payload field.
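The transform is roughly this (an illustrative sketch, not the node's actual source):

```javascript
// Copy lat/lon from the decrypted OwnTracks payload into msg.location,
// leaving the original payload untouched
function addLocation(msg) {
    if (msg.payload && msg.payload.lat !== undefined && msg.payload.lon !== undefined) {
        msg.location = {
            lat: msg.payload.lat,
            lon: msg.payload.lon
        };
    }
    return msg;
}
```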

Owntracks node

The source for the node can be found on github here and the package is on npm here.

To install, run the following in your ~/.node-red directory:

npm install node-red-contrib-owntracks