Category Archives: Projects

First pass TRÅDFRI MQTT Bridge

I’ve been working on integrating the new IKEA TRÅDFRI Lights into my Home Automation system. I’d really like a native NodeJS system so I can plug it directly into Node-RED, but I’ve not found a working CoAP over DTLS setup just yet.

So in the meantime I’ve been working on a very basic MQTT to CoAP client bridge in Java using the Eclipse Californium library.

It still needs some work, but here is the first pass:

package uk.me.hardill.coap2mqtt;

import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.HashMap;
import java.util.logging.Level;

import org.eclipse.californium.core.CaliforniumLogger;
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapResponse;
import org.eclipse.californium.core.Utils;
import org.eclipse.californium.core.coap.MediaTypeRegistry;
import org.eclipse.californium.core.network.CoapEndpoint;
import org.eclipse.californium.core.network.config.NetworkConfig;
import org.eclipse.californium.scandium.DTLSConnector;
import org.eclipse.californium.scandium.ScandiumLogger;
import org.eclipse.californium.scandium.config.DtlsConnectorConfig;
import org.eclipse.californium.scandium.dtls.pskstore.StaticPskStore;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import org.json.JSONArray;
import org.json.JSONObject;

/**
 * @author hardillb
 *
 */
public class Main {
  
  static {
    CaliforniumLogger.disableLogging();
    ScandiumLogger.disable();
//    ScandiumLogger.initialize();
//    ScandiumLogger.setLevel(Level.FINE);
  }
  
  private DTLSConnector dtlsConnector;
  private MqttClient mqttClient;
  private CoapEndpoint endPoint;
  
  private String ip;
  
  private HashMap<String, Integer> name2id = new HashMap<>();
  
  Main(String psk, String ip) {
    this.ip = ip;
    DtlsConnectorConfig.Builder builder = new DtlsConnectorConfig.Builder(new InetSocketAddress(0));
    builder.setPskStore(new StaticPskStore("", psk.getBytes()));
    dtlsConnector = new DTLSConnector(builder.build());
    endPoint = new CoapEndpoint(dtlsConnector, NetworkConfig.getStandard());
    
    MemoryPersistence persistence = new MemoryPersistence();
    try {
      mqttClient = new MqttClient("tcp://localhost", MqttClient.generateClientId(), persistence);
      mqttClient.connect();
      mqttClient.setCallback(new MqttCallback() {
        
        @Override
        public void messageArrived(String topic, MqttMessage message) throws Exception {
          // handle a control message published to TRÅDFRI/<bulb name>/control/<command>
          System.out.println(topic + " " + message.toString());
          String parts[] = topic.split("/");
          int id = name2id.get(parts[1]);
          System.out.println(id);
          String command = parts[3];
          System.out.println(command);
          try{
            // NOTE: first-pass hack - a canned device description is used as a
            // template for the PUT payload rather than fetching the bulb's
            // current state from the gateway first
            JSONObject json = new JSONObject("{\"9001\":\"Living Room Light\",\"9020\":1491515804,\"9002\":1491158817,\"9003\":65537,\"9054\":0,\"5750\":2,\"3\":{\"0\":\"IKEA of Sweden\",\"1\":\"TRADFRI bulb E27 opal 1000lm\",\"2\":\"\",\"3\":\"1.1.1.0-5.7.2.0\",\"6\":1},\"9019\":1,\"3311\":[{\"5850\":1,\"5851\":10,\"9003\":0}]}");
            JSONObject settings = json.getJSONArray("3311").getJSONObject(0);
            if (command.equals("dim")) {
              settings.put("5851", Integer.parseInt(message.toString()));
            } else if (command.equals("on")) {
              if (message.toString().equals("0")) {
                settings.put("5850", 0);
                settings.put("5851", 0);
              } else {
                settings.put("5850", 1);
                settings.put("5851", 128);
              }
            }
            String payload = json.toString();
            Main.this.set("coaps://" + ip + "//15001/" + id, payload);
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
        
        @Override
        public void deliveryComplete(IMqttDeliveryToken token) {
          // nothing to do on delivery confirmation
        }
        
        @Override
        public void connectionLost(Throwable cause) {
          // should really handle reconnecting here
        }
      });
      mqttClient.subscribe("TRÅDFRI/+/control/+");
    } catch (MqttException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
  }
  
  private void discover() {
    try {
      URI uri = new URI("coaps://" + ip + "//15001");
      CoapClient client = new CoapClient(uri);
      client.setEndpoint(endPoint);
      CoapResponse response = client.get();
      JSONArray array = new JSONArray(response.getResponseText());
      for (int i=0; i<array.length(); i++) {
        String devUri = "coaps://"+ ip + "//15001/" + array.getInt(i);
        this.watch(devUri);
      }
      client.shutdown();
    } catch (URISyntaxException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
  }
  
  private void set(String uriString, String payload) {
    System.out.println("payload\n" + payload);
    try {
      URI uri = new URI(uriString);
      CoapClient client = new CoapClient(uri);
      client.setEndpoint(endPoint);
      CoapResponse response = client.put(payload, MediaTypeRegistry.TEXT_PLAIN);
      if (response.isSuccess()) {
        System.out.println("Yay");
      } else {
        System.out.println("Boo");
      }
      
      client.shutdown();
      
    } catch (URISyntaxException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
  }
  
  private void watch(String uriString) {
    
    try {
      URI uri = new URI(uriString);
      
      CoapClient client = new CoapClient(uri);
      client.setEndpoint(endPoint);
      CoapHandler handler = new CoapHandler() {
        
        @Override
        public void onLoad(CoapResponse response) {
          System.out.println(response.getResponseText());
          JSONObject json = new JSONObject(response.getResponseText());
          if (json.has("3311")){
            MqttMessage message = new MqttMessage();
            int state = json.getJSONArray("3311").getJSONObject(0).getInt("5850");
            message.setPayload(Integer.toString(state).getBytes());
            message.setRetained(true);
            String topic = "TRÅDFRI/" + json.getString("9001") + "/state/on";
            String topic2 = "TRÅDFRI/" + json.getString("9001") + "/state/dim";
            name2id.put(json.getString("9001"), json.getInt("9003"));
            MqttMessage message2 = new MqttMessage();
            int dim = json.getJSONArray("3311").getJSONObject(0).getInt("5851");
            message2.setPayload(Integer.toString(dim).getBytes());
            message2.setRetained(true);
            try {
              mqttClient.publish(topic, message);
              mqttClient.publish(topic2, message2);
            } catch (MqttException e) {
              // TODO Auto-generated catch block
              e.printStackTrace();
            }
          } else {
            System.out.println("not bulb");
          }
        }
        
        @Override
        public void onError() {
          // errors from the observe relationship are ignored for now
          
        }
      };
      client.observe(handler);
    } catch (URISyntaxException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
    
  }

  /**
   * @param args
   */
  public static void main(String[] args) throws InterruptedException {
    String psk = args[0];
    String ip = args[1];
    Main m = new Main(psk, ip);
    
    m.discover();
  }

}
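
To try it out: as main() shows, the program takes the gateway’s pre-shared key and IP address as its two arguments, so with the Californium, Scandium, Paho and org.json jars on the classpath the invocation looks something like this (the classpath here is illustrative):

java -cp coap2mqtt.jar uk.me.hardill.coap2mqtt.Main <psk> <gateway-ip>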

I’ve tagged the code onto the gist as well for now, but I’ll check the whole thing in as a separate project soon.

EDIT: now with its own GitHub repo here

node-red-contrib-alexa-home-skill

It’s finally ready. I’ve been working on a Node-RED node to act on Amazon Alexa Home Skill directives since November last year. The skill was approved some time very early this morning and should now be available in the UK, US and Germany.

I’ll be mailing all the folks that have already signed up some time later today to let them know they can finally start using the skill, but for the rest of you here is a brief introduction (full details in earlier post).

Alexa Home Skills allow you to say the much more natural “Alexa, turn on the kitchen light” rather than “Alexa, ask Jeeves to turn on the kitchen light”, where “Jeeves” is the name of the skill you have to remember. Some of the basic commands are:

  • Turn On/Off
  • Dim/Brighten
  • Set/Get Temperature
  • Lock/Unlock

With this node and service you can wire those commands to nearly anything you can control via Node-RED.

Node-RED - Alexa Smart Home Skill

You can install the node with the following commands:

cd ~/.node-red
npm install node-red-contrib-alexa-home-skill

Or via the Manage Palette option in the Node-RED editor.

If you have already installed this module please make sure you update to the latest version (0.1.13) to get the best support for all the voice commands.

There are detailed instructions on how to set everything up here.

Here is an example flow using the node. This turns a light on and then automatically turns it off after 5 minutes. It uses the switch node to detect whether the request is to turn the light on or off. When following the On branch, a trigger node first sends a payload of true and then, 5 minutes later, sends false to the WeMo node (there is a rough function-node sketch of the same logic below).

On then Auto Off flow
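
If you’d rather do the on-then-auto-off logic in a single function node, a rough sketch looks like this. It assumes the Alexa node reports the directive name in msg.command (e.g. “TurnOnRequest”/“TurnOffRequest”); check the node’s documentation for the exact message shape:

// hypothetical function-node body: on now, off again 5 minutes later
if (msg.command === "TurnOnRequest") {
    node.send({payload: true});
    setTimeout(function() {
        node.send({payload: false});
    }, 5 * 60 * 1000);
} else if (msg.command === "TurnOffRequest") {
    node.send({payload: false});
}
return null;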

This sort of flow would be great for a set of outside lights or maybe an electric heater. I also have some updates to the node-red-nodes-wemo package to make dimming/brightening by specific amounts easier, I’ll try and get them out by the weekend.

Updated Pi Zero Gadgets

Following on from my last post I’ve continued to work on improving my instructions for a USB connectable gadget based on a Raspberry Pi Zero.

Firstly I’ve got a slight improvement to the dnsmasq config I mentioned last time: removing the dnsmasq.leases file, which can cause issues if you plug the Zero into multiple computers. Although I had managed to fix the MAC address for the host computer’s end of the connection, each host OS still pushes down its own hostname and unique client id when it requests a DHCP address, which causes dnsmasq to cycle through its small pool of addresses. Combined with the fact that the Zero’s clock is not battery backed, so it only gets set correctly when the Zero can reach the internet, this can cause leases to expire in strange ways. Anyway, there is a pretty simple fix.

Adding leasefile-ro to the dnsmasq config causes it to not write lease information to disk, but to rely on the dhcp-script to keep track of things. To support this I needed to add handling for a new option to the script, to deal with dnsmasq wanting to read the current lease state at startup.

#!/bin/bash
# dnsmasq dhcp-script: called with "init" at startup (because of leasefile-ro)
# and with "add"/"old"/"del" plus the lease details whenever a lease changes
op="${1:-op}"
mac="${2:-mac}"
ip="${3:-ip}"
host="${4}"

# no lease file on disk, so there is no state to report back at startup
if [[ $op == "init" ]]; then
  exit 0
fi

# point the default route at whichever host just took a lease
if [[ $op == "add" ]] || [[ $op == "old" ]]; then
  route add default gw $ip usb0
fi
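
For reference, the dnsmasq side of this ends up as two extra lines in /etc/dnsmasq.d/local.conf (the dhcp-script line being the one from the last post):

leasefile-ro
dhcp-script=/root/route.sh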

Now on to getting things working better with Windows machines.

To do this properly we need to move from the g_ether to the g_multi kernel module; this lets the Zero be a USB mass storage device, a network device (and a serial device) at the same time. This is useful because it lets me bundle .inf files that tell Windows which drivers to use on the device itself, so they can be installed just by plugging it in.

The first order of business is to fix cmdline.txt to load the right module. After making the changes in the last post it looks like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait modules-load=dwc2,g_ether

The g_ether needs replacing with g_multi so it looks like this:

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait modules-load=dwc2,g_multi

Next we need to fix the options passed to the module. These are in the /etc/modprobe.d directory, probably in a file called g_ether.conf. We don’t strictly need to, but to make things obvious when we come back to this much later we’ll rename it to g_multi.conf. Now we’ve renamed it we need to add a few more options.

It currently looks like this:

options g_ether host_addr=00:dc:c8:f7:75:15 dev_addr=00:dd:dc:eb:6d:a1

The g_ether needs changing to g_multi, and some options adding to point to a disk image.

options g_multi file=/opt/disk.iso cdrom=y ro=y host_addr=00:dc:c8:f7:75:15 dev_addr=00:dd:dc:eb:6d:a1

Now we have the config all sorted we just need to build the disk image. First we need to create a directory called disk-image to use as the root of the file system we will share from the Zero. Then we need to get hold of the 2 .inf files; they ship with the Linux kernel docs, but can be found online (serial port, RNDIS Ethernet).

Once we have the files, place them in a directory called disk-image/drivers. We should also create a README.txt to explain what’s going on and put that in the root of disk-image. Alongside that we want to create a file called Autorun.inf; this tells Windows what’s on the “CD” when it’s inserted and where it should search for the driver definitions.

[AutoRun]
open="documentation\index.html"
icon=clock.ico,0

[DeviceInstall]
DriverPath=drivers

[Content]
MusicFiles=no
PictureFiles=no
VideoFiles=no

Full details of what can go in the Autorun.inf file can be found here, but the interesting bits are DriverPath=drivers, which points to the directory we put the .inf files in earlier, and open="documentation\index.html", which should open documentation/index.html (explaining how to install the drivers) when the device is plugged in. I also added an icon file so the drive shows up looking like a clock in the file manager.

That should be the bare minimum that needs to be on the image, but I ran into an issue with the g_multi module complaining that the disk image was too small; to get round this I just stuck a 4MB file in the directory as well. To build the ISO image run the following command:

mkisofs -o disk.iso -r -J -V "Zero Clock" disk-image

This will output a file called disk.iso that you should copy to /opt/disk.iso on the Zero (I built the image on my laptop as it’s easier to edit files and mkisofs is not installed in the default raspbian image).
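
If the Zero is already reachable over the USB network from the last post, copying it across is just something like:

scp disk.iso pi@10.33.0.1:/tmp/
ssh pi@10.33.0.1 'sudo mv /tmp/disk.iso /opt/disk.iso'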

This is working well on Linux and Windows, but I’m still getting errors on OS X to do with the file system. It’s not helped by the fact I don’t have a Mac to test on, so I end up having to find friends who will let me stick a random bit of hardware into the side of their MacBook.

Once I’ve got the OS X bits sorted out I’ll put together a script to build images for anything you want.

So now we have a Pi Zero that should look like a CD-ROM drive and a network adapter when plugged into pretty much any machine going. It brings the driver information with it for Windows, and sets up a network connection with a static IP address and an Avahi/Bonjour/mDNS address to access it. I’m planning on using this to set up my Linear Clock on the local WiFi, but there are all manner of interesting things that could be done with a device like this, e.g. an offline Certificate Authority, a 2FA token, a hardware VPN solution or a web-controllable display device.


Raspberry Pi Zero Gadgets

I’m still slowly plugging away at my linear clock project. Currently I’m working out how to make it easy to configure it to connect to a WiFi network.

One approach is the Physical Web/Web Bluetooth approach I’ve talked about before. This is a really neat solution, but it only works with Android phones at the moment, as Apple don’t appear to be planning any Web Bluetooth support.

While looking for a more general solution I decided to look at expanding the method I’ve been using to develop the clock. The Raspberry Pi Zero’s USB port can act in both host and device mode. This means you can plug it into another computer and have it show up as a peripheral. There is support for several different modes: Ethernet adapter, mass storage and serial port, plus a few others. The most popular is the Ethernet adapter. You can find some really good instructions on setting all this up here.

This works pretty well for a developer looking to access the Pi Zero to poke around, but it’s still a little bit brittle for a consumer device, as it relies on Bonjour/Avahi to locate the device, which will end up with a randomly assigned 169.254.x.x address. This can be solved by adding some more config options after following the instructions in the blog post I linked to earlier.

  • First we need to fix the MAC address that the Ethernet device presents. This is so that when you plug the Zero into a computer it always recognises it as the same device. To do this we need to add some options to the g_ether module. Create a file in the /etc/modprobe.d directory called g_ether.conf with the following content:
    options g_ether host_addr=02:dd:c8:f7:75:15 dev_addr=02:dd:dc:eb:6d:a1

    This sets the MAC address for the Zero’s end of the connection to 02:dd:dc:eb:6d:a1 and the host computer’s end to 02:dd:c8:f7:75:15.

  • Next we need to give the Zero a fixed IP address. To do this we add the following to the /etc/network/interfaces file:
    auto usb0
    iface usb0 inet static
      address 10.33.0.1
      netmask 255.255.255.0
    

    The 10.0.0.0/8 range is one of the RFC 1918 private address ranges; I picked 10.33.0.0/24 as it’s not likely to clash with the average home network address range.

  • We also need to stop the DHCP client from adding that 169.254.x.x address. This is the bit that took me ages to find, but in the end you just need to add noipv4ll to the end of /etc/dhcpcd.conf.
  • That takes care of the network from the Zero’s point of view, but we still need a way to assign a network address to the host computer. This is done with a DHCP server, and the simplest one to set up for this is dnsmasq. This is where the first tricky bit happens, as dnsmasq is not installed by default in the Raspbian Lite image*. Once installed, add a file called local.conf to /etc/dnsmasq.d/ with the following:
    interface=usb0
    dhcp-range=usb0,10.33.0.2,10.33.0.5,255.255.255.0,1h
    dhcp-option=3
    

    This tells the Zero to serve up an address between 10.33.0.2 and 10.33.0.5, but given we fixed the MAC address of the host end of the network connection it will always end up just handing out 10.33.0.2.

After all that you should have a Pi Zero you can plug into any computer and it should always be available on 10.33.0.1 (as well as raspberrypi.local if the connected computer supports Bonjour/Avahi). This will make writing documentation a lot easier.

I have a couple of extra bits as well, such as a script that sets the default route to the IP address handed out by dnsmasq, so if you have internet sharing/masquerading enabled on the host the Zero can access the internet. (There is also DHCP option 19, which should enable packet forwarding on the DHCP client, but I need to investigate how this actually works and what effect it has on different OSes.)

The script lives in /root/route.sh and looks like this:

#!/bin/bash
op="${1:-op}"
mac="${2:-mac}"
ip="${3:-ip}"
host="${4}"

if [[ $op == "add" ]] || [[ $op == "old" ]]; then
  route add default gw $ip usb0
fi

And to enable it, add the following line to the end of /etc/dnsmasq.d/local.conf:

dhcp-script=/root/route.sh

There is still one niggle: while the driver for this (the RNDIS Ethernet driver) ships with Windows, you still need to manually install it before it will work. There are some .inf files that ship with the Linux kernel docs that can make this a lot easier to do, so my next task is to work out how to use the g_multi mode, which allows the Zero to be both an Ethernet adapter and a mass storage device. This will mean the Zero shows up as a thumb drive as well as the network adapter, and I can then include some documentation and the .inf files on that drive. I have most of it working, but it still needs a little polishing; I’ll post again when I’ve got it all working nicely.

*You need to find a way to get the Zero online to install it. I used the RedBear IoT pHAT, which lets me get onto my WiFi while still powering/accessing the Zero via the USB socket, but you can also boot the Zero normally with a USB Ethernet or WiFi adapter. To install dnsmasq run the following:

apt-get install dnsmasq

Auto Launch Webpages full screen on Android

While waiting for Amazon to get round to reviewing my Node-RED Alexa Smart Home Skill I’ve needed something to hack on.

I’ve been keeping an eye on what folk have been doing with Node-RED Dashboard and a common use case keeps coming up: running the UI on an Android tablet mounted on a wall as a way to keep track of and control things around the house. This got me thinking about how to set something like this up.

Icon

For the best results you really want to run this totally full screen. There are ways to get this to happen with Chrome, but it’s a bit convoluted. Sure, you can add it as a shortcut on the home screen, but I thought there had to be an easier/better way.

So I started to have a bit of a play and came up with a new app. It’s basically just a full screen Activity with a WebView, plus a few extra features.

Settings
  • Set URL – Pick a URL to load when the phone boots, or the app is launched.
  • Launch on boot – The app can be configured to start up as soon as the phone finishes booting.
  • Take Screen Lock – This prevents the screen from powering off or locking so the page is always visible.

You can change the URL by swiping from the left hand edge of the screen; this will cause the action bar to appear, from which you can select “Settings”.

Set URL to Load

The app is in the Google Play store here and the code is on GitHub here.

Alexa Home Skill for Node-RED

Following on from my last post this time I’m looking at how to implement Alexa Home Skills for use with Node-RED.

Home Skills provide On/Off, Temperature and Percentage control for devices, which should map to most home automation tasks.

To implement a Home Skill there are several parts that need to be created.

Skill Endpoint

Unlike normal skills, which can be implemented as either HTTP or Lambda endpoints, Home Skills can only be implemented as a Lambda function. The Lambda function can be written in one of three languages: JavaScript, Python or Java. I’ve chosen to implement mine in JavaScript.

For Home Skills the request is passed in a JSON object and can be one of three types of message:

  • Discovery
  • Control
  • System
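
As a rough sketch (not the actual skill code), a Node.js handler that fans out on those three message types looks something like this; the namespace strings are the ones used by the v2 Smart Home Skill API described in this post:

exports.handler = function (event, context, callback) {
    var header = event.header;
    switch (header.namespace) {
        case 'Alexa.ConnectedHome.Discovery':
            // reply with the devices this user has registered (none here)
            callback(null, {
                header: {
                    namespace: 'Alexa.ConnectedHome.Discovery',
                    name: 'DiscoverAppliancesResponse',
                    payloadVersion: '2',
                    messageId: header.messageId // a fresh UUID in real code
                },
                payload: { discoveredAppliances: [] }
            });
            break;
        case 'Alexa.ConnectedHome.Control':
            // act on TurnOnRequest, SetPercentageRequest, etc. - this is
            // where the control message heads off towards Node-RED
            break;
        case 'Alexa.ConnectedHome.System':
            // health check: reply that the skill is alive
            break;
        default:
            callback(new Error('Unsupported namespace: ' + header.namespace));
    }
};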

Discovery

These messages are triggered when you say “Alexa, discover devices”. The reply to this message is where the skill gets the chance to tell the Echo what devices are available to control and what sort of actions they support. Each device entry includes its name and a description to be shown in the Alexa phone/tablet application.

The full list of supported actions:

  • SetTargetTemperature
  • IncrementTargetTemperature
  • DecrementTargetTemperature
  • SetPercentage
  • IncrementPercentage
  • DecrementPercentage
  • TurnOff
  • TurnOn
  • GetTemperatureReading 1
  • GetTargetTemperature 1
  • GetLockState 1
  • SetLockState 1

1 These actions are listed as only available in the US at the moment.
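
For illustration, a single device entry in the discovery reply looks roughly like this (the ids and names here are made up; note the actions are the lower camel case forms of the names above):

{
  "applianceId": "kitchen-light-1",
  "manufacturerName": "node-red-contrib-alexa-home-skill",
  "modelName": "Node-RED device",
  "version": "1",
  "friendlyName": "Kitchen Light",
  "friendlyDescription": "A light controlled by Node-RED",
  "isReachable": true,
  "actions": ["turnOn", "turnOff", "setPercentage"]
}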

Control

These are the actual control messages, triggered by something like “Alexa, set the bedroom lights to 20%”. Each one contains one of the actions listed earlier along with the on/off state or value for the change.
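
For example, a SetPercentageRequest arrives looking something like this (message id and token elided):

{
  "header": {
    "namespace": "Alexa.ConnectedHome.Control",
    "name": "SetPercentageRequest",
    "payloadVersion": "2",
    "messageId": "..."
  },
  "payload": {
    "accessToken": "...",
    "appliance": { "applianceId": "kitchen-light-1" },
    "percentageState": { "value": 20.0 }
  }
}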

System

This is the Echo system checking that the skill is all healthy.

Linking Accounts

In order for the skill to know whose Echo is connecting we have to arrange a way to link an Echo to an account in the skill. To do this we have to implement an OAuth 2.0 system. There is a nice tutorial on using Passport to provide OAuth 2.0 services here; I used this to add the required HTTP endpoints.

Since there is a need to set up OAuth and to create accounts in order to authorise the OAuth requests, it makes sense to only do this once and run it as a shared service for everybody (I’ve just got to work out where to host it and how to pay for it).

A Link to the Device

For this the device is actually Node-RED, which is probably going to be running on somebody’s home network. This means something that connects out to the skill is best, as it allows for traversing NAT routers. This sounds like a good use case for MQTT (come on, you knew it was coming). Rather than just use the built-in MQTT nodes, we have a custom set of nodes that make use of some of the earlier sections.

Firstly, a config node that uses the same authentication details as the account linking system to create an OAuth token, which is used to publish device details to the database and to authenticate with the MQTT broker.

Secondly, an input node that pairs with the config node. In the input node settings a device is defined and the actions it can handle are listed.

Hooking it all together

At this point the end to end flow looks something like this:

Alexa -> Lambda -> HTTP to Web app -> MQTT to broker -> MQTT to Node-RED

For now I’ve left the HTTP app in the middle, but I’m looking at adding direct database access to the Lambda function so it can publish control messages directly via MQTT.

Enough talk, how do I get hold of it!

I’m beta testing it with a small group at the moment, but I’ll be opening it up to everybody in a few days. In the meantime the code is all on GitHub: the web side of all of this can be found here, the Lambda function is here and the Node-RED node is here. I’ll put it on npm as soon as the skill is public.

EDIT

The skill should now be generally available in the UK, US and Germany.

Alexa Skills with Node-RED

I had an Amazon Echo Dot delivered on release day. I was looking forward to having a voice-powered butler around the flat.

Amazon advertise WeMo support, but unfortunately they only support WeMo sockets, and I have a bunch of WeMo bulbs that I’d love to get working.

Alexa supports 2 sorts of skills:

  • Normal Skills
  • Home Skills

Normal Skills are triggered by a keyword prefixed by either “Tell” or “Ask”. These are meant for information retrieval type services, e.g. you can say “Alexa, ask Network Rail what time the next train to London leaves Southampton Central”, which would retrieve train timetable information. Implementing these sorts of skills can be as easy as writing a simple REST HTTP endpoint that can handle the JSON format used by Alexa. You can also use AWS Lambda as an endpoint.

Home Skills are a little trickier: these only support Lambda as the endpoint. This is not so bad, as you can write Lambda functions in a bunch of languages like Java, Python and JavaScript. As well as the Lambda function, you also need to implement a website to link some sort of account for each user to their Echo device, and as part of this you need to implement OAuth 2.0 authentication and authorisation, which can be a lot more challenging.

First Skill

The first step to create a skill is to define it in the Amazon Developer Console. Pick the Alexa tab and then the Alexa Skills Kit.

Defining a new Alexa Skill

I called my skill Jarvis and set the keyword that will trigger this skill also to Jarvis.

On the next tab you define the interaction model for your skill. This is where the language processing gets defined. You outline the entities (Slots in Alexa parlance) that the skill will interact with and also the sentence structure to match for each Intent. The Intents are defined in a simple JSON format e.g.

{
  "intents": [
    {
      "intent": "TVInput",
      "slots": [
        {
          "name": "room",
          "type": "LIST_OF_ROOMS"
        },
        {
          "name": "input",
          "type": "LIST_OF_INPUTS"
        }
      ]
    },
    {
      "intent": "Lights",
      "slots": [
        {
          "name": "room",
          "type": "LIST_OF_ROOMS"
        },
        {
          "name": "command",
          "type": "LIST_OF_COMMANDS"
        },
        {
          "name": "level",
          "type": "AMAZON.NUMBER"
        }
      ]
    }
  ]
}

Slots can be custom types that you define on the same page or can make use of some built in types like AMAZON.NUMBER.

Finally on this page you supply the sample sentences prefixed with the Intent name and with the Slots marked.

TVInput set {room} TV to {input}
TVInput use {input} on the {room} TV
TVInput {input} in the {room}
Lights turn {room} lights {command}
Lights {command} {room} lights {level} percent

The next page is used to set up the endpoint that will actually implement this skill. As a first pass I decided to implement a HTTP endpoint, as it should be relatively simple for Node-RED. There is a contrib package called node-red-contrib-alexa that supplies a collection of nodes to act as HTTP endpoints for Alexa skills. Unfortunately it has a really small bug that stops it working on Bluemix, so I couldn’t use it straight away. I’ve submitted a pull request that should fix things, so hopefully I’ll be able to give it a go soon.

The reason I want to run this on Bluemix is because endpoints need to have an SSL certificate, and Bluemix has a wildcard certificate for everything deployed on the .mybluemix.net domain, which makes things a lot easier. (Update: when configuring the SSL section of the skill in the Amazon Developer Console, pick “My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority” when using Bluemix.)

Alexa Skill Flow

The key parts of the flow are:

  • The HTTP-IN & HTTP-Response nodes to handle the HTTP Post from Alexa
  • The first switch node filters the type of request, sending IntentRequests for processing and rejecting other sorts
  • The second switch picks which of the 2 Intents defined earlier (TV or Lights)

    {
      "type": "IntentRequest",
      "requestId": "amzn1.echo-api.request.4ebea9aa-3730-4c28-9076-99c7c1555d26",
      "timestamp": "2016-10-24T19:25:06Z",
      "locale": "en-GB",
      "intent": { 
        "name": "TVInput", 
        "slots": { 
          "input": { 
            "name": "input",
            "value": "chromecast"
          }, 
          "room": {
            "name": "room",
            "value": "bedroom"
          }
        }
      }
    }
  • The Lights Function node parses out the relevant slots to get the command and device to control
  • The commands are output to an MQTT out node, which publishes to a shared password-secured broker, where they are picked up by a second instance of Node-RED running on my home network and sent to the correct WeMo node to carry out the action
  • The final function node formats a response for Alexa to say (“OK”) when the Intent has been carried out

    var rep = {};
    rep.version  = '1.0';
    rep.sessionAttributes = {};
    rep.response = {
        outputSpeech: {
            type: "PlainText",
            text: "OK"
        },
        shouldEndSession: true
    }
    
    msg.payload = rep;
    msg.headers = {
        "Content-Type" :"application/json;charset=UTF-8"
    }
    return msg;

It could do with some error handling and the TV Intent needs a similar MQTT output adding.

This works reasonably well, but having to say “Alexa, ask Jarvis to turn the bedroom light on” is a little long winded compared to just “Alexa, turn on the bedroom light”. In order to use the shorter form you need to write a Home Skill.

I’ve started work on a Home Skill service and a matching Node-RED Node to go with it. When I’ve got it working I’ll post a follow up with details.

Provisioning WiFi IoT devices

or

“How do you get the local WiFi details into a consumer device that doesn’t have a keyboard or screen?”

It’s been an ongoing problem since we started to put WiFi in everything. The current solution seems to be to have the device behave like its own access point first, and then connect a phone or laptop to this network in order to enter the details of the real WiFi network. While this works, it’s a clunky solution that relies on a user being capable of disconnecting from their current network and connecting to the new device-only network. It also requires the access point mode to be shut down in order to try the supplied WiFi credentials, which causes the phone/laptop to disconnect and normally reconnect to its usual network. This means that if the user makes a typo they have to wait for the device to realise the credentials are wrong and, at best, go back into access point mode, or you have to manually reset the device to have it start over. This really isn’t a good solution.

I’ve been trying to come up with a better solution, specifically for my linear clock project. You may ask why a wall clock needs to be connected to the internet, but since it’s Raspberry Pi powered, and the Pi has no persistent real time clock, it needs a way to set the time after a power outage. I also want a way to allow users to change the colours for the hours, mins and seconds and adjust brightness levels.

In order to add WiFi to the Raspberry Pi Zero I’m using the RedBear IoT pHAT, which as well as providing WiFi includes Bluetooth 4.0 support. This opens up a new avenue to use for configuration. The idea is to have the device start as an Eddystone beacon broadcasting a URL. The URL will host a page that uses Web Bluetooth to connect to the device and allow the WiFi SSID and password to be entered. Assuming the credentials are correct, the device’s IP address can be pushed back to the page, allowing a link to be presented to a page served by the device itself to configure the more day-to-day options.

RedBear IoT pHAT

We had a hackday at the office recently and I had a go at implementing all of this.

Firstly the device comes up as an Eddystone Beacon that broadcasts the URL to the configuration page:


When the page loads it shows a button to search for devices with Bluetooth. This is because the Web Bluetooth spec currently requires a user interaction to start a BLE session.


Chrome then displays a list of matching devices to connect to.


When the user selects a given device the form appears to allow the SSID and password to be entered and submitted.


Once the details are submitted the page will wait until the device has bound to the WiFi network; it then displays a link directly to the device so any further configuration can be carried out.
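
Under the covers, the Web Bluetooth side of the page does something like the sketch below. The service and characteristic UUIDs here are placeholders; the real GATT layout (and the rest of the page) is in the GitHub repo linked below:

// placeholder UUIDs, not the real ones
var WIFI_SERVICE = '0000aaaa-0000-1000-8000-00805f9b34fb';
var SSID_CHAR    = '0000aaab-0000-1000-8000-00805f9b34fb';
var PSK_CHAR     = '0000aaac-0000-1000-8000-00805f9b34fb';

// must be called from a user gesture, e.g. the search button's click handler
function provision(ssid, password) {
  var encoder = new TextEncoder();
  var service;
  return navigator.bluetooth.requestDevice({
    filters: [{ services: [WIFI_SERVICE] }]
  }).then(function (device) {
    return device.gatt.connect();
  }).then(function (server) {
    return server.getPrimaryService(WIFI_SERVICE);
  }).then(function (s) {
    service = s;
    return service.getCharacteristic(SSID_CHAR);
  }).then(function (c) {
    return c.writeValue(encoder.encode(ssid));      // send the SSID
  }).then(function () {
    return service.getCharacteristic(PSK_CHAR);
  }).then(function (c) {
    return c.writeValue(encoder.encode(password));  // then the password
  });
}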

The code for this is all up on GitHub here if anybody wants to play. The web front end needs a little work and I’m planning to add WiFi AP scanning and pre-population soon.

Now we just need Apple to get their finger out and implement Web Bluetooth.

Adding Web Bluetooth to the Lightswitch

I’ve just got Web Bluetooth working properly this weekend to finish off my Physical Web lightswitch.

Web Bluetooth is a draft API to allow webpages to interact directly with Bluetooth LE devices. There is support in the latest Chrome builds on Android, which can be turned on by a flag: enable-web-bluetooth (also coming to Linux in version 50.x).

Some folks have already been playing with this stuff and done things like controlling a BB-8 droid from Chrome.

Chrome Physical Web Notification

I started off following the instructions here, which got me started. One of the first things I ran into was that in order to use Web Bluetooth the page needs to be loaded from a trusted source, which basically means localhost or a HTTPS enabled site. I’d already run into this with the Physical Web stuff, as Chrome won’t show details of a discovered URL unless it points to a HTTPS site, and it even barfs on self-signed/private CA certificates. I got round this by using a letsencrypt.org certificate (which reminds me, I really need to change my domain registrar so I can get back to setting up DNSSEC).

Now that I was allowed to actually use the API I had a small problem discovering the BLE device. I had initially thought I would be able to filter local devices based on the Primary Services they possessed, something like this:

navigator.bluetooth.requestDevice({
    filters: [{
        services: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
    }]
})

Web Bluetooth Device Discovery

But after not getting any devices returned I had to reach out on Stack Overflow with this question. This turned out to be because the beacon was only advertising 1 of its 3 Primary Services along with the URL. The answer to my question, posted by Jeffrey Yasskin, pointed me at using a device name prefix and listing the additional services the device should provide. I’m going to have a look at the code for the eddystone-beacon node to see if it can be altered to advertise more of the services as well as a URL.

navigator.bluetooth.requestDevice({
    filters: [{
        namePrefix: 'Light'
    }],
    optionalServices: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
})

This now allows the user to select the correct device if there is more than one within range. Once selected, the web app switches over from making POSTs to the REST control endpoints to talking directly to the device via BLE. The device surfaces 2 characteristics at the moment: one for toggling on and off and one to set the brightness level.
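
The BLE path looks roughly like this. The service UUID and name prefix are the ones from the discovery code above; the two characteristic UUIDs are placeholders for the real ones in the repo:

var SERVICE = 'ba42561b-b1d2-440a-8d04-0cefb43faece';
var TOGGLE  = '0000bbb1-0000-1000-8000-00805f9b34fb'; // placeholder
var DIM     = '0000bbb2-0000-1000-8000-00805f9b34fb'; // placeholder

function setLight(on, level) {
  var service;
  return navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: 'Light' }],
    optionalServices: [SERVICE]
  }).then(function (device) {
    return device.gatt.connect();
  }).then(function (server) {
    return server.getPrimaryService(SERVICE);
  }).then(function (s) {
    service = s;
    return service.getCharacteristic(TOGGLE);
  }).then(function (c) {
    return c.writeValue(Uint8Array.of(on ? 1 : 0)); // 1 = on, 0 = off
  }).then(function () {
    return service.getCharacteristic(DIM);
  }).then(function (c) {
    return c.writeValue(Uint8Array.of(level));      // brightness level
  });
}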

All the code is up on GitHub here.

Next I need to see if there is a way to skip the device selection phase if the user has already paired the page and the device, to save on the number of steps required before you can switch the lights on/off. I expect this may not be possible for privacy/security reasons at the moment. Even with the extra step it’s still quicker than waiting for the official Belkin WeMo app to load.

Physical Web Lightswitch

Physical Web Logo
As I’ve mentioned before, I’ve been having a play with a bunch of WeMo kit and also looking at using Bluetooth LE and Physical Web beacons. I’ve been looking at putting the 2 together to solve a problem I’ve been having round the house. (Also one mentioned by somebody at this year’s ThingMonk, sorry I can’t remember who [@mattb, I’m told by @knolleary].)

Belkin ship a mobile phone app to control their products but it has a few drawbacks:
Belkin WeMo App

  1. Launching the app takes ages
  2. Visitors need to know what type of lights you own, then they have to install the right app
  3. You have to give visitors access to your WiFi
  4. Once you’ve granted access to the WiFi, visitors have full control of your lights, even when no longer attached to the same network, with no way to revoke access

(Having been reminded by @knolleary: more of these types of problems are discussed in @mattb’s ThingMonk talk from this year.)

The Physical Web approach deals with the first 2 of these really nicely: a phone will detect the Eddystone beacon and offer a link to the control web page, so no app is needed and there’s no need to identify what type of devices you have, you just need to be close enough to them.

The second 2 problems are a little more tricky. To get the best out of a Physical Web URL it currently needs to be publicly accessible, due to the mitigation of a privacy problem: when a phone detects a beacon it tries to access the URL in order to pull some summary/metadata to help with presenting it to the user, and doing so directly would expose the phone’s IP address to whoever deployed the beacon, allowing for the possibility of tracking that user. The workaround in the Physical Web spec is that the URLs are accessed via a proxy, shielding the phone from the URL’s server; the problem is these proxies are all on the public internet and can only see public sites. On the plus side, since the page is on the public internet you don’t need to give out guest network access.

All this gets round problem number 3, but means that control of your living room lights needs to be publicly exposed to the internet. Which brings us nicely to problem 4: if it’s on the public internet, how do you control who has access? Once somebody has used the URL in the beacon it will be in their internet history, and they can come back and mess with your lights from their own home.

You can add authentication to the URL, but a careful balance will need to be struck over how long any signed-in session lasts, as you don’t want to be flapping around in the dark trying to enter a password just to turn the lights on. While this is a big scary problem, there is a potential solution that I’ll touch on at the end of this post.

There is one other problem: URLs broadcast via Eddystone beacons have to be less than 18 bytes long, which is pretty short. While there are some encoding tricks for common start and end sections (e.g. ‘http://www.’ and ‘.com’) that reduce these sections to just 1 byte, that still doesn’t leave room for much more, so you need to use a URL shortener to do anything major.

Trying things out

While thinking about all this I decided to spin up a little project (on GitHub) to have a play. I took the core code from my Node-RED WeMo node and wrapped it up in a little web app along with the bleno and eddystone-beacon npm modules. I could have done this using Node-RED, but I wanted to be able to support control via straight BLE as well.

The code uses discovery to find the configured light bulb or group of bulbs:

wemo.start();

if (!wemo.get(deviceID)) {
  wemo.on('discovered', function(d){
    if (d === deviceID) {
      device = wemo.get(d);
      console.log("found light");
    }
  });
} else {
  device = wemo.get(deviceID);
  console.log("found light");
}

It then starts up an Express.js web server and creates 2 routes: one to toggle on/off and one to set the brightness.

app.post('/toggle/:on', function(req, res){
  console.log("toggle " + req.params.on);
  if (req.params.on === 'on') {
    wemo.setStatus(device,'10006',1);
  } else {
    wemo.setStatus(device,'10006',0);
  }
  res.send();
});

app.post('/dim/:range', function(req,res){
  console.log("dim " + req.params.range);
  wemo.setStatus(device,'10006,10008','1,' + req.params.range);
  res.send();
});

It also sets up a directory to load static content out of. Once that is all set up, it starts the Eddystone beacon:

eddystone.advertiseUrl(config.shortURL, {name: config.name});

Enabling Physical Web on Your Phone

If you want to have a play yourselves there are a couple of approaches to enable Physical Web discovery on your phone. The first is a mobile app built by the physicalweb.org guys; it’s available for both Android and iOS. When you launch this app it will seek out any Eddystone beacons in range and display a list along with a summary.

Recently Google announced that they were rolling Physical Web capability into their Chrome web browser. At the moment it is only available in the beta release. You can download it on Android here, and this link has instructions for iOS. I have not tried the iOS instructions as I don’t have a suitable device.

Once it’s been installed these instructions explain how to enable it.

Now we have a beacon and some way to detect it, what does it look like?

Discovered beacons

The Physical Web app has detected 2 beacons (actually they are both the same beacon, using both BLE and mDNS to broadcast the URL). Both beacons are on my private network at the moment, so the proxy could not load any of the metadata to enrich the listing; it also could not resolve my private URL shortener http://s.loc. If I click on one of the beacons it will take me to a page to control the light. At the moment the interface is purely functional.

Light interface

This is working nicely enough for now, but it needs to be made to look a bit nicer and could do with presenting what brightness level the bulb is currently set to.

Direct device communication

I mentioned earlier there was a possible solution for the public network access requirement and authentication. There is a working group developing a specification to allow web pages to interact with local BLE devices. The Web Bluetooth API Specification is not yet finished, but an early version is baked into Chrome (you can enable it via these instructions). This is something I intend to play with, because it solves the whole public-facing site problem and the question of how to stop guests keeping remote access to your lights. It doesn’t matter that anyone can download the control page if you still need to be physically close to the beacon to connect via BLE to control the lights.

I’ve added 2 BLE GATT characteristics to the beacon (1 for on/off and 1 for dimming), and when I get another couple of free hours I’m going to improve the webpage served up from the beacon to include this support. Once this works I can move the page to my public site and use a public URL shortener, which should mean all the metadata will load properly.
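
On the beacon side, the characteristics are built with bleno, along the lines of this sketch. The characteristic UUIDs are placeholders, and wemo and device are the same objects as in the web app code above:

var bleno = require('bleno');

var onOff = new bleno.Characteristic({
  uuid: 'ba425620b1d2440a8d040cefb43faece', // placeholder
  properties: ['write'],
  onWriteRequest: function (data, offset, withoutResponse, callback) {
    // 1 = on, 0 = off: the same wemo.setStatus call as the /toggle route
    wemo.setStatus(device, '10006', data.readUInt8(0));
    callback(bleno.Characteristic.RESULT_SUCCESS);
  }
});

var dim = new bleno.Characteristic({
  uuid: 'ba425621b1d2440a8d040cefb43faece', // placeholder
  properties: ['write'],
  onWriteRequest: function (data, offset, withoutResponse, callback) {
    // brightness 0-255: the same call as the /dim route
    wemo.setStatus(device, '10006,10008', '1,' + data.readUInt8(0));
    callback(bleno.Characteristic.RESULT_SUCCESS);
  }
});

bleno.setServices([new bleno.PrimaryService({
  uuid: 'ba42561bb1d2440a8d040cefb43faece', // the service UUID from the Web Bluetooth post above
  characteristics: [onOff, dim]
})]);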

All this also means that with the right cache headers the page only needs to be downloaded once and can then be loaded directly from the on-device cache in the future.