Provisioning WiFi IoT devices

or

“How do you get the local WiFi details into a consumer device that doesn’t have a keyboard or screen?”

It’s been an ongoing problem since we started to put WiFi in everything. The current solution seems to be to have the device act as its own access point first; you then connect a phone or laptop to this network in order to enter the details of the real WiFi network. While this works, it’s a clunky solution that relies on the user being capable of disconnecting from their current network and connecting to the device-only network. It also requires the access point mode to be shut down in order to try the supplied WiFi credentials, which causes the phone/laptop to disconnect and usually reconnect to its normal network. This means that if the user makes a typo they have to wait for the device to realise the credentials are wrong and, at best, drop back into access point mode, or at worst manually reset the device to have it start over. This really isn’t a good solution.

I’ve been trying to come up with a better solution, specifically for my linear clock project. You may ask why a wall clock needs to be connected to the internet, but since it’s powered by a Raspberry Pi, which has no persistent real time clock, it needs a way to set the time after a power outage. I also want a way to allow users to change the colours for the hours, minutes and seconds and to adjust the brightness levels.

In order to add WiFi to the Raspberry Pi Zero I’m using the RedBear IoT pHat, which as well as providing WiFi includes Bluetooth 4.0 support. This opens up a new avenue for configuration. The idea is to have the device start as an Eddystone beacon broadcasting a URL. The URL hosts a page that uses Web Bluetooth to connect to the device and allows the WiFi SSID and password to be entered. Assuming the credentials are correct, the device’s IP address can then be pushed back to the page, allowing a link to be presented to a page served by the device itself to configure the more day to day options.

RedBear IoT pHat
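On the device side the idea above boils down to a GATT service with writable characteristics for the SSID and password, plus one that notifies the IP address once the network comes up. None of that code is shown here, but a minimal sketch with bleno, using made-up UUIDs and leaving out the actual WiFi joining step, might look something like this:

var bleno = require('bleno');

// UUIDs here are made up purely for illustration
var WIFI_SERVICE_UUID = 'aaaa0000-0000-1000-8000-00805f9b34fb';

var creds = { ssid: '', psk: '' };
var notifyIp = null;

function writableCharacteristic(uuid, field) {
  return new bleno.Characteristic({
    uuid: uuid,
    properties: ['write'],
    onWriteRequest: function(data, offset, withoutResponse, callback) {
      creds[field] = data.toString('utf8');   // the phone writes the value as a UTF-8 string
      callback(bleno.Characteristic.RESULT_SUCCESS);
    }
  });
}

var ipCharacteristic = new bleno.Characteristic({
  uuid: 'aaaa0003-0000-1000-8000-00805f9b34fb',
  properties: ['notify'],
  onSubscribe: function(maxValueSize, updateValueCallback) {
    // once the device has joined the network, push the IP address
    // back to the page with notifyIp(new Buffer(ipAddress))
    notifyIp = updateValueCallback;
  }
});

bleno.on('stateChange', function(state) {
  if (state === 'poweredOn') {
    bleno.startAdvertising('WiFiSetup', [WIFI_SERVICE_UUID]);
  } else {
    bleno.stopAdvertising();
  }
});

bleno.on('advertisingStart', function(err) {
  if (err) { return; }
  bleno.setServices([new bleno.PrimaryService({
    uuid: WIFI_SERVICE_UUID,
    characteristics: [
      writableCharacteristic('aaaa0001-0000-1000-8000-00805f9b34fb', 'ssid'),
      writableCharacteristic('aaaa0002-0000-1000-8000-00805f9b34fb', 'psk'),
      ipCharacteristic
    ]
  })]);
});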

We had a hackday at the office recently and I had a go at implementing all of this.

Firstly the device comes up as an Eddystone Beacon that broadcasts the URL to the configuration page:


When the page loads it shows a button to search for devices with Bluetooth. This is because the Web Bluetooth spec currently requires a user interaction to start a BLE session.


Chrome then displays a list of matching devices to connect to.


When the user selects a given device the form appears to allow the SSID and password to be entered and submitted.


Once the details are submitted the page waits until the device has joined the WiFi network, then displays a link directly to the device so any further configuration can be carried out.
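The page side of this is plain Web Bluetooth: connect to the device’s GATT server, write the SSID and password to their characteristics and wait for the IP address notification. A rough sketch, assuming the same made-up UUIDs as the device-side sketch above and a hypothetical link element on the page, looks something like:

async function provision(ssid, password) {
  // a user gesture is required, so call this from a button click handler
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['aaaa0000-0000-1000-8000-00805f9b34fb'] }]  // made-up service UUID
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('aaaa0000-0000-1000-8000-00805f9b34fb');

  const encoder = new TextEncoder();
  const ssidChar = await service.getCharacteristic('aaaa0001-0000-1000-8000-00805f9b34fb');
  await ssidChar.writeValue(encoder.encode(ssid));
  const pskChar = await service.getCharacteristic('aaaa0002-0000-1000-8000-00805f9b34fb');
  await pskChar.writeValue(encoder.encode(password));

  // wait for the device to push its IP address back once it has joined the network
  const ipChar = await service.getCharacteristic('aaaa0003-0000-1000-8000-00805f9b34fb');
  await ipChar.startNotifications();
  ipChar.addEventListener('characteristicvaluechanged', function(event) {
    const ip = new TextDecoder().decode(event.target.value);
    document.getElementById('deviceLink').href = 'http://' + ip + '/';  // hypothetical link element
  });
}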

The code for this is all up on GitHub here if anybody wants to play. The web front end needs a little work and I’m planning to add WiFi AP scanning and pre-population soon.

Now we just need Apple to get their finger out and implement Web Bluetooth.

Tinkerforge Node-RED nodes

For a recent project I’ve been working on a collection of different Node-RED nodes. Part of this set is a group of nodes to interact with Tinkerforge Bricks and Bricklets.

Tinkerforge is a platform of stackable devices and sensors/actuators that can be connected to via USB or attached directly to the network via Ethernet or WiFi.

Tinkerforge Stack

Collections of sensors/actuators, known as bricklets, are grouped together around a Master Brick. Each Master Brick can host up to 4 bricklets, and multiple Master Bricks can be stacked to add more capacity. The Master Brick also has a Micro USB socket which can be used to connect to a host machine running a daemon called brickd; the daemon handles discovering the attached devices and exposes access to them via an API. There are also Ethernet (with and without PoE) and WiFi bricks which allow you to link the stack directly to the network.

As well as the Master Brick there is a RED Brick which contains an ARM Cortex-A8 processor, SD card, USB socket and micro HDMI adapter. This runs a small Linux distribution (based on Debian) and hosts a copy of brickd.

There are bindings for the API in a number of languages including:

  • C/C++
  • C#
  • Delphi
  • Java
  • Javascript
  • PHP
  • Perl
  • Python
  • Ruby

For the Node-RED nodes I took the Javascript bindings and wrapped them as Node-RED nodes. So far the following bricklets are supported:

Adding others shouldn’t be hard, but these were the ones I had access to in order to test.

All the nodes share a config node which is configured to point to the instance of brickd the sensors are attached to, and it then provides a filtered list of available bricklets for each node.
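Under the covers the nodes just use the Tinkerforge JavaScript bindings to talk to brickd. Reading a bricklet directly looks roughly like this (the host, port and UID below are placeholders for your own setup):

var Tinkerforge = require('tinkerforge');

var HOST = 'localhost';   // machine running brickd
var PORT = 4223;          // brickd default port
var UID = 'abc';          // placeholder UID of an Ambient Light bricklet

var ipcon = new Tinkerforge.IPConnection();
var al = new Tinkerforge.BrickletAmbientLight(UID, ipcon);

ipcon.connect(HOST, PORT, function (error) {
  console.log('connect error: ' + error);
});

ipcon.on(Tinkerforge.IPConnection.CALLBACK_CONNECTED, function () {
  // the value is reported in Lux/10
  al.getIlluminance(function (illuminance) {
    console.log('Illuminance: ' + illuminance / 10 + ' lux');
  }, function (error) {
    console.log('read error: ' + error);
  });
});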


The code for the nodes can be found here and the package is available on npmjs.org here.

Physical Web FatBeacons

I’ve been playing with Eddystone beacons (an open BLE beacon technology backed by Google) for a while. Combined with things like Web Bluetooth they make for a really nice way to interact with physical objects from mobile devices.

Physical Web Logo

Recently there has been a suggested extension to the Physical Web specification for something called a FatBeacon. Rather than advertising a URL to load a web page from, a FatBeacon actually hosts the web page on the device and serves it up from a BLE characteristic. This means it becomes possible to interact with a device when there is no network backhaul to load the page over.

This may sound a little strange in the western world where cellular coverage is ubiquitous, but there are places where coverage is still spotty, and it also allows for totally self contained solutions. For simple things like my Physical Web Light Switch this would work well.

A request went up on the Physical Web GitHub page asking if anybody wanted to have a go at implementing a software stack to allow people to have a play with the concept so I threw my hat into the ring to have a go.

My first thought was to make use of Sandeep Mistry’s bleno library, so I ran up a simple characteristic to serve up the HTML.

var bleno = require('bleno');
var util = require('util');

var html = "<html>" +
           "<head>" +
           "<title>Fat Beacon Demo</title>" +
           "<meta name='description' content='FatBeacon Demo'/>" +
           "</head>" +
           "<body>" +
           "<h1>HelloWorld</h1>" +
           "This is a test" +
           "</body>" +
           "</html>";

function HTMLCharacteristic() {
  bleno.Characteristic.call(this, {
    uuid: 'd1a517f0249946ca9ccc809bc1c966fa',
    properties: ['read'],
    descriptors: [
      new bleno.Descriptor({
        uuid: '2901',
        value: 'HTML'
      })
    ]
  });
}

util.inherits(HTMLCharacteristic, bleno.Characteristic);

HTMLCharacteristic.prototype.onReadRequest = function(offset, callback) {
  console.log("read request");
  var buf = new Buffer(html, 'utf8');
  if (offset < buf.length) {
    // long reads come in as a series of requests with increasing offsets
    var slice = buf.slice(offset);
    callback(this.RESULT_SUCCESS, slice);
  } else {
    callback(this.RESULT_INVALID_OFFSET, null);
  }
};

module.exports = HTMLCharacteristic;

I wrapped this in a simple service, but I had missed the need to put the right bits in the advertisement packet; luckily Jacob Rosenthal spotted what needed doing and ported my code over to Don Coleman’s node-eddystone-beacon project.
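The service wrapper itself was just the usual bleno pattern, roughly along these lines (this is the naive version before the advertisement packet fix mentioned above, and the require path for the characteristic is assumed):

var bleno = require('bleno');
var HTMLCharacteristic = require('./html-characteristic');  // the characteristic shown above

var SERVICE_UUID = 'd1a517f0249946ca9ccc809bc1c966fa';

bleno.on('stateChange', function (state) {
  if (state === 'poweredOn') {
    bleno.startAdvertising('FatBeacon', [SERVICE_UUID]);
  } else {
    bleno.stopAdvertising();
  }
});

bleno.on('advertisingStart', function (error) {
  if (!error) {
    bleno.setServices([
      new bleno.PrimaryService({
        uuid: SERVICE_UUID,
        characteristics: [new HTMLCharacteristic()]
      })
    ]);
  }
});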

After a couple of tweaks I now have it all working (on Linux; IIRC there are some limitations on OS X that mean you can’t connect to services with bleno). My code is available on GitHub here and I will be submitting a pull request to Don to get it included in the node-eddystone-beacon project soon.

The beacon shows up like this in the Physical Web Android app:

FatBeacon in Physical Web App

Which when loaded looks like this:

FatBeacon page

There are some interesting issues around security still to be looked at, but it’s a promising area.

Lightbar clock

I’ve been meaning to get on with this project for ages; it was originally going to be based on the Espruino Pico but I never quite got round to sorting it all out.

I even prototyped it as a little webpage.

In the end I’ve ended up using a Raspberry Pi Zero and Node-RED to drive it all. It uses a 60 segment RGB LED strip I picked up off eBay. The LED strip is driven by the node-red-node-pi-neopixel node fed from a function node.


var date = new Date();

// one entry per LED for each colour channel
var red = [];
var green = [];
var blue = [];
var b = 125; // brightness

for (var i=0; i<60; i++) {
    red.push(0);
    green.push(0);
    blue.push(0);
}

var array = [[]];

// hours - a 5 LED wide red block
var hours = date.getHours();
var hs = ((hours + 12) % 12) * 5;
for (var i=0; i<5; i++) {
    red[hs + i] = b;
}

// minutes - a single green LED
var m = date.getMinutes();
green[m] = b;

// seconds - a single blue LED
var s = date.getSeconds();
blue[s] = b;

// build one message per LED in the "index,r,g,b" format
// expected by the pi-neopixel node
for (var i=0; i<60; i++) {
    array[0].push({
        payload: i + "," +
            red[i] + "," +
            green[i] + "," +
            blue[i]
    })
}
return array;

It just needs a nice mount sorting to hang it on the wall.

Further work may include adding a light sensor to vary the brightness depending on the ambient light levels, and a nice way to configure a WiFi dongle so it can join the network for NTP time updates and so a user could set the timezone.

OwnTracks Encrypted Location Node-RED Node

OwnTracks Logo

At the weekend I ran the London Marathon; while I’m glad I did it, I have no desire to do it again (ask me again in 6 months).

My folks came down to watch me, and to help them work out where on the course I was, I ran with a phone strapped to my arm running OwnTracks. This was pointed at the semi-public MQTT broker running on my machine at the end of my broadband. In order to keep some degree of privacy I had enabled the symmetric encryption facility.

As well as my family using the data, I ran up a very simple logging script with the mosquitto_sub command to record my progress (I was also tracking it with my Garmin watch, but wanted to see how OwnTracks compared).

# mosquitto_sub -t 'owntracks/Ben/#' > track.log

Before I started I hadn’t looked at what sort of encryption was being used, but a little bit of digging in the source and a pointer in the right direction from Jan-Piet Mens got me to the libsodium library. I found a Node.js implementation on npm and hacked up a quick little script to decode the messages.

var readline = require('readline').createInterface({
  input: require('fs').createReadStream('track.log')  // the file captured with mosquitto_sub
});

var sodium = require('libsodium-wrappers');

readline.on('line', function(line){
  var msg = JSON.parse(line);
  if (msg._type == 'encrypted') {
    var cypherText = new Buffer(msg.data, 'base64');
    // the first 24 bytes are the nonce, the rest is the secretbox payload
    var nonce = cypherText.slice(0,24);
    // the key is the passphrase padded out to 32 bytes with zeros ('xxx' is a placeholder)
    var key = new Buffer(32);
    key.fill(0);
    key.write('xxx');
    var clearText = sodium.crypto_secretbox_open_easy(cypherText.slice(24), nonce, key, "text");
    console.log(clearText);
  }
});

Now I had the method worked out it made sense to turn it into a Node-RED node so encrypted location streams could easily be consumed. I also added a little extra functionality: the node copies the lat & lon values into a msg.location object so they can easily be consumed by my node-red-node-geofence node and Dave’s worldmap node. The original decrypted location is left in msg.payload.
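To give an idea of the shape, a message coming out of the node looks roughly like this (the values are purely illustrative):

var msg = {
  payload: {              // the decrypted OwnTracks location message
    _type: "location",
    lat: 51.5007,
    lon: -0.1246,
    tst: 1461495600
  },
  location: {             // copied out for the geofence and worldmap nodes
    lat: 51.5007,
    lon: -0.1246
  }
};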

Owntracks node

The source for the node can be found on GitHub here and the package is on npm here.

To install run the following in your ~/.node-red

npm install node-red-contrib-owntracks

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 IdeaPad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn’t go too well, and the screen size and resolution were way too low to do anything serious with. It ended up with Windows 7 back on it, permanently on power because the battery is toast, and mainly being used to sync my Garmin watch to Strava.

Anyway, it was well past time for something a little more seriously useful. I’d had my eye on the Dell XPS13 for a while, and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops installed with Linux for a while under a program called Project Sputnik. This project ensures that all the built in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of 1Q 2016.

As well as comparing it to the U410, I’ve also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a somewhat long in the tooth Lenovo W530 ThinkPad). The table below shows some of the key stats.

         U410 IdeaPad     W530 ThinkPad      XPS13
Weight   2kg              2.6kg (+0.75kg)    1.3kg
CPU      i5-3317U         i7-3740QM          i7-6560U
Memory   6GB              16GB               16GB
Disk     1TB              512GB              512GB (SSD)
Screen   14″ 1366×768     15.6″ 1920×1080    13.3″ 3200×1800

The thing I like best is the weight; lugging the W530 around is a killer, so getting to leave it on the desk at the office a little more will be great.

Dell XPS13

As for actually running the machine, it’s been very smooth so far. It comes with Ubuntu 14.04 with a couple of Dell specific tweaks/backports from upstream. I’m normally a Fedora user, so Ubuntu as my main OS may take a bit of getting used to, and 14.04 is a little old at the moment. 16.04 is due to ship soon so I look forward to updating it to see how it fares. I’ve swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I still may swap the whole thing for Fedora 24 when it ships to pick up something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the driver support for this on Linux is reported to still be a little brittle. While I wait for it to settle down a little I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. This is a DisplayLink device, but things seem to be a lot better than the last time I tried to get a DisplayLink device to work. There is a new driver direct from the DisplayLink guys that seems to just work.

Adding 2 Factor Authentication to Your ExpressJS App

Over the last few years barely a week has gone by without some mention of a breach at an online organisation, followed by the usual round of password reset requests, or news of somebody’s account being compromised.

One method to help reduce the risk of a compromised password is 2 Factor Authentication. 2FA makes use of the “Something you know, Something you have” approach to authentication. The “Something you know” is your password and the “Something you have” is some form of token. The token is used to generate a unique code for each login attempt. There are several different types of token in general use these days.

Hypersecure HyperFIDO token

  • Time based tokens – These have been around for a while in the form of RSA tokens that companies have been handing out to employees to use with things like VPNs. A more modern variant is the Google Authenticator application (Android & iPhone).
  • Smart card readers – Some banks hand these out to customers; you insert your card into the device, enter your PIN and it generates a one time password.
  • Hardware tokens – These are plugged into your computer and generate a token on demand. Examples include the Yubico Neo and the Hypersecure HyperFIDO, which implement the FIDO U2F standard. At the moment only Chrome supports using these tokens to authenticate with a website, but there is ongoing work to add support to Firefox.

In the rest of this post I’m going to talk about adding Google Authenticator and Fido U2F support to a NodeJS/ExpressJS application.

Basic Username/password

Most ExpressJS apps use a piece of middleware called PassportJS to provide authentication. I’ll use this to handle the normal username/password stage.

var http = require('http');
var express = require('express');
var session = require('express-session');   // needed for passport.session() below
var bodyParser = require('body-parser');     // parses the login form posts
var flash = require('connect-flash');        // used for the login error messages
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var mongoose = require('mongoose');

var Account = require('./models/account');
var app = express();

var port = (process.env.VCAP_APP_PORT || process.env.PORT ||3000);
var host = (process.env.VCAP_APP_HOST || '0.0.0.0');
var mongo_url = (process.env.MONGO_URL || 'mongodb://localhost/users');

app.set('view engine', 'ejs');
app.use(bodyParser.urlencoded({ extended: false }));
app.use(session({ secret: 'change me', resave: false, saveUninitialized: false }));
app.use(flash());
app.use(passport.initialize());
app.use(passport.session());

passport.use(new LocalStrategy(Account.authenticate()));
passport.serializeUser(Account.serializeUser());
passport.deserializeUser(Account.deserializeUser());

mongoose.connect(mongo_url);

app.use('/',express.static('static'));
app.use('/secure', ensureAuthenticated, express.static('secure'));

function ensureAuthenticated(req,res,next) {
  if (req.isAuthenticated()) {
    return next();
  } else {
    res.redirect('/login');
  }
}

app.get('/login', function(req,res){
  res.render('login',{ message: req.flash('info') });
});

app.post('/login', passport.authenticate('local', { failureRedirect: '/login', successRedirect: '/secure', failureFlash: true }));

app.get('/newUser', function(req,res){
  res.render('register', { message: req.flash('info') });
});

app.post('/newUser', function(req,res){
  Account.register(new Account({ username : req.body.username }), req.body.password, function(err, account) {
    if (err) {
      console.log(err);
      return res.status(400).send(err.message);
    }

    passport.authenticate('local')(req, res, function () {
      console.log("created new user %s", req.body.username);
      res.status(201).send();
    });
  });
});

var server = http.Server(app);
server.listen(port, host, function(){
  console.log('App listening on  %s:%d!', host, port);
});

Here we have a pretty basic Express app that serves public static content from a directory and renders a login page template with EJS, which takes a username and password to access a second “secure” directory of static content. It also has a page to register a new user; all the user information is stashed in a MongoDB database using Mongoose.
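The Account model itself isn’t shown here, but given the Account.authenticate()/serializeUser()/register() calls above it is almost certainly built with the passport-local-mongoose plugin; a minimal sketch of models/account.js under that assumption would be:

var mongoose = require('mongoose');
var passportLocalMongoose = require('passport-local-mongoose');

var Account = new mongoose.Schema({
  username: String
  // passport-local-mongoose adds the password hash and salt fields itself
});

Account.plugin(passportLocalMongoose);

module.exports = mongoose.model('Account', Account);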

Google Authenticator TOTP

Next we will add support for the Google Authenticator app. There is a PassportJS plugin (passport-totp) that will handle the actual authentication, but we also need a way to enrol the site in the application. The app can read its configuration data from a QR code, which makes setup simple. In the ExpressJS app we need to add the following routes:

var TOTPStrategy = require('passport-totp').Strategy;
var G2FA = require('./models/g2fa');

app.get('/setupG2FA', ensureAuthenticated, function(req,res){
  G2FA.findOne({'username': req.user.username}, function(err,user){
    if (err) {
      res.status(400).send(err);
    } else {
      var secret;
      if (user !== null) {
        secret = user.secret;
      } else {
        //generate random key
        secret = genSecret(10);
        var newToken = new G2FA({username: req.user.username, secret: secret});
        newToken.save(function(err,tok){});
      }
      var encodedKey = base32.encode(secret);
      var otpUrl = 'otpauth://totp/2FADemo:' + req.user.username + '?secret=' + encodedKey + '&period=30&issuer=2FADemo';
      var qrImage = 'https://chart.googleapis.com/chart?chs=166x166&chld=L|0&cht=qr&chl=' + encodeURIComponent(otpUrl);
      res.send(qrImage);
    }
  });
});

app.post('/loginG2FA', ensureAuthenticated, passport.authenticate('totp'), function(req, res){
  req.session.secondFactor = 'g2fa';
  res.send();
});

2FA Demo App enrolled in Google Authenticator

The first route checks to see if the user has already set up the Authenticator app; if not it generates a new random secret and then uses this to build a URL for Google’s Chart API, which is used to generate a QR code with the enrolment information. As well as the shared secret, the QR code contains the name of the application and the user’s name so it can easily be identified within the app. The secret is stashed in the MongoDB database along with the username so we can get it back later.

The second route verifies that the code provided by the application is correct and adds a flag to the user’s session to say it passed.
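Two things aren’t shown in the snippet above: the genSecret() and base32 helpers are required elsewhere in the app, and for passport.authenticate('totp') to work the TOTP strategy also has to be registered with Passport. A sketch of that registration, looking the stored secret up from the same G2FA model, might look something like this:

passport.use(new TOTPStrategy(function (user, done) {
  // look up the shared secret we stored when the QR code was generated
  G2FA.findOne({ 'username': user.username }, function (err, row) {
    if (err) { return done(err); }
    if (!row) { return done(new Error('no TOTP secret registered')); }
    // return the raw secret and the 30 second period used in the otpauth URL
    return done(null, row.secret, 30);
  });
}));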

The following code is embedded in a page to show the QR code and then verify that the app is generating the right values.

googleButton.onclick = function setupGoogle() {
  clearWorkspace();
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/setupG2FA', true);
  xhr.onreadystatechange = function () {
    if(xhr.readyState == 4 && xhr.status == 200) {
      var message = document.createElement('p');
      message.innerHTML = 'Scan the QR code with the app then enter the code in the box and hit submit';
      document.getElementById('workspace').appendChild(message);
      var qrurl = xhr.responseText;
      var image = document.createElement('img');
      image.setAttribute('src', qrurl);
      document.getElementById('workspace').appendChild(image);
      var code = document.createElement('input');
      code.setAttribute('type', 'number');
      code.setAttribute('id', 'code');
      document.getElementById('workspace').appendChild(code);
      var submitG2FA = document.createElement('button');
      submitG2FA.setAttribute('id', 'submitG2FA');
      submitG2FA.innerHTML = 'Submit';
      document.getElementById('workspace').appendChild(submitG2FA);
      submitG2FA.onclick = function() {
        var pass = document.getElementById('code').value;
        console.log(pass);
        var xhr2 = new XMLHttpRequest();
        xhr2.open('POST', '/loginG2FA', true);
        xhr2.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr2.onreadystatechange = function() {
          if (xhr2.readyState == 4 && xhr2.status == 200) {
            clearWorkspace();
            document.getElementById('workspace').innerHTML='Google Authenticator all setup';
          } else if (xhr2.readyState == 4 && xhr2.status !== 200) {
            clearWorkspace();
            document.getElementById('workspace').innerHTML='Error setting up Google Authenticator';
          }
        }
        xhr2.send('code='+pass);
      }
    } else if (xhr.readyState == 4 && xhr.status !== 200) {
      document.getElementById('workspace').innerHTML ="error setting up Google 2FA";
    }
  };
  xhr.send();
}

Fido U2F

Now the slightly more complicated bit: setting up the U2F support. This is a 2 stage process; first we need to register the token with the site.
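The routes below assume the u2f npm module, an app_id and a Mongoose model for storing registrations have been set up earlier in the file; a sketch of that setup (the model path and URL are placeholders) might be:

var u2f = require('u2f');                       // the 'u2f' npm module
var U2F_Reg = require('./models/u2f_reg');      // hypothetical Mongoose model holding registrations
var app_id = 'https://2fa-demo.example.org';    // must be the HTTPS origin the site is served from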

app.get('/registerU2F', ensureAuthenticated, function(req,res){
  try{
    var registerRequest = u2f.request(app_id);
    req.session.registerRequest = registerRequest;
    res.send(registerRequest);
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

app.post('/registerU2F', ensureAuthenticated, function(req,res){
  var registerResponse = req.body;
  var registerRequest = req.session.registerRequest;
  var user = req.user.username;
  try {
    var registration = u2f.checkRegistration(registerRequest,registerResponse);
    var reg = new U2F_Reg({username: user, deviceRegistration: registration });
    reg.save(function(err,r){});
    res.send();
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

The first route generates a registration request from the app_id; for a simple site this is just the root URL of the site (it needs to be an HTTPS URL). This is sent to the webpage, which passes it to the token. The token responds with a signed response that includes a public key and a certificate. These are stored away with the username to be used for authentication later.

app.get('/authenticateU2F', ensureAuthenticated, function(req,res){
  U2F_Reg.findOne({username: req.user.username}, function(err, reg){
    if (err) {
      res.status(400).send(err);
    } else {
      if (reg !== null) {
        var signRequest = u2f.request(app_id, reg.deviceRegistration.keyHandle);
        req.session.signrequest = signRequest;
        req.session.deviceRegistration = reg.deviceRegistration;
        res.send(signRequest);
      } else {
        // no token registered for this user
        res.status(404).send();
      }
    }
  });
});

app.post('/authenticateU2F', ensureAuthenticated, function(req,res){
  var signResponse = req.body;
  var signRequest = req.session.signrequest;
  var deviceRegistration = req.session.deviceRegistration;
  try {
    var result = u2f.checkSignature(signRequest, signResponse, deviceRegistration.publicKey);
    if (result.successful) {
      req.session.secondFactor = 'u2f';
      res.send();
    } else {
      res.status(400).send();
    }
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

Authentication is a similar process: the first route generates a random challenge and sends it to the webpage, which asks the token to sign it. The signed challenge is passed back to the site, which uses the stored public key/certificate to check that the correct token signed the challenge.

We should now have a site that allows users to register and then enrol both the Google Authenticator app and a Fido U2F token in order to do 2FA.

All the code for this demo is hosted on GitHub here and a working example is on Bluemix here.

Adding Web Bluetooth to the Lightswitch

I’ve just got Web Bluetooth working properly this weekend to finish off my Physical Web lightswitch.

Web Bluetooth is a draft API that allows webpages to interact directly with Bluetooth LE devices. There is support in the latest Chrome builds on Android, and it can be turned on with a flag: enable-web-bluetooth (it’s also coming to Linux in version 50.x).

Some folks have already been playing with this stuff and done things like controlling a BB-8 droid from Chrome.

Chrome Physical Web Notification

I started off following the instructions here, which got me going. One of the first things I ran into was that in order to use Web Bluetooth the page needs to be loaded from a trusted source, which basically means localhost or an HTTPS enabled site. I’d already run into this with the Physical Web stuff, as Chrome won’t show details of a discovered URL unless it points to an HTTPS site, and it even barfs on self signed/private CA certificates. I got round this by using a letsencrypt.org certificate (which reminds me I really need to change my domain registrar so I can get back to setting up DNSSEC).

Now that I was allowed to actually use the API I had a small problem discovering the BLE device. I had initially thought I would be able to filter local devices based on the Primary Services they possessed, something like this:

navigator.bluetooth.requestDevice({
    filters: [{
        services: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
    }]
})

Web Bluetooth Device Discovery

But after not getting any devices returned I had to reach out on Stack Overflow with this question. This turned out to be because the beacon was only advertising 1 of its 3 Primary Services along with the URL. The answer to my question posted by Jeffrey Yasskin pointed me at using a device name prefix and listing the alternative services the device should provide. I’m going to have a look at the code for the eddystone-beacon node to see if it can be altered to advertise more of the services as well as a URL.

navigator.bluetooth.requestDevice({
    filters: [{
        namePrefix: 'Light'
    }],
    optionalServices: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
})

This now allows the user to select the correct device if there is more than one within range. Once selected, the web app switches over from making posts to the REST control endpoints to talking directly to the device via BLE. The device surfaces 2 characteristics at the moment, one for toggling on and off and one to set the brightness level.
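Talking to those characteristics from the page is just the rest of the Web Bluetooth API. A rough sketch (the service UUID is the one from the filter above, but the characteristic UUID and the on/off value format here are placeholders) looks like:

navigator.bluetooth.requestDevice({
  filters: [{ namePrefix: 'Light' }],
  optionalServices: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
})
.then(function (device) { return device.gatt.connect(); })
.then(function (server) {
  return server.getPrimaryService('ba42561b-b1d2-440a-8d04-0cefb43faece');
})
.then(function (service) {
  // placeholder UUID for the on/off toggle characteristic
  return service.getCharacteristic('ba425620-b1d2-440a-8d04-0cefb43faece');
})
.then(function (characteristic) {
  // write a 1 to switch the light on (0 would switch it off)
  return characteristic.writeValue(new Uint8Array([1]));
});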

All the code is up on GitHub here.

Next I need to see if there is a way to skip the device selection phase if the user has already paired the page and the device, to cut down the number of steps required before you can switch the lights on/off. I expect this may not be possible for privacy/security reasons at the moment. Even with the extra step it’s still quicker than waiting for the official Belkin WeMo app to load.

Physical Web Lightswitch

Physical Web Logo
As I’ve mentioned before, I’ve been having a play with a bunch of WeMo kit and also looking at using Bluetooth LE and Physical Web beacons. I’ve been looking at putting the 2 together to solve a problem I’ve been having round the house. (Also one mentioned by somebody [@mattb, I’m told by @knolleary] at this year’s ThingMonk.)

Belkin ship a mobile phone app to control their products but it has a few drawbacks:
Belkin WeMo App

  1. Launching the app takes ages
  2. Visitors need to know what type of lights you own, then they have to install the right app
  3. You have to give visitors access to your WiFi
  4. Once you’ve granted access to the WiFi, visitors have full control of your lights, even when they are no longer attached to the same network, with no way to revoke that access

(Having been reminded by @knolleary: more of these types of problems were discussed in @mattb’s ThingMonk talk this year.)

The Physical Web approach deals with the first 2 of these really nicely: a phone will detect the Eddystone beacon and offer a link to the control web page, so no app is needed and there’s no need to identify what type of devices you have, you just need to be close enough to one.

The second 2 problems are a little more tricky. Because of the way some privacy problems have been mitigated, at the moment you need a publicly accessible URL to get the best out of a Physical Web beacon. When a device detects a beacon it tries to access the URL in order to pull some summary/meta data to help with presenting it to the user; the problem is that this exposes the device’s IP address to whoever deployed the beacon, which allows for the possibility of tracking that user. The workaround in the Physical Web spec is that the URLs should be accessed via a proxy, shielding the device from the site, but these proxies are all on the public internet and can only see public sites. On the plus side, because everything is on the public internet you don’t need to give guests access to your network.

All this gets round problem number 3, but it means that the controls for your living room lights need to be publicly exposed to the internet. Which brings us nicely to problem 4: if it’s on the public internet, how do you control who has access? Once somebody has used the URL in the beacon it will be in their internet history and they can come back and mess with your lights from home again.

You can add authentication to the URL, but a careful balance needs to be struck over how long any signed-in session lasts, as you don’t want to be flapping around in the dark trying to enter a password to turn the lights on. While this is a big scary problem, there is a potential solution to all of it that I’ll touch on at the end of this post.

There is one other problem: URLs broadcast via Eddystone beacons have to be less than 18 bytes long, which is pretty short. While there are some encoding tricks for common start and end sections (e.g. ‘http://www.’ & ‘.com’) that reduce those sections to just 1 byte, that still doesn’t leave room for much more. You need to use a URL shortener to do anything major.

Trying things out

While thinking about all this I decided to spin up a little project (on GitHub) to have a play. I took the core code from my Node-RED WeMo node and wrapped it up in a little web app along with the bleno and eddystone-beacon npm modules. I could have done this with Node-RED but I wanted to be able to support control via straight BLE as well.

The code uses discovery to find the configured light bulb or group of bulbs

wemo.start();

if (!wemo.get(deviceID)) {
  wemo.on('discovered', function(d){
    if (d === deviceID) {
      device = wemo.get(d);
      console.log("found light");
    }
  });
} else {
  device = wemo.get(deviceID);
  console.log("found light");
}

It then starts up an Express webserver and creates 2 routes, one to toggle on/off and one to set the brightness.

app.post('/toggle/:on', function(req, res){
  console.log("toggle " + req.params.on);
  if (req.params.on === 'on') {
    wemo.setStatus(device,'10006',1);
  } else {
    wemo.setStatus(device,'10006',0);
  }
  res.send();
});

app.post('/dim/:range', function(req,res){
  console.log("dim " + req.params.range);
  wemo.setStatus(device,'10006,10008','1,' + req.params.range);
  res.send();
});

It also sets up a directory to serve static content out of. Once that is all set up it then starts the Eddystone beacon.

eddystone.advertiseUrl(config.shortURL, {name: config.name});

Enabling Physical Web on Your Phone

If you want to have a play yourselves there are a couple of ways to enable Physical Web discovery on your phone. The first is a mobile app built by the physicalweb.org guys; it’s available for both Android and iOS. When you launch this app it will seek out any Eddystone beacons in range and display a list along with a summary.

Recently Google announced that they were rolling Physical Web capability into their Chrome web browser. At the moment it is only available in the beta release. You can download it for Android here and this link has instructions for iOS. I have not tried the iOS instructions as I don’t have a suitable device.

Once it’s been installed these instructions explain how to enable it.

Now we have a beacon and some way to detect it, what does it look like?

Discovered beacons

The Physical Web app has detected 2 beacons (actually they are both the same beacon, using both BLE and mDNS to broadcast the URL). Both beacons are on my private network at the moment, so the proxy could not load any of the metadata to enrich the listing; it also could not resolve my private URL shortener http://s.loc. If I click on one of the beacons it takes me to a page to control the light. At the moment the interface is purely functional.

Light interface

This is working nicely enough for now, but it needs to be made to look a bit nicer and could do with showing what brightness level the bulb is currently set to.

Direct device communication

I mentioned earlier there was a possible solution to the public network access requirement and the authentication problem. There is a working group developing a specification to allow web pages to interact with local BLE devices. The Web Bluetooth API specification is not yet finished, but an early version is baked into Chrome (you can enable it via these instructions). This is something I intend to play with because it solves the whole public facing site problem and the question of how to stop guests keeping remote access to your lights. It doesn’t matter that you can download the control page if you still need to be physically close to the beacon to connect via BLE to control the lights.

I’ve added 2 BLE GATT characteristics to the beacon (1 for on/off and 1 for dimming) and when I get another couple of free hours I’m going to improve the webpage served up from the beacon to include this support. Once this works I can move the page to my public site and use a public URL shortener which should mean all the meta data will load properly.

All this also means that with the right cache headers the page only needs to be downloaded once and can then be loaded directly from the on-device cache in the future.
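In the Express app that’s just a matter of setting a max-age on the static middleware; something like this (the seven day value is an arbitrary choice):

var express = require('express');
var app = express();

// let the phone cache the control page and its assets for up to a week
app.use('/', express.static('static', { maxAge: '7d' }));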