Auto Launch Webpages full screen on Android

While waiting for Amazon to get round to reviewing my Node-RED Alexa Smart Home Skill I’ve needed something to hack on.

I’ve been keeping an eye on what folk have been doing with Node-RED Dashboard, and a common use case keeps coming up: running the UI on an Android tablet mounted on a wall as a way to keep track of and control things around the house. This got me thinking about how to set something like this up.

Icon

For the best results you really want to run this totally full screen. There are ways to get Chrome to do this, but they’re a bit convoluted. Sure, you can add the page as a shortcut on the home screen, but I thought there had to be an easier/better way.

So I started to have a bit of a play and came up with a new app. It’s basically just a full screen Activity with a WebView, plus a few extra features.

Settings
  • Set URL – Pick a URL to load when the phone boots, or the app is launched.
  • Launch on boot – The app can be configured to start up as soon as the phone finishes booting.
  • Take Screen Lock – This prevents the screen from powering off or locking so the page is always visible.

You can change the URL by swiping in from the left-hand edge of the screen; this brings up the action bar, from which you can select “Settings”.

Set URL to Load

The app is in the Google Play store here, and the code is on GitHub here.

Google Home

Having got hold of an Amazon Echo Dot just over a week ago, I was eager to get my hands on a Google Home to compare. Luckily I had a colleague in the US on release day who very kindly brought one back for me.

Google Home

It looks like the developer program will not be opening up until December, which is just as well as I need to get my Alexa project finished first.

My initial impression from a couple of hours of playing is that it’s very similar to the Echo, but it knows a lot more about me out of the box (due to Google already knowing all my secrets). I really do like the Chromecast integration for things like YouTube; I need to try the Echo with my FireTV stick to see if it can do anything similar. It’s still early days for the Google Home and it has only been released in the US, so it wouldn’t be totally fair to compare them too closely just yet.

I’ll keep this brief for now until I can get into the developer tools. It’s going to be a fun time working out just what I can get these two devices to do.

Alexa Home Skill for Node-RED

Following on from my last post this time I’m looking at how to implement Alexa Home Skills for use with Node-RED.

Home Skills provide on/off, temperature and percentage control for devices, which should map to most home automation tasks.

To implement a Home Skill there are several parts that need to be created.

Skill Endpoint

Unlike normal skills, which can be implemented as either HTTP or Lambda endpoints, Home Skills can only be implemented as a Lambda function. The Lambda function can be written in one of three languages: JavaScript, Python or Java. I’ve chosen to implement mine in JavaScript.

For Home Skills the request is passed in a JSON object and can be one of three types of message:

  • Discovery
  • Control
  • System

Discovery

These messages are triggered when you say “Alexa, discover devices”. The reply to this message is where the skill gets the chance to tell the Echo what devices are available to control and what sort of actions they support. Each device entry includes its name and a description to be shown in the Alexa phone/tablet application; a sketch of what a discovery reply might look like follows the list of actions below.

The full list of supported actions:

  • setTargetTemperature
  • incrementTargetTemperature
  • decrementTargetTemperature
  • setPercentage
  • incrementPercentage
  • decrementPercentage
  • turnOff
  • turnOn
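
As a rough illustration, a discovery reply with a single device might look something like this. The field names follow my reading of the v2 Smart Home Skill API documentation and the IDs/descriptions are made up, so treat it as a sketch rather than the definitive format:

{
  "header": {
    "namespace": "Alexa.ConnectedHome.Discovery",
    "name": "DiscoverAppliancesResponse",
    "payloadVersion": "2"
  },
  "payload": {
    "discoveredAppliances": [
      {
        "applianceId": "example-bedroom-light",
        "manufacturerName": "node-red",
        "modelName": "virtual-device",
        "version": "1",
        "friendlyName": "Bedroom Light",
        "friendlyDescription": "Bedroom light controlled by Node-RED",
        "isReachable": true,
        "actions": [ "turnOn", "turnOff", "setPercentage" ]
      }
    ]
  }
}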

Control

These are the actual control messages, triggered by something like “Alexa, set the bedroom lights to 20%”. Each one contains one of the actions listed earlier along with the on/off state or value of the change.
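
For example, “Alexa, set the bedroom lights to 20%” arrives at the Lambda function as something roughly like this (again, the shape is from my reading of the v2 API docs, and the appliance ID is whatever was returned during discovery):

{
  "header": {
    "namespace": "Alexa.ConnectedHome.Control",
    "name": "SetPercentageRequest",
    "payloadVersion": "2"
  },
  "payload": {
    "accessToken": "<token from account linking>",
    "appliance": {
      "applianceId": "example-bedroom-light"
    },
    "percentageState": {
      "value": 20.0
    }
  }
}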

System

This is the Echo system checking that the skill is all healthy.
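
Putting the three message types together, the skeleton of the Lambda function ends up being little more than a switch on the namespace in the request header. A minimal sketch (the handler function names are mine, and the namespaces are from my reading of the v2 docs):

// Lambda entry point (Node.js). The event is the JSON message from Alexa.
exports.handler = function (event, context) {
    switch (event.header.namespace) {
        case 'Alexa.ConnectedHome.Discovery':
            handleDiscovery(event, context);   // reply with the device list
            break;
        case 'Alexa.ConnectedHome.Control':
            handleControl(event, context);     // turnOn/turnOff/setPercentage/etc.
            break;
        case 'Alexa.ConnectedHome.System':
            handleSystem(event, context);      // health check
            break;
        default:
            context.fail('Unsupported namespace: ' + event.header.namespace);
    }
};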

Linking Accounts

In order for the skill to know whose Echo is connecting, we have to arrange a way to link an Echo to an account in the Skill. To do this we have to implement an OAuth 2.0 system. There is a nice tutorial on using passport to provide OAuth 2.0 services here, which I used to add the required HTTP endpoints.
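
That tutorial boils down to using oauth2orize alongside passport. A heavily trimmed sketch of the two pieces that matter for account linking might look like this (the storage helpers are hypothetical placeholders, not real functions):

var oauth2orize = require('oauth2orize');
var server = oauth2orize.createServer();

// Issue an authorization code once the user has logged in and consented.
server.grant(oauth2orize.grant.code(function (client, redirectURI, user, ares, done) {
    saveAuthCode(client.id, redirectURI, user.id, function (err, code) {      // hypothetical helper
        done(err, code);
    });
}));

// Exchange the authorization code for an access token; this is the call the
// Alexa service makes during account linking.
server.exchange(oauth2orize.exchange.code(function (client, code, redirectURI, done) {
    lookupAuthCode(code, function (err, authCode) {                          // hypothetical helper
        if (err || !authCode) { return done(err, false); }
        issueAccessToken(authCode.userId, client.id, function (err, token) { // hypothetical helper
            done(err, token);
        });
    });
}));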

Since there is a need to set up OAuth and to create accounts in order to authorise the OAuth requests, it makes sense to only do this once and to run it as a shared service for everybody (I’ve just got to work out where to host it and how to pay for it).

A Link to the Device

In this case the “device” is actually Node-RED, which is probably going to be running on somebody’s home network. That means something that can connect out to the Skill is best, to allow for traversing NAT routers. This sounds like a good use case for MQTT (come on, you knew it was coming). Rather than just use the built-in MQTT nodes, there is a custom set of nodes that make use of some of the earlier sections.

Firstly, a config node that uses the same authentication details as the account linking system to create an OAuth token, which is used to publish device details to the database and to authenticate with the MQTT broker.

Secondly, an input node that pairs with the config node. In the input node settings a device is defined and the actions it can handle are listed.
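
Under the covers the input node follows the standard Node-RED node pattern. A rough sketch, where the config node’s subscribe() method and the message shape are assumptions rather than the published code:

module.exports = function (RED) {
    function AlexaHomeNode(config) {
        RED.nodes.createNode(this, config);
        var node = this;

        // The config node holds the account credentials and the MQTT connection.
        node.account = RED.nodes.getNode(config.account);
        node.device = config.device;

        // Subscribe to control messages for this device and pass them on as
        // Node-RED messages.
        node.account.subscribe(node.device, function (command) {
            node.send({
                topic: command.action,      // e.g. turnOn, setPercentage
                payload: command.value
            });
        });
    }
    RED.nodes.registerType('alexa-home', AlexaHomeNode);
};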

Hooking it all together

At this point the end to end flow looks something like this:

Alexa -> Lambda -> HTTP to Web app -> MQTT to broker -> MQTT to Node-RED

For the moment I’ve left the HTTP app in the middle, but I’m looking at adding direct database access to the Lambda function so it can publish control messages directly via MQTT.

Enough talk, how do I get hold of it!

I’m beta testing it with a small group at the moment, but I’ll be opening it up to everybody in a few days. In the meantime the code is all on GitHub: the web side of all of this can be found here, the Lambda function is here and the Node-RED node is here. I’ll put it on npm as soon as the skill is public.

Alexa Skills with Node-RED

I had an Amazon Echo Dot delivered on release day; I was looking forward to having a voice-powered butler around the flat.

Amazon advertise WeMo support, but unfortunately they only support WeMo Sockets, and I have a bunch of WeMo bulbs that I’d love to get working.

Alexa supports 2 sorts of skills:

  • Normal Skills
  • Home Skills

Normal Skills are triggered by a keyword prefixed by either “Tell” or “Ask”. These are meant for information-retrieval type services, e.g. you can say “Alexa, ask Network Rail what time the next train to London leaves Southampton Central”, which would retrieve train timetable information. Implementing these sorts of skills can be as easy as writing a simple REST HTTP endpoint that can handle the JSON format used by Alexa. You can also use AWS Lambda as an endpoint.

Home Skills are a little trickier; these only support Lambda as the endpoint. This is not so bad as you can write Lambda functions in a bunch of languages like Java, Python and JavaScript. As well as the Lambda function you also need to implement a website to link some sort of account for each user to their Echo device, and as part of this you need to implement OAuth 2.0 authentication and authorisation, which can be a lot more challenging.

First Skill

The first step to create a skill is to define it in the Amazon Developer Console. Pick the Alexa tab and then the Alexa Skill Kit.

Defining a new Alexa Skill

I called my skill Jarvis and set the keyword that will trigger this skill also to Jarvis.

On the next tab you define the interaction model for your skill. This is where the language processing gets defined. You outline the entities (Slots in Alexa parlance) that the skill will interact with and also the sentence structure to match for each Intent. The Intents are defined in a simple JSON format e.g.

{
  "intents": [
    {
      "intent": "TVInput",
      "slots": [
        {
          "name": "room",
          "type": "LIST_OF_ROOMS"
        },
        {
          "name": "input",
          "type": "LIST_OF_INPUTS"
        }
      ]
    },
    {
      "intent": "Lights",
      "slots": [
        {
          "name": "room",
          "type": "LIST_OF_ROOMS"
        },
        {
          "name": "command",
          "type": "LIST_OF_COMMANDS"
        },
        {
          "name": "level",
          "type": "AMAZON.NUMBER"
        }
      ]
    }
  ]
}

Slots can be custom types that you define on the same page or can make use of some built in types like AMAZON.NUMBER.

Finally on this page you supply the sample sentences prefixed with the Intent name and with the Slots marked.

TVInput set {room} TV to {input}
TVInput use {input} on the {room} TV
TVInput {input} in the {room}
Lights turn {room} lights {command}
Lights {command} {room} lights {level} percent

The next page is used to set up the endpoint that will actually implement this skill. As a first pass I decided to implement an HTTP endpoint, as it should be relatively simple with Node-RED. There is a contrib package called node-red-contrib-alexa that supplies a collection of nodes to act as HTTP endpoints for Alexa skills. Unfortunately it has a really small bug that stops it working on Bluemix, so I couldn’t use it straight away. I’ve submitted a pull request that should fix things, so hopefully I’ll be able to give it a go soon.

The reason I want to run this on Bluemix is that endpoints need to have an SSL certificate, and Bluemix has a wildcard certificate for everything deployed on the .mybluemix.net domain, which makes things a lot easier. (Update: when configuring the SSL section of the skill in the Amazon Developer Console, pick “My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority” when using Bluemix.)

Alexa Skill Flow

The key parts of the flow are:

  • The HTTP-IN & HTTP-Response nodes to handle the HTTP Post from Alexa
  • The first switch node filters the type of request, sending IntentRequests for processing and rejecting other sorts
  • The second switch picks which of the 2 Intents defined earlier (TV or Lights) is being invoked; an IntentRequest looks like this:

    {
      "type": "IntentRequest",
      "requestId": "amzn1.echo-api.request.4ebea9aa-3730-4c28-9076-99c7c1555d26",
      "timestamp": "2016-10-24T19:25:06Z",
      "locale": "en-GB",
      "intent": { 
        "name": "TVInput", 
        "slots": { 
          "input": { 
            "name": "input",
            "value": "chromecast"
          }, 
          "room": {
            "name": "room",
            "value": "bedroom"
          }
        }
      }
    }
  • The Lights Function node parses out the relevant slots to get the command and device to control (a sketch of this node appears after the response code below)
  • The commands are output to an MQTT out node which publishes to a shared, password-secured broker; from there they are picked up by a second instance of Node-RED running on my home network and sent to the correct WeMo node to carry out the action
  • The final function node formats a response for Alexa to say (“OK”) when the Intent has been carried out

    // Build the minimal Alexa response body: say "OK" and end the session.
    var rep = {};
    rep.version = '1.0';
    rep.sessionAttributes = {};
    rep.response = {
        outputSpeech: {
            type: "PlainText",
            text: "OK"
        },
        shouldEndSession: true
    };

    // Hand the JSON back to the HTTP-Response node with the right content type.
    msg.payload = rep;
    msg.headers = {
        "Content-Type": "application/json;charset=UTF-8"
    };
    return msg;
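
For reference, the Lights function node mentioned above does little more than pull the slot values out of the request and build a message for the MQTT out node. Something along these lines (the topic layout here is purely illustrative, not necessarily what my flow uses):

// Sketch of the Lights function node. The earlier switch nodes have already
// checked this is an IntentRequest for the Lights intent, so just pull the
// slot values out of the request body posted by Alexa.
var slots = msg.payload.request.intent.slots;
var room = slots.room.value;                        // e.g. "bedroom"
var command = slots.command.value;                  // e.g. "on", "off", "dim"
var level = slots.level ? slots.level.value : null;

// Topic layout is an assumption for this sketch.
msg.topic = "home/" + room + "/lights";
msg.payload = { command: command, level: level };
return msg;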

It could do with some error handling and the TV Intent needs a similar MQTT output adding.

This works reasonably well, but having to say “Alexa, ask Jarvis to turn the bedroom light on” is a little long-winded compared to just “Alexa, turn on the bedroom light”; in order to use the shorter form you need to write a Home Skill.

I’ve started work on a Home Skill service and a matching Node-RED Node to go with it. When I’ve got it working I’ll post a follow up with details.

Provisioning WiFi IoT devices

or

“How do you get the local WiFi details into a consumer device that doesn’t have a keyboard or screen?”

It’s been an ongoing problem since we started to put WiFi in everything. The current solution seems to be to have the device behave like its own access point first, then connect a phone or laptop to this network in order to enter the details of the real WiFi network. While this works, it’s a clunky solution that relies on a user being capable of disconnecting from their current network and connecting to the new device-only network. It also requires the access point mode to be shut down in order to try the supplied WiFi credentials, which causes the phone/laptop to disconnect and normally reconnect to its usual network. This means that if the user makes a typo they have to wait for the device to realise the credentials are wrong and, at best, go back into access point mode, or the device has to be manually reset so it can start over. This really isn’t a good solution.

I’ve been trying to come up with a better solution, specifically for my linear clock project. You may ask why a wall clock needs to be connected to the internet, but since it’s Raspberry Pi powered, and the Pi has no persistent real time clock, it needs a way to set the time after power outages. I also want a way to allow users to change the colours for the hours, minutes and seconds and to adjust brightness levels.

In order to add WiFi to the Raspberry Pi Zero I’m using the RedBear IoT pHat, which as well as WiFi includes Bluetooth 4.0 support. This opens up a new avenue to use for configuration. The idea is to have the device start as an Eddystone beacon broadcasting a URL. The URL hosts a page that uses Web Bluetooth to connect to the device and allow the WiFi SSID and password to be entered. Assuming the credentials are correct, the device’s IP address can be pushed to the page, allowing a link to be presented to a page served by the device itself for the more day-to-day options.

RedBear IoT pHat

We had a hackday at the office recently and I had a go at implementing all of this.

Firstly the device comes up as an Eddystone beacon that broadcasts the URL of the configuration page.
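
With Don Coleman’s node-eddystone-beacon library this only takes a couple of lines; a sketch (the URL and TX power here are placeholders):

var eddystoneBeacon = require('eddystone-beacon');

// Broadcast the (short) URL of the configuration page as an Eddystone-URL
// frame. The URL and TX power are placeholders for this sketch.
eddystoneBeacon.advertiseUrl('https://example.com/setup', { txPowerLevel: -21 });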


When the page loads it shows a button to search for devices over Bluetooth. This is because the Web Bluetooth spec currently requires a user interaction to start a BLE session.
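
The button handler itself is plain Web Bluetooth; the gist is below, although the service and characteristic UUIDs are made up for the example (the real ones are in the GitHub repo):

// Placeholder UUIDs; the real ones are defined by the device firmware.
var WIFI_SERVICE = '3c0a1c7e-0000-1000-8000-00805f9b34fb';
var SSID_CHARACTERISTIC = '3c0a1c7f-0000-1000-8000-00805f9b34fb';

// requestDevice() has to be called from a user gesture, hence the button.
document.getElementById('search').addEventListener('click', function () {
    navigator.bluetooth.requestDevice({ filters: [{ services: [WIFI_SERVICE] }] })
        .then(function (device) { return device.gatt.connect(); })
        .then(function (gatt) { return gatt.getPrimaryService(WIFI_SERVICE); })
        .then(function (service) { return service.getCharacteristic(SSID_CHARACTERISTIC); })
        .then(function (characteristic) {
            // The password characteristic is written the same way.
            var ssid = document.getElementById('ssid').value;
            return characteristic.writeValue(new TextEncoder().encode(ssid));
        })
        .catch(function (err) { console.log('Web Bluetooth error: ' + err); });
});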


Chrome then displays a list of matching devices to connect to.


When the user selects a given device the form appears to allow the SSID and password to be entered and submitted.


Once the details are submitted the page will wait until the device has joined the WiFi network; it then displays a link directly to the device so any further configuration can be carried out.

The code for this is all up on GitHub here if anybody wants to play. The web front end needs a little work and I’m planning to add WiFi AP scanning and pre-population soon.

Now we just need Apple to get their finger out and implement Web Bluetooth.

Tinkerforge Node-RED nodes

For a recent project I’ve been working on a collection of different Node-RED nodes. Part of this set is a group of nodes to interact with Tinkerforge Bricks and Bricklets.

Tinkerforge is a platform of stackable devices and sensors/actuators that can be connected to via USB or attached directly to the network via Ethernet or WiFi.

Tinkerforge Stack

Collections of sensors/actuators, known as bricklets, are grouped together around a Master Brick. Each Master Brick can host up to 4 bricklets, and multiple Master Bricks can be stacked to add more capacity. The Master Brick also has a Micro USB socket which can be used to connect to a host machine running a daemon called brickd, which handles discovering the attached devices and exposes access to them via an API. There are also Ethernet (with and without PoE) and WiFi bricks which allow you to link the stack directly to the network.

As well as the Master Brick there is a RED Brick, which contains an ARM Cortex-A8 processor, SD card, USB socket and micro HDMI adapter. This runs a small Linux distribution (based on Debian) and hosts a copy of brickd.

There are bindings for the API in a number of languages including:

  • C/C++
  • C#
  • Delphi
  • Java
  • Javascript
  • PHP
  • Perl
  • Python
  • Ruby

For the Node-RED nodes I took the JavaScript bindings and wrapped them as Node-RED nodes. So far the following bricklets are supported:

Adding others shouldn’t be hard, but these were the ones I had access to in order to test.

All the nodes share a config node which is configured to point at the instance of brickd the sensors are attached to, and it then provides a filtered list of available bricklets for each node.
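
Each node is essentially a thin wrapper around the corresponding Tinkerforge JavaScript binding. As a flavour of what that looks like, here is a rough sketch for an ambient light bricklet; the node name and config node properties are assumptions rather than the published code:

var Tinkerforge = require('tinkerforge');

module.exports = function (RED) {
    function AmbientLightNode(config) {
        RED.nodes.createNode(this, config);
        var node = this;
        var conn = RED.nodes.getNode(config.connection);   // shared brickd config node

        // Connect to brickd and read the bricklet whenever a message arrives.
        var ipcon = new Tinkerforge.IPConnection();
        ipcon.connect(conn.host, conn.port);
        var sensor = new Tinkerforge.BrickletAmbientLight(config.uid, ipcon);

        node.on('input', function (msg) {
            sensor.getIlluminance(function (illuminance) {
                msg.payload = illuminance / 10;   // the bricklet reports lux * 10
                node.send(msg);
            });
        });
    }
    RED.nodes.registerType('tinkerforge-ambient-light', AmbientLightNode);
};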


The code for the nodes can be found here and is available on npmjs.org here.

Physical Web FatBeacons

I’ve been playing with Eddystone beacons (an open BLE beacon technology backed by Google) for a while. Combined with things like Web Bluetooth they make for a really nice way to interact with physical objects from mobile devices.

Physical Web Logo

Recently there has been a suggested extension to the Physical Web specification for something called a FatBeacon. A FatBeacon is one that, rather than advertising a URL to load a web page from, actually hosts the web page on the device and serves it up from a BLE characteristic. This means it becomes possible to interact with a device when there is no network back haul to load the page over.

This may sound a little strange in the western world where cellular coverage is ubiquitous, but there are places where coverage is still spotty, and it also allows for totally self-contained solutions. For simple things like my Physical Web Light Switch this would work well.

A request went up on the Physical Web GitHub page asking if anybody wanted to have a go at implementing a software stack to allow people to play with the concept, so I threw my hat into the ring.

My first thought was to make use of Sandeep Mistry’s bleno library, and I ran up a simple characteristic to serve up the HTML.

var bleno = require('bleno');
var util = require('util');

var html = "<html>" +
			"<head>" +
			 "<title>Fat Beacon Demo</title>" +
			 "<meta name='description' content='FatBeacon Demo'/>" +
			"</head>" +
			"<body>" +
			 "<h1>HelloWorld</h1>" +
			 "This is a test" +
			"</body>" +
		   "</html>";

function HTMLCharacteristic() {
	bleno.Characteristic.call(this, {
		uuid: 'd1a517f0249946ca9ccc809bc1c966fa',
		properties: ['read'],
		descriptors: [
			new bleno.Descriptor({
				uuid: '2901',
				value: 'HTML'
			})
		]
	});
}

util.inherits(HTMLCharacteristic,bleno.Characteristic);

// Long reads arrive as a series of requests with increasing offsets, so
// return the slice of the page starting at the requested offset.
HTMLCharacteristic.prototype.onReadRequest = function(offset, callback) {
	console.log("read request");
	var buf = new Buffer(html, 'utf8');
	if (offset < buf.length) {
		var slice = buf.slice(offset);
		callback(this.RESULT_SUCCESS, slice);
	} else {
		callback(this.RESULT_INVALID_OFFSET, null);
	}
};

module.exports = HTMLCharacteristic;

I wrapped this in a simple service, but I had missed the need to put the right bits in the advertisement packet; luckily Jacob Rosenthal spotted what needed doing and ported my code over to Don Coleman’s node-eddystone-beacon project.

After a couple of tweaks I now have it all working (on Linux; IIRC there are some limitations on OS X that mean you can’t connect to services with bleno). My code is available on GitHub here and I will be submitting a pull request to Don to get it included in the node-eddystone-beacon project soon.

The beacon shows up like this in the Physical Web Android app:

FatBeacon in Physical Web App

Which when loaded looks like this:

FatBeacon page

There are some interesting security questions still to be looked at, but it’s an area well worth exploring.

Lightbar clock

I’ve been meaning to get on with this project for ages. It was originally going to be based on the Espruino Pico, but I never quite got round to sorting it all out.

I even prototyped it as a little webpage.

In the end I used a Raspberry Pi Zero and Node-RED to drive it all. It uses a 60 segment RGB LED strip I picked up off eBay, driven by the node-red-node-pi-neopixel node from a function node.


// Build 60 RGB values, one per LED, lighting a 5 LED block for the hour
// hand and single LEDs for the minute and second hands.
var date = new Date();
var red = [];
var green = [];
var blue = [];
var b = 125; // brightness

for (var i=0; i<60; i++) {
    red.push(0);
    green.push(0);
    blue.push(0);
}

var array = [[]];

// Hour hand: 5 red LEDs starting at the hour position.
var hours = date.getHours();
var hs = ((hours + 12) % 12) * 5;
for (var i=0; i<5; i++) {
    red[hs + i] = b;
}

// Minute hand: a single green LED.
var m = date.getMinutes();
green[m] = b;

// Second hand: a single blue LED.
var s = date.getSeconds();
blue[s] = b;

// One message per LED in the "index,r,g,b" form expected by
// node-red-node-pi-neopixel, all sent from the single output.
for (var i=0; i<60; i++) {
    array[0].push({
        payload: i + "," +
            red[i] + "," +
            green[i] + "," +
            blue[i]
    })
}
return array;

It just needs a nice mount sorting to hang it on the wall.

Further work may include adding a light sensor to vary the brightness depending on the ambient light levels, plus a nice way to configure a WiFi dongle so it can join the network for NTP time updates and so a user could set the timezone.

OwnTracks Encrypted Location Node-RED Node

OwnTracks Logo

At the weekend I ran the London Marathon, while I’m glad I did it, I have no desire to do it again (ask me again in 6 months).

My folks came down to watch me, and to help them work out where on the course I was, I ran with a phone strapped to my arm running OwnTracks. This was pointing at the semi-public broker running on my machine on the end of my broadband. In order to keep some degree of privacy I had enabled the symmetric encryption facility.

As well as my family using the data I had run up a very simple logging script with the mosquitto_sub command to record my progress (I was also tracking it with my Garmin watch, but wanted to see how Owntracks did in comparison).

# mosquitto_sub -t 'owntracks/Ben/#' > track.log

Before I started I hadn’t looked at what sort of encryption was being used, but a little bit of digging in the source and a pointer in the right direction from Jan-Piet Mens got me to the libsodium library. I found a Node.js implementation on npm and hacked up a quick little script to decode the messages.

var readline = require('readline').createInterface({
  input: require('fs').createReadStream('track.log')
});

var sodium = require('libsodium-wrappers');

var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

readline.on('line',function(line){
  var msg = JSON.parse(line);
  if (msg._type == 'encrypted') {
    // The data field is base64 encoded, with the 24 byte nonce prepended
    // to the cipher text.
    var cypherText = new Buffer(msg.data, 'base64');
    var nonce = cypherText.slice(0,24);
    // The key is the passphrase zero-padded to 32 bytes.
    var key = new Buffer(32);
    key.fill(0);
    key.write('xxx');
    var clearText = sodium.crypto_secretbox_open_easy(cypherText.slice(24),nonce,key,"text");
    console.log(clearText);
  }
});

Now that I had the method worked out, it made sense to turn it into a Node-RED node so encrypted location streams can easily be consumed. I also added a little extra functionality: it copies the lat & lon values into a msg.location object so they can easily be consumed by my node-red-node-geofence node and Dave’s worldmap node. The original decrypted location is left in the msg.payload field.
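
In practice that amounts to something like this for each decrypted message (a sketch of the relevant part of the node):

// After decrypting, expose the position where the geofence and worldmap
// nodes expect it, keeping the full decrypted message in the payload.
var position = JSON.parse(clearText);
msg.location = { lat: position.lat, lon: position.lon };
msg.payload = position;
node.send(msg);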

Owntracks node

The source for the node can be found on GitHub here and the package is on npm here.

To install, run the following in your ~/.node-red directory:

npm install node-red-contrib-owntracks

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 Ideapad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn’t go too well, and the screen size and resolution were way too low to do anything serious with it. It ended up with Windows 7 back on it, permanently plugged in because the battery is toast, and mainly being used to sync my Garmin watch to Strava.

Anyway, it was well past time for something a little more useful. I’d had my eye on the Dell XPS13 for a while, and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops installed with Linux for a while under a program called Project Sputnik. This project ensures that all the built in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of Q1 2016.

As well as comparing it to the U410, I’ve also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a somewhat long-in-the-tooth Lenovo W530 ThinkPad). The table below shows some of the key stats.

          U410 IdeaPad     W530 ThinkPad      XPS13
Weight    2kg              2.6kg (+0.75kg)    1.3kg
CPU       i5-3317U         i7-3740QM          i7-6560U
Memory    6GB              16GB               16GB
Disk      1TB              512GB              512GB (SSD)
Screen    14″ 1366×768     15.6″ 1920×1080    13.3″ 3200×1800

The thing I like best is the weight; lugging the W530 around is a killer, so getting to leave it on the desk at the office a little more will be great.

Dell XPS13

As for actually running the machine, it’s been very smooth so far. It comes with Ubuntu 14.04 with a couple of Dell-specific tweaks/backports from upstream. I’m normally a Fedora user, so Ubuntu as my main OS may take a bit of getting used to, and 14.04 is a little old at the moment. 16.04 is due to ship soon, so I look forward to updating to see how it fares. I’ve swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I may still swap the whole thing for Fedora 24 when it ships to get something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the driver support for this on Linux is reported to still be a little brittle. While I wait for that to settle down I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. It is a DisplayLink device, but things seem to be a lot better than when I last tried to get a DisplayLink device to work; there is a new driver direct from DisplayLink that seems to just work.