Category Archives: Work

Native NodeJS callbacks with Context

As I mentioned back in September I’ve recently started a new job. Due to the nature of the work I’m not going to be able to talk about much of it, but when there are things I can share I’ll try and write them up here.

One of my first tasks has been to write a Node-RED wrapper around a 3rd party native library. This library provides a two-way communication channel to a prototyping environment, so it needs to use threads to keep track of things in both directions and callbacks to pass that information back to the JavaScript side. I dug around for some concrete examples of what I was trying to do and, while I found a few things that came close, nothing was exactly what I was looking for, so here is a stripped-back version of the node I created to use as a reference for later.

This is the C++ method that is called when a new instance of the native node module is created. It takes an object reference as an argument to be stored away and used as the context for the callback later.

void Test::New(const Nan::FunctionCallbackInfo<v8::Value>& info) {
  if (info.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    Test* obj = new Test();
    obj->Wrap(info.This());
    info.GetReturnValue().Set(info.This());

    v8::Local<v8::Object> context = v8::Local<v8::Object>::Cast(info[0]);
    obj->context.Reset(context);

    uv_loop_t* loop = uv_default_loop();
    uv_async_init(loop, &obj->async, asyncmsg);
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 2;
    v8::Local<v8::Value> argv[argc] = { info[0], info[1] };
    v8::Local<v8::Function> cons = Nan::New<v8::Function>(constructor);
    info.GetReturnValue().Set(Nan::NewInstance(cons,argc,argv).ToLocalChecked());
  }
}

The object is created like this on the JavaScript side, where this is the reference to the object to be used as the context:

function Native() {
  events.EventEmitter.call(this);
  //passes "this" to the C++ side for callback
  this._native = new native.Test(this);
}

We then make the callback from C++ here:

void Test::asyncmsg(uv_async_t* handle) {
  Nan::HandleScope scope;

  //retrieve the context object
  Test* obj = (Test*)((callbackData*)handle->data)->context;
  v8::Local<v8::Object> context = Nan::New(obj->context);

  //create object to pass back to javascript with result
  v8::Local<v8::Object> response = Nan::New<v8::Object>();
  response->Set(Nan::New<v8::String>("counter").ToLocalChecked(), Nan::New(((callbackData*)handle->data)->counter));

  v8::Local<v8::Value> argv[] = { response };
  ((Nan::Callback*)((callbackData*)handle->data)->cb)->Call(context,1,argv);
  free(handle->data);
}

Which ends up back on the javascript side of the house here:

Native.prototype._status = function(status) {
  this.emit("loop", status.counter);
}

I’ve uploaded the code to github here if you want to have a look at the whole stack and possibly use it as a base for your own project.

Time for something new

It looks like the IBM Process Server work flow has run and my entry in Bluepages (IBM’s internal LDAP-backed employee directory) has been expunged. So after pretty much exactly 16 years at IBM it’s time for something new.

I started at IBM straight after I finished my masters (to the extent that I handed my thesis in on the Friday in Cranfield, drove back to Yorkshire on the Saturday morning, did as much washing as possible and then drove down to Southampton on the Sunday to check into the hotel at Marwell Zoo for the start of the induction week).

While at IBM I worked in 2 teams: first the Java Technology Centre and then Emerging Technologies & Services.

Java Technology Centre


Most of my time in this group was spent working in the Level 3 Support team. At the time the IBM JVM underpinned a large proportion of the IBM software stack, which meant it was always our fault (until proven otherwise) when something broke. This was a great team to work for: every morning (and later, when the phone rang at 3am) there was a new batch of problems to solve, the team helped each other out, and we were always learning. I’d like to thank Mark Bluemel, my original team leader, for teaching me that the customer is not always right, and that sometimes the quickest way to solve a problem was to point this out to them (just as long as you had all the evidence to back it up). It helped hone my engineering instinct to dig into problems and find the underlying cause.

As I said earlier, the JVM used to underpin a large proportion of IBM’s software offerings, and this brought me into contact with a large number of product teams and their customers based all round the world. In later years, when I became one of the two go-to guys (with Chris Bailey) for management to send on-site at really short notice to solve problems, I got to meet a lot of these folks in person and not just at the end of an IM chat window or conference call. This period also taught me the ways of airline/hotel points schemes and how to “work” a corporate travel booking system (thanks Flavio), and took me to some places I probably wouldn’t normally have chosen to visit (2.5 weeks in Seoul, a winter of Mon-Fri in the German countryside), even if on some visits I saw little more than an air-conditioned office and a cookie-cutter hotel room.

In the end the only reason I moved on from this group was that by the time a customer reached me they were usually not the happiest camper, and the best I could do was get them back to a content state where things were working again. While there was a great deal of satisfaction in this, it did start to grate a little towards the end.

Emerging Technologies & Services


ETS was always THE place to work in Hursley; they have all the best toys, and it’s hard to argue with a team that had its own armoured car (unfortunately returned a few years ago)!

It is a small team that works on just about anything going, specialising in whatever was new and interesting that we could convince a client to pay for. We would poke round both IBM Research and anything else in the public domain looking for something interesting, then go looking for a client that wanted to try something on the bleeding edge. Projects vary from just one member of the team working with a client, or offering support to one of the other IBM services teams, up to 3-4 people delivering something a little bigger. They include things like bits for Wimbledon (a social media analysis system and network-attached light level sensors), a set of pedestals to control the video walls in the IBM Southbank Forum, controlling TVs using telepathy, and a 10 year research program around Network and Information Science for the US/UK defence sector. The team also runs hackdays and innovation and design thinking workshops with clients.

This is the team that invented Node-RED (much kudos to Nick and Dave) along with a bunch of other cool tech like GaianDB and Edgware Fabric.

The team has had a bit of a shuffle round recently and now sits even closer to the IBM Research folk; hopefully this will make it easier for them to grab the latest and greatest new and shiny stuff coming down the pipe.

Next

On the whole I enjoyed my time at IBM and I’ll miss all the great people I worked with, but it was just time to try something new.

As for what that will be, I’ll let you know more once I’ve actually started (beginning of November) and worked out just how much of it I’m allowed to talk about, but given some recent public announcements it sounds like it could all be VERY interesting. Watch this space.

Multipart HTTP Post requests with Volley on Android

It’s been a little while since I’ve done any really serious Android development, but a couple of projects have brought me back to it.

Early on in one of those projects I had to make some HTTP requests. My first thought was to make use of the Apache HTTP Client classes, as I had done many times before on Android, which is why I was a little surprised when the usual ctrl-space didn’t auto-complete any of the expected class names.

It turns out the classes were removed in Android 6.0 and the notice suggests using the HttpURLConnection class. A little bit more digging turned up a wrapper for this called Volley.

Volley is a wrapper round the HttpURLConnection class that provides a neat asynchronous interface, doing IO in the background and then delivering results to the main thread so UI updates can be done without further faffing around switching threads. There is also a nice set of tutorials on the Android Developers pages.

The first few requests all worked fine, but there was one which was a little bit more tricky: the HTTP endpoint in question accepts a multipart form payload. A bit of googling/searching on Stack Overflow turned up a number of approaches, and the best seemed to be documented in this gist.

This was close to what I wanted but not quite what I needed, so I have taken some of the core concepts and built my own MultipartRequest object.

...
MultipartRequest request = new MultipartRequest(url, headers, 
    new Response.Listener<NetworkResponse>() {
        @Override
        public void onResponse(NetworkResponse response) {
        ...
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
        ...
        }
    });
    
request.addPart(new FormPart(fieldName,value));
request.addPart(new FilePart(fileFieldName, mimeType, fileName, data));

requestQueue.add(request);
...

I’ve stuck the code up on github here. You can include it in your Android Project by adding the following to the build.gradle in the root of the project:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
  }
}

And then this to the dependencies section of the modules build.gradle:

dependencies {
  compile 'com.github.hardillb:MultiPartVolley:0.0.3'
}

Tinkerforge Node-RED nodes

For a recent project I’ve been working on a collection of different Node-RED nodes. Part of this set is a group of nodes to interact with Tinkerforge Bricks and Bricklets.

Tinkerforge is a platform of stackable devices and sensors/actuators that can be connected to via USB or can be attached directly to the network via Ethernet or Wifi.

Tinkerforge Stack

Collections of sensors/actuators, known as bricklets, are grouped together round a Master Brick. Each Master Brick can host up to 4 bricklets, but multiple Master Bricks can be stacked to add more capacity. The Master Brick also has a Micro USB socket which can be used to connect to a host machine running a daemon called brickd, which handles discovering the attached devices and exposes access to them via an API. There are also Ethernet (with and without PoE) and WiFi bricks which allow you to link the stack directly to the network.

As well as the Master Brick there is a RED Brick which contains an ARM Cortex-A8 processor, SD card, USB socket and micro HDMI adapter. This runs a small Linux distribution (based on Debian) and hosts a copy of brickd.

There are bindings for the API in a number of languages including:

  • C/C++
  • C#
  • Delphi
  • Java
  • Javascript
  • PHP
  • Perl
  • Python
  • Ruby

For the Node-RED nodes I took the JavaScript bindings and wrapped them as Node-RED nodes, covering the bricklets I had access to in order to test with. Adding others shouldn’t be hard.

All the nodes share a config node which is configured to point to the instance of brickd that the bricklets are attached to; it then provides a filtered list of available bricklets for each node.
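One small detail worth noting on units: the bindings generally hand back raw integer values (the temperature bricklet, for instance, reports in hundredths of a degree Celsius, per the Tinkerforge docs), so each node needs a small conversion step before putting a reading into a message. A minimal sketch, with a function name of my own invention:

```javascript
// Convert a raw Tinkerforge temperature reading (hundredths of a
// degree Celsius) into a Node-RED style message with the value in C.
function temperatureToMsg(raw) {
  return {
    topic: 'temperature',
    payload: raw / 100
  };
}

console.log(temperatureToMsg(2372)); // { topic: 'temperature', payload: 23.72 }
```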

Node-RED - Google Chrome_004

The code for the nodes can be found here and is available on npmjs.org here

GPIO buttons – Demo stand

I recently got asked to help out building a demo for one of the projects the team has been working on.

The project uses situational awareness to vary the level of authentication needed to carry out different tasks. Rather than make the person using the demo run all over the place to change location or log in and out of various systems we have stubbed out the inputs and needed a way to toggle them locally. The demo runs on a tablet so we wanted something that could sit near by and allow the inputs to be varied.

To do this I thought I’d have a play with a Raspberry Pi and its GPIO pins. The plan was to attach 5 buttons to toggle the state of the inputs; the buttons have lights that show the state.

This would have been a perfect use case for the new Raspberry Pi Zero as it only needs 1 USB port and the GPIO pins, but I couldn’t convince any of the guys on the team lucky enough to grab one to give it up.

The chosen buttons were 5 of these:

http://uk.rs-online.com/web/p/push-button-switches/7027140/

I’d used something very similar in the past for the MQTT Powered Video Walls in the IBM Southbank Client Centre.

The first single button prototype worked nicely after I’d worked out what was needed with pull-up resistors for the buttons and current limiting resistors for the LEDs.

Button prototype

Node-RED was set up to read the state of the buttons from the GPIO pins and to toggle the LED outputs the same way.

Node-RED flow to read buttons

To make the demo totally standalone the rPi acts as a WiFi access point that the tablet connects to. It uses this connection to load the demo and to read the updates from the buttons via MQTT over Websockets.

The USB WiFi adapter available was an Edimax EW-7811UN. The default Linux driver for this doesn’t support hostapd, but luckily there is a page with instructions and a github project with a functional version.

Mosquitto 1.4.2 was installed to act as the MQTT broker and to support MQTT over websockets, so the web app could connect directly to get the button updates.
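The websockets support has to be switched on explicitly in mosquitto.conf; the relevant bit looks something along these lines (the port numbers here are just the conventional ones, pick your own as needed):

```conf
# standard MQTT listener
listener 1883

# additional listener speaking MQTT over websockets
listener 9001
protocol websockets
```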

Having finished off the wiring and configuring the rPi the whole thing was mounted in a display board put together by Andy Feltham.

And it ended up looking like this, all ready for an overlay with a description and instructions.

Completed Board

Sublime Text 3 UML tool

I’ve been using Sublime Text as my editor of choice for about a year. Every now and again I install another little plugin to help out.

For my current project we’ve been using plantuml to build sequence diagrams and UML models for the data structures.

I had a bit of a search round for a suitable plugin that would use plantuml to generate the images on the fly from within Sublime. I found this project on github that looked perfect.

It was working fine, but it had 2 small niggles: the first was that it called out to Eye of GNOME to actually display the images, which was a bit of a pain. The second was the random chars it appends to the end of the file names of the images it generates.

Sublime with EoG

There was an open issue that seemed to describe the first problem and a comment from the original author stating that he didn’t have time but would welcome contributions, so I thought I’d have a look at how hard it would be to add what was needed.

It turned out to be very simple:

from .base import BaseViewer
import sublime

class Sublime3Viewer(BaseViewer):
	def load(self):
		if not sublime.version().startswith('3'):
			raise Exception("Not Sublime 3!")

	def view(self,diagram_files):
		for diagram_file in diagram_files:
			sublime.active_window().open_file(diagram_file.name)

I’ve submitted a pull request or you can pull from my fork.

Side By Side

I’ve also tweaked the code to remove the random chars in the name in my version, but I’ve not checked that in yet as I don’t think the upstream author will accept that change at the moment.

EDIT:
In the meantime you can install my version direct from my git repo with the following steps

  1. Open the Command Palette, Tools -> Command Palette
  2. Select Package Control: Add Repository
  3. In the text box add the git repo address and hit return – https://github.com/hardillb/sublime_diagram_plugin
  4. Open the Command Palette again
  5. Select Package Control: Install Package
  6. Type in “sublime_diagram_plugin”

Securing Node-RED

Node-RED added some new authentication/authorisation code in the 0.10 release that allows for a pluggable scheme. In this post I’m going to talk about how to use this to make Node-RED use an LDAP server to look up users.

HTTPS

First of all, to do this properly we will need to enable HTTPS to ensure the communication channel between the browser and Node-RED is properly protected. This is done by adding a https value to the settings.js file like this:

...
},
https: {
  key: fs.readFileSync('privkey.pem'),
  cert: fs.readFileSync('cert.pem')
},
...

You also need to un-comment the var fs = require('fs'); line at the top of settings.js.

You can generate the privkey.pem and cert.pem with the following commands in your node-red directory:

pi@raspberrypi ~/node-red $ openssl genrsa -out privkey.pem
Generating RSA private key, 1024 bit long modulus
.............................++++++
......................++++++
e is 65537 (0x10001)
pi@raspberrypi ~/node-red $ openssl req -new -x509 -key privkey.pem -out cert.pem -days 1095
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:Hampshire
Locality Name (eg, city) []:Eastleigh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Node-RED
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:raspberrypi.local
Email Address []:

The important bit is the Common Name value; this needs to match either the name or IP address that you will use to access your Node-RED console. In my case I have avahi enabled so I can access my pi using its host name raspberrypi with .local as the domain, but you may be more used to using an IP address like 192.168.1.12.

Since this is a self signed certificate your browser will reject it the first time you try to connect with a warning like this:

Chrome warning about unsafe cert.

This is because your certificate is not signed by one of the trusted certificate authorities. You can get past this error by clicking on Advanced then Proceed to raspberrypi.local (unsafe). With Chrome this error will be shown every time you access the page; you can avoid this by copying the cert.pem file to your client machine and importing it into Chrome:

  1. Open Chrome settings page chrome://settings
  2. Scroll to the bottom of page and click on the “+Show advanced settings” link
  3. Scroll to the HTTPS/SSL and click on “Manage certificates…”
  4. Select the Servers tab and select import
  5. Select the cert.pem you copied from your Raspberry Pi

Usernames and Passwords

In previous Node-RED releases you could set a single username and password for the admin interface, for any static content, or one that covered both. This was done by adding an object to the settings.js file containing the user name and password. This was useful but could be a little limited. Since the 0.10 release there is now a pluggable authentication interface that also includes support for things like read-only access to the admin interface. Details of these updates can be found here.

To implement an authentication plugin you need to create a NodeJS module based on this skeleton:

var when = require("when");

module.exports = {
   type: "credentials",
   users: function(username){
      //returns a promise that checks the authorisation for a given user
      return when.promise(function(resolve) {
         if (username == 'foo') {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   authenticate: function(username, password) {
      //returns a promise that completes when the user has been authenticated
      return when.promise(function(resolve) {
         if (username == 'foo' && password == 'bar' ) {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   default: function() {
      // Resolve with the user object for the default user.
      // If no default user exists, resolve with null.
      return when.promise(function(resolve) {
         resolve(null);
      });
   }
};

This comprises 3 functions: one to authenticate a user against the backend, one to check the level of authorisation (used by Node-RED’s built-in oAuth mechanism once a user has been authenticated), and finally default, which matches unauthenticated users.

For my LDAP example I’m not going to implement different read only/write levels of authorisation to make things a little easier.

The source for the module can be found here or on npmjs here. The easiest way to install it is with:

npm install -g node-red-contrib-ldap-auth

Then edit the Node-RED settings.js to include the following:

adminAuth: require('node-red-contrib-ldap-auth').setup({
    uri:'ldap://url.to.server', 
    base: 'ou=group,o=company.com', 
    filterTemplate: 'mail={{username}}'
}),

The filterTemplate is a mustache template used to build the LDAP search filter for the username; in this case the username is a mail address.
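To illustrate what happens with that template (this is just the mustache substitution behaviour sketched with a tiny stand-in function of my own, not the module’s actual internals), rendering it against a username produces the LDAP search filter:

```javascript
// Minimal stand-in for mustache rendering, enough to show how the
// filterTemplate above turns a username into an LDAP search filter.
function renderFilter(template, username) {
  return template.replace(/{{\s*username\s*}}/g, username);
}

var filter = renderFilter('mail={{username}}', 'user@company.com');
console.log(filter); // mail=user@company.com
```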

Once it’s all up and running Node-RED should present you with something that looks like this:

Node-RED asking for credentials
Node-RED asking for credentials

Node-RED Geofence node

We’ve been looking at some new location based uses for Node-RED recently. To this end we’ve been standardising on adding the information under the msg.location property.

msg: {
  topic: "foo",
  payload: "bar",
  location: {
    lat: 51.02477,
    lon: -1.39724
  }
}
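The core of the circular-region check is just a great-circle distance compared against a radius. A sketch of the sort of test involved (haversine formula, radius in metres; the function names here are mine, not the node’s):

```javascript
// Great-circle distance between two points in metres (haversine).
function haversine(lat1, lon1, lat2, lon2) {
  var R = 6371000; // mean Earth radius in metres
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// True if msg.location falls inside a circular region.
function insideCircle(msg, centre, radiusMetres) {
  var loc = msg.location;
  return haversine(loc.lat, loc.lon, centre.lat, centre.lon) <= radiusMetres;
}

var msg = { payload: 'bar', location: { lat: 51.02477, lon: -1.39724 } };
console.log(insideCircle(msg, { lat: 51.0248, lon: -1.3972 }, 100)); // true
```

Rectangular and polygon regions are just different containment tests against the same msg.location property.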

As part of this I’ve been writing a geofence node. This will allow messages to be filtered against specific regions.

The first pass worked well, supporting both circular and rectangular regions, but configuring it by entering coordinates is not the most user-friendly.

Screenshot of the coordinate-based config dialog

After a bit of playing with leafletjs and the leaflet.draw plugin I managed to come up with this which should be a lot more intuitive.

Geofence config dialog

At the moment it needs to load the leaflet js and css from an external server, until we come up with a way to bundle static content inside nodes. This shouldn’t really be a problem as it’s having to load the map tiles from the internet as well. I’d like to find a way to pass in a base URL for the map server and to gracefully fall back to the text version if there is no access to a map server.

The map version also supports creating arbitrary polygon regions by joining up multiple points.

I’m still trying to get the config dialog to gracefully fall back to the basic version when there is no access to a map server, but I’m having problems detecting the failure to download a map tile. Any pointers would be gratefully accepted.

The code is on github and on npm. It can be installed with:

npm install node-red-node-geofence

LDAP and NFC Node-RED Nodes

About a week ago a colleague asked me to help resurrect some code I had written to use our work ID badges to look up information on the card owner in order to log into a system for a demonstration.

The ID badges are basically MIFARE cards so can be read by an NFC reader. The content of the cards is encrypted, but each card has a unique ID. Unfortunately the security team will not share the mapping of these IDs to actual people, but since this is for a demonstration that will only be given by a relatively small number of people it’s not a problem to set up a list of mappings ourselves.

The original version of this code used nfc-eventd and some Java code to read the IDs, then did a lookup in a little database to convert them to email addresses. It worked, but was a bit of a pig to set up and move between machines as it required a number of different apps and config files, so I decided to have a go at rewriting it all in Node-RED.

NFC ID Flow

To do this I was going to need 2 new nodes: one to read the NFC card and one to look up details in the LDAP directory. Both of these actually proved reasonably easy and quick to write, as there are existing Node NPM modules that do most of the heavy lifting. The flow has a couple of extra bits: it uses a mongodb to store the ID to email address mappings, and if there is no match it uses websockets to populate a field in a separate web page where an email address can be entered to update the database.

NFC

I did a first pass using the nfc npm module and it worked, but there was no way to shut down the connection to the NFC reader in code, which meant I couldn’t clean up properly when Node-RED shut down or when the node needed to be restarted.

The nfc module on npmjs.org is actually a bit out of date compared to the git repository it’s hosted in, so I moved over to using the upstream version of the code. This changed the API a little but still didn’t have a mechanism to allow the interface to be stopped. I forked the project and after a little bit of playing I ended up with some working shutdown code.

The only callback is for when an NFC tag is detected, and it polls in a tight loop, so the stream of data from the node is really too high to feed into a Node-RED flow. The Node-RED wrapper rate limits this to reporting the same tag only once every 10 seconds. This is good enough for the original problem I was looking to solve, but I still think it can be done better. I’m planning on adding callbacks for when a tag is seen and when it is removed, similar to how nfc-eventd works. I also want to look at doing NDEF decoding.
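The rate limiting itself is straightforward. A sketch of the kind of filter the wrapper applies (the names and the injectable clock are mine, chosen so the behaviour is easy to test, and not the wrapper’s actual code):

```javascript
// Report a tag ID only if it has not been seen in the last `interval` ms.
// `now` defaults to Date.now but can be injected for testing.
function makeTagFilter(interval, now) {
  now = now || Date.now;
  var lastSeen = {};
  return function (tagId) {
    var t = now();
    if (lastSeen[tagId] !== undefined && t - lastSeen[tagId] < interval) {
      return false; // suppress the repeated sighting
    }
    lastSeen[tagId] = t;
    return true; // pass the reading on to the flow
  };
}

// simulated clock stepping 1 second per call
var clock = 0;
var filter = makeTagFilter(10000, function () { return (clock += 1000); });
console.log(filter('04a2b3')); // true  - first sighting
console.log(filter('04a2b3')); // false - seen 1 second ago
```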

You can install the current version of the node with:

npm install https://github.com/hardillb/node-red-contrib-nfc/archive/master.tar.gz

It depends on libnfc, which should work on Linux and OS X, and I’ve even seen instructions to build it for Windows. Once I’ve got a bit further I’ll add it to npmjs.org.

LDAP

This one was even simpler. The LDAP npm module links to the OpenLDAP libraries and does all the hard work.

It just needed a config dialog creating that takes a base DN and a filter, and a connection setup that takes a server, port and, if needed, a bind DN and password. The filter is a mustache template so values can be passed in.

This node is pretty much done, you can find the code on github here and the node can be installed with the following:

npm install node-red-node-ldap

Like with the NFC node, OpenLDAP should be available for Linux and OS X, and there looks to be a Windows port.

Running Node-RED as a Windows or OS X Service

For a recent project I needed to run Node-RED on windows and it became apparent that being able to run it as a service would be very useful.

After a little poking around I found an npm module called node-windows.

You install node-windows as follows:

npm install -g node-windows

followed by:

npm link node-windows

in the root directory of your project. This is a 2 stage process as node-windows works better when installed globally.

Now the npm module is installed, you configure the Windows service by writing a short nodejs app. This windows-service.js should work for Node-RED:

var Service = require('node-windows').Service;

var svc = new Service({
  name:'Node-Red',
  description: 'A visual tool for wiring the Internet of Things',
  script: require('path').join(__dirname,'red.js')
});

svc.on('install',function(){
  svc.start();
});

svc.on('uninstall',function(){
  console.log('Uninstall complete.');
  console.log('The service exists: ',svc.exists);
});

if (process.argv.length == 3) {
  if ( process.argv[2] == 'install') {
    svc.install();
  } else if ( process.argv[2] == 'uninstall' ) {
    svc.uninstall();
  }
}

Run the following to install the service:

node windows-service.js install

and to remove the service:

node windows-service.js uninstall

There is also an OS X version of node-windows called node-mac; the same script with a small change should work on both:

if (process.platform === 'win32') {
  var Service = require('node-windows').Service;
} else if (process.platform === 'darwin') {
  var Service = require('node-mac').Service;
} else {
  console.log('Not Windows or OSx');
  process.exit(1);
}

var svc = new Service({
  name:'Node-Red',
  description: 'A visual tool for wiring the Internet of Things',
  script: require('path').join(__dirname,'red.js')
});


svc.on('install',function(){
  svc.start();
});

svc.on('uninstall',function(){
  console.log('Uninstall complete.');
  console.log('The service exists: ',svc.exists);
});

if (process.argv.length == 3) {
  if ( process.argv[2] == 'install') {
    svc.install();
  } else if ( process.argv[2] == 'uninstall' ) {
    svc.uninstall();
  }
}

I have submitted a pull request to include this in the base Node-RED install.

EDIT:

I’ve added node-linux to the pull request as well to generate /etc/init.d SystemV start scripts.