OwnTracks Encrypted Location Node-RED Node

OwnTracks Logo

At the weekend I ran the London Marathon. While I'm glad I did it, I have no desire to do it again (ask me again in 6 months).

My folks came down to watch me, and to help them work out where on the course I was, I ran with a phone strapped to my arm running OwnTracks. This was pointing at the semi-public broker running on my machine on the end of my broadband. In order to keep some degree of privacy I had enabled the symmetric encryption facility.

As well as my family using the data, I had run up a very simple logging script with the mosquitto_sub command to record my progress (I was also tracking it with my Garmin watch, but wanted to see how OwnTracks did in comparison).

# mosquitto_sub -t 'owntracks/Ben/#' > track.log

Before I started I hadn't looked at what sort of encryption was being used, but a little bit of digging in the source and a pointer in the right direction from Jan-Piet Mens got me to the libsodium library. I found a Node.js implementation on npm and hacked up a quick little script to decode the messages.

var readline = require('readline').createInterface({
  input: require('fs').createReadStream('track.log')
});

var sodium = require('libsodium-wrappers');

readline.on('line', function(line){
  var msg = JSON.parse(line);
  if (msg._type === 'encrypted') {
    // the payload is base64 encoded; the first 24 bytes are the nonce
    var cypherText = new Buffer(msg.data, 'base64');
    var nonce = cypherText.slice(0,24);
    // the key is the passphrase zero-padded to 32 bytes
    var key = new Buffer(32);
    key.fill(0);
    key.write('xxx');
    var clearText = sodium.crypto_secretbox_open_easy(cypherText.slice(24), nonce, key, "text");
    console.log(clearText);
  }
});

Now I had the method worked out it made sense to turn it into a Node-RED node so encrypted location streams could easily be consumed. I also added a little extra functionality: it copies the lat & lon values into a msg.location object so they can easily be consumed by my node-red-node-geofence node and Dave's worldmap node. The original decrypted location is left in the msg.payload field.
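
For example, a decrypted location message coming out of the node might look roughly like this (the field values are illustrative rather than real data from my run):

// msg.payload is the decrypted OwnTracks location report, msg.location
// holds just the lat & lon for the geofence and worldmap nodes
{
  payload: { _type: 'location', lat: 51.47, lon: -0.29, tst: 1461495600, batt: 90 },
  location: { lat: 51.47, lon: -0.29 }
}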

OwnTracks node

The source for the node can be found on GitHub here and the package is on npm here.

To install, run the following in your ~/.node-red directory:

npm install node-red-contrib-owntracks

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 Ideapad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn't go too well, and the screen size and resolution were way too low to do anything serious on it. It ended up with Windows 7 back on it, permanently on power because the battery is toast, and mainly being used to sync my Garmin watch to Strava.

Anyway, it was well past time for something a little more seriously useful. I'd had my eye on the Dell XPS13 for a while, and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops installed with Linux for a while under a program called Project Sputnik. This project ensures that all the built in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of 1Q 2016.

As well as comparing it to the U410, I've also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a slightly long in the tooth Lenovo W530 ThinkPad). The table below shows some of the key stats:

         U410 IdeaPad       W530 ThinkPad        XPS13
Weight   2kg                2.6kg (+0.75kg)      1.3kg
CPU      i5-3317U           i7-3740QM            i7-6560U
Memory   6GB                16GB                 16GB
Disk     1TB                512GB                512GB (SSD)
Screen   14″ 1366×768       15.6″ 1920×1080      13.3″ 3200×1800

The thing I like best is the weight; lugging the W530 around is a killer, so getting to leave it on the desk at the office a little more will be great.

Dell XPS13

As for actually running the machine, it's been very smooth so far. It comes with Ubuntu 14.04 with a couple of Dell-specific tweaks/backports from upstream. I'm normally a Fedora user, so Ubuntu as my main machine may take a bit of getting used to, and 14.04 is a little old at the moment. 16.04 is due to ship soon so I look forward to updating it to see how it fares. I've swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I still may swap the whole thing for Fedora 24 when it ships to pick up something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the driver support for this on Linux is reported to still be a little brittle. While I wait for it to settle down a little I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. It is a DisplayLink device, but things seem to be a lot better than when I last tried to get a DisplayLink device to work. There is a new driver direct from the DisplayLink guys that seems to just work.

Adding 2 Factor Authentication to Your ExpressJS App

Over the last few years barely a week has gone by without some mention of a breach at an online organisation, followed by the usual round of password reset requests, or news of somebody's account being compromised.

One method to help reduce the risk of a compromised password is to use something called 2 Factor Authentication. 2FA makes use of the "Something you know, Something you have" approach to authentication. The "Something you know" is your password and the "Something you have" is some form of token. The token is used to generate a unique code for each login attempt. There are several different types of token in general use these days.

Hypersecu HyperFIDO token

  • Time based tokens – These have been around for a while in the form of RSA tokens that companies hand out to employees to use with things like VPNs. A more modern variant of this is the Google Authenticator application (Android & iPhone).
  • Smart card readers – Some banks hand these out to customers; you insert your card into the device, enter your PIN and it generates a one time password.
  • Hardware tokens – These are plugged into your computer and generate a token on demand. Examples include the Yubico Neo and the Hypersecu HyperFIDO, which implement the FIDO U2F standard. At the moment only Chrome supports using these tokens to authenticate with a website, but there is ongoing work to add support to Firefox.

In the rest of this post I'm going to talk about adding Google Authenticator and FIDO U2F support to a NodeJS/ExpressJS application.

Basic Username/password

Most ExpressJS apps use a middleware plugin called PassportJS to provide authentication. I'll use this to do the normal username/password stage.

var http = require('http');
var express = require('express');
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var mongoose = require('mongoose');
// session, flash and body parsing middleware are needed for the login
// flow below (req.body, req.flash and passport.session())
var bodyParser = require('body-parser');
var session = require('express-session');
var flash = require('connect-flash');

var Account = require('./models/account');
var app = express();

var port = (process.env.VCAP_APP_PORT || process.env.PORT || 3000);
var host = (process.env.VCAP_APP_HOST || '0.0.0.0');
var mongo_url = (process.env.MONGO_URL || 'mongodb://localhost/users');

app.set('view engine', 'ejs');
app.use(bodyParser.urlencoded({ extended: false }));
// the session secret here is a placeholder
app.use(session({ secret: 'change this secret', resave: false, saveUninitialized: false }));
app.use(flash());
app.use(passport.initialize());
app.use(passport.session());

passport.use(new LocalStrategy(Account.authenticate()));
passport.serializeUser(Account.serializeUser());
passport.deserializeUser(Account.deserializeUser());

mongoose.connect(mongo_url);

app.use('/',express.static('static'));
app.use('/secure', ensureAuthenticated, express.static('secure'));

function ensureAuthenticated(req,res,next) {
  if (req.isAuthenticated()) {
    return next();
  } else {
    res.redirect('/login');
  }
}

app.get('/login', function(req,res){
  res.render('login',{ message: req.flash('info') });
});

app.post('/login', passport.authenticate('local', { failureRedirect: '/login', successRedirect: '/secure', failureFlash: true }));

app.get('/newUser', function(req,res){
  res.render('register', { message: req.flash('info') });
});

app.post('/newUser', function(req,res){
  Account.register(new Account({ username : req.body.username }), req.body.password, function(err, account) {
    if (err) {
      console.log(err);
      return res.status(400).send(err.message);
    }

    passport.authenticate('local')(req, res, function () {
      console.log("created new user %s", req.body.username);
      res.status(201).send();
    });
  });
});

var server = http.Server(app);
server.listen(port, host, function(){
  console.log('App listening on  %s:%d!', host, port);
});

Here we have a pretty basic Express app that serves public static content from a directory and renders a login page template with EJS that takes a username and password to access a second "secure" directory of static content. It also has a page to register a new user. All the user information is stashed in a MongoDB database using Mongoose.
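
The Account model isn't shown above; a minimal sketch of what it is assumed to look like, using the passport-local-mongoose plugin (which supplies the authenticate(), serializeUser(), deserializeUser() and register() helpers used in the app), would be:

// models/account.js - a minimal sketch, assuming passport-local-mongoose
var mongoose = require('mongoose');
var passportLocalMongoose = require('passport-local-mongoose');

var AccountSchema = new mongoose.Schema({});

// adds the username/salt/hash fields plus the static helpers used above
AccountSchema.plugin(passportLocalMongoose);

module.exports = mongoose.model('Account', AccountSchema);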

Google Authenticator TOTP

Next we will add support for the Google Authenticator app. There is a PassportJS plugin (passport-totp) that will handle the actual authentication, but we need a way to enrol the site in the application. The app can read its configuration data from a QR code, which makes setup simple. In the ExpressJS app we need to add the following routes:

var TOTPStrategy = require('passport-totp').Strategy;
var G2FA = require('./models/g2fa');
// a base32 encoder (e.g. the 'thirty-two' npm module) and a genSecret()
// helper that returns a random secret string are assumed to be in scope

app.get('/setupG2FA', ensureAuthenticated, function(req,res){
  G2FA.findOne({'username': req.user.username}, function(err,user){
    if (err) {
      res.status(400).send(err);
    } else {
      var secret;
      if (user !== null) {
        secret = user.secret;
      } else {
        //generate random key
        secret = genSecret(10);
        var newToken = new G2FA({username: req.user.username, secret: secret});
        newToken.save(function(err,tok){});
      }
      var encodedKey = base32.encode(secret);
      var otpUrl = 'otpauth://totp/2FADemo:' + req.user.username + '?secret=' + encodedKey + '&period=30&issuer=2FADemo';
      var qrImage = 'https://chart.googleapis.com/chart?chs=166x166&chld=L|0&cht=qr&chl=' + encodeURIComponent(otpUrl);
      res.send(qrImage);
    }
  });
});

app.post('/loginG2FA', ensureAuthenticated, passport.authenticate('totp'), function(req, res){
  req.session.secondFactor = 'g2fa';
  res.send();
});

2FA Demo App enrolled in Google Authenticator

The first route checks to see if the user has already set up the Authenticator app; if not it generates a new random secret and then uses this to build a URL for Google's Chart API. This API is used to generate a QR code with the enrolment information. As well as the shared secret, the QR code contains the name of the application and the user's name so it can easily be identified within the app. The secret is stashed in the MongoDB database along with the username so we can get it back later.

The second route verifies that the code provided by the application is correct and adds a flag to the user's session to say it passed.
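
For passport.authenticate('totp') to work, the TOTP strategy also needs registering with Passport; that wiring isn't shown in the snippet above, but assuming the G2FA model it would look something like this:

// sketch: tell passport-totp how to find the shared secret for a user;
// the callback hands back the secret and the TOTP period (30 seconds)
passport.use(new TOTPStrategy(function(user, done) {
  G2FA.findOne({'username': user.username}, function(err, doc) {
    if (err) { return done(err); }
    if (!doc) { return done(new Error('no TOTP secret for this user')); }
    return done(null, doc.secret, 30);
  });
}));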

The following code is embedded in a page to actually show the QR code and then verify that the app is generating the right values.

// googleButton, xhr (an XMLHttpRequest) and clearWorkspace() are assumed
// to be defined elsewhere on the page
googleButton.onclick = function setupGoogle() {
  clearWorkspace();
  xhr.open('GET', '/setupG2FA', true);
  xhr.onreadystatechange = function () {
    if(xhr.readyState == 4 && xhr.status == 200) {
      var message = document.createElement('p');
      message.innerHTML = 'Scan the QR code with the app then enter the code in the box and hit submit';
      document.getElementById('workspace').appendChild(message);
      var qrurl = xhr.responseText;
      var image = document.createElement('img');
      image.setAttribute('src', qrurl);
      document.getElementById('workspace').appendChild(image);
      var code = document.createElement('input');
      code.setAttribute('type', 'number');
      code.setAttribute('id', 'code');
      document.getElementById('workspace').appendChild(code);
      var submitG2FA = document.createElement('button');
      submitG2FA.setAttribute('id', 'submitG2FA');
      submitG2FA.innerHTML = 'Submit';
      document.getElementById('workspace').appendChild(submitG2FA);
      submitG2FA.onclick = function() {
        var pass = document.getElementById('code').value;
        console.log(pass);
        var xhr2 = new XMLHttpRequest();
        xhr2.open('POST', '/loginG2FA', true);
        xhr2.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr2.onreadystatechange = function() {
          if (xhr2.readyState == 4 && xhr2.status == 200) {
            clearWorkspace();
            document.getElementById('workspace').innerHTML='Google Authenticator all setup';
          } else if (xhr2.readyState == 4 && xhr2.status !== 200) {
            clearWorkspace();
            document.getElementById('workspace').innerHTML='Error setting up Google Authenticator';
          }
        }
        xhr2.send('code='+pass);
      }
    } else if (xhr.readyState == 4 && xhr.status !== 200) {
      document.getElementById('workspace').innerHTML ="error setting up Google 2FA";
    }
  };
  xhr.send();
}

FIDO U2F

Now the slightly more complicated bit, setting up the U2F support. This is a 2 stage process: first we need to register the token with the site.
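
The u2f helper, the U2F_Reg Mongoose model and the app_id used in the routes below aren't shown in the post; they are assumed to be set up along these lines (the 'u2f' npm module provides the request(), checkRegistration() and checkSignature() calls used here):

// assumed setup for the routes below - the model path and URL are placeholders
var u2f = require('u2f');
var U2F_Reg = require('./models/u2f_reg');
var app_id = 'https://2fa-demo.example.com';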

app.get('/registerU2F', ensureAuthenticated, function(req,res){
  try{
    var registerRequest = u2f.request(app_id);
    req.session.registerRequest = registerRequest;
    res.send(registerRequest);
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

app.post('/registerU2F', ensureAuthenticated, function(req,res){
  var registerResponse = req.body;
  var registerRequest = req.session.registerRequest;
  var user = req.user.username;
  try {
    var registration = u2f.checkRegistration(registerRequest,registerResponse);
    var reg = new U2F_Reg({username: user, deviceRegistration: registration });
    reg.save(function(err,r){});
    res.send();
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

The first route generates a registration request from the app_id; for a simple site this is just the root URL for the site (it needs to be an HTTPS URL). This is sent to the web page, which passes it to the token. The token responds with a signed response that includes a public key and a certificate. These are stored away with the username to be used for authentication later.

app.get('/authenticateU2F', ensureAuthenticated, function(req,res){
  U2F_Reg.findOne({username: req.user.username}, function(err, reg){
    if (err) {
      res.status(400).send(err);
    } else {
      if (reg !== null) {
        var signRequest = u2f.request(app_id, reg.deviceRegistration.keyHandle);
        req.session.signrequest = signRequest;
        req.session.deviceRegistration = reg.deviceRegistration;
        res.send(signRequest);
      }
    }
  });
});

app.post('/authenticateU2F', ensureAuthenticated, function(req,res){
  var signResponse = req.body;
  var signRequest = req.session.signrequest;
  var deviceRegistration = req.session.deviceRegistration;
  try {
    var result = u2f.checkSignature(signRequest, signResponse, deviceRegistration.publicKey);
    if (result.successful) {
      req.session.secondFactor = 'u2f';
      res.send();
    } else {
      res.status(400).send();
    }
  } catch (err) {
    console.log(err);
    res.status(400).send();
  }
});

The authentication is a similar process: the first route generates a random challenge and sends it to the web page, which asks the token to sign the challenge. The signed challenge is passed back to the site, which uses the stored public key/certificate to check that the correct token signed it.

We should now have a site that allows users to register and then enrol both the Google Authenticator app and a FIDO U2F token in order to do 2FA.
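
The routes above only set req.session.secondFactor; one possible way (not part of the original demo code) to actually require the second factor before serving the secure content is a small extra piece of middleware:

// sketch: only let requests through once a second factor has been verified
function ensureSecondFactor(req, res, next) {
  if (req.session.secondFactor) {
    return next();
  }
  // '/twoFactor' is a hypothetical page offering the G2FA/U2F login options
  res.redirect('/twoFactor');
}

app.use('/secure', ensureAuthenticated, ensureSecondFactor, express.static('secure'));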

All the code for this demo is hosted on GitHub here and a working example is on Bluemix here.

Adding Web Bluetooth to the Lightswitch

I’ve just got Web Bluetooth working properly this weekend to finish off my Physical Web lightswitch.

Web Bluetooth is a draft API to allow web pages to interact directly with Bluetooth LE devices. There is support in the latest Chrome builds on Android and it can be turned on with a flag: enable-web-bluetooth (also coming to Linux in version 50.x).

Some folks have already been playing with this stuff and done things like controlling a BB-8 droid from Chrome.

Chrome Physical Web Notification

I started off following the instructions here, which got me going. One of the first things I ran into was that in order to use Web Bluetooth the page needs to be loaded from a trusted source, which basically means localhost or an HTTPS-enabled site. I'd already run into this with the Physical Web stuff, as Chrome won't show details of a discovered URL unless it points to an HTTPS site, and it even barfs on self-signed/private CA certificates. I got round this by using a letsencrypt.org certificate (which reminds me, I really need to change my domain registrar so I can get back to setting up DNSSEC).

Now that I was allowed to actually use the API I had a small problem discovering the BLE device. I had initially thought I would be able to filter local devices based on the Primary Services they possessed, something like this:

navigator.bluetooth.requestDevice({
    filters: [{
        services: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
    }]
})

Web Bluetooth Device Discovery

But after not getting any devices returned I had to reach out on Stack Overflow with this question. This turned out to be because the beacon was only advertising 1 of its 3 Primary Services along with the URL. The answer to my question, posted by Jeffrey Yasskin, pointed me at using a device name prefix and listing the alternative services the device should provide. I'm going to have a look at the code for the eddystone-beacon node to see if it can be altered to advertise more of the services as well as a URL.

navigator.bluetooth.requestDevice({
    filters: [{
        namePrefix: 'Light'
    }],
    optionalServices: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
})

This now allows the user to select the correct device if there is more than one within range. Once selected, the web app switches over from making POSTs to the REST control endpoints to talking directly to the device via BLE. The device surfaces 2 characteristics at the moment, one for toggling on and off and one to set the brightness level.
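
The BLE side of the page then boils down to something like the following sketch (the characteristic UUID here is a placeholder rather than the one the project actually uses):

// connect to the chosen device's GATT server and write to a characteristic
navigator.bluetooth.requestDevice({
    filters: [{ namePrefix: 'Light' }],
    optionalServices: ['ba42561b-b1d2-440a-8d04-0cefb43faece']
})
.then(function(device) { return device.gatt.connect(); })
.then(function(server) {
    return server.getPrimaryService('ba42561b-b1d2-440a-8d04-0cefb43faece');
})
.then(function(service) {
    // placeholder UUID for the on/off characteristic
    return service.getCharacteristic('ba42561c-b1d2-440a-8d04-0cefb43faece');
})
.then(function(characteristic) {
    // 1 = on, 0 = off
    return characteristic.writeValue(new Uint8Array([1]));
});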

All the code is up on GitHub here.

Next I need to see if there is a way to skip the device selection phase if the user has already paired the page and the device, to save on the number of steps required before you can switch the lights on/off. I expect this may not be possible for privacy/security reasons at the moment. Even with the extra step it's still quicker than waiting for the official Belkin WeMo app to load.

Physical Web Lightswitch

Physical Web Logo
As I've mentioned before, I've been playing with a bunch of WeMo kit and also looking at using Bluetooth LE and Physical Web beacons. I've been looking at putting the 2 together to solve a problem I've been having round the house. (Also one mentioned by somebody [@mattb I'm told by @knolleary], sorry I can't remember who, at this year's ThingMonk.)

Belkin ship a mobile phone app to control their products but it has a few drawbacks:
Belkin WeMo App

  1. Launching the app takes ages
  2. Visitors need to know what type of lights you own, then they have to install the right app
  3. You have to give visitors access to your WiFi
  4. Once you've granted access to the WiFi, visitors have full control of your lights, including when they are no longer attached to the same network, with no way to revoke access

(Having been reminded by @knolleary -> more of these types of problem are discussed in @mattb's ThingMonk talk from this year.)

The Physical Web approach deals with the first 2 of these really nicely: a phone will detect the Eddystone beacon and offer a link to the control web page, so no app is needed and there is no need to identify what type of device you have, you just need to be close enough to it.

The second 2 problems are a little more tricky. Because of the way some privacy problems are mitigated at the moment, to get the best out of a Physical Web URL it needs to be publicly accessible. This is because when a device detects a beacon it tries to access the URL in order to pull some summary/metadata to help with presenting it to the user. The problem with this is that it exposes the device's IP address to whoever deployed the beacon, which allows for the possibility of tracking that user. The workaround in the Physical Web spec is that the URLs should be accessed via a proxy, shielding the device from the site. The catch is that these proxies are all on the public internet and can only see public sites. On the plus side, because everything is on the public internet you don't need to give guests network access.

All this gets round problem number 3, but means that the control for your living room lights needs to be publicly exposed to the internet. Which brings us nicely to problem 4: if it's on the public internet, how do you control who has access? Once somebody has used the URL in the beacon it will be in their internet history and they can come back and mess with your lights from home again.

You can add authentication to the URL, but a careful balance will need to be struck over how long any signed-in session lasts, as you don't want to be flapping around in the dark trying to enter a password to turn the lights on. While this is a big scary problem there is a potential solution, which I'll touch on at the end of this post.

There is one other problem: URLs broadcast via Eddystone beacons have to be less than 18 bytes long, which is pretty short. While there are some encoding tricks for common start and end sections (e.g. 'http://www.' & '.com') that reduce those sections to just 1 byte, that still doesn't leave room for much more. You need to use a URL shortener to do anything major.

Trying things out

While thinking about all this I decided to spin up a little project (on GitHub) to have a play. I took the core code from my Node-RED WeMo node and wrapped it up in a little web app along with the bleno and eddystone-beacon npm modules. I could have done this using Node-RED but I wanted to be able to support control via straight BLE as well.

The code uses discovery to find the configured light bulb or group of bulbs:

wemo.start();

if (!wemo.get(deviceID)) {
  wemo.on('discovered', function(d){
    if (d === deviceID) {
      device = wemo.get(d);
      console.log("found light");
    }
  });
} else {
  device = wemo.get(deviceID);
  console.log("found light");
}

It then starts up an ExpressJS web server and creates 2 routes, one to toggle on/off and one to set the brightness:

app.post('/toggle/:on', function(req, res){
  console.log("toggle " + req.params.on);
  if (req.params.on === 'on') {
    wemo.setStatus(device,'10006',1);
  } else {
    wemo.setStatus(device,'10006',0);
  }
  res.send();
});

app.post('/dim/:range', function(req,res){
  console.log("dim " + req.params.range);
  wemo.setStatus(device,'10006,10008','1,' + req.params.range);
  res.send();
});
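
The control page just POSTs to these routes, so driving them is as simple as something like this (a sketch, not the actual page code):

// turn the light on, then set it to roughly half brightness
fetch('/toggle/on', { method: 'POST' });
fetch('/dim/128', { method: 'POST' });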

It also sets up a directory to serve static content from. Once that is all done it sets up the Eddystone beacon:

eddystone.advertiseUrl(config.shortURL, {name: config.name});

Enabling Physical Web on Your Phone

If you want to have a play yourselves there are a couple of ways to enable Physical Web discovery on your phone. The first is a mobile app built by the physicalweb.org guys; it's available for both Android and iOS. When you launch this app it will seek out any Eddystone beacons in range and display a list along with a summary.

Recently Google announced that they were rolling Physical Web capability into their Chrome web browser. At the moment it is only available in the beta release. You can download it on Android here and this link has instructions for iOS. I have not tried the iOS instructions as I don't have a suitable device.

Once it’s been installed these instructions explain how to enable it.

Now we have a beacon and some way to detect it, what does it look like?

Discovered beacons

The Physical Web app has detected 2 beacons (actually they are both the same beacon, using both BLE and mDNS to broadcast the URL). Both beacons are on my private network at the moment so the proxy could not load any of the metadata to enrich the listing; it could also not resolve my private URL shortener http://s.loc. If I click on one of the beacons then it will take me to a page to control the light. At the moment the interface is purely functional.

Light interface

This is working nicely enough for now, but it needs to be made to look a bit nicer and could do with showing what brightness level the bulb is currently set to.

Direct device communication

I mentioned earlier that there was a possible solution to the public network access requirement and authentication. There is a working group developing a specification to allow web pages to interact with local BLE devices. The Web Bluetooth API specification is not yet finished but an early version is baked into Chrome (you can enable it via these instructions). This is something I intend to play with because it solves the whole public-facing site problem and how to stop guests keeping remote access to your lights. It doesn't matter that you can download the control page if you still need to be physically close to the beacon to connect via BLE to control the lights.

I've added 2 BLE GATT characteristics to the beacon (1 for on/off and 1 for dimming) and when I get another couple of free hours I'm going to improve the web page served up from the beacon to include this support. Once this works I can move the page to my public site and use a public URL shortener, which should mean all the metadata will load properly.
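
As a rough illustration of what one of those characteristics could look like with bleno (the UUID is a placeholder and the characteristic still needs wiring into a PrimaryService, so treat this as a sketch rather than the project's actual code):

var bleno = require('bleno');

// hypothetical on/off characteristic: writing 1 turns the bulb on, 0 off
var onOffCharacteristic = new bleno.Characteristic({
  uuid: 'ba42561cb1d2440a8d040cefb43faece',
  properties: ['write'],
  onWriteRequest: function(data, offset, withoutResponse, callback) {
    var on = data.readUInt8(0) === 1;
    wemo.setStatus(device, '10006', on ? 1 : 0);
    callback(bleno.Characteristic.RESULT_SUCCESS);
  }
});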

All this also means that with the right cache headers the page only needs to be downloaded once and can then be loaded directly from the on-device cache in the future.

Flic.io Linux library

As I mentioned, I recently got my hands on a set of 4 flic.io buttons. I pretty much immediately paired one of them with my phone and started playing. It soon became obvious that while fun, the use cases for a button paired to a phone were limited to a single-user environment and not what I had in mind.

What was needed was a way to hook the flic buttons up to something like a Raspberry Pi and Node-RED. While I was waiting for the buttons to arrive I was poking round the messages posted to the Indiegogo campaign, where one of the guys from Shortcut Labs mentioned that such a library was in the works. I reached out to their developer contact point asking about getting access to the library to build a Node-RED node around it, saying I was happy to help test any code they had. Just before Christmas I got hold of an early beta release to have a play with.

From that I was able to spin up an npm module and a Node-RED node.

The Node-RED node will currently listen for any buttons that are paired with the computer and publish a message indicating whether it was a single, double or long click.

Flic.io Node-RED node

I said I would sit on these nodes until the library shipped, but it appeared on GitHub yesterday, hence this post. The build includes binaries for Raspberry Pi, i386 and x86_64 and needs the very latest BlueZ packages (5.36+).

Both my nodes need a little bit of cleaning up and a decent README writing; once that is done I'll push them to npm.

UPDATE:
Both nodes are now on npmjs.org:
https://www.npmjs.com/package/node-flic-buttons
https://www.npmjs.com/package/node-red-contrib-flic-buttons

New WeMo Nodes for Node-RED

Based on my previous playing with a set of Belkin WeMo sockets and lightbulbs I decided to have a go at improving support in Node-RED.

I've built 2 new nodes: a control node and an event node.

WeMo Control Node

The control node accepts the following values in msg.payload:

  • on/off
  • 1/0
  • true/false
  • A JSON object like this (see the example after this list)
    {
      state: 'on',
      brightness: 255,
      color: '255,0,0'
    }
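
For example, a function node feeding the control node could set a bulb to red at full brightness like this (a sketch based on the payload format above):

// build a WeMo control payload: on, full brightness, red
msg.payload = {
    state: 'on',
    brightness: 255,
    color: '255,0,0'
};
return msg;
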
WeMo Event node

The input (event) node now uses UPnP events rather than polling status from the devices every 2 seconds. This means you won't get the "nc" (no change) messages, but you will get events when lights change brightness or colour as well as on/off messages.

Both nodes use a shared config node that uses UPnP discovery to locate WeMo devices on your local network, so you don't have to work out what IP address and port number they are using.

WeMo Device discovery

Discovery runs once a minute to ensure all devices are found and any change in IP address or port number is quickly picked up. First discovery may take a little while, so please allow a little time if you don't see all the devices you expect listed when you look in the config node.

All the code is up on GitHub here. I'll push the nodes to npmjs after people have given them a bit more of a test, and I'll have a chat with the Node-RED guys about maybe swapping out the original WeMo node. There is basic backwards compatibility with the original WeMo node, but the nodes work better if, after upgrading, you use the configuration dialog to pick a discovered device from the list.

Update to node-red-node-ldap

I’ve updated the node-red-node-ldap node to use the ldapjs node rather than the LDAP node.

Node-RED LDAP node

The LDAP node was a NodeJS wrapper round the OpenLDAP libraries, which meant there was an external native dependency and limited the number of platforms the node could be deployed on.

The ldapjs module is pure JavaScript so should run everywhere.

Everything should be backwards compatible, but raise issues on GitHub if you find problems.

Version 0.0.2 can be found on npmjs here and on GitHub here.

DNSSEC and Letsencrypt

A couple of tweets from a colleague over the Christmas period along with some jobs I’d been saving up made me have another look at the DNS and HTTPS set up for a couple of sites I look after.

DNSSEC

I've been meaning to play with DNSSEC for a while, especially since I run my own primary DNS and set up DKIM to verify my mail server's identity (yeah, I know in this day and age of cloud, running all your own services is a little quaint, but I like to understand how everything works).

This is a good introduction to DNSSEC if you're not up to speed. TL;DR: DNSSEC allows you to tell when people have been messing with your DNS entries.

To set up DNSSEC you need to create 2 sets of keys, a zone signing key (ZSK) and a key signing key (KSK); you can create them with the following commands respectively.

$ dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE hardill.me.uk
Generating key pair..................+++ .............+++
Khardill.me.uk.+007+40400
$ dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE hardill.me.uk
Generating key pair....................................................................................................................................................................................................................................................++ ................................................................................++ 
Khardill.me.uk.+007+23880

Key generation requires a lot of random numbers and these come from /dev/random; its values are generated from the system entropy pool, so this can take a long time on a machine that isn't doing very much. To help with this I installed the haveged daemon.

Now that I have the 2 sets of keys I need to include the public (.key) parts at the end of my zone file with the following lines:

$INCLUDE Khardill.me.uk.+007+43892.key
$INCLUDE Khardill.me.uk.+007+23880.key

Now we can use these keys to actually sign the zone with the dnssec-signzone command; the NSEC3 setup takes a salt to help with security. The $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) generates a 16-character random string to act as the salt.

$ dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o hardill.me.uk -t hardill.me.uk.db
Verifying the zone using the following algorithms: NSEC3RSASHA1.
Zone fully signed:
Algorithm: NSEC3RSASHA1: KSKs: 1 active, 0 stand-by, 0 revoked
                         ZSKs: 1 active, 0 stand-by, 0 revoked
hardill.me.uk.db.signed
Signatures generated:                       25
Signatures retained:                         0
Signatures dropped:                          0
Signatures successfully verified:            0
Signatures unsuccessfully verified:          0
Signing time in seconds:                 1.129
Signatures per second:                  22.143
Runtime in seconds:                      1.274

This generates 2 files. The first is hardill.me.uk.db.signed, which is an updated version of the zone file with the signed hashes included for each entry. The second is dsset-hardill.me.uk., which holds the DS hashes for my 2 keys. The DS entries are hosted by the layer above my domain in the DNS hierarchy, so that anybody wanting to verify the data can walk the tree from the signed root zone, checking each level before moving on. To get the DS entries into the zone above you normally have to go through your domain name registrar, who would in this case ask Nominet (as the keeper of the me.uk domain) to host them for me. Unfortunately my registrar (I won't name them here) claims to be unable to pass this request on to Nominet. I need to see if I can get Nominet to do it for me, but I'm not confident, so I'm currently in the market for a new registrar; any recommendations welcome.

In the meantime I decided to test the rest of it out on the private TLD I run on my LAN. I can get round the need for a DS record by telling BIND to trust my key explicitly using the trusted-keys directive in named.conf. To get this far I followed this set of instructions, which covers the manual steps for DNSSEC. There are also instructions to get BIND to automatically sign zones, which is especially useful if you are doing dynamic DNS updates; this page has instructions for that, which I'll be looking at once I get things sorted to have my DS records hosted properly.

Letsencrypt

The letsencrypt project has a goal of providing everybody with free SSL certificates signed by a CA from the collection commonly included in modern browsers. It had been in private beta for most of last year, but went into public beta at the start of December so I could sign up. Letsencrypt will generate a certificate for any domain you can prove you own; you do this using a protocol called ACME, and they have written a client to help with it. ACME works over HTTP/HTTPS by placing a hash value at a known location, either via an existing HTTP server (e.g. Apache) or via one built into the client. At home I run my own private CA, as it allows me to issue certificates for names on my private TLD and for my IP addresses. I also issue client certificates to authenticate users, and having them all from the same CA makes things a little easier. When I get some time I will probably move my domain over to a letsencrypt certificate and only use my CA for client certs. In the meantime I needed to set up access to my Dad's work mail server so my brother can send/receive email from his iPhone. This needed to be secure, so everything had to be protected by a certificate. Rather than mess about getting the root CA certs for my private CA onto his phone I decided to use letsencrypt. The mail server doesn't run a web server so I used the one built into the client.

$ letsencrypt-auto certonly --standalone  --email admin@example.com --agree-tos -d mail.example.com

The command line arguments are as follows:

  • certonly – This tells the client to just download the certificate (rather than download and install it)
  • --standalone – This tells the client to use its built-in HTTP server
  • --email admin@example.com – This tells the client who to email if there is a problem (like a cert expiring without being renewed)
  • --agree-tos – This stops the client showing the TOS and prompting you to agree to them
  • -d mail.example.com – This tells the client which host name to create the certificate for; you can specify this multiple times

Certificates and Keys are stored under /etc/letsencrypt/ with the current cert under live/[host name]. I configured Postfix and Dovecot to point to these so that they just need to be restarted to pick up the new certs.

Letsencrypt hands out certificates that are only valid for 90 days. This is for a couple of reasons, but mainly it means that any compromised certs only expose people for a short time, and they can upgrade the supported algorithms/key strength regularly to keep ahead of new vulnerabilities. The downside is that you need to renew the certificate regularly. The client is actually pretty good at letting you automate this, using a very similar command to the original one. I've set up a cron job to run on the first of every second month; it renews the cert every 60ish days and then restarts Postfix and Dovecot, which gives plenty of time to fix anything should there be a problem.

25 15 1 1,3,5,7,9,11 * /home/admin/renew-cert.sh

The renew-cert.sh script looks like this:

#!/bin/sh
/home/admin/letsencrypt/letsencrypt-auto certonly --standalone --renew-by-default --email admin@example.com --agree-tos -d mail.example.com
sudo service dovecot restart
sudo service postfix restart

I had to add the following to the sudoers file to get everything to work without prompting for passwords:

admin ALL= NOPASSWD: /home/admin/letsencrypt/letsencrypt-auto 
admin ALL= NOPASSWD: /usr/bin/service postfix *
admin ALL= NOPASSWD: /usr/bin/service dovecot *