I saw the recent announcements from Mozilla, Cloudflare and Google about running a trial to try and make DNS name resolution more secure.

The basic problem is that most users get their DNS server set via DHCP, which is controlled by whoever runs the network (at home this tends to be their ISP, but when using public wifi it could be anybody). The first approach to help with this was Google's public DNS service (followed by IBM's and Cloudflare's). This helps if people are technically literate enough to know how to change their OS's DNS settings and fix them to one of these providers. Also, DNS is a UDP based protocol, which makes it particularly easy for a bad actor on the network to spoof responses.

The approach the 3 companies are taking is to run DNS over an existing secure protocol, in this case HTTPS. From Firefox version 60 (currently in beta) it is possible to set it up to do host name resolution via DNS-Over-HTTPS.

There are currently 2 competing specifications for how to actually implement DNS-Over-HTTPS.

DNS Wireformat

This uses exactly the same data structure as existing DNS. Requests can be made via an HTTP GET or POST. For a POST the body is the binary request and the Content-Type is set to application/dns-udpwireformat.

For GET requests the payload is BASE64 encoded and passed as the dns query parameter.

In both cases the response is the same binary payload as would be made by a normal DNS server.
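The GET encoding can be sketched in a few lines of NodeJS. This is purely illustrative (the helper name, endpoint URL and query bytes are mine, not from the spec); the draft uses the URL-safe base64 alphabet with the trailing padding stripped.

```javascript
// Sketch: turning a binary DNS query into the value of the "dns"
// query parameter for a GET request. Helper name and endpoint URL
// are illustrative placeholders.
function toBase64Url(buf) {
  // URL-safe base64 with the trailing padding removed
  return buf.toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

// Any binary DNS request payload would do here
const query = Buffer.from('hello');
const url = 'https://dns.example/dns-query?dns=' + toBase64Url(query);
console.log(url); // https://dns.example/dns-query?dns=aGVsbG8
```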

This approach is currently covered by this draft RFC


JSON

For this approach the requests are made as an HTTP GET request, with the hostname (or IP address) passed as the name query parameter and the query type passed as the type query parameter.

A response looks like this:

    {
        "Status": 0,
        "RA": true,
        "RD": true,
        "TC": false,
        "AD": false,
        "CD": true,
        "Additional": [],
        "Answer": [
            {
                "TTL": 86400,
                "data": "",
                "name": "example.com",
                "type": 1
            }
        ],
        "Question": [
            {
                "name": "example.com",
                "type": 1
            }
        ]
    }
With a Content-Type of application/dns-json
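As a sketch of consuming this format, the helper below (my own naming, with a made-up sample object in the shape shown above) pulls the record data out of the Answer section:

```javascript
// Extract the data fields from a dns-json style response object for a
// given name. The sample response is illustrative, not a real lookup.
function answerData(response, name) {
  return (response.Answer || [])
    .filter((answer) => answer.name === name)
    .map((answer) => answer.data);
}

const sample = {
  Status: 0,
  Question: [{ name: 'example.com', type: 1 }],
  Answer: [{ name: 'example.com', type: 1, TTL: 86400, data: '192.0.2.1' }],
};
console.log(answerData(sample, 'example.com')); // [ '192.0.2.1' ]
```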

You can find the spec for this scheme from Google here and Cloudflare here.

Both of these schemes have been implemented by Google and Cloudflare, and either can be used with Firefox 60+.

Privacy Fears

There has already been a bit of a backlash against this idea, mainly around privacy fears. The idea of Google/Cloudflare being able to collect information about all the hosts your browser resolves scares some people. Mozilla has an agreement in place with Cloudflare about data retention for the initial trial.

Given these fears I wondered if people might still want to play with DNS-Over-HTTPS but not want to share data with Google/Cloudflare. With this in mind I thought I’d try and see how easy it would be to implement a DNS-Over-HTTPS server. Also people may want to try this out on closed networks (for things like performance testing or security testing).

It turned out not to be too difficult. I started with a simple ExpressJS based HTTP server and then started to add DNS support. Initially I tried a couple of different NodeJS DNS modules to get all the required details, and in the end settled on dns-packet and actually sending my own UDP packets to the DNS server.

I’ve put my code up on github here if anybody wants a play. The README.md should include details about how to set up Firefox to use an instance.

Logging request & response body and headers with nginx

I’ve been working on a problem to do with oAuth token refresh with the Amazon Alexa team recently, and one of the things they have asked for is a log of the entire token exchange stage.

Normally I’d do this with something like Wireshark, but as the server is running on an Amazon EC2 instance I didn’t have easy access to somewhere to tap the network, so I decided to look for another way.

The actual oAuth code is all in NodeJS + Express, but the whole thing is fronted by nginx. You can get nginx to log the incoming request body relatively simply: there is a $request_body variable that can be included in the logs, but there is no equivalent $resp_body.

To solve this I turned to Google, which turned up this answer on Server Fault and introduced me to the embedded lua engine in nginx. I’ve been playing with lua for some things at work recently so I’ve managed to get my head round the basics.

The important bit of the answer is:

lua_need_request_body on;

set $resp_body "";
body_filter_by_lua '
  local resp_body = string.sub(ngx.arg[1], 1, 1000)
  ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body
  if ngx.arg[2] then
     ngx.var.resp_body = ngx.ctx.buffered
  end
';

I also wanted the request and response headers logged, so a little bit more lua got me those as well:

set $req_header "";
set $resp_header "";
header_filter_by_lua '
  local h = ngx.req.get_headers()
  for k, v in pairs(h) do
      ngx.var.req_header = ngx.var.req_header .. k.."="..v.." "
  end
  local rh = ngx.resp.get_headers()
  for k, v in pairs(rh) do
      ngx.var.resp_header = ngx.var.resp_header .. k.."="..v.." "
  end
';

This combined with a custom log format string gets me everything I need.

log_format log_req_resp '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" $request_time req_header:"$req_header" req_body:"$request_body" resp_header:"$resp_header" resp_body:"$resp_body"';
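To actually emit logs in this format, the access_log directive needs to reference it; something like the following, where the log path is just a placeholder:

```nginx
server {
    ...
    access_log /var/log/nginx/oauth-debug.log log_req_resp;
}
```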

Native NodeJS callbacks with Context

As I mentioned back in September, I’ve recently started a new job. Due to the nature of the work I’m not going to be able to talk about much of it, but when there are things I can talk about I’ll try and write them up here.

One of my first tasks has been to write a Node-RED wrapper around a 3rd party native library. This library provides a 2 way communication channel to a prototyping environment, so it needs to use threads to keep track of things in both directions and make use of callbacks to pass that information back into the Javascript side. I dug around for some concrete examples of what I was trying to do, and while I found a few things that came close I didn’t find exactly what I was looking for, so here is a stripped back version of the node I created to use as a reference for later.

This is the C++ method that is called when a new instance of the native node module is created. It takes an object reference as an argument to be stored away and used as the context for the callback later.

void Test::New(const Nan::FunctionCallbackInfo<v8::Value>& info) {
  if (info.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    Test* obj = new Test();

    // Store the object reference to use as the callback context later
    v8::Local<v8::Object> context = v8::Local<v8::Object>::Cast(info[0]);
    obj->context.Reset(context);

    uv_loop_t* loop = uv_default_loop();
    uv_async_init(loop, &obj->async, asyncmsg);

    obj->Wrap(info.This());
    info.GetReturnValue().Set(info.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 2;
    v8::Local<v8::Value> argv[argc] = { info[0], info[1] };
    v8::Local<v8::Function> cons = Nan::New<v8::Function>(constructor);
    info.GetReturnValue().Set(Nan::NewInstance(cons, argc, argv).ToLocalChecked());
  }
}

The object is created like this on the javascript side, where this is the reference to the object to be used as the context:

function Native() {
  //passes "this" to the C++ side for callback
  this._native = new native.Test(this);
}

We then make the callback from C++ here:

void Test::asyncmsg(uv_async_t* handle) {
  Nan::HandleScope scope;

  //retrieve the context object
  Test* obj = (Test*)((callbackData*)handle->data)->context;
  v8::Local<v8::Object> context = Nan::New(obj->context);

  //create object to pass back to javascript with result
  v8::Local<v8::Object> response = Nan::New<v8::Object>();
  response->Set(Nan::New<v8::String>("counter").ToLocalChecked(), Nan::New(((callbackData*)handle->data)->counter));

  v8::Local<v8::Value> argv[] = { response };
  //call the _status method on the context object with the result
  Nan::MakeCallback(context, "_status", 1, argv);
}

Which ends up back on the javascript side of the house here:

Native.prototype._status = function(status) {
  this.emit("loop", status.counter);
};

I’ve uploaded the code to github here if you want to have a look at the whole stack and possibly use it as a base for your project.

Adding audio to the CCTV of your burglar

In a full-on throwback to the 2nd ever post to this blog, back in February 2010, I’ve recently been updating the system that sends me a video via MMS and email when there is movement in my flat.

I thought I’d try and add audio to the video that gets sent. A quick Google turned up two options: one was to use the sox command and its silence option, the second uses the on_event_start trigger in motion as a way to record the audio at the same time as capturing the movement video. I went with the second option and tweaked it a bit to make it pick the right input for my system and to encode the audio directly to MP3 rather than WAV to save space.

on_event_start arecord --process-id-file /var/motion/arecord.pid -D sysdefault:CARD=AF -f S16_LE -r 22050 - | lame  -m m - /var/motion/%Y%m%d_%H%M%S.mp3

The other useful addition was the --process-id-file /var/motion/arecord.pid option, which writes the process id to a file so I can just use this to stop the recording rather than having to use grep and awk to find the process in the ps output.

on_event_end kill `cat /var/motion/arecord.pid`

Now it’s just a case of combining the video from motion with the audio. I can do this with ffmpeg when I re-encode the video into the 3gp container to make it suitable for sending via an MMS message.

ffmpeg -i movement.avi -i movement.mp3 -map 0:v -map 1:a -c:a aac -strict -2 -s qcif -c:v h263 -y /var/www/html/cam1/intruder.3gp

This seems to work, but the output file is a little large. The default audio encoding seems to be at a 160k bitrate; I wound it down to 32k and the file size got a lot better.

ffmpeg -i movement.avi -i movement.mp3 -map 0:v -map 1:a -c:a aac -b:a 32k -strict -2 -s qcif -c:v h263 -y /var/www/html/cam1/intruder.3gp

I’d like to try the AMR audio codec but I can’t get ffmpeg to encode with it at the moment, so I’m just going to email the mp3 of the audio along with the high res AVI version of the video and just send the low res 3GP version via MMS.

Barcelona Triathlon 2017

Had a great day in Barcelona today at the Barcelona Triathlon.

The sea was calm, very little wind and sunny.

The sea was easily warm enough to swim without a suit, but I decided to wear mine anyway to help with buoyancy and keep the legs up. The off-the-beach start went well, starting just behind the front so as not to get run over in the surf. I was well up the field by the first mark of the q-shaped course, and by the end we were deep into the back of the previous wave. The watch measured 1625m in 27mins, which is on a par with the times I’ve been doing in the pool recently.

T1 went smoothly and on to the ride. The course was pretty flat, not too technical and draft legal for the whole field. I managed to average 34+kph and spent most of the time hopping between groups, ending up at about 1:08.

T2 again went smoothly, and there were a surprisingly small number of racked bikes; then on to the run. This was a hard slog along the beach before heading up towards the centre of the city. By this point it was starting to get properly warm, pushing up towards 29 degrees. The last 200m included a steep ramp up from the beach side path to the boardwalk area, which I really could have done without, but I managed to get round in less than 50mins.

Total time by my watch 2:30:01 (so close!). The official results say 2:30:03

The only downside of the day was the arrangement of the starting sequence. Transition was only open from 6am-8am with the first start at 8:10am. This was a small wave of VIPs. Then came 2 more waves at 10min intervals, followed by a 20+min break before 3-4 more waves, another 20min break, then the next batch of starts. So my start in wave 10 was at 9:50, nearly 2 hours after transition closed, which meant a fair bit of waiting around. The rest of the organisation was great.

Time for something new

It looks like the IBM Process Server workflow has run and my entry in Bluepages (IBM’s internal LDAP backed employee directory) has been expunged. So after pretty much exactly 16 years at IBM, it’s time for something new.

I started at IBM straight after I finished my masters (to the extent that I handed my thesis in on the Friday in Cranfield, drove back to Yorkshire on the Saturday morning, did as much washing as possible and then drove down to Southampton on the Sunday to check into the hotel at Marwell Zoo for the start of the induction week).

While at IBM I worked for 2 teams, firstly the Java Technology Centre and then Emerging Technologies & Services.

Java Technology Centre

Most of my time in this group was spent working in the Level 3 Support team. At the time the IBM JVM underpinned a large proportion of the IBM Software stack, which meant it was always our fault (until proven otherwise) when something broke. This was a great team to work for; every morning (and later, when the phone rang at 3am) there was a new batch of problems to solve, the team helped each other out and we were always learning. I’d like to thank Mark Bluemel, who was my original team leader, for teaching me that the customer is not always right, and that sometimes the quickest way to solve a problem was to point this out to them (just as long as you had all the evidence to back it up). It helped hone my engineering background to dig into problems and find the underlying cause.

As I said earlier, the JVM used to underpin a large proportion of IBM’s software offerings, which brought me into contact with a large number of product teams and their customers based all round the world. In later years, when I became one of the two go-to guys (with Chris Bailey) for management to send on-site at really short notice to solve problems, I got to meet a lot of these folks in person and not just at the end of an IM chat window or conference call. This period also taught me the ways of airline/hotel points schemes and how to “work” a corporate travel booking system (thanks Flavio), and took me to some places I probably wouldn’t have normally chosen to visit (2.5 weeks in Seoul, a winter of Mon-Fri in the German countryside), even if on some visits I saw little more than an air conditioned office and a cookie cutter hotel.

In the end the only reason I moved on from this group was that by the time a customer reached me they were usually not the happiest camper, and the best I could do was get them back to a content state where things were working again. While there was a great deal of satisfaction in this, it did start to grate a little towards the end.

Emerging Technologies & Services

ETS was always THE place to work in Hursley; they have all the best toys and it’s hard to argue with a team that had its own armoured car (unfortunately returned a few years ago)!

It is a small team that works on just about anything going, but specialises in whatever new and interesting thing we could convince a client to pay for. We would poke round both IBM Research and anything else in the public domain looking for something interesting, then go looking for a client that wanted to try something on the bleeding edge. Projects varied from just one member of the team working with a client, or offering support to one of the other IBM services teams, up to 3-4 people delivering something a little bigger. Projects included things like a social media analysis system and network attached light level sensors for Wimbledon, a set of pedestals to control the video walls in the IBM Southbank Forum, controlling TVs using telepathy, and a 10 year research program around Network and Information Science for the US/UK defence sector. The team also runs hackdays, innovation and design thinking workshops with clients.

This is the team that invented Node-RED (much kudos to Nick and Dave) along with a bunch of other cool tech like GaianDB and Edgeware Fabric.

The team has had a bit of a shuffle round recently and now sits even closer to the IBM Research folk, hopefully this will make things easier for them to grab the latest and greatest new and shiny stuff coming down the pipe.


On the whole I enjoyed my time at IBM and I’ll miss all the great people I worked with, but it was just time to try something new.

As for what that will be, I’ll let you know more once I’ve actually started (beginning of November) and worked out just how much of it I’m allowed to talk about, but given some recent public announcements it sounds like it could all be VERY interesting. Watch this space.

Update on Garmin Forerunner 935

It’s been a few weeks since I picked up my Garmin Forerunner 935. I must say I’m pretty impressed.

Step counting

I’ve been using it to record my day to day step count and all day heart rate data as well as all my training and the London Triathlon.

The battery life is great; I’m getting a good 2 weeks out of a charge even when using it to record activities with GPS and ANT+ sensor data. It seems to take about 2 hours to fully charge.

Having it sync with the phone is useful as it means I don’t need to keep a Windows box kicking around just to run the Garmin Connect application to upload my workouts to the web and Strava (OK, I do still need one for Zwift, but that is less regular). There is built in WiFi support as well, which can allow it to sync without the phone. I’ve not enabled this at the moment, as even if I’m not always carrying my phone while training it is pretty much always going to be around when I get back.

Another change is that the ANT+ sensors now live in a collective pool rather than being bound to something like a bike profile, so you don’t need to remember to pick the right profile if you have multiple bikes. The watch will just pick up all the relevant sensors it can see when you select the activity type. The only downside I can see to this is if you lend somebody a bike and both go riding at the same time; to get round this you can force it to pick one if it can see multiple versions of the same sensor. But it does mean I don’t need 3 different profiles: one for the Propel, the Defy and the Defy on the turbo trainer.

Rest indicator

The new training tracking feature is also helpful, giving indications of how much rest time you should take between activities and also a training load number. The training load number is supposed to be unique to each user, so not something you can compare with others, but it should show if the system thinks you are over training (looks like I need to back off a little).

Training Load

The only extra I have purchased is a glass screen protector, as I managed to get a very small scratch in the plastic face on the first day of wearing it. I’ve no idea how I did it as I don’t remember knocking or catching it against anything. The protector is very thin and fits nearly flush with the bezel, and you can’t tell it’s there. Given I’m planning on wearing this as my day to day watch as well as for activity tracking, the scratch is a little disappointing, but this is probably why it’s cheaper than the equivalently spec’d Fenix 5.

London Triathlon 2017

As I mentioned in the last post, I did the London Triathlon at the weekend. I got round in a total time of 2:40:33, which is 3min quicker than the WTS event in Leeds I did about a month ago. I’m slowly working my way back towards the sub 2:30:00 times I managed in 2015.

The weather forecast changed all week, but always with rain at some point in the day. Early on it looked like it might stay dry until at least the run, but this hope was dashed when I got properly soaked while riding from Leytonstone down to Excel before the start.

As usual The London Triathlon runs a number of different courses over the weekend. I was racing the “shortest” loop version of the Olympic distance, which was made up of a 2 lap swim, a 4 lap bike and a 4 lap run.

The “long” loop version on Sunday morning is a 1 lap swim, a 1.5 lap bike (down to Parliament and back) and a 3 lap run, which I’ve done a couple of times before.


There was a break in the rain just in time for the start of the swim

The 2 lap swim has its good and bad points over the 1 lap version:

  • good: you can see all the turning buoys. For the 1 lap version you can’t see the first buoy from the start line.
  • bad: waves set off in 2 halves with 2mins between halves and 20mins between waves, which means that as you start you’re pretty much straight into the back of the mid pace swimmers from the wave before on their second lap. Also, with the shorter legs the waves don’t spread out as much, so there was a lot more bumping and jostling all the way round, especially at the run in to the exit.


By the time I was out of the water and on to the bike the rain had well and truly kicked back in. The course was a 10k loop between 2 roundabouts, but the turns were the short way round, which made them very tight; this combined with a little technical section just west of Excel made for some treacherous areas. The course was pretty much pan flat except for the climb over the flyover just before the first turn. I averaged 30kph over the 40km, which is OK considering how wet it was.


It stopped raining again for the run, which is also nearly totally flat apart from the climb up into the Excel each lap to pass the turn to the finish straight. The indoor loop was a bit longer this year.

The new Garmin 935 worked really well; the triathlon mode is very similar to the 910XT, with the lap button being used to move between disciplines. One feature that I think is new is the ability to set the auto button lock on a per activity basis, which I used to lock the buttons for the open water swim. I did this because, unlike on the 910XT, the start/stop and lap buttons are on the right hand edge of the watch, and as I wear my watch on my right wrist this put the buttons up against the edge of my wetsuit, so I was a little worried they might get pushed by accident. This just meant I had to press and hold one of the buttons when I got out of the water to unlock things before pressing the lap button to signal entering T1.

Garmin Forerunner 935

My trusty Garmin Forerunner 910XT has finally been put out to pasture. 2 years ago the barometric altimeter failed and I got it replaced with a refurbed version, and over the last 3 months the power button has been getting harder and harder to push. My best guess is that the micro switch has lifted off the board, so it needs to be pushed at just the right angle to line up with the contacts and actually activate.

My Fitbit HR had also given up the ghost in the last few months, so I went looking for a replacement that would cover both. I looked at both the Garmin 735 and the 935. Both do step counting and have an optical HR sensor in the back. Reviews of the HR sensor on the 735 were not so great and it was missing a barometric altimeter, so that didn’t help its case. Wiggle were also doing a week of extra discount (17%) at the time, which helped bring the price of the 935 down to something slightly more sensible than list price.

So as you can guess by the title of this post I opted for the 935. It arrived this morning so I don’t have a lot to say about it just yet, but the first impressions are:

Garmin forerunner 935
  • It’s a lot smaller than the 910XT and even a bit smaller than the Suunto Vector that I have been wearing as a day to day watch.
  • It’s also lighter than I expected. I’m used to wearing something with a bit of heft to it (my first serious sailing watch was a Citizen Yachtmaster, which was stainless steel; when I took it off my arm used to float), so it will take a day or two to get used to how light it is.

A lot of the features need a bit of time to learn my training pattern and my day to day activity profile, so I’ll give it a week to bed in and then write some more about it. I’m also doing the London Tri next weekend, so that will be a good chance to give it a proper workout.

Both the 735 and the 935 support the 2 new HRM belts from Garmin that record HR data while swimming (the HRM-Tri and HRM-Swim). While I already have an ANT+ HRM belt, I’m seriously tempted by both of these (mainly for the geekiness), so I may have to grab one or both soon.