Category Archives: Tech

DIY IoT button

I’ve been looking for a project for a bunch of ESP-8266 ESP-01 boards I’ve had kicking around for a while.

The following describes a simple button that, when pushed, publishes an MQTT message that I can subscribe to with Node-RED to trigger different tasks.

It’s been done many times before, but I wanted to have a go at building my own IoT button.

Software

The code is pretty simple:

  • The MQTT PubSubClient library (thanks, Nick)
  • Some hard coded WiFi and MQTT Broker details
  • The setup function which connects to the network
  • The reconnect function that connects to the MQTT broker and publishes the message
  • The loop function which flashes the LED to show success then goes into deep sleep

In order to get the best battery life you want the ESP8266 to be in deep sleep mode for as much of the time as possible, so the loop function sends the message to signify the button has been pushed and then enters the deepest sleep state possible indefinitely. This means the loop function will only run once on start up.

#include <ESP8266WiFi.h>
#include <PubSubClient.h>

#define ESP8266_LED 1 // on-board LED on the ESP-01 (GPIO1, shared with serial TX)

const char* ssid = "WifiName";
const char* passwd = "GoodPassword";
const char* broker = "192.168.1.114";

WiFiClient espClient;
PubSubClient client(espClient);

void setup() {

  Serial.begin(115200);
  delay(10);
  
  pinMode(ESP8266_LED, OUTPUT);

  WiFi.hostname("Button1");
  WiFi.begin(ssid, passwd);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }

  Serial.println("");
  Serial.println("WiFi connected");  
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());
  client.setServer(broker, 1883);
  reconnect();
}

void reconnect() {
  while(!client.connected()) {
    if (client.connect("button1")){
      client.publish("button1", "press");
    } else {
      delay(5000);
    }
  }
}

void loop() {

  if (!client.connected()) { // reconnect and republish if the connection dropped
    reconnect();
  }
  client.loop();
  
  // put your main code here, to run repeatedly:
  digitalWrite(ESP8266_LED, HIGH);
  delay(1000);
  digitalWrite(ESP8266_LED, LOW);
  delay(1000);
  ESP.deepSleep(0);
}
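
To check the button is working, you can watch for the press from another machine with the standard mosquitto client tools, using the broker address and topic configured in the sketch above:

mosquitto_sub -h 192.168.1.114 -t button1 -v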

Hardware

To wake the ESP-01 I used a momentary push button to bridge the reset pin to ground, so the chip resets each time the button is pushed. This wakes it from the deep sleep state, runs all the code, then drops back into the sleep state.

Button diagram

  • The green line is the Chip enable
  • The blue line links reset to ground via the push button

Now I had the basic circuit and code working I needed to pick a power supply. The ESP-01 needs a 3.3v supply and most people seem to opt for a small LiPo cell. A single cell has a nominal voltage of 3.7v, which is probably close enough to use directly with the ESP-01, but you normally need to add a circuit to cut the cell out before the voltage gets too low as it discharges, to prevent permanently damaging it. You would also normally add a charging circuit to allow recharging from a USB source.

This wasn’t what I was looking for; I wanted to use off the shelf batteries, so I went looking for a solution using AAA batteries. The board will run directly from a fresh set of 2 AAA batteries, but the voltage can quickly drop too low. To help with this I found a part from Pololu that takes an input between 0.5v and 5v and generates a constant 3.3v. This means that even as the batteries discharge I should be able to continue to run the device.

At first things didn’t work because the converter was not supplying enough current for the ESP-01 at start up. To get round this I added a 100uF capacitor across the outputs of the regulator. I don’t really know how to properly size this capacitor so I basically made a guess.

The final step was to use a soldering iron to remove the red power LED from the board, as this was consuming way more power than the rest of the system.

Prototype

Next Steps

  • Make the MQTT topic based on the unique id of the ESP-01 so it isn’t hard coded (see the sketch after this list)
  • Look at adding Access Point mode to allow configuration of the WiFi details if the device can not connect to the existing configuration
  • Design a circuit board to hold the voltage converter, capacitor, button and the ESP-01
  • Create a case to hold it all
  • Work out just how long a set of batteries will last
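
As a rough sketch of the first item in the list, ESP.getChipId() returns the ESP8266’s unique id, so the publish in the reconnect function could become something like this (untested):

  // build the topic from the chip's unique id, e.g. "button/1a2b3c"
  String topic = "button/" + String(ESP.getChipId(), HEX);
  client.publish(topic.c_str(), "press");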

SIP to SMS bridging

I’ve recently updated my local Asterisk PBX. The main reason for the update was that processing the logs to set up the firewall rules (blocking the folk that hammer on it all day long trying to make long distance calls or run up big bills on premium rate numbers) was getting too much for the original Mk I Raspberry Pi B. It now runs on a Pi 3 B+, which is more up to the task.

As part of the set up I have a 3G USB dongle that also supports voice calls using the chan_dongle plugin. This plugin also supports sending and receiving SMS messages (this is different from the MMS/SMS gateway I also run on a separate Pi) and they are handled by the dial plan.

My first pass at the dial plan just publishes the incoming message to a MQTT topic and it is then processed by Node-RED, which emails a copy to me as well as logging it to a file.

[dongle-incoming]
exten => sms,1,Verbose(1,Incoming SMS from ${CALLERID(num)} ${BASE64_DECODE(${SMS_BASE64})})
exten => sms,n,AGI(/opt/asterisk/agi-mqtt/mqtt,/opt/asterisk/agi-mqtt/mqtt.cfg,foo/sms,${BASE64_DECODE(${SMS_BASE64})})
exten => sms,n,Hangup()

This works OK for incoming messages, but sending them is a bit harder: the only way to send them from outside the dialplan (from within the dialplan you can use DongleSendSMS) is to use the asterisk CLI tool, which is a bit clunky.
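
For reference, sending a message by hand from the shell ends up looking something like this (the dongle sms CLI command comes with chan_dongle; the device name and number here are placeholders):

asterisk -rx 'dongle sms dongle0 +441234567890 Test message'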

What I was really looking for was a way to send/receive SMS messages like you do with a mobile phone. To make calls I normally use Linphone on my tablet and this includes support for SIP messaging. This lets you send messages between SIP clients and you can get Asterisk to consume these.

You can send SIP messages with the MessageSend asterisk dialplan application.

The following is a basic echo bot:

[messaging]
exten => _.,1,Answer()
exten => _.,n,Verbose(1,Text ${MESSAGE(body)})
exten => _.,n,Verbose(1,Text from ${MESSAGE(from)})
exten => _.,n,Verbose(1,Text to ${MESSAGE(to)})
exten => _.,n,Set(FROM=${MESSAGE(from)})
; strip the angle brackets from the From URI, leaving a sip: address we can prefix
exten => _.,n,Set(TO=${REPLACE(FROM,<'>,)})
; echo the message back to the sender via the pjsip channel driver
exten => _.,n,MessageSend(pj${TO},${CUT(MESSAGE(to),:,2)})
exten => _.,n,Hangup()

Because I’m using the PJSIP module rather than the legacy SIP module I need to prefix the outbound address with pjsip: rather than sip:. The _. pattern also matches any target extension, which will be useful a little later.

To enable a specific context for SIP messages you need to add message_context to the PJSIP endpoint for the SIP user:

[tablet]
type = endpoint
context = internal
message_context = messaging
...

Now if we put the 2 bits together we get a dialplan that looks like this:

[dongle-incoming]
exten => sms,1,Verbose(1,Incoming SMS from ${CALLERID(num)} ${BASE64_DECODE(${SMS_BASE64})})
exten => sms,n,AGI(/opt/asterisk/agi-mqtt/mqtt,/opt/asterisk/agi-mqtt/mqtt.cfg,foo/sms,${BASE64_DECODE(${SMS_BASE64})})
exten => sms,n,Set(MESSAGE(body)=${BASE64_DECODE(${SMS_BASE64})})
exten => sms,n,MessageSend(pjsip:nexus7,${CALLERID(num)})
exten => sms,n,Hangup()

[messaging]
exten => _.,1,Answer()
exten => _.,n,Verbose(1,Text ${MESSAGE(body)})
exten => _.,n,Verbose(1,Text from ${MESSAGE(from)})
exten => _.,n,Verbose(1,Text to ${MESSAGE(to)})
exten => _.,n,Set(FROM=${MESSAGE(from)})
exten => _.,n,Set(TO=${REPLACE(FROM,<'>,)})
exten => _.,n,DongleSendSMS(dongle0,${EXTEN},${MESSAGE(body)},1440,no)
exten => _.,n,Hangup()

The first part handles the incoming SMS messages delivered to the dongle and passed to the sms extension in the dongle-incoming context. This logs the message to the console and via MQTT, then fires it off to my tablet as a SIP message. The second part is the context for the incoming SIP messages from the tablet; it accepts messages to any extension, logs the message and who it’s to/from, then sends it to the number in the extension via the dongle.

Using SIP Client to send SMS

Installing SSH Keybox

I’ve recently installed Keybox on a Raspberry Pi attached to my home network. Keybox is a bastion service: a hardened access point that a protected network sits behind. The idea is that a single locked-down, externally facing machine is easier to defend than allowing access to the whole network. The usual approach is for that one machine to just run an SSH daemon configured to only allow access via a private key. SSH allows terminal access and file transfer via scp, and it allows tunnels to be set up, so an authorised user with the right config will not normally notice that the bastion machine is there.

Keybox extends this model a little: it provides web hosted terminals to access the machines that sit behind it. The upside is that users don’t need anything more than a web browser to access the machines and the private keys never leave the bastion machine. Security is handled by 2FA using an OTP generator (e.g. Google Authenticator). One of the major use cases for Keybox is to access AWS machines without public IP addresses.

The reason for all this is that I’ve had some occasions recently where I’ve needed terminal access to my home machines while away, but the networks I’ve been connected to did not allow outbound SSH connections. It should also be useful when I only have access to machines with just a web browser (e.g. locked down Chromebooks) or am borrowing machines.

Installing/Configuring

Download the Keybox tgz file from the releases section of the Keybox github page.

Keybox uses Jetty to host the web app, so it needs a Java virtual machine to be installed. As I am running this on a Raspberry Pi I also reduced the Java heap size in the launch script from 1024mb to 512mb; this shouldn’t be a problem as this instance is not likely to see large amounts of load. I started the service up and tested connecting directly to the Raspberry Pi on port 8443.
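
The change itself is just the -Xmx flag on the java invocation in the start script; the exact line varies between releases so treat this as illustrative (the rest of the arguments are elided):

java -Xmx1024m ...   # original
java -Xmx512m ...    # reduced heap for the Pi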

The next step was to expose the service to the outside world. To do this I wanted to mount it on a URL on my main machine so it could use my existing SSL certificate. This machine runs Apache, so it needed to be configured to proxy for the Keybox instance. I found some useful notes to get started here.

I added the following to inside the <VirtualHost> tags in the ssl.conf file:

SSLProxyEngine On
SSLProxyCheckPeerName off
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
SSLProxyVerify none

ProxyRequests off
ProxyPreserveHost On
ProxyPass /box https://192.168.1.1:8443/box
ProxyPassReverse /box https://192.168.1.1:8443/box

RequestHeader set X-Forwarded-Proto "https" env=HTTPS

<LocationMatch "/box/admin/(terms.*)">
  ProxyPass wss://192.168.1.1:8443/box/admin/$1
  ProxyPassReverse wss://192.168.1.1:8443/box/admin/$1
</LocationMatch>

I also needed to make sure mod_proxy_wstunnel was enabled to ensure the websocket connections were forwarded. The entries at the start (SSLProxyCheckPeerName, SSLProxyCheckPeerExpire and SSLProxyVerify) tell Apache not to validate the SSL certificate on the Keybox machine, as it is a self-signed certificate.

By default Keybox runs in the root of the Jetty server, so it needs a quick update to move it to /box to match the proxy settings. Edit the keybox.xml file in jetty/webapps to change the contextPath:

<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/box</Set>
  <Set name="war"><Property name="jetty.home" default="." />/keybox</Set>
  <Set name="extractWAR">false</Set>
</Configure>

Now I can access Keybox at https://mymachine/box

Better NodeJS OAuth example

Back when I was writing the Node-RED Alexa Home Skill node I used an example oAuth setup I found online here.

This worked for the initial setup, but I couldn’t get Alexa to renew the oAuth tokens, so as a temporary work around I set the token life to be something huge (300 years…). Again this worked, but it’s not ideal; even though I’m not likely to be around in 2318 to worry about it, having crazy long token expiry times negates some of the benefits of the oAuth system.

I spent a couple of weeks bouncing email back and forth with the Alexa team at Amazon trying to work out what the problem was, and I thought it would be useful to write up the findings to make life easier for anybody else wanting to use NodeJS/Passport to implement an Alexa Skill. I created a separate stripped down minimal skill to work through this without having to mess with the live skill and disrupt users. The whole thing is up on github here, but I’m going to walk through the changes below.

Let’s start by looking at what we had to begin with. The following snippet of code is the part that returns an oAuth token to the remote service (in this case Amazon Alexa) when the user authorises it to use my service.

server.exchange(oauth2orize.exchange.code({
  userProperty: 'app'
}, function(application, code, redirectURI, done) {
  GrantCode.findOne({ code: code }, function(error, grant) {
    if (grant && grant.active && grant.application == application.id) {
      var token = new AccessToken({
        application: grant.application,
        user: grant.user,
        grant: grant,
        scope: grant.scope
      });
      token.save(function(error) {
        done(error, error ? null : token.token, null, error ? null : { token_type: 'standard' });
      });
    } else {
      done(error, false);
    }
  });
}));

This returns the absolute bare minimum: the 2 fields marked as required by the oAuth spec.

{
  "access_token": "a1b2c3d4e5g6......",
  "type": "standard" 
}

The first fix is to add a refresh token that can be used to request a new access token. To do this we need to add a new model to store the refresh token and its link to the user.

var RefreshTokenSchema = new Schema({
	token: { type: String, unique: true, default: function(){
		return uid(124);
	}},
	user: { type: Schema.Types.ObjectId, ref: 'Account' },
	application: { type: Schema.Types.ObjectId, ref: 'Application' }
});

And now to create a refresh token and add it to the token response from earlier.

server.exchange(oauth2orize.exchange.code({
  userProperty: 'appl'
}, function(application, code, redirectURI, done) {
  OAuth.GrantCode.findOne({ code: code }, function(error, grant) {
    if (grant && grant.active && grant.application == application.id) {
      var token = new OAuth.AccessToken({
        application: grant.application,
        user: grant.user,
        grant: grant,
        scope: grant.scope
      });
      token.save(function(error) {
        var refreshToken = new OAuth.RefreshToken({
          user: grant.user,
          application: grant.application
        });
        refreshToken.save(function(error){
          done(error, error ? null : token.token, refreshToken.token, error ? null : { token_type: 'standard' });
        });
      });
    } else {
      done(error, false);
    }
  });
}));

OK, so now we have a token response that looks like this.

{
  "access_token": "a1b2c3d4e5f6......",
  "type": "standard",
  "refresh_token":  "6f5e4d3c2b1a...."
}

This was basically what I was using when I launched the service; it has all the required fields and a refresh token. Amazon’s Alexa Smart Home API has an explicit error to return when a token has expired, so I had assumed that when I returned that error the service would use the refresh token to get a new one. This assumption turned out to be wrong: even if you explicitly tell Amazon that the token has expired, it won’t try to refresh it until after the expires_in time in the token response…

Now, expires_in is listed as optional (but recommended) in the spec, but it turns out that Amazon interprets a missing expires_in as the token having an infinite life, and as such will NEVER renew it. To fix this we need to include an expires time. One is already in the token model for the database (remember I’d already edited this to be 300 years) so we just need to get it included in the token response.

var expires = Math.round((token.expires - (new Date().getTime()))/1000);
refreshToken.save(function(error){
  done(error, error ? null : token.token, refreshToken.token, error ? null : { token_type: 'standard',  expires_in: expires});
});

Which finally gets a token response like

{
  "access_token": "a1b2c3d4e5f6......",
  "type": "standard",
  "refresh_token":  "6f5e4d3c2b1a....",
  "expires_in": 7776000000
}

This worked the first time the token expired (after 90 days), but not the second time, because I’d not included an expires_in in the token response when the token was refreshed.
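
The fix there is the same again: include expires_in in the response from the refresh_token exchange. A minimal sketch of that exchange, reusing the model names from above (illustrative rather than a copy of the live code):

server.exchange(oauth2orize.exchange.refreshToken(function(application, refreshToken, scope, done) {
  OAuth.RefreshToken.findOne({ token: refreshToken }, function(error, existing) {
    if (existing && existing.application == application.id) {
      var token = new OAuth.AccessToken({
        application: existing.application,
        user: existing.user,
        scope: scope
      });
      token.save(function(error) {
        // without expires_in here the refreshed token is treated as immortal again
        var expires = Math.round((token.expires - (new Date().getTime()))/1000);
        done(error, error ? null : token.token, null, error ? null : { token_type: 'standard', expires_in: expires });
      });
    } else {
      done(error, false);
    }
  });
}));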

As I said at the start, all the code for a bare bones implementation is on github here. There are a couple of other changes, e.g. to make the system reuse existing tokens if they are still valid when the user removes and re-adds the skill to Alexa, to change the token type over to Bearer, and to include the scope information in the token response just to be complete. Should I ever get enough free time to work out how to get the model to work for a Google Assistant version of the Node-RED node, this code will form the basis, as that also needs an oAuth based service.

The quest for an IPv6 capable mobile data plan

For the last few weeks I’ve been trying to find a UK mobile data provider that will provide an IPv6 address (well, hopefully a bunch of them that I can share round a few devices, but given how IPv6 normally works this should be trivial).

The reason I want this is because I’m playing with VoIP and SIP at home and I want a reliable way to do direct point to point routing without having to resort to a VPN constantly running on my test devices. While my ISP (the wonderful A&A) have recently started handing out /29 and /30 subnets to make this sort of thing easier for IPv4, most mobile providers don’t provide a routable IP address; they all use CGNAT.

Currently the only major player that claims to support IPv6 is EE. I had a quick search online and found a bunch of forum posts from mid 2017 saying that they had started to roll it out, but only to new pay monthly customers. Given it is now approaching mid 2018 I thought things must have moved on a little, but I couldn’t actually find anything more up to date anywhere online. Having poked around on EE’s website, none of the plan information mentions IPv6.

I called into one of EE’s retail stores and had a chat with the staff, who didn’t really understand what I was asking for (to be fair, it is a bit of a technical question compared to what they normally get asked), but I did manage to convince one of them to disconnect from the WiFi and get Android to list their addresses. This showed an IPv6 address, so things were looking up.

At a bit of a loss, I called EE’s customer service team to see if they could tell me which plan I should pick. The Level 1 agent couldn’t help so passed me to Level 2; unfortunately they weren’t much help either and the best they could suggest was to get hold of a SIM and try.

Since the EE website offers SIM cards for free I decided to order one and give it a go. At which point I ran into the next problem: the order form is not RFC2822 compliant, meaning it will not allow you to include tags in email addresses, e.g. foo+ee@example.com, where the +ee is a tag allowing you to identify who you gave the email address to.

After a little back and forth with EE’s social media team they managed to arrange to send me the Pay & Go Data (Tablet & 4GEE WiFi) SIM I was trying to order, and hopefully passed the issue on to their web development team (to be fair, validating email addresses is near impossible, which is why you shouldn’t even try).

Given this was explicitly a data SIM I was hopeful it would get a usable address. After topping up £10 to activate the SIM and using that to buy a £5 200MB data bundle, I fired things up and crossed my fingers. No joy, so it’s back to running VPN tunnels on all my devices to effectively put them on my home network.

In conclusion, IPv6 is basically still not available on the UK mobile data market.

DNS-Over-HTTPS

I saw the recent announcements from Mozilla, Cloudflare and Google about running a trial to try and make DNS name resolution more secure.

The basic problem is that most users get their DNS server set via DHCP, which is controlled by whoever runs the network (at home this tends to be their ISP, but when using public WiFi this could be anybody). The first approach to help with this was Google’s 8.8.8.8 public DNS service (followed by IBM’s 9.9.9.9 and Cloudflare’s 1.1.1.1). This helps if people are technically literate enough to know how to change their OS’s DNS settings and fix them to one of these providers. Also, DNS is a UDP based protocol, which makes it particularly easy for a bad actor on the network to spoof responses.

The approach the 3 companies are taking is to run DNS over an existing secure protocol, in this case HTTPS. From Firefox version 60 (currently in beta) it is possible to set it up to do host name resolution via DNS-Over-HTTPS.

There are currently 2 competing specifications for how to actually implement DNS-Over-HTTPS.

DNS Wireformat

This uses exactly the same data structure as existing DNS. Requests can be made via an HTTP GET or POST. For a POST the body is the binary request and the Content-Type is set to application/dns-udpwireformat.

For GET requests the payload is BASE64 encoded and passed as the dns query parameter.

In both cases the response is the same binary payload as would be made by a normal DNS server.
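
As a rough example, the POST form can be exercised with curl given a raw DNS query in a file (endpoint URLs may vary; Cloudflare’s resolver is shown here):

# query.bin contains a standard binary DNS query
curl -s -H 'Content-Type: application/dns-udpwireformat' --data-binary @query.bin https://cloudflare-dns.com/dns-query -o response.bin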

This approach is currently covered by this draft RFC

JSON

For this approach requests are made as an HTTP GET with the hostname (or IP address) passed as the name query parameter and the query type passed as the type query parameter.
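
For example, looking up the A record for example.com against Google’s resolver looks like this (Cloudflare’s endpoint accepts the same parameters):

https://dns.google.com/resolve?name=example.com&type=A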

A response looks like this:

{
    "Status": 0,
    "RA": true,
    "RD": true,
    "TC": false,
    "AD": false,
    "CD": true,
    "Additional": [],
    "Answer": [
        {
            "TTL": 86400,
            "data": "93.184.216.34",
            "name": "example.com",
            "type": 1
        }
    ],
    "Question": [
        {
            "name": "example.com",
            "type": 1
        }
    ]
}

The response is returned with a Content-Type of application/dns-json.

You can find the spec for this scheme from Google here and Cloudflare here.

Both of these schemes have been implemented by both Google and Cloudflare and either can be used with Firefox 60+.

Privacy Fears

There has already been a bit of a backlash against this idea, mainly around privacy fears. The idea of Google/CloudFlare being able to collect information about all the hosts your browser resolves scared some people. Mozilla has an agreement in place with CloudFlare about data retention for the initial trial.

Given these fears I wondered if people might still want to play with DNS-Over-HTTPS but not want to share data with Google/Cloudflare. With this in mind I thought I’d see how easy it would be to implement a DNS-Over-HTTPS server. It could also be useful for people wanting to try this out on closed networks (for things like performance testing or security testing).

It turned out not to be too difficult. I started with a simple ExpressJS based HTTP server and then started to add DNS support. Initially I tried a couple of different NodeJS DNS modules to get all the required details, and in the end settled on dns-packet and sending my own UDP packets to the DNS server.
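
A heavily stripped down sketch of the JSON flavour shows the overall shape; this is an outline rather than my actual server, assuming the dns-packet module and an upstream resolver at 8.8.8.8:

const express = require('express');
const dgram = require('dgram');
const dnsPacket = require('dns-packet');

const app = express();

app.get('/dns-query', function (req, res) {
  // build a standard binary DNS query from the name/type query parameters
  const query = dnsPacket.encode({
    type: 'query',
    id: Math.floor(Math.random() * 65534) + 1,
    flags: dnsPacket.RECURSION_DESIRED,
    questions: [{ type: req.query.type || 'A', name: req.query.name }]
  });

  // fire it at the upstream resolver over plain UDP
  const socket = dgram.createSocket('udp4');
  socket.on('message', function (msg) {
    const response = dnsPacket.decode(msg);
    socket.close();
    // a real implementation maps this onto the full JSON schema shown above
    res.set('Content-Type', 'application/dns-json');
    res.json({
      Status: 0,
      Question: response.questions,
      Answer: response.answers
    });
  });
  socket.send(query, 0, query.length, 53, '8.8.8.8');
});

app.listen(3000);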

I’ve put my code up on github here if anybody wants a play. The README.md should include details about how to set up Firefox to use an instance.

Logging request & response body and headers with nginx

I’ve been working on a problem to do with oAuth token refresh with the Amazon Alexa team recently, and one of the things they asked for was a log of the entire token exchange stage.

Normally I’d do this with something like Wireshark, but as the server is running on an Amazon EC2 instance I didn’t have easy access to somewhere to tap the network, so I decided to look for another way.

The actual oAuth code is all in NodeJS + Express, but the whole thing is fronted by nginx. You can get nginx to log the incoming request body relatively simply; there is a $request_body variable that can be included in the logs, but there is no equivalent $resp_body.

To solve this I turned to Google and it turned up this answer on Server Fault which introduced me to the embedded lua engine in nginx. I’ve been playing with lua for some things at work recently so I’ve managed to get my head round the basics.

The important bit of the answer is:

lua_need_request_body on;

set $resp_body "";
body_filter_by_lua '
  -- ngx.arg[1] is the current chunk of the response body,
  -- ngx.arg[2] is true once the last chunk has been seen
  local resp_body = string.sub(ngx.arg[1], 1, 1000)
  ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body
  if ngx.arg[2] then
     ngx.var.resp_body = ngx.ctx.buffered
  end
';

I also wanted the request and response headers logged, so a little more lua got me those as well:

set $req_header "";
set $resp_header "";
header_filter_by_lua '
  -- concatenate all the request and response headers into nginx variables
  -- so they can be referenced from the log_format below
  local h = ngx.req.get_headers()
  for k, v in pairs(h) do
    ngx.var.req_header = ngx.var.req_header .. k.."="..v.." "
  end
  local rh = ngx.resp.get_headers()
  for k, v in pairs(rh) do
    ngx.var.resp_header = ngx.var.resp_header .. k.."="..v.." "
  end
';

This combined with a custom log format string gets me everything I need.

log_format log_req_resp '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" $request_time req_header:"$req_header" req_body:"$request_body" resp_header:"$resp_header" resp_body:"$resp_body"';

Adding audio to the CCTV of your burglar

In a full on throwback to the 2nd ever post on this blog, back in February 2010, I’ve recently been updating the system that sends me a video via MMS and email when there is movement in my flat.

I thought I’d try to add audio to the video that gets sent. A quick Google turned up two options: one was to use the sox command and its silence option, the second uses the on_event_start triggers in motion as a way to record the audio at the same time as capturing the movement video. I went with the second option and tweaked it a bit to make it pick the right input for my system and to encode the audio directly to MP3 rather than WAV to save space.

on_event_start arecord --process-id-file /var/motion/arecord.pid -D sysdefault:CARD=AF -f S16_LE -r 22050 - | lame  -m m - /var/motion/%Y%m%d_%H%M%S.mp3

The other useful addition was the --process-id-file /var/motion/arecord.pid option, which writes the process id to a file so I can use this to stop the recording rather than having to use grep and awk to find the process in the ps output.

on_event_end kill `cat /var/motion/arecord.pid`

Now it’s just a case of combining the video from motion with the audio. I can do this with ffmpeg when I re-encode the video into the 3gp container to make it suitable for sending via an MMS message.

ffmpeg -i movement.avi -i movement.mp3 -map 0:v -map 1:a -c:a aac -strict -2 -s qcif -c:v h263 -y /var/www/html/cam1/intruder.3gp

This seems to work but the output file is a little large. The default audio encoding seems to be at a 160k bitrate; I wound it down to 32k and the file size got a lot better.

ffmpeg -i movement.avi -i movement.mp3 -map 0:v -map 1:a -c:a aac -b:a 32k -strict -2 -s qcif -c:v h263 -y /var/www/html/cam1/intruder.3gp

I’d like to try the AMR audio codec but I can’t get ffmpeg to encode with it at the moment, so I’m just going to email the MP3 of the audio along with the high res AVI version of the video and send the low res 3GP version via MMS.

Update on Garmin Forerunner 935

It’s been a few weeks since I picked up my Garmin Forerunner 935. I must say I’m pretty impressed.

Step counting

I’ve been using it to record my day to day step count and all day heart rate data as well as all my training and the London Triathlon.

The battery life is great; I’m getting a good 2 weeks out of a charge even when using it to record activities with GPS and ANT+ sensor data. It seems to take about 2 hours to fully charge.

Having it sync with the phone is useful as it means I don’t need to keep a Windows box kicking around just to run the Garmin Connect application to upload my workouts to the web and Strava (OK, I do still need one for Zwift, but that is less regular). There is built in WiFi support as well, which allows it to sync without the phone. I’ve not enabled this at the moment as, even if I’m not always carrying my phone while training, it is pretty much always going to be around when I get back.

Another change is that the ANT+ sensors now live in a collective pool rather than being bound to something like a bike profile, so you don’t need to remember to pick the right profile if you have multiple bikes. The watch will just pick up all the relevant sensors it can see as you select the activity type. The only downside I can see to this is if you lend somebody a bike and both go riding at the same time; to get round this you can force it to pick one if it can see multiple versions of the same sensor. But it does mean I don’t need 3 different profiles: one for the Propel, the Defy and the Defy on the turbo trainer.

Rest indicator

The new training tracking feature is also helpful, giving indications of how much rest time you should take between activities and also a training load number. The training load number is supposed to be unique to each user, so not something you can compare with others, but it should show if the system thinks you are over training (looks like I need to back off a little).

Training Load

The only extra I have purchased is a glass screen protector, as I managed to get a very small scratch in the plastic face on the first day of wearing it. I’ve no idea how I did it as I don’t remember knocking or catching it against anything. Given I’m planning on wearing this as my day to day watch as well as for activity tracking this is a little disappointing, but it is probably part of why it’s cheaper than the equivalently spec’d Fenix 5. The protector itself is very thin, fits nearly flush with the bezel, and you can’t tell it’s there.

Multipart HTTP Post requests with Volley on Android

It’s been a little while since I’ve done any really serious Android development, but a couple of projects have brought me back to it.

Early on in one of those projects I had to make some HTTP requests. My first thought was to make use of the Apache HTTP Client classes as I had done many times before on Android, which is why I was a little surprised when the usual ctrl-space didn’t auto complete any of the expected class names.

It turns out the classes were removed in Android 6.0 and the notice suggests using the HttpURLConnection class. A little bit more digging turned up a wrapper for this called Volley.

Volley is a wrapper round the HttpURLConnection class that provides a neat asynchronous interface which does IO in the background and then delivers results to the main thread, so UI updates can be done without further faffing around switching threads. There is also a nice set of tutorials on the Android Developers pages.

The first few requests all worked fine, but there was one which was a little more tricky: the HTTP endpoint in question accepts a multipart form payload. A bit of googling/searching on Stack Overflow turned up a number of approaches, the best of which seemed to be documented in this gist.

This was close to what I wanted but not quite what I needed, so I have taken some of the core concepts and built my own MultipartRequest object.

...
MultipartRequest request = new MultipartRequest(url, headers, 
    new Response.Listener<NetworkResponse>() {
        @Override
        public void onResponse(NetworkResponse response) {
        ...
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
        ...
        }
    });
    
request.addPart(new FormPart(fieldName,value));
request.addPart(new FilePart(fileFieldName, mimeType, fileName, data));

requestQueue.add(request);
...

I’ve stuck the code up on github here. You can include it in your Android Project by adding the following to the build.gradle in the root of the project:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
  }
}

And then this to the dependencies section of the module’s build.gradle:

dependencies {
  compile 'com.github.hardillb:MultiPartVolley:0.0.3'
}