Deploying a TTN LoRa Gateway

I’ve been meaning to get round to this ever since the Pi Supply Kickstarter delivered my LoRa Gateway HAT and the LoRa Node pHAT.

They have been sat in their boxes waiting until I had some spare time (and I’d finally finished moving a few things around to free up a spare Pi).

LoRa Gateway on a Pi 3

LoRa is a long range, low bandwidth radio system that uses unlicensed spectrum. When combined with the higher level LoRaWAN protocol it makes a great IoT platform for low power devices that need to send small volumes of data from places where there is no WiFi coverage and the cost of a cellular connection can't be justified.

LoRaWAN allows you to deploy a collection of gateway devices that act as receivers for a large number of deployed devices. These gateways then forward messages on to a central point for processing.

The Things Network

A group called The Things Network run a LoRaWAN deployment, aiming for as large a coverage area as possible. To do this they allow users to deploy their own gateways and join these to the network. By joining the network you get to use everybody else's gateways in exchange for letting other people use yours.

Setting up the Gateway

This was particularly easy. I just had to download an image and flash it to an SD card, then stick that into the Pi along with an Ethernet cable and some power.

After the Pi booted up I pointed my browser at http://iotloragateway.local and filled in a couple of values generated when I registered the gateway on the TTN site, and that was it. The gateway was up and running and ready to send/receive packets from any devices in range.

Testing

In order to test the gateway I needed to set up a Pi Zero with the LoRa Node pHAT. This was a little trickier, but not much.

First I had to disable the Linux serial console, which can be done using the raspi-config command. I also had to add dtoverlay=pi3-miniuart-bt to /boot/config.txt.

That was all that was needed to get the hardware configured. As for the software, there is a rak811 Python package that supplies the API and utilities to work with the pHAT.

I now needed to declare an application on The Things Network site; this is how messages get routed for processing. Taking the values for this application I could now write the following helloWorld.py:

#!/usr/bin/env python3
from rak811 import Mode, Rak811

lora = Rak811()
lora.hard_reset()            # reset the RAK811 module on the pHAT
lora.mode = Mode.LoRaWan
lora.band = 'EU868'          # European 868MHz band
lora.set_config(app_eui='xxxxxxxxx',
                app_key='xxxxxxxxxxxx')
lora.join_otaa()             # join the network via over-the-air activation
lora.dr = 5                  # data rate 5 (SF7/125kHz on EU868)
lora.send('Hello world')
lora.close()

The message can then be seen arriving in The Things Network console.

Data arriving and being displayed in The Things Network console.

And I can subscribe directly to that data feed via MQTT:

$ mosquitto_sub -h eu.thethings.network -u 'lora-app1-hardill-me-uk' -P 'xxxxxxxxx' -v -t '+/devices/+/up'
{
  "app_id": "lora-app1-hardill-me-uk",
  "dev_id": "test-lora-pi-zero",
  "hardware_serial": "323833356E387901",
  "port": 1,
  "counter": 0,
  "is_retry": true,
  "payload_raw": "SGVsbG8gd29ybGQ=",
  "metadata": {
    "time": "2019-08-10T15:45:07.568449769Z",
    "frequency": 867.5,
    "modulation": "LORA",
    "data_rate": "SF7BW125",
    "airtime": 61696000,
    "coding_rate": "4/5",
    "gateways": [
      {
        "gtw_id": "lora-gw1-hardill-me-uk",
        "gtw_trusted": true,
        "timestamp": 910757708,
        "time": "2019-08-10T15:45:07Z",
        "channel": 5,
        "rssi": -91,
        "snr": 7.75,
        "rf_chain": 0,
        "latitude": 51.678905,
        "longitude": -2.3549008,
        "location_source": "registry"
      }
    ]
  }
}
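
If you want to consume that feed programmatically rather than watch it in a terminal, something like the following sketch works. It assumes the paho-mqtt Python package and TTN's default MQTT port of 1883, and reuses the application ID and key placeholders from above; the payload_raw field is just base64 encoded, so decoding it recovers the original message.

#!/usr/bin/env python3
import base64
import json

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # each uplink arrives as a JSON document like the one above
    uplink = json.loads(msg.payload)
    text = base64.b64decode(uplink["payload_raw"]).decode()
    print(uplink["dev_id"], text)  # e.g. "test-lora-pi-zero Hello world"

client = mqtt.Client()
client.username_pw_set("lora-app1-hardill-me-uk", "xxxxxxxxx")
client.on_message = on_message
client.connect("eu.thethings.network", 1883)
client.subscribe("+/devices/+/up")
client.loop_forever()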

Next Steps

First up will be to get a better antenna for the gateway and to move the whole thing up into the attic; from there it should get a good view north out towards the River Severn. After that I want to get a small battery powered LoRa/GPS board, like a TTGO T-Beam, and ride round on my bike to get a feel for what the range/coverage actually is.

I’ll also be keeping an eye on the stats from the gateway to see if anybody else nearby is deploying TTN LoRaWAN devices.

Tracks2Miles and Tracks2TitanXT removed from Play Store

I have removed both of the apps in the title from the Play Store.

This is for a number of reasons:

  1. MyTracks removed the API required to get at the recorded tracks' metadata, which vastly reduced the capability of the app.
  2. DailyMile shut down, which rendered Tracks2Miles useless.
  3. Google flagged both apps as potentially vulnerable to SQL injection attacks (while theoretically possible, I couldn’t find a directly exploitable use case).
  4. Even before the shutdown I had completely moved all my activity tracking to dedicated Garmin devices and Strava.

Dirty Kanza 100 2019

I’ve just got back from a pretty great trip.

As I mentioned in a previous post, this year I was going to take part in one of the premier gravel riding events, the Dirty Kanza.

The Trip

I flew out to Denver and spent the first 4 days in Boulder. This was for a few reasons. Firstly, it’s somewhere I’ve been wanting to visit for a while, as it was one of the options I listed for my year at uni in the US (I suspect that the powers that be knew I’d have spent the whole year snowboarding, which is why I ended up in North Carolina…) and I was sure there would be some good riding to shake the flight out of my legs. It’s also at altitude (about a mile above sea level), which I was hoping would help as a final little push (I know you have to spend more than a week at altitude before it actually starts to have a positive effect).

While in town I went on 2 rides led out of the Full Cycle bike shop. The first was a road ride on Saturday morning at 09:00 which covered about 70km in 2 and a bit hours.

It was a nice pace to shake the flight out of my legs and to make sure I’d put the bike back together properly after the flight. The ride was pretty flat and I didn’t have any trouble keeping up with the pace.

The second ride was billed as a gravel ride but had aspects that were closer to a mountain bike singletrack ride. This time I really started to feel the altitude, as there was a bunch more climbing, some of it pretty steep.

Both rides left from the Pearl Street store with at least 2 ride leaders. If you are in town and want somebody to show you some good riding I suggest you head along and see these folk.

I stuck in a repeat of Saturday’s ride on my own on the Monday, just to keep the legs turning over, as I didn’t fancy doing any major climbing. I also tried to stick a short run in, but the altitude really kicked in and I managed about 4k before deciding that enough was enough while gasping for air.

On Wednesday I set off on the monumental drive across most of Kansas. I had booked a place to stop in a small town called Hays to break up the nearly 600 mile drive.

The drive was an experience. I’ve driven round different parts of the US in the past, but either on the East Coast or round the National Parks in the South West; this was just hour after hour of nearly totally straight, flat road. I can totally see the appeal of a self-driving vehicle (a totally autonomous one, definitely not the current systems that require human oversight) for this kind of driving.

While stretching my legs when I arrived I happened to catch a train crossing Main Street.

Then on Thursday I finished off the run in to Emporia. While the first 360 miles had been nearly totally flat, the ground did finally start to become a little more rolling.

The Event

The Dirty Kanza is based out of the town of Emporia in east Kansas.

The DK takes over pretty much all of Commercial Street, with most of the shops getting involved. There is an area between Commercial Street and Mechanics Street where all the sponsors get to set up their stands and you can look at all the new bikes and tech.

On the Friday before the actual event there is a short social ride. I rolled round with everybody before heading off to recce the first 10 miles of the course.

The full 200 mile event starts at 6am (with the dawn) and the 100 mile event starts 30 minutes later, so I got to see the folk doing the longer distance set off.

Once they had cleared the starting area, we got to line up to set off.

The ride was a tale of 2 parts. The first 90km to the feed station went really well, cruising along at a steady 23kph and doing just over half the climbing in just under 4 hours. I’d made a decent dent in the 2 750ml drink bottles I had with me, so I refilled them both and had a snack to keep the energy up, spending about 15 mins at the rest station. The temperature for the first leg had topped out at about 25 °C.

The second leg was a lot harder. It covered just short of 80km, but the temperature topped out at 35 °C and averaged 32 °C, which meant I burnt through a lot more water. Luckily there were a couple of places along the way where I could top up my bottles again. It also kicked off with one of the bigger climbs of the whole route. There had been a 40% chance of rain (with the possibility of thunder) in the forecast for the afternoon, which I had been keeping an eye on all of the week running up to the event, hoping it wouldn’t happen. By the time the temperature passed 30 °C I had changed my mind and was scanning the sky for clouds.

I got within 10km of the finish when I had my first and only mechanical issue. My rear tyre had nearly fully deflated, but the tubeless sealant had managed to plug the leak, so all I needed to do was use a CO2 canister to top it up.

Crossing the DK finish line

The finish line was back down Commercial Street, which had been converted into a street party while we had been out on the trail. Lots of people cheering really helped get me over the line.

I would definitely do the DK again, maybe even give the full 200 mile version a try next time, but that would need a bigger training plan and probably finding a way to spend a good few weeks at altitude in the run up. I would also make use of the third bottle cage mounts on the bike.

Mobile IPv6 Workaround

As I’ve previously mentioned I’m in the market for a cellular data plan that supports IPv6.

The only mainstream provider in the UK that offers any IPv6 support is EE, but only for their pay monthly plans and I want something for more occasional usage.

While I wait for the UK mobile operators to catch up, I’ve been using OpenVPN on my phone to allow it to behave as if it’s actually on my local network, at least from an IPv4 point of view.

This just about works, but it did mean I had to have DNS reply with internal addresses when queried from inside the network and external addresses when queried from outside. This is possible to do with bind9 using views, but it leads to a bunch more administration whenever anything needs changing.

It also doesn’t solve the need to access other people’s/organisations’ resources that are only available via IPv6.

OpenVPN can also route IPv6 over the tunnel and hand out IPv6 addresses to the clients that connect. Instructions for how to set it up can be found on the OpenVPN Wiki here.

By adding the following to the OpenVPN server.conf file:

tun-ipv6
push tun-ipv6
ifconfig-ipv6 2001:8b0:2c1:xxx::1 2001:8b0:2c1:xxx::2
ifconfig-ipv6-pool 2001:8b0:2c1:xxx::4/64
push "route-ipv6 2000::/3"

I initially was trying to work out how to carve a section out of the single /64 IPv6 subnet that my ISP had assigned to me. My plan was to take a /112 block (which leaves 16 host bits, so 2^16 = 65,536 addresses), but as a general rule you are not meant to use IPv6 subnets smaller than a /64.
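
A quick way to sanity-check that subnet arithmetic is with Python's ipaddress module (using the documentation prefix 2001:db8:: as a stand-in for my real one):

import ipaddress

# a /112 leaves 128 - 112 = 16 host bits, i.e. 2**16 addresses
net = ipaddress.ip_network("2001:db8::/112")
print(net.num_addresses)  # 65536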

Luckily A&A assign each customer a /48 range that can be split up across multiple sites/lines, or you can assign extra /64 or /60 blocks to an existing line.

I chose to add a second /64 to my existing line and then configured my Ubiquiti EdgeRouter X with a static route:

set protocols static route6 2001:8b0:2c1:xxx::/64 next-hop fe80::92fb:a6ff:fe2e:28a2

Where fe80::92fb:a6ff:fe2e:28a2 is the link local address of the machine running the OpenVPN server.

Android OpenVPN client

The added bonus is that I can now get IPv6 access on both my mobile phone and on my laptop when away from home.

Listing AWS Lambda Runtimes

For the last few weeks I’ve been getting emails from AWS about Node 6.10 going end of life, saying I have Lambdas deployed using this runtime.

The emails don’t list which Lambda or which region they think is at fault, which makes tracking down the culprit difficult. I only really have 1 live instance deployed across multiple regions (and 1 test instance in a single region).

AWS Lambda region list

Clicking down the list of regions is time consuming and prone to mistakes.

In the email AWS do provide a command to list which Lambdas are running on Node 6.10:

aws lambda list-functions --query="Functions[?Runtime=='nodejs6.10']"

But what they fail to mention is that this only checks your current default region. I can’t find a way to get the aws command line tool to list the Lambda regions; the closest I’ve found is the list of EC2 regions, which hopefully match up. Pairing this with the command line JSON search tool jq and a bit of Bash scripting, I’ve come up with the following:

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.Functions[] | .FunctionName + " - " + .Runtime';
done

This walks over all the regions and prints out all the function names and the runtime they are using.

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
"oAuth-test - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"

In my case it only lists Node.js 8.10, so I have no idea why AWS keep sending me these emails. Also, since I’m only on the basic support level, I can’t even raise a technical help desk query to find out.

Anyway I hope this might be useful to others with the same problem.

DNS-over-HTTPS update

My post on DNS-over-HTTPS from last year is getting a fair bit more traffic after a few UK newspaper articles (mainly crying that the new UK Government censorship plans won’t work if Google roll DNS-over-HTTPS out in Chrome… what a shame). The following article has a good overview [nakedsecurity].

Anyway I tweeted a link to the old post and it started a bit of a discussion, and a question about the other side of the system came up: namely, how to run a DNS resolver that pushes traffic over DNS-over-HTTPS, rather than providing an HTTPS endpoint that supports queries. The idea being that at the moment only Firefox & Chrome can take advantage of the secure lookups.

I did a bit of poking around and found things like stubby, which supports DNS-over-TLS (another approach to secure DNS lookups), and Cloudflare’s cloudflared, which can proxy DNS-over-HTTPS to Cloudflare’s DNS servers (it is also used to set up the tunnel to Cloudflare’s Argo service, which is also worth a good look).

Anyway, while there are existing solutions out there I thought I’d have a really quick go at writing my own, to go with the part I’d written last year, just to see how hard it could be.

It turned out a really basic first pass could be done in about 40 lines of JavaScript:

const dgram = require('dgram')
const request = require('request')
const dnsPacket = require('dns-packet')

const port = process.env["DNS_PORT"] || 53
//https://cloudflare-dns.com/dns-query
const url = process.env["DNS_URL"] 
    || "https://dns.google.com/experimental" 
const allow_selfSigned = 
    (process.env["DNS_INSECURE"] == 1) 

const server = dgram.createSocket('udp6')

server.on('listening', function(){
  console.log("listening")
})

server.on('message', function(msg, remote){
  var packet = dnsPacket.decode(msg)
  var id = packet.id
  var options = {
    url: url,
    method: 'POST',
    body: msg,
    encoding: null,
    rejectUnauthorized: allow_selfSigned ? false : true,
    headers: {
      'Accept': 'application/dns-message',
      'Content-Type': 'application/dns-message'
    }
  }

  request(options, function(err, resp, body){
    if (!err && resp.statusCode == 200) {
      // restore the original query id before replying to the client
      var respPacket = dnsPacket.decode(body)
      respPacket.id = id
      server.send(dnsPacket.encode(respPacket), remote.port, remote.address)
    } else {
      console.log(err)
    }
  })

})

server.bind(port)

It really could do with some caching and some more error handling, and I’d like to add support for Google’s JSON based lookups as well as the binary DNS format, but I’m going to add it to the GitHub project with the other half and people can help extend it if they want.

The hardest part was working out that I needed the encoding: null in the request options to stop it turning the binary response into a string, leaving it as a Buffer instead.
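
As a quick way to check the proxy is actually resolving, a throwaway test script along these lines works (a sketch, assuming the dnspython package is installed and the proxy above is listening on ::1 port 53):

import dns.message
import dns.query

# build a standard A record query and send it to the proxy over UDP
query = dns.message.make_query("example.com", "A")
response = dns.query.udp(query, "::1", port=53, timeout=5)
for rrset in response.answer:
    print(rrset)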

I’m in the process of migrating my DNS setup to a new machine; I’ll be adding DNS-over-TLS (using stunnel) & DNS-over-HTTPS listeners for the public facing side.

Building a Bluetooth speaker

Recently I’ve been playing with how to build a Bluetooth audio device using a Raspberry Pi Zero. The following are some notes on what I found.

The first question is why build one when you can buy one for way less than the cost of the parts? There are a couple of reasons:

  • I build IoT prototypes for a living, and the best way to get a feel for the challenges is to actually face them.
  • Hacking on stuff is fun.

The Hardware

I’m starting out with a standard Raspberry Pi Zero W. This gets me a base high-level platform that includes WiFi and Bluetooth.

Raspberry Pi Zero W

The one thing that’s missing is an audio output (apart from the HDMI), but Raspberry Pis support audio using the I2S standard. There are several I2S pHATs available and I’m going to be using a pHAT DAC from Pimoroni. I’ve used these before for a project, so I’m reasonably happy with how to set them up, but Pimoroni have detailed instructions.

I’m going to add a screen to show things like the current track title & artist along with the volume. I’m also going to need some buttons to send Play/Pause, Next & Previous commands to the connected device. I have a PaPiRus e-ink display with 5 buttons built in which I was going to use, but it clashes with the GPIO pins used by the DAC, so instead I’ve opted for the Inky pHAT and the Button Shim.

The Software

I knew the core components of this had to be a problem others had already solved, and this proved to be the case. After a little bit of searching I found this project on GitHub.

As part of the configuration we need to generate the Bluetooth Class bitmask. This can be done on this site.

Class options

This outputs a hex value of 0x24043C, which is added as the Class entry in /etc/bluetooth/main.conf.

With this up and running I had a basic Bluetooth speaker that any phone can connect to without a PIN and play music, but nothing else. The next step is to add some code to handle the button pushes and to update the display.

The Bluetooth stack on Linux is controlled and configured using DBus, a messaging system supporting both IPC and RPC.

A bit of Googling round turned up this askubuntu question that got me started with the following command:

dbus-send --system --print-reply --dest=org.bluez /org/bluez/hci0/dev_44_78_3E_85_9D_6F org.bluez.MediaControl1.Play

This sends a Play command to the connected phone with the Bluetooth MAC address 44:78:3E:85:9D:6F. The problem is knowing what the MAC address is, as the system allows multiple devices to pair with the speaker. Luckily you can use DBus to query the system for the connected device, and DBus also has some really good Python bindings. So with a bit more poking around I ended up with this:

#!/usr/bin/env python
import signal
import buttonshim
import dbus

bus = dbus.SystemBus()
manager = dbus.Interface(
    bus.get_object("org.bluez", "/"),
    "org.freedesktop.DBus.ObjectManager")

@buttonshim.on_press(buttonshim.BUTTON_A)
def playPause(button, pressed):
    objects = manager.GetManagedObjects()

    for path in objects.keys():
        interfaces = objects[path]
        for interface in interfaces.keys():
            if interface in [
                    "org.freedesktop.DBus.Introspectable",
                    "org.freedesktop.DBus.Properties"]:
                continue

            if interface == "org.bluez.Device1":
                props = interfaces[interface]
                if props["Connected"] == 1:
                    # look up the media player object for the connected device
                    media = objects[path + "/player0"]["org.bluez.MediaPlayer1"]

                    mediaControlInterface = dbus.Interface(
                        bus.get_object("org.bluez", path + "/player0"),
                        "org.bluez.MediaPlayer1")

                    # toggle between play and pause based on the current state
                    if media["Status"] == "paused":
                        mediaControlInterface.Play()
                    else:
                        mediaControlInterface.Pause()

signal.pause()

When button A is pressed this looks up the connected device, checks the current state of the player (playing or paused) and toggles it. This means that one button can act as both Play and Pause. It also uses the org.bluez.MediaPlayer1 API rather than org.bluez.MediaControl1, which is marked as deprecated in the docs.

The button shim also comes with Python bindings so putting it all together was pretty simple.

DBus also lets you register to be notified when a property changes. I paired this with the Track property on org.bluez.MediaPlayer1, which holds the Artist, Track Name, Album Name and Track length information supplied by the source. This can be combined with the Inky pHAT Python library to show the information on the screen.

#!/usr/bin/env python

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def trackChanged(*args, **kw):
    target = args[0]
    if target == "org.bluez.MediaPlayer1":
        data = args[1].get("Track", 0)
        if data != 0:
            artist = data.get('Artist')
            track = data.get('Title')
            print(artist)
            print(track)


DBusGMainLoop(set_as_default=True)
system_bus = dbus.SystemBus()
system_bus.add_signal_receiver(trackChanged,
    dbus_interface="org.freedesktop.DBus.Properties",
    signal_name="PropertiesChanged",
    path='/org/bluez/hci0/dev_80_5A_04_12_03_0E/player0')
loop = GLib.MainLoop()
loop.run()

This code attaches a listener to the MediaPlayer object and when it spots that the Track has changed it prints out the new Artist and Title. The code matches all PropertiesChanged events, which is a little messy, but I’ve not found a way to use wildcards or partial matches for the DBus path in Python (since we don’t know the MAC address of the connected device at the time we start listening for changes).

Converting the Artist/Title information into an image with the Python Imaging Library and then getting the Inky pHAT to render it is not too tricky:

from PIL import Image, ImageDraw, ImageFont
from font_fredoka_one import FredokaOne
from inky import InkyPHAT

...

disp = InkyPHAT("yellow")
font = ImageFont.truetype(FredokaOne, 22)

img = Image.new("P", (disp.WIDTH, disp.HEIGHT))
draw = ImageDraw.Draw(img)

# draw the two lines of text at fixed offsets on the image
draw.text((10, 10), "Artist: " + artist, disp.WHITE, font=font)
draw.text((10, 40), "Track: " + track, disp.WHITE, font=font)

disp.set_image(img)
disp.show()


That’s the basics working; now I need to find/build a case for it and then look at whether I can add Chromecast Audio and AirPlay support.

This year’s challenge

Towards the end of last year I was looking for something a bit different to do in 2019. I’m still going to be doing a bunch of triathlons but I also wanted to have a go at some more endurance events.

I’ve already done a marathon (London in 2016) and, having had my IT band tighten up as I crossed the halfway point on Tower Bridge, I’m not that interested in doing another just yet. This rules out an Ironman distance triathlon, as the concept of a marathon after everything else really doesn’t appeal. So I went looking for something else, possibly something on the bike.

Gravel riding has been growing in popularity for the last few years and the number of different specialist bikes available continues to increase. So that sounded like a good place to go looking. Also it was a cracking excuse to buy a new bike (n+1).

I was also looking to make it part of a proper holiday this year, like the trips to the US National Parks I’ve done in the past.

The Event

I’d seen videos of people doing the Dirty Kanza, and it is held up as one of the first and greatest gravel events. The event is based around the main 200 mile ride, but they also run 350 mile, 100 mile and 50 mile events.

Given that off road riding was going to be new to me and the furthest I’d ridden in one go so far was about 140km, I decided that 200 miles (300+km) for the first time was probably pushing things a little too far, so I put an entry in the ballot for the 100 mile event. Just after Christmas I was notified that I had a place, and the mad dash to find somewhere to stop for the weekend of the event started.

The Bike

2017 Genesis Datum

The LBS had an ex-demo Genesis Datum 20 going for an absolute steal of a price.

The Datum 20 is a carbon framed gravel bike with 700c wheels, hydraulic disc brakes and a Shimano 105 groupset. The 50-34 up front and 11-32 cassette should get me round most things. I wanted to stick with mechanical gearing, as Di2 would be just something else that could potentially go wrong out on the course, and I’m not totally sold on the whole one-by concept just yet.

The first upgrade was a set of Hunt wheels; I got a set of the Four Season Gravel rims and got the local wheel builder to assemble them with a PowerTap G3 rear hub. This was for 2 reasons: first, the standard wheels weren’t tubeless capable, and second, I wanted a power meter to help with training and to measure my effort for pacing during the event.

I had to get a shim for the rear brake caliper, as the standard wheel came with a 140mm rear disc and the G3 hub only takes a 160mm. I also shod the new rims with 38mm Panaracer Gravel King SK tyres. The 38mm only just have enough clearance at the rear, so I’ll be swapping them for some 35mm before I head to the US, just in case the course is wet and ends up muddy.

Training

I’ve done pretty much all my riding on the road, so I knew I’d have to get used to riding on softer ground, both gravel and mud. To get things kicked off I booked a week riding in the hills in Spain in February with a company called Andalucian Cycling Experience. I’d been away with them before, so I knew what to expect, and had a great week covering over 420km and climbing nearly 9000m.

When I got back I started out with the local tow path on the Gloucester and Sharpness Canal and a few of the local bridleways to get used to the back end of the bike sliding around a bit.

The next step was out to the Forest of Dean to the Cannop Cycle Centre. Here they have a gravel track called the Family Trail which is just short of 15km.

Doing big sets of 2 laps in one direction followed by 2 laps in the other, to stop it getting too repetitive, made up a good base. I’d get out to the start early on a Saturday morning and get 6 laps in before breaking for some lunch as the trail started to get busier with families. It also helped with planning out my nutrition and hydration for the event (though it’s possibly going to be a lot hotter in Kansas).

I’ve also been sticking in some local events. The Cotswolds Cross Enduro Sportive was an absolute killer, with sections of riding down a running stream and other parts that were only really possible on a full suspension mountain bike.

In between the big sets at the weekends I’ve been making sure I get in at least 2 or 3 sessions on the trainer on Zwift mixing it up between doing their planned workouts and racing (on a good day I can get in the top 3 of a short cat. C event).

Now that the clocks have gone forward and it’s lighter in the evenings, I’ll be trying to get out and climb the local hills before dinner.

The event is at the start of June so I better get back on the bike.

Node-RED Google Home Smart Home Action

Google Home

Following on from my Alexa Home Skill for Node-RED it’s time to see about showing some love to the Google Home users (OK, I’ve been slowly chipping away at this for ages, but I’ve finally found a bit of time).

One of the nice things about Google Assistant is that it works all over the place: I can use it via the text interface if I’m somewhere I can’t talk, or even from the car via Android Auto.


Google offer a pretty similar API for controlling smart home devices to the one offered by Amazon for Alexa, so the implementation was very similar. The biggest difference is that there is no requirement to use something like Amazon’s Lambda to interface with the service, so it’s just a single web endpoint.
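
To give a flavour of what that single endpoint looks like, here is a minimal sketch in Python/Flask (the real service is Node.js; this only answers the SYNC intent with one hard-coded virtual socket, and the route, device IDs and names are made up for illustration):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/smarthome", methods=["POST"])
def fulfillment():
    body = request.get_json()
    intent = body["inputs"][0]["intent"]
    if intent == "action.devices.SYNC":
        # report the user's virtual devices back to Google
        return jsonify({
            "requestId": body["requestId"],
            "payload": {
                "agentUserId": "user-1",
                "devices": [{
                    "id": "socket-1",
                    "type": "action.devices.types.OUTLET",
                    "traits": ["action.devices.traits.OnOff"],
                    "name": {"name": "Virtual Socket"},
                    "willReportState": True,
                }],
            },
        })
    # a real implementation also handles QUERY, EXECUTE and DISCONNECT
    return jsonify({"requestId": body["requestId"], "payload": {}})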

I’ve taken pretty much the same approach as with the Alexa version, in that I have a web site where you can sign up for an account and then define virtual devices with specific names and characteristics.

Virtual devices

Google support a lot more device types and characteristics than Amazon do with Alexa at the moment, but to start with I’m just supporting Sockets/Lights/Switches and Thermostats. I intend to add more later as I work out the best way to surface the data.

The other big change is that Google Assistant supports asynchronously updating the device state and lets the Assistant backend query the state of a device. To support this I’m going to allow the response node to be configured with a specific device and to accept input that has not come from an input node.

The node is currently being beta tested; if you are interested, post in #google-home-assistant on the Node-RED Slack and I can add you to the ACL for the beta.

Google Assistant Node-RED Node

I’ll do another post when the node has finished testing and has been accepted by Google.

New Zwift Machine

I’ve been off my bike for the last month recovering from an ankle injury. During that time Zwift have released a pretty serious update containing the new New York area.


This update includes some futuristic courses set round Central Park. These include transparent roads and flying cars.

Acer Revo

Unfortunately all these new fancy features proved too much for the old Acer Revo that I was using to run Zwift. The Acer Revo was released back in 2009, so the fact that its 1.6GHz Intel dual core Atom and Nvidia Ion video chipset, sharing 4GB of RAM, had lasted this long was pretty impressive.

So it was time to look for a new machine to use for winter training. Small footprint media server type machines have come a long way in the last 9 years, and the “standard” seems to be the Intel NUC range of machines.

NUCs come in 2 main form factors; both are about 4″ square, but they differ in the height of the unit. The low height version only supports M.2 SSD storage, whereas the taller units support both M.2 and 2.5″ SATA drives. I opted for a NUC7i5BNH with 8GB of RAM and a 240GB SATA SSD, which should meet the current Zwift recommended spec.

Assembling the machine was remarkably simple: just 4 screws in the base allow full access, and lifting out the drive tray reveals the 2 memory sockets. Once the memory is fitted, slide the drive into the tray and secure it with the supplied screws before reseating the tray and base and fastening the 4 access screws again.

I initially intended to install Windows 7 as I already had an ISO image and a license. The only problem is that the NUC only has (externally) USB 3.0 ports and the Windows 7 install image only has USB 2.0 drivers, so while the NUC will boot either from a USB CD/DVD drive or a USB key, the installer can’t access the keyboard/mouse to start the install or read the rest of the installer files from the drive. There are instructions about how to patch a USB key install image, but after lots of messing about I finally remembered that my Windows 7 image and license were for the 32-bit version and Zwift needs 64-bit Windows. In the end I bought a Windows 10 license key and downloaded a new USB install image.

The Windows 10 install was relatively painless, until it got to the part where it forced me to create an online Microsoft account just to log into the local machine and wanted me to opt into a load of advertising tracking. I fully understand why people are more than happy to stick with Windows 7.

Anyway, everything is now up and running, so on with the winter training plan.