King Alfred’s Way

Not being able to travel for the last 18 months has meant that I’ve not done either of the 2 big cycling events I had planned. I was going to go back and have another crack at the Dirty Kanza (now known as Unbound Gravel) and The Rift (200km round a couple of volcanoes in Iceland).

So I’ve been looking for something a little closer to home to have a crack at.

In mid-2020 Cycling UK announced a new route that they had been working on for a while called the King Alfred’s Way, a 350km mainly off-road route that starts in Winchester, goes west to Salisbury, north to near Swindon, east along the Ridgeway to Reading and then back down to Winchester via the edge of the Surrey Hills and the South Downs Way.

Day 1

Hursley House

I parked my car at my old IBM office at about 7:00 and spent the next 30mins getting all the bags set up on the bike, topping up the tyre pressures, filling water bottles and distributing snacks for the day round various pockets.

It was a short (7km) ride to the official start point at the West Gate in Winchester, so I was able to set off close to 8:00.

Winchester West Gate

The route works its way through Winchester on mainly quiet residential streets before making a break for the countryside towards Sparsholt, where it forks off onto singletrack trail up through the woods.

It was then a gentle rolling mix of wooded tracks and farm lanes all the way out to Salisbury, where the road turns north to roll past Stonehenge.

Stonehenge in the distance

After Stonehenge the route sets off across the Salisbury Plain Training Area; this is where you have to keep an eye on the flags and be prepared for loud bangs and tanks crossing the track. It was actually pretty quiet this time.

I had expected to find somewhere to stop for lunch along the way, but it’s pretty remote, so it wasn’t until I dropped off the Plain (just after the end of the 3rd GPX file) and made it to All Cannings at around 14:50 that I found anywhere, and the pub was supposed to close at 15:00.

I had planned to camp at a place called Smeathe’s Ridge, which is just past Barbury Castle, having seen it mentioned as a good spot by a few people riding/walking the Ridgeway. Unfortunately this also appears to be a racehorse training gallops, which the owner was in the process of mowing when I got there. I did think about asking for permission, but decided to press on a little to see if I could find somewhere else.

About 5km further along, having dropped into the valley, I came across an empty field that was well screened from the track, wasn’t overlooked and didn’t look to have any paths running through it. I set up my little tent, and boiled the water for dinner.

View across the valley

Day 2

I woke up stupidly early on the morning of day 2, so I had packed up my tent and was ready to roll again by 6:00. The day started with a short push to get up onto the top of the ridge again.

I soon passed a perfect camping spot just off the trail, behind a bench looking out over the valley. Another cyclist had spent the night there and was still very much asleep as I rolled past.

It was foggy and even started to rain a little so I was glad I’d packed my proper rain jacket, both to keep the wet out and to help keep warm.

The track winds its way up over another ridge to the north and then drops down again to cross the M4 and pass the PGL summer camp before climbing back up on to the Ridgeway.

I was short on water, as the last place I’d been able to top up my bottles was The Kings Arms in All Cannings the afternoon before, and I’d used a bunch for dinner that night. I found a tap with a dirty looking hose at a little takeaway caravan attached to a pig farm. Unfortunately it was midweek and still very early, so there was no chance of grabbing a bacon sandwich, but the fitting for the hose came undone easily so I could top up my bottles. About 2km further on there was another tap signposted.

Ridgeway Strade Bianche

The farm lanes were mainly crushed chalk, which looked great but could be pretty bumpy and had chunks of flint poking through which were best avoided. As it was dry it was pretty quick rolling, but it would be really sticky in the wet.

The Ridgeway took most of the morning, then the route drops down to run alongside the Thames at Goring.

The Thames Path took me on to Reading, which was fun as it was the first day of the Reading Festival and the route passed one of the site entrances, so I had to dodge lots of Polos and Fiat 500s full of teenagers, and then ride through the main shopping centre with the paths full of people carrying disposable tents and crates of cheap cider.

I stopped off at the Mission Burrito in the shopping centre for lunch, before following the River Kennet south out of the city.

The route was mainly cycle paths and trails until it hooks up with the Basingstoke Canal for a while.

Crossing the canal

After the canal I detoured a little to find the hotel I’d booked in Aldershot, deciding that a real bed and a takeaway pizza would hopefully lead to a longer night’s sleep.

Day 3

Day 3 started with my only encounter with a bad driver, a close pass on the way out of Aldershot, who then had the gall to argue about it when called out.

The route rolled through some of the smarter housing south of Farnham before crossing the edge of a golf course and entering another military range. This area is very sandy, which was hard going at times and required some pushing.

Bike wheel in sand

Unlike the ranges on Salisbury Plain, I did actually catch a glimpse of some of our guys in green suits, a small group that looked to be heading off to practice with some smoke grenades.

The range section ends with the climb up to the Devil’s Punch Bowl, which again in places is very steep, with sections of “baby’s head” sized boulders that make riding without suspension tricky, but the view from the top is pretty good.

Devil's Punch Bowl

After the Devil’s Punch Bowl the route tracks south from Hindhead, again sticking mainly to forest trails, and I even spotted some deer on the way into Liss.

Deer on the path

I cut the corner a little to get to Petersfield sooner, and from there took the old road up through Queen Elizabeth Country Park, before the absolutely brutal climb up Butser Hill (it’s bad enough up the other side on Harvesting Lane with tarmac). With all the weight on the bike I had to push most of it.

From the top it was an easy run along the South Downs Way to the cafe at the Sustainability Centre for some much needed lunch. As I was feeling it by then, I chose to make use of local knowledge and took the direct road route back home: down Old Winchester Hill and then back up Beacon Hill out of Exton, before cutting across to Owslebury, Fishers Pond and Colden Common, and hitting Poles Lane back to Hursley.


It was a great 3 days and I had a lot of fun, but doing it in 3 days is pushing pretty hard considering the amount of climbing and the type of trails the route takes.

I’ll try and do another post about the kit I took, but one thing I will say is that I’m going to need some new gravel/MTB shoes pretty soon. The Giro Rumbles I’ve been using since I started training for the Dirty Kanza have been pretty good, but a week later the feeling has still not 100% returned to the middle 3 toes on both feet (it’s getting better every day, but it’s time for some stiffer soles).


IKEA VINDRIKTNING Air Quality Sensor

I recently saw a tweet pointing to a Hackaday article (/ht Andy Piper) about adding an ESP8266 to the new IKEA VINDRIKTNING air quality sensor, so I thought I’d have a go myself.

IKEA Air Quality Sensor showing Green Light

The sensor is a little standalone unit that measures the amount of PM 2.5 particles in the air, and it has an array of coloured LEDs on the front that show a spectrum from green when the count is low to red when it is high.

Sören Beye opened one up and worked out that the microcontroller that reads the sensor to control the LEDs does so over a UART serial connection, and that the Tx/Rx lines were exposed via a set of test pads, along with 5V and Ground. This makes it easy to attach a second microcontroller to the Rx line to read the response when the sensor is polled.

Sören has written some code for an ESP8266 to decode that response and publish the result via MQTT.
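
If you just want to check that data is flowing before wiring anything else up, you can watch the broker with mosquitto_sub. This is a minimal sketch: the broker address is a placeholder, and the '#' filter matches every topic, so narrow it down once you know the topic prefix the firmware is configured with.

$ mosquitto_sub -h -v -t '#'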

Making the hardware modification is pretty simple

Wemos D1 Mini attached to sensor
  • Unscrew the case
  • Strip the ends of 3 short pieces of wire
  • Solder the 3 leads to the test pads labelled 5V, G and REST
  • Solder the 5V to 5V, G to G and REST to D2 (assuming you’re using a Wemos D1 Mini)
  • Place the Wemos in the empty space above the sensor
  • Screw the case back together

The software is built using the Arduino IDE and is easily flashed via the USB port. Once installed, when the ESP8266 first boots it will set up a WiFi Access Point to allow you to enter details for the local WiFi network and the address, username and password for an MQTT broker.

When connected the sensor publishes a couple of messages to allow auto configuration for people who use Home Assistant, but it also publishes messages like this (example values):

    "ssid":"IoT Network",

It includes the pm25 value and information about which network it’s connected to and its current IP address. I’m subscribing to this with Node-RED and using it to convert the numerical value, which has units of µg/m³, into a recognised scale (found on page 4).

let pm25 = msg.payload.pm25
if ( pm25 < 12 ) {
  msg.payload.string = "good"
} else if (pm25 >= 12 && pm25 < 36) {
  msg.payload.string = "moderate"
} else if (pm25 >= 36 && pm25 < 56) {
  msg.payload.string = "unhealthy for sensitive groups"
} else if (pm25 >= 56 && pm25 < 151 ) {
  msg.payload.string = "unhealthy"
} else if (pm25 >= 151 && pm25 < 251 ) {
  msg.payload.string = "very unhealthy"
} else if (pm25 >= 251 ) {
  msg.payload.string = "hazardous"
}
return msg;

I’m feeding this into a Google Assistant Smart Home Sensor device that has the SensorState trait. This takes the scale values as input, but you can also include the raw values as well.

msg.payload = {
    // include both the scale value and the raw reading
    "currentSensorState": msg.payload.string,
    "rawValue": msg.payload.pm25
}
return msg;

I will add an Air Quality trait to the Node-RED Google Assistant Bridge shortly.

I’m also routing it to a gauge in a Node-RED Dashboard setup.

Quick and Dirty Finger Daemon

I’ve been listening to more Brad & Will Made a Tech Pod and the current episode triggered a bunch of nostalgia about using finger to work out what my fellow CS students at university were up to. I won’t go into too much detail about what Finger is as the podcast covers it all.

This podcast has triggered things like this in the past, like when I decided to make this blog (and Brad & Will’s podcast) available via Gopher.

On the podcast they had Ben Brown as a guest, who had written his own Finger Daemon and linked it up to a site called Happy Net Box where users can update their plan file. Then anybody can access it using the finger command, e.g. finger username@happynetbox.com. The finger command ships by default on Windows, macOS and Linux, so it can be used from nearly anywhere.

I really liked the idea of resurrecting finger and as well as having a play with Happy Net Box I decided to see if I could run my own.

I started to look at what it would take to run a finger daemon on one of my Raspberry Pis, but while there are 2 packaged daemons, they don’t appear to run on current releases as they rely on init.d rather than systemd.

Next up I thought I’d have a look at the protocol, which is documented in RFC 1288. It is incredibly basic: you just listen on port 79 and read the username, terminated with a carriage return and line feed. This seemed simple enough to implement, so I thought I’d give it a try in Go (and I needed something to do while all tonight’s TV was taken up with 22 men chasing a ball round a field).

The code is on Github here.

package main

import (
  "fmt"
  "io"
  "net"
  "os"
  "path"
  "strings"
  "time"
)

const (
  CONN_HOST = ""
  CONN_PORT = "79"
  CONN_TYPE = "tcp"
)

func main() {
  l, err := net.Listen(CONN_TYPE, CONN_HOST+":"+CONN_PORT)
  if err != nil {
    fmt.Println("Error opening port: ", err.Error())
    os.Exit(1)
  }
  defer l.Close()
  for {
    conn, err := l.Accept()
    if err != nil {
      fmt.Println("Error accepting connection: ", err.Error())
      continue
    }
    go handleRequest(conn)
  }
}

func handleRequest(conn net.Conn) {
  defer conn.Close()
  currentTime := time.Now()
  buf := make([]byte, 1024)
  reqLen, err := conn.Read(buf)
  if err != nil {
    fmt.Println("Error reading from: ", err.Error())
    return
  }
  fmt.Println(currentTime.Format(time.RFC3339), "Connection from: ", conn.RemoteAddr())

  // Requests look like "[/W ]username\r\n"
  request := strings.TrimSpace(string(buf[:reqLen]))

  parts := strings.Split(request, " ")
  wide := false
  user := parts[0]

  if parts[0] == "/W" && len(parts) == 2 {
    wide = true
    user = parts[1]
  }

  if strings.Index(user, "@") != -1 {
    // user@host asks us to forward the request, which RFC 1288
    // allows a server to refuse
    conn.Write([]byte("Forwarding not supported\r\n"))
  } else {
    if wide {
      // The /W (wide) variant gets the same response here
    }
    // Serve plans/<username>.plan, using path.Base and path.Clean
    // to stop requests escaping the plans directory
    pwd, _ := os.Getwd()
    filePath := path.Join(pwd, "plans", path.Base(user+".plan"))
    filePath = path.Clean(filePath)
    file, err := os.Open(filePath)
    if err != nil {
      //not found
      conn.Write([]byte("Not Found\r\n"))
    } else {
      defer file.Close()
      io.Copy(conn, file)
    }
  }
}

Rather than deal with the nasty security problems of pulling .plan files out of people’s home directories, it uses a directory called plans and loads files that match the pattern <username>.plan

I’ve also built it into a Docker container, mounting a local directory to allow me to edit and add new plan files.
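
Something along these lines should work to build and run it; the image name and the /plans mount point here are illustrative assumptions rather than exactly what I used.

$ docker build -t fingerd .
$ docker run -d --name fingerd -p 79:79 -v $(pwd)/plans:/plans fingerd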

You can test it with the finger command.
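
Here alice is a stand-in for whichever plans/<username>.plan file you’ve created on the server:

$ finger alice@localhost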

Working with multiple AWS EKS instances

I’ve recently been working on a project that uses AWS’s EKS managed Kubernetes service.

For various reasons too complicated to go into here we’ve ended up with multiple clusters owned by different AWS Accounts so flipping back and forth between them has been a little trickier than normal.

Here are my notes on how to manage the AWS credentials and the kubectl config to access each cluster.


The first task is to authorise the AWS CLI to act as the user in question. We do this by creating a user with the right permissions in the IAM console and then exporting the Access key ID and Secret access key values, usually as a CSV file. We then take these values and add them to the ~/.aws/credentials file.

aws_access_key_id = AKXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy

aws_access_key_id = AKYYYYYYYYYYYYYYYYYY
aws_secret_access_key = abababababababababababababababababababab

aws_access_key_id = AKZZZZZZZZZZZZZZZZZZ
aws_secret_access_key = nmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnm

We can pick which set of credentials the AWS CLI uses by adding the --profile option to the command line.

$ aws --profile dev sts get-caller-identity
    "Account": "111111111111",
    "Arn": "arn:aws:iam::111111111111:user/dev"

Instead of using the --profile option you can also set the AWS_PROFILE environment variable. Details of all the ways to switch profiles are in the docs here.

$ export AWS_PROFILE=test
$ aws sts get-caller-identity
    "Account": "222222222222",
    "Arn": "arn:aws:iam::222222222222:user/test"

Now that we can flip easily between the different AWS accounts, we can export the EKS credentials with

$ export AWS_PROFILE=prod
$ aws eks update-kubeconfig --name foo-bar --region us-east-1
Updated context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar in /home/user/.kube/config

The user that created the cluster should also follow these instructions to make sure the new account is added to the cluster’s internal ACL.
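
The short version of those instructions is editing the aws-auth ConfigMap inside the cluster to map the extra IAM users; a rough sketch, where the ARN, username and group are examples (and system:masters grants full admin):

$ kubectl edit -n kube-system configmap/aws-auth
# then add an entry for each extra user under mapUsers:
#   mapUsers: |
#     - userarn: arn:aws:iam::111111111111:user/dev
#       username: dev
#       groups:
#         - system:masters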


If we run the previous command with each profile it will add the connection information for all 3 clusters to the ~/.kube/config file. We can list them with the following command

$ kubectl config get-contexts
CURRENT   NAME                                                  CLUSTER                                               AUTHINFO                                              NAMESPACE
*         arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   
          arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   
          arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar 

The star is next to the currently active context; we can change the active context with this command

$ kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar
Switched to context "arn:aws:eks:us-east-1:222222222222:cluster/foo-bar".

Putting it all together

To automate all this I’ve put together a collection of scripts that look like this

export AWS_PROFILE=prod
aws eks update-kubeconfig --name foo-bar --region us-east-1
kubectl config use-context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar

I then run these with the shell’s source ./setup-prod command (or its shortcut . ./setup-prod), instead of adding a shebang to the top and running them as normal scripts. This is because environment variables set in a normal script only exist for that script’s own process and are lost when it exits. Sourcing keeps the AWS_PROFILE variable in scope, so the AWS CLI will continue to use the correct account settings when it’s used later while working on this cluster.
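
As a quick check that the profile really does stay set in the current shell after sourcing:

$ . ./setup-prod
$ echo $AWS_PROFILE
prod
$ kubectl config current-context
arn:aws:eks:us-east-1:333333333333:cluster/foo-bar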

Joining FlowForge Inc.

FlowForge Logo

Today is my first day working for FlowForge Inc. I’ll be employee number 2, joining Nick O’Leary to work on all things based around Node-RED and continuing to contribute to the core open source project.

We should be building on some of the things I’ve been playing with recently.

Hopefully I’ll be able to share some of the things I’ll be working on soon, but in the meantime here is the short post that Nick wrote when he announced FlowForge a few weeks ago, and a post welcoming me to the team.

To go with this announcement, Hardill Technologies Ltd will be going dormant. It’s been a good 3 months and I’ve built something interesting for my client, which I hope to see go live soon.

Setting up WireGuard IPv6

I’ve been having a quick play with setting up another VPN solution for getting an IPv6 address on my mobile devices, this time using WireGuard.

WireGuard is a relatively new VPN tunnel implementation that has been written to be as stripped back as possible, keeping the codebase small to help make it easier to audit.


A lot of the instructions for running WireGuard on Raspberry Pi OS talk about adding Debian testing repos or building the code from scratch, but it looks like recent updates have included the packages needed in the core repositories.

# apt-get install wireguard

I set up UDP port forwarding on my router for port 53145 and got my ISP to route another /64 IPv6 subnet to my line; both of these are forwarded on to the Raspberry Pi that is also running my OpenVPN setup. This is useful as it’s already set up to do NAT for the range I’m issuing to OpenVPN clients, so having it do the same for the WireGuard range is easy enough.
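
For reference, the sort of thing that’s needed looks roughly like this, assuming the Pi’s LAN interface is eth0 and WireGuard clients are issued addresses from (matching the example configs below); the routed IPv6 /64 doesn’t need NAT, just forwarding.

# sysctl -w net.ipv4.ip_forward=1
# sysctl -w net.ipv6.conf.all.forwarding=1
# iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE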

WireGuard on Linux is implemented as a network device driver, so it can be configured on the command line with the ip command (the addresses here and in the configs below are examples), e.g.

# ip link add dev wg0 type wireguard
# ip address add dev wg0
# ip address add 2001:8b0:2c1:xxx::1/64 dev wg0

This brings the device up and sets the IP addresses, but you still need to add the Private Key, and the remote peer’s address and Public Key, which can be done with the wg command

# wg set wg0 listen-port 53145 private-key /path/to/private-key peer ABCDEF... allowed-ips,2001:8b0:2c1:xxx::2/128 endpoint

Or, more easily, it can be loaded from a config file

# wg setconf wg0 myconfig.conf

Or the whole interface can be set up and configured with wg-quick

# wg-quick up /path/to/wg.conf

Server Config

Address =, 2001:8b0:2c1:xxx::1/64
ListenPort = 53145
PrivateKey = oP3TAHBctNVcnPTxxxxxxxxzNRLSF5CwII4s8gVAXg=

PublicKey = 4XcNbctkGy0s73Dvxxxxxxxxx++rs5BAzCGjYmq21UM=
AllowedIPs =, 2001:8b0:2c1:xxx::2/128

The Server config includes:

  • Address is the local address on the VPN tunnel; here it has both IPv4 and IPv6 addresses.
  • ListenPort is the port to listen on for client connections. WireGuard doesn’t have an assigned port.
  • PrivateKey identifies the host.
  • There can be multiple Peer sections, one per client that can connect, and AllowedIPs is the set of tunnel IP addresses for each client.

Client Config

Address =, 2001:8b0:2c1:xxx::2/128
DNS =
PrivateKey = UFIJGgtKsor6xxxxxxxxxxxbWeKmw+Bb5ODpyNblEA=

PublicKey = jMB2oMu+YTKigGxxxxxxxxxxSYcTde/7HT+QlQoZFm0=
AllowedIPs =, ::0/0
Endpoint =

The differences from the Server config are:

  • Interface has a DNS entry for the client to use while the tunnel is running.
  • Peer has an Endpoint, which is the public address and port to connect to.
  • AllowedIPs are the IPs to route over the tunnel; in this case it’s everything.

Key Generation

Both ends of the connection need a PublicKey and a PrivateKey so they can mutually authenticate each other. These are generated with the wg command

# wg genkey > privateKey
# wg pubkey < privateKey > publicKey

Sharing Config

With the WireGuard Android app you can manually add all the details from the config file, or it supports reading config files from QR codes. This makes it really easy to set up, and removes the chance of a typo in the keys and IP addresses.

You can generate QR codes from the config file as follows:

# qrencode -t png -o nexus.png < nexus.conf
# qrencode -t ansiutf8 < nexus.conf

The first generates a PNG file with the QR code, the second prints the code out as ASCII art.
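
Once a client has connected, you can check the state of the tunnel with wg show, which lists each peer along with its endpoint, last handshake time and transfer counts:

# wg show wg0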


It all looks to be working smoothly. I can see the main advantage over OpenVPN being that you don’t need to worry about certificate maintenance and distribution.

I’ll give it a proper work out and see how it holds up running things like SIP connections along with general access to my home network.

As well as running it on the phone, I’ll set up a client config for my laptop to use when out and about. The only issue is that the Gnome Network Manager integration for WireGuard isn’t available in the standard repos for Ubuntu 20.04, so it needs to be started/stopped from the command line.

New Monitor (BenQ EW2780U)

I finally got round to buying myself a proper monitor to use with my laptop at home (I know I’m very late to this party given the current situation of extended working from home).

I’ll be using it with my Dell XPS 13, which only has 2 ports (2 USB-C/Thunderbolt) that double as the power input, so I was looking for a monitor that could both be driven via USB-C and supply power to the laptop via USB-PD.

Having had a bit of a search round and asked for suggestions on Twitter, I found the BenQ EW2780U, which looked to cover all the bases. There was a reasonable-looking review from TechRadar. 27″ was a little outside my initial size range, but given how close it will be on my desk and the amount of space I have to play with, it’s the right call.

There was a very similar 32″ model (BenQ EW3270U) on Amazon that was even slightly cheaper, but while it had support for video over USB-C, it didn’t support USB-PD to charge/power the laptop.

Technical Specs

  • 27″ Screen
  • 3840 x 2160 pixels
  • 2 HDMI ports (v2.0)
  • 1 DisplayPort (v1.4)
  • 1 USB-C
  • USB-PD up to 60W (Note I don’t think this is enough to charge a MBP)
  • Built in speakers (these work over HDMI/USB-C)


I’ve tweaked a few of the out of the box settings.

  • Turned off auto input switching, mainly because it was flipping to the Chromecast whenever the laptop went to sleep or I unplugged it. It’s pretty easy to switch inputs with the buttons on the back.
  • Set it to sleep when the USB-C connection is unplugged.

While the monitor did come with a HDMI cable in the box, I did need to buy a new USB-C cable to use it with the Dell; none of the ones I already had would carry the 4K video signal. This is one of the only downsides of USB-C: all the cables fit all the devices, but it’s very hard to tell visually what spec a given cable supports.

I had a problem with Ubuntu 18.04 not liking driving such a big desktop; if I put anything on the new monitor it would occasionally crash the Gnome session, which meant all the open apps also got killed. This led to me actually getting round to the upgrade to Ubuntu 20.04 that I had been putting off, which has fixed the problem, and everything is running smoothly now.

The only thing it’s really missing is a built-in USB hub, then I wouldn’t need to plug a dongle into the remaining USB-C port to give me some USB-A ports.

Google Assistant Sensors

Having built my 2 different LoRa connected temperature/humidity sensors, I was looking for something other than the Grafana instance that shows the trends.

Being able to ask the Google Assistant the temperature in a room seemed like a good idea, and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with: Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following.

The join node is set to combine the 2 incoming MQTT messages into a single object based on their topics. The function node then builds the right payload to pass to the Google Home output node and finally it feeds it through an RBE node just to make sure we only send updates when the data changes.

msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}
return msg;

Setting up an AWS EC2 Mac

I recently needed to debug some problems running a Kubernetes app on a Mac. The problem is I don’t have a Mac, or easy access to one that I can have full control over to poke and prod at things. (I’m also not the biggest fan of macOS, but that’s a separate story.)

Recently AWS started to offer Mac Mini EC2 instances. These differ a little from most normal EC2 instances as they are an actual dedicated bit of hardware that you have exclusive access to rather than a VM on hardware shared with others.

Because it’s a dedicated bit of hardware, the process for setting one up is a little different.

Starting the Instance

First, you will probably need to request a limit increase on your account, as the default limit for dedicated hardware looks to be 0. This limit is also per region, so you will need to ask for the update in every region you need. To request the update use the AWS Support Center, use the “Create Case” button and select “Service Limit Increase”. From the drop down select “EC2 Dedicated Hosts”, then the region, and request an update to the mac1 instance type with the number of concurrent instances you will need. It took a little time for my request to be processed, but I submitted it on Friday afternoon and it was approved on Sunday morning.

Once it has been approved you can create a new “Dedicated Hosts” instance on the EC2 console, with an “Instance Family” of mac1 and an “Instance Type” of mac1.metal. You can pick your availability zone (not all Regions and AZs have all instance types, so it might not be possible to allocate a Mac in every zone). I also suggest you tick the “Instance auto-placement” box.

Once that is complete you can actually allocate an EC2 instance on this dedicated host. You get to pick which version of macOS you want to run. Assuming you only have one dedicated host and you ticked the auto-placement box, you shouldn’t need to pick the hardware to run the instance on.

The other main things to pick as you walk through the wizard are the amount of disk space (the default is 60GB), which security policy you want (be sure to pick one with SSH access) and which SSH key you’ll use to log in.

The instances do take a while to start, but given it’s doing a fresh macOS install on the hardware this is probably not a surprise. Once the console says it’s up and both the status checks are passing, you’ll be able to ssh into the box.
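
Something like the following, where the key is the one picked in the wizard and the hostname is the public DNS name from the console (both placeholders here); ec2-user is the default user on the Mac instances.

$ ssh -i ~/.ssh/mac-key.pem ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com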

Enabling a GUI

Once logged in you can do most things from the command line, but I needed to run Docker, and all the instructions I could find online said I needed to download Docker Desktop and install that via the GUI.

I found the following gist which helped.

  • First up, set a password for the ec2-user
    sudo passwd ec2-user
  • Second, enable VNC (the Remote Management agent)
% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
-activate -configure -access -on \
-configure -allowAccessFor -specifiedUsers \
-configure -users ec2-user \
-configure -restart -agent -privs -all

% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
 -configure -access -on -privs -all -users ec2-user

You can then add -L 5900:localhost:5900 to the ssh command that you use to log into the Mac. This will forward the VNC port to localhost.
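
So the full command ends up looking something like this (key and hostname placeholders as before):

$ ssh -i ~/.ssh/mac-key.pem -L 5900:localhost:5900 ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com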

VNC Viewer or Remmina can then be used to start a session (pointed at localhost:5900) that gives full access to the Mac’s GUI.

Expand the disk

If you have allocated more than the default 60GB then you will need to expand the disk to make full use of it.

% PDISK=$(diskutil list physical external | head -n1 | cut -d" " -f1)
% APFSCONT=$(diskutil list physical external | grep "Apple_APFS" | tr -s " " | cut -d" " -f8)
% sudo diskutil repairDisk $PDISK
# Accept the prompt with "y", then paste this command
% sudo diskutil apfs resizeContainer $APFSCONT 0

Add tools

The instance comes with Homebrew pre-installed, so you can install nearly anything else you might need.
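
For example, Docker Desktop can be installed from its Homebrew cask (assuming docker is still the cask name):

% brew install --cask docker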

Shut it down when you are done

Mac EC2 instances really are not cheap ($25.99 per day…) so remember to kill it off when you are done.

Google Assistant Camera Feeds

As mentioned in a previous post, I’ve been playing with streaming camera feeds to my Chromecast.

The next step is to enable access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then say to the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”.

This will create an input message in Node-RED that looks like:

  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
      "online": true

The important part is mainly the SupportedStreamProtocols, which shows the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

The function node updates msg.payload.params with the required details, in this case (the URL here is a placeholder):

msg.payload.params = {
    // placeholder URL for the HLS stream
    cameraStreamAccessUrl: "http://camera.local:8080/stream.m3u8",
    cameraStreamProtocol: "hls"
}
return msg;

It needs to include the cameraStreamAccessUrl which points to the video stream and the cameraStreamProtocol which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure that they are secured to prevent them being scanned by a IoT search engine like Shodan and open to the world.