The Linear Clock Ticks Again

I’ve had a project ticking over slowly in the background for a number of years.

Last year I designed and had built a number of PCBs to be used as HATs for a Raspberry Pi Zero. They included an RTC and a terminal block to attach the LED strip.

I did say that I would write another post when the boards were delivered and I had assembled the first prototype. Unfortunately I had made a small but critical mistake when designing the boards: I slightly messed up the package size for the RTC, so it wasn’t possible to assemble the boards correctly. I didn’t get round to re-doing the PCB layout with the correctly sized parts, so the whole thing just sat for a while.

In the meantime the Raspberry Pi Foundation went and released a new product, the Raspberry Pi Pico, which is based on the RP2040 chip. As well as the Pico they are also making the RP2040 chip available to other folk to include it directly in their own projects.

Pimoroni have created a number of different boards, but their latest is the Plasma 2040, which is specifically designed to drive LED strips.

B.O.M.

Assembly

  • Solder the RTC on to the breakout section of the Plasma 2040. The terminals are labelled, so just make sure you match up the pins. I used the headers that came with the RTC and arranged it so the breakout sat over the top of the Plasma 2040.
  • Loosen the screw terminals for the connections marked 5V, DA and -. Insert the red wire of the adapter into 5V, the green wire into DA and the white wire into -.
  • Clip the LED strip to the end of the adapter.
Plasma 2040

Code

When you first attach the Plasma 2040 to your computer it will show up as a USB flash drive. This is so you can install the runtime; in this case we’ll be using the Pimoroni MicroPython build that comes with support for the board. You can grab a version from the release page on GitHub here. Once downloaded, copy it into the root of the drive. When the copy has finished the board will reboot and be ready to run Python code.

You can use the Thonny IDE to both write and push code to the device. You will need at least version 3.3.3 to support the Plasma 2040.

The first version of the code was as follows:

import plasma
from plasma import plasma2040
from pimoroni import RGBLED, Button
import time

NUM_LEDS = 60
LOW = 32
MED = 64
HIGH = 128
BRIGHTNESS = [LOW,MED,HIGH]
BRIGHTNESS_LEVEL = 0

button_brightness = Button(plasma2040.BUTTON_A)

led = RGBLED(plasma2040.LED_R, plasma2040.LED_G, plasma2040.LED_B)
led.set_rgb(0, 0, 0)
led_strip = plasma.WS2812(NUM_LEDS, 0, 0, plasma2040.DAT)

led_strip.start()

while True:
    RED = [0]*NUM_LEDS
    GREEN = [0]*NUM_LEDS
    BLUE = [0]*NUM_LEDS
    t = time.localtime()

    hour = (t[3] % 12) * 5
    #Hours
    RED[hour] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 1] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 2] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 3] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 4] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    #Mins
    GREEN[t[4]] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    #Secs
    BLUE[t[5]] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    
    #set the LEDS
    for i in range(NUM_LEDS):
        led_strip.set_rgb(i, RED[i], GREEN[i], BLUE[i])
    
    #change brightness
    if button_brightness.read():
        BRIGHTNESS_LEVEL += 1
        BRIGHTNESS_LEVEL %= 3
    
    time.sleep(1)
 

This works well when triggered from Thonny, as it syncs the laptop’s time to the RP2040 each time it connects. But when the clock is powered by a USB power supply or a battery, it starts at 00:00:01 on Jan 1st 2021 and has no way to be set to the current time.

This is why we need the RTC module: it keeps track of the time while the clock is powered down.

The code also has a way to change the brightness: pressing the A button cycles through 3 different brightness levels.

Setting the RTC Time

With a little bit of playing I worked out how to sync the RTC to the current time from the Thonny console:

>>> from pimoroni_i2c import PimoroniI2C
>>> from breakout_rtc import BreakoutRTC
>>> import time
>>> PINS_PLASMA = {"sda": 20, "scl": 21}
>>> i2c = PimoroniI2C(**PINS_PLASMA)
>>> rtc = BreakoutRTC(i2c)
>>> rtc.set_unix(time.time())
>>> rtc.set_time(54,18,17,6,18,9,2021)
True
>>> rtc.update_time()
True
>>> print(rtc.string_time())
17:18:54
>>> rtc.set_backup_switchover_mode(3)

The most important line is the last one, which enables the battery backup for the RTC so it remembers the time you just set.

I was going to use the rtc.set_unix() function and pass in time.time(), but it appears that the unix timestamp is maintained independently of the “real” time on the RTC.

The set_time() function takes values in the order:

  • seconds (0-59)
  • minutes (0-59)
  • hours (0-23)
  • day of the week (1-7 -> mon-sun)
  • day of month (1-31)
  • month (1-12)
  • year (2000-2099)
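
So the set_time() call from the Thonny session above breaks down like this:

# Saturday 18th September 2021, 17:18:54
rtc.set_time(
    54,    # seconds
    18,    # minutes
    17,    # hours (24 hour clock)
    6,     # day of the week (Saturday)
    18,    # day of the month
    9,     # month (September)
    2021,  # year
)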

With the RTC set correctly, all that’s needed is a small update to the code to read from the RTC rather than from the time object and we are good to go.

import plasma
from plasma import plasma2040
from pimoroni import RGBLED, Button
from pimoroni_i2c import PimoroniI2C
from breakout_rtc import BreakoutRTC
import time

PINS_PLASMA = {"sda": 20, "scl": 21}

i2c = PimoroniI2C(**PINS_PLASMA)
rtc = BreakoutRTC(i2c)

if rtc.is_12_hour():
    rtc.set_24_hour()

if rtc.update_time():
    print(rtc.string_time())
    print(rtc.string_date())

NUM_LEDS = 60
LOW = 32
MED = 64
HIGH = 128
BRIGHTNESS = [LOW,MED,HIGH]
BRIGHTNESS_LEVEL = 0

button_brightness = Button(plasma2040.BUTTON_A)

led = RGBLED(plasma2040.LED_R, plasma2040.LED_G, plasma2040.LED_B)
led.set_rgb(0, 0, 0)
led_strip = plasma.WS2812(NUM_LEDS, 0, 0, plasma2040.DAT)

led_strip.start()

rtc.enable_periodic_update_interrupt(True)

while True:
    RED = [0]*NUM_LEDS
    GREEN = [0]*NUM_LEDS
    BLUE = [0]*NUM_LEDS

    if rtc.read_periodic_update_interrupt_flag():
        rtc.clear_periodic_update_interrupt_flag()
         
        rtc.update_time()
        hour = (rtc.get_hours() % 12) * 5
        RED[hour] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 1] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 2] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 3] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 4] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        GREEN[rtc.get_minutes()] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        BLUE[rtc.get_seconds()] = BRIGHTNESS[BRIGHTNESS_LEVEL]

        for i in range(NUM_LEDS):
            led_strip.set_rgb(i, RED[i], GREEN[i], BLUE[i])
        
        if button_brightness.read():
            BRIGHTNESS_LEVEL += 1
            BRIGHTNESS_LEVEL %= 3
    
    time.sleep(1)

2021 Edition

Next Steps

There are a few things that need doing next. The first is to build a case for the clock. I’m thinking about something made up of layers of thin plywood with a channel for the LED strip, and maybe a layer of smoked/matte acrylic to act as a diffuser.

The second part is to work out a way to handle DST. MicroPython doesn’t support timezones, as the database needed to keep track of all the different timezones takes up a huge amount of space. I could hard code in the dates for my location, but I’ll probably just make use of the B button to toggle an hour’s difference on/off, as sketched below.
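
A minimal sketch of what that toggle might look like (untested, and assuming the Pimoroni library exposes BUTTON_B the same way it does BUTTON_A):

button_dst = Button(plasma2040.BUTTON_B)
DST_OFFSET = 0

# inside the main loop, next to the brightness button check
if button_dst.read():
    DST_OFFSET = 1 - DST_OFFSET  # toggle between 0 and 1 hours

hour = ((rtc.get_hours() + DST_OFFSET) % 12) * 5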

Optionally I might add another strip of 31 LEDs (probably at 30/meter) to be used as a calendar showing the current month, with markers for weekends and the current day.

Another option is to use 4 of these to build a 60 LED ring for something a little more conventionally shaped.

And the final extra hack is to daisy chain a light level sensor (e.g. one of these) on top of the RTC and dynamically adjust the brightness based on ambient light levels.

I’ll also probably keep tinkering with the Raspberry Pi Zero W version, as that will allow OAuth to link to things like Google Calendar to show meetings in the clock view and add holidays to the calendar view. It will also have access to the full timezone database and NTP for time syncing over the network.

Router swap

With all the working from home over the last 18 months and the fact I now work for a 100% remote company, I decided it was time to have another look at my home broadband setup.

I currently have a FTTC install supplied by A&A which tops out at about 60/15, and while a FTTP setup would be nice I’ll have to wait until Openreach get their finger out and actually fully enable my exchange (a recent new build development is already full fibre, but the existing properties will have to wait).

The line has been pretty reliable but I decided it was time to add some backup capability if I’m going to be relying on it all the time. I decided to add an LTE/4G link (no 5G out here in the sticks yet either).

I already had an LTE USB stick, but the Ubiquiti EdgeRouter X that I was running didn’t have a USB port, so I looked at putting the stick in a Pi and adding a second low priority default route via the Pi. This worked, but meant that I lost IPv6 (finding a UK cell provider that will offer IPv6 on Pay&Go is a problem I’ve looked at before) and that others wouldn’t be able to reach my web server or the other services I host at home. I’ll cover the 4G network provision later.

A&A offer a L2TP service which can route the fixed IPv4 and IPv6 ranges over any connection if your main line is down for any reason. This can easily run over a LTE connection, but it does have one slight niggle: if the L2TP tunnel is running at the same time as the FTTC line then it will take priority, which means it should only be started when the FTTC line goes down.

The EdgeRouter X only supports L2TP tunnels when paired with IPsec, so it can’t easily be used with this option. I could run something like xl2tpd on the Pi with the LTE USB stick, but then I would need a way to trigger it on the Pi when the PPPoE link goes down on the EdgeRouter. All of this, combined with Ubiquiti’s apparent pulling back from the EdgeRouter line as they focus more on their Dream Machine range, meant I thought I’d see what else was available.

MikroTik

If you poke around the internet in the places where people talk about Ubiquiti kit they also mention MikroTik and RouterOS so I thought I’d have a look and see what was available.

MikroTik hEX S router

The closest match to the EdgeRouter X looked to be the MikroTik hEX S. It has the same 5 Gigabit Ethernet ports and PoE power, and also has a USB port and an SFP port in case I ever want to add fibre support.

I already had a Huawei E3372-200 LTE stick to plug into the side. This supports up to 150Mbps connections and has connectors to add external antennas if needed to get the best signal. I also grabbed a 90° USB adapter, because everybody knows that USB sticks work better when pointed straight up.

Router & Switch

I plugged the hEX S in on my desk to work out how to configure it and play with some of the settings.

There are 3 ways to configure most RouterBoard/RouterOS devices:

  • Winbox – a native application that supports Windows (it can be run under Wine on Linux and macOS)
  • WebFig – a web interface
  • Console/SSH – a command line interface

I’ve not tried Winbox; I did most of the setup via the console interface, but used WebFig to check. Most of the time WebFig works just fine, but occasionally it would throw JavaScript errors. I’m hoping most of this is down to the fact I had to install a 7.1 release candidate build to get the LTE stick to work properly. I’ll check back once 7.1 gets a proper release.

Using the console I managed to set up the LAN IP address range and DHCP server, and pre-reserved all the static IP addresses to match my old setup.

Getting the port forwarding and hairpin NAT set up was a little more challenging than on the EdgeRouter, but I have something that looks to behave the same for everything I had set up before.

I set the LTE device to be always on, but with a static route to the L2TP endpoint and a script that runs when the PPPoE device goes up or down. When the PPPoE goes down it will connect the L2TP client, and it disconnects it when the PPPoE device comes back up. The easiest way to test is to unplug the ethernet cable between the router and the modem running in bridge mode.

Cellular Contract

The next question is what mobile data plan to use. This is meant to be used only as a fallback, so I don’t really want to be paying for a monthly contract and then not using it, which means I’m looking for a Pay & Go SIM card. I also want a plan that has the longest possible lifetime for any credit. Luckily Terence Eden had recently collated a list of the best deals for this kind of data SIM. It looks like the Three 24GB or the matching Vodafone 24GB plan are the best fit.

I opted for the Three, as I have reasonable coverage at home, it comes with 24GB pre-loaded and it will last for up to 2 years (unlike a lot of the others that expire every month). Its list price at time of writing is £44.99; I got mine for £39.96, but it’s been as low as £31.29 on offer recently.

Next

At the moment the router only fails over if the PPPoE connection goes down. It would be nice to detect the case where the PPPoE link stays up but traffic stops flowing, and change over then. The challenge here is how to know when to switch back, since the L2TP tunnel takes priority. I’ll have to think about that one.

King Alfred’s Way

Not being able to travel for the last 18 months has meant that I’ve not done either of the 2 big cycling events I had planned. I was going to go back and have another crack at the Dirty Kanza (now known as Unbound Gravel) and The Rift (200km round a couple of volcanoes in Iceland).

So I’ve been looking for something a little closer to home to have a crack at.

In mid-2020 Cycling UK announced a new route that they had been working on for a while called the King Alfred’s Way, a 350km mainly off-road route that starts in Winchester, goes west to Salisbury, north to near Swindon, east along the Ridgeway to Reading and then back down to Winchester via the edge of the Surrey Hills and the South Downs Way.

Day 1

Hursley House

I parked my car at my old IBM office at about 7:00 and spent the next 30 mins getting all the bags set up on the bike, topping up the tyre pressures, filling water bottles and distributing snacks for the day round various pockets.

It was a short (7km) ride to the official start point at the West Gate in Winchester, so I was able to set off close to 8:00.

Winchester West Gate

The route works its way through Winchester on mainly quiet residential streets before making a break for the countryside towards Sparsholt, where it forks off on to a singletrack trail up through the woods.

It was then a gentle rolling mix of wooded tracks and farm lanes all the way out to Salisbury, where the road turns north to roll past Stonehenge.

Stonehenge in the distance

After Stonehenge the route sets off across the Salisbury Plain Training Area; this is where you have to keep an eye on the flags and be prepared for loud bangs and tanks crossing the track. It was actually pretty quiet this time.

I had expected to find somewhere to stop for lunch along the way, but it’s pretty remote, and it wasn’t until I dropped off the Plain (just after the end of the 3rd GPX file) and made it to All Cannings around 14:50 that I found anywhere (and the pub was supposed to close at 15:00).

I had planned to camp at a place called Smeathe’s Ridge, which is just past Barbury Castle, having seen it mentioned as a good spot by a few people riding/walking the Ridgeway. Unfortunately this appears to also be a racehorse training gallops, which the owner was in the process of mowing when I got there. I did think about asking for permission, but decided to press on a little to see if I could find somewhere else.

About 5km further along, having dropped into the valley, I came across an empty field that was well screened from the track, wasn’t overlooked and didn’t look to have any paths running through it. I set up my little tent and boiled the water for dinner.

View across the valley

Day 2

I woke up stupidly early on the morning of day 2, so I had packed up my tent and was ready to roll again by 6:00. It started with a short push to get up on to the top of the ridge again.

I soon passed a perfect camping spot just off the trail, behind a bench looking out over the valley. Another cyclist had spent the night there and was still very much asleep as I rolled past.

It was foggy and it even started to rain a little, so I was glad I’d packed my proper rain jacket, both to keep the wet out and to help keep warm.

The track winds its way up over another ridge to the north and then drops down again to cross the M4 and pass the PGL summer camp before climbing back up on to the Ridgeway.

I was short on water, as the last place I’d been able to top up my bottles was The Kings Arms in All Cannings the afternoon before, and I’d used a bunch for dinner. I found a tap with a dirty looking hose at a little takeaway caravan attached to a pig farm. Unfortunately it was midweek and still very early, so there was no chance of grabbing a bacon sandwich, but the fitting for the hose came undone easily so I could top up my bottles. About 2km further on there was another tap, signposted.

Ridgeway Strade Bianche

The farm lanes were mainly crushed chalk, which looked great but could be pretty bumpy, with chunks of flint poking through which were best avoided. As it was dry it was pretty quick rolling, but it would be really sticky in the wet.

The Ridgeway took most of the morning, then the route drops down to run alongside the Thames at Goring.

The Thames Path took me on to Reading, which was fun as it was the first day of the Reading Festival and the route passed one of the site entrances; I had to dodge lots of Polos and Fiat 500s full of teenagers, and then ride through the main shopping centre with the paths full of people carrying disposable tents and crates of cheap cider.

I stopped off at the Mission Burrito in the shopping centre for lunch, before following the River Kennet south out of the city.

The route was mainly cycle paths and trails until it hooks up with the Basingstoke Canal for a while.

Crossing the canal

After the canal I detoured a little to find the hotel I’d booked in Aldershot, deciding that a real bed and a takeaway pizza would hopefully lead to a longer night’s sleep.

Day 3

Day 3 started with my only encounter with a bad driver: a close pass on the way out of Aldershot, who then had the gall to argue about it when called out.

The route rolled through some of the smarter housing south of Farnham before crossing the edge of a golf course and entering another military range. This area is very sandy, which was hard going at times and required pushing.

Bike wheel in sand

Unlike the ranges on Salisbury Plain, I did actually catch a glimpse of some of our guys in green suits: a small group that looked to be heading off to practice with some smoke grenades.

The range section ends with the climb up to the Devil’s Punch Bowl, which again in places is very steep, with sections of “babies’ heads” sized boulders that make riding without suspension tricky, but the view from the top is pretty good.

Devil's Punch Bowl

After the Devil’s Punch Bowl the route tracks south from Hindhead, again sticking mainly to forest trails, and I even spotted some deer on the way into Liss.

Deer on the path

I cut the corner a little to get to Petersfield a little sooner, and from there went up the old road through Queen Elizabeth Country Park, before the absolutely brutal climb up Butser Hill (it’s bad enough up the other side on Harvesting Lane with tarmac). With all the weight on the bike I had to push most of it.

From the top it was an easy run along the South Downs Way to the cafe at the Sustainability Centre for some much needed lunch. As I was feeling it by then, I chose to make use of local knowledge and took the direct road route back home: down Old Winchester Hill and back up Beacon Hill out of Exton, before cutting across to Owslebury, Fishers Pond and Colden Common and hitting Poles Lane back to Hursley.

Conclusion

It was a really great 3 days and I had a lot of fun, but 3 days is pushing pretty hard considering the amount of climbing and the type of trails the route takes.

I’ll try and do another post about the kit I took, but one thing I will say is that I’m going to need some new gravel/MTB shoes pretty soon. The Giro Rumbles I’ve been using since I started training for the Dirty Kanza have been pretty good, but a week later the feeling has still not 100% returned to the middle 3 toes on both feet (it’s getting better every day, but it’s time for some stiffer soles).

IKEA VINDRIKTNING PM2.5 Sensor

I recently saw a tweet linking to a Hackaday article (/ht Andy Piper) about adding an ESP8266 to the new IKEA VINDRIKTNING air quality sensor, so I decided to have a go at the conversion myself.

IKEA Air Quality Sensor showing Green Light

The sensor is a little standalone platform that measures the amount of PM2.5 particles in the air, with an array of coloured LEDs on the front that show a spectrum from green when the count is low to red when it’s high.

Sören Beye opened one up and worked out that the microcontroller that reads the sensor to control the LEDs does so over a UART serial connection, and that the Tx/Rx lines were exposed via a set of test pads, along with 5V and ground. This makes it easy to attach a second microcontroller to the Rx line to read the response when the sensor is polled.

Sören has written some code for an ESP8266 to decode that response and publish the result via MQTT.
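
From my reading of Sören’s code, the sensor’s reply is a 20-byte frame with a fixed 3-byte header, the PM2.5 value packed big-endian into bytes 5 and 6, and a checksum that makes the whole frame sum to zero modulo 256. Treat that layout as my assumption rather than gospel, but the decoding logic is roughly this (sketched in Python rather than the actual ESP8266 C++):

def decode_frame(buf):
    # Assumed layout: 20 bytes, header 0x16 0x11 0x0B,
    # PM2.5 big-endian in bytes 5-6, all bytes summing to 0 mod 256
    if len(buf) != 20 or buf[0:3] != b"\x16\x11\x0b":
        return None
    if sum(buf) & 0xFF != 0:
        return None  # failed checksum
    return (buf[5] << 8) | buf[6]  # PM2.5 in µg/m³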

Making the hardware modification is pretty simple:

Wemos D1 Mini attached to sensor
  • Unscrew the case
  • Strip the ends on 3 short pieces of wire
  • Solder the 3 leads to the test pads labelled 5V, G and REST
  • Solder the other ends to the microcontroller: 5V to 5V, G to G and REST to D2 (assuming you’re using a Wemos D1 Mini)
  • Place the Wemos in the empty space above the sensor
  • Screw the case back together

The software is built using the Arduino IDE and is easily flashed via the USB port. Once installed, when the ESP8266 boots it will set up a WiFi access point to allow you to enter details for the local WiFi network and the address, username and password for an MQTT broker.

When connected, the sensor publishes a couple of messages to allow auto configuration for people who use Home Assistant, but it also publishes messages like this:

{
  "pm25":12,
  "wifi":{
    "ssid":"IoT Network",
    "ip":"192.168.1.58",
    "rssi":-60
  }
}

It includes the pm25 value along with information about which network it’s connected to and its current IP address. I’m subscribing to this with Node-RED and using it to convert the numerical value, which has units of µg/m³, into a recognised scale (found on page 4).

let pm25 = msg.payload.pm25
if ( pm25 < 12 ) {
  msg.payload.string = "good"
} else if (pm25 >= 12 && pm25 < 36) {
  msg.payload.string = "moderate"
} else if (pm25 >= 36 && pm25 < 56) {
  msg.payload.string = "unhealthy for sensitive groups"
} else if (pm25 >= 56 && pm25 < 151 ) {
  msg.payload.string = "unhealthy"
} else if (pm25 >= 151 && pm25 < 251 ) {
  msg.payload.string = "very unhealthy"
} else if (pm25 >= 251 ) {
  msg.payload.string = "hazardous"
}
return msg;

I’m feeding this into a Google Smart Home Assistant Sensor device that has the SensorState trait. This takes the scale values as input, but you can also include the raw values as well.

msg.payload = {
  "params":{
    "currentSensorStateData":[
      {
        "name":"AirQuality",
        "currentSensorState":msg.payload.string
      },
      {
        "name":"PM2.5",
        "rawValue": msg.payload.pm25
      }
    ]
  }
}
return msg;

I will add the Air Quality trait to the Node-RED Google Assistant Bridge shortly.

I’m also routing it to a gauge in a Node-RED Dashboard setup.

Quick and Dirty Finger Daemon

I’ve been listening to more Brad & Will Made a Tech Pod and the current episode triggered a bunch of nostalgia about using finger to work out what my fellow CS students at university were up to. I won’t go into too much detail about what finger is, as the podcast covers it all.

This podcast has triggered things like this in the past, like when I decided to make this blog (and Brad & Will’s podcast) available via Gopher.

On the podcast they had Ben Brown as a guest, who had written his own finger daemon and linked it up to a site called Happy Net Box where users can update their plan file. Anybody can then access it using the finger command, e.g. finger hardillb@happynetbox.com. The finger command is shipped by default on Windows, macOS and Linux, so it can be used from nearly anywhere.

I really liked the idea of resurrecting finger and as well as having a play with Happy Net Box I decided to see if I could run my own.

I started to look at what it would take to run a finger daemon on one of my Raspberry Pis, but while there are 2 packaged daemons they don’t appear to run on current releases, as they rely on init.d rather than systemd.

Next up I thought I’d have a look at the protocol, which is documented in RFC 1288. It is incredibly basic: you just listen on port 79 and read the username, terminated with a carriage return & new line. This seemed simple enough to implement, so I thought I’d give it a try in Go (and I needed something to do while all tonight’s TV was taken up with 22 men chasing a ball round a field).

The code is on GitHub here.

package main

import (
  "io"
  "os"
  "fmt"
  "net"
  "path"
  "time"
  "strings"
)

const (
  CONN_HOST = "0.0.0.0"
  CONN_PORT = "79"
  CONN_TYPE = "tcp"
)

func main () {
  l, err := net.Listen(CONN_TYPE, CONN_HOST+":"+CONN_PORT)
  if err != nil {
    fmt.Println("Error opening port: ", err.Error())
    os.Exit(1)
  }

  defer l.Close()
  for {
    conn, err := l.Accept()
    if err != nil {
      fmt.Println("Error accepting connection: ", err.Error())
      continue
    }
    go handleRequest(conn)
  }
}

// handleRequest reads a single finger query from the connection,
// looks up the matching plan file and writes it back
func handleRequest(conn net.Conn) {
  defer conn.Close()
  currentTime := time.Now()
  buf := make([]byte, 1024)
  reqLen, err := conn.Read(buf)
  fmt.Println(currentTime.Format(time.RFC3339))
  if err != nil {
    fmt.Println("Error reading from: ", err.Error())
  } else {
    fmt.Println("Connection from: ", conn.RemoteAddr())
  }

  request := strings.TrimSpace(string(buf[:reqLen]))
  fmt.Println(request)

  parts := strings.Split(request, " ")
  wide := false
  user := parts[0]

  // a request of the form "/W user" asks for the long output format
  if parts[0] == "/W" && len(parts) == 2 {
    wide = true
    user = parts[1]
  } else if parts[0] == "/W" && len(parts) == 1 {
    // a bare "/W" with no username, nothing to return
    conn.Write([]byte("\r\n"))
    return
  }

  if strings.Index(user, "@") != -1 {
    // a user@host query would need forwarding to another server
    fmt.Println("remote")
    conn.Write([]byte("Forwarding not supported\r\n"))
  } else {
    if wide {
      //TODO
    } else {
      // build the path plans/<user>.plan, using path.Base and
      // path.Clean to stop requests escaping the plans directory
      pwd, err := os.Getwd()
      filePath := path.Join(pwd, "plans", path.Base(user+".plan"))
      filePath = path.Clean(filePath)
      fmt.Println(filePath)
      file, err := os.Open(filePath)
      if err != nil {
        // plan file not found
        conn.Write([]byte("Not Found\r\n"))
      } else {
        defer file.Close()
        io.Copy(conn, file)
        conn.Write([]byte("\r\n"))
      }
    }
  }
}

Rather than deal with the nasty security problems that come with pulling .plan files out of people’s home directories, it uses a directory called plans and loads files that match the pattern <username>.plan

I’ve also built it in a Docker container and mounted a local directory to allow me to edit and add new plan files.

You can test it with finger ben@hardill.me.uk.
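
If you don’t have the finger client to hand, the protocol is simple enough to poke at directly; a minimal Python sketch:

import socket

def finger(user, host, port=79):
    # the whole protocol: send the username followed by CRLF,
    # then read until the server closes the connection
    with socket.create_connection((host, port)) as s:
        s.sendall(user.encode() + b"\r\n")
        chunks = []
        while True:
            data = s.recv(1024)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

print(finger("ben", "hardill.me.uk"))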

Working with multiple AWS EKS instances

I’ve recently been working on a project that uses the AWS EKS managed Kubernetes service.

For various reasons too complicated to go into here we’ve ended up with multiple clusters owned by different AWS Accounts so flipping back and forth between them has been a little trickier than normal.

Here are my notes on how to manage the AWS credentials and the kubectl config to access each cluster.

AWS CLI

The first task is to authorise the AWS CLI to act as the user in question. We do this by creating a user with the right permissions in the IAM console and then exporting the Access key ID and Secret access key values, usually as a CSV file. We then take these values and add them to the ~/.aws/credentials file.

[dev]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy

[test]
aws_access_key_id = AKYYYYYYYYYYYYYYYYYY
aws_secret_access_key = abababababababababababababababababababab

[prod]
aws_access_key_id = AKZZZZZZZZZZZZZZZZZZ
aws_secret_access_key = nmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnm

We can pick which set of credentials the AWS CLI uses by adding the --profile option to the command line.

$ aws --profile dev sts get-caller-identity
{
    "UserId": "AIXXXXXXXXXXXXXXXXXXX",
    "Account": "111111111111",
    "Arn": "arn:aws:iam::111111111111:user/dev"
}

Instead of using the --profile option you can also set the AWS_PROFILE environment variable. Details of all the ways to switch profiles are in the docs here.

$ export AWS_PROFILE=test
$ aws sts get-caller-identity
{
    "UserId": "AIYYYYYYYYYYYYYYYYYYY",
    "Account": "222222222222",
    "Arn": "arn:aws:iam::222222222222:user/test"
}

Now that we can flip easily between different AWS accounts, we can export the EKS credentials with:

$ export AWS_PROFILE=prod
$ aws eks update-kubeconfig --name foo-bar --region us-east-1
Updated context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar in /home/user/.kube/config

The user that created the cluster should also follow these instructions to make sure the new account is added to the cluster’s internal ACL.

Kubectl

If we run the previous command with each profile it will add the connection information for all 3 clusters to the ~/.kube/config file. We can list them with the following command:

$ kubectl config get-contexts
CURRENT   NAME                                                  CLUSTER                                               AUTHINFO                                              NAMESPACE
*         arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   
          arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   
          arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar 

The star is next to the currently active context; we can change the active context with this command:

$ kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar
Switched to context "arn:aws:eks:us-east-1:222222222222:cluster/foo-bar".

Putting it all together

To automate all this I’ve put together a collection of scripts that look like this:

export AWS_PROFILE=prod
aws eks update-kubeconfig --name foo-bar --region us-east-1
kubectl config use-context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar

I then run this with the shell source command, source ./setup-prod (or its shortcut, . ./setup-prod), instead of adding a shebang to the top and running it as a normal script. This is because environment variables set by a normal script go out of scope when it exits; leaving the AWS_PROFILE variable in scope means that the AWS CLI will continue to use the correct account settings when it’s used later while working on this cluster.

Joining FlowForge Inc.

FlowForge Logo

Today is my first day working for FlowForge Inc. I’ll be employee number 2, joining Nick O’Leary to work on all things based around Node-RED, and continuing to contribute to the core open source project.

We should be building on some of the things I’ve been playing with recently.

Hopefully I’ll be able to share some of the things I’ll be working on soon, but in the meantime here is the short post that Nick wrote when he announced FlowForge a few weeks ago, and a post welcoming me to the team.

To go with this announcement, Hardill Technologies Ltd will be going dormant. It’s been a good 3 months and I’ve built something interesting for my client which I hope to see go live soon.

Setting up WireGuard IPv6

I’ve been having a quick play with setting up another VPN solution for getting an IPv6 address on my mobile devices, this time using WireGuard.

WireGuard is a relatively new VPN tunnel implementation that has been written to be as stripped back as possible, keeping the codebase small and making it easier to audit.

Setup

A lot of the instructions for running WireGuard on Raspberry Pi OS talk about adding Debian testing repos or building the code from scratch, but it looks like recent updates have included the packages needed in the core repositories.

# apt-get install wireguard

I set up UDP port forwarding on my router for port 53145 and got my ISP to route another /64 IPv6 subnet to my line; both of these are forwarded on to the Raspberry Pi that is also running my OpenVPN setup. This is useful as it’s already set up to do NAT for the 10.8.0.0/24 range I’m issuing to OpenVPN clients, so having it do it for the 10.9.0.0/24 range for WireGuard is easy enough.

WireGuard on Linux is implemented as a network device driver, so it can be configured on the command line with the ip command, e.g.

# ip link add dev wg0 type wireguard
# ip address add dev wg0 10.9.0.1/24

This brings the device up and sets the IP addresses, but you still need to add the private key, plus the peer’s public key and remote address, which can be done with the wg command:

# wg set wg0 listen-port 53145 private-key /path/to/private-key peer ABCDEF... allowed-ips 0.0.0.0/0 endpoint 209.202.254.14:8172

Or, more easily, it can all be read from a config file:

# wg setconf wg0 myconfig.conf

Or the whole thing can be set up and configured with wg-quick:

# wg-quick up /path/to/wg.conf

Server Config

[Interface]
Address = 10.9.0.1/24, 2001:8b0:2c1:xxx::1/64
ListenPort = 53145
PrivateKey = oP3TAHBctNVcnPTxxxxxxxxzNRLSF5CwII4s8gVAXg=

#nexus
[Peer]
PublicKey = 4XcNbctkGy0s73Dvxxxxxxxxx++rs5BAzCGjYmq21UM=
AllowedIPs = 10.9.0.2/32, 2001:8b0:2c1:xxx::2/128

The Server config includes:

  • Address is the local address on the VPN tunnel; here it has both IPv4 and IPv6.
  • ListenPort is which port to listen on for client connections. WireGuard doesn’t have an officially assigned port.
  • PrivateKey to identify the host.
  • There can be multiple Peer sections, representing the clients that can connect, and AllowedIPs lists the IP addresses for each client.

Client Config

[Interface]
Address = 10.9.0.2/32, 2001:8b0:2c1:4b50::2/128
PrivateKey = UFIJGgtKsor6xxxxxxxxxxxbWeKmw+Bb5ODpyNblEA=
DNS = 8.8.8.8

[Peer]
PublicKey = jMB2oMu+YTKigGxxxxxxxxxxSYcTde/7HT+QlQoZFm0=
AllowedIPs = 0.0.0.0/0, ::0/0
Endpoint = hardill.me.uk:53145

The differences from the Server config are:

  • Interface has a DNS entry for the client to use while the tunnel is running.
  • Peer has an Endpoint, which is the public address and port to connect to.
  • AllowedIPs are the IPs to route over the tunnel; in this case it’s everything.

Key Generation

Both ends of the connection need a PublicKey and a PrivateKey so they can mutually authenticate each other. These are generated with the wg command:

# wg genkey > privateKey
# wg pubkey < privateKey > publicKey

Sharing Config

The WireGuard Android app lets you manually add all the details from the config file, or it can read config files from QR codes. This makes it really easy to set up and removes the chance of a typo in the keys and IP addresses.

You can generate QR codes from the config file as follows:

# qrencode -t png -o nexus.png < nexus.conf
# qrencode -t ansiutf8 < nexus.conf

The first generates a PNG file with the QR code; the second prints the code out as ASCII art.

Conclusion

It all looks to be working smoothly, and I can see the advantage over OpenVPN being that you don’t need to worry about certificate maintenance and distribution.

I’ll give it a proper work out and see how it holds up running things like SIP connections along with general access to my home network.

As well as running it on the phone, I’ll set up a client config for my laptop to use when out and about. The only issue is that the GNOME NetworkManager integration for WireGuard isn’t available in the standard repos for Ubuntu 20.04, so it needs to be started/stopped from the command line.

New Monitor (BenQ EW2780U)

I finally got round to buying myself a proper monitor to use with my laptop at home (I know I’m very late to this party given the current situation of extended working from home).

I’ll be using it with my Dell XPS 13, which only has 2 ports (2 USB-C/Thunderbolt) and these double as the power input as well, so I was looking for a monitor that can both be driven via USB-C and supply power to the laptop via USB-PD.

Having had a bit of a search round and asked for suggestions on Twitter, I found the BenQ EW2780U, which looked to cover all the bases. There was a reasonable looking review from TechRadar. 27″ was a little outside my initial size range, but given how close it will be on my desk and the amount of space I have to play with, it was the right call.

There was a very similar 32″ model (BenQ EW3270U) on Amazon that was even slightly cheaper, but while it had support for video over USB-C, it didn’t support USB-PD to charge/power the laptop.

Technical Specs

  • 27″ Screen
  • 3840 x 2160 pixels
  • 2 HDMI ports (v2.0)
  • 1 DisplayPort (v1.4)
  • 1 USB-C
  • USB-PD up to 60W (note: I don’t think this is enough to charge a MBP)
  • Built-in speakers (these work over HDMI/USB-C)

Setup

I’ve tweaked a few of the out-of-the-box settings.

  • Turned off auto input switching, mainly because it was flipping to the Chromecast whenever the laptop went to sleep or I unplugged it. It’s pretty easy to switch inputs with the buttons on the back.
  • Set it to sleep when the USB-C connection is unplugged.

While the monitor did come with a HDMI cable in the box, I did need to buy a new USB-C cable to use it with the Dell: none of the ones I currently had would support the 4K video signal. This is one of the only downsides of USB-C; all the cables fit in all the devices, but it’s very hard to visually tell them apart as to what spec they support.

I had a problem with Ubuntu 18.04 not liking driving such a big desktop: if I put anything on the new monitor it would occasionally randomly crash the Gnome session, which meant all the open apps also got killed. This led to me actually getting round to the upgrade to Ubuntu 20.04 that I had been putting off, which has fixed the problem, and everything is running smoothly now.

The only thing it’s really missing is a built-in USB hub; then I wouldn’t need to plug a dongle into the remaining USB-C port to give me some USB-A ports.

Google Assistant Sensors

Having built my 2 different LoRa-connected temperature/humidity sensors, I was looking for something other than the Grafana instance that shows the trends.

Being able to ask Google Assistant the temperature in a room seemed like a good idea, and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with: Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following.

The join node is set to combine the 2 incoming MQTT messages into a single object based on their topics. The function node then builds the right payload to pass to the Google Home output node, and finally it feeds through an RBE node just to make sure we only send updates when the data changes.

msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}
return msg;