Building a Kubernetes Test Environment

Over the last couple of weekends I’ve been noodling around with my home lab setup, building a full local environment to test FlowForge with both the Kubernetes and Docker container drivers.

The other reason to put all this together is to help work out the right way to build a proper CI pipeline that can build, automatically test and deploy to our staging environment.

Components

NPM Registry

This is somewhere to push the various FlowForge NodeJS modules so they can then be installed while building the container images for the FlowForge App and the Project Container Stacks.

This is a private registry so that I can push pre-release builds without them slipping out into the public domain, and also so I can delete releases and reuse version numbers, which is not allowed on the public NPM registry.

I’m using the Verdaccio registry as I’ve used it in the past to host custom Node-RED nodes (which it will probably end up doing again in this setup as things move forward). It runs as a Docker container and I use my Nginx instance as a reverse proxy for it.

As well as hosting my private builds, it can proxy for the public npmjs.org registry, which speeds up local builds.
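
As a rough sketch of how this gets used (the hostname and module name here are placeholders, not my real ones), npm is pointed at Verdaccio and pre-release modules are published to it:

# point npm at the private registry and log in
npm config set registry https://npm.example.com
npm login --registry https://npm.example.com

# publish a pre-release build, and remove it again if needed
npm publish --registry https://npm.example.com
npm unpublish my-module@0.9.0-beta.1 --force --registry https://npm.example.com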

Docker Container Registry

This is somewhere to push the Docker containers that represent both the FlowForge app itself and the containers that represent the Project Stacks.

Docker ships a container image that will run a registry.
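
A minimal sketch of standing the registry up and pushing to it (the storage path, port and image names below are illustrative):

# run the registry, persisting images to a host directory
docker run -d --restart=always --name registry \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2

# tag an image with the registry hostname, then push it
docker tag forge:latest registry.example.com/forge:latest
docker push registry.example.com/forge:latest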

As well as the registry, I’m also running a second container with this web UI project to help keep track of what I’ve pushed to the registry; it also allows me to delete tags, which is useful when testing.

Again, my internet-facing Nginx instance proxies for both of these (on the same virtual host, since their routes do not clash and it makes CORS easier as the UI is all browser-side JavaScript).

Helm Chart Repository

This isn’t really needed, as you can generate all the required files with the helm command and host the results on any web server, but it lets me test the whole stack end to end.

I’m using a package called ChartMuseum, which automatically generates the index.yaml manifest file when charts are uploaded via its simple HTTP API.
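
Uploading a packaged chart is a single POST to ChartMuseum’s API, after which the repository can be consumed with the normal helm commands (the hostname and chart name here are placeholders):

helm package ./flowforge                 # produces e.g. flowforge-0.1.0.tgz
curl --data-binary "@flowforge-0.1.0.tgz" https://charts.example.com/api/charts
helm repo add local-charts https://charts.example.com
helm repo update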

Nginx Proxy

All of the previous components have been stood up as virtual hosts on my public Nginx instance so that they can get HTTPS certificates from LetsEncrypt. This makes things a lot easier because both Docker and Kubernetes basically require the container registry to be secured by default.

While it is possible to add exceptions for specific registries, these days it’s just easier to do it “properly” up front.
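
With the virtual hosts in place, issuing the certificates is one certbot run per host (the hostnames here are placeholders for my real ones):

sudo certbot --nginx \
  -d npm.example.com \
  -d registry.example.com \
  -d charts.example.com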

MicroK8s Cluster

And finally I need a Kubernetes cluster to run all this on. In this case I have a 3 node cluster made up of:

  • 2 Raspberry Pi 4s with 8GB of RAM each
  • 1 Intel Celeron based mini PC with 8GB of RAM

All 3 of these are running 64bit Ubuntu 20.04 and MicroK8s. The Intel machine is needed at the moment because the de facto standard PostgreSQL Helm chart only has amd64-based containers, so it won’t run on the Raspberry Pi nodes.

The cluster uses the NFS Persistent Volume provisioner to store volumes on my local NAS so they are available to all the nodes.
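
I’ll cover the details in a later post, but as a rough sketch, assuming the commonly used nfs-subdir-external-provisioner chart (the NAS address and export path below are placeholders):

helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.50 \
  --set nfs.path=/volume1/k8s \
  --set storageClass.defaultClass=true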

Usage

I’ll write some more detailed posts about how I’ve configured each of these components and then how I’m using them.

As well as testing the full Helm chart builds, I can also run the FlowForge app locally, with the Kubernetes Container driver running on my development machine, and have it create Projects in the Kubernetes cluster.

New Daily Driver

I got my first Dell XPS13 back in 2016 and a second one in 2020. I really like them, but with the new job I’ve been using it for both personal and work use. So I decided to grab a second machine to help keep things separate; it’s easier to switch off at the end of the day if I can leave the “work” machine in the office.

Lenovo X1 Carbon

I’ve gone for a Lenovo X1 Carbon. It’s a machine I looked at when I got the second XPS13, as it was another machine that could be ordered with Linux pre-installed. Lenovo are now offering both Ubuntu and Fedora as options. In my case I knew I wouldn’t have any problems installing the OS myself, so I ordered a bare machine and installed Fedora 35. Because I got to do a clean install I could also enable LUKS out of the box to encrypt the drive.

Also, running both a deb and an rpm based distro will help me stay current with both and make testing a little easier without running VMs all the time.

I used to run Fedora when I was at IBM and even worked with the internal team that packaged some of the tools we needed on a day to day basis (like Lotus Notes and the IBM JVM). I decided it would be good to give it a try again especially as Fedora releases tend to move a little quicker than Ubuntu LTS and are more aggressive at picking up new versions.

The main hardware differences from the XPS13 are double the RAM at 32GB and double the storage with a 1TB SSD. The screen is the same resolution but slightly larger, and without a touch screen (not something I make a lot of use of). It also comes with Lenovo’s trademark TrackPoint as well as a trackpad. The CPU is still 4-core (with Hyper-Threading) but the base clock speeds are better (Dell, Lenovo).

The only niggle I’ve found so far is that the USB-C port layout doesn’t work as well on my desk as the XPS13’s. The XPS13 has USB-C ports on both sides of the case, whereas the X1 Carbon only has 2 on the left-hand edge. But it does have a full-sized HDMI port and 2 USB 3.1 A ports, which means I don’t need the little USB-C to USB-A hub I’d been using. This also makes plugging in an SD card reader a little easier, as the Lenovo doesn’t have one built in.

The keyboard feels a little nicer (I’ve just got to get used to the Ctrl and Fn keys being swapped, though there is a BIOS setting to flip them).

Pimoroni Keybow Upgrade

I’ve had a little 3 key Pimoroni Keybow sat on my desk for a while. It was running the same basic config I had set up when I bought it, namely mapping the three buttons to volume down, mute and volume up respectively.

Pimoroni Keybow Mini

While this was useful, it felt like there were better uses for it.

With more and more time being spent in video meetings, having quick shortcuts to mute the mic or toggle the camera on/off sounded like a good idea. But then I wondered if I could find a way to switch the key mapping on the fly.

The key mapping is done by editing a short Lua script, which is stored on the SD card that the Pi Zero holding the Keybow boots from. This means the layout is normally fixed. But the latest version (0.0.4) of the SD card image on the Pimoroni GitHub page added support for starting a USB serial link alongside the HID device used to send the keyboard events. This is exposed in the Lua environment, so I managed to build the following script.

require "keybow"

-- Keybow MINI volume/zoom controls --

function setup()
    keybow.use_mini()
    keybow.auto_lights(false)
    keybow.clear_lights()
    keybow.set_pixel(0, 0, 255, 255)
    keybow.set_pixel(1, 255, 0, 255)
    keybow.set_pixel(2, 0, 255, 255)
end

-- Key mappings --

state = 'zoom'

function handle_minikey_02(pressed)
    if state == 'zoom' then
        if pressed then
            -- Alt+V toggles the camera in Zoom
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_DOWN)
            keybow.tap_key("v")
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_UP)
        end
    elseif state == 'media' then
        keybow.set_media_key(keybow.MEDIA_VOL_UP, pressed)
    end
end

function handle_minikey_01(pressed)
    if state == 'zoom' then
        if pressed then
            -- Alt+A toggles mute in Zoom
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_DOWN)
            keybow.tap_key("a")
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_UP)
        end
    elseif state == 'media' then
        keybow.set_media_key(keybow.MEDIA_MUTE, pressed)
    end
end

function handle_minikey_00(pressed)
    if state == 'zoom' then
        if pressed then
            -- Alt+N cycles through the available cameras in Zoom
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_DOWN)
            keybow.tap_key("n")
            keybow.set_modifier(keybow.LEFT_ALT, keybow.KEY_UP)
        end
    elseif state == 'media' then
        keybow.set_media_key(keybow.MEDIA_VOL_DOWN, pressed)
    end
end

local function isempty(s)
  return s == nil or s == ''
end

function tick()
    local line
    line = keybow_serial_read()
    if not isempty(line) then 
        -- keybow_serial_write( line .. "\n" )
        if line == 'zoom' then
            keybow.clear_lights()
            keybow.set_pixel(0, 0, 255, 255)
            keybow.set_pixel(1, 255, 0, 255)
            keybow.set_pixel(2, 0, 255, 255)
            state = 'zoom'
        elseif line == 'media' then
            keybow.clear_lights()
            keybow.set_pixel(0, 255, 0, 255)
            keybow.set_pixel(1, 0, 255, 255)
            keybow.set_pixel(2, 255, 0, 255)
            state = 'media'
        end
    end

end

The serial port shows up as /dev/ttyACM0 on my laptop, so I’m toggling between the 2 modes with echo media > /dev/ttyACM0 and echo zoom > /dev/ttyACM0.

In media mode it works exactly the same as before, but in zoom mode it toggles the camera on/off, toggles mute on/off and cycles through the available cameras.

This worked, but the keybow_serial_read() call added 1 second of latency to each call to the tick() function, which really wasn’t great as it made it possible to miss key presses.

A bit of digging in the git repo turned up the file that implements the serial access, and this bit of code:

int serial_open(){
    if(port_fd > -1) return 0;

    port_fd = open(KEYBOW_SERIAL, O_RDWR);

    if(port_fd > -1){
        printf("Open success\n");
        tcgetattr(port_fd, &termios);
        termios.c_lflag &= ~ICANON;
        termios.c_cc[VTIME] = 10;
        termios.c_cc[VMIN] = 0;
        tcsetattr(port_fd, TCSANOW, &termios);
    }
    return 0;
}

The termios.c_cc[VTIME] = 10; was what was causing the delay. I rebuilt the library, changing the value to 1 and then 0. The value is in deciseconds (tenths of a second).

With 1 the delay was cut to a tenth of a second, which was OK, but it meant you had to be very deliberate in pushing the button to make sure the press didn’t get missed, which with a mute toggle is a little risky.

With 0 it worked perfectly.

The script also changes the backlight colour for the keys based on mode so I can see which is active. It should be possible to add more modes as needed.

Next up is to see if I can script toggling the mode based on whether Zoom is the currently active window. It looks like it should be possible with tools like xprop or xdotool.
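
As a first pass, something like this little script should do it (an untested sketch; the window title match and the polling approach are assumptions):

#!/bin/bash
# Poll the active window title and switch the Keybow mode to match
PORT=/dev/ttyACM0
LAST=""
while true; do
    WIN=$(xdotool getactivewindow getwindowname 2>/dev/null)
    if [[ "$WIN" == *"Zoom"* ]]; then
        MODE=zoom
    else
        MODE=media
    fi
    if [[ "$MODE" != "$LAST" ]]; then
        echo "$MODE" > "$PORT"
        LAST="$MODE"
    fi
    sleep 1
done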

D11 Label Printer

A couple of weeks ago I was rearranging my collection of Raspberry Pis that live in the attic (a Kubernetes cluster, LoRa gateway and a few other things) and I was having problems remembering exactly which was which, as a few of them have the same Pimoroni Pibow case in the same colours. I decided it was time to actually label them all to make life a little easier.

My first thought was a Dymo device, but I decided that I didn’t want one of the original plastic tape embossing machines, and the newer printers get expensive quickly. I did go down a rabbit hole around Brother label printers that come with Linux printer drivers, but decided I didn’t actually need that level of support.

I ended up grabbing a D11 Bluetooth Thermal Label Printer from Amazon. It comes with a roll of 12×40 mm stickers and can print text, numbers, emoji, barcodes or QR codes. There are many different sized stickers available, with a selection of borders, background prints or transparent stock, and even a glow in the dark version.

The rolls have NFC tags to identify the size and type of stickers currently installed in the printer, and the app picks this up when you try to print.

It uses an Android (and iOS) app to create the labels and send them to the printer. The app is pretty intuitive; the only slight niggle is that if you want to save pre-built layouts you need to sign up for an online account. This is not a problem if you are just doing one-off labels for different things.

Summary

It has solved the problem I bought it for. We will see how the thermal paper holds up over time, but most of the labels are on the underside of the Pibow cases so they should be out of direct light most of the time.

FlowForge v0.1.0

So it’s finally time to talk a bit more about what I’ve been up to for the last few months since joining FlowForge Inc.

The FlowForge platform is a way to manage multiple instances of Node-RED at scale and to control user access to those instances.

The platform comes with 3 different backend drivers

  • LocalFS
  • Docker Compose
  • Kubernetes

LocalFS

This is the driver to use for evaluating the platform, or as a home user that doesn’t want all the overhead required by the other 2 drivers. It starts Projects (Node-RED instances) as separate processes on the same machine, running each one on a separate port, and keeps state in a local SQLite database.

Docker Compose

This version is a little more complicated: it uses the Docker runtime to start containers for the FlowForge runtime, a PostgreSQL database and an Nginx reverse proxy. Each Project lives in its own container and is accessed via a unique hostname prepended to a supplied domain. This can still run on a single machine (or multiple if Docker Swarm mode is used).

Kubernetes

This is the whole shebang. Similar to Docker Compose, the FlowForge platform runs in containers and the Projects end up in their own containers, but the Kubernetes platform provides more ways to manage the resources behind the containers and to scale to even bigger deployments.

Release

Today we have released version 0.1.0 and made all the GitHub projects public.

The initial release is primarily focused on getting the core FlowForge platform out there for feedback and we’ve tried to make the LocalFS install experience as smooth as possible. There are example installers for the Docker and Kubernetes drivers but the documentation around these will improve very soon.

You can read the official release announcement here, which has a link to the installer and also includes a walk-through video.

Determining which Linux Distro you are on to install NodeJS

I’ve recently been working on an install script for a project. As part of the install I need to check if there is a suitable version of NodeJS installed and if not install one.

The problem is that there are 2 main ways in which NodeJS can be installed using the default package management systems of the different Linux distributions. So I needed a way to work out which distro the script was running on.

The first step was to work out if it is actually Linux or if it’s OSx. Since I’m using bash as the interpreter for the script, there is the OSTYPE environment variable that I can check.

case "$OSTYPE" in
  darwin*) 
    MYOS=darwin
  ;;
  linux*)
    MYOS=$(cat /etc/os-release | grep "^ID=" | cut -d = -f 2 | tr -d '"')
  ;;
  *) 
    # unknown OS
  ;;
esac

Once we are sure we are on Linux, we can check the /etc/os-release file and cut out the ID= entry. The tr is there to strip the quotes off (Amazon Linux, I’m looking at you…).
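
For example, on Ubuntu the value comes out clean, while Amazon Linux quotes it:

$ grep "^ID=" /etc/os-release    # on Ubuntu
ID=ubuntu
$ grep "^ID=" /etc/os-release    # on Amazon Linux
ID="amzn"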

MYOS then contains one of the following:

  • debian
  • ubuntu
  • raspbian
  • fedora
  • rhel
  • centos
  • amzn

And using this we can then decide how to install NodeJS:

if [[ "$MYOS" == "debian" ]] || [[ "$MYOS" == "ubuntu" ]] || [[ "$MYOS" == "raspbian" ]]; then
      curl -sSL "https://deb.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo apt-get install -y nodejs build-essential
elif [[ "$MYOS" == "fedora" ]]; then
      sudo dnf module reset -y nodejs
      sudo dnf module install -y "nodejs:$MIN_NODEJS/default"
      sudo dnf group install -y "C Development Tools and Libraries"
elif [[ "$MYOS" == "rhel" ]] || [[ "$MYOS" == "centos" || "$MYOS" == "amzn" ]]; then
      curl -fsSL "https://rpm.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo yum install -y nodejs
      sudo yum group install -y "Development Tools"
elif [[ "$MYOS" == "darwin" ]]; then
      echo "**************************************************************"
      echo "* On OSx you will need to manually install NodeJS            *"
      echo "* Please install the latest LTS release from:                *"
      echo "* https://nodejs.org/en/download/                            *"
      echo "**************************************************************"
      exit 1
fi

Now that’s out of the way, it’s time to look at how to nicely set up a Systemd service…

Debugging Node-RED nodes with VS Code

A recent Stack Overflow post had me looking at how to run Node-RED under VS Code to debug custom nodes. Since I’d not tried VS Code before (I tend to use Sublime Text 4 as my day to day editor) I thought I’d give it a go and see if I could get it working.

We will start with a really basic test node as an example. This just prints the content of msg.payload to the console for any message passing through.

test.js

module.exports = function(RED) {
    function test(n) {
        RED.nodes.createNode(this,n)
        const node = this
        node.on('input', function(msg, send, done){
            send = send || function() { node.send.apply(node,arguments) }
            console.log(msg.payload)
            send(msg)
            done()
        })
    }
    RED.nodes.registerType("test", test)
}

test.html

<script type="text/html" data-template-name="node-type">
</script>

<script type="text/html" data-help-name="node-type">
</script>

<script type="application/javascript">
    RED.nodes.registerType('test',{
        category: 'test',
        defaults: {},
        inputs: 1,
        outputs: 1,
        label: "test"
    })
</script>

package.json

{
  "name": "test",
  "version": "1.0.0",
  "description": "Example node-red node",
  "keywords": [
    "node-red"
  ],
  "node-red": {
    "nodes": {
      "test": "test.js"
    }
  },
  "author": "ben@example.com",
  "license": "Apache-2.0"
}

Setting up

All three files mentioned above are placed in a directory and then the following steps are followed:

  • In the Node-RED userDir (normally ~/.node-red on a Linux machine) run the following command to create a symlink in the node_modules directory. This will allow Node-RED to find and load the node.
    npm install /path/to/test/directory
  • Add the following section to the package.json file
...
  ],
  "scripts": {
    "debug": "node /usr/lib/node_modules/node-red/red.js"
  },
  "node-red": {
...

Where /usr/lib/node_modules/node-red/red.js is the output of readlink -f `which node-red`.

You can then add a breakpoint to the code

View of node's javascript code with break point set on line 7

And then start Node-RED by clicking on the Play button just above the scripts block.

view of node's package.json with play symbol and Debug above the scripts block

This will launch Node-RED with the debugger attached and stop when the breakpoint is hit. You can also set the debugger to stop the application on exceptions, filtering on whether they are caught or not.

This even works when using VS Code’s remote capabilities for editing, running and debugging projects on remote machines. I’ve tested this running over SSH to a Raspberry Pi Zero 2 W (which is similar to the original Stack Overflow question, as they were trying to debug nodes working with the Pi’s GPIO system). The only change I had to make on the Pi was to increase the default swap file size from 100MB to 256MB, as fitting the VS Code remote agent and Node-RED into 512MB of RAM is a bit of a squeeze.
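
Assuming the Pi is running Raspberry Pi OS, where the swap file is managed by dphys-swapfile, that change is just:

sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=256/' /etc/dphys-swapfile
sudo systemctl restart dphys-swapfile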

I might give VS Code a go as my daily driver in the new year.

Test Certificates for localhost

While answering a couple of Stack Overflow questions recently I needed to create some certificates to use with localhost, so I thought I’d record the steps so I would have something to link to next time.

Generate CA cert

$ openssl genrsa -out ca.key 2048
$ openssl req -new -x509 -days 365 -key ca.key \
  -subj "/C=GB/ST=Gloucestershire/O=localhost CA/CN=localhost Root CA" \
  -out ca.pem

Generate Server cert

$ openssl req -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/C=GB/ST=Gloucestershire/O=Localhost CA/CN=localhost" \
  -out server.csr
$ openssl x509 -req \
  -extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1,IP:::1") \
  -days 365 -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out server.pem

The outputs are

  • ca.key the private key for the CA
  • ca.pem the CA certificate
  • server.key the private key for the server
  • server.pem the certificate for the server

Traditionally the certificate Subject’s CN value has contained the hostname of the machine the certificate represents. But the spec doesn’t actually assign any specific meaning to this field, and its use for hostnames was deprecated as part of RFC 2818.

v3 of the X.509 spec adds an extension for storing hostnames and IP addresses called Subject Alternative Names (SANs). The last command in the instructions adds SANs for the hostname localhost and the IP addresses 127.0.0.1 and ::1. This means it should be valid for all the usual ways of accessing localhost.
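
You can check what ended up in the certificate and that it validates against the CA with something like the following (the -ext flag needs OpenSSL 1.1.1 or later):

$ openssl x509 -in server.pem -noout -ext subjectAltName
X509v3 Subject Alternative Name:
    DNS:localhost, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
$ openssl verify -CAfile ca.pem server.pem
server.pem: OK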

The Linear Clock Ticks Again

I’ve had a background project ticking over slowly for a number of years.

Last year I designed and had built a number of PCBs to be used as HATs for a Raspberry Pi Zero. They included an RTC and a terminal block to attach the LED strip.

I did say that I would write another post when the boards were delivered and I had assembled the first prototype. Unfortunately I had made a small but critical mistake when designing the boards: I slightly messed up the package size for the RTC, so it wasn’t possible to assemble the boards correctly. I didn’t get round to re-doing the PCB layout with the correct sized parts, so the whole thing just sat for a while.

In the meantime the Raspberry Pi Foundation released a new product, the Raspberry Pi Pico, based on the RP2040 chip. As well as the Pico, they are also making the RP2040 chip available for other folk to include directly in their own projects.

Pimoroni have created a number of different boards, and their latest is the Plasma 2040, which is specifically designed to drive LED strips.

B.O.M.

Assembly

  • Solder the RTC on to the breakout section of the Plasma 2040. The terminals are labelled, so just make sure you match up the pins; I used the headers that came with the RTC and arranged it so the breakout sat over the top of the Plasma 2040.
  • Loosen the screw terminals for the connections marked 5V, DA and -. Insert the red wire of the adapter into 5V, the green wire into DA and the white wire into -.
  • Clip the LED strip to the end of the adapter.

Plasma 2040

Code

When you first attach the Plasma 2040 to your computer it will show up as a USB flash drive, so you can install the runtime. In this case we’ll be using the Pimoroni MicroPython build, which comes with support for the board. You can grab a version from the releases page on GitHub here. Once downloaded, copy it into the root of the drive. When the copy has finished the board will reboot and be ready to run Python code.
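
On Linux that copy is just (the filename and mount point below are illustrative; the RP2040 bootloader volume shows up as RPI-RP2):

cp pimoroni-plasma2040-*.uf2 /media/$USER/RPI-RP2/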

You can use the Thonny IDE to both write and push code to the device. You will need at least version 3.3.3 to support the Plasma2040.

The first version of the code was as follows:

import plasma
from plasma import plasma2040
from pimoroni import RGBLED, Button
import time

NUM_LEDS = 60
LOW = 32
MED = 64
HIGH = 128
BRIGHTNESS = [LOW,MED,HIGH]
BRIGHTNESS_LEVEL = 0

button_brightness = Button(plasma2040.BUTTON_A)

led = RGBLED(plasma2040.LED_R, plasma2040.LED_G, plasma2040.LED_B)
led.set_rgb(0, 0, 0)
led_strip = plasma.WS2812(NUM_LEDS, 0, 0, plasma2040.DAT)

led_strip.start()

while True:
    RED = [0]*NUM_LEDS
    GREEN = [0]*NUM_LEDS
    BLUE = [0]*NUM_LEDS
    t = time.localtime()

    hour = (t[3] % 12) * 5
    #Hours
    RED[hour] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 1] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 2] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 3] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    RED[hour + 4] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    #Mins
    GREEN[t[4]] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    #Secs
    BLUE[t[5]] = BRIGHTNESS[BRIGHTNESS_LEVEL]
    
    #set the LEDS
    for i in range (NUM_LEDS):
        led_strip.set_rgb(i, RED[i], GREEN[i], BLUE[i])
    
    #change brightness
    if button_brightness.read():
        BRIGHTNESS_LEVEL += 1
        BRIGHTNESS_LEVEL %= 3
    
    time.sleep(1)
 

This works well when triggered from Thonny, as Thonny syncs the laptop’s time to the RP2040 each time it connects. But when the clock is powered by a USB power supply or a battery, it starts at 00:00:01 on Jan 1st 2021 and has no way to be updated to the current time.

This is why we need the RTC module: it keeps track of the time while the clock is powered down.

The code also has a way to change the brightness: pressing the A button cycles through 3 different brightness levels.

Setting the RTC Time

With a little bit of playing I worked out how to sync the RTC to the current time from the Thonny console:

>>> from pimoroni_i2c import PimoroniI2C
>>> from breakout_rtc import BreakoutRTC
>>> import time
>>> PINS_PLASMA = {"sda": 20, "scl": 21}
>>> i2c = PimoroniI2C(**PINS_PLASMA)
>>> rtc = BreakoutRTC(i2c)
>>> rtc.set_unix(time.time())
>>> rtc.set_time(54,18,17,6,18,9,2021)
True
>>> rtc.update_time()
True
>>> print(rtc.string_time())
17:18:54
>>> rtc.set_backup_switchover_mode(3)

The most important line is the last one, which enables the battery backup for the RTC so it remembers the time you just set.

I was going to use the rtc.set_unix() function and pass in time.time(), but it appears that the unix timestamp is maintained independently of the “real” time on the RTC.

The set_time() function takes values in the order:

  • seconds (0-59)
  • minutes (0-59)
  • hours (0-23)
  • day of the week (1-7 -> Mon-Sun)
  • day of month (1-31)
  • month (1-12)
  • year (2000-2099)

With the RTC set correctly, a small update to the code to read from the RTC rather than the time object and we are good to go.

import plasma
from plasma import plasma2040
from pimoroni import RGBLED, Button
from pimoroni_i2c import PimoroniI2C
from breakout_rtc import BreakoutRTC
import time

PINS_PLASMA = {"sda": 20, "scl": 21}

i2c = PimoroniI2C(**PINS_PLASMA)
rtc = BreakoutRTC(i2c)

if rtc.is_12_hour():
    rtc.set_24_hour()

if rtc.update_time():
    print(rtc.string_time())
    print(rtc.string_date())

NUM_LEDS = 60
LOW = 32
MED = 64
HIGH = 128
BRIGHTNESS = [LOW,MED,HIGH]
BRIGHTNESS_LEVEL = 0

button_brightness = Button(plasma2040.BUTTON_A)

led = RGBLED(plasma2040.LED_R, plasma2040.LED_G, plasma2040.LED_B)
led.set_rgb(0, 0, 0)
led_strip = plasma.WS2812(NUM_LEDS, 0, 0, plasma2040.DAT)

led_strip.start()

rtc.enable_periodic_update_interrupt(True)

while True:
    RED = [0]*NUM_LEDS
    GREEN = [0]*NUM_LEDS
    BLUE = [0]*NUM_LEDS

    if rtc.read_periodic_update_interrupt_flag():
        rtc.clear_periodic_update_interrupt_flag()
         
        rtc.update_time()
        hour = (rtc.get_hours() % 12) * 5
        RED[hour] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 1] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 2] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 3] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        RED[hour + 4] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        GREEN[rtc.get_minutes()] = BRIGHTNESS[BRIGHTNESS_LEVEL]
        BLUE[rtc.get_seconds()] = BRIGHTNESS[BRIGHTNESS_LEVEL]

        for i in range (NUM_LEDS):
            led_strip.set_rgb(i, RED[i], GREEN[i], BLUE[i])
        
        if button_brightness.read():
            BRIGHTNESS_LEVEL += 1
            BRIGHTNESS_LEVEL %= 3
    
    time.sleep(1)

2021 Edition

Next Steps

There are a few things to do next. The first is to build a case for the clock; I’m thinking about something made up of layers of thin plywood with a channel for the LED strip, and maybe a layer of smoked/matt acrylic to act as a diffuser.

The second part is to work out a way to handle DST. MicroPython doesn’t support timezones, as the database needed to keep track of all the different timezones takes up a huge amount of space. I could hard-code the dates for my location, but I’ll probably just make use of the B button to toggle an hour’s difference on/off.

Optionally I might add another 31-LED strip (probably at 30 LEDs/meter) to be used as a calendar, showing the current month with markers for weekends and the current day.

Another option is to use 4 of these to build a 60 LED ring for something a little more conventionally shaped.

And the final extra hack is to daisy chain a light level sensor (e.g. one of these) on top of the RTC and dynamically adjust the brightness based on ambient light levels.

I’ll also probably keep tinkering with the Raspberry Pi Zero W version, as that will allow OAuth to link to things like Google Calendar to show meetings in the clock view and add holidays to the calendar view. It will also have access to the full timezone database and NTP for time syncing over the network.

Router swap

With all the working from home over the last 18 months, and the fact I now work for a 100% remote company, I decided it was time to have another look at my home broadband setup.

I currently have a FTTC install supplied by A&A which tops out at about 60/15, and while a FTTP setup would be nice I’ll have to wait until Openreach get their finger out and actually fully enable my exchange (a recent new build development is already full fibre, but the existing properties will have to wait).

The line has been pretty reliable, but if I’m going to be relying on it all the time I decided it was time to add some backup capability, so I opted for an LTE/4G link (no 5G out here in the sticks yet either).

I already had an LTE USB stick, but the Ubiquiti EdgeRouter X I was running didn’t have a USB port, so I looked at putting the stick in a Pi and adding a second, low priority default route via the Pi. This worked, but it meant I lost IPv6 (finding a UK cell provider that will offer IPv6 on Pay&Go is a problem I’ve looked at before) and others wouldn’t be able to reach my web server or the other services I host at home. I’ll cover the 4G network provision later.

A&A offer an L2TP service which can route the fixed IPv4 and IPv6 ranges over any connection if the main line is down for any reason. This can easily run over an LTE connection, but it does have one slight niggle: if the L2TP tunnel is running at the same time as the FTTC line, it takes priority, which means it should only be started when the FTTC line goes down.

The EdgeRouter X only supports L2TP tunnels when paired with IPsec, so it can’t easily be used with this option. I could run something like xl2tpd on the Pi with the LTE USB stick, but then I would need a way to trigger it on the Pi when the PPPoE link goes down on the EdgeRouter. All of this, combined with Ubiquiti’s apparent pulling back from the EdgeRouter line as they focus more on their Dream Machine range, meant I thought I’d see what else was available.

MikroTik

If you poke around the parts of the internet where people talk about Ubiquiti kit, they also mention MikroTik and RouterOS, so I thought I’d have a look and see what was on offer.

MikroTik hEX S router

The closest match to the EdgeRouter X looked to be the MikroTik hEX S. It has the same 5 Gigabit Ethernet ports, can be PoE powered, and also has a USB port and an SFP port in case I ever want to add fibre support.

I already had a Huawei E3372-200 LTE stick to plug into the side. This supports up to 150Mbps connections and has connectors for external antennas if needed to get the best signal. I also grabbed a 90° USB adapter, because everybody knows that USB sticks work better when pointed straight up.

Router & Switch

I plugged the hEX S into my desktop ISP setup to work out how to configure it and play with some of the settings.

There are 3 ways to configure most RouterBoard/RouterOS devices:

  • Winbox – a native Windows application (it can be run under Wine on Linux and OSx)
  • WebFig – a web interface
  • Console/SSH – a command line interface

I’ve not tried Winbox; I did most of the setup via the console interface, but I used WebFig to check things. Most of the time WebFig works just fine, but occasionally it would throw JavaScript errors. I’m hoping most of this is down to the fact I had to install a 7.1 release candidate build to get the LTE stick to work properly. I’ll check back once 7.1 gets a proper release.

Using the console I managed to set up the LAN IP address range and DHCP server, and pre-reserved all the static IP addresses to match my old setup.

Getting the port forwarding and hairpin NAT set up was a little more challenging than on the EdgeRouter, but I have something that looks to behave the same for everything I had set up before.

I set the LTE device to be always on, but with a static route to the L2TP endpoint and a script that runs when the PPPoE device goes up or down. When the PPPoE goes down it connects the L2TP client, and it disconnects it when the PPPoE device comes back up. The easiest way to test this is to unplug the ethernet cable between the router and the modem running in bridge mode.

Cellular Contract

The next question was which mobile data plan to use. This is meant to be a fallback, so I don’t really want to be paying for a monthly contract and then not using it, which means I’m looking for a Pay & Go SIM card. I also want a plan where any credit has the longest possible lifetime. Luckily Terence Eden had recently collated a list of the best deals for this kind of data SIM. It looks like the Three 24GB or the matching Vodafone 24GB plan are the best fit.

I opted for the Three SIM as I have reasonable coverage at home; it comes with 24GB pre-loaded and lasts for up to 2 years (unlike a lot of the others that expire every month). Its list price at the time of writing is £44.99; I got mine for £39.96, but it’s been as low as £31.29 on offer recently.

Next

At the moment the router only fails over if the PPPoE connection goes down. It would be nice to also detect the case where the PPPoE link stays up but traffic stops flowing, and change over then. The challenge there is knowing when to switch back, since the L2TP tunnel takes priority. I’ll have to think about that one.