Contributing to Upstream

Over the last couple of weeks I’ve run into a few little bugs/niggles in Open Source components we are using at FlowForge. As a result I have been opening Pull Requests and Feature Requests against those projects to get them fixed.


First up was a really small change to the Mosquitto MQTT broker for when using MQTT over WebSockets. In FlowForge v0.8.0 we added support for using MQTT to communicate between different projects in the same team, and also as a way for devices to send state back to the core Forge App. To support this we have bundled the Mosquitto MQTT broker as part of the deployment.

When running on Docker or Kubernetes we use MQTT over WebSockets and the MQTT broker is exposed via the same HTTP reverse proxy as the Node-RED projects. I noticed that in the logs all the connections were coming from the IP address of the HTTP reverse proxy, not the actual clients. This makes sense because, as far as mosquitto is concerned, the proxy is the source of the connection. To work around this the proxy usually adds an HTTP header with the original client’s IP address, along the lines of the following (the address here is just an example):
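
    X-Forwarded-For: 192.0.2.15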


Web applications normally have a flag you can set to tell them to trust the proxy to add the correct value, and to substitute this IP address in any logging. Mosquitto uses the libwebsockets library to handle the setup of the WebSocket connection, and this library supports exposing this HTTP header when a new connection is created.

I submitted a Pull Request (#2616) to allow mosquitto to make use of this feature, which adds the following code in src/websockets.c:

    if (lws_hdr_copy(wsi, ip_addr_buff, sizeof(ip_addr_buff), WSI_TOKEN_X_FORWARDED_FOR) > 0) {
        /* Use the client address supplied by the proxy in the X-Forwarded-For header */
        mosq->address = mosquitto__strdup(ip_addr_buff);
    } else {
        /* Fall back to the address of the remote end of the socket */
        easy_address(lws_get_socket_fd(wsi), mosq);
    }

This will use the HTTP header value if it exists, or fall back to the remote socket address if not.

Roger Light reviewed and merged this for me pretty much straight away and then released mosquitto v2.0.15, which was amazing.


To secure our mosquitto broker we make use of the mosquitto-go-auth plugin, which allows us to dynamically create users and ACL entries as we add/remove projects or users from the system. To make life easy this project publishes a Docker container with mosquitto, libwebsockets and the plugin all pre-built and set up to work together.

I had earlier run into a problem with the container not always working well with MQTT over WebSocket connections. This turned out to be down to the libwebsockets instance in the container not being compiled with a required flag.

To fix this I submitted a Pull Request (#241) that:

  • Updated the version of libwebsockets built into the container
  • Added the missing compile-time flag when building libwebsockets (see the note below)
  • Bumped the mosquitto version to the just-released v2.0.15
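
For reference, mosquitto’s documentation notes that libwebsockets needs to be built with the LWS_WITH_EXTERNAL_POLL option for the broker to work properly, and I believe this was the flag in question. The libwebsockets build step in the container needs something along these lines (the second flag is just illustrative):

$ cmake .. -DLWS_WITH_EXTERNAL_POLL=ON -DCMAKE_BUILD_TYPE=Release
$ make && make install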

Again the project maintainer was really responsive and got the Pull Request turned round and released in a couple of days.

This means the latest version of the plugin container works properly with MQTT over WebSocket connections and will log the correct IP address of the connecting clients.


As I mentioned earlier, I spotted the problem with the client IP addresses while looking at the logs from the MQTT broker of our Kubernetes deployment. To gather the logs we make use of Grafana Loki. Loki gathers the logs from all the different components and these can then be interrogated from the Grafana front end.

To deploy Grafana and Loki I made use of the Helm charts provided by the project. This means that all the required parts can be installed and configured with a single command and a file containing the specific customisations required.

One of the customisations is setting up a Kubernetes Ingress entry for the Grafana frontend to make it accessible outside the cluster. The documentation says that you can pass a list of hostnames using the grafana.ingress.hosts key. If you just set this then most things work properly until you start to build dashboards based on the logs, at which point you get some strange behaviour where you are redirected to http://localhost, which doesn’t work. This is because, to get the redirects to work correctly, you also need to pass the hostname in the grafana."grafana.ini".server.domain setting.
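
As a sketch, a customisation file covering both places might look like this (the hostname is just an example):

grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com
  grafana.ini:
    server:
      domain: grafana.example.com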

In order to get this to work cleanly you have to know that you need to pass the same hostname in two different places. To try and make this a little simpler for the default basic case I have submitted a PR (#1689) that will take the first entry in the grafana.ingress.hosts list and use it for the value of grafana."grafana.ini".server.domain if there isn’t one provided.
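
The idea in the chart template is roughly the following; this is a hypothetical sketch rather than the actual PR code:

{{- /* use the configured domain if set, else fall back to the first ingress host */ -}}
{{- $domain := dig "grafana.ini" "server" "domain" (first .Values.grafana.ingress.hosts) .Values.grafana }}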

Many thanks to Marcus Noble for helping me with the Go Templating needed for this.

This PR is currently undergoing review and hopefully will be merged soon.


And finally, this isn’t a code contribution, but I opened a feature request against Weaveworks’ eksctl command. This is a tool to create and manage Kubernetes clusters on AWS.

A couple of weeks ago we received a notice from AWS that one of the EC2 machines that makes up the FlowForge Cloud instance was due to be restarted as part of some EC2 planned maintenance.

We had a couple of options as to how to approach this:

  1. Just leave it alone; the nodegroup would automatically replace the node when it was shut down, but this would lead to downtime while the new EC2 instance was spun up.
  2. Add an extra node to the impacted nodegroup and migrate the running workload to the new node once it was fully up and running.

We obviously went with option 2, but this left us with an extra node in the group after the maintenance. eksctl has options to reduce the group size but it doesn’t come with a way to say which node should be removed, so there was no way to be sure that the (new) node with no pods scheduled would be the one removed.

I asked a question on AWS’s re:Post forum as to how to remove a specific node and got details of how to do this with the awscli tool. While this works it would be really nice to be able to do it all from eksctl as a single point of contact.
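
For reference, the suggested awscli approach looks something like this (the instance ID is illustrative); it terminates a specific instance and reduces the group’s desired capacity at the same time:

$ aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id i-0123456789abcdef0 \
    --should-decrement-desired-capacity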

I’ve raised a feature request (#5629) asking if this can be added and it’s currently awaiting review and hopefully design and planning.

Home MicroK8s Cluster

I started to write about my home test environment for FlowForge a while ago. Having just had to rebuild my K8s cluster due to a node failure, I thought I should come back to this and document how I set it up (as much for next time as to share).

Cluster Nodes

I’m using the following nodes:

  • An Intel Celeron based machine (the master)
  • 2 Raspberry Pi 4s

Base OS

I’m working with Ubuntu 20.04 as this is the default OS of choice for MicroK8s and it’s available for both x86_64 and arm64 (for the Raspberry Pi 4).

Installing MicroK8s

$ sudo snap install microk8s --classic --channel=1.24
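
Once the snap has installed you can check the node is up with:

$ microk8s status --wait-ready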

Once deployed on all 3 nodes, we need to pick one of the nodes as the manager. In this case I’m using the Intel Celeron machine as the master, and will run the following on it:

$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join

And then on the other 2 nodes run the following:

$ microk8s join --worker

You can verify the nodes are joined to the cluster with:

$ microk8s.kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
kube-two     Ready    <none>   137m   v1.24.0-2+f76e51e86eadea
kube-one     Ready    <none>   138m   v1.24.0-2+f76e51e86eadea
kube-three   Ready    <none>   140m   v1.24.0-2+59bbb3530b6769

Once the nodes are added to the cluster we need to enable a bunch of plugins; on the master node run:

$ microk8s enable dns:192.168.1.xx ingress helm helm3

dns:192.168.1.xx overrides the default of using Google’s DNS server to resolve names outside the cluster. This is important because I want it to point to my local DNS as I have set *.flowforge.loc and *.k8s.loc to point to the cluster IP addresses for Ingress.

Install kubectl and helm

By default MicroK8s ships with a bunch of tools baked in; these include kubectl and helm, which can be accessed as microk8s.kubectl and microk8s.helm respectively.


Instructions for installing standalone kubectl can be found here. Once installed you can generate the config by running the following on the master node:

$ microk8s config > ~/.kube/config

This can be copied to other machines that you want to be able to administrate the cluster.
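
For example, something like this (the user and host are illustrative):

$ scp ~/.kube/config user@workstation:~/.kube/config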


Instructions for installing standalone helm can be found here.

This will make use of the same ~/.kube/config credentials file as kubectl.

NFS Persistent Storage

In order to have a consistent persistent storage pool across all 3 nodes I’m using an NFS share from my NAS. This is controlled using the nfs-subdir-external-provisioner, which creates a new directory on the NFS share for each volume created.

All the nodes need to have the NFS client tools installed; this can be achieved with:

$ sudo apt-get install nfs-common

This is deployed using helm:

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.xx \
    --set nfs.path=/volume1/kube

To set this as the default StorageClass run the following:

$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
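
To check the provisioner is working you can create a small test claim; since the class patched above is now the default, no storageClassName is needed (the name and size here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

A new directory for the volume should then appear on the NFS share under /volume1/kube.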


That is enough for the basic Kubernetes cluster setup. There are some FlowForge specific bits that are needed (e.g. tagging nodes) but I’ll leave that for the FlowForge on Kubernetes install docs (which I have to finish writing before the next release).

FlowForge v0.1.0

So it’s finally time to talk a bit more about what I’ve been up to for the last few months since joining FlowForge Inc.

The FlowForge platform is a way to manage multiple instances of Node-RED at scale and to control user access to those instances.

The platform comes with 3 different backend drivers:

  • LocalFS
  • Docker Compose
  • Kubernetes


LocalFS

This is the driver to use for evaluating the platform, or for a home user that doesn’t want to install all the overhead that is required for the other 2 drivers. It starts Projects (Node-RED instances) as separate processes on the same machine and runs each one on a separate port. It keeps state in a local SQLite database.

Docker Compose

This version is a little more complicated: it uses the Docker runtime to start containers for the FlowForge runtime, a PostgreSQL database and an Nginx reverse proxy. Each Project lives in its own container and is accessed via a unique hostname prepended to a supplied domain. This can still run on a single machine (or multiple if Docker Swarm mode is used).


Kubernetes

This is the whole shebang. Similar to Docker Compose, the FlowForge platform all runs in containers and the Projects end up in their own containers, but the Kubernetes platform provides more ways to manage the resources behind the containers and to scale to even bigger deployments.


Today we have released version 0.1.0 and made all the GitHub projects public.

The initial release is primarily focused on getting the core FlowForge platform out there for feedback and we’ve tried to make the LocalFS install experience as smooth as possible. There are example installers for the Docker and Kubernetes drivers but the documentation around these will improve very soon.

You can read the official release announcement here which has a link to the installer and also includes a walk through video.

Debugging Node-RED nodes with Visual Code

A recent Stack Overflow post had me looking at how to run Node-RED using Visual Code to debug custom nodes. Since I’d not tried Visual Code before (I tend to use Sublime Text 4 as my day to day editor) I thought I’d give it a go and see if I could get it working.

We will start with a really basic test node as an example; this just prints the content of msg.payload to the console for any message passing through. It is made up of three files: test.js, test.html and package.json.


// test.js
module.exports = function(RED) {
    function test(n) {
        RED.nodes.createNode(this, n)
        const node = this
        node.on('input', function(msg, send, done){
            send = send || function() { node.send.apply(node,arguments) }  // Node-RED < 1.0 fallback
            console.log(msg.payload)  // print the payload of every message passing through
            send(msg)
            if (done) { done() }
        })
    }
    RED.nodes.registerType("test", test)
}

<!-- test.html -->
<script type="text/html" data-template-name="test">
</script>

<script type="text/html" data-help-name="test">
    <p>A test node that logs msg.payload to the console</p>
</script>

<script type="application/javascript">
    RED.nodes.registerType('test', {
        category: 'test',
        defaults: {},
        inputs: 1,
        outputs: 1,
        label: "test"
    })
</script>


  "name": "test",
  "version": "1.0.0",
  "description": "Example node-red node",
  "keywords": [
  "node-red": {
    "nodes": {
      "test": "test.js"
  "author": "",
  "license": "Apache-2.0"

Setting up

All three files mentioned above are placed in a directory and then the following steps are followed:

  • In the Node-RED userDir (normally ~/.node-red on a Linux machine) run the following command to create a symlink in the node_modules directory. This will allow Node-RED to find and load the node.
    npm install /path/to/test/directory
  • Add the following section to the package.json file
  "scripts": {
    "debug": "node /usr/lib/node_modules/node-red/red.js"
  "node-red": {

Where /usr/lib/node_modules/node-red/red.js is the output from readlink -f `which node-red`.
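
For example, on my machine:

$ readlink -f `which node-red`
/usr/lib/node_modules/node-red/red.js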

You can then add a breakpoint to the code

View of node's javascript code with break point set on line 7

And then start Node-RED by clicking on the Play button just above the scripts block.

view of node's package.json with play symbol and Debug above the scripts block

This will launch Node-RED, attach the debugger and stop when the breakpoint is hit. You can also enable the debugger to stop the application on exceptions, filtering on whether they are caught or not.

This even works when using Visual Code’s remote capabilities for editing, running and debugging projects on remote machines. I’ve tested this running over SSH to a Raspberry Pi Zero 2 W (which is similar to the original Stack Overflow question, as they were trying to debug nodes working with the Pi’s GPIO system). The only change I had to make on the Pi was to increase the default swap file size from 100MB to 256MB, as fitting the Visual Code remote agent and Node-RED into 512MB of RAM is a bit of a squeeze.

I might give Visual Code a go as my daily driver in the new year.


IKEA VINDRIKTNING Air Quality Sensor

I recently saw a tweet linking to a Hackaday article (/ht Andy Piper) about adding an ESP8266 to the new IKEA VINDRIKTNING air quality sensor.

IKEA Air Quality Sensor showing Green Light

The sensor is a little stand-alone unit that measures the amount of PM2.5 particles in the air, with an array of coloured LEDs on the front that show a spectrum from green when the count is low to red when it is high.

Sören Beye opened one up and worked out that the microcontroller that reads the sensor and controls the LEDs does so over a UART serial connection, and that the Tx/Rx lines are exposed via a set of test pads, along with 5V and ground. This makes it easy to attach a second microcontroller to the Rx line to read the response when the sensor is polled.

Sören has written some code for an ESP8266 to decode that response and publish the result via MQTT.

Making the hardware modification is pretty simple:

Wemos D1 Mini attached to sensor
  • Unscrew the case
  • Strip the ends on 3 short pieces of wire
  • Solder the 3 leads to the test pads labelled 5V, G and REST
  • On the Wemos side, solder 5V to 5V, G to G and REST to D2 (assuming you are using a Wemos D1 Mini)
  • Place the Wemos in the empty space above the sensor
  • Screw the case back together

The software is built using the Arduino IDE and is easily flashed via the USB port. Once installed, when the ESP8266 boots it will set up a WiFi Access Point to allow you to enter details for the local WiFi network and the address, username and password for an MQTT broker.

When connected, the sensor publishes a couple of messages to allow auto configuration for people who use Home Assistant, but it also publishes messages like this (values here are illustrative):

    {
      "pm25": 10,
      "ssid": "IoT Network",
      "ip": "192.168.1.xx"
    }

It includes the pm25 value and information about which network it’s connected to and its current IP address. I’m subscribing to this with Node-RED and using it to convert the numerical value, which has units of μg/m³, into a recognised scale (found on page 4).

let pm25 = msg.payload.pm25
if ( pm25 < 12 ) {
  msg.payload.string = "good"
} else if (pm25 >= 12 && pm25 < 36) {
  msg.payload.string = "moderate"
} else if (pm25 >= 36 && pm25 < 56) {
  msg.payload.string = "unhealthy for sensitive groups"
} else if (pm25 >= 56 && pm25 < 151 ) {
  msg.payload.string = "unhealthy"
} else if (pm25 >= 151 && pm25 < 251 ) {
  msg.payload.string = "very unhealthy"
} else if (pm25 >= 251 ) {
  msg.payload.string = "hazardous"
}
return msg;

I’m feeding this into a Google Smart Home Assistant Sensor device that has the SensorState trait; this takes the scale values as input, but you can also include the raw values as well.

msg.payload = {
        "rawValue": msg.payload.pm25
}
return msg;

I will add an Air Quality trait to the Node-RED Google Assistant Bridge shortly.

I’m also routing it to a gauge in a Node-RED Dashboard setup.

Joining FlowForge Inc.

FlowForge Logo

Today is my first day working for FlowForge Inc. I’ll be employee number 2 and joining Nick O’Leary working on all things based around Node-RED and continuing to contribute to the core Open Source project.

We should be building on some of the things I’ve been playing with recently.

Hopefully I’ll be able to share some of the things I’ll be working on soon, but in the meantime here is the short post that Nick wrote when he announced FlowForge a few weeks ago, and a post welcoming me to the team.

To go with this announcement, Hardill Technologies Ltd will be going dormant. It’s been a good 3 months and I’ve built something interesting for my client, which I hope to see go live soon.

Google Assistant Sensors

Having built my 2 different LoRa connected temperature/humidity sensors, I was looking for something other than the Grafana instance that shows the trends.

Being able to ask the Google Assistant the temperature in a room seemed like a good idea, and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with, Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following:

The join node is set to combine the 2 incoming MQTT messages into a single object based on their topics. The function node then builds the right payload to pass to the Google Home output node and finally it feeds it through an RBE node just to make sure we only send updates when the data changes.
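
The combined msg.payload arriving at the function node looks something like this (topic names from my setup, values illustrative):

{
  "bedroom/temp": 21.5,
  "bedroom/humidity": 57.3
}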

msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}
return msg;

Google Assistant Camera Feeds

As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.

The next step is enabling access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then ask the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”

This will create an input message in Node-RED that looks like:

  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
      "online": true

The important part is mainly the SupportedStreamProtocols, which shows the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

The function node updates msg.payload.params with the required details. In this case:

msg.payload.params = {
    cameraStreamAccessUrl: "",
    cameraStreamProtocol: "hls"
}
return msg;

It needs to include the cameraStreamAccessUrl which points to the video stream and the cameraStreamProtocol which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure that they are secured, to prevent them being found by an IoT search engine like Shodan and left open to the world.

Viewing Node-RED Credentials

A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.


The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json.

The content of the file will look a little like this, a single "$" property holding one long encrypted string (truncated here):
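
{
    "$": "5b89f7a1c86f2a07ba4c7e68059febee2a75..."
}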


This isn’t much use on its own as the contents are encrypted, to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.

The secret that is used to encrypt/decrypt this file can be found in one of 2 locations:

  • In the settings.js file in the credentialsSecret field. The user can set this if they want to use a fixed known value.
  • In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not specifically set one in the settings.js file.
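
In the auto-generated case the entry looks something like this (value illustrative):

{
    "_credentialSecret": "f1fb0c9e1fbd..."
}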


In order to make use of these secrets to decrypt the credentials file, we can use a short Node.js script like the following:

const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
  var flows = cipher["$"];
  // the first 32 hex characters are the initialisation vector
  var initVector = Buffer.from(flows.substring(0, 32),'hex');
  flows = flows.substring(32);
  var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
  var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
  return JSON.parse(decrypted);
}
var creds = require("./" + process.argv[2])
var secret = process.argv[3]

// Node-RED derives the encryption key from the SHA-256 hash of the secret
var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds))

If you place this in a file called show-creds.js in the Node-RED userDir you can run it as follows:

$ node show-creds creds.json [secret]

where [secret] is the value stored in credentialsSecret or _credentialSecret from earlier. This will then print out the decrypted JSON object holding all the passwords/tokens from the file.

Hardill Technologies Ltd

Over the last few years I’ve had a number of people approach me to help them build things with Node-RED, but each time it’s generally not been possible to get as involved as I would have liked due to my day job.

Interest started to heat up a bit after I posted my series about building Multi Tenant Node-RED systems, and some of the proposed projects sounded really interesting, so I have decided to start doing some contract work on a couple of them.

Node-RED asking for credentials

The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.

At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event driven systems.

As well as building Multi-Tenant Node-RED environments I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins, some examples include:

If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, need some custom nodes creating, or just want to talk about Node-RED, you can check out my CV here and please feel free to drop me a line on

Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here so keep an eye out for what I’m working on.