Building a Kubernetes Test Environment

Over the last couple of weekends I’ve been noodling around with my home lab setup to build a full local environment to test out FlowForge with both the Kubernetes and Docker container drivers.

The other reason for putting all this together is to help work out the right way to build a proper CI pipeline that can build, automatically test and deploy to our staging environment.

Components

NPM Registry

This is somewhere to push the various FlowForge Node.js modules so they can then be installed while building the container images for the FlowForge app and the Project Container Stacks.

This is a private registry so that I can push pre-release builds without them slipping out into the public domain, but also so I can delete releases and reuse version numbers, which is not allowed on the public NPM registry.

I’m using the Verdaccio registry as I’ve used it in the past to host custom Node-RED nodes (which it will probably end up doing again in this setup as things move forward). It runs as a Docker container and I use my Nginx instance to reverse proxy for it.

As well as hosting my private builds it can proxy for the public npmjs.org registry, which speeds up local builds.
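For reference, the relevant parts of a Verdaccio config.yaml for this kind of split look something like the sketch below (the @flowforge scope is an assumption on my part, substitute whatever scope you publish under):

storage: /verdaccio/storage

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@flowforge/*':
    access: $all
    publish: $authenticated
    # no proxy entry, so these packages only ever come from the private store

  '**':
    access: $all
    publish: $authenticated
    # everything else falls through to the public registry and is cached locally
    proxy: npmjs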

Docker Container Registry

This is somewhere to push the Docker containers that represent both the FlowForge app itself and the Project Stacks.

Docker ships a container image that will run a registry.

As well as the registry I’m also running a second container with this web UI project to help keep track of what I’ve pushed to the registry; it also allows me to delete tags, which is useful when testing.

Again, my internet-facing Nginx instance is proxying for both of these (on the same virtual host, since their routes don’t clash and it makes CORS easier given the UI is all browser-side JavaScript).
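For completeness, standing up the registry itself is just a single container; a minimal sketch (the storage path is arbitrary) looks like this:

$ docker run -d --restart=always --name registry \
    -v /opt/registry:/var/lib/registry \
    -p 5000:5000 \
    registry:2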

Helm Chart Repository

This isn’t really needed, as you can generate all the required files with the helm command and host the results on any web server, but it lets me test the whole stack end to end.

I’m using a package called ChartMuseum, which automatically generates the index.yaml manifest file when charts are uploaded via its simple UI.
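Publishing and consuming a chart then looks roughly like this (the chart name and hostname are just placeholders):

$ helm package ./helm/flowforge
$ curl --data-binary "@flowforge-0.1.0.tgz" https://charts.example.com/api/charts
$ helm repo add flowforge https://charts.example.com
$ helm repo update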

Nginx Proxy

All of the previous components have been stood up as virtual hosts on my public Nginx instance so that they can get HTTPS certificates from LetsEncrypt. This makes things a lot easier because both Docker and Kubernetes basically require the container registry to be secured by default.

While it is possible to add exceptions for specific registries, these days it’s just easier to do it “properly” up front.

MicroK8s Cluster

And finally I need a Kubernetes cluster to run all this on. In this case I have a 3 node cluster made up of

  • 2 Raspberry Pi 4s with 8GB of RAM each
  • 1 Intel Celeron-based mini PC with 8GB of RAM

All 3 of these are running 64bit Ubuntu 20.04 and MicroK8s. The Intel machine is needed because the de facto standard PostgreSQL Helm chart only has amd64-based containers at the moment, so it won’t run on the Raspberry Pi based nodes.

The cluster uses the NFS Persistent Volume provisioner to store volumes on my local NAS so they are available to all the nodes.
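Claiming storage from it is then just a case of referencing the StorageClass in a PersistentVolumeClaim, something like the following sketch (assuming the provisioner registered a class called nfs-client):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flowforge-db
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi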

Usage

I’ll write some more detailed posts about how I’ve configured each of these components and then how I’m using them.

As well as testing the full Helm chart builds, I can run the FlowForge app locally with the Kubernetes container driver on my development machine and have it create Projects in the Kubernetes cluster.

FlowForge v0.1.0

So it’s finally time to talk a bit more about what I’ve been up to for the last few months since joining FlowForge Inc.

The FlowForge platform is a way to manage multiple instances of Node-RED at scale and to control user access to those instances.

The platform comes with 3 different backend drivers

  • LocalFS
  • Docker Compose
  • Kubernetes

LocalFS

This is the driver to use for evaluating the platform, or as a home user who doesn’t want to install all the overhead required for the other 2 drivers. It starts Projects (Node-RED instances) as separate processes on the same machine and runs each one on a separate port. It keeps state in a local SQLite database.

Docker Compose

This version is a little more complicated: it uses the Docker runtime to start containers for the FlowForge runtime, a PostgreSQL database and an Nginx reverse proxy. Each Project lives in its own container and is accessed via a unique hostname prepended to a supplied root domain. This can still run on a single machine (or multiple machines if Docker Swarm mode is used).

Kubernetes

This is the whole shebang. As with Docker Compose, the FlowForge platform runs entirely in containers and the Projects end up in their own containers, but Kubernetes provides more ways to manage the resources behind the containers and to scale to even bigger deployments.

Release

Today we have released version 0.1.0 and made all the GitHub projects public.

The initial release is primarily focused on getting the core FlowForge platform out there for feedback and we’ve tried to make the LocalFS install experience as smooth as possible. There are example installers for the Docker and Kubernetes drivers but the documentation around these will improve very soon.

You can read the official release announcement here, which has a link to the installer and also includes a walk-through video.

Debugging Node-RED nodes with Visual Studio Code

A recent Stack Overflow post had me looking at how to run Node-RED under Visual Studio Code to debug custom nodes. Since I’d not tried VS Code before (I tend to use Sublime Text 4 as my day-to-day editor) I thought I’d give it a go and see if I could get it working.

We will start with a really basic test node as an example. This just prints the content of msg.payload to the console for any message passing through.

test.js

module.exports = function(RED) {
    function test(n) {
        RED.nodes.createNode(this,n)
        const node = this
        node.on('input', function(msg, send, done){
            send = send || function() { node.send.apply(node,arguments) }
            console.log(msg.payload)
            send(msg)
            done()
        })
    }
    RED.nodes.registerType("test", test)
}

test.html

<script type="text/html" data-template-name="test">
</script>

<script type="text/html" data-help-name="test">
</script>

<script type="application/javascript">
    RED.nodes.registerType('test',{
        category: 'test',
        defaults: {},
        inputs: 1,
        outputs: 1,
        label: "test"
    })
</script>

package.json

{
  "name": "test",
  "version": "1.0.0",
  "description": "Example node-red node",
  "keywords": [
    "node-red"
  ],
  "node-red": {
    "nodes": {
      "test": "test.js"
    }
  },
  "author": "ben@example.com",
  "license": "Apache-2.0"
}

Setting up

All three files mentioned above are placed in a directory and then the following steps are followed:

  • In the Node-RED userDir (normally ~/.node-red on a Linux machine) run the following command to create a symlink in the node_modules directory. This will allow Node-RED to find and load the node.
    npm install /path/to/test/directory
  • Add the following section to the package.json file
...
  ],
  "scripts": {
    "debug": "node /usr/lib/node_modules/node-red/red.js"
  },
  "node-red": {
...

Where /usr/lib/node_modules/node-red/red.js is the output from readlink -f `which node-red`.
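If you prefer an explicit launch configuration over the npm scripts approach, a .vscode/launch.json along these lines should do the same job (the program path is whatever readlink reported on your machine):

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Node-RED",
      "program": "/usr/lib/node_modules/node-red/red.js",
      "cwd": "${env:HOME}/.node-red"
    }
  ]
}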

You can then add a breakpoint to the code

View of node's javascript code with break point set on line 7

And then start Node-RED by clicking on the Play button just above the scripts block.

view of node's package.json with play symbol and Debug above the scripts block

This will launch Node-RED, attach the debugger and stop when the breakpoint is hit. You can also enable the debugger to stop the application on exceptions, filtering on whether they are caught or not.

This even works when using VS Code’s remote capabilities for editing, running and debugging projects on remote machines. I’ve tested this running over SSH to a Raspberry Pi Zero 2 W (which is similar to the original Stack Overflow question, as they were trying to debug nodes working with the Pi’s GPIO system). The only change I had to make on the Pi was to increase the default swap file size from 100MB to 256MB, as fitting the VS Code remote agent and Node-RED into 512MB of RAM is a bit of a squeeze.

I might give VS Code a go as my daily driver in the new year.

IKEA VINDRIKTNING PM2.5 Sensor

I recently saw a tweet pointing to a Hackaday article (/ht Andy Piper) about adding an ESP8266 to the new IKEA VINDRIKTNING air quality sensor, and decided to have a go myself.

IKEA Air Quality Sensor showing Green Light

The sensor is a little standalone unit that measures the amount of PM 2.5 particles in the air, with an array of coloured LEDs on the front showing a spectrum from green when the count is low to red when it is high.

Sören Beye opened one up and worked out that the microcontroller that reads the sensor to control the LEDs does so over a UART serial connection, and that the Tx/Rx lines are exposed via a set of test pads along with 5V and ground. This makes it easy to attach a second microcontroller to the Rx line to read the response when the sensor is polled.

Sören has written some code for an ESP8266 to decode that response and publish the result via MQTT.

Making the hardware modification is pretty simple:

Wemos D1 Mini attached to sensor
  • Unscrew the case
  • Strip the ends of 3 short pieces of wire
  • Solder the 3 leads to the test pads labelled 5V, G and REST on the sensor board
  • Solder 5V to 5V, G to G and REST to D2 on the ESP8266 (assuming a Wemos D1 Mini)
  • Place the Wemos in the empty space above the sensor
  • Screw the case back together

The software is built using the Arduino IDE and is easily flashed via the USB port. Once installed, when the ESP8266 first boots it will set up a WiFi access point that lets you enter details for the local WiFi network and the address, username and password of an MQTT broker.

When connected, the sensor publishes a couple of messages to allow auto-configuration for people who use Home Assistant, but it also publishes messages like this:

{
  "pm25":12,
  "wifi":{
    "ssid":"IoT Network",
    "ip":"192.168.1.58",
    "rssi":-60
  }
}

It includes the pm25 value along with information about which network it’s connected to and its current IP address. I’m subscribing to this with Node-RED and using it to convert the numerical value, which has units of μg/m³, into a recognised scale (found on page 4).

let pm25 = msg.payload.pm25
if ( pm25 < 12 ) {
  msg.payload.string = "good"
} else if (pm25 >= 12 && pm25 < 36) {
  msg.payload.string = "moderate"
} else if (pm25 >= 36 && pm25 < 56) {
  msg.payload.string = "unhealthy for sensitive groups"
} else if (pm25 >= 56 && pm25 < 151 ) {
  msg.payload.string = "unhealthy"
} else if (pm25 >= 151 && pm25 < 251 ) {
  msg.payload.string = "very unhealthy"
} else if (pm25 >= 251 ) {
  msg.payload.string = "hazardous"
}
return msg;

I’m feeding this into a Google Smart Home Assistant Sensor device that has the SensorState trait. This takes the scale values as input, but you can also include the raw values as well.

msg.payload = {
  "params":{
    "currentSensorStateData":[
      {
        "name":"AirQuality",
        "currentSensorState":msg.payload.string
      },
      {
        "name":"PM2.5",
        "rawValue": msg.payload.pm25
      }
    ]
  }
}
return msg;

I will add an Air Quality trait to the Node-RED Google Assistant Bridge shortly.

I’m also routing it to a gauge in a Node-RED Dashboard setup.

Google Assistant Sensors

Having built my 2 different LoRa-connected temperature/humidity sensors, I was looking for something other than the Grafana instance that shows the trends.

Being able to ask Google Assistant the temperature in a room seemed like a good idea, and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with, Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following.

The join node is set to combine the 2 incoming MQTT messages into a single object based on their topics. The function node then builds the right payload to pass to the Google Home output node, and finally the message is run through an RBE node to make sure we only send updates when the data changes.

msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}
return msg;

Google Assistant Camera Feeds

As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.

The next step is to enable access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then say to the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”.

This will create an input message in Node-RED that looks like:

{
  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
        "progressive_mp4",
        "hls",
        "dash",
        "smooth_stream"
      ],
      "online": true
    }
  }
}

The important part is mainly the SupportedStreamProtocols entry, which lists the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

The function node updates msg.payload.params with the required details, in this case:

msg.payload.params = {
    cameraStreamAccessUrl: "http://192.168.1.96:8080/hls/stream.m3u8",
    cameraStreamProtocol: "hls"
}
return msg;

It needs to include the cameraStreamAccessUrl which points to the video stream and the cameraStreamProtocol which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure they are secured, to prevent them being found by an IoT search engine like Shodan and left open to the world.

Viewing Node-RED Credentials

A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.

Background

The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json.

The content of the file will look a little like this:

{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}

This isn’t much use on its own, as the contents are encrypted to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.

The secret that is used to encrypt/decrypt this file can be found in one of 2 locations:

  • In the settings.js file in the credentialSecret field. The user can set this if they want to use a fixed known value.
  • In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not specifically set one in the settings.js file (the snippet below shows how to pull it out).
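For example, assuming jq is installed, the generated secret can be pulled out with:

$ jq -r '._credentialSecret' ~/.node-red/.config.runtime.json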

Code

In order to make use of this secret to decrypt the credentials file, we can use the following short Node.js script:

const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
  var flows = cipher["$"];
  // the first 32 hex characters are the initialisation vector
  var initVector = Buffer.from(flows.substring(0, 32),'hex');
  // the rest is the base64 encoded ciphertext
  flows = flows.substring(32);
  var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
  var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
  return JSON.parse(decrypted);
}

var creds = require("./" + process.argv[2])
var secret = process.argv[3]

var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds))

If you put this in a file called show-creds.js in the Node-RED userDir you can run it as follows:

$ node show-creds creds.json [secret]

where [secret] is the value stored in credentialSecret or _credentialSecret from earlier. This will then print out the decrypted JSON object holding all the passwords/tokens from the file.

Hardill Technologies Ltd

Over the last few years I’ve had a number of people approach me to help them build things with Node-RED, but each time it hasn’t generally been possible to get as involved as I would have liked due to my day job.

Interest started to heat up a bit after I posted my series about building Multi Tenant Node-RED systems, and some of the projects sounded really interesting, so I have decided to start doing some contract work on a couple of them.

Node-RED asking for credentials

The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.

At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event driven systems.

As well as building Multi-Tenant Node-RED environments I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins, some examples include:

If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, need some custom nodes creating or just want to talk about Node-RED you can check out my CV here and please feel free to drop me a line on tech@hardill.me.uk.

Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here so keep an eye out for what I’m working on.

Multi Tenant Node-RED with Kubernetes

Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.

I started with installing the 64bit build of Ubuntu Server on a fresh Pi4 with 8GB RAM and then using snapd to install microk8s. I had initially wanted to use the 64bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.

Once installed, I enabled the dns and ingress addons, which got me a minimal viable single-node Kubernetes setup.

I also had to stand up a private Docker registry to hold the containers I’ll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also means adding the URL for this registry to microk8s as described here.

Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.

Architecture

As before, the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I’ve deployed the database and web app as a single pod and exposed them both as services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-multi-tenant
  labels:
    app: nr-mt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nr-mt
  template:
    metadata:
      labels:
        app: nr-mt
    spec:
      containers:
      - name: node-red-manager
        image: private.example.com/k8s-manager
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: secret
          mountPath: /usr/src/app/config
        env:
        - name: MONGO_URL
          value: mongodb://mongo/nodered
        - name: ROOT_DOMAIN
          value: example.com
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      - name: registry
        image: verdaccio/verdaccio
        ports:
        - containerPort: 4873
        volumeMounts:
        - name: registry-data
          mountPath: /verdaccio/storage
        - name: registry-conf
          mountPath: /verdaccio/conf
      volumes:
      - name: secret
        secret:
          secretName: kube-config
      - name: mongo-data
        hostPath:
          path: /opt/mongo-data
          type: Directory
      - name: registry-data
        hostPath:
          path: /opt/registry-data
          type: Directory
      - name: registry-conf
        secret:
          secretName: registry-conf

This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.

It also binds 2 sets of secrets: the first holds the authentication details needed to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app; the second is the config for the Verdaccio NPM registry.

I’m using the HostPath volume provider to store the MongoDB data and the Verdaccio registry data on the filesystem of the Pi, but for a production deployment I’d probably use the NFS provider or a cloud storage option like AWS S3.

Manager

This is mainly the same as the Docker version, but I had to swap out dockerode for kubernetes-client.

This library exposes the full Kubernetes API, allowing the creation/modification/destruction of all entities.

Standing up a new instance is a little more complicated as it’s now a multi-step process (sketched in code after the list):

  1. Create a Pod with the custom-node-red container
  2. Create a Service based on that pod
  3. Expose that service via the Ingress addon
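A rough sketch of what that looks like with kubernetes-client is below; the manifests are stripped right down and the image name and domain are placeholders, so treat it as illustrative rather than the actual manager code.

const { Client } = require('kubernetes-client')
// picks up ~/.kube/config (or the in-cluster service account) automatically
const client = new Client({ version: '1.13' })

async function createInstance (name) {
  // 1. Pod running the custom Node-RED image
  await client.api.v1.namespaces('default').pods.post({
    body: {
      apiVersion: 'v1',
      kind: 'Pod',
      metadata: { name, labels: { app: name } },
      spec: {
        containers: [{
          name: 'node-red',
          image: 'private.example.com:5000/custom-node-red',
          ports: [{ containerPort: 1880 }]
        }]
      }
    }
  })

  // 2. Service pointing at that Pod
  await client.api.v1.namespaces('default').services.post({
    body: {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: { name },
      spec: {
        selector: { app: name },
        ports: [{ port: 1880 }]
      }
    }
  })

  // 3. Ingress exposing the Service on <name>.example.com
  await client.apis.extensions.v1beta1.namespaces('default').ingresses.post({
    body: {
      apiVersion: 'extensions/v1beta1',
      kind: 'Ingress',
      metadata: { name },
      spec: {
        rules: [{
          host: `${name}.example.com`,
          http: {
            paths: [{
              path: '/',
              backend: { serviceName: name, servicePort: 1880 }
            }]
          }
        }]
      }
    }
  })
}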

I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.

All the code for this version of the app is on GitHub here.

Catalogue

In the Docker Compose version the custom node `catalogue.json` file was hosted by the management application and had to be manually updated each time a new or updated node was pushed to the registry. For this version I’ve stood up a separate container.

This container runs a small web app that has 2 endpoints (a minimal sketch is shown after the list).

  • /catalogue.json – which returns the current version of the catalogue
  • /update – which is triggered by the notify function of the Verdaccio private npm registry
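A minimal sketch of that web app (not the exact code in the repository, and with the catalogue entry fields trimmed right down) looks like this:

const express = require('express')
const bodyParser = require('body-parser')

const app = express()
app.use(bodyParser.json())

// in-memory catalogue in roughly the shape the Node-RED palette manager expects
const catalogue = {
  name: 'Private catalogue',
  updated_at: new Date().toISOString(),
  modules: []
}

// returns the current version of the catalogue
app.get('/catalogue.json', (req, res) => {
  res.json(catalogue)
})

// called by Verdaccio's notify hook each time a package is published
app.post('/update', (req, res) => {
  const entry = catalogue.modules.find(m => m.id === req.body.name)
  if (entry) {
    entry.updated_at = new Date().toISOString()
  } else {
    catalogue.modules.push({ id: req.body.name, updated_at: new Date().toISOString() })
  }
  catalogue.updated_at = new Date().toISOString()
  res.sendStatus(200)
})

app.listen(80)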

The registry has this snippet added to the end of the config.yml

notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue/update
  content: '{"name": "{{name}}", "versions": "{{versions}}", "dist-tags": "{{dist-tags}}"}'

The code for this container can be found here.

Deploying

First clone the project from github

$ git clone --recurse-submodules https://github.com/hardillb/multi-tenant-node-red-k8s.git

Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.

$ ./setup.sh example.com private.example.com:5000

This will update some of the container locations in the deployment and build the secrets needed to access the Kubernetes API (it reads the content of ~/.kube/config).

With all the configuration files updated the containers need building and pushing to the local container registry.

$ docker build ./manager -t private.example.com:5000/k8s-manager
$ docker push private.example.com:5000/k8s-manager
$ docker build ./catalogue -t private.example.com:5000/catalogue
$ docker push private.example.com:5000/catalogue
$ docker build ./custom-node-red -t private.example.com:5000/custom-node-red
$ docker push private.example.com:5000/custom-node-red

Finally trigger the actual deployment with kubectl

$ kubectl apply -f ./deployment

Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.

A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster’s Ingress IP addresses.

As usual the whole solution can be found on GitHub here.

What’s Next

I need to work out how to set up Avahi CNAME entries for each deployment, as I had it working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.

I also need to back port the new catalogue handling to the docker-compose version.

And finally I want to have a look at generating a Helm chart for all this, to remove the need for the setup.sh script to modify the deployment YAML files.

p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.

Update

What is described above was a great way to work out what was possible and learn a lot, but it is all PoC code and not really intended to be used in a “production” environment.

If that is what you are looking for I suggest you look at FlowForge, as this is under active development. As well as the core being OpenSource, the project is also available as a licensable/supported offering, both to run yourself and as a hosted service. For details check out FlowForge’s website here.

Multi Tenant Node-RED Working Example

I’ve now completed all the parts I outlined in the first post in this series and it’s time to put it all together to end up with a running system.

Since the stack is already running on Docker, using docker-compose to assemble the orchestration seemed like the right thing to do. The whole docker-compose project can be found on GitHub here.

Once you’ve checked out the project run the setup.sh script with the root domain as the first argument. This will do the following:

  • Check out the submodules (the management app and Mongoose schema objects).
  • Create and set the correct permissions on the required local directories that are mounted as volumes.
  • Build the custom-node-red docker container that will be used for each of the Node-RED instances.
  • Change the root domain for all the virtual hosts to match the value passed in as an argument, or, if left blank, the current hostname with .local appended.
  • Create the Docker network that all the images will be attached to.

The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.

version: "3.3"

services:
  nginx:
    image: jwilder/nginx-proxy
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    ports:
      - "80:80"

  mongodb:
    image: mongo
    networks:
      - internal
    volumes:
      - "./mongodb:/data/db"
    ports:
      - "27017:27017"

  registry:
    image: verdaccio/verdaccio
    networks:
      - internal
    volumes:
      - "./registry:/verdaccio/conf"
    ports:
      - "4873:4873"

  manager:
    image: manager
    build: "./manager"
    depends_on:
      - mongodb
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
      - "./catalogue:/data"
    environment:
      - "VIRTUAL_HOST=manager.example.com"
      - "MONGO_URL=mongodb://mongodb/nodered"
      - "ROOT_DOMAIN=docker.local"


networks:
  internal:
    external:
      name: internal

It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage so that this data is persisted across restarts.

The ports for direct access to the MongoDB instance and the registry are currently exposed on the Docker host. In a production environment it would probably make sense not to expose the MongoDB instance as it is currently unsecured, but it’s been useful while developing/debugging the system.

The port for the private NPM registry is also exposed to allow packages to be published from outside the system.

And finally it binds to the network that was created by the setup.sh script, which is used for all the containers to communicate. This also means that the containers can address each other by name, as Docker runs a DNS resolver on the custom network.

The management application is also directly exposed as manager.example.com; you should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.

Conclusions

This is about as bare-bones as a Multi Tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup-based memory and CPU limits), but it should be enough for a classroom deployment, allowing each student to have their own instance.

Next steps would be to look at what more the management app could do, e.g. expose the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.

If you build something based on this please let both me and the Node-RED community know.

And if you are looking for somebody to help you build on this please get in touch.