Google Assistant Sensors

Having built my 2 different LoRa-connected temperature/humidity sensors I was looking for something other than the Grafana instance that shows the trends.

Being able to ask Google Assistant the temperature in a room seemed like a good idea and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with, Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following.

The join node is set to combine the 2 incoming MQTT messages into a single object keyed by their topics. The function node then builds the right payload to pass to the Google Home output node, and finally the result is fed through an RBE node just to make sure we only send updates when the data changes.
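After the join node the message payload is keyed by topic and will look something like this (the values are illustrative; the topics are assumed from the function node that follows):

// Output of the join node, keyed by MQTT topic
msg.payload = {
  "bedroom/temp": 21.5,
  "bedroom/humidity": 52.3
}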

// Build the Google Home Graph state update from the joined readings
msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}
return msg;

Setting up an AWS EC2 Mac

I recently needed to debug some problems running a Kubernetes app on a Mac. The problem is I don’t have a Mac, or easy access to one that I can have full control over to poke and prod at things. (I’m also not the biggest fan of macOS, but that’s a separate story.)

Recently AWS started to offer Mac Mini EC2 instances. These differ a little from most normal EC2 instances as they are an actual dedicated bit of hardware that you have exclusive access to rather than a VM on hardware shared with others.

Because it’s a dedicated bit of hardware, the process for setting one up is a little different.

Starting the Instance

First you will probably need to request a limit increase on your account, as the default limit for dedicated hosts looks to be 0. This limit is also per region, so you will need to ask for the increase in every region you need. To request it, use the AWS Support Center: use the “Create Case” button and select “Service Limit Increase”. From the drop down select “EC2 Dedicated Hosts”, pick the region, request an update to the mac1 instance type and enter the number of concurrent instances you will need. It took a little time for my request to be processed, but I did submit it on Friday afternoon and it was approved on Sunday morning.

Once it has been approved you can create a new “Dedicated Hosts” instance in the EC2 console, with an “Instance Family” of mac1 and an “Instance Type” of mac1.metal. You can pick your availability zone (not all Regions and AZs have all instance types, so it might not be possible to allocate a Mac in every zone). I also suggest you tick the “Instance auto-placement” box.

Once that is complete you can actually allocate an EC2 instance on this dedicated host. You get to pick which version of macOS you want to run. Assuming you only have one dedicated host and you ticked the auto-placement box, you shouldn’t need to pick the hardware to run the instance on.

The other main things to pick as you walk through the wizard are the amount of disk space (the default is 60GB), which security group you want (be sure to pick one with SSH access) and which SSH key you’ll use to log in.

The instances do take a while to start, but given it’s doing a fresh macOS install on the hardware this is probably not a surprise. Once the console says it’s up and both status checks are passing, you’ll be able to SSH into the box.

Enabling a GUI

Once logged in you can do most things from the command line, but I needed to run Docker, and all the instructions I could find online said I needed to download Docker Desktop and install that via the GUI.

I found the following gist which helped.

  • First up, set a password for the ec2-user
    sudo passwd ec2-user
  • Second, enable the VNC server
% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
-activate -configure -access -on \
-configure -allowAccessFor -specifiedUsers \
-configure -users ec2-user \
-configure -restart -agent -privs -all

% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
 -configure -access -on -privs -all -users ec2-user

You can then add -L 5900:localhost:5900 to the ssh command that you use to log into the Mac. This will forward the VNC port to localhost.

VNC Viewer or Remmina can then be used to start a session that gives full access to the Mac’s GUI.

Expand the disk

If you have allocated more than the default 60GB then you will need to expand the disk to make full use of it.

% PDISK=$(diskutil list physical external | head -n1 | cut -d" " -f1)
% APFSCONT=$(diskutil list physical external | grep "Apple_APFS" | tr -s " " | cut -d" " -f8)
% sudo diskutil repairDisk $PDISK
# Accept the prompt with "y", then paste this command
% sudo diskutil apfs resizeContainer $APFSCONT 0

Add tools

The instance comes with Homebrew pre-setup so you can install nearly anything else you might need.

Shut it down when you are done

Mac EC2 instances really are not cheap ($25.99 per day…) so remember to kill it off when you are done.

Google Assistant Camera Feeds

As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.

The next step is to enable access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then ask the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”

This will create an input message in Node-RED that looks like:

{
  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
        "progressive_mp4",
        "hls",
        "dash",
        "smooth_stream"
      ],
      "online": true
    }
  }
}

The important part is mainly the SupportedStreamProtocols, which shows the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

The function node updates the msg.payload.params with the required details, in this case:

msg.payload.params = {
    cameraStreamAccessUrl: "http://192.168.1.96:8080/hls/stream.m3u8",
    cameraStreamProtocol: "hls"
}
return msg;

It needs to include the cameraStreamAccessUrl which points to the video stream and the cameraStreamProtocol which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure that they are secured, to prevent them being found by an IoT search engine like Shodan and left open to the world.

Working with multiple EFS file systems in EKS

I’ve been building a system recently on AWS EKS and using EFS filesystems as volumes for persistent storage.

I initially only had one container that required any storage, but as I added a second I ran into the issue that there didn’t look to be a way to bind an EFS volume to a specific PersistentVolumeClaim, so no way to make sure the same volume was mounted into the same container each time.

A Pod requests a volume by referencing a PersistentVolumeClaim as follows:

apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    volumeMounts:
    - name: efs-volume
      mountPath: /data
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-claim

The PersistentVolumeClaim would look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

You can bind the EFS volume to a PersistentVolume as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-persistent-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-6eb2fc16

The volumeHandle points to the EFS volume you want to back it.

If there is only one PersistentVolume then there is not a problem, as the PersistentVolumeClaim will grab the only one available. But if there is more than one, then you can include the volumeName in the PersistentVolumeClaim description to bind the two together.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-persistent-volume

After a bit of poking around I found this Stack Overflow question which pointed me in the right direction.

Viewing Node-RED Credentials

A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.

Background

The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json

The content of the file will look a little like this:

{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}

This isn’t much use on its own as the contents are encrypted, to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.

The secret that is used to encrypt/decrypt this file can be found in one of 2 locations:

  • In the settings.js file in the credentialSecret field. The user can set this if they want to use a fixed known value.
  • In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not specifically set one in the settings.js file.
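If you are relying on the auto-generated secret (the second option above), a quick way to pull it out is a snippet like this (a minimal sketch, assuming the newer .config.runtime.json file name and that it is run from the userDir):

const fs = require('fs');

// Print the automatically generated credential secret from the Node-RED userDir
const runtimeConfig = JSON.parse(fs.readFileSync('.config.runtime.json', 'utf8'));
console.log(runtimeConfig._credentialSecret);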

Code

In order to make use of the secret to decrypt the credentials file, you can use a small Node.js script like the following:

const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
  var flows = cipher["$"];
  var initVector = Buffer.from(flows.substring(0, 32),'hex');
  flows = flows.substring(32);
  var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
  var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
  return JSON.parse(decrypted);
}

var creds = require("./" + process.argv[2])
var secret = process.argv[3]

var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds))

If you place this in a file called show-creds.js in the Node-RED userDir you can run it as follows:

$ node show-creds creds.json [secret]

where [secret] is the value stored in credentialSecret or _credentialSecret from earlier. This will then print out the decrypted JSON object holding all the passwords/tokens from the file.

Adding a TPM to My Offline Certificate Authority

Back at the start of last year, I built an offline Certificate Authority based around a Pi Zero and an RTC module.

The idea was to run the CA on a Pi that can only be accessed when it’s plugged in via a USB cable to another machine. This means that the CA cert and private key are normally offline and only potentially accessible by an attacker when plugged in.

For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.

TPM

A TPM or Trusted Platform Module is a dedicated CPU paired with some dedicated NVRAM. The CPU is capable of doing some pretty basic crypto functions and providing a good random number generator, and the NVRAM is used to store private keys.

TPM & RTC on a Raspberry Pi Zero

TPMs also have a feature called PCRs (Platform Configuration Registers) which can be used to validate the hardware and software stack used to boot the machine. This means you can use them to detect if the system has been tampered with at any point. This does require integration into the bootloader for the system.

You can set access policies for keys protected by the TPM to allow access only if the PCRs match a known pattern; some disk encryption systems, like LUKS on Linux and BitLocker on Windows1, can use this to automatically unlock the encrypted drive.

You can get a TPM for the Raspberry Pi from a group called LetsTrust (that is available online here).

It mounts onto the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.

dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670

Since the Raspberry Pi bootloader is not TPM aware, the PCRs are never initialised, so we can’t use them to automatically unlock an encrypted volume.

Using the TPM with the CA

Even without the PCRs the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes the private key useless if anybody does manage to remotely log into the device and make a copy.

Of course, since it just pushes onto the Pi header, if anybody manages to get physical access they can just take the TPM and SD card, but as with all security mechanisms, once an attacker has physical access all bets are usually off.

There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, along with a utility called tpm2tss-genkey that can be used to create new keys, or an existing key can be imported.

Generating New Keys

You can generate a new CA key and self-signed certificate with the following commands:

$ tpm2tss-genkey -a rsa -s 2048 ca.tss
$ openssl req -new -x509 -engine tpm2tss -key ca.tss  -keyform engine -out ca.crt

This certificate can now be used to sign CSRs

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Importing Keys

For an existing ca.key private key file.

$ tpm2_createprimary --hierarchy=o --hash-algorithm=sha256 --key-algorithm=rsa --key-context=primary_owner_key.ctx
$ HANDLE=$(tpm2_evictcontrol --hierarchy=o --object-context=primary_owner_key.ctx | cut -d ' ' -f 2 | head -n 1)
$ tpm2_import -C primary_owner_key.ctx -G rsa -i ca.key -u ca-pub.tpm -r ca.tpm
$ tpm2tss-genkey --public ca-pub.tpm --private ca.tpm --parent $HANDLE --password secret ca.tss

And we can then sign new CSRs the same way as with the generated key

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Once the keys have been imported it’s important to remember to clean up the original key file (ca.key) so an attacker can’t just use that instead of the one protected by the TPM. Any attacker now needs both the password for the key and the TPM device that was used to cloak it.

Web Interface

At the moment the node-openssl-cert node that I’m using to drive the web interface to the CA doesn’t look to support passing in engine arguments, so I’m having to drive it all manually on the command line, but I’ll be looking at a way to add support to the library. I’ll try and generate a pull request when I get something working.


1Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS 13 also has one (it was an optional extra on the first version and not included in the Sputnik edition).

Hardill Technologies Ltd

Over the last few years I’ve had a number of people approach me to help them build things with Node-RED. Each time it hasn’t generally been possible to get as involved as I would have liked due to my day job.

Interest started to heat up a bit after I posted my series of posts about building Multi Tenant Node-RED systems and some of them sounded really interesting. So I have decided to start doing some contract work on a couple of them.

Node-RED asking for credentials

The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.

At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event driven systems.

As well as building Multi-Tenant Node-RED environments I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins, some examples include:

If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, need some custom nodes creating or just want to talk about Node-RED you can check out my CV here and please feel free to drop me a line on tech@hardill.me.uk.

Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here so keep an eye out for what I’m working on.

Email Autoconfiguration

I finally got round to setting up a new version of a virtual machine I had on my old laptop. Its purpose is basically to host an email client that accesses a bunch of email addresses I have set up on my domain.

It was all going smoothly until I actually got round to adding the account details to Thunderbird

It sat and spun like this for a while, then popped up the manual configuration view.

Which is fine, as I know the difference between POP3 and IMAP, but it’s the sort of thing that really confuses most users (I’ve lost count of the number of times I’ve had to talk people through this over the phone).

The problem is I thought I’d already fixed this particular problem. Back when I last set up a bunch of email addresses, I remember setting up a set of DNS SRV records to point to both the inbound mail server and the IMAP server.

SRV Records

SRV records allow you to say which servers to use for a particular protocol on a given domain. The entries are made up of the protocol, followed by the transport type and then the domain, e.g.

_submission._tcp.example.com

The mail client would look up the SRV record for this hostname to find the mail submission server for the example.com domain and would get a response that looks like this:

_submission._tcp.example.com.	3600 IN	SRV	0 1 587 mail.example.com.

where:

  • 3600 is the Time To Live (how long to cache this result, in seconds)
  • IN is the class (Internet)
  • SRV is the record type
  • 0 is the Priority (if there are multiple records, try the lowest priority first)
  • 1 is the Weight (among records with the same priority, higher weights are picked proportionally more often)
  • 587 is the port number of the service
  • mail.example.com is the host where the service can be found.

I have SRV records for the following protocols enabled:

  • Mail Submission _submission._tcp
  • IMAPS _imaps._tcp
  • SIP _sip._tcp & _sip._udp
  • SIPS _sips._tcp

Using SRV records for email discovery is covered by RFC6186. SRV records are also used in the VoIP space to point to SIP servers.
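As an illustration of how little work RFC6186 asks of a client (this is not how Thunderbird would do it), the lookup is a single call to the built-in dns module in Node.js, shown here against the example.com records above:

const { resolveSrv } = require('dns').promises;

// Find the mail submission server for a domain via its SRV record
async function findSubmissionServer(domain) {
  const records = await resolveSrv('_submission._tcp.' + domain);
  // Try the lowest priority first; among equal priorities prefer the higher weight
  records.sort((a, b) => a.priority - b.priority || b.weight - a.weight);
  return records[0]; // e.g. { name: 'mail.example.com', port: 587, priority: 0, weight: 1 }
}

findSubmissionServer('example.com').then(console.log).catch(console.error);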

So the question is why this doesn’t work. The problem looks to be that Thunderbird hasn’t implemented support for RFC6186 just yet. A bit of digging found this document, which covers the current design for Thunderbird’s account auto-configuration and which bits are still to be implemented. It looks like the only option that currently works is the XML configuration file.

config-v1.1.xml file

The document lists a few locations, relative to the domain, where a file holding details of how to configure the email account can be placed. This includes http://example.com/.well-known/autoconfig/mail/config-v1.1.xml where example.com is the domain part of the email address.

The schema for config-v1.1.xml can be found here. A basic minimal entry would look something like this:

<?xml version="1.0"?>
<clientConfig version="1.1">
    <emailProvider id="example.com">
      <domain>example.com</domain>
      <displayName>Example Mail</displayName>
      <displayShortName>Example</displayShortName>
      <incomingServer type="imap">
         <hostname>mail.example.com</hostname>
         <port>993</port>
         <socketType>SSL</socketType>
         <username>%EMAILADDRESS%</username>
         <authentication>password-cleartext</authentication>
      </incomingServer>

      <outgoingServer type="smtp">
         <hostname>mail.example.com</hostname>
         <port>587</port>
         <socketType>STARTTLS</socketType> 
         <username>%EMAILADDRESS%</username> 
         <authentication>password-cleartext</authentication>
         <addThisServer>true</addThisServer>
      </outgoingServer>
    </emailProvider>
    <clientConfigUpdate url="https://www.example.com/config/mozilla.xml" />
</clientConfig>

Apart from the obvious parts that say which servers to connect to, the other useful bit is found in the <username> tags. Here I’m using %EMAILADDRESS%, which says to use the whole email address as the username. You can also use %EMAILLOCALPART%, which is everything before the @ sign, and %EMAILDOMAIN%, which is everything after the @ sign.

The documentation includes options for setting up remote address books and calendar information, though it doesn’t look like Thunderbird supports all of these options just yet.

With this file in place on my HTTP server, Thunderbird now sets everything up properly.

Setting up mDNS CNAME entries for K8S Ingress Hostnames

As I hinted at the end of my last post, I’ve been looking for a way to take the hostnames set up for Kubernetes Ingress endpoints and turn them into mDNS CNAME entries.

When I’m building things I like to spin up a local copy where possible (e.g. microk8s on a Pi 4 for the Node-RED on Kubernetes and the Docker Compose environment on another Pi 4 for the previous version). These setups run on my local network at home and while I have my own DNS server set up and running I also make extensive use of mDNS to be able to access the different services.

I’ve previously built little utilities to generate mDNS CNAME entries for both Nginx and Traefik reverse proxies, using Environment Variables or Labels in a Docker environment, so I was keen to see if I could build the same for Kubernetes’ Ingress proxy.

Watching for new Ingress endpoints

The kubernetes-client node module supports watching certain endpoints, so it can be used to get notifications when an Ingress endpoint is created or destroyed.

const stream = client.apis.extensions.v1beta1.namespaces("default").ingresses.getStream({qs:{ watch: true}})
const jsonStream = new JSONStream()
stream.pipe(jsonStream)
jsonStream.on('data', async obj => {
  if (obj.type == "ADDED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  } else if (obj.type == "DELETED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  }
})

Creating the CNAME

For the previous versions I used a python library called mdns-publish to set up the CNAME entries. It works by sending DBUS messages to the Avahi daemon which actually answers the mDNS requests on the network. For this version I decided to try and send those DBUS messages directly from the app watching for changes in K8s.

The dbus-next node module allows working directly with the DBUS interfaces that Avahi exposes.

const dbus = require('dbus-next');
const bus = dbus.systemBus()
bus.getProxyObject('org.freedesktop.Avahi', '/')
.then( async obj => {
	const server = obj.getInterface('org.freedesktop.Avahi.Server')
	const entryGroupPath = await server.EntryGroupNew()
	const entryGroup = await bus.getProxyObject('org.freedesktop.Avahi',  entryGroupPath)
	const entryGroupInt = entryGroup.getInterface('org.freedesktop.Avahi.EntryGroup')
	var interface = -1
	var protocol = -1
	var flags = 0
	var name = host
	var clazz = 0x01
	var type = 0x05
	var ttl = 60
	var rdata = encodeFQDN(hostname)
	entryGroupInt.AddRecord(interface, protocol, flags, name, clazz, type, ttl, rdata)
	entryGroupInt.Commit()
})
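The encodeFQDN() helper isn’t shown above; it just has to turn the target hostname into the DNS wire format used for the CNAME record’s RDATA (length-prefixed labels terminated by a zero byte). A minimal version could look like this (my sketch; the original implementation may differ):

function encodeFQDN(fqdn) {
  // Each label is prefixed with its length, and the whole name ends with a zero byte
  const labels = fqdn.split('.').filter(l => l.length > 0).map(label => {
    const bytes = Buffer.from(label, 'ascii');
    return Buffer.concat([Buffer.from([bytes.length]), bytes]);
  });
  return Buffer.concat([...labels, Buffer.from([0])]);
}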

Add a signal handler to clean up when the app gets killed, and we are pretty much good to go.

process.on('SIGTERM', cleanup)
process.on('SIGINT', cleanup)
function cleanup() {
	const keys = Object.keys(cnames)
	for (k in keys) {
		//console.log(keys[k])
		cnames[keys[k]].Reset()
		cnames[keys[k]].Free()
	}
	bus.disconnect()
	process.exit(0)
}

Running

Once it’s all put together it runs as follows:

$ node index.js /home/ubuntu/.kube/config ubuntu.local

The first argument is the path to the kubectl config file and the second is the hostname the CNAME should point to.

If the Ingress controller is running on ubuntu.local then the Ingress YAML would look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: manager-ingress
spec:
  rules:
  - host: "manager.ubuntu.local"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: manager
            port:
              number: 3000 

I’ve tested this with my local microk8s install and it is working pretty well (even on my folks’ really sketchy wifi). The code is all up here.

Multi Tenant Node-RED with Kubernetes

Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.

I started by installing the 64bit build of Ubuntu Server on a fresh Pi 4 with 8GB RAM and then using snapd to install microk8s. I had initially wanted to use the 64bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.

Once installed I enabled the dns and ingress plugins, this got me a minimal viable single node Kubernetes setup working.

I also had to stand up a private Docker registry to hold the containers I’ll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also means adding the URL for this registry to microk8s as described here.

Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.

Architecture

As before the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I’ve deployed the database and web app as a single pod and exposed them both as services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-multi-tenant
  labels:
    app: nr-mt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nr-mt
  template:
    metadata:
      labels:
        app: nr-mt
    spec:
      containers:
      - name: node-red-manager
        image: private.example.com/k8s-manager
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: secret
          mountPath: /usr/src/app/config
        env:
        - name: MONGO_URL
          value: mongodb://mongo/nodered
        - name: ROOT_DOMAIN
          value: example.com
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      - name: registry
        image: verdaccio/verdaccio
        ports:
        - containerPort: 4873
        volumeMounts:
        - name: registry-data
          mountPath: /verdaccio/storage
        - name: registry-conf
          mountPath: /verdaccio/conf
      volumes:
      - name: secret
        secret:
          secretName: kube-config
      - name: mongo-data
        hostPath:
          path: /opt/mongo-data
          type: Directory
      - name: registry-data
        hostPath:
          path: /opt/registry-data
          type: Directory
      - name: registry-conf
        secret:
          secretName: registry-conf

This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.

It also binds 2 sets of secrets: the first holds the authentication details to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app; the second is the config for the Verdaccio NPM registry.

I’m using the HostPath volume provider to store the MongoDB data and the Verdaccio registry data on the filesystem of the Pi, but for a production deployment I’d probably use the NFS provider or a cloud storage option like AWS S3.

Manager

This is mainly the same as the docker version, but I had to swap out dockerode for kubernetes-client.

This library exposes the full Kubernetes API, allowing the creation/modification/destruction of all entities.

Standing up a new instance is a little more complicated as it’s now a multi step process (a rough sketch of the steps follows the list).

  1. Create a Pod with the custom-node-red container
  2. Create a Service based on that pod
  3. Expose that service via the Ingress addon
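A rough sketch of those three steps, using the same kubernetes-client style as the earlier watch example, is shown below. The names, image location and port are illustrative, and the real manager does more than this (for example wiring up the MongoDB settings):

// Assumes `client` is an already configured kubernetes-client instance
async function createInstance(name) {
  const labels = { app: name };

  // 1. Create a Pod running the custom-node-red container
  await client.api.v1.namespaces('default').pods.post({ body: {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: { name: name, labels: labels },
    spec: {
      containers: [{
        name: name,
        image: 'private.example.com:5000/custom-node-red',
        ports: [{ containerPort: 1880 }]
      }]
    }
  }});

  // 2. Create a Service that selects that Pod
  await client.api.v1.namespaces('default').services.post({ body: {
    apiVersion: 'v1',
    kind: 'Service',
    metadata: { name: name },
    spec: {
      selector: labels,
      ports: [{ port: 1880 }]
    }
  }});

  // 3. Expose the Service via the Ingress addon
  await client.apis.extensions.v1beta1.namespaces('default').ingresses.post({ body: {
    apiVersion: 'extensions/v1beta1',
    kind: 'Ingress',
    metadata: { name: name },
    spec: {
      rules: [{
        host: name + '.example.com',
        http: {
          paths: [{ path: '/', backend: { serviceName: name, servicePort: 1880 } }]
        }
      }]
    }
  }});
}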

I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.

All the code for this version of the app is on github here.

Catalogue

In the Docker-Compose version the custom node `catalogue.json` file is hosted by the management application and had to be manually updated each time a new or updated node was pushed to the repository. For this version I’ve stood up a separate container.

This container runs a small web app that has 2 endpoints.

  • /catalogue.json – which returns the current version of the catalogue
  • /update – which is triggered by the notify function of the Verdaccio private npm registry

The registry has this snippet added to the end of its config.yml:

notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue/update
  content: '{"name": "{{name}}", "versions": "{{versions}}", "dist-tags": "{{dist-tags}}"}'

The code for this container can be found here.
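For illustration, a stripped-down version of that web app could be as small as the following. This is an Express sketch with an invented in-memory catalogue shape; the real container linked above may well differ.

const express = require('express');
const app = express();
app.use(express.json());

// In-memory catalogue, roughly in the shape Node-RED's palette manager expects
let catalogue = {
  name: 'Private catalogue',
  updated_at: new Date().toISOString(),
  modules: []
};

// Return the current version of the catalogue
app.get('/catalogue.json', (req, res) => {
  res.json(catalogue);
});

// Triggered by the Verdaccio notify hook (see the config.yml snippet above)
app.post('/update', (req, res) => {
  const entry = { id: req.body.name, updated_at: new Date().toISOString() };
  catalogue.modules = catalogue.modules.filter(m => m.id !== entry.id);
  catalogue.modules.push(entry);
  catalogue.updated_at = entry.updated_at;
  res.sendStatus(200);
});

app.listen(80);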

Deploying

First clone the project from github

$ git clone --recurse-submodules https://github.com/hardillb/multi-tenant-node-red-k8s.git

Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.

$ ./setup.sh example.com private.example.com:5000

This will update some of the container locations in the deployment files and build the secrets needed to access the Kubernetes API (it reads the content of ~/.kube/config).

With all the configuration files updated the containers need building and pushing to the local container registry.

$ docker build ./manager -t private.example.com:5000/k8s-manager
$ docker push private.example.com:5000/k8s-manager
$ docker build ./catalogue -t private.example.com:5000/catalogue
$ docker push private.example.com:5000/catalogue
$ docker build ./custom-node-red -t private.example.com:5000/custom-node-red
$ docker push private.example.com:5000/custom-node-red

Finally trigger the actual deployment with kubectl

$ kubectl apply -f ./deployment

Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.

A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster’s Ingress IP addresses.

As usual the whole solution can be found on github here.

What’s Next

I need to work out how to set up Avahi CNAME entries for each deployment, as I had working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.

I also need to back port the new catalogue handling to the docker-compose version.

And finally I want to have a look at generating a Helm chart for all this to help get rid of needing the setup.sh script to modify the deployment YAML files.

p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.