Multi Tenant Node-RED

I was recently approached by a company that wanted to sponsor adding Multi Tenant support to Node-RED. This would be to enable multiple users to run independent flows on a single Node-RED instance.

This is really not a good idea, for a number of reasons, but mainly because the Node.js runtime is inherently single threaded and there is no way to get real separation between different users. For example, if one user deploys a poorly written node (or function in a Function node) it is possible to block the event loop, starving out all the other users, and in extreme cases an uncaught asynchronous exception will cause the whole application to exit.
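To make that concrete, a Function node containing nothing more than the following would stall every flow on the instance. This is a deliberately pathological sketch, not something to deploy:

// A busy loop that never yields; because Node.js has a single
// event loop, no other user's flows can run while this spins
while (true) {
  // spin forever
}
return msg;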

The best approach is to run a separate instance of Node-RED per user, which gives you the required separation between users. The usual way to do this is to use a container based system and an HTTP reverse proxy to control access to the instances.

Over the next month's worth of posts I'll outline the components required to build a system like the one I've described, and by the end I should hopefully have a fully working proof of concept that people can use as the base for their own deployments.

As the future posts are published I will add links here.

Required components

Flow Storage

Because container file systems are not persistent, we are going to need somewhere to reliably store the flows each user creates.

Node-RED supplies an API that lets you control how/where flows (and a bunch of other things that would normally end up on disk) are stored.
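To give a feel for the shape of that API, here is a minimal sketch of a storage plugin; the ./db client module is a hypothetical placeholder for whatever database ends up backing the real implementation:

// storage-plugin.js - a minimal sketch of the Node-RED storage API.
// './db' is a hypothetical database client module, not a real package.
const db = require('./db')

module.exports = {
  // Called once at startup with the runtime settings object
  init: function (settings) {
    return db.connect(settings)
  },
  // Load and save the flow definition for this instance
  getFlows: function () {
    return db.get('flows').then(flows => flows || [])
  },
  saveFlows: function (flows) {
    return db.set('flows', flows)
  },
  // Encrypted node credentials are stored separately from the flows
  getCredentials: function () {
    return db.get('credentials').then(creds => creds || {})
  },
  saveCredentials: function (credentials) {
    return db.set('credentials', credentials)
  }
  // The full API also covers settings, sessions and library entries
}

The plugin gets enabled by pointing the storageModule entry in settings.js at the module.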

Authentication

We are going to need a way to only allow the right users access to the Node-RED editor. Again, there is a plugin API that allows this to be wired into nearly any existing authentication source of your choice.

I wrote a simple implementation of this API which uses LDAP as the source of users not long after Node-RED was released. But in this series of posts I'll write a new one that uses the same backend database as the flow storage plugin.
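As a rough idea of the shape of that API, here is a minimal sketch of an adminAuth plugin; lookupUser and checkPassword are hypothetical helpers standing in for the database queries:

// auth-plugin.js - a sketch of the Node-RED adminAuth security API.
// './user-db' is a hypothetical module wrapping the shared database.
const { lookupUser, checkPassword } = require('./user-db')

module.exports = {
  type: 'credentials',
  // Resolve a username to a user object, or null if unknown
  users: function (username) {
    return lookupUser(username)
      .then(user => user ? { username: username, permissions: '*' } : null)
  },
  // Resolve with the user object on a good password, null to reject
  authenticate: function (username, password) {
    return checkPassword(username, password)
      .then(ok => ok ? { username: username, permissions: '*' } : null)
  }
}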

Custom Container

Once we’ve built storage and authentication plugins we will need to build a custom version of the Node-RED Docker container that includes these extras.
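As a sketch, the Dockerfile shouldn't need to be much more than this, with the plugin module names as placeholders:

# Build on the stock Node-RED image and bake in the two plugins
FROM nodered/node-red

COPY storage-plugin/ /plugins/storage-plugin/
COPY auth-plugin/ /plugins/auth-plugin/
RUN npm install /plugins/storage-plugin /plugins/auth-plugin

# Plus a settings.js that wires them in via storageModule and adminAuth
COPY settings.js /data/settings.js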

HTTP Reverse Proxy

Now we have a collection of containers, each running a user's instance of Node-RED, we are going to need a way to expose these to the outside world. Since Node-RED is a web application, an HTTP reverse proxy is probably the right way forward.

Management

Once all of the other pieces are in place, we need a way to control the creation/deletion of Node-RED instances. It will also be useful to see the instances' logs.

Extra bits

Finally I'll cover some extra bits that help make a deployment specific to a particular environment, such as:

Custom node repository

This allows you to host your own private nodes and still install them using the Manage Palette menu.

Custom Theme

Tweaking page titles and colour schemes can make Node-RED fit in better with your existing look and feel.

dns-to-mdns

Having built the nginx-proxy-avahi-helper container to expose proxied instances as mDNS CNAME entries on my LAN, I also wanted a way to allow those containers to resolve other devices that are exposed via mDNS on my LAN.

By default the Docker DNS service does not resolve mDNS hostnames; it either takes the docker host's DNS server settings or defaults to using Google's 8.8.8.8 service.

The containers themselves can't do mDNS resolution unless you set their network mode to host, which isn't really what I want.

You can pass a list of DNS servers to a docker container when it's started with the --dns= command line argument, which means that if I can run a bridge that converts normal DNS requests into mDNS requests on my LAN, I should be able to get the containers to resolve the local devices.
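Pointing a container at the bridge is then a one-liner; the address and image name here are just examples:

 docker run -d --dns=172.17.0.2 my-nodered-instance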

I’ve played with writing DNS proxies before when I was looking at DoH so I had a reasonably good idea where to start.

const dgram = require('dgram')
const dnsPacket = require('dns-packet')
const mdnsResolver = require('mdns-resolver')

const port = process.env['DNS_PORT'] || 53
const server = dgram.createSocket('udp4')

server.on('listening', () => {
  console.log('listening')
})

server.on('message', (msg, remote) => {
  // Decode the incoming DNS query and build a skeleton response
  // that echoes back the question and the query id
  const packet = dnsPacket.decode(msg)
  const response = {
    type: 'response',
    id: packet.id,
    questions: [ packet.questions[0] ],
    answers: []
  }

  // Resolve the name over mDNS and answer with a short (30s) TTL
  mdnsResolver.resolve(packet.questions[0].name, packet.questions[0].type)
  .then(addr => {
    response.answers.push({
      type: packet.questions[0].type,
      class: packet.questions[0].class,
      name: packet.questions[0].name,
      ttl: 30,
      data: addr
    })
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
  .catch(err => {
    // On failure send the empty response back rather than leaving
    // the client to time out
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
})
server.bind(port)

This worked well but I ran into a small problem with the mdns-resolver library which wouldn’t resolve CNAMEs, but a small pull request soon fixed that.

The next spin of the code added support for sending any request that is not for a .local domain to an upstream DNS server to resolve, which means I don't need to add as many DNS servers to each container.
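The split is roughly this — a hedged sketch rather than the exact code, reusing the dgram/dns-packet setup from above; resolveWithMdns stands in for the mDNS path already shown, and the UPSTREAM_DNS variable name is an assumption:

server.on('message', (msg, remote) => {
  const packet = dnsPacket.decode(msg)
  if (packet.questions[0].name.endsWith('.local')) {
    // Local names go down the mDNS path shown earlier
    resolveWithMdns(packet, remote)
  } else {
    // Everything else is relayed byte-for-byte to the upstream
    // server, and the first response is copied back to the client
    const upstream = dgram.createSocket('udp4')
    upstream.on('message', (resp) => {
      server.send(resp, remote.port, remote.address)
      upstream.close()
    })
    upstream.send(msg, 53, process.env['UPSTREAM_DNS'] || '8.8.8.8')
  }
})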

All the code is on github here.

Bonus points

This will also allow Windows machines, which don’t have native mDNS support, to do local name resolution.

nginx-proxy-avahi-helper

I’ve been playing with the jwilder/nginx-proxy docker container recently. It automatically generates reverse proxy entries for other containers.

It does this by monitoring when new containers are started and then inspecting the environment variables attached to the new container. If the container has the VIRTUAL_HOST variable, it uses its value as the virtual host for nginx to proxy access to the container.
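For example, starting a container like this (using the stock nginx image as a stand-in) is all it takes for the proxy entry to appear:

 docker run -d -e VIRTUAL_HOST=foo.example.com nginx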

Normally for this type of setup you would set up a wildcard DNS entry that points to the docker host, so all DNS lookups for a machine in the root domain will return the same IP address.

If all the virtual hosts are in the example.com domain, e.g. foo.example.com and bar.example.com, you would set up the *.example.com DNS record to point to the docker host's IP address.

When working on my home LAN I normally use mDNS to access local machines, so there is nowhere to set up the wildcard DNS entry. Instead I have built a container to add CNAME mDNS entries for the docker host for each of the virtual hosts.

In my case Docker is running on a machine called docker-pi.local and I’m using that as the root domain. e.g. manager.docker-pi.local or registry.docker-pi.local.

The container is using the docker-gen application, which uses templates to generate configuration files based on the running containers. In this case it generates a list of virtual hosts and writes them to a file.

The file is read by a small python application that connects to d-bus to talk to the avahi daemon on the docker host and configures the CNAMEs.

Both Docker and d-bus use unix domain sockets as their transport protocol so you have to mount the sockets from the host into the container.

 docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/tmp/docker.sock --name avahi-cname hardillb/nginx-proxy-avahi-helper

I’ve put the code on github here and the container is on docker hub here.

Back to Building the Linear Clock

A LONG time ago I started to work out how to build a linear clock using a strip of 60 LEDs. It's where all my playing with Pi Zeros and Pi 4s as USB devices started.

I’ve had a version running on the desk with jumper wires hooking everything up, but I’ve always wanted to try and do something a bit neater. So I’ve been looking at building a custom Pi pHAT to link everything together.

The design should be pretty simple:

  • Break out pin 18 on the Pi to drive the ws2812b LEDs.
  • Supply 5v to the LED strip.
  • Include an I2C RTC module so the Pi will keep accurate time, especially when using a Pi Zero (not a Pi Zero W) with no network connectivity.

I know that the Pi can supply enough power from the 5v pin in the 40 pin header to drive the number of LEDs that the clock should have lit at any one time.

Technically the data input for the ws2812b should also be 5v, but I know that, at least with the strip I have, it will work with the 3.3v signal supplied by GPIO pin 18.

I had started to work on this with Eagle back before it got taken over by Autodesk. While there is still a free version for hobbyists, I thought I'd try something different this time round and found LibrePCB.

Designing the Board

All PCB design software looks to work in a similar way: you start by laying out the circuit logically before working out how to lay it out physically.

The component list is:

  • The Raspberry Pi 40 pin GPIO header
  • An M41T81 RTC
  • A Keystone 500 coin cell holder
  • A terminal block for the LED strip connections

The RTC is going to need a battery to keep it up to date if the power goes out, so I'm using the Keystone 500 holder, which is for a 12mm coin cell. These are a lot smaller than the more common 20mm (CR2032) coin cells, so should take up a lot less space on the board.

I also checked that the M41T81 has an RTC Linux kernel driver, and it looks to be included in the rtc-m41t80 module, so it should work.
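Even without a device tree overlay, the generic I2C instantiation mechanism should be enough to bind the driver at runtime. A sketch, assuming the chip sits at the usual 0x68 address:

# "m41t81" matches an entry in the rtc-m41t80 driver's device table,
# 0x68 is the chip's 7-bit I2C address on bus 1
echo m41t81 0x68 | sudo tee /sys/class/i2c-adapter/i2c-1/new_device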

Finally, I'm using the terminal block because I don't seem to be able to find a suitable board mountable version of the JST SM connector that comes on most of the LED strips I've seen. The Wikipedia entry implies that the standard is only used for wire to wire connections.

Circuit layout

LibrePCB has a number of component libraries that include one for the Raspberry Pi 40 pin header.

Physical and logical diagram of RTC component

But I had to create a local library for some of the other parts, especially the RTC and the battery holders. I will look at how to contribute these parts to the library once I’ve got the whole thing up and running properly.

Block circuit diagram

The Pi has built in pull up resistors on the I2C bus pins so I shouldn’t need to add any.

Board layout

Now I had all the components linked up, it was time to lay out the actual board.

View of PCB layout

The board dimensions are 65mm x 30mm, which matches the Pi Zero, with the 40 pin header across the top edge.

The arrangement of the pins on the RTC means that no matter which way round I mount it, I always end up with 2 tracks having to cross, which means I have one via to the underside of the board. Everything else fits on one layer. I've stuck the terminal block on the left hand edge so the strip can run straight out from there, and the power connector for the Pi can come in on the bottom edge.

Possible Improvements

  • An I2C identifier IC – The Raspberry Pi HAT spec includes the option to use the second I2C bus on the Pi to hold the device tree information for the device.
  • A power supply connector – Since the LED strip can draw a lot of power when all are lit, it can make sense to add either a separate power supply for just the LEDs or a bigger power supply that can power the Pi via the 5v GPIO pins.
  • A 3.3v to 5v Level shifter – Because not all the LED strips will work with 3.3v GPIO input.
  • A light level sensor – so I can adjust the brightness of the LED strip based on the room brightness.

Next Steps

The board design files are all up on GitHub here.

I now need to send off the design to get some boards made, order the components and try and put one together.

Getting boards made is getting cheaper and cheaper; an initial test order of 5 boards from JLC PCB came in at £1.53 + £3.90 shipping (this looks to include a 50% first order discount). It has a 12 day shipping time, but I'm not in a rush, so this just feels incredibly cheap.

Soldering the surface mount RTC is going to challenge my skills a bit but I’m up for giving it a go. I think I might need to buy some solder paste and a magnifier.

I’ll do another post when the parts all come in and I’ve put one together.

Deploying my Homepage with Github Actions

While playing with the log analyser I mentioned in my post about building fail2ban rules, I accidentally overwrote the index.html file for my homepage.

After a bit of digging around and not being able to find a copy of it anywhere (it was probably on my old work laptop, so might be on the backup USB drive in the bottom drawer…) I started to write a new one. To make sure I could always find a copy, I decided I'd check it into GitHub.

As part of setting up the new repo I thought I'd have a look at the relatively new GitHub Actions self-hosted runners that allow you to run jobs on your own hardware.

Actions are GitHub's implementation of a CI pipeline; you can attach an action to any number of events that happen on a repo, e.g.

  • Post a welcome message when somebody new opens an issue
  • Run tests on any new pull requests
  • Deploy the project when code is merged into a specific branch

You can find a full list of triggers here.

It's this last one that I'm going to use as a trigger to deploy the new homepage whenever I commit to the master branch.

There is a list of pre-built Actions that can be used or extended to do all kinds of things, or you can build your own. The documentation is here.

Actions are defined in a YAML file that you place in the .github/workflows directory of your repository.

name: PublishHomepage

on: [push]

jobs:
  deploy:
    name: Deploy Homepage
    runs-on: [self-hosted, linux]
    steps:
    - name: Checkout
      uses: actions/checkout@v2
      with:
        repository: 'hardillb/homepage'
        path: 'homepage'
    - run: bash ./homepage/scripts/deploy.sh

This attaches to the push trigger and asks for it to be run on a self-hosted Linux machine, where it will check out the hardillb/homepage repository and then execute the deploy.sh script found in the scripts directory of the repo.
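The deploy script itself can stay trivial; something like this sketch, where the web root is an assumed path rather than my actual one:

#!/bin/bash
# Copy the freshly checked-out homepage into the web root
# (/var/www/html is a placeholder for the real document root)
cp homepage/index.html /var/www/html/index.html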

GitHub supplies runners for Windows/Linux/macOS on a range of architectures, including ARM, which means I can deploy one on the Raspberry Pi that hosts my website. Details of how to add an Action Runner to your specific project and download the package can be found here.
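Once the runner package is unpacked on the Pi, registering and starting it is a couple of commands (the token comes from the repo's Actions settings page):

./config.sh --url https://github.com/hardillb/homepage --token <TOKEN>
./run.sh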

I went with running my own Action Runner on the same machine as the site is hosted because it means I don't need to worry about setting up a custom ssh key and storing it as a secret so a cloud runner could access the device and upload the changed files. This way the script runs on the same machine, and I can make sure the user has access rights to the right directory to copy the files into.

Another LoRa Temperature/Humidity Sensor

Having deployed my Adafruit Feather version of a Temperature/Humidity LoRa sensor to make use of my The Things Network gateway, it's now time to build another sensor, this time with a Raspberry Pi Zero.

Pi Supply LoRa node pHat with sensor

I have a Pi Supply LoRa node pHAT that adds the radio side of things, but I needed a sensor. The pHAT has a header on the right hand edge which breaks out the 3.3v, GPIO 2, 3, 4 and Gnd pins from the Pi.

These just happen to line up with the I2C bus. They are also broken out in the same order as used by the Pimoroni Breakout Garden series of I2C sensors which is really useful.

Pimoroni's Breakout Garden offers a huge range of sensors and output devices in either I2C or SPI. In this case I grabbed the BME680, which offers temperature, pressure, humidity and air quality in a single package.

Pimoroni supply a Python library to read from the sensor, so it was pretty easy to combine this with the Python library that drives the LoRa pHAT:

#!/usr/bin/env python3
from rak811 import Mode, Rak811
import bme680
import time

# Set up the RAK811 radio and join TTN via OTAA
lora = Rak811()
lora.hard_reset()
lora.mode = Mode.LoRaWan
lora.band = 'EU868'
lora.set_config(app_eui='xxxxxxxxxxxxx',
                app_key='xxxxxxxxxxxxxxxxxxxxxx')
lora.join_otaa()
lora.dr = 5

# Set up the BME680 with some oversampling to steady the readings
sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)
sensor.set_humidity_oversample(bme680.OS_2X)
sensor.set_temperature_oversample(bme680.OS_8X)
sensor.set_filter(bme680.FILTER_SIZE_3)

try:
    while True:
        if sensor.get_sensor_data():
            # Scale to 2 decimal places and pack as two 16 bit hex values
            temp = int(sensor.data.temperature * 100)
            humidity = int(sensor.data.humidity * 100)
            payload = "{0:04x}{1:04x}".format(temp, humidity)
            lora.send(bytes.fromhex(payload))
        time.sleep(20)
finally:
    # Make sure the radio is shut down cleanly on exit
    lora.close()

I’m only using the temperature and humidity values at the moment to help keep the payload compatible with the feather version so I can use a single “port” in the TTN app.
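For completeness, the matching payload decoder on the TTN (v2 console) side is just a few lines of JavaScript; a sketch of what those 4 bytes imply:

// Unpack two big-endian uint16 values that were scaled by 100 on the device
function Decoder (bytes, port) {
  return {
    temperature: ((bytes[0] << 8) | bytes[1]) / 100,
    humidity: ((bytes[2] << 8) | bytes[3]) / 100
  }
}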

I’ve put all the code for both this and the Feather version on GitHub here.

As well as the Pi Zero version I’ve upgraded the Feather with a 1200mAh LiPo battery and updated the code to also transmit the battery voltage so I can track battery life and work out how often I need to charge it.

Adding the battery also meant I could stick it in my bag and go for a walk round the village to see what sort of range I’m getting.

I'm currently using a little coil antenna on the gateway that is laid on its side; I'll find something to prop it up with, which should help. The Feather is also only using a length of wire as an antenna, so this could be seen as a worst case test.

It's more than good enough for what I need right now, but it would be good to put a proper high gain antenna in, as then it might be useful to more people. It would be great if there were a way to see how many unique TTN applications had forwarded data through a gateway, to see if anybody else has used it.

LoRa Temperature/Humidity Sensor

After deploying the TTN Gateway in my loft last year, I’ve finally got round to deploying a sensor to make use of it.

As the great lock down of 2020 kicked off, I stuck in a quick order with Pimoroni for an Adafruit Feather 32u4 with an onboard LoRa radio (in hindsight I probably should have got the M0 based board).

Adafruit Feather LoRa Module

I already had a DHT22 temperature & humidity sensor and the required resistor, so it was just a case of soldering on an antenna of the right length and adding the headers so I could hook up the 3 wires needed to talk to the DHT22.

There is an example app included with the TinyLoRa library that reads from the sensor and sends a message to TTN every 30 seconds.

(There are full instructions on how to set everything up on the Adafruit site here)

Now the messages were being published to the TTN application, I needed a way to consume them. TTN have a set of Node-RED nodes which make this pretty simple, though installing them did lead to a weekend adventure of having to upgrade my entire home automation system. The nodes wouldn't build/install on Node-RED v0.16.0, NodeJS v6 and Raspbian Jessie, so it was time to upgrade to Node-RED v1.0.4, NodeJS v12 and Raspbian Buster.

Node-RED flow consuming messages from Temp/Humidity sensor from TTN

This flow receives messages via MQTT direct from TTN, it then separates out the Temperature and Humidity values, runs them through smooth nodes to generate a rolling average of the last 20 values. This is needed because the DHT22 sensor is pretty noisy.

Plots of Temperature and Humidity

It outputs these smoothed values to 2 charts on a Node-RED Dashboard, which show the trend over the last 12 hours.

It also outputs to 2 MQTT topics that are mapped to my public facing broker, which means they can be used to resurrect the temperature chart on my home page. And when I get an outdoor version of the sensor (hopefully solar/battery powered) up and running, it will provide both lines that my old weather station used to produce.

The last thing the flow does is to update the temperature reading for my virtual thermostat in the Google Assistant HomeGraph. This uses the RBE node to make sure that it only sends updates when the value actually changes.

The next step is to build a couple more and find a suitable battery and enclosure so I can stick one in the back garden.

Gophering around

I've been listening to a podcast called "Brad & Will Made a Tech Pod". It's a fun geeky hour that comes out every Sunday.

In their last episode (44-alt-barney-dinosaur-die-die-die) they went back to their first contact with the Internet, and this led to a discussion of "The Gopher Space", which is basically what the Internet was before "The Web" took over.

In my case my first trip online was a dial up educational BBS on a BBC Micro while at junior school. I dread to think how big the school phone bill must have been, and how often I cut the school secretary off mid call by flicking the switch that hooked the line up to the acoustic coupler modem.

By the time I got to secondary school we had a 486 PC at home, and I remember convincing my parents to let me buy a modem and posting 12 forward dated £12.00 cheques to Demon Internet for a year's worth of dial up access. At this point the Mosaic graphical web browser had just turned up, so I missed out on digging too deep into the Gopher Space.

But all the talk got me wondering how hard it would be to set up a gopher server to have a play with. I already run my own webserver to host this blog so why not see if I can host my posts on gopher as well.

A little bit of digging found pygopherd, which comes ready packaged on Raspbian/Raspberry Pi OS and just needs the servername setting in /etc/pygopherd/pygopherd.conf and an entry in the port forwarding rules on the router to get up and running.
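The change amounts to a single line in the config file (plus the usual service restart):

# /etc/pygopherd/pygopherd.conf
servername = gopher.hardill.me.uk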

I can now use the command gopher gopher.hardill.me.uk to start exploring. This is a terminal based tool which gives an authentic experience, but there are plugins for Firefox and Chrome if you want to play on the desktop.

A gopher site is sort of like a very stripped back webpage; it's text only with no real formatting, but it can contain links to other pages and to binary files that can be downloaded but not viewed inline. For example, here is my "homepage", which lives in a file called gophermap (a bit like index.html) in /var/gopher:

Ben's Place - Gopher

Just a place to make notes about things I've 
been playing with

1Blog	/blog
1Brad & Will Made a Tech Pod	/podcast
A screen shot of the gophermap file listed above rendered by the gopher command line application

The lines starting with a 1 point to another gophermap, with titles Blog and Brad & Will Made a Tech Pod, in /var/gopher/blog and /var/gopher/podcast respectively. If a line had started with a 0 it would have pointed to a text file. A full list of the prefixes can be found here.
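So, hypothetically, lines for a plain text file (prefix 0) and a binary download (prefix 9) would look something like this, with the title and selector separated by a tab (the paths are placeholders):

0About this server	/about.txt
9Latest episode	/podcast/episode.mp3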

The plan is to generate text versions of all the posts on the blog. I'll probably write a script that takes the ATOM feed and does the conversion, but in the meantime there was a small challenge thrown down in the podcast: to host all the podcast episodes over gopher.

I was also looking for an excuse to start doing some playing in Go, which just seemed apt.

A quick search found a library to do the RSS download and parsing so the whole thing only took about 50 lines. You can find the code on github here.

You run the code as follows:

./gopherPodcast http://some.podcast.com/rss > gophermap

I’ll be running this every Sunday morning with a crontab entry to grab the latest episode and add it to the list.
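Something like this (6am Sunday; the paths are placeholders for wherever the binary and gophermap actually live):

0 6 * * 0 /home/pi/gopherPodcast http://some.podcast.com/rss > /var/gopher/podcast/gophermap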

Creating Fail2Ban rules

I recently installed some log charting software to get a view of the NGINX logs for the instance that serves up this site. While looking at some of the output I found some strange results, so went to have a closer look at the actual logs.

Chart showing hits/visitor to site

There were a whole bunch of entries with a similar pattern, something like the following:

94.130.58.174 - - [15/Jul/2020:11:43:39 +0100] "CONNECT rate-market.ru:443 HTTP/1.1" 400 173 "-" "-"

CONNECT is an HTTP verb that is used by HTTP proxy clients to tell the proxy server to set up a tunnel to the requested machine. This is needed because if the client is trying to connect to an HTTP server that uses SSL/TLS, then it needs to be able to do the SSL/TLS handshake directly with that server to ensure it has the correct certificates.

Since my instance of NGINX is not configured to be an HTTP proxy server, it is rightly rejecting these requests, as can be seen by the 400 (Bad Request) after the request string. Even though the clients making these requests are all getting the 400 error, they continue to send the same request over and over again.

I wanted to filter these bogus requests out for 2 reasons:

  • They clog the logs up, making it harder to see real problems
  • They take up resources to process

The solution is to use a tool called fail2ban, which watches log files for signs of abuse and then sets up the correct firewall rules to prevent the remote clients from being able to connect.

I already run fail2ban monitoring the SSH logs for all my machines that face the internet, as there are plenty of scripts out there trying standard username/password combinations against any machine listening on port 22.

Fail2ban comes with a bunch of log filters for common applications, but the ones bundled on Raspbian (Raspberry Pi OS) at this time didn't include one that would match this particular pattern. Writing filters is not that hard, but it does require writing regular expressions, which is a bit of a dark art. The trick is to get a regex that only matches what we want.

The one I came up with is this:

^<HOST> - - \[.*\] \"CONNECT .* HTTP/1\.[0-1]\" 400

Let's break that down into its specific parts:

  • ^ this matches the start of the log line
  • <HOST> fail2ban has some special tags that match an IP address or Host name
  • - - this matches the 2 dashes in the log
  • \[.*\] this matches the date/timestamp in the square brackets, I could have matched the date more precisely, but we don’t need to in this case. We have to escape the [ as it has meaning in regex as we will see later.
  • \"CONNECT this matches the start of the request string and limits it to just CONNECT requests
  • .* this matches the host:port combination that is being requested
  • HTTP/1\.[0-1]\" here we match both HTTP/1.0 and HTTP/1.1. We need to escape the . and the square brackets let us specify a range of values
  • 400 and finally the 400 error code

Now we have the regex we can test it with the fail2ban-regex command to make sure it finds what we expect.

# fail2ban-regex /var/log/nginx/access.log '^<HOST> - - \[.*\] \"CONNECT .* HTTP/1\.[0-1]\" 400' 

Running tests
=============

Use   failregex line : ^<HOST> - - \[.*\] \"CONNECT .* HTTP/1\.[0-1]\" 400
Use         log file : /var/log/nginx/access.log
Use         encoding : UTF-8


Results
=======

Failregex: 12819 total
|-  #) [# of hits] regular expression
|   1) [12819] ^<HOST> - - \[.*\] \"CONNECT .* HTTP/1\.[0-1]\" 400
`-

Ignoreregex: 0 total

Date template hits:
|- [# of hits] date format
|  [526165] Day(?P<_sep>[-/])MON(?P=_sep)ExYear[ :]?24hour:Minute:Second(?:\.Microseconds)?(?: Zone offset)?
`-

Lines: 526165 lines, 0 ignored, 12819 matched, 513346 missed
[processed in 97.73 sec]

Missed line(s): too many to print.  Use --print-all-missed to print all 513346 lines

We can see that the regex matched 12819 lines out of 526165 which is in the right ball park. Now we have the right regex we can build the fail2ban config files.

First up is the filter file that will live in /etc/fail2ban/filter.d/nginx-connect.conf

# Fail2Ban filter to match HTTP Proxy attempts
#

[INCLUDES]

[Definition]

failregex = ^<HOST> - - \[.*\] \"CONNECT .* HTTP/1\.[0-1]\" 400

ignoreregex = 

Next is the jail file in /etc/fail2ban/jail.d/nginx-connect.conf; this tells fail2ban what to do if it finds log lines that match the filter, and where to find the logs.

[nginx-connect]
port	= http,https
enabled	= true
logpath = %(nginx_access_log)s
action	= %(action_mwl)s

The port entry lists which ports to block in the firewall; enabled & logpath should be obvious (it's using a pre-configured macro for the log path). And finally the action: there are a number of pre-configured actions that range from just logging the fact that there were hits in the log, all the way through to this one, action_mwl, which adds a firewall rule to block the <HOST>, then emails me information about the IP address and a selection of the matching log lines.

I'll probably reduce the action to one that just adds the firewall rule after a day or two, when I have a feel for how much of this sort of traffic I'm getting. But the info about each IP address normally includes a lookup of the owner, and that usually has an email address to report problems to. It's not often that reporting things actually stops them happening, but it's worth giving it a go at the start to see what happens.

Over the course of the afternoon since it was deployed the filter has blocked 148 IP addresses.

My default rules are a 12 hour ban if the same IP address triggers more than 3 times in 10 mins, with a follow up of a 2 week ban if you trip that more than 3 times in 3 days.
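For reference, that policy maps onto fail2ban settings roughly like this — a sketch rather than my exact config, using the standard recidive jail for the repeat-offender escalation (recent fail2ban versions accept the m/h/d/w time suffixes; older ones want seconds):

[DEFAULT]
bantime  = 12h
findtime = 10m
maxretry = 3

# recidive watches fail2ban's own log and hands repeat
# offenders the longer ban
[recidive]
enabled  = true
bantime  = 2w
findtime = 3d
maxretry = 3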

Next up is to create a filter to match the GET https://www.instagram.com requests that are also polluting the logs.

64bit Raspberry Pi OS build with USB-C Ethernet enabled

As a follow up to the 32bit version I built, using the script I described in my post about building custom SD card images, I have created a 64bit version based on the Raspberry Pi OS beta.

You can download the 64bit version from here. This is a full Raspberry Pi OS image, not a “lite” version as only full versions are available in beta.

This is a manual build (following these instructions) because at the moment the script won't work with the 64bit version: DockerPi doesn't support USB devices when emulating 64bit CPUs, and USB is needed to attach the emulated network adapter. There is a set of patches for QEMU pending that should enable this.

As soon as the patches become generally available I’ll update the project on github.