As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.
The next step is to enable access to these feeds via the Google Assistant. To do this I'm extending my Node-RED Google Assistant Service.
You should now be able to add a device with the type Camera and a CameraStream trait. You can then say to the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”.
This will create an input message in Node-RED that looks like:
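The exact wrapping depends on the version of the service, but the interesting part of the payload looks something like this (trimmed):

{
    "command": "action.devices.commands.GetCameraStream",
    "params": {
        "StreamToChromecast": true,
        "SupportedStreamProtocols": [
            "progressive_mp4",
            "hls",
            "dash",
            "smooth_stream"
        ]
    }
}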
The important part is the SupportedStreamProtocols entry, which lists the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.
Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.
The function node updates msg.payload.params with the required details. In this case it needs to include the cameraStreamAccessUrl, which points to the video stream, and the cameraStreamProtocol, which identifies which of the requested protocols the stream uses.
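A minimal version of that function node looks something like this (the URL is just an example local HLS stream, not a real endpoint):

// add the stream details the Google Assistant expects in the response
msg.payload.params.cameraStreamAccessUrl = "http://192.168.1.96:8080/hls/stream.m3u8"; // example stream URL
msg.payload.params.cameraStreamProtocol = "hls"; // must be one of the SupportedStreamProtocols
return msg;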
This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras you will want to make sure they are secured, so they aren't open to the world and indexed by an IoT search engine like Shodan.
A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.
Background
The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json.
The content of the file will look a little like this:
{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}
This isn't much use on its own, as the contents are encrypted to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.
The secret that is used to encrypt/decrypt this file can be found in one of two locations:
In the settings.js file in the credentialsSecret field. The user can set this if they want to use a fixed known value.
In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is automatically generated if the user has not specifically set one in the settings.js file.
Code
In order to make use of this secret to decrypt the credentials, I put together the following short Node.js script:
const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
    var flows = cipher["$"];
    // the first 32 hex characters are the initialisation vector
    var initVector = Buffer.from(flows.substring(0, 32), 'hex');
    flows = flows.substring(32);
    var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
    var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
    return JSON.parse(decrypted);
}

// usage: node show-creds.js <credentials file> <secret>
var creds = require("./" + process.argv[2]);
var secret = process.argv[3];

// the encryption key is the SHA-256 hash of the secret
var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds));
If you place this in a file called show-creds.js in the Node-RED userDir you can run it as follows:
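(using the example flow file from earlier)

$ node show-creds.js flows_localhost_cred.json [secret]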
where [secret] is the value stored in credentialsSecret or _credentialSecret from earlier. This will then print out the decrypted JSON object holding all the passwords/tokens from the file.
The idea was to run the CA on a Pi that can only be accessed when it's plugged in via a USB cable to another machine. This means that the CA cert and private key are normally offline and only potentially accessible to an attacker while the Pi is plugged in.
For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.
TPM
A TPM, or Trusted Platform Module, is a dedicated CPU paired with some dedicated NVRAM. The CPU is capable of doing some pretty basic crypto functions and provides a good random number generator, while the NVRAM is used to store private keys.
TPMs also have a feature called PCRs (Platform Configuration Registers) which can be used to validate the hardware and software stack used to boot the machine. This means you can use them to detect if the system has been tampered with at any point. This does require integration into the bootloader for the system.
You can set access policies for keys protected by the TPM to allow access only if the PCRs match a known pattern; some disk encryption systems like LUKS on Linux and BitLocker on Windows [1] can use this to automatically unlock the encrypted drive.
You can get a TPM for the Raspberry Pi from a group called LetsTrust (that is available online here).
It mounts on to the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.
dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670
Since the Raspberry Pi Bootloader is not TPM aware the PCRs are not initialised in this situation, so we can’t use it to automatically unlock an encrypted volume.
Using the TPM with the CA
Even without the PCRs the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes the private key useless if anybody does manage to remotely log into the device and make a copy.
Of course, since it just pushes on to the Pi header, if anybody manages to get physical access they can just take the TPM and SD card, but as with all security mechanisms, once an attacker has physical access all bets are usually off.
There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, along with a utility called tpm2tss-genkey that can be used to create new keys; an existing key can also be imported.
Generating New Keys
You can generate a new TPM-protected key and CA certificate with commands along the following lines.
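(The key size and certificate lifetime here are just example values.)

$ tpm2tss-genkey -a rsa -s 2048 ca.tpm.key
$ openssl req -new -x509 -engine tpm2tss -keyform engine -key ca.tpm.key -out ca.crt -days 3650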
Once the keys have been imported it's important to remember to clean up the original key file (ca.key) so an attacker can't just use it instead of the one protected by the TPM. An attacker now needs both the password for the key and the TPM device that was used to cloak it.
Web Interface
At the moment the node-openssl-cert node that I'm using to drive the web interface to the CA doesn't look to support passing in engine arguments, so I'm having to drive it all manually on the command line, but I'll be looking at a way to add support to the library and will try to generate a pull request when I get something working.
[1] Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS 13 also has one (it was an optional extra on the first version and not included in the Sputnik edition).
Over the last few years I've had a number of people approach me to help them build things with Node-RED, but each time it hasn't generally been possible to get as involved as I would have liked due to my day job.
Interest started to heat up a bit after I posted my series of posts about building Multi Tenant Node-RED systems and some of them sounded really interesting. So I have decided to start doing some contract work on a couple of them.
The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.
At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event driven systems.
As well as building Multi-Tenant Node-RED environments I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins, some examples include:
If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, need some custom nodes creating or just want to talk about Node-RED you can check out my CV here and please feel free to drop me a line on tech@hardill.me.uk.
Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here so keep an eye out for what I’m working on.
Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.
I started with installing the 64-bit build of Ubuntu Server on a fresh Pi4 with 8GB of RAM and then using snapd to install microk8s. I had initially wanted to use the 64-bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.
Once installed I enabled the dns and ingress plugins; this got me a minimal viable single-node Kubernetes setup working.
I also had to stand up a private Docker registry to hold the containers I'll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also means adding the URL for this registry to microk8s as described here.
Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.
Architecture
As before, the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I've deployed the database and web app as a single pod and exposed them both as services.
This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.
It also binds two sets of secrets: the first holds the authentication details to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app; the second holds the config for the Verdaccio NPM registry.
I'm using the HostPath volume provider to store the MongoDB data and the Verdaccio registry on the filesystem of the Pi, but for a production deployment I'd probably use the NFS provider or a cloud storage option like AWS S3.
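As a rough sketch of the shape of that Deployment (the image names, secret names, ports and host paths here are placeholders rather than the exact values from the repo):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      containers:
        - name: manager
          image: private.example.com:5000/manager
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: manager-secrets
              mountPath: /usr/src/app/config
              readOnly: true
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
        - name: registry
          image: verdaccio/verdaccio
          ports:
            - containerPort: 4873
          volumeMounts:
            - name: registry-secrets
              mountPath: /verdaccio/conf
              readOnly: true
            - name: registry-storage
              mountPath: /verdaccio/storage
      volumes:
        - name: manager-secrets
          secret:
            secretName: manager-secrets    # kube config + settings.js
        - name: registry-secrets
          secret:
            secretName: registry-secrets   # Verdaccio config.yaml
        - name: mongo-data
          hostPath:
            path: /opt/mongodb
            type: DirectoryOrCreate
        - name: registry-storage
          hostPath:
            path: /opt/registry
            type: DirectoryOrCreate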
The management app drives all of this through a Kubernetes client library, which exposes the full Kubernetes API and allows the creation/modification/destruction of all the entities needed.
Standing up a new instance is a little more complicated as it's now a multi-step process (sketched in code below the list):
Create a Pod with the custom-node-red container
Create a Service based on that pod
Expose that service via the Ingress addon
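A rough sketch of those three steps, assuming the @kubernetes/client-node package (which may not be the library the manager actually uses, and whose method signatures vary between versions):

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const core = kc.makeApiClient(k8s.CoreV1Api);
const net = kc.makeApiClient(k8s.NetworkingV1Api);

// names, namespace and image location are placeholders
async function createInstance(name) {
    // 1. Pod running the custom Node-RED container
    await core.createNamespacedPod('default', {
        metadata: { name: name, labels: { app: name } },
        spec: {
            containers: [{
                name: name,
                image: 'private.example.com:5000/custom-node-red',
                ports: [{ containerPort: 1880 }],
                env: [{ name: 'APP_NAME', value: name }]
            }]
        }
    });

    // 2. Service pointing at that pod
    await core.createNamespacedService('default', {
        metadata: { name: name },
        spec: {
            selector: { app: name },
            ports: [{ port: 1880, targetPort: 1880 }]
        }
    });

    // 3. Ingress exposing the service as <name>.example.com
    await net.createNamespacedIngress('default', {
        metadata: { name: name },
        spec: {
            rules: [{
                host: name + '.example.com',
                http: {
                    paths: [{
                        path: '/',
                        pathType: 'Prefix',
                        backend: { service: { name: name, port: { number: 1880 } } }
                    }]
                }
            }]
        }
    });
}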
I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.
All the code for this version of the app is on github here.
Catalogue
In the Docker-Compose version the custom node catalogue.json file was hosted by the management application and had to be manually updated each time a new or updated node was pushed to the repository. For this version I've stood up a separate container.
This container runs a small web app with two endpoints (sketched below).
/catalogue.json – which returns the current version of the catalogue
/update – which is triggered by the notify function of the Verdaccio private npm registry
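A minimal sketch of that app (the catalogue fields and port are placeholders, not the exact code from the repo):

const express = require('express');
const app = express();
app.use(express.json());

// in-memory copy of the catalogue, rebuilt whenever the registry notifies us
let catalogue = { name: 'Private catalogue', updated_at: new Date().toISOString(), modules: [] };

// returns the current version of the catalogue
app.get('/catalogue.json', (req, res) => {
    res.json(catalogue);
});

// called by the Verdaccio notify hook when a package is published
app.post('/update', (req, res) => {
    // query the registry and rebuild the modules list here
    catalogue.updated_at = new Date().toISOString();
    res.sendStatus(200);
});

app.listen(80);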
The registry has a notify snippet added to the end of its config.yml to call that update endpoint whenever a package is published.
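Verdaccio's notify hook takes a method, endpoint and body template; assuming the catalogue app is reachable inside the cluster as catalogue, the snippet looks something like this:

notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue/update
  content: '{"name": "{{ name }}"}'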
Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.
$ ./setup.sh example.com private.example.com:5000
This will update some of the container locations in the deployment and build the secrets needed to access the Kubernetes API (it reads the content of ~/.kube/config).
With all the configuration files updated the containers need building and pushing to the local container registry.
Finally trigger the actual deployment with kubectl
$ kubectl apply -f ./deployment
Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.
A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster's Ingress IP address.
As usual the whole solution can be found on github here.
What’s Next
I need to work out how to set up Avahi CNAME entries for each deployment, as I had working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.
I also need to back port the new catalogue handling to the docker-compose version.
And finally I want to have a look at generating a Helm chart for all this to help get rid of needing the setup.sh script to modify the deployment YAML files.
p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.
I’ve now completed all the parts I outlined in the first post in this series and it’s time to put it all together to end up with a running system.
Since the stack is already running on Docker, using docker-compose to assemble the orchestration seemed like the right thing to do. The whole docker-compose project can be found on GitHub here.
Once you’ve checked out the project run the setup.sh script with the root domain as the first argument. This will do the following:
Create and set the correct permissions on the required local directories that are mounted as volumes.
Build the custom-node-red docker container that will be used for each of the Node-RED instances.
Change the root domain for all the virtual hosts to match the value passed in as an argument or if left blank the current hostname with .local appended.
Create the Docker network that all the images will be attached to.
The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.
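A trimmed sketch of that file (image names, volume paths and the network name are placeholders; the full version is in the GitHub project):

version: "3"

services:
  nginx-proxy:
    image: custom-nginx-proxy            # rebuilt for ARM64, see the earlier post
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./mongodb:/data/db

  registry:
    image: verdaccio/verdaccio
    environment:
      - VIRTUAL_HOST=registry.example.com
      - VIRTUAL_PORT=4873
    ports:
      - "4873:4873"
    volumes:
      - ./registry/conf:/verdaccio/conf
      - ./registry/storage:/verdaccio/storage

  manager:
    build: ./manager
    environment:
      - VIRTUAL_HOST=manager.example.com
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

networks:
  default:
    external:
      name: internal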
It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage so that this data is persisted across restarts.
The ports for direct access to the MongoDB and the registry are currently exposed on the docker host. In a production environment it would probably make sense to not expose the MongoDB instance as it is currently unsecured, but it’s useful while I’ve been developing/debugging the system.
The port for the private NPM registry is also exposed to allow packages to be published from outside the system.
And finally it binds everything to the network that was created by the setup.sh script, which is used for all the containers to communicate. This also means that the containers can address each other by their names, as Docker runs a DNS resolver on the custom network.
The management application is also directly exposed as manager.example.com; you should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.
Conclusions
This is about as bare bones as a Multi Tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup-based memory and CPU limits), but it should be enough for a classroom deployment, allowing each student to have their own instance.
Next steps would be to look at what more the management app could do, e.g. expose the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.
If you build something based on this please let both me and the Node-RED community know.
And if you are looking for somebody to help you build on this please get in touch.
Across the last 6 posts I’ve walked through deploying a Multi Tenant Node-RED service. In this post I’m going to talk about how you go about customising the install to make it more specific to your needs.
Custom Node Catalogue
As I mentioned in the post about creating the custom docker container, you can use the nodeExcludes entry in settings.js to disable nodes, and if needed the package.json can be edited to remove any core nodes that you want.
But you might have a collection of nodes that are specific to your deployment that are not of use outside your environment and you may not want to publish to npmjs.org. In extreme cases you may even want to remove all the core nodes and only allow users to use your own set of nodes.
Node-RED downloads the list of nodes available to install from catalogue.nodered.org and allows you to add either additional URLs to pull in a list of extra nodes or replace the URL with a custom list. The documentation for this can be found here. The list is kept in the settings.js under the editorTheme entry.
There is a small script called build-catalogue.js in the manager app that will generate a catalogue.json file from a given npm repository.
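A hypothetical invocation (check the script itself for its actual arguments) might look like:

$ node build-catalogue.js http://registry:4873 > catalogue.json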
Now that there is a custom list of nodes, we need to be able to load them with the npm command. There are a few options:
Publish the nodes to npmjs.org but without the node-red keyword, so they don't end up being indexed on flows.nodered.org (and don't manually submit them, as you now need to do)
Publish the node to a private npm repository that also acts as a pass through proxy to npmjs.org
Publish the node to a private npm repository with a scope and configure npm to use a different repository for a given scope.
The first option doesn't need anything special setting up; you just add the nodes you want to the catalogue. The other two options need a private npm package repository. For the second it needs to act as a pass-through proxy so all the dependencies can also be loaded, which is what makes the third option probably the best.
I've been playing with running verdaccio on the same docker infrastructure as everything else and set npm to map the @private scope to http://registry:4873 in the custom docker container.
...
RUN npm config set @private:registry http://registry:4873
Verdaccio can also proxy to npmjs.org if needed. (Like the nginx-proxy container, I had to rebuild the verdaccio container to get it to run on my Pi4, since it only ships an AMD64 version.)
I’m hosting my catalogue.json from the same Express application as is used to provision new Node-RED instances.
Alternatively, if you want to prevent users from being able to install/remove nodes at all, you can add:
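The editorTheme palette setting covers this; something like the following in settings.js hides the Manage Palette option:

editorTheme: {
    palette: {
        editable: false
    }
},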
The last thing on my list is to give the Node-RED instances a custom look and feel.
The basics like the page title, header image, favicon and logon screen graphic can all be set directly from the settings.js, with the option to also link to a custom CSS style sheet so the colour scheme and shape/size of elements can be changed as well. The design document can be found here.
editorTheme: {
    projects: {
        // To enable the Projects feature, set this value to true
        enabled: false
    },
    page: {
        title: "Ben-RED"
    },
    header: {
        title: "Ben-RED"
    },
    palette: {
        catalogues: [
            'https://catalogue.nodered.org/catalogue.json',
            'http://manager.example.com/catalogue.json'
        ]
    }
},
Over the last series of posts I’ve outlined how to build all the components that are needed for a Multi Tenant Node-RED service. What’s missing is a way to automate the spinning up of new instances.
One option would be to do this with Node-RED itself, driving Docker with node-red-contrib-dockerode, but for this I've decided to create a dedicated application.
I’ve written a small Express app that uses dockerode directly and also will populate the Users collection in the MongoDB with the admin password for the editor and then spin up a new instance.
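The core of spinning up an instance with dockerode looks something like this (the environment variable names, network and hostnames are placeholders matching the rest of this series):

const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// assumes the admin user entry has already been written to MongoDB
async function createInstance(name) {
    const container = await docker.createContainer({
        Image: 'custom-node-red',
        name: name,
        Env: [
            'MONGO_URL=mongodb://mongodb/nodered',
            'APP_NAME=' + name,
            'VIRTUAL_HOST=' + name + '.docker-pi.local'
        ],
        HostConfig: { NetworkMode: 'internal' }
    });
    await container.start();
    return container.id;
}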
It's pretty basic but it does just enough to get started. It is being exposed using the same nginx-proxy that exposes the Node-RED instances, so the management interface is available on the manager.docker-pi.local domain. If it was being deployed in a production environment it should probably not be internet facing and should have some basic access control, to prevent anybody from being able to stand up a new Node-RED instance.
When the app has completed creating a new instance a link to that instance is displayed.
You can also Start/Stop/Remove the instance as well as streaming the logs via a websocket.
Thanks to Dave Conway-Jones (@ceejay) for help with making my very utilitarian UI look a lot better.
So far we have worked out how to set up Node-RED to store flows in a database, use authentication to prevent unauthorised access and how to start multiple containerised instances under Docker.
In this post I will cover how to expose those multiple instances so their users can access them.
The easiest way to do this is to stick something like Nginx or Traefik in front of the docker containers and have it act as a reverse proxy. There are two ways we can set this up:
Virtual Host based proxying – where each instance has its own hostname, e.g. http://r1.example.com
Path based proxying – where each instance has a root path on the same hostname e.g. http://example.com/r1
In this case I'm going to use the first option, virtual hosts, because Node-RED currently uses browser local storage to hold the Admin API security token, and this is scoped to the host the editor is loaded from, which means you can only access one instance at a time if you use path based proxying. This is on the Node-RED backlog to be fixed soon.
To do that I'm going to use the nginx-proxy container. This container runs in the same Docker environment as the Node-RED instances and monitors the Docker events to watch for containers starting/stopping. When it sees a new container start up it automatically creates the right entry in the nginx configuration files and triggers nginx to reload the config.
To make this work I needed to add an extra environment variable to the command used to start the Node-RED containers.
I added the VIRTUAL_HOST environment variable, which contains the hostname to use for this container. This means I can access this specific instance of Node-RED on http://r1.docker.local.
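The start command then looks something like this (the MongoDB environment variables and network name follow the earlier posts and are placeholders for whatever your settings.js actually reads):

$ docker run -d --network internal \
    -e MONGO_URL=mongodb://mongodb/nodered \
    -e APP_NAME=r1 \
    -e VIRTUAL_HOST=r1.docker.local \
    --name r1 custom-node-red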
Traefik can be run in a similar way using labels instead of environment variables.
To make it all work smoothly I've added a wildcard domain entry to my local DNS that maps anything matching *.example.com to the docker-pi.local machine that the containers are running on.
Security
If I was going to run this exposed to the internet I'd probably want to enable HTTPS. To do this there are again two options:
Use a separate certificate for each Virtual Host
Use a wildcard certificate that matches the wildcard DNS entry
I would probably go with the second option here as it is just one certificate you have to manage, and even LetsEncrypt will issue a wildcard certificate these days if you have access to the DNS.
For the first option there is a companion docker container for nginx-proxy that will use LetsEncrypt to issue certificates for each Virtual Host as they are started. It's called letsencrypt-nginx-proxy-companion and you can read how to use it in the nginx-proxy README.md.
Limitations
Exposing Node-RED via an HTTP proxy does have one drawback. This approach means that only HTTP requests can directly reach the instances.
While this helps to offer a little more security, it also means that you will not be able to use the TCP-in or UDP-in nodes in server mode, which would allow arbitrary network connections into the instance. You will still be able to connect out from the instances to other hosts, as Docker provides NAT routing from containers to the outside world.
Sidebar
I'm testing all this on a Raspberry Pi4 running the beta of 64-bit Raspberry Pi OS. I need this to get the official MongoDB container to work, as they only formally support 64-bit. As a result I had to modify and rebuild the nginx-proxy container because it only ships with support for the AMD64 architecture. I had to build ARM64 versions of the forego and docker-gen packages and manually copy these into the container.
There is an outstanding pull request open against the project to use a multi-stage build, which will produce target-specific binaries of forego and docker-gen and will fix this.
In this post I'll talk about building a custom Node-RED Docker container that adds the storage and authentication plugins I built earlier, along with disabling a couple of the core nodes that don't make much sense on a platform where the local disk isn't really usable.
Settings.js
The same modifications to the settings.js from the last two posts need to be added to enable the storage and authentication modules. The MongoDB URI and the AppName are populated from environment variables so we can pass in different values when starting the container.
We will also add the nodeExcludes entry, which removes the nodes that interact with files on the local file system, as we don't want users saving things into the container that will get lost if we have to restart it. It also removes the exec node, since we don't want users running arbitrary commands inside the container.
Setting autoInstallModules to true means that if the container gets restarted then any extra nodes the user has installed with the Palette Manager will get reinstalled.
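As a sketch, the additions look something like the following; the exact option names the storage and auth plugins expect are specific to those modules, so treat the keys and environment variable names here as placeholders:

// additions to settings.js (option names are placeholders)
storageModule: require("node-red-contrib-storage-mongodb"),
mongodbSettings: {
    mongoURI: process.env.MONGO_URL,   // e.g. mongodb://mongodb/nodered
    appname: process.env.APP_NAME      // keeps each instance's flows separate
},

// don't let users touch the local filesystem or run commands
nodeExcludes: ['10-file.js', '23-watch.js', '90-exec.js'],

// reinstall user-added nodes if the container is restarted
autoInstallModules: true,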
This is a pretty easy extension of the default Node-RED container. We add the modified settings.js from above to the /data directory in the container (and set the permissions on it) and then install the plugins. Everything else stays the same.
FROM nodered/node-red
COPY settings.js /data/
USER root
RUN chown -R node-red:root /data
USER node-red
RUN npm install --no-fund --no-update-notifier --save node-red-contrib-auth-mongodb node-red-contrib-storage-mongodb
We can build this with the following command
$ docker build -t custom-node-red .
It is important to note here that I've not given a version tag in the FROM line, so every time the container is rebuilt it will pull the very latest shipped version. This might not be what you want for a deployed environment, where making sure all users are on the same version is probably a good idea from a support point of view. It may also be useful to use the -minimal tag suffix to use the Alpine-based version of the container, to reduce the size.
Starting an instance
You can start a new instance with a command along the following lines.
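(The environment variable names match the placeholders used in the settings sketch above; the MongoDB host needs to be reachable from the container, e.g. on the same Docker network.)

$ docker run -d -p 1880:1880 \
    -e MONGO_URL=mongodb://mongodb/nodered \
    -e APP_NAME=r1 \
    --name r1 custom-node-red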
In this example I've mapped the container's port 1880 to the host, but that will only work for a single container (otherwise every container will need to be on a different host port). For a Multi Tenant solution we need something different, and I'll cover that in the next post.