Multi Tenant Node-RED with Kubernetes

Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.

I started by installing the 64bit build of Ubuntu Server on a fresh Pi4 with 8GB RAM and then using snapd to install microk8s. I had initially wanted to use the 64bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.

Once installed I enabled the dns and ingress plugins, which got me a minimal viable single-node Kubernetes setup working.

I also had to stand up a private docker registry to hold the containers I’ll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also means adding the URL for this registry to microk8s as described here.

Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.

Architecture

As before the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I’ve deployed the database and web app as a single pod and exposed them both as services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-multi-tenant
  labels:
    app: nr-mt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nr-mt
  template:
    metadata:
      labels:
        app: nr-mt
    spec:
      containers:
      - name: node-red-manager
        image: private.example.com/k8s-manager
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: secret
          mountPath: /usr/src/app/config
        env:
        - name: MONGO_URL
          value: mongodb://mongo/nodered
        - name: ROOT_DOMAIN
          value: example.com
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      - name: registry
        image: verdaccio/verdaccio
        ports:
        - containerPort: 4873
        volumeMounts:
        - name: registry-data
          mountPath: /verdaccio/storage
        - name: registry-conf
          mountPath: /verdaccio/conf
      volumes:
      - name: secret
        secret:
          secretName: kube-config
      - name: mongo-data
        hostPath:
          path: /opt/mongo-data
          type: Directory
      - name: registry-data
        hostPath:
          path: /opt/registry-data
          type: Directory
      - name: registry-conf
        secret:
          secretName: registry-conf

This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.

It also binds 2 sets of secrets. The first holds the authentication details to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app. The second is the config for the Verdaccio NPM registry.

I’m using the hostPath volume provider to store the MongoDB data and the Verdaccio registry storage on the filesystem of the Pi, but for a production deployment I’d probably use the NFS provider or a cloud storage option like AWS S3.

Manager

This is mainly the same as the docker version, but I had to swap out dockerode for kubernetes-client.

This library exposes the full Kubernetes API, allowing the creation/modification/destruction of all entities.

Standing up a new instance is a little more complicated as it’s now a multi-step process; a rough sketch of the steps follows the list.

  1. Create a Pod with the custom-node-red container
  2. Create a Service based on that pod
  3. Expose that service via the Ingress addon
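Something along these lines covers those three steps with the kubernetes-client library. The manifests are heavily trimmed, and the namespace, image location and API versions are illustrative assumptions rather than the exact code from the manager:

const { Client } = require('kubernetes-client');
// By default the client picks up ~/.kube/config (or the in-cluster service account)
const client = new Client({ version: '1.13' });

async function createInstance(appname, rootDomain) {
  // 1. Create a Pod running the custom-node-red container
  await client.api.v1.namespaces('default').pods.post({ body: {
    metadata: { name: appname, labels: { app: appname } },
    spec: { containers: [{
      name: appname,
      image: 'private.example.com:5000/custom-node-red',
      env: [
        { name: 'APP_NAME', value: appname },
        { name: 'MONGO_URL', value: 'mongodb://mongo/nodered' }
      ],
      ports: [{ containerPort: 1880 }]
    }]}
  }});

  // 2. Create a Service based on that Pod
  await client.api.v1.namespaces('default').services.post({ body: {
    metadata: { name: appname },
    spec: {
      selector: { app: appname },
      ports: [{ port: 1880 }]
    }
  }});

  // 3. Expose that Service via the Ingress addon
  await client.apis.extensions.v1beta1.namespaces('default').ingresses.post({ body: {
    metadata: { name: appname },
    spec: { rules: [{
      host: appname + '.' + rootDomain,
      http: { paths: [{ path: '/',
        backend: { serviceName: appname, servicePort: 1880 } }]}
    }]}
  }});
}

Tearing an instance down is just the same three resources deleted in reverse order.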

I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.

All the code for this version of the app is on github here.

Catalogue

In the Docker-Compose version the custom node `catalogue.json` file was hosted by the management application and had to be manually updated each time a new or updated node was pushed to the repository. For this version I’ve stood up a separate container.

This container runs a small web app that has 2 endpoints (a minimal sketch follows the list).

  • /catalogue.json – which returns the current version of the catalogue
  • /update – which is triggered by the notify function of the Verdaccio private npm registry
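A minimal sketch of that web app looks something like the following; the in-memory catalogue, the way the notify payload is folded into the module list and the registry URL are illustrative assumptions, not the exact code from the container:

const express = require('express');

const app = express();
app.use(express.json());

// In-memory copy of the catalogue, in the format Node-RED expects
let catalogue = {
  name: "Private node catalogue",
  updated_at: new Date().toISOString(),
  modules: []
};

// Served to the Node-RED editors via editorTheme.palette.catalogues
app.get('/catalogue.json', (req, res) => {
  res.json(catalogue);
});

// Called by Verdaccio's notify hook each time a package is published
app.post('/update', (req, res) => {
  const pkg = req.body;
  const entry = {
    id: pkg.name,
    // exactly what ends up in "versions"/"dist-tags" depends on the notify template below
    version: pkg['dist-tags'],
    keywords: ['node-red'],
    updated_at: new Date().toISOString(),
    url: 'http://registry.example.com/-/web/detail/' + pkg.name
  };
  // Replace any existing entry for this package
  catalogue.modules = catalogue.modules.filter(m => m.id !== entry.id);
  catalogue.modules.push(entry);
  catalogue.updated_at = entry.updated_at;
  res.sendStatus(200);
});

app.listen(80);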

The registry has this snippet added to the end of the config.yml

notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue/update
  content: '{"name": "{{name}}", "versions": "{{versions}}", "dist-tags": "{{dist-tags}}"}'

The code for this container can be found here.

Deploying

First clone the project from github

$ git clone --recurse-submodules https://github.com/hardillb/multi-tenant-node-red-k8s.git

Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.

$ ./setup.sh example.com private.example.com:5000

This will update some of the container locations in the deployment and build the secrets needed to access the Kubernetes API (it reads the content of ~/.kube/config).

With all the configuration files updated the containers need building and pushing to the local container registry.

$ docker build ./manager -t private.example.com:5000/k8s-manager
$ docker push private.example.com:5000/k8s-manager
$ docker build ./catalogue -t private.example.com:5000/catalogue
$ docker push private.example.com:5000/catalogue
$ docker build ./custom-node-red -t private.example.com:5000/custom-node-red
$ docker push private.example.com:5000/custom-node-red

Finally trigger the actual deployment with kubectl

$ kubectl apply -f ./deployment

Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.

A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster’s Ingress IP addresses.

As usual the whole solution can be found on github here.

What’s Next

I need to work out how to set up Avahi CNAME entries for each deployment, as I had working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.
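A rough sketch of that watch, again using kubernetes-client; the namespace and what gets done with the hostnames are assumptions, and the Avahi publishing itself isn’t shown:

const { Client } = require('kubernetes-client');
const client = new Client({ version: '1.13' });

async function watchIngresses() {
  // Watch for Ingress objects being added/removed in the default namespace
  const stream = await client.apis.extensions.v1beta1.watch
    .namespaces('default').ingresses.getObjectStream();

  stream.on('data', event => {
    const hosts = (event.object.spec.rules || []).map(r => r.host);
    if (event.type === 'ADDED') {
      // publish an Avahi CNAME for each new host here
      console.log('new ingress hosts', hosts);
    } else if (event.type === 'DELETED') {
      // remove the matching Avahi CNAME entries here
      console.log('removed ingress hosts', hosts);
    }
  });
}

watchIngresses();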

I also need to backport the new catalogue handling to the docker-compose version.

And finally I want to have a look at generating a Helm chart for all this, to remove the need for the setup.sh script to modify the deployment YAML files.

p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.

Update

What is described above was a great way to work out what was possible and learn a lot, but it is all PoC code and not really intended to be used in a “production” environment.

If that is what you are looking for I suggest you look at FlowForge as this is under active development. As well as the core being open source, the project is also available as a licensed/supported offering, either to run yourself or as a hosted service. For details check out FlowForge’s website here.

Multi Tenant Node-RED Working Example

I’ve now completed all the parts I outlined in the first post in this series and it’s time to put it all together to end up with a running system.

Since the stack is already running on Docker, using docker-compose to assemble the orchestration seemed like the right thing to do. The whole docker-compose project can be found on GitHub here.

Once you’ve checked out the project run the setup.sh script with the root domain as the first argument. This will do the following:

  • Check out the submodules (Management app and Mongoose schema objects).
  • Create and set the correct permissions on the required local directories that are mounted as volumes.
  • Build the custom-node-red docker container that will be used for each of the Node-RED instances.
  • Change the root domain for all the virtual hosts to match the value passed in as an argument or if left blank the current hostname with .local appended.
  • Create the Docker network that all the images will be attached to.

The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.

version: "3.3"

services:
  nginx:
    image: jwilder/nginx-proxy
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    ports:
      - "80:80"

  mongodb:
    image: mongo
    networks:
      - internal
    volumes:
      - "./mongodb:/data/db"
    ports:
      - "27017:27017"

  registry:
    image: verdaccio/verdaccio
    networks:
      - internal
    volumes:
      - "./registry:/verdaccio/conf"
    ports:
      - "4873:4873"

  manager:
    image: manager
    build: "./manager"
    depends_on:
      - mongodb
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
      - "./catalogue:/data"
    environment:
      - "VIRTUAL_HOST=manager.example.com"
      - "MONGO_URL=mongodb://mongodb/nodered"
      - "ROOT_DOMAIN=docker.local"


networks:
  internal:
    external:
      name: internal

It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage so that this data is persisted across restarts.

The ports for direct access to the MongoDB and the registry are currently exposed on the docker host. In a production environment it would probably make sense to not expose the MongoDB instance as it is currently unsecured, but it’s useful while I’ve been developing/debugging the system.

The port for the private NPM registry is also exposed to allow packages to be published from outside the system.

And finally it binds to the network that was created by the setup.sh script, which is used for all the containers to communicate. This also means that the containers can address each other by their names, as docker runs a DNS resolver on the custom network.

The management application is also directly exposed as manager.example.com. You should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.

Conclusions

This is about as bare bones as a Multi Tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup based memory and CPU limits), but it should be enough for a classroom deployment allowing each student to have their own instance.

Next steps would be to look at what more the management app could do, e.g. expose the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.

If you build something based on this please let both me and the Node-RED community know.

And if you are looking for somebody to help you build on this please get in touch.

Advanced Multi Tenant Node-RED topics

Across the last 6 posts I’ve walked through deploying a Multi Tenant Node-RED service. In this post I’m going to talk about how you go about customising the install to make it more specific to your needs.

Custom Node Catalogue

As I mentioned in the post about creating the custom docker container, you can use the nodesExcludes entry in settings.js to disable nodes and, if needed, the package.json can be edited to remove any core nodes that you want.

But you might have a collection of nodes that are specific to your deployment that are not of use outside your environment and you may not want to publish to npmjs.org. In extreme cases you may even want to remove all the core nodes and only allow users to use your own set of nodes.

Node-RED downloads the list of nodes available to install from catalogue.nodered.org and allows you to add either additional URLs to pull in a list of extra nodes or replace the URL with a custom list. The documentation for this can be found here. The list is kept in the settings.js under the editorTheme entry.

...
editorTheme: {
  palette: {
    catalogues: [
      "http://catalogue.nodered.org/catalogue.json", //default catalogue
      'http://manager.example.com/catalogue.json'
    ]
  }
},
...

The URL should point to a JSON file that has the following format

{
  "name":"Ben's custom catalogue",
  "updated_at": "2016-08-05T18:37:50.673Z",
  "modules": [
    {
      "id": "@ben/ben-red-random",
      "version": "1.3.0",
      "description": "A node-red node that generates random numbers",
      "keywords": [
        "node-red",
        "random"
      ],
      "updated_at": "2016-08-05T18:37:50.673Z",
      "url": "http://flows.example.com/node/ben-red-random"
    },
    ...
  ]
}

There is a small script called build-catalogue.js in the manager app that will generate a catalogue.json file from a given npm repository.
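The rough approach is to query the registry’s npm search endpoint and map the results into the catalogue format shown above. Something like this sketch, where the search URL, the query and the response fields are assumptions based on the standard npm v1 search API (which Verdaccio implements), rather than the actual script:

const fetch = require('node-fetch');

async function buildCatalogue(registryURL) {
  // Ask the registry for everything tagged with the node-red keyword
  const res = await fetch(registryURL + '/-/v1/search?text=keywords:node-red&size=250');
  const results = await res.json();

  return {
    name: "Private node catalogue",
    updated_at: new Date().toISOString(),
    modules: results.objects.map(o => ({
      id: o.package.name,
      version: o.package.version,
      description: o.package.description,
      keywords: o.package.keywords || [],
      updated_at: o.package.date || new Date().toISOString(),
      url: registryURL + '/-/web/detail/' + o.package.name
    }))
  };
}

buildCatalogue('http://registry:4873')
  .then(cat => console.log(JSON.stringify(cat, null, 2)));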

Now that there is a custom list of nodes, we need to be able to install them with the npm command. There are a few options:

  • Publish the node to npmjs.org but without the node-red keyword so it doesn’t end up being indexed on flows.nodered.org (and don’t manually submit it as you now need to)
  • Publish the node to a private npm repository that also acts as a pass through proxy to npmjs.org
  • Publish the node to a private npm repository with a scope and configure npm to use a different repository for a given scope.

The first option doesn’t need anything special setting up, you just add the nodes you want to the catalogue. The other two options need a private npm package repository. For the second it needs to act as a pass-through proxy so all the dependencies can also be loaded, which is what makes the third option probably the best.

I’ve been playing with running Verdaccio on the same docker infrastructure as everything else and setting npm to map the @private scope to http://registry:4873 in the custom docker container.

...
RUN npm config set @private:registry http://registry:4873

Verdaccio can also proxy to npmjs.org if needed. (Like the nginx-proxy container, I had to rebuild the Verdaccio container to get it to run on my Pi4 since it only ships an AMD64 version.)

I’m hosting my catalogue.json from the same Express application as is used to provision new Node-RED instances.

Screen shot of nodes listed on Verdaccio

Or if you want to prevent users from being able to install/remove nodes then you can add:

...
editorTheme: {
  palette: {
    editable: false
  }
},
...

Skinning Node-RED

The last thing on my list is to give the Node-RED instances a custom look and feel.

The basics like the page title, header image, favicon and logon screen graphic can all be set directly from the settings.js, with the option to also link to a custom CSS style sheet so the colour scheme and shape/size of elements can be changed as well. The design document can be found here.

editorTheme: {
  projects: {
    // To enable the Projects feature, set this value to true
    enabled: false
  },
  page: {
    title: "Ben-RED"
  },
  header: {
    title: "Ben-RED"
  },
  palette: {
    catalogues: [
      'https://catalogue.nodered.org/catalogue.json',
      'http://manager.example.com/catalogue.json'
    ]
  }
},

Managing Multi Tenant Node-RED Instances

Over the last series of posts I’ve outlined how to build all the components that are needed for a Multi Tenant Node-RED service. What’s missing is a way to automate the spinning up of new instances.

One option would be to do this with Node-RED itself, as you can drive docker using node-red-contrib-dockerode, but for this I’ve decided to create a dedicated application.

I’ve written a small Express app that uses dockerode directly and also will populate the Users collection in the MongoDB with the admin password for the editor and then spin up a new instance.
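Creating the admin user is mostly handled by passport-local-mongoose, which adds a static register() method to the model that generates the salt and hash for the supplied password. A simplified version of that step looks roughly like this (the model path and the field values are illustrative):

const Users = require('./models/users'); // the Mongoose model from the authentication post

function createAdminUser(appname, password) {
  return new Promise((resolve, reject) => {
    // register() salts/hashes the password and saves the document
    Users.register(new Users({
      appname: appname,
      username: 'admin',
      email: 'admin@example.com',
      permissions: '*'
    }), password, (err, user) => {
      if (err) {
        reject(err);
      } else {
        resolve(user);
      }
    });
  });
}

The container itself is then stood up with dockerode: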

docker.createContainer({
  Image: "custom-node-red",
  name: req.body.appname,
  Env: [
    "VIRTUAL_HOST=" + req.body.appname + "." + settings.rootDomain,
    "APP_NAME="+ req.body.appname,
    "MONGO_URL=mongodb://mongodb/nodered"
  ],
  AttachStdin: false,
  AttachStdout: false,
  AttachStderr: false,
  HostConfig: {
    Links: [
      "mongodb:mongodb"
    ]
  }
})
.then(container => {
  console.log("created");
  cont = container;
  return container.start()
})
.then(() => {
  res.status(201).send({started: true, url: "http://" + req.body.appname + "." + settings.rootDomain});
})

It’s pretty basic but it does just enough to get started. It is being exposed using the same nginx-proxy that exposes the Node-RED instances, so the management interface is available on the manager.docker-pi.local domain. If it was being deployed in a production environment it should probably not be internet facing and should have some basic access control to prevent anybody from being able to stand up a new Node-RED instance.

When the app has completed creating a new instance a link to that instance is displayed.

You can also Start/Stop/Remove the instance as well as streaming the logs via a websocket.
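The log streaming boils down to piping the output of dockerode’s container.logs() call into a websocket. A cut-down sketch (error handling and the demuxing of the raw Docker log stream are left out to keep it short) looks something like this:

const Docker = require('dockerode');

const docker = new Docker({ socketPath: '/tmp/docker.sock' });

// ws is an already open websocket connection (e.g. from the ws package)
function streamLogs(ws, appname) {
  const container = docker.getContainer(appname);
  container.logs({ follow: true, stdout: true, stderr: true, tail: 50 },
    (err, stream) => {
      if (err) {
        ws.close();
        return;
      }
      // forward each chunk of log output to the browser
      stream.on('data', chunk => ws.send(chunk.toString('utf8')));
      ws.on('close', () => stream.destroy());
    });
}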

Thanks to Dave Conway-Jones (@ceejay) for help with making my very utilitarian UI look a lot better.

The code for the management app is on github here

Proxying for Multiple Node-RED instances

So far we have worked out how to set up Node-RED to store flows in a database, use authentication to prevent unauthorised access and how to start multiple containerised instances under Docker.

In this post I will cover how to expose those multiple instances so their users can access them.

The easiest way to do this is to stick something like Nginx or Traefik in front of the docker containers and have it act as a reverse proxy. There are two ways we can set this up

  • Virtual Host based proxying – where each instance has its own hostname, e.g. http://r1.example.com
  • Path based proxying – where each instance has a root path on the same hostname e.g. http://example.com/r1

In this case I’m going to use the first option of virtual hosts, as currently Node-RED uses browser local storage to hold the Admin API security token and this is scoped to the host the editor is loaded from, which means that you can only access one instance at a time if you use path based proxying. This is on the Node-RED backlog to be fixed soon.

To do that I’m going to use the nginx-proxy container. This container runs in the same Docker environment as the Node-RED instances and monitors Docker for containers starting/stopping. When it sees a new container start up it automatically creates the right entry in the nginx configuration files and triggers nginx to reload the config.

To make this work I needed to add an extra environment variable to the command used to start the Node-RED containers

$ docker run -d --rm -e VIRTUAL_HOST=r1.example.com -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 --name r1 --link mongodb custom-node-red

I added the VIRTUAL_HOST environment variable which contains the hostname to use for this container. This means I can access this specific instance of Node-RED on http://r1.example.com.

Traefik can be run in a similar way using labels instead of environment variables.

To make this all work smoothly I’ve added a wildcard domain entry to my local DNS that maps anything that matches *.example.com to the docker-pi.local machine that the containers are running on.

Security

If I was going to run this exposed to the internet I’d probably want to enable HTTPS. To do this there are 2 options again

  • Use a separate certificate for each Virtual Host
  • Use a wildcard certificate that matches the wildcard DNS entry

I would probably go with the second option here as it is just one certificate you have to manage, and even LetsEncrypt will issue a wildcard certificate these days if you have access to the DNS.

For the first option there is a companion docker container for nginx-proxy that will use LetsEncrypt to issue certificates for each Virtual Host as they are started. It’s called letsencrypt-nginx-proxy-companion and you can read how to use it in the nginx-proxy README.md.

Limitations

Exposing Node-RED via a HTTP proxy does have one drawback. This approach means that only HTTP requests can directly reach the instances.

While this helps to offer a little more security, it also means that you will not be able to use the TCP-in or UDP-in nodes in server mode, which would allow arbitrary network connections into the instance. You will still be able to connect out from the instances to external hosts as Docker provides NAT routing from containers to the outside world.

Sidebar

I’m testing all this on a Raspberry Pi4 running the beta of 64bit Raspberry Pi OS. I need this to get the official MongoDB container to work as they only formally support 64bit. As a result of this I had to modify and rebuild the nginx-proxy container because it only ships with support for AMD64 architectures. I had to build an ARM64 version of the forego and docker-gen packages and manually copy these into the container.

There is an outstanding pull request open against the project to use a multi-stage build, which will build target-specific binaries of forego and docker-gen and fix this.

Custom Node-RED container for a Multi Tenant Environment

In this post I’ll talk about building a custom Node-RED Docker container that adds the storage and authentication plugins I built earlier, along with disabling a couple of the core nodes that don’t make much sense on a platform where the local disk isn’t really usable.

Settings.js

The same modifications to the settings.js from the last two posts need to be added to enable the storage and authentication modules. The MongoDB URI and the app name are populated from environment variables so that we can pass in different values when starting the container.

We will also add the nodesExcludes entry which removes the nodes that interact with files on the local file system, as we don’t want users saving things into the container that will get lost if we have to restart. It also removes the exec node since we don’t want users running arbitrary commands inside the container.

Setting the autoInstallModules to true means that if the container gets restarted then any extra nodes the user has installed with the Palette Manager will get reinstalled.

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
},
adminAuth: require('node-red-contrib-auth-mongodb').setup({
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
}),
nodesExcludes:['90-exec.js','28-tail.js','10-file.js','23-watch.js'],
autoInstallModules: true,
...

Dockerfile

This is a pretty easy extension of the default Node-RED container. We add the modified settings.js from above to the /data directory in the container (and set the permissions on it) and then install the plugins. Everything else stays the same.

FROM nodered/node-red

COPY settings.js /data/
USER root
RUN chown -R node-red:root /data
USER node-red
RUN npm install --no-fund --no-update-notifier --save node-red-contrib-auth-mongodb node-red-contrib-storage-mongodb

We can build this with the following command

$ docker build -t custom-node-red .

It is important to note here that I’ve not given a version tag in the FROM line, so every time the container is rebuilt it will pull the very latest shipped version. This might not be what you want for a deployed environment, where making sure all users are on the same version is probably a good idea from a support point of view. It may also be useful to use the -minimal tag suffix to use the version of the container based on Alpine to reduce the size.

Starting an instance

You can start a new instance with the following command

$ docker run -d --rm -p 1880:1880 -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 --name r1 custom-node-red

In this example I’ve mapped the container’s port 1880 to the host, but that will only work for a single container; otherwise every container would need to be on a different port on the host. For a Multi Tenant solution we need something different and I’ll cover that in the next post.

Node-RED Authentication Plugin

Next in the Multi Tenant Node-RED series is authentication.

If you are going to run a multi user environment one of the key features will be identifying which users are allowed to do what and where. We need to only allow users to access their specific instances of Node-RED.

Node-RED provides a couple of options. The simplest is just to include the username/password/permissions details directly in the settings.js, but this doesn’t allow for dynamic updates like adding/removing users or changing passwords.

// Securing Node-RED
// -----------------
// To password protect the Node-RED editor and admin API, the following
// property can be used. See http://nodered.org/docs/security.html for details.
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
        permissions: "*"
    }]
},

The documentation also explains how to use PassportJS strategies to authenticate against oAuth providers, meaning you can do things like have users sign in with their Twitter credentials or use an existing Single Sign On solution if you want.

And finally the documentation covers how to implement your own authentication plugin, which is what I’m going to cover in this post.

In the past I have built a version of this type of plugin that uses LDAP but in this case I’ll be using MongoDB. I’ll be using the same database that I used in the last post about building a storage plugin. I’m also going to use Mongoose to wrap the collections and I’ll be using the passport-local-mongoose plugin to handle the password hashing.

const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const passportLocalMongoose = require('passport-local-mongoose');

const Users = new Schema({
  appname: String,
  username: String,
  email: String,
  permissions: { type: String, default: '*' },
});

var options = {
  usernameUnique: true,
  saltlen: 12,
  keylen: 24,
  iterations: 901,
  encoding: 'base64'
};

Users.plugin(passportLocalMongoose,options);
Users.set('toObject', {getters: true});
Users.set('toJSON', {getters: true});

module.exports = mongoose.model('Users',Users);

We need the username and permissions fields for Node-RED; the appname is the same as in the storage plugin to allow us to keep the details for multiple Node-RED instances in the same collection. I added the email field as a way to contact the user if we need to do something like a password reset. You can see that there is no password field, as this is all handled by the passport-local-mongoose code: it injects salt and hash fields into the schema and adds methods like authenticate(password) to the returned objects that will check a supplied password against the stored hash.

Required API

There are 3 functions that need to be implemented for Node-RED

users(username)

This function just checks if a given user exists for this instance

  users: function(username) {
    return new Promise(function (resolve, reject){
      Users.findOne({appname: appname, username: username}, {username: 1, permissions: 1})
      .then(user => {
        if (user) {
          resolve({username: user.username, permissions: user.permissions});
        } else {
          // no matching user for this instance
          resolve(null);
        }
      })
      .catch(err => {
        reject(err)
      })
    });
  },

authenticate(username, password)

This does the actual checking of the supplied password against the database. It looks up the user with the username and appname and then has passport-local-mongoose check it against the hash in the database.

authenticate: function(username, password) {
      return new Promise(function(resolve, reject){
        Users.findOne({appname: appname, username})
        .then((user) => {
          user.authenticate(password, function(e,u,pe){
            if (u) {
              resolve({username: u.username, permissions: u.permissions})
            } else {
              resolve(null);
            }
          })
        })
        .catch(err => {
          reject(err)
        })
      }) 
  },

default()

In this case we don’t want a default user so we just return a null object.

default: function() {
    return Promise.resolve(null);
  }

Extra functions

setup(settings)

Since authentication plugins do not get the whole settings object passed in like the storage plugins do, we need to include a setup() function to allow details about the MongoDB to be passed in.

type: "credentials",
setup: function(settings) {
  if (settings && settings.mongoURI) {
    appname = settings.appname;
    mongoose.connect(settings.mongoURI, mongoose_options)
    .catch(err => {
      throw err;
    });
  }
  return this;
},

Using the plugin

This is again similar to the storage plugin where an entry is made in the settings.js file. The difference is that this time the settings object isn’t explicitly passed to the plugin so we need to include the call to setup in the entry.

...
adminAuth: require('node-red-contrib-auth-mongodb').setup({
   mongoURI: "mongodb://localhost/nodered",
   appname: "r1"
})

How to add users to the database will be covered in a later post about managing the creation of new instances.

Source code

You can find the code here and it’s on npmjs here

Node-RED Storage Plugin

As part of my series of posts about the components needed to build a Multi Tenant Node-RED system, in this post I’ll talk about writing a plugin to store the user’s flows in the database rather than on disk.

There are a number of existing Storage plugins, e.g. the default local filesystem and the Cloudant plugin that is used with Node-RED in the IBM Cloud.

I’m going to use MongoDB as the backend storage and the Mongoose library to wrap the reading/writing to the database (I’ll be reusing the Mongoose schema definitions later in the Authentication plugin and the app to manage Node-RED instances).

The documentation for the Storage API can be found here. There are a number of methods that a plugin needs to provide:

init()

This sets up the plugin, it reads the settings and then opens the connection to the database.

init: function(nrSettings) {
  settings = nrSettings.mongodbSettings;

  if (!settings) {
    var err = Promise.reject("No MongoDB settings for flow storage found");
    err.catch(err => {});
    return err;
  }

  appname = settings.appname;

  return new Promise(function(resolve, reject){
    mongoose.connect(settings.mongoURI, mongoose_options)
    .then(() => {
      resolve();
    })
    .catch(err => {
      reject(err);
    });
  })
},

getFlows()/saveFlows(flows)

Here we retrieve/save the flow to the database. If there isn’t a current flow (such as the first time the instance is run) we need to return an empty array ([])

getFlows: function() {
  return new Promise(function(resolve, reject) {
    Flows.findOne({appname: appname}, function(err, flows){
      if (err) {
        reject(err);
      } else {
        if (flows){
          resolve(flows.flow);
        } else {
          resolve([]);
        }
      }
    })
  })
},
saveFlows: function(flows) {
  return new Promise(function(resolve, reject) {
    Flows.findOneAndUpdate({appname: appname},{flow: flows}, {upsert: true}, function(err,flow){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

The upsert: true in the options passed to findOneAndUpdate() triggers an insert if there isn’t an existing matching document.

getCredentials()/saveCredentials(credentials)

Here we had to convert the credentials object to a string because MongoDB doesn’t like root object keys that start with a $ (the encrypted credentials string is held in the $_ entry in the object).

getCredentials: function() {
  return new Promise(function(resolve, reject) {
    Credentials.findOne({appname: appname}, function(err, creds){
      if (err) {
        reject(err);
      } else {
        if (creds){
          resolve(JSON.parse(creds.credentials));
        } else {
          resolve({});  
        }
      }
    })
  })
},
saveCredentials: function(credentials) {
  return new Promise(function(resolve, reject) {
    Credentials.findOneAndUpdate({appname: appname},{credentials: JSON.stringify(credentials)}, {upsert: true}, function(err,credentials){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

getSessions()/saveSessions(sessions)/getSettings()/saveSettings(settings)

These are pretty much just carbon copies of the getFlows()/saveFlows(flows) functions since they are just storing/retrieving a single JSON object.
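For completeness, the settings pair looks roughly like this (assuming a Settings Mongoose model shaped the same way as the Flows one, which is an assumption about the schema naming):

getSettings: function() {
  return new Promise(function(resolve, reject) {
    Settings.findOne({appname: appname}, function(err, settings) {
      if (err) {
        reject(err);
      } else {
        resolve(settings ? settings.settings : {});
      }
    })
  })
},
saveSettings: function(settings) {
  return new Promise(function(resolve, reject) {
    Settings.findOneAndUpdate({appname: appname}, {settings: settings},
      {upsert: true}, function(err, doc) {
        if (err) {
          reject(err)
        } else {
          resolve();
        }
      })
  })
},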

getLibraryEntry(type,name)/saveLibraryEntry(type,name,meta,body)

saveLibraryEntry(type,name,meta,body) is pretty standard with a little bit of name manipulation to make it look more like a file path.

getLibraryEntry(type,name) needs a bit more work as we need to build the directory structure as well as being able to return the actual file content.

getLibraryEntry: function(type,name) {
  if (name == "") {
    name = "/"
  } else if (name.substr(0,1) != "/") {
    name = "/" + name
  }

  return new Promise(function(resolve,reject) {
    Library.findOne({appname: appname, type: type, name: name}, function(err, file){
      if (err) {
        reject(err);
      } else if (file) {
        resolve(file.body);
      } else {
        var reg = new RegExp('^' + name , "");
        Library.find({appname: appname, type: type, name: reg }, function(err, fileList){
          if (err) {
            reject(err)
          } else {
            var dirs = [];
            var files = [];
            for (var i=0; i<fileList.length; i++) {
              var n = fileList[i].name;
              n = n.replace(name, "");
              if (n.indexOf('/') == -1) {
                var f = fileList[i].meta;
                f.fn = n;
                files.push(f);
              } else {
                n = n.substr(0,n.indexOf('/'))
                dirs.push(n);
              }
            }
            dirs = dirs.concat(files);
            resolve(dirs);
          }
        })
          
      }
    })
  });
},
saveLibraryEntry: function(type,name,meta,body) {
  return new Promise(function(resolve,reject) {
    var p = name.split("/");    // strip multiple slash
    p = p.filter(Boolean);
    name = p.slice(0, p.length).join("/")
    if (name != "" && name.substr(0, 1) != "/") {
      name = "/" + name;
    }
    Library.findOneAndUpdate({appname: appname, name: name}, 
      {name:name, meta:meta, body:body, type: type},
      {upsert: true, useFindAndModify: false},
      function(err, library){
        if (err) {
          reject(err);
        } else {
          resolve();
        }
      });
  });
}

Using the plugin

To use the plugin you need to include it in the settings.js file. This is normally found in the userDir (the location of this file is logged when Node-RED starts up).

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
    mongoURI: "mongodb://localhost/nodered",
    appname: "r1"
},
...

The mongodbSettings object contains the URI for the database and the appname is a unique identifier for this Node-RED instance allowing the same database to be used for multiple instances of Node-RED.

Source code

You can find the code for this module here and it’s hosted on npmjs here

Multi Tenant Node-RED

I was recently approached by a company that wanted to sponsor adding Multi Tenant support to Node-RED. This would be to enable multiple users to run independent flows on a single Node-RED instance.

This is really not a good idea for a number of reasons. But mainly it is because the NodeJS runtime is inherently single threaded and there is no way to get real separation between different users. For example, if one user uses a poorly written node (or function in a function node) it is possible to block the event loop, starving out all the other users, or in extreme cases an uncaught asynchronous exception will cause the whole application to exit.

The best approach is to run a separate instance of Node-RED per user, which gives you the required separation between users. The usual way to do this is to use a container based system and a HTTP reverse proxy to control access to the instances.

Over the next month’s worth of posts I’ll outline the components required to build a system like the one I described, and at the end should hopefully have a fully working Proof of Concept that people can use as the base to build their own deployments.

As the future posts are published I will add links here.

Required components

Flow Storage

Because containers file systems are not persistent we are going to need somewhere to reliably store the flows each user creates.

Node-RED supplies an API that lets you control how/where flows (and a bunch of other things that would normally end up on disk) are stored.

Authentication

We are going to need a way to only allow the right users access to the Node-RED editor. Again, there is a plugin API that allows this to be wired into nearly any existing authentication source of your choice.

I wrote a simple implementation of this API which uses LDAP as the source of users not long after Node-RED was released. But in this series of posts I’ll write a new one that uses the same backend database as the flow storage plugin.

Custom Container

Once we’ve built storage and authentication plugins we will need to build a custom version of the Node-RED Docker container that includes these extras.

HTTP Reverse Proxy

Now we have a collection of containers, each running a user’s instance of Node-RED, we are going to need a way to expose these to the outside world. Since Node-RED is a web application, a HTTP Reverse Proxy is probably the right way forward.

Management

Once all of the other pieces are in place, we need a way to control the creation/deletion of Node-RED instances. It will also be useful to see the instances’ logs.

Extra bits

Finally I’ll cover some extra bits that help make a deployment specific to a particular environment, such as

Custom node repository

This allows you to host your own private nodes and still install them using the Manage Palette menu.

Custom Theme

Tweaking page titles and colour schemes can make Node-RED fit in better with your existing look and feel.

Working Example