Determining which Linux Distro you are on to install NodeJS

I’ve recently been working on an install script for a project. As part of the install I need to check if there is a suitable version of NodeJS installed and if not install one.

The problem is that there are 2 main ways in which NodeJS can be installed using the default package management systems for different Linux Distributions. So I needed a way to work out which distro the script was running on.

The first step was to work out if the script is actually running on Linux or on OS X. Since I'm using bash as the interpreter for the script, there is the OSTYPE environment variable that I can check.

case "$OSTYPE" in
  darwin*) 
    MYOS=darwin
  ;;
  linux*)
    MYOS=$(cat /etc/os-release | grep "^ID=" | cut -d = -f 2 | tr -d '"')
  ;;
  *) 
    # unknown OS
  ;;
esac

Once we are sure we are on Linux, we can check the /etc/os-release file and cut out the ID= entry. The tr strips the quotes off (Amazon Linux, I'm looking at you…).
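For reference, the relevant part of /etc/os-release on an Ubuntu machine looks something like this (the exact fields vary between distros, which is why we only pull out the ID line):

NAME="Ubuntu"
VERSION_ID="20.04"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"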

MYOS then contains one of the following:

  • debian
  • ubuntu
  • raspbian
  • fedora
  • rhel
  • centos
  • amzn

And using this we can then decide how to install NodeJS

if [[ "$MYOS" == "debian" ]] || [[ "$MYOS" == "ubuntu" ]] || [[ "$MYOS" == "raspbian" ]]; then
      curl -sSL "https://deb.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo apt-get install -y nodejs build-essential
elif [[ "$MYOS" == "fedora" ]]; then
      sudo dnf module reset -y nodejs
      sudo dnf module install -y "nodejs:$MIN_NODEJS/default"
      sudo dnf group install -y "C Development Tools and Libraries"
elif [[ "$MYOS" == "rhel" ]] || [[ "$MYOS" == "centos" || "$MYOS" == "amzn" ]]; then
      curl -fsSL "https://rpm.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo yum install -y nodejs
      sudo yum group install -y "Development Tools"
elif [[ "$MYOS" == "darwin" ]]; then
      echo "**************************************************************"
      echo "* On OSx you will need to manually install NodeJS            *"
      echo "* Please install the latest LTS release from:                *"
      echo "* https://nodejs.org/en/download/                            *"
      echo "**************************************************************"
      exit 1
fi
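The install itself only needs to happen if a suitable version isn't already present. A minimal sketch of that check, assuming MIN_NODEJS holds the minimum major version the project needs:

NEEDS_NODE=true
if command -v node > /dev/null 2>&1; then
  # node --version prints something like v16.13.1
  CURRENT_MAJOR=$(node --version | cut -c 2- | cut -d . -f 1)
  if [ "$CURRENT_MAJOR" -ge "$MIN_NODEJS" ]; then
    NEEDS_NODE=false
  fi
fi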

Now that’s out of the way, time to look at how to nicely set up a Systemd service…

Debugging Node-RED nodes with Visual Code

A recent Stack Overflow post had me looking at how to run Node-RED using Visual Code to debug custom nodes. Since I’d not tried Visual Code before (I tend to use Sublime Text 4 as my day to day editor) I thought I’d give it a go and see if I could get it working.

We will start with a really basic test node as an example. This just prints the content of msg.payload to the console for any message passing through.

test.js

module.exports = function(RED) {
    function test(n) {
        RED.nodes.createNode(this,n)
        const node = this
        node.on('input', function(msg, send, done){
            send = send || function() { node.send.apply(node,arguments) }
            console.log(msg.payload)
            send(msg)
            done()
        })
    }
    RED.nodes.registerType("test", test)
}

test.html

<script type="text/html" data-template-name="node-type">
</script>

<script type="text/html" data-help-name="node-type">
</script>

<script type="application/javascript">
    RED.nodes.registerType('test',{
        category: 'test',
        defaults: {},
        inputs: 1,
        outputs: 1,
        label: "test"
    })
</script>

package.json

{
  "name": "test",
  "version": "1.0.0",
  "description": "Example node-red node",
  "keywords": [
    "node-red"
  ],
  "node-red": {
    "nodes": {
      "test": "test.js"
    }
  },
  "author": "ben@example.com",
  "license": "Apache-2.0"
}

Setting up

All three files mentioned above are placed in a directory and then the following steps are followed:

  • In the Node-RED userDir (normally ~/.node-red on a Linux machine) run the following command to create a symlink in the node_modules directory. This will allow Node-RED to find and load the node.
    npm install /path/to/test/directory
  • Add the following section to the package.json file
...
  ],
  "scripts": {
    "debug": "node /usr/lib/node_modules/node-red/red.js"
  },
  "node-red": {
...

Where /usr/lib/node_modules/node-red/red.js is the output from readlink -f `which node-red`.
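For example, with Node-RED installed globally via npm the lookup goes something like this (the path will differ depending on how and where Node-RED was installed):

$ readlink -f `which node-red`
/usr/lib/node_modules/node-red/red.js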

You can then add a breakpoint to the code

View of node's javascript code with break point set on line 7

And then start Node-RED by clicking on the Play button just above the scripts block.

view of node's package.json with play symbol and Debug above the scripts block

This will launch Node-RED with the debugger attached and stop when the breakpoint is hit. You can also enable the debugger to stop the application on exceptions, filtering on whether they are caught or not.

This even works when using Visual Code’s remote capabilities for editing, running and debugging projects on remote machines. I’ve tested this running over SSH to a Raspberry Pi Zero 2 W (which is similar to the original StackOverflow question, as they were trying to debug nodes working with the Pi’s GPIO system). The only change I had to make on the Pi was to increase the default swap file size from 100mb to 256mb, as fitting the Visual Code remote agent and Node-RED into 512mb of RAM is a bit of a squeeze.

I might give Visual Code a go as my daily driver in the new year.

Setting up mDNS CNAME entries for K8S Ingress Hostnames

As I hinted at in the end of my last post, I’ve been looking for a way to take the hostnames setup for Kubernetes Ingress endpoints and turn them into mDNS CNAME entries.

When I’m building things I like to spin up a local copy where possible (e.g. microk8s on a Pi 4 for the Node-RED on Kubernetes setup and the Docker Compose environment on another Pi 4 for the previous version). These setups run on my local network at home, and while I have my own DNS server set up and running, I also make extensive use of mDNS to access the different services.

I’ve previously built little utilities to generate mDNS CNAME entries for both Nginx and Traefik reverse proxies using Environment Variables or Labels in a Docker environment, so I was keen to see if I can build the same for Kubernetes’ Ingress proxy.

Watching for new Ingress endpoints

The kubernetes-client node module supports watching certain endpoints, so it can be used to get notifications when an Ingress endpoint is created or destroyed.

const stream = client.apis.extensions.v1beta1.namespaces("default").ingresses.getStream({qs:{ watch: true}})
const jsonStream = new JSONStream()
stream.pipe(jsonStream)
jsonStream.on('data', async obj => {
  if (obj.type == "ADDED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  } else if (obj.type == "DELETED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  }
})
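The snippet above assumes client and JSONStream have already been created. A rough sketch of that setup, using the kubernetes-client and json-stream packages (treat the version string here as a placeholder):

const Client = require('kubernetes-client').Client
const config = require('kubernetes-client').config
const JSONStream = require('json-stream')

// fromKubeconfig() loads the credentials from ~/.kube/config by default
const client = new Client({ config: config.fromKubeconfig(), version: '1.13' })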

Creating the CNAME

For the previous versions I used a python library called mdns-publish to set up the CNAME entries. It works by sending DBUS messages to the Avahi daemon which actually answers the mDNS requests on the network. For this version I decided to try and send those DBUS messages directly from the app watching for changes in K8s.

The dbus-next node module allows working directly with the DBUS interfaces that Avahi exposes.

const dbus = require('dbus-next');
const bus = dbus.systemBus()
bus.getProxyObject('org.freedesktop.Avahi', '/')
.then( async obj => {
	const server = obj.getInterface('org.freedesktop.Avahi.Server')
	const entryGroupPath = await server.EntryGroupNew()
	const entryGroup = await bus.getProxyObject('org.freedesktop.Avahi',  entryGroupPath)
	const entryGroupInt = entryGroup.getInterface('org.freedesktop.Avahi.EntryGroup')
	var interface = -1                 // IF_UNSPEC - publish on all network interfaces
	var protocol = -1                  // PROTO_UNSPEC - both IPv4 and IPv6
	var flags = 0
	var name = host
	var clazz = 0x01                   // DNS class IN
	var type = 0x05                    // DNS record type CNAME
	var ttl = 60
	var rdata = encodeFQDN(hostname)   // CNAME target in DNS wire format
	entryGroupInt.AddRecord(interface, protocol, flags, name, clazz, type, ttl, rdata)
	entryGroupInt.Commit()
})
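Avahi expects the rdata for a CNAME record to be the target name in DNS wire format (length-prefixed labels terminated by a zero byte). The encodeFQDN() helper isn't shown above, but one possible implementation looks like this:

function encodeFQDN(name) {
  const labels = name.split('.').filter(Boolean)
  const parts = labels.map(label => {
    const buf = Buffer.alloc(label.length + 1)
    buf.writeUInt8(label.length, 0)   // length prefix
    buf.write(label, 1)               // the label itself
    return buf
  })
  parts.push(Buffer.from([0]))        // root label terminates the name
  return Buffer.concat(parts)
}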

Add a signal handler to clean up when the app gets killed and we are pretty much good to go.

process.on('SIGTERM', cleanup)
process.on('SIGINT', cleanup)
function cleanup() {
	const keys = Object.keys(cnames)
	for (const k in keys) {
		cnames[keys[k]].Reset()
		cnames[keys[k]].Free()
	}
	bus.disconnect()
	process.exit(0)
}

Running

Once it’s all put together it runs as follows:

$ node index.js /home/ubuntu/.kube/config ubuntu.local

The first argument is the path to the kubectl config file and the second is the hostname the CNAME should point to.

If the Ingress controller is running on ubuntu.local then Ingress YAML would look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: manager-ingress
spec:
  rules:
  - host: "manager.ubuntu.local"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: manager
            port:
              number: 3000 

I’ve tested this with my local microk8s install and it is working pretty well (even on my folks’ really sketchy wifi). The code is all up here.

Multi Tenant Node-RED Working Example

I’ve now completed all the parts I outlined in the first post in this series and it’s time to put it all together to end up with a running system.

Since the stack is already running on Docker, using docker-compose to assemble the orchestration seemed like the right thing to do. The whole docker-compose project can be found on GitHub here.

Once you’ve checked out the project, run the setup.sh script with the root domain as the first argument. This will do the following:

  • Check out the submodules (Management app and Mongoose schema objects).
  • Create and set the correct permissions on the required local directories that are mounted as volumes.
  • Build the custom-node-red docker container that will be used for each of the Node-RED instances.
  • Change the root domain for all the virtual hosts to match the value passed in as an argument, or, if left blank, the current hostname with .local appended.
  • Create the Docker network that all the images will be attached to.
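For example, to host the instances under example.com the script would be run as follows (substitute your own root domain):

$ ./setup.sh example.com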

The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.

version: "3.3"

services:
  nginx:
    image: jwilder/nginx-proxy
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    ports:
      - "80:80"

  mongodb:
    image: mongo
    networks:
      - internal
    volumes:
      - "./mongodb:/data/db"
    ports:
      - "27017:27017"

  registry:
    image: verdaccio/verdaccio
    networks:
      - internal
    volumes:
      - "./registry:/verdaccio/conf"
    ports:
      - "4873:4873"

  manager:
    image: manager
    build: "./manager"
    depends_on:
      - mongodb
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
      - "./catalogue:/data"
    environment:
      - "VIRTUAL_HOST=manager.example.com"
      - "MONGO_URL=mongodb://mongodb/nodered"
      - "ROOT_DOMAIN=docker.local"


networks:
  internal:
    external:
      name: internal

It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage so that this data is persisted across restarts.

The ports for direct access to the MongoDB and the registry are currently exposed on the docker host. In a production environment it would probably make sense to not expose the MongoDB instance as it is currently unsecured, but it’s useful while I’ve been developing/debugging the system.

The port for the private NPM registry is also exposed to allow packages to be published from outside the system.

And finally everything binds to the network that was created by the setup.sh script, which is used for all the containers to communicate. This also means that the containers can address each other by their names, as Docker runs a DNS resolver on the custom network.

The management application is also directly exposed as manager.example.com. You should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.

Conclusions

This is about as bare bones as a Multi Tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup based memory and CPU limits), but it should be enough for a classroom deployment allowing each student to have their own instance.

Next steps would be to look at what more the management app could do, e.g. expose the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.

If you build something based on this please let both me and the Node-RED community know.

And if you are looking for somebody to help you build on this please get in touch.

Advanced Multi Tenant Node-RED topics

Across the last 6 posts I’ve walked through deploying a Multi Tenant Node-RED service. In this post I’m going to talk about how you go about customising the install to make it more specific to your needs.

Custom Node Catalogue

As I mentioned in the post about creating the custom Docker container, you can use the nodesExcludes entry in settings.js to disable nodes and, if needed, the package.json can be edited to remove any core nodes that you want.

But you might have a collection of nodes that are specific to your deployment that are not of use outside your environment and you may not want to publish to npmjs.org. In extreme cases you may even want to remove all the core nodes and only allow users to use your own set of nodes.

Node-RED downloads the list of nodes available to install from catalogue.nodered.org and allows you to either add additional URLs to pull in a list of extra nodes or replace the default URL with a custom list. The documentation for this can be found here. The list is kept in the settings.js under the editorTheme entry.

...
editorTheme: {
  palette: {
    catalogues: [
      "http://catalogue.nodered.org/catalogue.json", //default catalogue
      'http://manager.example.com/catalogue.json'
    ]
  }
},
...

The URL should point to a JSON file that has the following format

{
  "name":"Ben's custom catalogue",
  "updated_at": "2016-08-05T18:37:50.673Z",
  "modules": [
    {
      "id": "@ben/ben-red-random",
      "version": "1.3.0",
      "description": "A node-red node that generates random numbers",
      "keywords": [
        "node-red",
        "random"
      ],
      "updated_at": "2016-08-05T18:37:50.673Z",
      "url": "http://flows.example.com/node/ben-red-random"
    },
    ...
  ]
}

There is a small script called build-catalogue.js in the manager app that will generate a catalogue.json file from a given npm repository.

Now that there is a custom list of nodes, we need to be able to install them with the npm command. There are a few options:

  • Publish the nodes to npmjs.org but without the node-red keyword so they don’t end up being indexed on flows.nodered.org (and don’t manually submit them, as you now need to do)
  • Publish the node to a private npm repository that also acts as a pass through proxy to npmjs.org
  • Publish the node to a private npm repository with a scope and configure npm to use a different repository for a given scope.

The first option doesn’t need anything special setting up; you just add the nodes you want to the catalogue. The second two options need a private npm package repository. For the second, it needs to act as a pass through so all the dependencies can also be loaded, which is what makes the third option probably the best.

I’ve been playing with running Verdaccio on the same Docker infrastructure as everything else and setting npm to map the @private scope to http://registry:4873 in the custom Docker container.

...
RUN npm config set @private:registry http://registry:4873
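Publishing one of your own nodes into that registry is then standard npm, run from the node's source directory, assuming the package is named under the @private scope (the scope and package name here are purely illustrative):

$ npm adduser --registry http://registry:4873
$ npm publish --registry http://registry:4873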

Verdaccio can also proxy to npmjs.org if needed. (Like the nginx-proxy container, I had to rebuild the Verdaccio container to get it to run on my Pi 4 since it only ships an AMD64 version.)

I’m hosting my catalogue.json from the same Express application as is used to provision new Node-RED instances.

Screen shot of nodes listed on Verdaccio

Or if you want to prevent users being able to install/remove nodes then you can add:

...
editorTheme: {
  palette: {
    editable: false
  }
},
...

Skinning Node-RED

The last thing on my list is to give the Node-RED instances a custom look and feel.

The basics like the page title, header image, favicon and logon screen graphic can all be set directly from the settings.js, with the option to also link to a custom CSS style sheet so the colour scheme and shape/size of elements can be changed as well. The design document can be found here.

editorTheme: {
  projects: {
    // To enable the Projects feature, set this value to true
    enabled: false
  },
  page: {
    title: "Ben-RED"
  },
  header: {
    title: "Ben-RED"
  },
  palette: {
    catalogues: [
      'https://catalogue.nodered.org/catalogue.json',
      'http://manager.example.com/catalogue.json'
    ]
  }
},

Managing Multi Tenant Node-RED Instances

Over the last series of posts I’ve outlined how to build all the components that are needed for a Multi Tenant Node-RED service. What’s missing is a way to automate the spinning up of new instances.

One option would be to do this with Node-RED itself, as you can drive Docker using node-red-contrib-dockerode, but for this I’ve decided to create a dedicated application.

I’ve written a small Express app that uses dockerode directly; it populates the Users collection in MongoDB with the admin password for the editor and then spins up a new instance.

docker.createContainer({
  Image: "custom-node-red",
  name: req.body.appname,
  Env: [
    // VIRTUAL_HOST is picked up by the nginx-proxy container to route requests to this instance
    "VIRTUAL_HOST=" + req.body.appname + "." + settings.rootDomain,
    // APP_NAME and MONGO_URL are read by the storage/auth plugins in settings.js
    "APP_NAME="+ req.body.appname,
    "MONGO_URL=mongodb://mongodb/nodered"
  ],
  AttachStdin: false,
  AttachStdout: false,
  AttachStderr: false,
  HostConfig: {
    Links: [
      "mongodb:mongodb"
    ]
  }
})
.then(container => {
  console.log("created");
  cont = container;
  return container.start()
})
.then(() => {
  res.status(201).send({started: true, url: "http://" + req.body.appname + "." + settings.rootDomain});
})

It’s pretty basic but it does just enough to get started. It is exposed using the same nginx-proxy that exposes the Node-RED instances, so the management interface is available on the manager.docker-pi.local domain. If it were deployed in a production environment it should probably not be internet facing and should have some basic access control to prevent anybody from being able to stand up a new Node-RED instance.

When the app has completed creating a new instance a link to that instance is displayed.

You can also Start/Stop/Remove the instance as well as streaming the logs via a websocket.
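Stopping and removing an instance (and streaming its logs) map onto the matching dockerode calls. A minimal sketch, assuming the container was named with the appname as above:

const container = docker.getContainer(req.body.appname)

// stop and then delete the container
container.stop()
  .then(() => container.remove())
  .then(() => res.send({removed: true}))

// with follow: true, logs() resolves to a stream that can be piped down a websocket
container.logs({follow: true, stdout: true, stderr: true})
  .then(stream => stream.pipe(process.stdout))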

Thanks to Dave Conway-Jones (@ceejay) for help with making my very utilitarian UI look a lot better.

The code for the management app is on GitHub here.

Custom Node-RED container for a Multi Tenant Environment

In this post I’ll talk about building a custom Node-RED Docker container that adds the storage and authentication plugins I built earlier, along with disabling a couple of the core nodes that don’t make much sense on a platform where the local disk isn’t really usable.

Settings.js

The same modifications to the settings.js from the last two posts need to be added to enable the storage and authentication modules. The MongoDB URI and the AppName are populated from environment variables so that we can pass in different values when starting the container.

We will also add the nodesExcludes entry, which removes the nodes that interact with files on the local file system, as we don’t want users saving things into the container which would get lost if we have to restart it. It also removes the exec node since we don’t want users running arbitrary commands inside the container.

Setting the autoInstallModules to true means that if the container gets restarted then any extra nodes the user has installed with the Palette Manager will get reinstalled.

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
},
adminAuth: require('node-red-contrib-auth-mongodb').setup({
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
}),
nodesExcludes:['90-exec.js','28-tail.js','10-file.js','23-watch.js'],
autoInstallModules: true,
...

Dockerfile

This is a pretty easy extension of the default Node-RED container. We add the modified settings.js from above to the /data directory in the container (and set the permissions on it) and then install the plugins. Everything else stays the same.

FROM nodered/node-red

COPY settings.js /data/
USER root
RUN chown -R node-red:root /data
USER node-red
RUN npm install --no-fund --no-update-notifier --save node-red-contrib-auth-mongodb node-red-contrib-storage-mongodb

We can build this with the following command

$ docker build -t custom-node-red .

It is important to note here that I’ve not given a version tag in the FROM line, so every time the container is rebuilt it will pull the very latest shipped version. This might not be what you want for a deployed environment, where making sure all users are on the same version is probably a good idea from a support point of view. It may also be useful to use the -minimal tag suffix to use the version of the container based on Alpine to reduce the size.

Starting an instance

You can start a new instance with the following command

$ docker run -d --rm -p 1880:1880 -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 --name r1 custom-node-red

In this example I’ve mapped the container’s port 1880 to the host, but that will only work for a single container (or every container will need to be on a different port on the host). For a Multi Tenant solution we need something different, and I’ll cover that in the next post.

Node-RED Authentication Plugin

Next in the Multi Tenant Node-RED series is authentication.

If you are going to run a multi user environment, one of the key features will be identifying which users are allowed to do what and where. We need to only allow users to access their own specific instances of Node-RED.

Node-RED provides a couple of options. The simplest is just to include the username/password/permissions details directly in the settings.js, but this doesn’t allow for dynamic updates like adding/removing users or changing passwords.

// Securing Node-RED
// -----------------
// To password protect the Node-RED editor and admin API, the following
// property can be used. See http://nodered.org/docs/security.html for details.
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
        permissions: "*"
    }]
},
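The password value is a bcrypt hash rather than the plain text password; it can be generated with the node-red-admin command line tool (installed separately via npm):

$ npm install -g node-red-admin
$ node-red-admin hash-pw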

The documentation also explains how to use PassportJS strategies to authenticate against oAuth providers, meaning you can do things like have users sign in with their Twitter credentials or use an existing Single Sign On solution if you want.

And finally the documentation covers how to implement your own authentication plugin, which is what I’m going to cover in this post.

In the past I have built a version of this type of plugin that uses LDAP but in this case I’ll be using MongoDB. I’ll be using the same database that I used in the last post about building a storage plugin. I’m also going to use Mongoose to wrap the collections and I’ll be using the passport-local-mongoose plugin to handle the password hashing.

const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const passportLocalMongoose = require('passport-local-mongoose');

const Users = new Schema({
  appname: String,
  username: String,
  email: String,
  permissions: { type: String, default: '*' },
});

var options = {
  usernameUnique: true,
  saltlen: 12,
  keylen: 24,
  iterations: 901,
  encoding: 'base64'
};

Users.plugin(passportLocalMongoose,options);
Users.set('toObject', {getters: true});
Users.set('toJSON', {getters: true});

module.exports = mongoose.model('Users',Users);

We need the username and permissions fields for Node-RED; the appname is the same as in the storage plugin, allowing us to keep the details for multiple Node-RED instances in the same collection. I added the email field as a way to contact the user if we need to do something like a password reset. You can see that there is no password field; this is all handled by the passport-local-mongoose code, which injects salt and hash fields into the schema and adds methods like authenticate(password) to the returned objects to check a supplied password against the stored hash.

Required API

There are three functions that need to be implemented for Node-RED:

users(username)

This function just checks if a given user exists for this instance

  users: function(username) {
    return new Promise(function (resolve, reject){
      Users.findOne({appname: appname, username: username}, {username: 1, permissions: 1})
      .then(user => {
        resolve({username: user.username, permissions: user.permissions});
      })
      .catch(err => {
        reject(err)
      })
    });
  },

authenticate(username, password)

This does the actual checking of the supplied password against the database. It looks up the user with the username and appname and then has passport-local-mongoose check it against the hash in the database.

authenticate: function(username, password) {
      return new Promise(function(resolve, reject){
        Users.findOne({appname: appname, username})
        .then((user) => {
          user.authenticate(password, function(e,u,pe){
            if (u) {
              resolve({username: u.username, permissions: u.permissions})
            } else {
              resolve(null);
            }
          })
        })
        .catch(err => {
          reject(err)
        })
      }) 
  },

default()

In this case we don’t want a default user so we just return a null object.

default: function() {
    return Promise.resolve(null);
  }

Extra functions

setup(settings)

Since authentication plugins do not get the whole settings object passed in like the storage plugins do, we need to include a setup() function to allow details about the MongoDB to be passed in.

type: "credentials",
setup: function(settings) {
  if (settings && settings.mongoURI) {
    appname = settings.appname;
    mongoose.connect(settings.mongoURI, mongoose_options)
    .catch(err => {
      throw err;
    });
  }
  return this;
},

Using the plugin

This is again similar to the storage plugin where an entry is made in the settings.js file. The difference is that this time the settings object isn’t explicitly passed to the plugin so we need to include the call to setup in the entry.

...
adminAuth: require('node-red-contrib-auth-mongodb').setup({
   mongoURI: "mongodb://localhost/nodered",
   appname: "r1"
})

How to add users to the database will be covered in a later post about managing the creation of new instances.

Source code

You can find the code here and it’s on npmjs here

Node-RED Storage Plugin

As part of my series of posts about the components needed to build a Multi Tenant Node-RED system, in this post I’ll talk about writing a plugin to store the user’s flow in the database rather than on disk.

There are a number of existing Storage plugins, such as the default local filesystem and the CloudantDB plugin that is used with Node-RED in the IBM Cloud.

I’m going to use MongoDB as the backend storage and the Mongoose library to wrap the reading/writing to the database (I’ll be reusing the Mongoose schema definitions later in the Authentication plugin and the app to manage Node-RED instances).

The documentation for the Storage API can be found here. There are a number of methods that a plugin needs to provide:

init()

This sets up the plugin, it reads the settings and then opens the connection to the database.

init: function(nrSettings) {
  settings = nrSettings.mongodbSettings || {};

  if (!settings) {
    var err = Promise.reject("No MongoDB settings for flow storage found");
    err.catch(err => {});
    return err;
  }

  appname = settings.appname;

  return new Promise(function(resolve, reject){
    mongoose.connect(settings.mongoURI, mongoose_options)
    .then(() => {
      resolve();
    })
    .catch(err => {
      reject(err);
    });
  })
},

getFlows()/saveFlows(flows)

Here we retrieve/save the flow to the database. If there isn’t a current flow (such as the first time the instance is run) we need to return an empty array ([]).

getFlows: function() {
  return new Promise(function(resolve, reject) {
    Flows.findOne({appname: appname}, function(err, flows){
      if (err) {
        reject(err);
      } else {
        if (flows){
          resolve(flows.flow);
        } else {
          resolve([]);
        }
      }
    })
  })
},
saveFlows: function(flows) {
  return new Promise(function(resolve, reject) {
    Flows.findOneAndUpdate({appname: appname},{flow: flows}, {upsert: true}, function(err,flow){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

The upsert: true in the options passed to findOneAndUpdate() triggers an insert if there isn’t an existing matching document.
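The Flows model used above isn’t shown in this post; based on the fields it is queried with (appname and flow), the Mongoose schema looks something like this sketch (the Credentials and Library models follow the same pattern with their own fields):

const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const Flows = new Schema({
  appname: String,                                 // which Node-RED instance this flow belongs to
  flow: { type: Schema.Types.Mixed, default: [] }  // the deployed flow JSON
});

module.exports = mongoose.model('Flows', Flows);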

getCredentials()/saveCredentials(credentials)

Here we had to convert the credentials object to a string because MongoDB doesn’t like root object keys that start with a $ (the encrypted credentials string is held in the $_ entry in the object).

getCredentials: function() {
  return new Promise(function(resolve, reject) {
    Credentials.findOne({appname: appname}, function(err, creds){
      if (err) {
        reject(err);
      } else {
        if (creds){
          resolve(JSON.parse(creds.credentials));
        } else {
          resolve({});  
        }
      }
    })
  })
},
saveCredentials: function(credentials) {
  return new Promise(function(resolve, reject) {
    Credentials.findOneAndUpdate({appname: appname},{credentials: JSON.stringify(credentials)}, {upsert: true}, function(err,credentials){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

getSessions()/saveSessions(sessions)/getSettings()/saveSettings(settings)

These are pretty much just carbon copies of the getFlows()/saveFlows(flows) functions since they are just storing/retrieving a single JSON object.

getLibraryEntry(type,name)/saveLibraryEntry(type,name,meta,body)

saveLibraryEntry(type,name,meta,body) is pretty standard with a little bit of name manipulation to make it look more like a file path.

getLibraryEntry(type,name) needs a bit more work, as we need to build the directory structure as well as being able to return the actual file content.

getLibraryEntry: function(type,name) {
  if (name == "") {
    name = "/"
  } else if (name.substr(0,1) != "/") {
    name = "/" + name
  }

  return new Promise(function(resolve,reject) {
    Library.findOne({appname: appname, type: type, name: name}, function(err, file){
      if (err) {
        reject(err);
      } else if (file) {
        resolve(file.body);
      } else {
        var reg = new RegExp('^' + name , "");
        Library.find({appname: appname, type: type, name: reg }, function(err, fileList){
          if (err) {
            reject(err)
          } else {
            var dirs = [];
            var files = [];
            for (var i=0; i<fileList.length; i++) {
              var n = fileList[i].name;
              n = n.replace(name, "");
              if (n.indexOf('/') == -1) {
                var f = fileList[i].meta;
                f.fn = n;
                files.push(f);
              } else {
                n = n.substr(0,n.indexOf('/'))
                dirs.push(n);
              }
            }
            dirs = dirs.concat(files);
            resolve(dirs);
          }
        })
          
      }
    })
  });
},
saveLibraryEntry: function(type,name,meta,body) {
  return new Promise(function(resolve,reject) {
    var p = name.split("/");    // strip multiple slash
    p = p.filter(Boolean);
    name = p.slice(0, p.length).join("/")
    if (name != "" && name.substr(0, 1) != "/") {
      name = "/" + name;
    }
    Library.findOneAndUpdate({appname: appname, name: name}, 
      {name:name, meta:meta, body:body, type: type},
      {upsert: true, useFindAndModify: false},
      function(err, library){
        if (err) {
          reject(err);
        } else {
          resolve();
        }
      });
  });
}

Using the plugin

To use the plugin you need to include it in the settings.js file. This is normally found in the userDir (the location of this file is logged when Node-RED starts up).

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
    mongoURI: "mongodb://localhost/nodered",
    appname: "r1"
},
...

The mongodbSettings object contains the URI for the database and the appname is a unique identifier for this Node-RED instance allowing the same database to be used for multiple instances of Node-RED.

Source code

You can find the code for this module here and it’s hosted on npmjs here

Multi Tenant Node-RED

I was recently approached by a company that wanted to sponsor adding Multi Tenant support to Node-RED. This would be to enable multiple users to run independent flows on a single Node-RED instance.

This is really not a good idea for a number of reasons, but mainly it is because the NodeJS runtime is inherently single threaded and there is no way to get real separation between different users. For example, if one user uses a poorly written node (or function in a Function node) it is possible to block the event loop, starving out all the other users, or in extreme cases an uncaught asynchronous exception will cause the whole application to exit.

The best approach is to run a separate instance of Node-RED per user, which gives you the required separation between users. The usual way to do this is to use a container based system and a HTTP reverse proxy to control access to the instances.

Over the next month’s worth of posts I’ll outline the components required to build a system like the one I described, and at the end should hopefully have a fully working Proof of Concept that people can use as the base to build their own deployments.

As the future posts are published I will add links here.

Required components

Flow Storage

Because containers file systems are not persistent we are going to need somewhere to reliably store the flows each user creates.

Node-RED supplies an API that lets you control how/where flows (and a bunch of other things that would normally end up on disk) are stored.

Authentication

We are going to need a way to only allow the right users access to the Node-RED editor. Again, there is a plugin API that allows this to be wired into nearly any existing authentication source of your choice.

I wrote a simple implementation of this API which uses LDAP as the source of users not long after Node-RED was released. But in this series of posts I’ll write a new one that uses the same backend database as the flow storage plugin.

Custom Container

Once we’ve built storage and authentication plugins we will need to build a custom version of the Node-RED Docker container that includes these extras.

HTTP Reverse Proxy

Now we have a collection of containers, each running a user’s instance of Node-RED, we are going to need a way to expose these to the outside world. Since Node-RED is a web application, an HTTP Reverse Proxy is probably the right way forward.

Management

Once all of the other pieces are in place, we need a way to control the creation/deletion of Node-RED instances. It will also be useful to be able to see the instances’ logs.

Extra bits

Finally I’ll cover some extra bits that help make a deployment specific to a particular environment, such as

Custom node repository

This allows you to host your own private nodes and still install them using the Manage Palette menu.

Custom Theme

Tweaking page titles and colour schemes can make Node-RED fit in better with your existing look and feel.

Working Example