Multi Tenant Node-RED Working Example

I’ve now completed all the parts I outlined in the first post in this series, so it’s time to put them all together into a running system.

Since the stack is already running on Docker, using docker-compose to orchestrate everything seemed like the right approach. The whole docker-compose project can be found on GitHub here.

Once you’ve checked out the project, run the setup.sh script with the root domain as the first argument. This will do the following:

  • Checks out the submodules (the management app and the Mongoose schema objects).
  • Creates and sets the correct permissions on the required local directories that are mounted as volumes.
  • Builds the custom-node-red Docker container that will be used for each of the Node-RED instances.
  • Changes the root domain for all the virtual hosts to match the value passed in as an argument or, if left blank, the current hostname with .local appended.
  • Creates the Docker network that all the containers will be attached to.
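
For example, with example.com standing in for whatever root domain you want to use:

# Run from the root of the checked-out project; with no argument
# the script falls back to <hostname>.local
./setup.sh example.com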

The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.

version: "3.3"

services:
  nginx:
    image: jwilder/nginx-proxy
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    ports:
      - "80:80"

  mongodb:
    image: mongo
    networks:
      - internal
    volumes:
      - "./mongodb:/data/db"
    ports:
      - "27017:27017"

  registry:
    image: verdaccio/verdaccio
    networks:
      - internal
    volumes:
      - "./registry:/verdaccio/conf"
    ports:
      - "4873:4873"

  manager:
    image: manager
    build: "./manager"
    depends_on:
      - mongodb
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
      - "./catalogue:/data"
    environment:
      - "VIRTUAL_HOST=manager.example.com"
      - "MONGO_URL=mongodb://mongodb/nodered"
      - "ROOT_DOMAIN=docker.local"


networks:
  internal:
    external:
      name: internal

It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage so that this data is persisted across restarts.

The ports for direct access to MongoDB and the registry are currently exposed on the Docker host. In a production environment it would probably make sense not to expose the MongoDB instance, as it is currently unsecured, but it has been useful while developing and debugging the system.
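
While developing, that makes it easy to inspect the data directly from the host; for example, with a locally installed mongo shell and the nodered database named in MONGO_URL:

# Connect to the exposed MongoDB instance from the docker host
mongo mongodb://localhost:27017/nodered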

The port for the private NPM registry is also exposed to allow packages to be published from outside the system.
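
For example, a custom node package could be published from a development machine with something like this (docker.local standing in for whichever host name the registry is exposed on):

# Create/authenticate a user on the private registry, then publish
npm adduser --registry http://docker.local:4873
npm publish --registry http://docker.local:4873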

And finally, everything binds to the network that was created by the setup.sh script, which is what all the containers use to communicate. It also means the containers can address each other by name, as Docker runs a DNS resolver on the custom network.
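
The network itself is an ordinary user-defined bridge network; setup.sh creates the equivalent of the first command below, and the second is a quick way to confirm the name-based addressing once the stack is up (alpine is just a throwaway container for the test):

# Create the shared network the compose file attaches to
docker network create internal

# Any container on that network can resolve the others by name
docker run --rm --network internal alpine ping -c 1 mongodb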

The management application is also directly exposed as manager.example.com. You should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.
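
To recap the startup sequence: once setup.sh has run, the rest of the stack comes up with the usual compose commands, after which the manager should be reachable through the Nginx proxy:

# Start nginx, mongodb, registry and manager in the background
docker-compose up -d

# Follow the manager's output while testing
docker-compose logs -f manager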

Conclusions

This is about as bare bones as a multi-tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup-based memory and CPU limits), but it should be enough for a classroom deployment, allowing each student to have their own instance.
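
For example, those limits can be applied with Docker's standard resource flags when an instance container is started. A minimal sketch, where the image name matches the custom-node-red image built by setup.sh and the container name and limit values are purely illustrative:

# Cap a single Node-RED instance at 256MB of RAM and half a CPU
docker run -d --name nodered-student1 \
  --network internal \
  --memory 256m --cpus 0.5 \
  custom-node-red

In practice the manager creates the instance containers through the Docker API, so the same limits would need to be set there rather than on the command line.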

Next steps would be to look at what more the management app could do, e.g. expose the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.
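
For example, exposing logs could start out as little more than the manager surfacing the equivalent of docker logs for each instance's container (the container name here is purely illustrative):

# Tail the last 100 lines of an instance's log and keep following it
docker logs --tail 100 -f nodered-student1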

If you build something based on this, please let both me and the Node-RED community know.

And if you are looking for somebody to help you build on this, please get in touch.

4 thoughts on “Multi Tenant Node-RED Working Example”

  1. Great job, it’s exactly what I was looking for in our school environment.
    I was just wondering if it is possible to run two separate instances on the same server, say one for each class with their own manager.
    Since I am a Docker newbie, how would I go about this?
    Thanks

    1. It’s important to remember that this was intended to be a PoC and a starting point to build more complicated systems.

      It is possible, but it would be non-trivial. It would be an opportunity for a LOT of learning.

      I’m making a bunch of assumptions about what you mean by “same server” here (real hardware, single IP address), but you would need to look at the following:

      • All the classes would need to share the same Nginx instance
      • You’d probably want each class’s instances on their own separate virtual network to help enforce separation
      • Each class would need their own MongoDB instance, as the current schema assumes only one manager

      How complicated the flows the students will be generating are, and how many students there are, will have a direct impact on the size of machine you would need. You could probably run an introductory class with, say, 10 students on a 4GB or 8GB Raspberry Pi 4. For larger or more complicated classes you might want to run it on a larger machine, or if needed you can use Docker Swarm to run it on a cluster of machines.

      If you can run virtual machines on top of the “server”, each with their own IP address, then it would be simpler, as they would appear to be separate servers.

      A comment on here isn’t really the right place to go deep into detail on this. If you need more help, drop me an email (address on my CV page) with more detail of the environment and we can explore what’s possible.

  2. I was indeed referring to the same hardware with a single IP address.
    I agree with you; the way to go is to use virtualization with two separate servers, each running their own instance on a separate subdomain.

    For right now, this setup works fine since we are running it in a controlled environment.

    Thanks again
