I’ve now completed all the parts I outlined in the first post in this series and it’s time to put it all together to end up with a running system.
Since the stack is already running on Docker, using docker-compose to tie it all together seemed like the right thing to do. The whole docker-compose project can be found on GitHub here.
Once you’ve checked out the project, run the setup.sh script with the root domain as the first argument. This will do the following (a rough sketch of these steps as shell commands follows the list):
- Check out the submodules (the management app and the Mongoose schema objects).
- Create and set the correct permissions on the required local directories that are mounted as volumes.
- Build the custom-node-red Docker container that will be used for each of the Node-RED instances.
- Change the root domain for all the virtual hosts to match the value passed in as an argument, or, if left blank, the current hostname with .local appended.
- Create the Docker network that all the containers will be attached to.
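As a sketch only, those steps boil down to something like the commands below. The directory names, the permissions and the custom-node-red build context are assumptions based on the list above, not taken from the actual script, so treat the real setup.sh in the repository as the source of truth.

#!/bin/bash
# Rough sketch of setup.sh - see the real script in the repository.
# Usage: ./setup.sh example.com
ROOT_DOMAIN=${1:-$(hostname).local}

# check out the management app and Mongoose schema submodules
git submodule update --init --recursive

# local directories that docker-compose mounts as volumes
mkdir -p mongodb registry catalogue
# make them writable by the containers (exact mode is a guess)
chmod 777 mongodb registry catalogue

# build the image used for every Node-RED instance
docker build -t custom-node-red ./custom-node-red

# swap the placeholder domains in the compose file for the real root domain
sed -i "s/example.com/${ROOT_DOMAIN}/g; s/docker.local/${ROOT_DOMAIN}/g" docker-compose.yaml

# shared network that all the containers attach to
docker network create internal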
The following docker-compose.yaml file starts the Nginx proxy, the MongoDB instance, the custom npm registry and finally the management application.
version: "3.3"
services:
  nginx:
    image: jwilder/nginx-proxy
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
    ports:
      - "80:80"
  mongodb:
    image: mongo
    networks:
      - internal
    volumes:
      - "./mongodb:/data/db"
    ports:
      - "27017:27017"
  registry:
    image: verdaccio/verdaccio
    networks:
      - internal
    volumes:
      - "./registry:/verdaccio/conf"
    ports:
      - "4873:4873"
  manager:
    image: manager
    build: "./manager"
    depends_on:
      - mongodb
    networks:
      - internal
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock"
      - "./catalogue:/data"
    environment:
      - "VIRTUAL_HOST=manager.example.com"
      - "MONGO_URL=mongodb://mongodb/nodered"
      - "ROOT_DOMAIN=docker.local"
networks:
  internal:
    external:
      name: internal
It mounts local directories for the MongoDB storage and the Verdaccio registry configuration/storage, so that this data is persisted across restarts.
The ports for direct access to MongoDB and the registry are currently exposed on the Docker host. In a production environment it would probably make sense not to expose the MongoDB instance, as it is currently unsecured, but it has been useful while developing and debugging the system.
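For example, with the port open you can poke at the instance data straight from the host, assuming the mongo shell (or mongosh) is installed locally; nodered is the database name from the MONGO_URL setting:

# list the collections the manager has created so far
mongo mongodb://localhost:27017/nodered --eval 'db.getCollectionNames()'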
The port for the private NPM registry is also exposed to allow packages to be published from outside the system.
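Publishing to the Verdaccio registry from the host is the standard npm flow pointed at the private registry. This assumes the chosen root domain (docker.local here) resolves to the Docker host, and node-red-contrib-example is just a stand-in for your own package directory:

# create a user on the private registry (Verdaccio allows self-registration by default)
npm adduser --registry http://docker.local:4873

# publish from the package directory
cd node-red-contrib-example
npm publish --registry http://docker.local:4873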
And finally it attaches everything to the network that was created by the setup.sh script, which is used for all the containers to communicate. This also means that the containers can address each other by name, as Docker runs a DNS resolver on the custom network.
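With setup.sh run, the whole stack comes up with the usual docker-compose commands:

# start everything in the background
docker-compose up -d

# watch the management app's logs to check it started cleanly
docker-compose logs -f manager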
The management application is also directly exposed as manager.example.com. You should be able to log in with the username admin and the password password (both can be changed in the manager/settings.js file) and create a new instance.
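If the root domain you picked doesn't resolve yet, you can still check that the nginx proxy is routing on the VIRTUAL_HOST value by faking the Host header (substitute whatever domain you passed to setup.sh):

# should return a response from the management app via the proxy
curl -H "Host: manager.example.com" http://localhost/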
Conclusions
This is about as bare bones as a Multi Tenant Node-RED deployment can get. There are still a lot of things that would need to be considered for a commercial offering based on something like this (things like cgroup based memory and CPU limits), but it should be enough for a classroom deployment allowing each student to have their own instance.
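As a taste of what's missing, Docker can already apply cgroup limits to a running container, so something along these lines could be bolted on per instance (the container name and the values are just examples):

# cap an instance at 256MB of RAM and half a CPU core
docker update --memory 256m --memory-swap 256m --cpus 0.5 nodered-instance-1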
Next steps would be to look at what more the management app could do, e.g. exposing the logs to the instance admins, and to look at what would be needed to deploy this to something like OpenShift.
If you build something based on this please let both me and the Node-RED community know.
And if you are looking for somebody to help you build on this, please get in touch.