Proxying for Multiple Node-RED instances

So far we have worked out how to set up Node-RED to store flows in a database, use authentication to prevent unauthorised access and how to start multiple containerised instances under Docker.

In this post I will cover how to expose those multiple instances so their users can access them.

The easiest way to do this is to put something like Nginx or Traefik in front of the Docker containers and have it act as a reverse proxy. There are two ways we can set this up:

  • Virtual Host based proxying – where each instance has its own hostname, e.g. http://r1.docker.local
  • Path based proxying – where each instance has a root path on the same hostname, e.g. http://docker.local/r1
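As a rough illustration of the difference, hand-written nginx configuration for the two styles might look like the following sketch (the hostnames and the upstream port are assumptions for this example; Node-RED listens on 1880 by default):

```nginx
# Virtual Host based: each instance gets its own server block
server {
    server_name r1.docker.local;
    location / {
        proxy_pass http://r1:1880;
    }
}

# Path based: one hostname, with a location per instance
server {
    server_name docker.local;
    location /r1/ {
        proxy_pass http://r1:1880/;
    }
}
```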

In this case I’m going to use the first option, virtual hosts, because Node-RED currently uses browser local storage to hold the Admin API security token, and that storage is scoped to the host the editor is loaded from. With path based proxying this means you can only be logged in to one instance at a time. Fixing this is on the Node-RED backlog.

To do that I’m going to use the nginx-proxy container. This container runs in the same Docker environment as the Node-RED instances and watches the Docker event stream for containers starting and stopping. When it sees a new container start up it automatically creates the right entry in the nginx configuration files and triggers nginx to reload the config.
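Starting nginx-proxy itself looks something like this sketch, based on the project’s documented usage (it needs read-only access to the Docker socket so it can watch for container events):

```shell
# Run nginx-proxy, publishing port 80 and mounting the Docker socket
# read-only so it can react to containers starting and stopping.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy
```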

To make this work I needed to add an extra environment variable to the command used to start the Node-RED containers:

$ docker run -d --rm -e VIRTUAL_HOST=r1.docker.local -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 --name r1 --link mongodb custom-node-red

I added the VIRTUAL_HOST environment variable, which contains the hostname to use for this container. This means I can access this specific instance of Node-RED on http://r1.docker.local.

Traefik can be run in a similar way using labels instead of environment variables.
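With Traefik v2 the routing rule is attached to the container as a label rather than an environment variable. A rough equivalent of the command above might look like this (the router name `r1` is arbitrary, and this assumes Traefik’s Docker provider is enabled):

```shell
# Same Node-RED container, but routed by a Traefik v2 label
# instead of nginx-proxy's VIRTUAL_HOST variable.
docker run -d --rm \
  -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 \
  --label 'traefik.http.routers.r1.rule=Host(`r1.docker.local`)' \
  --name r1 --link mongodb custom-node-red
```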

To make it all work smoothly I’ve added a wildcard domain entry to my local DNS that maps anything matching *.docker.local to the docker-pi.local machine that the containers are running on.
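If your local DNS is dnsmasq, a wildcard entry like this would do it (the IP address here is a placeholder for the machine running the containers):

```
# /etc/dnsmasq.conf
# Resolve *.docker.local (and docker.local itself) to the Docker host
address=/docker.local/192.168.1.50
```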


If I was going to run this exposed to the internet I’d probably want to enable HTTPS. Again there are two options:

  • Use a separate certificate for each Virtual Host
  • Use a wildcard certificate that matches the wildcard DNS entry

I would probably go with the second option here, as it means just one certificate to manage, and even LetsEncrypt will issue wildcard certificates these days if you have access to the DNS (wildcards require the DNS-01 challenge, which means being able to create records in the zone).
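Requesting a wildcard certificate with certbot’s manual DNS-01 challenge looks roughly like this (the domain is a hypothetical public one for illustration; a .local domain would not work with LetsEncrypt):

```shell
# Manual DNS-01 challenge: certbot prints a TXT record to create
# in the zone, then validates it before issuing the certificate.
certbot certonly --manual --preferred-challenges dns \
  -d '*.nodered.example.com'
```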

For the first option there is a companion container for nginx-proxy that will use LetsEncrypt to issue certificates for each Virtual Host as it starts. It’s called letsencrypt-nginx-proxy-companion, and you can read how to use it in the nginx-proxy documentation.
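Based on that project’s documented setup, running the companion looks roughly like this sketch (it shares the certificate volumes with an nginx-proxy container assumed to be named `nginx-proxy`, and each Node-RED container would additionally set `LETSENCRYPT_HOST`):

```shell
# Companion container: watches the same Docker socket and issues
# a certificate for each container that declares LETSENCRYPT_HOST.
docker run -d \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion
```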


Exposing Node-RED via an HTTP proxy does have one drawback: only HTTP requests can directly reach the instances.

While this helps to offer a little more security, it also means that you will not be able to use the TCP-in or UDP-in nodes in server mode, which would allow arbitrary network connections into the instance. You will still be able to connect out from the instances to external hosts, as Docker provides NAT routing from containers to the outside world.


I’m testing all this on a Raspberry Pi 4 running the beta of the 64bit Raspberry Pi OS. I need this to get the official MongoDB container to work, as MongoDB only formally supports 64bit platforms. As a result I had to modify and rebuild the nginx-proxy container, because it only ships with support for the AMD64 architecture. I had to build ARM64 versions of the forego and docker-gen binaries and manually copy these into the container.

There is an outstanding pull-request open against the project to move to a multi-stage build, which will build target specific binaries of forego and docker-gen and fix this.
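Once that lands, a multi-architecture image could in principle be produced with docker buildx along these lines (a sketch; the image tag is hypothetical):

```shell
# Build and push a multi-arch image for both ARM64 and AMD64
# from the project's Dockerfile in the current directory.
docker buildx build --platform linux/arm64,linux/amd64 \
  -t my-registry/nginx-proxy:multiarch --push .
```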
