Custom Node-RED container for a Multi Tenant Environment

In this post I’ll talk about building a custom Node-RED Docker container that adds the storage and authentication plugins I built earlier, and disables a couple of the core nodes that don’t make much sense on a platform where the local disk isn’t really usable.

Settings.js

The same modifications to the settings.js from the last two posts need to be made to enable the storage and authentication modules. The MongoDB URI and the app name are populated from environment variables, so we can pass in different values when starting the container.

We will also add the nodesExcludes entry, which removes the nodes that interact with files on the local file system, as we don’t want users saving things into the container that will get lost if we have to restart it. It also removes the exec node, since we don’t want users running arbitrary commands inside the container.

Setting autoInstallModules to true means that if the container gets restarted, any extra nodes the user has installed with the Palette Manager will get reinstalled.

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
},
adminAuth: require('node-red-contrib-auth-mongodb').setup({
  mongoURI: process.env["MONGO_URL"],
  appname: process.env["APP_NAME"]
}),
nodesExcludes:['90-exec.js','28-tail.js','10-file.js','23-watch.js'],
autoInstallModules: true,
...

Dockerfile

This is a pretty easy extension of the default Node-RED container. We add the modified settings.js from above to the /data directory in the container (and set the permissions on it) and then install the plugins. Everything else stays the same.

FROM nodered/node-red

COPY settings.js /data/
USER root
RUN chown -R node-red:root /data
USER node-red
RUN npm install --no-fund --no-update-notifier --save node-red-contrib-auth-mongodb node-red-contrib-storage-mongodb

We can build this with the following command

$ docker build -t custom-node-red .

It is important to note here that I’ve not given a version tag in the FROM line, so every time the container is rebuilt it will pull the very latest shipped version. This might not be what you want for a deployed environment, where making sure all users are on the same version is probably a good idea from a support point of view. It may also be useful to use the -minimal tag suffix to get the Alpine-based version of the container and reduce the size.
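For example, a pinned minimal base image would look something like this (the exact tag is just an illustration; check Docker Hub for the currently available tags):

FROM nodered/node-red:1.1.3-minimal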

Starting an instance

You can start a new instance with the following command

$ docker run -d --rm -p 1880:1880 -e MONGO_URL=mongodb://mongodb/nodered -e APP_NAME=r1 --name r1 custom-node-red

In this example I’ve mapped the container’s port 1880 to the host, but that only works for a single container (otherwise every container would need a different port on the host). For a Multi Tenant solution we need something different, which I’ll cover in the next post.

Node-RED Authentication Plugin

Next in the Multi Tenant Node-RED series is authentication.

If you are going to run a multi-user environment, one of the key features will be identifying which users are allowed to do what and where. We need to only allow users to access their specific instances of Node-RED.

Node-RED provides a couple of options. The simplest is to include the username/password/permissions details directly in settings.js, but this doesn’t allow for dynamic updates like adding/removing users or changing passwords.

// Securing Node-RED
// -----------------
// To password protect the Node-RED editor and admin API, the following
// property can be used. See http://nodered.org/docs/security.html for details.
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
        permissions: "*"
    }]
},

The documentation also explains how to use PassportJS strategies to authenticate against OAuth providers, meaning you can do things like have users sign in with their Twitter credentials or use an existing Single Sign On solution if you want.

And finally the documentation covers how to implement your own authentication plugin, which is what I’m going to cover in this post.

In the past I have built a version of this type of plugin that uses LDAP but in this case I’ll be using MongoDB. I’ll be using the same database that I used in the last post about building a storage plugin. I’m also going to use Mongoose to wrap the collections and I’ll be using the passport-local-mongoose plugin to handle the password hashing.

const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const passportLocalMongoose = require('passport-local-mongoose');

const Users = new Schema({
  appname: String,
  username: String,
  email: String,
  permissions: { type: String, default: '*' },
});

var options = {
  usernameUnique: true,
  saltlen: 12,
  keylen: 24,
  iterations: 901,
  encoding: 'base64'
};

Users.plugin(passportLocalMongoose,options);
Users.set('toObject', {getters: true});
Users.set('toJSON', {getters: true});

module.exports = mongoose.model('Users',Users);

We need the username and permissions fields for Node-RED; the appname is the same as in the storage plugin and allows us to keep the details for multiple Node-RED instances in the same collection. I added the email field as a way to contact the user if we need to do something like a password reset. You can see that there is no password field: this is all handled by the passport-local-mongoose code, which injects salt and hash fields into the schema and adds methods like authenticate(password) to the returned objects that check a supplied password against the stored hash.
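As a rough illustration of how a user could be created with this schema, passport-local-mongoose also adds a static register() helper that hashes the password and fills in the salt/hash fields (the require path and the values here are just assumptions):

const Users = require('./users'); // the schema module shown above (path assumed)

Users.register(new Users({
  appname: 'r1',
  username: 'admin',
  email: 'admin@example.com',
  permissions: '*'
}), 'aSecretPassword', function(err, user) {
  if (err) {
    console.log(err);
  }
});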

Required API

There are three functions that need to be implemented for Node-RED:

users(username)

This function just checks if a given user exists for this instance.

  users: function(username) {
    return new Promise(function (resolve, reject){
      Users.findOne({appname: appname, username: username}, {username: 1, permissions: 1})
      .then(user => {
        if (user) {
          resolve({username: user.username, permissions: user.permissions});
        } else {
          resolve(null);
        }
      })
      .catch(err => {
        reject(err)
      })
    });
  },

authenticate(username, password)

This does the actual checking of the supplied password against the database. It looks up the user with the username and appname and then has passport-local-mongoose check it against the hash in the database.

authenticate: function(username, password) {
      return new Promise(function(resolve, reject){
        Users.findOne({appname: appname, username})
        .then((user) => {
          if (!user) {
            return resolve(null);
          }
          user.authenticate(password, function(e,u,pe){
            if (u) {
              resolve({username: u.username, permissions: u.permissions})
            } else {
              resolve(null);
            }
          })
        })
        .catch(err => {
          reject(err)
        })
      }) 
  },

default()

In this case we don’t want a default user so we just return a null object.

default: function() {
    return Promise.resolve(null);
  }

Extra functions

setup(settings)

Since authentication plugins do not get the whole settings object passed in like storage plugins do, we need to include a setup() function to allow the details about the MongoDB to be passed in.

type: "credentials",
setup: function(settings) {
  if (settings && settings.mongoURI) {
    appname = settings.appname;
    mongoose.connect(settings.mongoURI, mongoose_options)
    .catch(err => {
      throw err;
    });
  }
  return this;
},

Using the plugin

This is again similar to the storage plugin where an entry is made in the settings.js file. The difference is that this time the settings object isn’t explicitly passed to the plugin so we need to include the call to setup in the entry.

...
adminAuth: require('node-red-contrib-auth-mongodb').setup({
   mongoURI: "mongodb://localhost/nodered",
   appname: "r1"
})

How to add users to the database will be covered in a later post about managing the creation of new instances.

Source code

You can find the code here and it’s on npmjs here.

traefik-avahi-helper

Having built a helper container to advertise containers via mDNS when using the jwilder/nginx-proxy container, I decided to have a look at doing the same for Traefik.

The nginx-proxy container uses environment variables to hold the virtual host details, whereas Traefik uses container labels, e.g.

traefik.http.routers.r1.rule=Host(`r1.docker.local`)

The brackets and the back ticks mean you have to escape both if you try to add the label on the command line:

$ docker run --label traefik.enable=true --label traefik.http.routers.r1.rule=Host\(\`r1.docker.local\`\) nodered/node-red

I was struggling to get the Go template language that docker-gen uses to extract the hostname from the label value so I decided to write my own parser this time. It uses Dockerode to monitor the Docker events (for starting/stopping containers) and then parses out the needed details.

This is actually easier than the nginx version as the start event actually contains the container labels so there is no need to inspect the new container to get the environment variables.
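For context, here is a minimal sketch of how the event stream and the label matching might be wired up with Dockerode; the regexes and variable names are my assumptions, chosen to match the snippet below:

const Docker = require('dockerode');
const fs = require('fs');

// assumed: re matches the traefik router rule label keys,
// re2 pulls the hostname out of the Host(`...`) value
const re = new RegExp('^traefik\\.http\\.routers\\..*\\.rule$');
const re2 = new RegExp('Host\\(`(.*)`\\)');
var cnames = [];

const docker = new Docker({socketPath: '/var/run/docker.sock'});
docker.getEvents({}, function(err, stream) {
  if (err) { throw err; }
  stream.on('data', function(chunk) {
    var eventJSON = JSON.parse(chunk.toString());
    // start/stop handling as shown below
  });
});

The start/stop handler then parses out the hostnames and maintains the cnames file: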

if (eventJSON.status == "start") {
  var keys = Object.keys(eventJSON.Actor.Attributes)
  keys.forEach(key => {
    if (re.test(key)) {
      var host = eventJSON.Actor.Attributes[key].match(re2)[1]
      cnames.push(host)
      fs.writeFile("cnames",cnames.join('\n'),'utf8', err =>{})
    }
  })
} else if (eventJSON.status == "stop") {
  var keys = Object.keys(eventJSON.Actor.Attributes)
  keys.forEach(key => {
    if (re.test(key)) {
      var host = eventJSON.Actor.Attributes[key].match(re2)[1]
      var index = cnames.indexOf(host)
      if (index != -1) {
        cnames.splice(index,1)
      }
      fs.writeFile("cnames",cnames.join('\n'), 'utf8', err => {})
    }
  })
} 

I’ve also rolled in nodemon to watch the output file and restart the Python app that interfaces with the docker host’s instance of avahi. This removes the need for the forego app that was used for the nginx version.

nodemon({
  watch: "cnames",
  script: "cname.py",
  execMap: {
    "py": "python"
  }
})
nodemon.on('start', function(){
  console.log("starting cname.py")
})
.on('restart', function(files){
  console.log("restarting cname.py with " + files)
})

I’ve built it for ARM64 and AMD64 and pushed it to Docker Hub as hardillb/traefik-avahi-helper and you can start it as follows:

$ docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/var/run/docker.sock --name avahi-cname hardillb/traefik-avahi-helper

You may also need to add the --privileged flag if you are running on a system that uses AppArmor or SELinux.

All the code is on github here.

Node-RED Storage Plugin

As part of my series of posts about the components needed to build a Multi Tenant Node-RED system, in this post I’ll talk about writing a plugin to store the user’s flows in the database rather than on disk.

There are a number of existing Storage plugins, such as the default local filesystem and the CloudantDB plugin that is used with Node-RED in the IBM Cloud.

I’m going to use MongoDB as the backend storage and the Mongoose library to wrap the reading/writing to the database (I’ll be reusing the Mongoose schema definitions later in the Authentication plugin and the app to manage Node-RED instances).
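As a sketch of the sort of Mongoose schemas the code below relies on, the field names here are inferred from how they are used and the real module may differ slightly:

const mongoose = require('mongoose');
const Schema = mongoose.Schema;

// flows, credentials and library entries are all keyed by the appname
const Flows = new Schema({
  appname: String,
  flow: Array
});

const Credentials = new Schema({
  appname: String,
  credentials: String
});

const Library = new Schema({
  appname: String,
  type: String,
  name: String,
  meta: Schema.Types.Mixed,
  body: String
});

module.exports = {
  Flows: mongoose.model('Flows', Flows),
  Credentials: mongoose.model('Credentials', Credentials),
  Library: mongoose.model('Library', Library)
};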

The documentation for the Storage API can be found here. There are a number of methods that a plugin needs to provide:

init()

This sets up the plugin, it reads the settings and then opens the connection to the database.

init: function(nrSettings) {
  settings = nrSettings.mongodbSettings;

  if (!settings) {
    var err = Promise.reject("No MongoDB settings for flow storage found");
    err.catch(err => {});
    return err;
  }

  appname = settings.appname;

  return new Promise(function(resolve, reject){
    mongoose.connect(settings.mongoURI, mongoose_options)
    .then(() => {
      resolve();
    })
    .catch(err => {
      reject(err);
    });
  })
},

getFlows()/saveFlows(flows)

Here we retrieve/save the flow to the database. If there isn’t a current flow (such as the first time the instance is run) we need to return an empty array ([]).

getFlows: function() {
  return new Promise(function(resolve, reject) {
    Flows.findOne({appname: appname}, function(err, flows){
      if (err) {
        reject(err);
      } else {
        if (flows){
          resolve(flows.flow);
        } else {
          resolve([]);
        }
      }
    })
  })
},
saveFlows: function(flows) {
  return new Promise(function(resolve, reject) {
    Flows.findOneAndUpdate({appname: appname},{flow: flows}, {upsert: true}, function(err,flow){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

The upsert: true in the options passed to findOneAndUpdate() triggers an insert if there isn’t an existing matching document.

getCredentials()/saveCredentials(credentials)

Here we have to convert the credentials object to a string because MongoDB doesn’t like root object keys that start with a $ (the encrypted credentials string is held in the $_ entry in the object).

getCredentials: function() {
  return new Promise(function(resolve, reject) {
    Credentials.findOne({appname: appname}, function(err, creds){
      if (err) {
        reject(err);
      } else {
        if (creds){
          resolve(JSON.parse(creds.credentials));
        } else {
          resolve({});  
        }
      }
    })
  })
},
saveCredentials: function(credentials) {
  return new Promise(function(resolve, reject) {
    Credentials.findOneAndUpdate({appname: appname},{credentials: JSON.stringify(credentials)}, {upsert: true}, function(err,credentials){
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},

getSessions()/saveSessions(sessions)/getSettings()/saveSettings(settings)

These are pretty much just carbon copies of the getFlows()/saveFlows(flows) functions since they are just storing/retrieving a single JSON object.
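For example, getSettings()/saveSettings() might look something like this, following the same pattern (the Settings model and its settings field are assumptions along the lines of the schemas sketched earlier):

getSettings: function() {
  return new Promise(function(resolve, reject) {
    Settings.findOne({appname: appname}, function(err, settings) {
      if (err) {
        reject(err);
      } else {
        resolve(settings ? settings.settings : {});
      }
    })
  })
},
saveSettings: function(settings) {
  return new Promise(function(resolve, reject) {
    Settings.findOneAndUpdate({appname: appname}, {settings: settings}, {upsert: true}, function(err, doc) {
      if (err) {
        reject(err)
      } else {
        resolve();
      }
    })
  })
},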

getLibraryEntry(type,name)/saveLibraryEntry(type,name,meta,body)

saveLibraryEntry(type,name,meta,body) is pretty standard with a little bit of name manipulation to make it look more like a file path.

getLibraryEntry(type,name) needs a bit more work as we need to build the directory structure as well as being able to return the actual file content.

getLibraryEntry: function(type,name) {
  if (name == "") {
    name = "/"
  } else if (name.substr(0,1) != "/") {
    name = "/" + name
  }

  return new Promise(function(resolve,reject) {
    Library.findOne({appname: appname, type: type, name: name}, function(err, file){
      if (err) {
        reject(err);
      } else if (file) {
        resolve(file.body);
      } else {
        var reg = new RegExp('^' + name , "");
        Library.find({appname: appname, type: type, name: reg }, function(err, fileList){
          if (err) {
            reject(err)
          } else {
            var dirs = [];
            var files = [];
            for (var i=0; i<fileList.length; i++) {
              var n = fileList[i].name;
              n = n.replace(name, "");
              if (n.indexOf('/') == -1) {
                var f = fileList[i].meta;
                f.fn = n;
                files.push(f);
              } else {
                n = n.substr(0,n.indexOf('/'))
                dirs.push(n);
              }
            }
            dirs = dirs.concat(files);
            resolve(dirs);
          }
        })
          
      }
    })
  });
},
saveLibraryEntry: function(type,name,meta,body) {
  return new Promise(function(resolve,reject) {
    var p = name.split("/");    // strip multiple slash
    p = p.filter(Boolean);
    name = p.slice(0, p.length).join("/")
    if (name != "" && name.substr(0, 1) != "/") {
      name = "/" + name;
    }
    Library.findOneAndUpdate({appname: appname, name: name}, 
      {name:name, meta:meta, body:body, type: type},
      {upsert: true, useFindAndModify: false},
      function(err, library){
        if (err) {
          reject(err);
        } else {
          resolve();
        }
      });
  });
}

Using the plugin

To use the plugin you need to include it in the settings.js file. This is normally found in the userDir (the location of this file is logged when Node-RED starts up).

...
storageModule: require('node-red-contrib-storage-mongodb'),
mongodbSettings: {
    mongoURI: "mongodb://localhost/nodered",
    appname: "r1"
},
...

The mongodbSettings object contains the URI for the database and the appname is a unique identifier for this Node-RED instance allowing the same database to be used for multiple instances of Node-RED.

Source code

You can find the code for this module here and it’s hosted on npmjs here.

Multi Tenant Node-RED

I was recently approached by a company that wanted to sponsor adding Multi Tenant support to Node-RED. This would be to enable multiple users to run independent flows on a single Node-RED instance.

This is really not a good idea for a number of reasons, but mainly because the NodeJS runtime is inherently single threaded and there is no way to get real separation between different users. For example, if one user uses a poorly written node (or function in a Function node) it is possible to block the event loop, starving out all the other users, and in extreme cases an uncaught asynchronous exception will cause the whole application to exit.

The best approach is to run a separate instance of Node-RED per user, which gives you the required separation between users. The usual way to do this is to use a container based system and a HTTP reverse proxy to control access to the instances.

Over the next month’s worth of posts I’ll outline the components required to build a system like I’ve described, and at the end we should hopefully have a fully working Proof of Concept that people can use as the base to build their own deployments.

As the future posts are published I will add links here.

Required components

Flow Storage

Because container file systems are not persistent, we are going to need somewhere to reliably store the flows each user creates.

Node-RED supplies an API that lets you control how/where flows (and a bunch of other things that would normally end up on disk) are stored.

Authentication

We are going to need a way to only allow the right users access to the Node-RED editor. Again there is a plugin API that allows this to be wired into nearly any existing authentication source of your choice.

I wrote a simple implementation of this API which uses LDAP as the source of users not long after Node-RED was released, but in this series of posts I’ll write a new one that uses the same backend database as the flow storage plugin.

Custom Container

Once we’ve built storage and authentication plugins we will need to build a custom version of the Node-RED Docker container that includes these extras.

HTTP Reverse Proxy

Now we have a collection of containers, each running a user’s instance of Node-RED, we are going to need a way to expose these to the outside world. Since Node-RED is a web application, an HTTP Reverse Proxy is probably the right way forward.

Management

Once all of the other pieces are in place we need a way to control the creation/deletion of Node-RED instances. It will also be useful to be able to see the instances’ logs.

Extra bits

Finally I’ll cover some extra bits that help make a deployment specific to a particular environment, such as:

Custom node repository

This allows you to host your own private nodes and still install them using the Manage Palette menu.

Custom Theme

Tweaking page titles and colour schemes can make Node-RED fit in better with your existing look and feel.

Working Example

nginx-proxy-avahi-helper

I’ve been playing with the jwilder/nginx-proxy docker container recently. It automatically generates reverse proxy entries for other containers.

It does this by monitoring when new containers are started and then inspecting the environment variables attached to the new container. If the container has the VIRTUAL_HOST variable it uses its value as the virtual host for nginx to proxy access to the container.
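For example, starting a container with the variable set might look something like this (the hostname is just a placeholder):

$ docker run -d -e VIRTUAL_HOST=foo.example.com --name foo nodered/node-red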

Normally for this type of setup you would set up a wildcard DNS that points to the docker host so all DNS lookups for a machine in the root domain will return the same IP address.

If all the virtual hosts are in the example.com domain, e.g. foo.example.com and bar.example.com, you would set up the *.example.com DNS to point to the docker host’s IP address.

When working on my home LAN I normally use mDNS to access local machines, so there is nowhere to set up the wildcard DNS entry. Instead I have built a container to add CNAME mDNS entries for the docker host for each of the virtual hosts.

In my case Docker is running on a machine called docker-pi.local and I’m using that as the root domain. e.g. manager.docker-pi.local or registry.docker-pi.local.

The container uses the docker-gen application, which uses templates to generate configuration files based on the running containers. In this case it generates a list of virtual hosts and writes them to a file.

The file is read by a small python application that connects to d-bus to talk to the avahi daemon on the docker host and configures the CNAMEs.

Both Docker and d-bus use unix domain sockets as their transport protocol so you have to mount the sockets from the host into the container.

$ docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/tmp/docker.sock --name avahi-cname hardillb/nginx-proxy-avahi-helper

I’ve put the code on github here and the container is on docker hub here.

Back to Building the Linear Clock

A LONG time ago I started to work out how to build a linear clock using a strip of 60 LEDs. It’s where all my playing with using Pi Zeros and Pi 4s as USB devices started from.

I’ve had a version running on the desk with jumper wires hooking everything up, but I’ve always wanted to try and do something a bit neater. So I’ve been looking at building a custom Pi pHAT to link everything together.

The design should be pretty simple:

  • Break out pin 18 on the Pi to drive the ws2812b LEDs.
  • Supply 5v to the LED strip.
  • Include an I2C RTC module so the Pi will keep accurate time, especially when using a Pi Zero (not a Pi Zero W) with no network connectivity.

I know that the Pi can supply enough power from the 5v pin in the 40 pin header to drive the number of LEDs that the clock should have lit at any one time.

Technically the data input for the ws2812b should also be 5v, but I know that at least with the strip that I have it will work with the 3.3v supplied from GPIO pin 18.

I had started to work on this with Eagle back before it got taken over by Autodesk; while there is still a free version for hobbyists, I thought I’d try something different this time round and found LibrePCB.

Designing the Board

All the PCB design software looks to work in a similar way: you start by laying out the circuit logically before working out how to lay it out physically.

The component list is:

The RTC is going to need a battery to keep it up to date if the power goes out, so I’m using the Keystone 500 holder which is for a 12mm coin cell. These are a lot smaller than the more common 20mm (CR2032) coin cells, so should take up a lot less space on the board.

I also checked that the M41T81 has an RTC Linux kernel driver and it looks to be included in the rtc-m41t80 module, so should work.

Finally I’m using the terminal block because I don’t seem to be able to find a suitable board mountable connector for the JST SM connector that comes on most of the LED strips I’ve seen. The Wikipedia entry implies that the standard is only used for wire to wire connectors.

Circuit layout

LibrePCB has a number of component libraries that include one for the Raspberry Pi 40 pin header.

Physical and logical diagram of RTC component

But I had to create a local library for some of the other parts, especially the RTC and the battery holders. I will look at how to contribute these parts to the library once I’ve got the whole thing up and running properly.

Block circuit diagram

The Pi has built in pull up resistors on the I2C bus pins so I shouldn’t need to add any.

Board layout

Now I had all the components linked up it was time to lay out the actual board.

View of PCB layout

The board dimensions are 65mm x 30mm, which matches the Pi Zero, with the 40 pin header across the top edge.

The arrangement of the pins on the RTC means that no matter which way round I mount it I always end up with 2 tracks having to cross, which means I have one via to the underside of the board. Everything else fits on one layer. I’ve stuck the terminal block on the left hand edge so the strip can run straight out from there, and the power connector for the Pi can come in on the bottom edge.

Possible Improvements

  • An I2C ID EEPROM – The Raspberry Pi HAT spec includes the option to use the second I2C bus on the Pi to read the device tree information for the device from an EEPROM.
  • A power supply connector – Since the LED strip can draw a lot of power when all are lit, it can make sense to add either a separate power supply for just the LEDs or a bigger power supply that can power the Pi via the 5v GPIO pins.
  • A 3.3v to 5v Level shifter – Because not all the LED strips will work with 3.3v GPIO input.
  • Find a light level sensor so I can adjust the brightness of the LED strip based on the room brightness.

Next Steps

The board design files are all up on GitHub here.

I now need to send off the design to get some boards made, order the components and try and put one together.

Getting boards made is getting cheaper and cheaper: an initial test order of 5 boards from JLC PCB came in at £1.53 + £3.90 shipping (this looks to include a 50% first order discount). This has a 12 day shipping time, but I’m not in a rush, so this just feels incredibly cheap.

Soldering the surface mount RTC is going to challenge my skills a bit but I’m up for giving it a go. I think I might need to buy some solder paste and a magnifier.

I’ll do another post when the parts all come in and I’ve put one together.

Node-RED Google Home Smart Home Action Generally Available

I started this post back in November 2017; it’s been a long slog, but we are finally here. We had a false start back in January 2019, when I got the bulk of the code working and thought it would take a few weeks to get it certified and released, and again at the beginning of this year when the Action got approved but still required you to be a member of a Google Group to be able to sign up. Unfortunately that wasn’t the case. But all that is behind us now.

A Node-RED flow with lots of Google Home Assistant nodes

You can find the full docs for how to install and configure the Action here, but the short version is:

  • Create an account here
  • Create some devices using the wizard
  • Link the NR-GAB Action to your Google Account in the Google Home app, it should have the following icon.
  • Install the node-red-contrib-googlehome node in Node-RED
  • Drag the googlehome-in node on to the canvas and start building flows that can be triggered by the Google Assistant.

As always the doc probably needs a little bit more work, so I’ll keep updating it as folks run into issues with it. If you have any questions the best place to ask them is in the #google-home-assistant channel on the Node-RED Slack team.

As for next steps, I have a working version of the Google Assistant Local Control API that I will release as soon as Google opens that up for general availability. This sends commands directly to Node-RED from the smart speaker device, which reduces the latency in triggering actions.

Contributing to Accel-PPP

I mentioned in a previous post about my desktop ISP project that I was using the accel-ppp access concentrator. This provides the main components for allowing my “users” to connect to my network; it handles all the authentication and setting up of PPPoE connections.

While playing with it I wanted to look at setting different DNS servers for different users. For IPv4 this was already built in: you could put entries in the LDAP that get passed on via the Radius lookup when a user connects. But with my messing about with IPv6-only networks I also wanted to set custom IPv6 DNS servers on a per user basis.

You can set a global set of DNS servers in the config file

...
[ipv6-dns]
dns=fd12:3456:789a::1
dns=fd12:3456:789a::2
...

The Radius spec has an entry for holding IPv6 DNS server values (DNS-Server-IPv6-Address from RFC6911), but accel-ppp didn’t support using it. I mentioned it on the accel-ppp forum as a feature request and the developers seemed to think it would be a good idea, so I decided to have a go at implementing it.

First up, adding the required bits to the LDAP for the test user; here I have one IPv4 DNS and one IPv6 DNS.

displayName: Ben Hardill
cn: Ben
sn: Hardill
mail: isp1@hardill.me.uk
uid: isp1
radiusReplyAttribute: MS-Primary-DNS-Server := "192.168.5.1"
radiusReplyAttribute: Reply-Message := "Hello World"
radiusReplyAttribute: Delegated-IPv6-Prefix := "fd12:3456:789a:2::/64"
radiusReplyAttribute: Framed-IPv6-Prefix := "fd12:3456:789a:0:192:168:5:2/128"
radiusReplyAttribute: DNS-Server-IPv6-Address := "fd12:3456:789a:ff64::1"
radiusFramedIPAddress: 192.168.5.2
radiusFramedIPNetmask: 255.255.255.0

My first pass was a little naive in that it ended up changing the DNS servers for all subsequent users by overwriting the global settings. It also only supported providing 1 DNS server when the DHCPv6 and RA both support up to 3. So it was back to the drawing board.

First up is to generate a linked list containing the DNS server addresses from the Radius response.

...
			case Framed_IPv6_Route:
				rad_add_framed_ipv6_route(attr->val.string, rpd);
				break;
			case DNS_Server_IPv6_Address:
				a = _malloc(sizeof(*a));
				memset(a, 0, sizeof(*a));
				a->addr = attr->val.ipv6addr;
				list_add_tail(&a->entry, &rpd->ipv6_dns.addr_list);
				break;
		}
	}
...

This gets bound to the user’s session object so it can be retrieved later, in this case when responding to a DHCPv6 request.

...
	if (!list_empty(&ses->ipv6_dns->addr_list)) {
		list_for_each_entry(dns, &ses->ipv6_dns->addr_list, entry) {
			j++;
		}
		if (j >= 3) {
			j = 3;
		}
		opt1 = dhcpv6_option_alloc(reply, D6_OPTION_DNS_SERVERS, j * sizeof(addr));
		addr_ptr = (struct in6_addr *)opt1->hdr->data;
		list_for_each_entry(dns, &ses->ipv6_dns->addr_list, entry) {
			if (k < j) {
				memcpy(addr_ptr, &dns->addr, sizeof(addr));
				k++;
				addr_ptr++;
			} else {
				break;
			}
		}
...

Now this is not as clean as I would like since we have to walk the list to know how many DNS servers there are in order to allocate the right size chunk of memory to hold the addresses and then walk it again to copy the addresses into the response. I might go back and add the list length to the session structure and update it when adding items to the list so we can avoid this step.

Testing

While working on these changes I wanted something a bit quicker than having to deploy the changes to my physical test rig (a collection of Raspberry Pis and switches hidden under my sofa). To help with this I spun up a VM with a copy of CORE. CORE comes from the NRL and allows you to build virtual networks and run code on nodes within the network.

CORE emulating a simple network

This means I can start and stop the access concentrator really quickly and spin up as many clients as I want. I can also easily drop in L2TP clients (which use the same Radius and PPP infrastructure in accel-ppp) to check it works there as well. It also lets you attach tools like tcpdump and Wireshark to capture packets at any point in the network, which can be difficult with home grade switches (enterprise switches tend to have admin interfaces and the ability to nominate ports as taps that can see all traffic).

I have submitted a pull request against the project, so now fingers crossed it gets accepted.

Open Source Rewards

This is a bit of a rambling piece around some things that have been rattling round in my brain for a while. I’ve written it out mainly to just get it out of my head.

There has been a lot of noise around Open Source projects and cloud companies making huge profits when running these as services, to the extent of some projects even changing their license to either prevent this outright, to force the cloud providers to publish all the supporting code that allows the projects to be run at scale, or to include new features under non Open Source licenses.

Most of the cases that have been making the news have been around projects that have an organisation that supports them, e.g. Elasticsearch, which also sell support and hosted versions of the project. While I’m sympathetic to the arguments of the projects, I’m not sure the license changes work, and the cloud companies do tend to commit developers to the projects (OK, usually with an aim of getting the features they want implemented, but they do fix bugs as well). Steve O’Grady from Redmonk has a good piece about this here.

I’m less interested in the big firms fighting over this sort of thing; I’m more interested in the other end of the scale, the little “guys/gals”.

There have also been cases about single developers that have built some core components that underpin huge amounts of Open Source software. This is especially clear in the NodeJS world, where hundreds of tiny npm modules come together to build thousands of more complex modules that then make up everybody’s applications.

A lot of Open Source developers do it for the love, or to practice their skills on things they enjoy, but when a project becomes popular the expectations start to stack up. There are too many stories of entitled users expecting the same levels of support/service that they would get from a large enterprise when paying for a support contract.

The problem comes when the developer at the bottom of the stack gets fed up with everybody that depends on their module raising bugs and not contributing fixes, or just gets bored and walks away. We end up with what happened to the event-stream module: the dev got bored and handed ownership over to somebody else (the first person who asked), and that somebody later injected a bunch of code that would steal cryptocurrency private keys.

So what are the options to allow a lone developer to work on their projects and get the support they deserve?

Get employed by somebody that depends on your code

If your project is of value to a medium/large organisation it can be in their interests to bring support for that project in house. I’ve seen this happen with projects I’ve been involved with (if only on the periphery) and it can work really well.

The thing that can be tricky is balancing the needs of the community that has grown up around a project and the developer’s new “master”, who may well have their own ideas about what direction the project should take.

I’ve also seen work projects be open sourced and their developers continuing to drive them and get paid to do so.

Set up your own business around the project

This is sort of the ultimate version of the previous one.

It can be done by selling support/services around a project, but it can also make some of the problems I mentioned earlier worse, as some users will expect even more now they are paying for it.

Paypal donations or Github sponsorship/Patreon/Ko-Fi

Adding a link on the project’s About page to a PayPal account or a Patreon/GitHub sponsorship page can let people show their appreciation for a developer’s work.

PayPal links work well for one-off payments, whereas the Patreon/GitHub/Ko-Fi sponsorship model is a little bit more of a commitment, but can be a good way to cover ongoing costs without needing to charge for a service directly. With a little work the developer can make use of the APIs these platforms provide to offer bespoke content/services for users who choose to donate.

I have included a PayPal link on the about page of some of my projects. I have set the default amount to £5, with the suggestion that I will probably use it to buy myself a beer from time to time.

I have also recently signed up to the GitHub sponsorship programme to see how it works. GitHub lets you set different monthly amounts between $1 and $20,000; at this time I only have 1 level, set to $1.

Adverts/Affiliate links in projects

If you are building a mobile app or run a website then there is always the option of including adverts in the page/app. With this approach the user of the project doesn’t have to do anything apart from put up with some hopefully relevant adverts.

There is a balance that has to be struck with this, as too many adverts, or adverts for irrelevant things, can annoy users. I do occasionally post Amazon affiliate links in blog posts and I keep track of how much I’ve earned on the about page.

This is not just a valid model for open source projects; many (most) mobile games have adopted this sort of model, even if it is just as a starting tier before either allowing users to pay a fee to remove the adverts or to buy in-game content.

Amazon Wishlists

A slightly different approach is to publish a link to something like an Amazon wishlist. This allows users to buy developers a gift as a token of appreciation. The list allows the developer to get things they actually want and to select a range of items at different price points.

Back when Amazon was closer to its roots as an online book store (and people still read books to learn new things) it was a great way to get a book about a new subject to start a new project.

Other random thoughts

For another very interesting take on some of this, please watch this video from Tom Scott for the Royal Institution about science communication in the world of YouTube and social media. It has a section in the middle about parasocial relationships which is really interesting (as is the rest of the video) in this context.

Conclusion

I don’t really have one at the moment; as I said at the start, this is a bit of a stream of consciousness post.

I do think that there isn’t a one size fits all model, nor are the options I’ve listed above all of them; they were just the ones that came to mind as I typed.

If I come up with anything meaningful, I’ll do a follow up post, also if somebody want to sponsor me $20,000 a month on Github to come up with something, drop me a line ;-).