traefik-avahi-helper

Having built a helper container to advertise containers via mDNS when using the jwilder/nginx-proxy container, I decided to have a look at doing the same for Traefik.

The nginx-proxy container uses environment variables to hold the virtual host details, whereas Traefik uses container labels, e.g.

traefik.http.routers.r1.rule=Host(`r1.docker.local`)

The brackets and the backticks mean you have to escape both if you try to add the label on the command line:

$ docker run --label traefik.enable=true --label traefik.http.routers.r1.rule=Host\(\`r1.docker.local\`\) nodered/node-red

I was struggling to get the Go template language that docker-gen uses to extract the hostname from the label value so I decided to write my own parser this time. It uses Dockerode to monitor the Docker events (for starting/stopping containers) and then parses out the needed details.

This is easier than the nginx version because the start event already contains the container labels, so there is no need to inspect the new container to get the environment variables.
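The events themselves come from Dockerode’s event stream. A minimal sketch of that wiring, assuming the default Docker socket path and a pair of regular expressions along these lines (my guesses, not necessarily the exact ones used in the container), might look like this:

const Docker = require('dockerode')
const fs = require('fs')

const docker = new Docker({ socketPath: '/var/run/docker.sock' })

// Assumed patterns: pick out Traefik router rule labels and pull the
// hostname out of a Host(`...`) rule value
const re = /^traefik\.http\.routers\..+\.rule$/
const re2 = /Host\(`(.+)`\)/
var cnames = []

docker.getEvents({}, (err, stream) => {
  if (err) { console.log(err); return }
  stream.on('data', chunk => {
    var eventJSON = JSON.parse(chunk.toString())
    // each decoded event is handed to the label parsing below
  })
})

Each event is then checked for Traefik rule labels: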

if (eventJSON.status == "start") {
  var keys = Object.keys(eventJSON.Actor.Attributes)
  keys.forEach(key => {
    if (re.test(key)) {
      // label value looks like Host(`r1.docker.local`), capture the hostname
      var host = eventJSON.Actor.Attributes[key].match(re2)[1]
      cnames.push(host)
      fs.writeFile("cnames", cnames.join('\n'), 'utf8', err => {})
    }
  })
} else if (eventJSON.status == "stop") {
  var keys = Object.keys(eventJSON.Actor.Attributes)
  keys.forEach(key => {
    if (re.test(key)) {
      var host = eventJSON.Actor.Attributes[key].match(re2)[1]
      var index = cnames.indexOf(host)
      if (index != -1) {
        cnames.splice(index, 1)
      }
      fs.writeFile("cnames", cnames.join('\n'), 'utf8', err => {})
    }
  })
}

I’ve also rolled in nodemon to watch the output file and restart the python app that interfaces with the docker host’s instance of avahi. This removes the need for the forego app that was used for the nginx version.

const nodemon = require('nodemon')

nodemon({
  watch: "cnames",
  script: "cname.py",
  execMap: {
    "py": "python"
  }
})

nodemon.on('start', function () {
  console.log("starting cname.py")
})
.on('restart', function (files) {
  console.log("restarting cname.py with " + files)
})

I’ve built it for ARM64 and AMD64 and pushed it to Docker Hub as hardillb/traefik-avahi-helper. You start it as follows:

$ docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/var/run/docker.sock --name avahi-cname hardillb/traefik-avahi-helper

You may also need to add the --privileged flag if you are running on a system that uses AppArmor or SELinux.
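For example, on such a system the run command would become:

$ docker run -d --privileged -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/var/run/docker.sock --name avahi-cname hardillb/traefik-avahi-helper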

All the code is on GitHub here.

dns-to-mdns

Having built the nginx-proxy-avahi-helper container to expose proxied instances as mDNS CNAME entries on my LAN, I also wanted a way for those containers to be able to resolve other devices that are exposed via mDNS on the LAN.

By default the Docker DNS service does not resolve mDNS hostnames; it either takes the docker host’s DNS server settings or defaults to using Google’s 8.8.8.8 service.

The containers themselves can’t do mDNS resolution unless you set their network mode to host, which isn’t really what I want.

You can pass a list of DNS servers to a docker container when it’s started with the --dns= command line argument, which means that if I can run a bridge that converts normal DNS requests into mDNS requests on my LAN, I should be able to get the containers to resolve the local devices.
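For example, if the bridge container ends up at 172.17.0.2 on the default bridge network (that address is just for illustration), another container can be pointed at it like this:

$ docker run -d --dns=172.17.0.2 nodered/node-red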

I’ve played with writing DNS proxies before when I was looking at DoH so I had a reasonably good idea where to start.

const dgram = require('dgram')
const dnsPacket = require('dns-packet')
const mdnsResolver = require('mdns-resolver')

const port = process.env["DNS_PORT"] || 53
const server = dgram.createSocket('udp4')

server.on('listening', () => {
  console.log("listening")
})

server.on('message', (msg, remote) => {
  const packet = dnsPacket.decode(msg)
  var response = {
    type: "response",
    id: packet.id,
    questions: [ packet.questions[0] ],
    answers: []
  }

  mdnsResolver.resolve(packet.questions[0].name, packet.questions[0].type)
  .then(addr => {
    response.answers.push({
      type: packet.questions[0].type,
      class: packet.questions[0].class,
      name: packet.questions[0].name,
      ttl: 30,
      data: addr
    })
    // reply to the client that sent the query
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
  .catch(err => {
    // on failure reply with an empty answer section
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
})
server.bind(port)

This worked well, but I ran into a small problem with the mdns-resolver library, which wouldn’t resolve CNAMEs. A small pull request soon fixed that.

The next spin of the code added support for sending any request that isn’t for a .local domain to an upstream DNS server to resolve, which means I don’t need to add as many DNS servers to each container.
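Building on the dgram socket from the code above, a minimal sketch of that forwarding (the UPSTREAM_DNS variable, the 8.8.8.8 default and the one-socket-per-query approach are my assumptions, not necessarily what the final code does) could look like this:

// forward the raw query to an upstream resolver and relay the reply back
const upstream = process.env["UPSTREAM_DNS"] || "8.8.8.8"

function forwardUpstream(msg, remote) {
  const proxy = dgram.createSocket('udp4')
  proxy.on('message', reply => {
    server.send(reply, remote.port, remote.address)
    proxy.close()
  })
  proxy.send(msg, 53, upstream)
}

In the 'message' handler, any question whose name doesn’t end in .local would then be handed to forwardUpstream(msg, remote) instead of being resolved via mDNS.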

All the code is on GitHub here.

Bonus points

This will also allow Windows machines, which don’t have native mDNS support, to do local name resolution.

nginx-proxy-avahi-helper

I’ve been playing with the jwilder/nginx-proxy docker container recently. It automatically generates reverse proxy entries for other containers.

It does this by monitoring when new containers are started and then inspecting the environment variables attached to the new container. If the container has the VIRTUAL_HOST variable it uses its value as the virtual host for nginx to proxy access to the container.
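For example, something like this (the hostname and image are just for illustration) would get requests for foo.example.com proxied to the container:

$ docker run -d -e VIRTUAL_HOST=foo.example.com nodered/node-red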

Normally for this type of setup you would set up a wildcard DNS entry that points to the docker host, so all DNS lookups for a machine in the root domain return the same IP address.

If all the virtual hosts are in the example.com domain, e.g. foo.example.com and bar.example.com, you would set up the *.example.com DNS to point to the docker host’s IP address.

When working on my home LAN I normally use mDNS to access local machines, so there is nowhere to set up the wildcard DNS entry. Instead I have built a container to add CNAME mDNS entries for the docker host for each of the virtual hosts.

In my case Docker is running on a machine called docker-pi.local and I’m using that as the root domain. e.g. manager.docker-pi.local or registry.docker-pi.local.

The container uses the docker-gen application, which uses templates to generate configuration files based on the running containers. In this case it generates a list of virtual hosts and writes them to a file.

The file is read by a small python application that connects to d-bus to talk to the avahi daemon on the docker host and configures the CNAMEs.

Both Docker and d-bus use unix domain sockets as their transport protocol so you have to mount the sockets from the host into the container.

$ docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/tmp/docker.sock --name avahi-cname hardillb/nginx-proxy-avahi-helper

I’ve put the code on GitHub here and the container is on Docker Hub here.