dns-to-mdns

Having built the nginx-proxy-avahi-helper container to expose proxied instances as mDNS CNAME entries on my LAN, I also wanted a way for those containers to be able to resolve other devices that are exposed via mDNS on the LAN.

By default the Docker DNS service does not resolve mDNS hostnames; it either takes the Docker host’s DNS server settings or defaults to using Google’s 8.8.8.8 service.

The containers themselves can’t do mDNS resolution unless you set their network mode to host mode, which isn’t really what I want.

You can pass a list of DNS servers to a Docker container when it’s started with the --dns= command line argument, which means that if I can run a bridge that converts normal DNS requests into mDNS requests on my LAN, I should be able to get the containers to resolve the local devices.
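Once such a bridge is running on the Docker host, pointing a container at it would look something like this (the IP address here is a stand-in for the Docker host’s LAN address and the image name is made up):

 docker run -d --dns=192.168.1.2 my-container-image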

I’ve played with writing DNS proxies before when I was looking at DoH so I had a reasonably good idea where to start.

const dgram = require('dgram')
const dnsPacket = require('dns-packet')
const mdnsResolver = require('mdns-resolver')

// Port to listen on, defaults to the standard DNS port (binding to 53 needs root)
const port = process.env["DNS_PORT"] || 53
const server = dgram.createSocket('udp4')

server.on('listening', () => {
  console.log("listening")
})

server.on('message', (msg, remote) => {
  // Decode the incoming query and build a skeleton response with the same id
  const packet = dnsPacket.decode(msg)
  const response = {
    type: "response",
    id: packet.id,
    questions: [ packet.questions[0] ],
    answers: []
  }

  // Resolve the requested name over mDNS and answer with a short TTL
  mdnsResolver.resolve(packet.questions[0].name, packet.questions[0].type)
  .then(addr => {
    response.answers.push({
      type: packet.questions[0].type,
      class: packet.questions[0].class,
      name: packet.questions[0].name,
      ttl: 30,
      data: addr
    })
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
  .catch (err => {
    // Lookup failed, send back the response with an empty answer section
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
})
server.bind(port)
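A quick way to sanity check it is to run it on a high port (binding to 53 needs root) and query it with dig; the file name and host name below are just examples:

 DNS_PORT=5300 node dns-to-mdns.js &
 dig @127.0.0.1 -p 5300 docker-pi.local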

This worked well, but I ran into a small problem with the mdns-resolver library, which wouldn’t resolve CNAMEs; a small pull request soon fixed that.

The next spin of the code added support to send any request not for a .local domain to an upstream DNS server to resolve, which means I don’t need to add as many DNS servers to each container.
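The real implementation is in the repo, but the shape of it is something like this sketch, which slots into the message handler above (the forwardUpstream name and UPSTREAM_DNS variable are mine, not necessarily what the code uses):

const upstream = process.env["UPSTREAM_DNS"] || "8.8.8.8"

function forwardUpstream (msg, remote) {
  // Relay the raw query to the upstream resolver and pass its answer
  // straight back to the original client unchanged
  const client = dgram.createSocket('udp4')
  client.on('message', (reply) => {
    server.send(reply, remote.port, remote.address)
    client.close()
  })
  client.send(msg, 53, upstream)
}

// then at the top of the 'message' handler:
//   if (!packet.questions[0].name.endsWith('.local')) {
//     return forwardUpstream(msg, remote)
//   }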

All the code is on GitHub here.

Bonus points

This will also allow Windows machines, which don’t have native mDNS support, to do local name resolution.

nginx-proxy-avahi-helper

I’ve been playing with the jwilder/nginx-proxy docker container recently. It automatically generates reverse proxy entries for other containers.

It does this by monitoring when new containers are started and then inspecting the environment variables attached to the new container. If the container has the VIRTUAL_HOST variable it uses its value as the virtual host for nginx to proxy access to the container.
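So, for example, starting a container with something like the following (the image and host names are made up) is enough for nginx-proxy to pick it up and start proxying requests for foo.example.com to it:

 docker run -d -e VIRTUAL_HOST=foo.example.com my-web-app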

Normally for this type of setup you would set up a wildcard DNS entry that points to the Docker host, so all DNS lookups for machines in that domain return the same IP address.

If all the virtual hosts are in the example.com domain, e.g. foo.example.com and bar.example.com, you would set up *.example.com to point to the Docker host’s IP address.

When working on my home LAN I normally use mDNS to access local machines, so there is nowhere to set up the wildcard DNS entry. Instead I have built a container to add mDNS CNAME entries for the Docker host for each of the virtual hosts.

In my case Docker is running on a machine called docker-pi.local and I’m using that as the root domain, e.g. manager.docker-pi.local or registry.docker-pi.local.

The container uses the docker-gen application, which uses templates to generate configuration files based on the running containers. In this case it generates a list of virtual hosts and writes them to a file.
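A minimal docker-gen template for that sort of job might look something like this, using docker-gen’s groupByMulti helper to collect the VIRTUAL_HOST values; this is an illustrative sketch rather than the template the container actually ships with:

{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}{{ $host }}
{{ end }}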

The file is read by a small Python application that connects to D-Bus to talk to the Avahi daemon on the Docker host and configures the CNAMEs.

Both Docker and D-Bus use Unix domain sockets as their transport, so you have to mount the sockets from the host into the container.

 docker run -d -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket -v /var/run/docker.sock:/tmp/docker.sock --name avahi-cname hardillb/nginx-proxy-avahi-helper

I’ve put the code on GitHub here and the container is on Docker Hub here.

Back to Building the Linear Clock

A LONG time ago I started to work out how to build a linear clock using a strip of 60 LEDs. It’s where all my playing with using Pi Zeros and Pi 4s as USB devices started from.

I’ve had a version running on the desk with jumper wires hooking everything up, but I’ve always wanted to try and do something a bit neater. So I’ve been looking at building a custom Pi pHAT to link everything together.

The design should be pretty simple:

  • Break out pin 18 on the Pi to drive the ws2812b LEDs.
  • Supply 5v to the LED strip.
  • Include an I2C RTC module so the Pi will keep accurate time, especially when using a Pi Zero (not a Pi Zero W) with no network connectivity.

I know that the Pi can supply enough power from the 5v pin in the 40 pin header to drive the number of LEDs that the clock should have lit at any one time.
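As a rough sanity check (ballpark numbers, not measurements from this build): a ws2812b is usually quoted at up to about 60mA at full white, so even with half a dozen LEDs lit at once that is only around 360mA, comfortably within what the 5v header pins can deliver.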

Technically the data input for the ws2812b should also be 5v, but I know that, at least with the strip I have, it will work with the 3.3v supplied from GPIO pin 18.

I had started to work on this with Eagle back before it was taken over by Autodesk. While there is still a free version for hobbyists, I thought I’d try something different this time round and found LibrePCB.

Designing the Board

All PCB design software seems to work in a similar way: you start by laying out the circuit logically before working out how to lay it out physically.

The component list is:

  • Raspberry Pi 40 pin GPIO header
  • M41T81 I2C RTC
  • Keystone 500 coin cell holder
  • Screw terminal block for the LED strip

The RTC is going to need a battery to keep it up to date if the power goes out, so I’m using the Keystone 500 holder, which takes a 12mm coin cell. These are a lot smaller than the more common 20mm (CR2032) coin cells, so should take up a lot less space on the board.

I also checked that the M41T81 has an RTC driver in the Linux kernel, and it looks to be covered by the rtc-m41t80 module, so it should work.
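A generic Linux I2C trick (not specific to this board) for checking that on a running Pi is to register the chip by hand from userspace as root; 0x68 is the M41T81’s 7-bit I2C address:

 echo m41t81 0x68 > /sys/bus/i2c/devices/i2c-1/new_device
 hwclock -r   # should now read the time back from the RTC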

Finally, I’m using a terminal block because I can’t seem to find a suitable board-mountable socket for the JST SM connector that comes on most of the LED strips I’ve seen. The Wikipedia entry implies that the standard is only used for wire-to-wire connections.

Circuit layout

LibrePCB has a number of component libraries that include one for the Raspberry Pi 40 pin header.

Physical and logical diagram of RTC component

But I had to create a local library for some of the other parts, especially the RTC and the battery holders. I will look at how to contribute these parts to the library once I’ve got the whole thing up and running properly.

Block circuit diagram

The Pi has built-in pull-up resistors on the I2C bus pins, so I shouldn’t need to add any.

Board layout

With all the components linked up, it was time to lay out the actual board.

View of PCB layout

The board dimensions are 65mm x 30mm, which matches the Pi Zero, with the 40 pin header across the top edge.

The arrangement of the pins on the RTC means that no matter which way round I mount it I always end up with two tracks having to cross, which means I need one via to the underside of the board. Everything else fits on one layer. I’ve stuck the terminal block on the left hand edge so the strip can run straight out from there, and the power connector for the Pi can come in on the bottom edge.

Possible Improvements

  • An I2C identifier IC – The Raspberry Pi HAT spec includes the option to use the second I2C bus on the Pi to hold the device tree information for the device.
  • A power supply connector – Since the LED strip can draw a lot of power when all are lit, it can make sense to add either a separate power supply for just the LEDs or a bigger power supply that can power the Pi via the 5v GPIO pins.
  • A 3.3v to 5v level shifter – Because not all LED strips will work with a 3.3v data input.
  • A light level sensor – So I can adjust the brightness of the LED strip based on the room brightness.

Next Steps

The board design files are all up on GitHub here.

I now need to send off the design to get some boards made, order the components and try and put one together.

Getting boards made keeps getting cheaper: an initial test order of 5 boards from JLC PCB came in at £1.53 + £3.90 shipping (this looks to include a 50% first order discount). It has a 12 day shipping time, but I’m not in a rush, so this feels incredibly cheap.

Soldering the surface mount RTC is going to challenge my skills a bit but I’m up for giving it a go. I think I might need to buy some solder paste and a magnifier.

I’ll do another post when the parts all come in and I’ve put one together.