Email Autoconfiguration

I finally got round to setting up a new version of a Virtual Machine I had on my old laptop. Its purpose is basically to host an email client that accesses a bunch of email addresses I have set up on my domain.

It was all going smoothly until I actually got round to adding the account details to Thunderbird.

It sat and spun for a while and then popped up the manual configuration view.

Which is fine, as I know the difference between POP3 and IMAP, but it’s the sort of thing that really confuses most users (I’ve lost count of the number of times I’ve had to talk people through this over the phone).

The problem is I thought I’d already fixed this particular problem. The last time I set up a bunch of email addresses I remember creating DNS SRV records to point to both the inbound mail server and the IMAP server.

SRV Records

SRV records allow you to say which servers to use for a particular protocol on a given domain. The record names are made up of the protocol followed by the transport type and then the domain, e.g.

_submission._tcp.example.com

The mail client would look up the SRV record for this hostname to find the mail submission server for the example.com domain and would get a response that looks like this:

_submission._tcp.example.com.	3600 IN	SRV	0 1 587 mail.example.com.

where:

  • 3600 is the Time To Live (how long to cache this result, in seconds)
  • IN is the record class (Internet)
  • SRV is the record type
  • 0 is the Priority (if there are multiple records, try the lowest priority first)
  • 1 is the Weight (if multiple records share the same priority, higher weights are picked more often)
  • 587 is the port number of the service
  • mail.example.com is the host where to find the service.

I have SRV records for the following protocols enabled:

  • Mail Submission _submission._tcp
  • IMAPS _imaps._tcp
  • SIP _sip._tcp & _sip._udp
  • SIPS _sips._tcp

Using SRV records for email discovery is covered by RFC6186. SRV records are also used in the VoIP space to point to SIP servers.
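
For reference, the zone entries behind those records look something like this, with mail.example.com and sip.example.com standing in for the real target hosts:

_submission._tcp.example.com. 3600 IN SRV 0 1 587  mail.example.com.
_imaps._tcp.example.com.      3600 IN SRV 0 1 993  mail.example.com.
_sip._tcp.example.com.        3600 IN SRV 0 1 5060 sip.example.com.
_sip._udp.example.com.        3600 IN SRV 0 1 5060 sip.example.com.
_sips._tcp.example.com.       3600 IN SRV 0 1 5061 sip.example.com.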

So the question is why this doesn’t work. The problem looks to be that Thunderbird hasn’t implemented support for RFC6186 just yet. A bit of digging found this document which covers the current design for Thunderbird and which bits are still to be implemented. It looks like the only option that currently works is the XML configuration file.

config-v1.1.xml file

The document lists a few locations, relative to the domain, where a file holding details of how to configure the email account can be placed. This includes http://example.com/.well-known/autoconfig/mail/config-v1.1.xml, where example.com is the domain part of the email address.
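
A quick way to check the file is being served once it’s in place (swapping example.com for the real domain):

$ curl http://example.com/.well-known/autoconfig/mail/config-v1.1.xml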

The schema for config-v1.1.xml can be found here. A basic minimal entry would look something like this:

<?xml version="1.0"?>
<clientConfig version="1.1">
    <emailProvider id="example.com">
      <domain>example.com</domain>
      <displayName>Example Mail</displayName>
      <displayShortName>Example</displayShortName>
      <incomingServer type="imap">
         <hostname>mail.example.com</hostname>
         <port>993</port>
         <socketType>SSL</socketType>
         <username>%EMAILADDRESS%</username>
         <authentication>password-cleartext</authentication>
      </incomingServer>

      <outgoingServer type="smtp">
         <hostname>mail.example.com</hostname>
         <port>587</port>
         <socketType>STARTTLS</socketType> 
         <username>%EMAILADDRESS%</username> 
         <authentication>password-cleartext</authentication>
         <addThisServer>true</addThisServer>
      </outgoingServer>
    </emailProvider>
    <clientConfigUpdate url="https://www.example.com/config/mozilla.xml" />
</clientConfig>

Apart from the obvious parts that say which servers to connect to, the other useful bit is found in the <username> tags. Here I’m using %EMAILADDRESS%, which says to use the whole email address as the username. You can also use %EMAILLOCALPART%, which is everything before the @ sign, and %EMAILDOMAIN%, which is everything after the @ sign.

The documentation includes options for setting up remote address books and calendar information, though it doesn’t look like Thunderbird supports all of these options just yet.

With this file in place on my HTTP server, Thunderbird now sets everything up properly.

Setting up mDNS CNAME entries for K8S Ingress Hostnames

As I hinted at in the end of my last post, I’ve been looking for a way to take the hostnames setup for Kubernetes Ingress endpoints and turn them into mDNS CNAME entries.

When I’m building things I like to spin up a local copy where possible (e.g. microk8s on a Pi 4 for the Node-RED on Kubernetes and the Docker Compose environment on another Pi 4 for the previous version). These setups run on my local network at home and while I have my own DNS server set up and running I also make extensive use of mDNS to be able to access the different services.

I’ve previously built little utilities to generate mDNS CNAME entries for both Nginx and Traefik reverse proxies using Environment Variables or Labels in a Docker environment, so I was keen to see if I can build the same for Kubernetes’ Ingress proxy.

Watching for new Ingress endpoints

The kubernetes-client node module supports watching certain endpoints, so it can be used to get notifications when an Ingress endpoint is created or destroyed.

const stream = client.apis.extensions.v1beta1.namespaces("default").ingresses.getStream({qs:{ watch: true}})
const jsonStream = new JSONStream()
stream.pipe(jsonStream)
jsonStream.on('data', async obj => {
  if (obj.type == "ADDED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  } else if (obj.type == "DELETED") {
    for (x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  }
})

Creating the CNAME

For the previous versions I used a python library called mdns-publish to set up the CNAME entries. It works by sending DBUS messages to the Avahi daemon which actually answers the mDNS requests on the network. For this version I decided to try and send those DBUS messages directly from the app watching for changes in K8s.

The dbus-next node module allows working directly with the DBUS interfaces that Avahi exposes.

const dbus = require('dbus-next');
const bus = dbus.systemBus()
bus.getProxyObject('org.freedesktop.Avahi', '/')
.then( async obj => {
	const server = obj.getInterface('org.freedesktop.Avahi.Server')
	const entryGroupPath = await server.EntryGroupNew()
	const entryGroup = await bus.getProxyObject('org.freedesktop.Avahi', entryGroupPath)
	const entryGroupInt = entryGroup.getInterface('org.freedesktop.Avahi.EntryGroup')
	var iface = -1     // IF_UNSPEC, publish on all interfaces
	var protocol = -1  // PROTO_UNSPEC, both IPv4 and IPv6
	var flags = 0
	var name = hostname          // the Ingress hostname, i.e. the alias being published
	var clazz = 0x01             // DNS class IN
	var type = 0x05              // record type CNAME
	var ttl = 60
	var rdata = encodeFQDN(host) // the host the CNAME points at, e.g. ubuntu.local
	entryGroupInt.AddRecord(iface, protocol, flags, name, clazz, type, ttl, rdata)
	entryGroupInt.Commit()
})
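
The encodeFQDN call isn’t part of dbus-next, it’s a small helper that turns the target hostname into DNS wire format for the CNAME rdata. A minimal sketch of such a helper might look like this:

// Sketch: encode a hostname as DNS wire format (length-prefixed labels,
// terminated by a zero byte) and return it as a Buffer for the rdata.
function encodeFQDN(fqdn) {
  const labels = fqdn.split('.').filter(l => l.length > 0)
  const parts = labels.map(l => Buffer.concat([Buffer.from([l.length]), Buffer.from(l)]))
  parts.push(Buffer.from([0]))
  return Buffer.concat(parts)
}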

Add a signal handler to clean up when the app gets killed and we are pretty much good to go.

process.on('SIGTERM', cleanup)
process.on('SIGINT', cleanup)
function cleanup() {
	// cnames maps each published hostname to its Avahi EntryGroup interface
	const keys = Object.keys(cnames)
	for (const k in keys) {
		cnames[keys[k]].Reset()
		cnames[keys[k]].Free()
	}
	bus.disconnect()
	process.exit(0)
}

Running

Once it’s all put together it runs as follows:

$ node index.js /home/ubuntu/.kube/config ubuntu.local

The first argument is the path to the kubectl config file and the second is the hostname the CNAME should point to.

If the Ingress controller is running on ubuntu.local then Ingress YAML would look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: manager-ingress
spec:
  rules:
  - host: "manager.ubuntu.local"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: manager
            port:
              number: 3000 

I’ve tested this with my local microk8s install and it is working pretty well (even on my folks’ really sketchy wifi). The code is all up here.

Multi Tenant Node-RED with Kubernetes

Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.

I started with installing the 64bit build of Ubuntu Server on a fresh Pi4 with 8gb RAM and then using snapd to install microk8s. I had initially wanted to use the 64bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.

Once installed I enabled the dns and ingress plugins, which got me a minimal viable single node Kubernetes setup working.
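
Enabling those plugins is a one-liner (older snap versions use the microk8s.enable form):

$ microk8s enable dns ingress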

I also had to stand up a private docker registry to hold the containers I’ll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also means adding the URL for this registry to microk8s as described here.

Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.

Architecture

As before the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I’ve deployed the database and web app as a single pod and exposed them both as services.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-multi-tenant
  labels:
    app: nr-mt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nr-mt
  template:
    metadata:
      labels:
        app: nr-mt
    spec:
      containers:
      - name: node-red-manager
        image: private.example.com/k8s-manager
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: secret
          mountPath: /usr/src/app/config
        env:
        - name: MONGO_URL
          value: mongodb://mongo/nodered
        - name: ROOT_DOMAIN
          value: example.com
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      - name: registry
        image: verdaccio/verdaccio
        ports:
        - containerPort: 4873
        volumeMounts:
        - name: registry-data
          mountPath: /verdaccio/storage
        - name: registry-conf
          mountPath: /verdaccio/conf
      volumes:
      - name: secret
        secret:
          secretName: kube-config
      - name: mongo-data
        hostPath:
          path: /opt/mongo-data
          type: Directory
      - name: registry-data
        hostPath:
          path: /opt/registry-data
          type: Directory
      - name: registry-conf
        secret:
          secretName: registry-conf

This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.

It also binds 2 sets of secrets: the first holds the authentication details to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app; the second is the config for the Verdaccio NPM registry.

I’m using the HostPath volume provider to store the MongoDB data and the Verdaccio registry data on the filesystem of the Pi, but for a production deployment I’d probably use the NFS provider or a Cloud Storage option like AWS S3.

Manager

This is mainly the same as the docker version, but I had to swap out dockerode for kubernetes-client.

This library exposes the full Kubernetes API allowing the creation/modification/destruction of all entities.

Standing up a new instance is a little more complicated as it’s now a multi step process (there’s a rough sketch of the API calls after the list).

  1. Create a Pod with the custom-node-red container
  2. Create a Service based on that pod
  3. Expose that service via the Ingress addon
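
Something along these lines using the kubernetes-client module, where podSpec, serviceSpec and ingressSpec stand in for the generated manifests for the new instance (a sketch, not the exact manager code):

// Sketch: create the Pod, Service and Ingress for a new instance
await client.api.v1.namespaces('default').pods.post({ body: podSpec })
await client.api.v1.namespaces('default').services.post({ body: serviceSpec })
await client.apis.extensions.v1beta1.namespaces('default').ingresses.post({ body: ingressSpec })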

I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.

All the code for this version of the app is on github here.

Catalogue

In the Docker-Compose version the custom node `catalogue.json` file was hosted by the management application and had to be manually updated each time a new or updated node was pushed to the repository. For this version I’ve stood up a separate container.

This container runs a small web app that has 2 endpoints (a rough sketch follows the list).

  • /catalogue.json – which returns the current version of the catalogue
  • /update – which is triggered by the notify function of the Verdaccio private npm registry
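
The app itself is tiny; a sketch of it, assuming an updateCatalogue() helper that rebuilds the module list from what Verdaccio reports:

const express = require('express')
const app = express()
app.use(express.json())

// current catalogue, rebuilt whenever Verdaccio notifies us of a publish
let catalogue = { name: "Multi Tenant Node-RED catalogue", updated_at: new Date().toISOString(), modules: [] }

app.get('/catalogue.json', (req, res) => {
  res.json(catalogue)
})

app.post('/update', (req, res) => {
  // req.body is the JSON payload sent by Verdaccio's notify hook (see the config snippet below)
  // updateCatalogue() is a stand-in for whatever rebuilds the module list
  catalogue = updateCatalogue(req.body, catalogue)
  catalogue.updated_at = new Date().toISOString()
  res.sendStatus(200)
})

app.listen(80)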

The registry has this snippet added to the end of the config.yml

notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue/update
  content: '{"name": "{{name}}", "versions": "{{versions}}", "dist-tags": "{{dist-tags}}"}'

The code for this container can be found here.

Deploying

First clone the project from github

$ git clone https://github.com/hardillb/multi-tenant-node-red-k8s.git

Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.

$ ./setup.sh example.com private.example.com:5000

This will update some of the container locations in the deployment files and build the secrets needed to access the Kubernetes API (it reads the content of ~/.kube/config).

With all the configuration files updated the containers need building and pushing to the local container registry.

$ docker build ./manager -t private.example.com:5000/k8s-manager
$ docker push private.example.com:5000/k8s-manager
$ docker build ./catalogue -t private.example.com:5000/catalogue
$ docker push private.example.com:5000/catalogue
$ docker build ./custom-node-red -t private.example.com:5000/custom-node-red
$ docker push private.example.com:5000/custom-node-red

Finally trigger the actual deployment with kubectl

$ kubectl apply -f ./deployment

Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.

A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster’s Ingress IP addresses.
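
Something like this in the zone file, with 192.168.1.100 standing in for the cluster’s Ingress address:

*.example.com.    300    IN    A    192.168.1.100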

As usual the whole solution can be found on github here.

What’s Next

I need to work out how to set up Avahi CNAME entries for each deployment, as I had working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.

I also need to back port the new catalogue handling to the docker-compose version.

And finally I want to have a look at generating a Helm chart for all this to help get rid of needing the setup.sh script to modify the deployment YAML files.

p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.

Getting a Little Rusty

After building a tool to populate a Gopher server as an excuse to learn the Go programming language, I’ve recently been wanting to try my hand at Rust.

The best way to learn a new programming language is to use it to actually solve a problem, rather than just copying exercises out of a tutorial. So this time I thought I’d try and build my own Gopher server.

Specification

The initial version of the Gopher specification is laid down in RFC1436. It’s pretty simple. The client basically sends a string that represents the path to the document it wants followed by \r\n. In the case of the root document the client sends just the line ending chars.

The server responds with either the raw content of the document or if the path points to a directory then it sends the content of a file called gophermap if found in that directory.
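
You can see this by talking to a server directly with netcat (gopher.example.com standing in for a real server):

$ printf '\r\n' | nc gopher.example.com 70       # fetch the root gophermap
$ printf '/blog\r\n' | nc gopher.example.com 70  # fetch the /blog directory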

The gophermap file holds a list of links, a bit like an index.html for an HTTP server.

Ben's Place - Gopher

Just a place to make notes about things I've 
been playing with

0CV	cv.txt
1Blog	/blog
1Brad & Will Made a Tech Pod	/podcast

The lines that start with 0 are direct links to a file and have the label and then the file name, whereas lines that start with 1 are links to another directory. The fields are delimited with tabs.

You can also link to files/directories on other servers by including the server and port after the filename/dir path again separated by tabs.

1Blog	/blog	gopher.hardill.me.uk	70

There is also something called Gopher+ which is an extended version that looks to never have been formally adopted as a standard, but which both my gopher client and PyGopherd look to support. A copy of the draft is here.

Rust

Similar to things like NodeJS, Rust has a package manager called cargo that can be used to create a new project. Running cargo new rust_gopher will create a Cargo.toml file that looks a bit like this:

[package]
name = "rust_gopher"
version = "0.1.0"
authors = ["Ben Hardill <hardillb@gmail.com"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

It also creates a src directory with a main.rs and calls git init to start a new repository.

The src/main.rs is pre-populated with “Hello World”

fn main() {
    println!("Hello, world!");
}

I’ve replaced the main() function with one that uses the clap crate to parse some command line arguments.

let matches = App::new("rust_gopher")
	.version("0.1.0")
	.author("Ben Hardill")
	.about("Basic Gopher Server")
	.arg(Arg::with_name("hostname")
		.short("h")
		.long("hostname")
		.takes_value(true)
		.help("The hostname of this server"))
	.arg(Arg::with_name("port")
		.short("p")
		.long("port")
		.takes_value(true)
		.help("Port number to listen on"))
	.arg(Arg::with_name("dir")
		.short("d")
		.long("dir")
		.takes_value(true)
		.help("path to gopher content"))
	.get_matches();

let hostname = matches.value_of("hostname").unwrap_or("localhost");
let port :i16 = matches.value_of("port").unwrap_or("70").parse().unwrap();
let dir = matches.value_of("dir").unwrap_or("root");

Which gives really nicely formatted help output:

$ rust_gopher --help
rust_gopher 0.1.0
Ben Hardill
Basic Gopher Server

USAGE:
    rust_gopher [OPTIONS]

FLAGS:
        --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -d, --dir <dir>              path to gopher content
    -h, --hostname <hostname>    The hostname of this server
    -p, --port <port>            Port number to listen on
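
Starting the server for some local content would then look something like this (the hostname and path are just examples, and binding to port 70 needs elevated privileges):

$ sudo rust_gopher --hostname gopher.example.com --port 70 --dir /srv/gopher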

Reading the gophermap

To turn a basic gophermap file into what actually gets sent to the client, the file gets parsed into the following structure.

struct Gophermap {
	row_type: char,
	label: String,
	path: String,
	server: String,
	port: i16
}

fn read_gophermap(path: &Path, config: &Config) -> Vec<Gophermap> {
	let mut entries: Vec<Gophermap> = Vec::new();

	let file = File::open(path).unwrap();
	let reader = BufReader::new(file);

	for line in reader.lines() {
		let mut l = line.unwrap();
...
		let  entry = Gophermap {
 			row_type: t,
 			label: label,
 			path: p.to_string(),
 			server: s.to_string(),
 			port: port
 		};
		entries.push(entry);
	}
	entries
}

What’s next?

It still needs a bunch of work, mainly around error handling. I’ll probably keep poking at it over the holidays.

As usual the code is up on github here.

More MQTT VirtualHost Proxying

A really quick follow up to the earlier post about using TLS SNI to host multiple MQTT brokers on a single IP address.

In the previous post I used nginx to do the routing, but I have also worked out what the required input for Traefik would be.

The static config file looks like this

global:
  checkNewVersion: false
  sendAnonymousUsage: false
entryPoints:
  mqtts:
    address: ":1883"
api:
  dashboard: true
  insecure: true
providers:
  file:
    filename: config.yml
    directory: /config
    watch: true

And the dynamic config looks like this:

tcp:
  services:
    test1:
      loadBalancer:
        servers:
          - address: "192.168.1.1:1883"
    test2:
      loadBalancer:
        servers:
          - address: "192.168.1.2:1883"
  routers:
    test1:
      entryPoints:
        - "mqtts"
      rule: "HostSNI(`test1.example.com`)"
      service: test1
      tls: {}
    test2:
      entryPoints:
        - "mqtts"
      rule: "HostSNI(`test2.example.com`)"
      service: test2
      tls: {}

tls:
  certificates:
    - certFile: /certs/test1-chain.crt
      keyFile: /certs/test1.key
    - certFile: /certs/test2-chain.crt
      keyFile: /certs/test2.key

Of course all the dynamic stuff can be generated via any of the Traefik providers.
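
One way to run this with Traefik’s Docker image would be something along these lines, with the static config mounted in the default location (the image tag, file names and paths are just examples):

$ docker run -d -p 1883:1883 -p 8080:8080 \
    -v $(pwd)/traefik.yml:/etc/traefik/traefik.yml \
    -v $(pwd)/config:/config \
    -v $(pwd)/certs:/certs \
    traefik:v2.3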

Hostname Based Proxying with MQTT

An interesting question came up on Stack Overflow recently, for which I suggested a hypothetical way to do hostname based proxying for MQTT.

In this post I’ll explore how to actually implement that hypothetical solution.

History

HTTP added the ability to do hostname based proxying when it introduced the Host header in HTTP v1.1. This meant that a single IP address could be used for many sites and the server would decide which content to serve based on this header. Front end reverse proxies (e.g. nginx) can use the same header to decide which backend server to forward the traffic to.

This works well until we need to encrypt the traffic to the HTTP server using SSL/TLS, where the headers are encrypted. The solution to this is to use the SNI header in the TLS handshake, which tells the server which hostname the client is trying to connect to. The front end proxy can then either use this information to find the right local copy of the certificate/key for that site if it’s terminating the encryption at the frontend, or it can forward the whole connection directly to the correct backend server.

MQTT

Since the SNI header is in the initial TLS handshake and has nothing to do with the underlying protocol, it can be used for any protocol, in this case MQTT. This means we can set up a frontend proxy that uses SNI to pick the correct backend server to connect to.

Here is an nginx configuration file that proxies for 2 different MQTT brokers based on the hostname the client uses to connect. It is doing the TLS termination at the proxy before forwarding the unencrypted traffic to the backend.

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {  
  map $ssl_server_name $targetBackend {
    test1.example.com  192.168.1.1:1883;
    test2.example.com  192.168.1.2:1883;
  }

  map $ssl_server_name $targetCert {
    test1.example.com /certs/test1-chain.crt;
    test2.example.com /certs/test2-chain.crt;
  }

  map $ssl_server_name $targetCertKey {
    test1.example.com /certs/test1.key;
    test2.example.com /certs/test2.key;
  }
  
  server {
    listen 1883         ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_pass $targetBackend;
  } 
}

Assuming the DNS entries for test1.example.com and test2.example.com both point to the host running nginx, we can test this with the mosquitto_sub command as follows

$ mosquitto_sub -v -h test1.example.com -t test --cafile ./ca-certs.pem

This will be proxied to the broker running on 192.168.1.1, where as

$ mosquitto_sub -v -h test2.example.com -t test --cafile ./ca-certs.pem

will be proxied to the broker on 192.168.1.2.

Caveats

The main drawback with this approach is that it requires that all the clients connect using TLS, but this is not a huge problem as nearly all devices are capable of this now and for any internet facing service it should be the default anyway.

Acknowledgment

How to do this was mainly informed by the following Gist

dns-to-mdns

Having built the nginx-proxy-avahi-helper container to expose proxied instances as mDNS CNAME entries on my LAN, I also wanted a way to allow these containers to resolve other devices that are exposed via mDNS on my LAN.

By default the Docker DNS service does not resolve mDNS hostnames; it either takes the Docker host’s DNS server settings or defaults to using Google’s 8.8.8.8 service.

The containers themselves can’t do mDNS resolution unless you set their network mode to host mode which isn’t really what I want.

You can pass a list of DNS servers to a docker container when it’s started with the --dns= command line argument, which means that if I can run a bridge that will convert normal DNS requests into mDNS requests on my LAN, I should be able to get the containers to resolve the local devices.
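
Assuming the bridge ends up listening on 192.168.1.2 (a made up address), containers would then be started along these lines:

$ docker run -d --dns=192.168.1.2 --name my-app my-app-image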

I’ve played with writing DNS proxies before when I was looking at DoH so I had a reasonably good idea where to start.

const dgram = require('dgram')
const dnsPacket = require('dns-packet')
const mdnsResolver = require('mdns-resolver')

const port = process.env["DNS_PORT"] || 53
const server = dgram.createSocket('udp4')

server.on('listening', () => {
  console.log("listening")
})

server.on('message', (msg, remote) => {
  const packet = dnsPacket.decode(msg)
  var response = {
    type: "response",
    id: packet.id,
    questions: [ packet.questions[0] ],
    answers: []
  }

  mdnsResolver.resolve(packet.questions[0].name, packet.questions[0].type)
  .then(addr => {
    response.answers.push({
      type: packet.questions[0].type,
      class: packet.questions[0].class,
      name: packet.questions[0].name,
      ttl: 30,
      data: addr
    })
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
  .catch (err => {
    // nothing resolved, reply with an empty answer section
    server.send(dnsPacket.encode(response), remote.port, remote.address)
  })
})
server.bind(port)

This worked well but I ran into a small problem with the mdns-resolver library which wouldn’t resolve CNAMEs, but a small pull request soon fixed that.

The next spin of the code added support to send any request not for a .local domain to an upstream DNS server to resolve, which means I don’t need to add as many DNS servers to each container.

All the code is on github here.

Bonus points

This will also allow Windows machines, which don’t have native mDNS support, to do local name resolution.

Back to Building the Linear Clock

A LONG time ago I started to work out how to build a linear clock using a strip of 60 LEDs. It’s where all my playing with using Pi Zeros and Pi 4s as USB devices started from.

I’ve had a version running on the desk with jumper wires hooking everything up, but I’ve always wanted to try and do something a bit neater. So I’ve been looking at building a custom Pi pHAT to link everything together.

The design should be pretty simple

  • Break out pin 18 on the Pi to drive the ws2812b LEDs.
  • Supply 5v to the LED strip.
  • Include an I2C RTC module so the Pi will keep accurate time, especially when using a Pi Zero (not a Pi Zero W) with no network connectivity.

I know that the Pi can supply enough power from the 5v pin in the 40 pin header to drive the number of LEDs that the clock should have lit at any one time.

Technically the data input for the ws2812b should also be 5v, but I know that at least with the strip that I have it will work with the 3.3v supplied from GPIO pin 18.

I had started to work on this with Eagle back before it got taken over by Autodesk. While there is still a free version for hobbyists, I thought I’d try something different this time round and found librePCB.

Designing the Board

All the PCB design software looks to work in a similar way: you start by laying out the circuit logically before working out how to lay it out physically.

The component list is:

The RTC is going to need a battery to keep it up to date if the power goes out, so I’m using the Keystone 500 holder which is for a 12mm coin cell. These are a lot smaller than the more common 20mm (cr2032) version of coin cells, so should take up a lot less space on the board.

I also checked that the M41T81 has an RTC Linux kernel driver and it looks to be included in the rtc-m41t80 module, so it should work.

Finally I’m using the terminal block because I don’t seem to be able to find a suitable board mountable connector for the JST SM connector that comes on most of the LED strips I’ve seen. The Wikipedia entry implies that the standard is only used for wire to wire connectors.

Circuit layout

LibrePCB has a number of component libraries that include one for the Raspberry Pi 40 pin header.

Physical and logical diagram of RTC component

But I had to create a local library for some of the other parts, especially the RTC and the battery holders. I will look at how to contribute these parts to the library once I’ve got the whole thing up and running properly.

Block circuit diagram

The Pi has built in pull up resistors on the I2C bus pins so I shouldn’t need to add any.

Board layout

Now I have all the components linked up it was time to layout the actual board.

View of PCB layout

The board dimensions are 65mm x 30mm which matches the Pi Zero and with the 40 pin header across the top edge.

The arrangement of the pins on the RTC means that no matter which way round I mount it I always end up with 2 tracks having to cross, which means I have one via to the underside of the board. Everything else fits on one layer. I’ve stuck the terminal block on the left hand edge so the strip can run straight out from there, and the power connector for the Pi can come in on the bottom edge.

Possible Improvements

  • An I2C identifier IC – The Raspberry Pi HAT spec includes the option to use the second I2C bus on the Pi to hold the device tree information for the device.
  • A power supply connector – Since the LED strip can draw a lot of power when all are lit, it can make sense to add either a separate power supply for just the LEDs or a bigger power supply that can power the Pi via the 5v GPIO pins.
  • A 3.3v to 5v Level shifter – Because not all the LED strips will work with 3.3v GPIO input.
  • Find a light level sensor so I can adjust the brightness of the LED strip based on the room brightness.

Next Steps

The board design files are all up on GitHub here.

I now need to send off the design to get some boards made, order the components and try and put one together.

Getting boards made is getting cheaper and cheaper; an initial test order of 5 boards from JLC PCB came in at £1.53 + £3.90 shipping (this looks to include a 50% first order discount). This has a 12 day shipping time, but I’m not in a rush so this just feels incredibly cheap.

Soldering the surface mount RTC is going to challenge my skills a bit but I’m up for giving it a go. I think I might need to buy some solder paste and a magnifier.

I’ll do another post when the parts all come in and I’ve put one together.

Another LoRa Temperature/Humidity Sensor

Having deployed my Adafruit Feather version of a Temperature/Humidity LoRa Sensor to make use of my The Things Network gateway, it’s now time to build another sensor, this time with a Raspberry Pi Zero.

Pi Supply LoRa node pHat with sensor

I have a Pi Supply LoRa node pHAT that adds the radio side of things, but I needed a sensor. The pHAT has a header on the right hand edge which breaks out the 3.3v, 2, 3, 4 and Gnd pins from the Pi.

These just happen to line up with the I2C bus. They are also broken out in the same order as used by the Pimoroni Breakout Garden series of I2C sensors which is really useful.

Pimoroni’s Breakout Garden offer a huge range of sensors and output devices in either I2C or SPI. In this case I grabbed the BME680 which offers temperature, pressure, humidity and air quality in a single package.

Pimoroni supply a python library to read from the sensor, so it was pretty easy to combine it with the rak811 library that drives the LoRa pHAT:

#!/usr/bin/env python3
from rak811 import Mode, Rak811
import bme680
import time

lora = Rak811()
lora.hard_reset()
lora.mode = Mode.LoRaWan
lora.band = 'EU868'
lora.set_config(app_eui='xxxxxxxxxxxxx',
                app_key='xxxxxxxxxxxxxxxxxxxxxx')
lora.join_otaa()
lora.dr = 5

sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)
sensor.set_humidity_oversample(bme680.OS_2X)
sensor.set_temperature_oversample(bme680.OS_8X)
sensor.set_filter(bme680.FILTER_SIZE_3)

while True:
    if sensor.get_sensor_data():
        temp = int(sensor.data.temperature * 100)
        humidity = int(sensor.data.humidity * 100)
        foo = "{0:04x}{1:04x}".format(temp,humidity)
        lora.send(bytes.fromhex(foo))
        time.sleep(20)

lora.close()

I’m only using the temperature and humidity values at the moment to help keep the payload compatible with the feather version so I can use a single “port” in the TTN app.
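
For completeness, roughly what a payload decoder function in the TTN console could look like for this packing (two big-endian 16-bit values, each the reading multiplied by 100; negative temperatures aren’t handled):

function Decoder(bytes, port) {
  // first two bytes are temperature * 100, next two are humidity * 100
  var temp = (bytes[0] << 8) | bytes[1]
  var humidity = (bytes[2] << 8) | bytes[3]
  return {
    temperature: temp / 100,
    humidity: humidity / 100
  }
}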

I’ve put all the code for both this and the Feather version on GitHub here.

As well as the Pi Zero version I’ve upgraded the Feather with a 1200mAh LiPo battery and updated the code to also transmit the battery voltage so I can track battery life and work out how often I need to charge it.

Adding the battery also meant I could stick it in my bag and go for a walk round the village to see what sort of range I’m getting.

I’m currently using a little coil antenna on the gateway that is laid on its side; I’ll find something to prop it up with, which should help. The feather is also only using a length of wire as an antenna, so this could be seen as a worst case test.

It’s more than good enough for what I need right now, but it would be good to put a proper high gain one in, then it might be useful to more people. It would be great if there was a way to see how many unique TTN applications had forwarded data through a gateway, to see if anybody else has used it.

LoRa Temperature/Humidity Sensor

After deploying the TTN Gateway in my loft last year, I’ve finally got round to deploying a sensor to make use of it.

As the great lock down of 2020 kicked off I stuck in a quick order with Pimoroni for an Adafruit Feather 32u4 with an onboard LoRa radio (in hindsight I probably should have got the M0 based board).

Adafruit Feather LoRa Module

I already had a DHT22 temperature & humidity sensor and the required resistor, so it was just a case of soldering on an antenna of the right length and adding the headers so I could hook up the 3 wires needed to talk to the DHT22.

There is an example app included with the TinyLoRa library that reads from the sensor and sends a message to TTN every 30 seconds.

(There are full instructions on how to set everything up on the Adafruit site here)

Now that the messages are being published to the TTN Application I needed a way to consume them. TTN have a set of Node-RED nodes which make this pretty simple, though installing them did lead to a weekend adventure of having to upgrade my entire home automation system. The nodes wouldn’t build/install on Node-RED v0.16.0, NodeJS v6 and Raspbian Jessie, so it was time to upgrade to Node-RED v1.0.4, NodeJS v12 and Raspbian Buster.

Node-RED flow consuming messages from Temp/Humidity sensor from TTN

This flow receives messages via MQTT direct from TTN, it then separates out the Temperature and Humidity values, runs them through smooth nodes to generate a rolling average of the last 20 values. This is needed because the DHT22 sensor is pretty noisy.

Plots of Temperature and Humdity

It outputs these smoothed values to 2 charts on a Node-RED Dashboard, which show the trend over the last 12 hours.

It also outputs to 2 MQTT topics that are mapped to my public facing broker, which means they can be used to resurrect the temperature chart on my home page. When I get an outdoor version of the sensor (hopefully solar/battery powered) up and running, it will have both lines that my old weather station used to produce.

The last thing the flow does is to update the temperature reading for my virtual thermostat in the Google Assistant HomeGraph. This uses the RBE node to make sure that it only sends updates when the value actually changes.

The next step is to build a couple more and find a suitable battery and enclosure so I can stick one in the back garden.