A couple of weeks ago I was rearranging my collection of Raspberry Pis that live in the attic (a Kubernetes cluster, a LoRa gateway and a few other things) and I was having trouble remembering exactly which was which, as a few of them have the same Pimoroni Pibow case in the same colours. I decided it was time to actually label them all to make life a little easier.
My first thought was a Dymo device, but I decided that I didn’t want one of the original plastic tape embossing machines, and the newer printers get expensive quickly. I did go down a rabbit hole around Brother label printers that come with Linux printer drivers, but decided I didn’t actually need that level of support.
I ended up grabbing a D11 Bluetooth Thermal Label Printer from Amazon. It comes with a roll of 12×40 mm stickers and can print text, numbers, emoji, barcodes or QR codes. There are many different sizes of sticker available, with a selection of borders, background prints or transparent backing, and even a glow-in-the-dark version.
The rolls have NFC tags to identify the size and type of stickers currently installed in the printer, and the app picks this up when you try to print.
It uses an Android (and iOS) app to create the labels and then send them to the printer. The app is pretty intuitive; the only slight niggle is that if you want to save pre-built layouts you need to sign up for an online account. This is not a problem if you are just doing one-off labels for different things.
It has solved the problem I bought it for. We will see how the thermal paper holds up over time, but most of the labels are on the underside of the Pibow cases so should be out of direct light most of the time.
Recently I’ve been playing with how to build a Bluetooth audio device using a Raspberry Pi Zero. The following are some notes on what I found.
The first question is why build one when you can buy one for way less than the cost of the parts. There are a couple of reasons:
I build IoT prototypes for a living, and the best way to get a feel for the challenges is to actually face them.
Hacking on stuff is fun.
I’m starting out with a standard Raspberry Pi Zero W. This gets me a base platform that includes WiFi and Bluetooth.
The one thing that’s missing is an audio output (apart from HDMI), but Raspberry Pis support audio using the I2S standard. There are several I2S pHATs available and I’m going to be using a pHAT DAC from Pimoroni. I’ve used these before for a project so I’m reasonably happy with how to set it up, but Pimoroni have detailed instructions.
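For reference, the setup boils down to a couple of lines in /boot/config.txt: disabling the on-board analogue audio and enabling the DAC overlay (the pHAT DAC works with the hifiberry-dac overlay). This is just a sketch of the relevant lines; Pimoroni’s instructions cover the full process:

```
# /boot/config.txt
# Disable the on-board analogue audio
#dtparam=audio=on

# Enable the I2S DAC (the pHAT DAC is compatible with this overlay)
dtoverlay=hifiberry-dac
```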
I’m going to add a screen to show things like the current track title & artist along with the volume. I’m also going to need some buttons to send Play/Pause, Next & Previous commands to the connected device. I have a PaPiRus e-ink display that has 5 buttons built in which I was going to use but this clashes with the GPIO pins used for the DAC so instead I’ve opted for the Inky pHAT and the Button Shim.
I knew the core components of this had to be a problem others had solved, and this proved to be the case. After a little bit of searching I found this project on GitHub.
As part of the configuration we need to generate the Bluetooth Class bitmask. This can be done on this site.
This outputs a hex value of 0x24043C, which is added to /etc/bluetooth/main.conf.
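As a sketch, the relevant part of /etc/bluetooth/main.conf ends up looking something like this (the Class value sets the Audio and Rendering service-class bits and an Audio/Video major device class, so phones list the Pi as a speaker):

```
[General]
Class = 0x24043C
```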
With this up and running I had a basic Bluetooth speaker that any phone can connect to without a pin and play music, but nothing else. The next step is to add some code to handle the button pushes and to update the display.
The Bluetooth stack on Linux is controlled and configured using DBus. DBus is a messaging system supporting IPC and RPC.
A bit of Googling turned up this askubuntu question, which got me started with the following command:
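The command looked roughly like this; the object path is built from the adapter name and the device’s MAC address, so yours will differ:

```shell
dbus-send --system --print-reply \
  --dest=org.bluez \
  /org/bluez/hci0/dev_44_78_3E_85_9D_6F/player0 \
  org.bluez.MediaPlayer1.Play
```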
This sends a Play command to the connected phone with the Bluetooth MAC address 44:78:3E:85:9D:6F. The problem is knowing what the MAC address is, as the system allows multiple devices to pair with the speaker. Luckily you can use DBus to query the system for the connected device. DBus also has some really good Python bindings, so with a bit more poking around I ended up with this:
import signal
import dbus
import buttonshim

bus = dbus.SystemBus()
# The ObjectManager lets us enumerate every object BlueZ exposes
manager = dbus.Interface(
    bus.get_object("org.bluez", "/"),
    "org.freedesktop.DBus.ObjectManager")

@buttonshim.on_press(buttonshim.BUTTON_A)
def playPause(button, pressed):
    objects = manager.GetManagedObjects()
    for path in objects.keys():
        interfaces = objects[path]
        for interface in interfaces.keys():
            if interface == "org.bluez.Device1":
                props = interfaces[interface]
                if props["Connected"] == 1:
                    # The media player sits at <device path>/player0
                    media = objects[path + "/player0"]["org.bluez.MediaPlayer1"]
                    mediaControlInterface = dbus.Interface(
                        bus.get_object("org.bluez", path + "/player0"),
                        "org.bluez.MediaPlayer1")
                    if media["Status"] == "paused":
                        mediaControlInterface.Play()
                    else:
                        mediaControlInterface.Pause()

signal.pause()
When button A is pressed this looks up the connected device, checks the current state of the player (is it playing or paused) and toggles the state. This means that one button can act as both Play and Pause. It also uses the org.bluez.MediaPlayer1 API rather than org.bluez.MediaControl1, which is marked as deprecated in the docs.
The button shim also comes with Python bindings so putting it all together was pretty simple.
DBus also lets you register to be notified when a property changes. This pairs nicely with the Track property on org.bluez.MediaPlayer1, which holds the Artist, Track Name, Album Name and Track length information supplied by the source. This can be combined with the Inky pHAT Python library to show the information on the screen.
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def trackChanged(*args, **kw):
    # args[0] is the interface the change happened on,
    # args[1] is a dict of the properties that changed
    target = args[0]
    if target == "org.bluez.MediaPlayer1":
        data = args[1].get("Track", 0)
        if data != 0:
            artist = data.get("Artist")
            track = data.get("Title")
            print("%s - %s" % (artist, track))

# Hook the DBus bindings up to the GLib main loop before connecting
DBusGMainLoop(set_as_default=True)
system_bus = dbus.SystemBus()
system_bus.add_signal_receiver(
    trackChanged,
    dbus_interface="org.freedesktop.DBus.Properties",
    signal_name="PropertiesChanged")

loop = GLib.MainLoop()
loop.run()
This code attaches a listener to the MediaPlayer object and, when it spots that the Track has changed, prints out the new Artist and Title. The code matches all PropertiesChanged events, which is a little messy, but I’ve not found a way to use wildcards or partial matches for the DBus interface in Python (since we don’t know the MAC address of the connected device at the time we start listening for changes).
Converting the Artist/Title information into an image with the Python Imaging Library and then getting the Inky pHAT to render it is not too tricky.
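A minimal sketch of that step, assuming Pillow (the maintained fork of PIL) is installed. The Inky pHAT is a 212×104 pixel display, so we draw onto a palette image of that size; the commented-out lines at the end are an outline of how the inky library would then be handed the image:

```python
from PIL import Image, ImageDraw, ImageFont

def track_image(artist, title, size=(212, 104)):
    # Build a palette image matching the Inky pHAT resolution
    # and draw the artist and title onto it
    img = Image.new("P", size, 0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    draw.text((4, 30), artist or "", fill=1, font=font)
    draw.text((4, 60), title or "", fill=1, font=font)
    return img

img = track_image("Some Artist", "Some Title")

# Rendering it on the hardware is then roughly:
# from inky import InkyPHAT
# display = InkyPHAT("red")
# display.set_image(img)
# display.show()
```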
As I mentioned, I recently got my hands on a set of 4 flic.io buttons. I pretty much immediately paired one of them with my phone and started playing. It soon became obvious that while fun, the use cases for a button paired to a phone were limited to a single-user environment and not what I had in mind.
What was needed was a way to hook the flic buttons up to something like a Raspberry Pi and Node-RED. While I was waiting for the buttons to arrive I was poking round the messages posted to the Indiegogo campaign, where one of the guys from Shortcut Labs mentioned that such a library was in the works. I reached out to their developer contact point asking about getting access to the library to build a Node-RED node around it, saying I was happy to help test any code they had. Just before Christmas I got hold of an early beta release to have a play with.
From that I was able to spin up an npm module and a Node-RED node.
The Node-RED node will currently listen for any buttons that are paired with the computer and publish a message indicating whether it was a single, double or long click.
I said I would sit on these nodes until the library shipped, but it appeared on GitHub yesterday, hence this post. The build includes binaries for Raspberry Pi, i386 and x86_64 and needs the very latest BlueZ packages (5.36+).
Both my nodes need a little bit of cleaning up and a decent README written; once that is done I’ll push them to npm.
Along with a lot of other people I’ve been waiting for these to drop through my door for most of the year.
I even started to look at building something similar (if a fair bit bigger) using a Raspberry Pi and the noble Node.js module.
Flic.io buttons are small silicone rubber buttons that can be used to trigger up to three different actions based on a single click, double click or press and hold. They connect to your phone via BLE. The app comes with support for a whole bunch of actions such as WeMo, Philips Hue, Android actions such as taking photos, and IFTTT for a bunch of extra web actions.
I’m starting to look at what it will take to build a Node-RED node for these as I want to set them up to control my WeMo lights at home and a bunch of other stuff. There is talk of a C library for use on the Raspberry Pi as well as the iOS/Android SDKs, which I should be able to wrap as a Node.js module if I can get hold of it. Otherwise I’ll have to get down and dirty with reverse engineering the GATT profile.
A new toy arrived in the post this morning. Eric Morse from TitanXT very kindly arranged this for me as a thank you for building the Tracks2TitanXT Android application.
I’ve been after one of these for a little while after getting a look at one when a couple of colleagues bought them to help with their training.
As well as heart rate information, the unit contains an accelerometer which is used to generate cadence information; this means it can be used to approximate distances where there is no GPS reception. The data is passed to a mobile phone via Bluetooth and there are a number of applications that understand it, including My Tracks. There is also an open-source project called zephyropen on Google Code that has documented the protocol and created a desktop app to display the data.
Zephyr also do a smarter version of the HxM called the BioHarness; this version gathers skin temperature and respiration rates as well as all the data from the HxM.
My Tracks only collects data from the sensor while it’s recording a route, and it’s not easy to see the data once the workout has finished, so I’m tempted to write a little app to grab the data at any time and possibly a tool to view the data stored in the My Tracks database. I’ll also look at adding average heart rate info to the workout summary Tracks2Miles can upload to dailymile.
The soft belt and pickups are a lot comfier than some of the previous HRMs I’ve used.