Open Source Rewards

This is a bit of a rambling piece around some things that have been rattling round in my brain for a while. I’ve written it out mainly to just get it out of my head.

There has been a lot of noise around Open Source projects and cloud companies making huge profits by running them as services. To the extent that some projects have even changed their license to either prevent this, to force the cloud providers to publish all the supporting code that allows the projects to be run at scale, or to release new features under non Open Source licenses.

Most of the cases that have been making the news have been around projects with an organisation behind them, e.g. Elasticsearch, that also sells support and hosted versions of the project. While I’m sympathetic to the arguments of the projects, I’m not sure the license changes work, and the cloud companies do tend to commit developers to the projects (OK, usually with an aim of getting the features they want implemented, but they do fix bugs as well). Steve O’Grady from RedMonk has a good piece about this here.

I’m less interested in the big firms fighting over this sort of thing, I’m more interested in the other end of the scale, the little “guys/gals”.

There have also been cases involving single developers who have built core components that underpin huge amounts of Open Source software. This is especially clear in the NodeJS world, where hundreds of tiny npm modules come together to build thousands of more complex modules that then make up everybody’s applications.

A lot of OS developers do it for the love, or to practise their skills on things they enjoy. But when a project becomes popular the expectations start to stack up. There are too many stories of entitled users expecting the same levels of support/service that they would get from a large enterprise when paying for a support contract.

The problem comes when the developer at the bottom of the stack gets fed up with everybody that depends on their module raising bugs and not contributing fixes, or just gets bored and walks away. We end up with what happened to the event-stream module: the dev got bored and handed ownership over to somebody else (the first person who asked), and that somebody later injected a bunch of code that would steal cryptocurrency private keys.

So what are the options to allow a lone developer to work on their projects and get the support they deserve?

Get employed by somebody that depends on your code

If your project is of value to a medium/large organisation it can be in their interests to bring support for that project in house. I’ve seen this happen with projects I’ve been involved with (if only on the periphery) and it can work really well.

The thing that can be tricky is balancing the needs of the community that has grown up around a project with the developer’s new “master”, who may well have their own ideas about what direction the project should take.

I’ve also seen work projects be open sourced and their developers continuing to drive them and get paid to do so.

Set up your own business around the project

This is sort of the ultimate version of the previous one.

It can be done by selling support/services around a project, but it can also make some of the problems I mentioned earlier worse, as some users will expect even more now they are paying for it.

PayPal donations or GitHub sponsorship/Patreon/Ko-Fi

Adding a link on the project’s About page to a PayPal account or a Patreon/GitHub sponsorship page can let people show their appreciation for a developer’s work.

PayPal links work well for one-off payments, whereas the Patreon/GitHub/Ko-Fi sponsorship model is a bit more of a commitment, but can be a good way to cover ongoing costs without needing to charge for a service directly. With a little work the developer can use the APIs these platforms provide to offer bespoke content/services to users who choose to donate.
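As a sketch of what that could look like: GitHub can send a “sponsorship” webhook event when somebody sponsors you, so a tiny service could unlock sponsor-only content. This is only a minimal illustration; it assumes a Flask app, the endpoint path and the unlock logic are my own invention, and it skips the webhook signature verification a real deployment would need.

#!/usr/bin/env python
from flask import Flask, request

app = Flask(__name__)
sponsors = set()  # stand-in for proper persistent storage

@app.route("/github-webhook", methods=["POST"])
def sponsorship():
    # GitHub sets the X-GitHub-Event header to "sponsorship" for Sponsors events
    if request.headers.get("X-GitHub-Event") != "sponsorship":
        return "ignored"
    payload = request.get_json()
    login = payload["sponsorship"]["sponsor"]["login"]
    if payload["action"] == "created":
        sponsors.add(login)       # unlock the bespoke content for this user
    elif payload["action"] == "cancelled":
        sponsors.discard(login)
    return "ok"

if __name__ == "__main__":
    app.run(port=8080)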

I have included a PayPal link on the About page of some of my projects; I have set the default amount to £5 with the suggestion that I will probably use it to buy myself a beer from time to time.

I have also recently signed up to the GitHub sponsorship programme to see how it works. GitHub lets you set different monthly amounts between $1 and $20,000; at this time I only have one level, set at $1.

Adverts/Affiliate links in projects

If you are building a mobile app or run a website then there is always the option of including adverts in the page/app. With this approach the user of the project doesn’t have to do anything apart from put up with some, hopefully relevant, adverts.

There is a balance to be struck here, as too many adverts, or adverts for irrelevant things, can annoy users. I do occasionally post Amazon affiliate links in blog posts and I keep track of how much I’ve earned on the about page.

This model is not just valid for open source projects; many (most) mobile games have adopted it, even if just as a starting tier before either allowing users to pay a fee to remove the adverts or to buy in-game content.

Amazon Wishlists

A slightly different approach is to publish a link to something like an Amazon wishlist. This allows users to buy the developer a gift as a token of appreciation. The list lets the developer pick things they actually want and offer a range of items at different price points.

Back when Amazon was closer to its roots as an online book store (and people still read books to learn new things) it was a great way to get a book about a new subject to start a new project.

Other random thoughts

For another very interesting take on some of this, please watch this video from Tom Scott for the Royal Institution about science communication in the world of YouTube and social media. It has a section in the middle about parasocial relationships which is really interesting in this context (as is the rest of the video).

Conclusion

I don’t really have one at the moment; as I said at the start, this is a bit of a stream of consciousness post.

I do think that there isn’t a one size fits all model, nor are the options I’ve listed above all of them; they were just the ones that came to mind as I typed.

If I come up with anything meaningful, I’ll do a follow up post. Also, if somebody wants to sponsor me $20,000 a month on GitHub to come up with something, drop me a line ;-).

Alternate PPPoE Server (Access Concentrator)

Earlier this year I had a short series of posts where I walked through building a tiny (fantasy) ISP.

I’ve been using the Roaring Penguin version of the PPPoE server that is available by default in Raspbian, as I am running all of this on a Raspberry Pi 4. It worked pretty well, but I had to add the traffic shaping manually. At the time this was useful, as it gave me an excuse to finally get round to learning how to do some of those things.

I’ve been looking for a better accounting solution, one that I can reset the data counters on at a regular interval without forcing the connection to drop. While digging around I found an alternative PPPoE implementation called accel-ppp.

accel-ppp supports a whole host of tunnelling protocols such as pptp, l2tp, sstp and ipoe as well as pppoe. It also has a built-in traffic shaper module. It builds easily enough on Raspbian so I thought I’d give it a try.

The documentation for accel-ppp isn’t great; it’s incomplete and a mix of English and Russian, which is not the most helpful. But it is possible to piece enough together to get things working. This is the configuration I ended up with:

[modules]
log_file
pppoe
auth_pap
radius
shaper
#net-snmp
#logwtmp
#connlimit
ipv6_nd
#ipv6_dhcp
#ipv6pool

[core]
log-error=/var/log/accel-ppp/core.log
thread-count=4

[common]
#single-session=replace
#sid-case=upper
#sid-source=seq
#max-sessions=1000
check-ip=1

[ppp]
verbose=1
min-mtu=1280
mtu=1400
mru=1400
#accomp=deny
#pcomp=deny
#ccp=0
#mppe=require
ipv4=require
ipv6=allow
ipv6-intf-id=0:0:0:1
ipv6-peer-intf-id=0:0:0:2
ipv6-accept-peer-intf-id=1
lcp-echo-interval=20
#lcp-echo-failure=3
lcp-echo-timeout=120
unit-cache=1
#unit-preallocate=1

[auth]
#any-login=0
#noauth=0

[pppoe]
verbose=1
ac-name=PPPoE7
#service-name=PPPoE7
#pado-delay=0
#pado-delay=0,100:100,200:200,-1:500
called-sid=mac
#tr101=1
#padi-limit=0
#ip-pool=pppoe
#ifname=pppoe%d
#sid-uppercase=0
#vlan-mon=eth0,10-200
#vlan-timeout=60
#vlan-name=%I.%N
interface=eth0

[dns]
#dns1=172.16.0.1
#dns2=172.16.1.1

[radius]
dictionary=/usr/local/share/accel-ppp/radius/dictionary
nas-identifier=accel-ppp
nas-ip-address=127.0.0.1
gw-ip-address=192.168.5.1
server=127.0.0.1,testing123,auth-port=1812,acct-port=1813,req-limit=50,fail-timeout=0,max-fail=10,weight=1
#dae-server=127.0.0.1:3799,testing123
verbose=1
#timeout=3
#max-try=3
#acct-timeout=120
#acct-delay-time=0
#acct-on=0
#attr-tunnel-type=My-Tunnel-Type

[log]
log-file=/var/log/accel-ppp/accel-ppp.log
log-emerg=/var/log/accel-ppp/emerg.log
log-fail-file=/var/log/accel-ppp/auth-fail.log
copy=1
#color=1
#per-user-dir=per_user
#per-session-dir=per_session
#per-session=1
level=3

[shaper]
vendor=RoaringPenguin
attr-up=RP-Upstream-Speed-Limit
attr-down=RP-Downstream-Speed-Limit
#down-burst-factor=0.1
#up-burst-factor=1.0
#latency=50
#mpu=0
#mtu=0
#r2q=10
#quantum=1500
#moderate-quantum=1
#cburst=1534
#ifb=ifb0
up-limiter=police
down-limiter=tbf
#leaf-qdisc=sfq perturb 10
#leaf-qdisc=fq_codel [limit PACKETS] [flows NUMBER] [target TIME] [interval TIME] [quantum BYTES] [[no]ecn]
#rate-multiplier=1
#fwmark=1
#rate-limit=2048/1024
verbose=1

[cli]
verbose=1
telnet=127.0.0.1:2000
tcp=127.0.0.1:2001
#password=123
#sessions-columns=ifname,username,ip,ip6,ip6-dp,type,state,uptime,uptime-raw,calling-sid,called-sid,sid,comp,rx-bytes,tx-bytes,rx-bytes-raw,tx-bytes-raw,rx-pkts,tx-pkts

[snmp]
master=0
agent-name=accel-ppp

[connlimit]
limit=10/min
burst=3
timeout=60

To use the same RADIUS attributes as before I had to copy the Roaring Penguin dictionary to /usr/local/share/accel-ppp/radius/ and edit it to add BEGIN-VENDOR and END-VENDOR tags.
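For reference, the edited dictionary ends up looking something like this. The vendor ID and attribute numbers shown are the ones from the stock FreeRADIUS dictionary.roaringpenguin, so worth double checking against your local copy:

VENDOR          Roaring-Penguin             10055

BEGIN-VENDOR    Roaring-Penguin
ATTRIBUTE       RP-Upstream-Speed-Limit     1       integer
ATTRIBUTE       RP-Downstream-Speed-Limit   2       integer
END-VENDOR      Roaring-Penguin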

With this configuration it is a straight drop-in replacement for the Roaring Penguin version, with no need to change anything in the LDAP or RADIUS setup, but it doesn’t need the `ip-up` script to set up the traffic shaping.

I’m still playing with some of the extra features, like SNMP support and the command line/telnet support for sending management commands.

Basic traffic shaping

So, I thought this would be a lot harder than it ended up being [1].

Over the last few posts I’ve been walking through the parts needed to build a super simple miniature ISP and one of the last bits I think (I know I’ll have forgotten something) we need is a way to limit the bandwidth available to the end users.

Normally this is mainly done by a step in the chain we have been missing out: the actual DSL link between the user’s house and the exchange. The length of the telephone line imposes some technical restrictions, as does the encoding scheme used by the DSL modems. In the case I’ve been talking about we don’t have any of that, as it’s all running directly over Gigabit Ethernet.

Limiting bandwidth is called traffic shaping. One of the reasons to apply traffic shaping is to make sure all the users get a consistent experience, e.g. to stop one user maxing out all the backhaul bandwidth (streaming many 4k Netflix episodes) and preventing all the other users from being able to even just browse basic pages.

Home broadband connections tend to have an asymmetric bandwidth profile. This is because most of what home users do is dominated by information being downloaded rather than uploaded, e.g. requesting a web page consists of a request (a small upload) followed by a much larger download (the content of the page). So as a starting point I will assume the backhaul for our ISP is going to be configured in a similar way, and set each user up with a similarly asymmetric profile of 10Mbit/s down and 5Mbit/s up.

Initially I thought it might be just a case of setting a couple of variables in the RADIUS response. While looking at the dictionary for the RADIUS client I came across the dictionary.roaringpenguin file, which includes the following two attribute types:

  • RP-Upstream-Speed-Limit
  • RP-Downstream-Speed-Limit

Since Roaring Penguin is the name of the package that provides the pppoe-server, I wondered if this meant it had bandwidth control built in. I updated the RADIUS configuration files to include these alongside where I’d set Acct-Interim-Interval so they are sent for every user.

post-auth {

	update reply {
		Acct-Interim-Interval = 300
		RP-Upstream-Speed-Limit = 5120
		RP-Downstream-Speed-Limit = 10240
	}
        ...
}

Unfortunately this didn’t have any noticeable effect so it was time to have a bit of a wider look.

Linux has a traffic shaping tool called tc. The definitive guide is included in a document called the Linux Advanced Routing and Traffic Control HowTo, and it is incredibly powerful. Luckily for me what I want is relatively trivial, so there is no need to dig into all of its intricacies.

Traffic shaping is normally applied to outbound traffic so we will deal with that first. In this case outbound is relative to the machine running the pppoe-server so we will be setting the limits for the user’s download speed. Section 9.2.2.2 has an example we can use.

# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540

This limits the outgoing connection on device ppp0 to 220kbit. We can adjust the value for the rate to 10240kbit or 10mbit to get the right speed.
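For example, for the 10Mbit/s download profile described above (still assuming the device is ppp0):

# tc qdisc add dev ppp0 root tbf rate 10240kbit latency 50ms burst 1540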

Traffic coming into the device is controlled with ingress rules and is called policing. The tc-policing man page has an example for limiting incoming traffic.

 # tc qdisc add dev eth0 handle ffff: ingress
 # tc filter add dev eth0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 1mbit burst 100k

We can change the device to ppp0 and the rate to 5mbit and we have what we are looking for.
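With those two changes applied it becomes:

 # tc qdisc add dev ppp0 handle ffff: ingress
 # tc filter add dev ppp0 parent ffff: u32 \
                   match u32 0 0 \
                   police rate 5mbit burst 100k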

Automation

Setting this up on the command line once the connection is up and running is easy enough, but it really needs to be done automatically whenever a user connects. The pppd daemon that gets started for each connection has a hook for this: the /etc/ppp/ip-up script is called when the link comes up, and in turn this calls all the scripts in /etc/ppp/ip-up.d, so we can include a script in there to do the work.

The next trick is where to find the settings. When setting up the pppoe-server we added the plugin radattr.so line to the /etc/ppp/options file; this causes all the RADIUS attributes to be written to a file when the connection is created. The file is /var/run/radattr.ppp0 (with the suffix changing for each connection).

Framed-Protocol PPP
Framed-Compression Van-Jacobson-TCP-IP
Reply-Message Hello World
Framed-IP-Address 192.168.5.2
Framed-IP-Netmask 255.255.255.0
Acct-Interim-Interval 300
RP-Upstream-Speed-Limit 5120
RP-Downstream-Speed-Limit 10240

With a little bit of sed and awk magic we can tidy that up (environment variable names can’t contain - and we need to wrap multi-word values in ") and turn it into environment variables, plus a script to set up the traffic shaping.

#!/bin/sh

# Convert the RADIUS attributes into environment variables
# ('-' is not valid in a variable name and multi-word values
# need wrapping in quotes)
eval "$(sed 's/-/_/g; s/ /=/' "/var/run/radattr.$PPP_IFACE" | awk -F = '{if ($0 ~ /(.*)=(.* .*)/) {print $1 "=\"" $2 "\""} else {print $0}}')"

if [ -n "$RP_Downstream_Speed_Limit" ];
then

# down - egress from the server is the user's download speed
tc qdisc add dev "$PPP_IFACE" root tbf rate "${RP_Downstream_Speed_Limit}kbit" latency 50ms burst 1540

# up - policing ingress to the server limits the user's upload speed
tc qdisc add dev "$PPP_IFACE" handle ffff: ingress
tc filter add dev "$PPP_IFACE" parent ffff: u32 \
          match u32 0 0 \
          police rate "${RP_Upstream_Speed_Limit}kbit" burst 100k

else
	echo "no rate info"
fi
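One thing to remember: assuming the script is saved as /etc/ppp/ip-up.d/shaper (the name is my choice; note that run-parts skips names containing a dot), it needs to be made executable:

# chmod +x /etc/ppp/ip-up.d/shaper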

Now when we test the bandwidth with iperf we see the speeds limited to what we are looking for.
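For reference, a quick way to run that sort of test from a connected client, assuming iperf3 with iperf3 -s running on the gateway address from the RADIUS config:

$ iperf3 -c 192.168.5.1        # upload, should top out around 5Mbit/s
$ iperf3 -c 192.168.5.1 -R     # reverse mode, download around 10Mbit/s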

Advanced

[1] This is a super simple version that probably has lots of problems I’ve not yet discovered. It would also be good to set up something that would allow a single user to get bursts of speed above a simple total/number-of-users share of the bandwidth when nobody else is using it. So it’s back to reading the LARTC guide to dig out some of the more advanced options.

Deploying a TTN LoRa Gateway

I’ve been meaning to get round to this ever since the Pi Supply Kickstarter delivered my LoRa Gateway HAT and the LoRa Node pHAT.

They have been sat in their boxes waiting until I had some spare time (and I’d finally finished moving a few things around to free up a spare Pi).

LoRa Gateway on a Pi 3

LoRa is a long range, low bandwidth radio system that uses unlicensed spectrum. When combined with the higher level LoRaWAN protocol it makes a great IoT platform for low power devices that want to send low volumes of data in places where there is no WiFi coverage and the cost of a cellular connection can’t be justified.

LoRaWAN allows you to deploy a collection of gateway devices that act as receivers for a large number of deployed devices. These gateways then forward messages on to a central point for processing.

The Things Network

A group called The Things Network run a LoRaWAN deployment. They are aiming for as large a coverage area as possible. To do this they allow users to deploy their own gateways and join them to the network. By joining the network you get to use everybody else’s gateways in exchange for letting other people use yours.

Setting up the Gateway

This was particularly easy. I just had to download an image and flash it to an SD card, then stick that into the Pi along with an ethernet cable and some power.

After the Pi boots you point your browser at http://iotloragateway.local and fill in a couple of values generated when registering the gateway on the TTN site, and that’s it. The gateway is now up and running and ready to send/receive packets from any devices in range.

Testing

In order to test the gateway I needed to set up a Pi Zero with the LoRa Node pHAT. This was a little trickier, but not much.

First I had to disable the Linux serial console; this can be done using the raspi-config command. I also had to add dtoverlay=pi3-miniuart-bt to /boot/config.txt.

That was all that was needed to get the hardware configured. As for the software, there is a rak811 Python package that supplies the API and utilities to work with the pHAT.
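The package installs from PyPI in the usual way (assuming Python 3 on Raspbian):

$ sudo pip3 install rak811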

I now needed to declare an application on The Things Network site; this is how messages get routed to be processed. Taking the values for this application I could then write the following helloWorld.py:

#!/usr/bin/env python3
from rak811 import Mode, Rak811

lora = Rak811()
lora.hard_reset()
lora.mode = Mode.LoRaWan
lora.band = 'EU868'
lora.set_config(app_eui='xxxxxxxxx',
                app_key='xxxxxxxxxxxx')
lora.join_otaa()
lora.dr = 5
lora.send('Hello world')
lora.close()

The message can then be seen arriving in The Things Network console.

Data arriving and being displayed in The Things Network console.

And I can subscribe directly to that data feed via MQTT:

$ mosquitto_sub -h eu.thethings.network -u 'lora-app1-hardill-me-uk' -P 'xxxxxxxxx' -v -t '+/devices/+/up'
{
  "app_id": "lora-app1-hardill-me-uk",
  "dev_id": "test-lora-pi-zero",
  "hardware_serial": "323833356E387901",
  "port": 1,
  "counter": 0,
  "is_retry": true,
  "payload_raw": "SGVsbG8gd29ybGQ=",
  "metadata": {
    "time": "2019-08-10T15:45:07.568449769Z",
    "frequency": 867.5,
    "modulation": "LORA",
    "data_rate": "SF7BW125",
    "airtime": 61696000,
    "coding_rate": "4/5",
    "gateways": [
      {
        "gtw_id": "lora-gw1-hardill-me-uk",
        "gtw_trusted": true,
        "timestamp": 910757708,
        "time": "2019-08-10T15:45:07Z",
        "channel": 5,
        "rssi": -91,
        "snr": 7.75,
        "rf_chain": 0,
        "latitude": 51.678905,
        "longitude": -2.3549008,
        "location_source": "registry"
      }
    ]
  }
}
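The payload_raw field is just the base64 encoded message, so it is easy to check the right data arrived:

$ echo "SGVsbG8gd29ybGQ=" | base64 -d
Hello world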

Next Steps

First up will be to get a better antenna for the gateway and to move the whole thing up into the attic, from where it should get a good view north out towards the River Severn. After that I want to get a small battery powered LoRa/GPS board, like a TTGO T-Beam, and ride round on my bike to get a feel for what the range/coverage actually is.

I’ll also be keeping an eye on the stats from the gateway to see if anybody else nearby is deploying TTN LoRaWAN devices.

Tracks2Miles and Tracks2TitanXT removed from Play Store

I have removed both of the apps in the title from the Play Store.

This is for a number of reasons:

  1. MyTracks removed the API required to get at the recorded tracks’ metadata, which vastly reduced the capability of the app.
  2. DailyMile shut down, which rendered Tracks2Miles useless.
  3. Google flagged both apps as potentially having SQL injection attacks (while theoretically possible I couldn’t find a directly exploitable use case).
  4. Even before the shutdown I had completely moved all my activity tracking to dedicated Garmin devices and Strava.

Listing AWS Lambda Runtimes

For the last few weeks I’ve been getting emails from AWS about Node 6.10 going end of life, saying I have Lambdas deployed using this runtime.

The emails don’t list which Lambdas or which regions they think are at fault, which makes tracking down the culprit difficult. I only really have one live instance deployed across multiple regions (and one test instance in a single region).

AWS Lambda region list

Clicking down the list of regions is time consuming and prone to mistakes.

In the email AWS do provide a command to list which Lambdas are running with Node 6.10:

aws lambda list-functions --query="Functions[?Runtime=='nodejs6.10']"

But what they fail to mention is that this only checks your current default region. I can’t find a way to get the aws command line tool to list the Lambda regions; the closest I’ve found is the list of EC2 regions, which hopefully matches up. Pairing this with the command line JSON search tool jq and a bit of Bash scripting, I’ve come up with the following:

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.Functions[] | .FunctionName + " - " + .Runtime';
done

This walks over all the regions and prints out all the function names and the runtime they are using.

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
"oAuth-test - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"

In my case it only lists NodeJS 8.10, so I have no idea why AWS keep sending me these emails. Also, since I’m only on the basic support level, I can’t even raise a technical help desk query to find out.

Anyway I hope this might be useful to others with the same problem.

DNS-over-HTTPS update

My post on DNS-over-HTTPS from last year is getting a fair bit more traffic after a few UK newspaper articles (mainly crying that the new UK Government censorship plans won’t work if Google rolls DNS-over-HTTPS out in Chrome… what a shame). The following article has a good overview [nakedsecurity].

Anyway, I tweeted a link to the old post and it started a bit of a discussion, and a question about the other side of the system came up: namely, how to run a DNS resolver that pushes traffic over DNS-over-HTTPS, rather than providing an HTTPS endpoint that supports queries. The idea being that at the moment only Firefox & Chrome can take advantage of the secure lookups.

I did a bit of poking around and found things like stubby, which does DNS-over-TLS (another approach to secure DNS lookups); Cloudflare also have cloudflared, which can proxy DNS-over-HTTPS to Cloudflare’s DNS server (it is also used to set up the VPN tunnel to Cloudflare’s Argo service, which is worth a good look as well).

Anyway, while there are existing solutions out there I thought I’d have a really quick go at writing my own, to go with the part I’d written last year, just to see how hard it could be.

It turned out a really basic first pass could be done in about 40 lines of JavaScript:

const dgram = require('dgram')
const request = require('request')
const dnsPacket = require('dns-packet')

const port = process.env["DNS_PORT"] || 53
//https://cloudflare-dns.com/dns-query
const url = process.env["DNS_URL"] 
    || "https://dns.google.com/experimental" 
const allow_selfSigned = 
    (process.env["DNS_INSECURE"] == 1) 

const server = dgram.createSocket('udp6')

server.on('listening', function(){
  console.log("listening")
})

server.on('message', function(msg, remote){
  var packet = dnsPacket.decode(msg)
  var id = packet.id
  var options = {
    url: url,
    method: 'POST',
    body: msg,
    encoding: null,
    rejectUnauthorized: allow_selfSigned ? false : true,
    headers: {
      'Accept': 'application/dns-message',
      'Content-Type': 'application/dns-message'
    }
  }

  request(options, function(err, resp, body){
    if (!err && resp.statusCode == 200) {
      // make sure the reply carries the id of the original query
      var respPacket = dnsPacket.decode(body)
      respPacket.id = id
      // send the answer back to the client that asked
      server.send(dnsPacket.encode(respPacket), remote.port, remote.address)
    } else {
      console.log(err)
    }
  })

})

server.bind(port)
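A quick way to try it out, assuming the file is saved as index.js (my name for it), is to run it on a high port so it doesn’t need root (DNS_PORT is the environment variable the script reads) and point dig at it:

$ DNS_PORT=5353 node index.js &
$ dig -p 5353 @::1 example.com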

It really could do with some caching and some more error handling, and I’d like to add support for Google’s JSON based lookups as well as the binary DNS format, but I’m going to add it to the github project with the other half and people can help extend it if they want.

The hardest part was working out that I needed the encoding: null in the request options to stop it trying to turn the binary response into a string, leaving it as a Buffer instead.

I’m in the process of migrating my DNS setup to a new machine; I’ll be adding DNS-over-TLS (using stunnel) & DNS-over-HTTPS listeners for the public facing side.

Building a Bluetooth speaker

Recently I’ve been playing with how to build a Bluetooth audio device using a Raspberry Pi Zero. The following are some notes on what I found.

First question: why build one when you can buy one for way less than the cost of the parts? There are a couple of reasons:

  • I build IoT prototypes for a living, and the best way to get a feel for the challenges is to actually face them.
  • Hacking on stuff is fun.

The Hardware

I’m starting out with a standard Raspberry Pi Zero W. This gets me a base high level platform that includes WiFi and Bluetooth.

Raspberry Pi Zero W

The one thing that’s missing is an audio output (apart from the HDMI), but Raspberry Pis support audio using the I2S standard. There are several I2S pHATs available and I’m going to be using a pHAT DAC from Pimoroni. I’ve used these before for a project so I’m reasonably happy with how to set them up, but Pimoroni have detailed instructions.

I’m going to add a screen to show things like the current track title & artist, along with the volume. I’m also going to need some buttons to send Play/Pause, Next & Previous commands to the connected device. I have a PaPiRus e-ink display that has 5 buttons built in, which I was going to use, but it clashes with the GPIO pins used by the DAC, so instead I’ve opted for the Inky pHAT and the Button Shim.

The Software

I knew the core components of this had to be a problem others had solved and this proved to be the case. After a little bit of searching I found this project on github.

As part of the configuration we need to generate the Bluetooth Class bitmask. This can be done on this site.


Class options

This outputs a hex value of 0x24043C, which is added to /etc/bluetooth/main.conf.
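The relevant bit of main.conf ends up looking like this (a sketch assuming the stock BlueZ config layout, where Class lives under the existing [General] section):

[General]
Class = 0x24043C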

With this up and running I had a basic Bluetooth speaker that any phone could connect to without a PIN and play music, but nothing else. The next step was to add some code to handle the button pushes and to update the display.

The Bluetooth stack on Linux is controlled and configured using DBus, a messaging system supporting IPC and RPC.

A bit of Googling around turned up this askubuntu question that got me started with the following command:

dbus-send --system --print-reply --dest=org.bluez /org/bluez/hci0/dev_44_78_3E_85_9D_6F org.bluez.MediaControl1.Play

This sends a Play command to the connected phone with the Bluetooth MAC address 44:78:3E:85:9D:6F. The problem is knowing what the MAC address is, as the system allows multiple devices to pair with the speaker. Luckily you can use DBus to query the system for the connected device. DBus also has some really good Python bindings, so with a bit more poking around I ended up with this:

#!/usr/bin/env python
import signal
import buttonshim
import dbus

bus = dbus.SystemBus()
manager = dbus.Interface(
    bus.get_object("org.bluez", "/"),
    "org.freedesktop.DBus.ObjectManager")

@buttonshim.on_press(buttonshim.BUTTON_A)
def playPause(button, pressed):
    objects = manager.GetManagedObjects()

    for path in objects.keys():
        interfaces = objects[path]
        for interface in interfaces.keys():
            if interface in [
                    "org.freedesktop.DBus.Introspectable",
                    "org.freedesktop.DBus.Properties"]:
                continue

            if interface == "org.bluez.Device1":
                props = interfaces[interface]
                if props["Connected"] == 1:
                    # the media player object hangs off the device path
                    media = objects[path + "/player0"]["org.bluez.MediaPlayer1"]

                    mediaControlInterface = dbus.Interface(
                        bus.get_object("org.bluez", path + "/player0"),
                        "org.bluez.MediaPlayer1")

                    # toggle between playing and paused
                    if media["Status"] == "paused":
                        mediaControlInterface.Play()
                    else:
                        mediaControlInterface.Pause()

signal.pause()

When button A is pressed this looks up the connected device, checks the current state of the player (is it playing or paused?) and toggles the state; this means that one button can be both Play and Pause. It also uses the org.bluez.MediaPlayer1 API rather than org.bluez.MediaControl1, which is marked as deprecated in the docs.
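For comparison, the one-off dbus-send equivalent using the newer API targets the player0 object under the device path (with the same caveat about needing to know the MAC address up front):

dbus-send --system --print-reply --dest=org.bluez /org/bluez/hci0/dev_44_78_3E_85_9D_6F/player0 org.bluez.MediaPlayer1.Play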

The button shim also comes with Python bindings so putting it all together was pretty simple.

DBus also lets you register to be notified when a property changes. This pairs nicely with the Track property on org.bluez.MediaPlayer1, as it holds the Artist, Track Name, Album Name and Track length information supplied by the source. This can be combined with the Inky pHAT Python library to show the information on the screen.

#!/usr/bin/env python

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def trackChanged(*args, **kw):
	target = args[0]
	if target == "org.bluez.MediaPlayer1":
		data = args[1].get("Track",0)
		if data != 0:
			artist = data.get('Artist')
			track = data.get('Title')
			print(artist)
			print(track)


DBusGMainLoop(set_as_default=True)
system_bus = dbus.SystemBus()
system_bus.add_signal_receiver(trackChanged, 
	dbus_interface="org.freedesktop.DBus.Properties", 
	signal_name="PropertiesChanged", 
	path='/org/bluez/hci0/dev_80_5A_04_12_03_0E/player0')
loop = GLib.MainLoop()
loop.run()

This code attaches a listener to the MediaPlayer object, and when it spots that the Track has changed it prints out the new Artist and Title. The code matches all PropertiesChanged events, which is a little messy, but I’ve not found a way to use wildcards or partial matches for the DBus path in Python (since we don’t know the MAC address of the connected device at the time we start listening for changes).

Converting the Artist/Title information into an image with the Python Imaging Library and then getting the Inky pHAT to render it is not too tricky:

from PIL import Image, ImageDraw, ImageFont
from font_fredoka_one import FredokaOne
from inky import InkyPHAT

...

disp = InkyPHAT("yellow")
font = ImageFont.truetype(FredokaOne, 22)

img = Image.new("P", (disp.WIDTH, disp.HEIGHT))
draw = ImageDraw.Draw(img)

# x/y positions are just examples, adjust to taste;
# draw in black so the text shows on the white background
draw.text((10, 10), "Artist: " + artist, disp.BLACK, font=font)
draw.text((10, 40), "Track: " + track, disp.BLACK, font=font)

disp.set_image(img)
disp.show()


That’s the basics working; now I need to find/build a case for it and then look at seeing if I can add Chromecast Audio and AirPlay support.

Node-RED Google Home Smart Home Action

Google Home

Following on from my Alexa Home Skill for Node-RED, it’s time to show some love to the Google Home users (OK, I’ve been slowly chipping away at this for ages, but I’ve finally found a bit of time).

One of the nice things about Google Assistant is that it works all over the place, I can use it via the text interface if I’m somewhere and can’t talk, or even from the car via Android Auto.


Google offer a pretty similar API for controlling smart home devices to the one offered by Amazon for Alexa, so the implementation of this was very similar. The biggest difference is that there is no requirement to use something like Amazon’s Lambda to interface with the service, so it’s just a single web endpoint.

I’ve taken pretty much the same approach as with the Alexa version in that I have a Web Site where you can sign up for an account and then define virtual devices with specific names and characteristics.

Virtual devices

Google support a lot more device types and characteristics than Amazon do with Alexa at the moment, but to start with I’m just supporting sockets/lights/switches and thermostats. I intend to add more later as I work out the best way to surface the data.

The other big change is that Google Assistant supports asynchronously updating the device state and the ability for the Assistant backend to query the state of a device. To support this I’m going to allow the response node to be configured with a specific device and to accept input that has not come from an input node.

The node is currently being beta tested; if you are interested, post in #google-home-assistant on the Node-RED Slack and I can add you to the ACL for the beta.

Google Assistant Node-RED Node

I’ll do another post when the node has finished testing and has been accepted by Google.