Tag Archives: Linux

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 IdeaPad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn’t go too well, and the screen size and resolution were way too low to do anything serious on it. It ended up with Windows 7 back on it, permanently plugged in because the battery is toast, and is mainly used to sync my Garmin watch to Strava.

Anyway, it was well past time for something rather more useful. I’d had my eye on the Dell XPS13 for a while, and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops with Linux pre-installed for a while under a program called Project Sputnik. This project ensures that all the built-in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik, so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of Q1 2016.

As well as comparing it to the U410, I’ve also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a slightly long in the tooth Lenovo W530 ThinkPad). The table below shows some of the key stats:

         U410 IdeaPad     W530 ThinkPad       XPS13
Weight   2kg              2.6kg (+0.75kg)     1.3kg
CPU      i5-3317U         i7-3740QM           i7-6560U
Memory   6GB              16GB                16GB
Disk     1TB              512GB               512GB (SSD)
Screen   14″ 1366×768     15.6″ 1920×1080     13.3″ 3200×1800

The stat I like best is the weight; lugging the W530 around is a killer, so being able to leave it on the desk at the office a little more often will be great.

Dell XPS13

As for actually running the machine, it’s been very smooth so far. It comes with Ubuntu 14.04 plus a couple of Dell-specific tweaks/backports from upstream. I’m normally a Fedora user, so Ubuntu as my main OS may take a bit of getting used to, and 14.04 is a little old at this point. 16.04 is due to ship soon, so I look forward to upgrading to see how it fares. I’ve swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I may still swap the whole thing for Fedora 24 when it ships to get something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the Linux driver support for this is reported to still be a little brittle. While I wait for it to settle down I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. It is a DisplayLink device, but things seem to be a lot better than the last time I tried to get a DisplayLink device to work; there is a new driver direct from the DisplayLink guys that seems to just work.
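If you want to check the adapter has actually been picked up, a couple of stock commands will do it (a quick sketch; the exact device and module names in the output will depend on the adapter and driver version):

$ lsusb | grep -i displaylink
$ dmesg | grep -i -e displaylink -e evdi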

Flic.io Linux library

As I mentioned, I recently got my hands on a set of 4 flic.io buttons. I pretty much immediately paired one of them with my phone and started playing. It soon became obvious that, while fun, the use cases for a button paired to a phone were limited to a single-user environment, which was not what I had in mind.

What was needed was a way to hook the flic buttons up to something like a Raspberry Pi and Node-RED. While I was waiting for the buttons to arrive I was poking round the messages posted to the Indiegogo campaign, and one of the guys from Shortcut Labs mentioned that such a library was in the works. I reached out to their developer contact point asking about getting access to the library to build a Node-RED node around, saying I was happy to help test any code they had. Just before Christmas I managed to get hold of an early beta release to have a play with.

From that I was able to spin up an npm module and a Node-RED node.

The Node-RED node will currently listen for any buttons that are paired with the computer and publish a message indicating whether it was a single, double or long click.

Flic.io Node-RED node

I said I would sit on these nodes until the library shipped, but it appeared on GitHub yesterday, hence this post. The build includes binaries for Raspberry Pi, i386 and x86_64 and needs the very latest BlueZ packages (5.36+).
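You can check which BlueZ version you have by asking the daemon directly (the path is what Debian/Raspbian uses; it may differ on other distros):

$ /usr/sbin/bluetoothd --version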

Both my nodes need a little bit of cleaning up and a decent README writing; once that is done I’ll push them to npm.

UPDATE:
Both nodes are now on npmjs.org:
https://www.npmjs.com/package/node-flic-buttons
https://www.npmjs.com/package/node-red-contrib-flic-buttons

Securing Node-RED

Node-RED added some new authentication/authorisation code in the 0.10 release that allows for a pluggable scheme. In this post I’m going to talk about how to use this to make Node-RED use an LDAP server to look up users.

HTTPS

First of all, to do this properly we will need to enable HTTPS to ensure the communication channel between the browser and Node-RED is properly protected. This is done by adding an https value to the settings.js file like this:

...
},
https: {
  key: fs.readFileSync('privkey.pem'),
  cert: fs.readFileSync('cert.pem')
},
...

You also need to un-comment the var fs = require('fs'); line at the top of settings.js.

You can generate the privkey.pem and cert.pem with the following commands in your node-red directory:

pi@raspberrypi ~/node-red $ openssl genrsa -out privkey.pem
Generating RSA private key, 1024 bit long modulus
.............................++++++
......................++++++
e is 65537 (0x10001)
pi@raspberrypi ~/node-red $ openssl req -new -x509 -key privkey.pem -out cert.pem -days 1095
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:Hampshire
Locality Name (eg, city) []:Eastleigh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Node-RED
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:raspberrypi.local
Email Address []:

The important bit is the Common Name value; this needs to match either the name or IP address that you will use to access your Node-RED console. In my case I have avahi enabled, so I can access my Pi using its hostname raspberrypi with .local as the domain, but you may be more used to using an IP address like 192.168.1.12.

Since this is a self-signed certificate your browser will reject it the first time you try to connect, with a warning like this:

Chrome warning about unsafe cert.

This is because your certificate is not signed by one of the trusted certificate authorities. You can get past this error by clicking on Advanced then Proceed to raspberrypi.local (unsafe). Chrome will show this error every time you access the page; you can avoid this by copying the cert.pem file to your client machine and importing it into Chrome:

  1. Open Chrome settings page chrome://settings
  2. Scroll to the bottom of page and click on the “+Show advanced settings” link
  3. Scroll to the HTTPS/SSL and click on “Manage certificates…”
  4. Select the Servers tab and select import
  5. Select the cert.pem you copied from your Raspberry Pi

Usernames and Passwords

In previous Node-RED releases you could set a single username and password for the admin interface, for any static content, or one that covered both. This was done by adding an object containing the username and password to the settings.js file. This was useful but could be a little limiting. Since the 0.10 release there is now a pluggable authentication interface that also includes support for things like read-only access to the admin interface. Details of these updates can be found here.

To implement an authentication plugin you need to create a NodeJS module based on this skeleton:

var when = require("when");

module.exports = {
   type: "credentials",
   users: function(username){
      //returns a promise that checks the authorisation for a given user
      return when.promise(function(resolve) {
         if (username == 'foo') {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   authenticate: function(username, password) {
      //returns a promise that completes when the user has been authenticated
      return when.promise(function(resolve) {
         if (username == 'foo' && password == 'bar' ) {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   default: function() {
      // Resolve with the user object for the default user.
      // If no default user exists, resolve with null.
      return when.promise(function(resolve) {
         resolve(null);
      });
   }
};

This comprises 3 functions: authenticate, which checks a username/password against the backend; users, which checks the level of authorisation for a given user (used by Node-RED’s built-in OAuth mechanism once a user has been authenticated); and finally default, which matches unauthenticated users.

For my LDAP example I’m not going to implement separate read-only/write levels of authorisation, to keep things a little simpler.

The source for the module can be found here or on npmjs here. The easiest way to install it is with:

npm install -g node-red-contrib-ldap-auth

Then edit the Node-RED settings.js to include the following:

adminAuth: require('node-red-contrib-ldap-auth').setup({
    uri:'ldap://url.to.server', 
    base: 'ou=group,o=company.com', 
    filterTemplate: 'mail={{username}}'
}),

The filterTemplate is a mustache template used to search the LDAP server for the username; in this case the username is a mail address.
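Before wiring it into Node-RED you can sanity-check the base and filter with ldapsearch (a sketch using the placeholder server and base from above, with a made-up mail address):

$ ldapsearch -H ldap://url.to.server -x -b 'ou=group,o=company.com' '(mail=user@company.com)'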

Once it’s all up and running Node-RED should present you with something that looks like this:

Node-RED asking for credentials

Emergency FTTC Router

On Monday I moved to a new broadband provider (A&A). The BT Openreach guy turned up and swapped over the face plate on my master socket, dropped off the FTTC modem, then went down to the green box in the street and flipped my connection over. It all would have been very uneventful except for the small problem that the new router I needed to link my kit up to the FTTC modem had not arrived.

This is because BT messed up the address for where they think my line is installed a few years ago, and I’ve not been able to get them to fix it. A&A quickly sent me out a replacement router with next day delivery, but it still meant effectively 2 days without any access at home.

The router talks to the FTTC modem using a protocol called PPPoE over normal ethernet. There is a Linux package called rp-pppoe which provides the required support. So to quickly test that the install was working properly I installed this on to my laptop and plugged it directly into the FTTC modem. Things looked really good, but it did mean I was only able to get one device online and I was tied to one end of the sofa by the ethernet cable.

PPPoE is configured the same way PPP used to be for dial-up modems; you just need to create an /etc/ppp/pppoe.conf file that looks a bit like this:

ETH=eth0
USER=xxxxxxx
DEMAND=no
DNSTYPE=SERVER
PEERDNS=yes
DEFAULTROUTE=yes
PING="."
CONNECT_POLL=2
CF_BASE=`basename $CONFIG`
PIDFILE="/var/run/$CF_BASE-adsl.pid"
LCP_INTERVAL=20
LCP_FAILURE=3
FIREWALL=NONE
CLAMPMSS=1412
SYNCHRONOUS=no
ACNAME=
SERVICENAME=
CONNECT_TIMEOUT=30
PPPOE_TIMEOUT=80

And include your username and password in /etc/ppp/chap-secrets. Once set up you just need to run pppoe-start as root.
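chap-secrets is a simple four-column file (client, server, secret, allowed IP addresses); a minimal entry looks something like this, with the username and password obviously being placeholders:

# client     server  secret          IP addresses
"xxxxxxx"    *       "yourpassword"  *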

In order to get back to something like normal I needed something else. I had a Raspberry Pi in my bag along with a USB Ethernet adapter which looked like it should fit the bill.

I installed rp-pppoe and the DHCP server, then plugged one ethernet adapter into the FTTC modem and the other into an ethernet hub. Into the hub I plugged an old WiFi access point and the rest of my usual machines. After configuring the Pi to masquerade IP traffic from the hub I had everything back up and running. The only downside is that speeds are limited to 10mbps, as that is as quick as the built-in ethernet adapter on the Pi will do.
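For reference, the masquerading part boils down to a handful of commands like these (a sketch assuming the PPPoE session comes up as ppp0 and the adapter facing the hub is eth1; your interface names may differ):

# let the Pi forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
# rewrite traffic from the hub so it appears to come from the PPP link
sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o ppp0 -j ACCEPT
sudo iptables -A FORWARD -i ppp0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT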

Playing with touchatag reader

Touchatag reader
We got a bunch of Touchatag NFC readers in the office just after Christmas and I said I would have a play with one to see what we could use them for. I had seen one before (in fact I borrowed one from Andy Piper for a little while) but didn’t get much further than trying to read my work ID card.

To get it working on Linux you need to use PCSC Lite and download the driver from ACS (the guys that actually make the readers that Touchatag use). You can grab the download here. Once you’ve built these (the standard “configure; make; sudo make install”) you will need to update one of the PCSC config files; there is a patch linked from this page: http://idefix.net/~koos/rfid-touchatag.html

PCSC Lite is packaged for Fedora and Ubuntu, but I couldn’t find libnfc in the standard Ubuntu repos, so I ended up building it myself for one of the machines in the office. Again this was a simple “configure; make; sudo make install”. With that done I was able to use the nfc-mfclassic tool from the libnfc samples to read data from a tag.

$ nfc-mfclassic r a test.out
Connected to NFC reader: ACS ACR122U 00 00 / ACR122U103 - PN532 v1.6 (0x07)
Found MIFARE Classic 4k card with UID: 9e2bcaea
Reading out 256 blocks |.....................................|

This gets me a file with all the data stored on a tag (assuming I know the right keys to access all the blocks), but most of the time just having the tag ID is enough to trigger an event. After a bit more poking around I found nfc-eventd, which seemed to fit the bill perfectly.

This allows you to specify commands to be run when a tag is placed on the reader and when it is removed, and it will pass the tag ID to the command. Here is the config file I used:

nfc-eventd {

  # Run in background? Implies debug=false if true
  daemon = false;

  # show debug messages?
  debug = true;
	
  # polling time in seconds
  polling_time = 1;

  # expire time in seconds
  # default = 0 ( no expire )
  expire_time = 0;
	
  device my_touchatag {
    driver = "ACR122";
    name = "ACS ACR 38U-CCID 01 00";
  }

  # which device to use ? note: if this part is commented out, 
  # nfc-eventd will try to pick up device automagically...
  #nfc_device = "my_touchatag";

  # list of events and actions
  module nem_execute {
    # Tag inserted
    event tag_insert {
      # what to do if an action fail?
      # ignore  : continue to next action
      # return  : end action sequence
      # quit    : end program
      on_error = ignore ;
	
      # You can enter several, comma-separated action entries
      # they will be executed in turn
      action = "publish 'wmqtt://nfc@broker:1883/nfc/tagid?qos=0&retain=n&debug=0' $TAG_UID"
    }
	
    # Tag has been removed
    event tag_remove { 
      on_error = ignore;
      action = "(echo -n 'Tag (uid=$TAG_UID) removed at: ' && date) >> /tmp/nfc-eventd.log";
    }
	
    # Too much time card removed
    event expire_time { 
      on_error = ignore;
      action = "/bin/false";
    }
  }

}

Here I have used the publish command from the IBM IA93 C MQTT package to publish a message with the tag ID to the nfc topic. You can do something similar with mosquitto_pub like this:

mosquitto_pub -h broker -i nfc -t nfc -m $TAG_UID
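A quick way to check the tag events are flowing is to subscribe to the same topic from another machine:

mosquitto_sub -h broker -t nfc -v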

The plan is to use this to allow the guys in ETS to log into various demos in the lab with their ID badges.

Next on the list is to see if I can get the reader to respond to my Galaxy Nexus when it’s in tag mode.

Timelapse photography

As I mentioned at the end of my last post, I have been playing around with gphoto2 to create some time lapse videos of the assembly of one of my Christmas gifts.

I have played with making time lapse video before: when I set up my MMS CCTV system with motion I enabled a feature that creates a video from a sample image taken every 30 seconds. Motion uses a webcam (or other Video4Linux source), and all the webcams I had access to up at my folks’ are pretty low resolution, so this wasn’t what I was looking for.

I did have my Canon 350D and the little point-and-shoot Canon A520 that lives on the end of my work bag, so I thought I’d see what I could do with them. The 350D is 8 megapixels and the A520 is 4 megapixels, so both will take a frame way bigger than 720p that I can crop down.

I had a bit of a look round and found an app called gphoto2 that claimed to be able to drive both cameras via the USB port, taking images at a set interval. I plugged the 350D in and tried to get it to work, but kept getting the following error*:

*** Error ***
Sorry, your camera does not support generic capture
ERROR: Could not capture.
*** Error (-6: 'Unsupported operation') ***

So I swapped over to the A520 and things seemed to work fine. I set it up on the tripod and fired off the following command:

[hardillb@bagend ~]$ gphoto2 --capture-image -I 120

This triggers the camera every 2 minutes, which I thought should be enough time to see some progress in each frame.

Apart from having to swap the batteries twice it all went rather well; I soon started to ignore the little beep from the camera as it took each shot. At the end I copied all the images off the SD card for processing. Each image started out at 2272×1704, so they would have been big enough to use in 1080p video, but I decided to shrink them down to 720p.

The following little ImageMagick script resizes the images down and adds 2 black bars down the sides to pad each one out to a full 16:9 720p frame.

#!/bin/sh
for x in *.JPG
do
   convert "$x" -resize 1280x720 -bordercolor black -border 160x0 "resized/$x"
done

The first bit, -resize 1280x720, resizes the original image down to 720 pixels high, and the second bit, -bordercolor black -border 160x0, adds the two 160 pixel wide black bars to pad the image up to the required 1280 pixels wide before writing a copy out to the resized directory.

And this mencoder line stitches them together into a video at 2 frames per second, so each second of video is equivalent to about 4 minutes of real time.

mencoder "mf://*.JPG" -mf fps=2 -o day1.avi -ovc lavc -lavcopts \
vcodec=msmpeg4v2:vbitrate=800

Here is a sample of the output

[Embedded YouTube video]

*I have since found this gphoto bug report that mentions changing the camera’s USB mode from PTP mode to normal mode. After finding this setting on the camera I managed to get the 350D to work with gphoto2 as well.

WIFI presence detection

Back in my very first post I talked about using Bluetooth to detect my presence at home in order to disable the CCTV system and control a few other things.

While this works well, it does not scale to multiple people as the Bluetooth layer 2 ping takes about 5 seconds to time out if the device is not in range. This means that at most 12 different phones can be checked in a minute.

A couple of recent chats with people at work (Vaibhavi Joshi, Dale Lane and Bharat Bedi) got me thinking about this again. Modern phones tend to have WiFi as well as Bluetooth and 3G radios these days, so I thought I’d have a look at whether WiFi could be used to locate devices.

After a bit of a poke around it looked like a package called Kismet should be able to do what I wanted.

Kismet

Kismet is a client server application: the backend server reads from the network card and decodes the packets, and the UI requests data from the server over a socket connection. This also means the backend can be on a different machine; in fact several drone backends can be consolidated into a single master backend server, with all the captured data presented to one UI. This means you could distribute a number of drones over a site and generate a map as devices move between the areas covered by the different backends.

The default client is an ncurses-based application that lists all the visible networks and shows a chart of the incoming packet rates. It’s great for getting a view of which networks are active, which can be very useful when you have to set up a new one and want to see which channels are free.

Rather than use the default client I decided to write my own, to drive the backend the way I wanted and to make exposing the data easier (I’m going to publish detected devices on an MQTT topic). But first I had a bit of a play using the netcat (nc) command. Netcat basically pipes stdin/stdout to and from a given socket, which is useful here because the Kismet protocol is just a set of simple text commands. For example the following command will get the Kismet backend to return a list of all the clients it has seen to date:


echo -e '!0 enable client MAC,manuf,signal_dbm,signal_rssi' | nc localhost 2501


Returns something that looks like this:

...
*CLIENT: 00:25:69:7D:53:D9 [0x01]SagemCommu[0x01] -71 0
...

The only tricky bit about the response is that any field that can contain a space is wrapped in characters with a value of 0x01; in this case the manufacturer field could contain spaces, so the response needs chopping up each time a client is spotted.

I decided to write my client in Java (because the MQTT libraries are easy to use), so I used a regular expression to split up the response:

Pattern.compile("\\*CLIENT: ([0-9A-F:]*) \\x01(.*)\\x01 (-?\\d+) (\\d+)");

By default Kismet cycles round all the available channels to try to get a full picture of all the WiFi traffic in range, but this means it can miss some packets and, in turn, miss clients that are not generating a lot of traffic. To help get round this I have locked Kismet to listen on just the channel my WiFi access point uses, since all my devices are likely to try to connect to it as soon as they come in range, so there is less chance of missing my phone.

!1  HOPSOURCE cab63dc8-9916-11e0-b51a-0f04751ce201 LOCK 13

cab63dc8-9916-11e0-b51a-0f04751ce201 is the UUID assigned to the WiFi card by Kismet, and 13 is the channel my WiFi access point runs on. You can find the UUID by running the following command:

echo -e '!1234 enable source type,username,channel,uuid' | nc localhost 2501

Which returns a string that looks like this every time the backend hops to a new channel:

*KISMET: 0.0.0 1308611701 [0x01]Kismet_2009[0x01] [0x01]alert[0x01] 0 
*PROTOCOLS: KISMET,ERROR,ACK,PROTOCOLS,CAPABILITY,TERMINATE,TIME,PACKET,STATUS, 
PLUGIN,SOURCE,ALERT,WEPKEY,STRING,GPS,BSSID,SSID,CLIENT,BSSIDSRC,CLISRC, 
NETTAG,CLITAG,REMOVE,CHANNEL,SPECTRUM,INFO,BATTERY 
*SOURCE: orinoco_cs test 3 30b9b5a4-9b93-11e0-acfe-ee054e2c7201 
*ACK: 1234 [0x01]OK[0x01]

Publishing the last seen time on the topic /WIFIWatch/<mac> allows applications to register for a specific device and also to build up a list of all devices ever seen and when.

WIFI Watch topics
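Watching the updates from the command line is a one-liner with mosquitto_sub (assuming a mosquitto broker; the hostname broker is a placeholder):

mosquitto_sub -h broker -t '/WIFIWatch/#' -v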

It’s not just phones that have WiFi adapters these days: netbooks, tablets, even digital cameras (with things like Eye-Fi) all have them. Also, with multiple Kismet nodes it might be possible to track devices as they move around an area.

Next is to look at the signal strength information to see if I can judge a relative distance from the detection adapter.

Resources:

Kismet

Unbricking Guruplug Server

I’ve been playing with a Guruplug Server over the last few days, trying to get it to work with my MIMO UM-740 touch screen. It’s been slow progress because nobody seems to have a link to the source for the kernel it comes with, so I can’t build the DisplayLink module.

Guruplug
Used with permission from Flickr taken by andypiper

To try to get round this I attempted to update the kernel using some instructions on the openplug.org website. Unfortunately, due to a mismatch with the U-Boot boot loader that came installed on the device, the new kernel wouldn’t boot.

I tried following the un-bricking instructions on the openplug.org wiki and ran into the problem that not only is the source not available for the original kernel, neither is the binary. I tried to grab a copy from a colleague’s machine, but it wouldn’t read that area of flash properly once booted.

I finally found a nice little post on the New IT forum that had all the bits needed to update U-Boot, the kernel and a clean root file system, all from a USB stick.

The updated version of U-Boot should also allow the pre-built kernels out there to boot as well.

So while I’m not back to the original state the Guruplug shipped in, it is now up and running again and I can get back to getting my MIMO working with it.

Linux Photo-me-booth

I’ve spent the last few days on site at a bespoke coach builder’s helping to fit out a truck. This all came about after a chat over lunch at the end of last year, when Kevin mentioned a project he was involved in to put some of our lab demos into a truck that will tour a number of universities as part of the Smarter Planet initiative.

Side 1

As well as building portable versions of some demos there was a new bit: the plan was to have a sort of photo booth to take pictures of the visiting students and build each one a custom avatar to use at each station and upload to sites like Flickr and Facebook.

Since we were already doing some work for the truck we said we would have a look at doing this bit as well; after about half an hour of playing that afternoon it looked like we should be able to put something together using a commodity webcam, Linux and some existing libraries.

First Pass

The first approach was to see if we could do what we needed in a reasonably simple web page. Using a streaming video server, the HTML5 <video> tag and a bit of JavaScript we had a working prototype up and running very quickly.

The only problem was the lag introduced by the video encoding and the browser buffering. Most of the time it was about 12 seconds; with a bit of tinkering we got it down to 5 seconds, but this was still far too long to ask somebody to pose for just to grab a single image.

Second Pass

So after getting so close with the last attempt I decided to have a look at a native solution that would remove the lag. I had a bit of a look round to see what options were available and came across the following:

  • Video4Linux
    This gives direct access to the video hardware connected to Linux
  • GStreamer
    This is a framework that allows you to build flows for interacting with media sources. This can be audio or video and from files as well as hardware devices.

As powerful as the Video4Linux API is, it seemed a bit too heavyweight for what I was looking for. While looking into GStreamer I found it had a pre-built element that would do pretty much exactly what I wanted, called CameraBin.

With a little bit of Python it is possible to use CameraBin to show a viewfinder and then write an image out on request.

self.camerabin = gst.element_factory_make("camerabin", "cam")
self.sink = gst.element_factory_make("xvimagesink", "sink")
src = gst.element_factory_make("v4l2src", "src")
src.set_property("device", "/dev/video0")
self.camerabin.set_property("viewfinder-sink", self.sink)
self.camerabin.set_property("video-source", src)
self.camerabin.set_property("flicker-mode", 1)
self.camerabin.connect("image-done", self.image_captured)

Where self.sink is a Glade drawing area to use as a viewfinder and self.image_captured is a callback to execute when the image has been captured. To set the filename to save the image to and start the viewfinder, run the following code:

self.camerabin.set_property("filename", "foo.jpg")
self.camerabin.set_state(gst.STATE_PLAYING)

To take a photo, emit the capture-start signal: self.camerabin.emit("capture-start").

The plan was for the avatar to be a silhouette on a supplied background; to make generating the silhouette easier the students will be standing in front of a green screen.

Green screen

The Python Imaging Library makes it easy to manipulate the captured image to extract the silhouette and then build up the final image from the background, the silhouette and finally the text.

import Image
import ImageFilter

image = Image.open(path)
image2 = image.crop((150, 80, 460, 450))
image3 = image2.convert("RGBA")
pixels = image3.load()
size = image3.size
for y in range(size[1]):
	for x in range(size[0]):
		pixel = pixels[x, y]
		# anything close enough to green becomes transparent, the rest solid black
		if pixel[1] > 135 and pixel[0] < 142 and pixel[2] < 152:
			pixels[x, y] = (0, 255, 0, 0)
		else:
			pixels[x, y] = (0, 0, 0, 255)

# smooth the ragged edges of the mask, then scale and composite
image4 = image3.filter(ImageFilter.ModeFilter(7))
image5 = image4.resize((465, 555))
background = Image.open('facebook-background.jpg')
background.paste(image5, (432, 173, 897, 728), image5)
text = Image.open('facebook-text.png')
background.paste(text, (0, 0), text)

The final result, shown on one of the plasma screens in the truck:

Silhouette wall

As well as building the Silhouette wall, ETS has provided a couple of other items to go on the truck:

  1. See It Sign It

    This application is a text to sign language translation engine that uses 3D avatars to sign. There will be 2 laptops on the truck that can be used to have a signing conversation. There is a short video demonstration of the system hooked up to a voice to text system here: http://www.youtube.com/watch?v=RarMKnjqzZU

  2. Smarter Office

    This is an evolution of the Smarter Home section in the ETS demo lab at Hursley. It uses a Current Cost power meter to monitor the energy used and feeds this to an Ambient Orb to visualise the information. It also has a watch that can recognise different gestures, which in turn can be used to turn things like the lamp and desk fan on and off; the power used by these is reflected in the change in colour of the orb.

For details of where the truck will be visiting over the year, please visit the tour’s Facebook page in the resources.

Resources

TV Scrobbling

Having seen Dale Lane’s work with VDR to build something similar to Last.FM for the TV he’s been watching, I’ve been looking to see if I could do the same with MythTV.

MythTV 0.23 shipped recently and with it came a new event system. Having had a quick look, it seemed like a good starting point. The following events looked like they might provide what I needed:

  • Playback started
  • Playback stopped
  • Playback changed

To use events you specify a command to run when each one fires; you can set these up with the mythtvsetup command. As well as specifying a command you can also pass arguments, using the same tokens as the MythTV job system. The following is not the full list, but should be enough for what I’m looking for:

  • %TITLE% – Title
  • %SUBTITLE% – Episode subtitle
  • %CHANID% – MythTV channel id
  • %DESCRIPTION% – The blurb from EPG
  • %PROGSTART% – The listed start time for the programme
  • %PROGEND% – The listed end time for the programme

There is also an option to add a command that fires on every event, which can be passed a %EVENTNAME% argument; this is very useful for debugging event ordering. Not all of the events support all of the arguments, as the data is not always relevant.

After a bit of playing I managed to get most of what I needed, but there were a few problems:

  1. No way to tell the difference between watching recorded and live TV
  2. No way to tell when you change channel when watching live TV
  3. No way to tell when one live programme ends and a new one starts with live TV

I raised a bug in the MythTV tracker (#8388) to cover the first 2 (the 3rd one didn’t occur to me until I got a bit further), which the guys very quickly added. So there is now a LiveTV Started event fired when a frontend starts watching live TV, followed by a Playback Started event with the details of the programme. I’ll raise a ticket for the last one and, assuming I can get sign-off from the boss, see about submitting a patch to add it in. In the mean time, with a little extra work, I can infer the programme change from the Record Started and Record Finished events.

Acting on the events

So with all these events, which may be reported on either the backend or a frontend, it’s looking like a job for messaging. Luckily I already have a broker up and running for my home power monitoring and security setup, so now I just needed a script or two to publish the events and a consumer with a state machine to act on them.


#!/bin/sh
#
# playStarted.sh %TITLE% %SUBTITLE% %DESCRIPTION% %CHANID%

/home/mythtv/bin/sendMessage TV/watching/start "$1/$2 ($3) on $4"

mythtvsetup events screen

Where sendMessage is a small script that publishes the second argument to the topic given in the first. The other end is a Java JMS application that keeps the state machine up to date and updates the database when a show ends. Dale was kind enough to send me his database schema, so the data should look the same.
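sendMessage itself can be as simple as a wrapper round an MQTT publish command; a minimal sketch along those lines (the broker hostname is a placeholder):

#!/bin/sh
# sendMessage <topic> <message> - publish <message> to <topic> on the broker
mosquitto_pub -h broker -t "$1" -m "$2"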

What next

Now I’ve got my data logging to a local database I need to come up with a front end to present it all. Next time I get 5 minutes to chat to Dale when we are both in the office, I’ll see if I can borrow some of his code and ask whether he wants to look at building a site where we can all share what we’ve been watching.

Along with the events for the internal workings of the front and back ends, there are 10 user-configurable events that can be bound to key presses; assuming I can find a spare button on the remote it should be possible to bind one of these to something like Favourite.


Resources