developerWorks Days Zurich 2012

This week I had a day out of the office to go to Zurich to talk at this year's IBM developerWorks Days. I had two sessions back to back in the mobile stream, the first an introduction to Android development and the second on MQTT.

The slots were only 35 minutes long (well, 45 minutes, but we had to leave 5 minutes at each end to let people move round) so there was a limit to how much detail I could go into. With this in mind I decided the best way to give people an introduction to Android development in that amount of time was to quickly walk through writing a reasonably simple application. The application had to be at least somewhat practical, but also very simple, so after a little bit of thinking I settled on an app to download the latest image from the web comic XKCD. There are a number of apps on Google Play that already do this (and do it a lot better), but it does show a little Activity GUI design. I got through about 95% of the app live on stage and only had to copy and paste the details of the onPostExecute method (which clears the progress dialog and updates the image) in the last minute to get it to the point where I could run it in the emulator.

Here are the slides for this session

And here is the Eclipse project for the Application I created live on stage:
http://www.hardill.me.uk/XKCD-demo-android-app.zip

The MQTT pitch was a little easier to set up: there is loads of great content on MQTT.org to use as a source, and of course I remembered to include the section on the MQTT-enabled mouse traps and twittering ferries from Andy Stanford-Clark.

Here are the slides for the MQTT session:

For the demo I used the JavaScript d3 topic tree viewer I blogged about last week and my Raspberry Pi running a Mosquitto broker and a little script to publish the core temperature, load and uptime values. The broker was also bridged to my home broker to show the feed from my weather centre and some other sensors.
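The publishing script itself only needs a few lines; a minimal sketch along these lines (the paho-mqtt client and the topic names here are just for illustration, not the exact script from the demo) does the same job:

<code>
# Sketch of a Pi stats publisher; client library, topics and interval
# are illustrative, not the exact script used in the demo.
import time
import paho.mqtt.publish as publish

while True:
    # core temperature in degrees C (sysfs reports millidegrees)
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        temp = int(f.read().strip()) / 1000.0
    # 1, 5 and 15 minute load averages; take the first
    with open("/proc/loadavg") as f:
        load = f.read().split()[0]
    # uptime in seconds
    with open("/proc/uptime") as f:
        uptime = f.read().split()[0]

    publish.multiple([
        ("pi/temperature", str(temp)),
        ("pi/load", load),
        ("pi/uptime", uptime),
    ], hostname="localhost")
    time.sleep(5)
</code>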

Even More MQTT enabled TVs

Kevin Modelling the headset

A project came up at work recently to do with using one of the Emotiv headsets to help out a former Italian IBMer who was suffering from locked-in syndrome. The project was being led by Kevin Brown, who was looking for ways to use the headset to drive things like sending email and browsing the web, but he was also looking for a way to interact with some other tech around the house. The TV was the first on the list.

Continuing on from my previous work with controlling TVs and video walls with MQTT, I said I would have a crack at this. My earlier solution was limited to LG TVs with serial ports, and this needed to work with any make, so a different approach was needed. It also needed to run on Windows, so it was a chance to play with C# and .NET.

To be TV agnostic it was decided to use a USB IR remote from a company called RedRat. They make a number of solutions, but their RedRat III was perfect for what was needed.

RedRat IR transmitter & receiver

The RedRat API comes with bindings for C++ and .NET on Windows (and there is a LIRC plugin for Linux). The RedRat III is not just an IR transmitter, it is also a receiver, which means it can “learn” from existing remote controls so it can be used with any TV.

Kevin is using the Emotiv headset to drive Dasher as the input device. Dasher is a sort of keyboard replacement that allows the user to build up words using, at a minimum, a single input, e.g. a single push button. As well as building words, other actions can be added to the selector; Kevin added actions for browsing and also to control the TV. These actions publish MQTT messages to a topic with payloads like “volUp”, “chanDown” or “power”.
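Just to illustrate the message format (this is not the actual Dasher plumbing, and the broker and topic names are placeholders), a single command can be published like this:

<code>
# Illustration only: publish one TV command the way the Dasher actions do.
import paho.mqtt.publish as publish

publish.single("house/tv", payload="volUp", hostname="broker")
</code>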

So now we had the inputs, it was time to get round to writing the code to turn those messages into IR signals. There are two MQTT .NET libraries listed on the MQTT.org software page. I grabbed the first off the list, MqttDotNet, and got things working pretty quickly.

The following few lines set up a connection and subscribe to a topic.

// Build the broker URI from the saved settings
String connectionString = "tcp://" + 
  Properties.Settings.Default.host.Trim() + ":" +
  Properties.Settings.Default.port;
IMqtt client = MqttClientFactory.CreateClient(connectionString, "mqtt2ir");
try
{
	// Connect with a clean session and wire up the callbacks
	client.Connect(true);
	client.PublishArrived += new PublishArrivedDelegate(onMessage);
	client.ConnectionLost += new ConnectionDelegate(connectionLost);
	// Subscribe to the command topic at QoS 0
	client.Subscribe(Properties.Settings.Default.topic.Trim(), 
	  QoS.BestEfforts);
}

Where the onMessage callback looks like this:

bool onMessage(object sender, PublishArrivedArgs msg)
{
	// The payload is the command name, e.g. "volUp" or "power"
	String command = msg.Payload.ToString().Trim();
	// Look up the previously learned IR signal for this command
	IRPacket packet = loadSignal(command);
	if (packet != null) 
	{
		// Send the signal out through the RedRat
		redrat.OutputModulatedSignal(packet);
	}
	return true;
}

MQTT2IR Settings Window

And that is pretty much the meat of the whole application; the rest was just some code to initialise the RedRat and turn it into a system tray application with a window for entering the broker details and training the commands.

Unfortunately, just a few days before we were due to deliver this project, we learned that the intended recipient had picked up a respiratory infection and had passed away. I would like to extend my thoughts to their family, and I hope we can find somebody who may find this work useful in the future.

Playing with touchatag reader

Touchatag reader
We got a bunch of Touchatag NFC readers in the office just after Christmas and I said I would have a play with one to see what we could use them for. I had seen one before (in fact I borrowed one from Andy Piper for a little while) but didn’t get much further than trying to read my work ID card.

To get it to work on Linux you need to use PCSC Lite and download the driver from ACS (the guys that actually make the readers that Touchatag use). You can grab the download here. Once you’ve built these (standard “configure; make; sudo make install”) you will need to update one of the PCSC config files. There is a patch linked to from this page: http://idefix.net/~koos/rfid-touchatag.html

PCSC Lite is packaged for Fedora and Ubuntu, but I couldn’t find libnfc in the standard repos for Ubuntu so I ended up building it myself for one of the machines in the office. Again this was a simple “configure; make; sudo make install”. With that done I was able to use the nfc-mfclassic tool from the libnfc samples to read data from a tag.

$ nfc-mfclassic r a test.out
Connected to NFC reader: ACS ACR122U 00 00 / ACR122U103 - PN532 v1.6 (0x07)
Found MIFARE Classic 4k card with UID: 9e2bcaea
Reading out 256 blocks |.....................................|

That gets me a file with all the data stored on a tag (assuming I know the right keys to access all the blocks), but most of the time just having the tag ID is enough to trigger an event. After a bit more poking around I found nfc-eventd, which seemed to fit the bill perfectly.

This allows you to specify commands to be run when a tag is placed on the reader and when it is removed, and it will pass the tag ID to the command. Here is the config file I used:

nfc-eventd {

  # Run in background? Implies debug=false if true
  daemon = false;

  # show debug messages?
  debug = true;
	
  # polling time in seconds
  polling_time = 1;

  # expire time in seconds
  # default = 0 ( no expire )
  expire_time = 0;
	
  device my_touchatag {
    driver = "ACR122";
    name = "ACS ACR 38U-CCID 01 00";
  }

  # which device to use ? note: if this part is commented out, 
  # nfc-eventd will try to pick up device automagically...
  #nfc_device = "my_touchatag";

  # list of events and actions
  module nem_execute {
    # Tag inserted
    event tag_insert {
      # what to do if an action fail?
      # ignore  : continue to next action
      # return  : end action sequence
      # quit    : end program
      on_error = ignore ;
	
      # You can enter several, comma-separated action entries
      # they will be executed in turn
      action = "publish 'wmqtt://nfc@broker:1883/nfc/tagid?qos=0&retain=n&debug=0' $TAG_UID"
    }
	
    # Tag has been removed
    event tag_remove { 
      on_error = ignore;
      action = "(echo -n 'Tag (uid=$TAG_UID) removed at: ' && date) >> /tmp/nfc-eventd.log";
    }
	
    # Too much time card removed
    event expire_time { 
      on_error = ignore;
      action = "/bin/false";
    }
  }

}

Here I have used the publish command from the IBM IA93 C MQTT package to publish a message with the tag ID to the nfc topic. You can do something similar with mosquitto_pub like this:

mosquitto_pub -h broker -i nfc -t nfc -m $TAG_UID

The plan is to use this now to allow the guys in ETS to log into various demos in the lab with their ID badges.
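On the receiving side a demo only needs to subscribe to the nfc topic and map tag IDs to people; a minimal sketch (again with the paho-mqtt client, and a made-up badge list) looks like this:

<code>
# Sketch of a demo listening for badge swipes on the nfc topic;
# the client library and the badge-to-name mapping are illustrative.
import paho.mqtt.subscribe as subscribe

BADGES = {"9e2bcaea": "Ben"}  # tag UID -> user, a made-up mapping

def on_message(client, userdata, message):
    uid = message.payload.decode().strip()
    user = BADGES.get(uid)
    if user:
        print("Logging %s into the demo" % user)

# blocks and calls on_message for every tag ID published to the nfc topic
subscribe.callback(on_message, "nfc", hostname="broker")
</code>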

Next on the list is to see if I can get the reader to respond to my Galaxy Nexus when it’s in tag mode.

Unbricking Guruplug Server

I’ve been playing with a Guruplug Server over the last few days, trying to get it to work with my MIMO UM-740 Touch Screen. It’s been slow progress because nobody seems to have a link to the source for the kernel it comes with, so I can’t build the DisplayLink module.

Guruplug
Photo by andypiper on Flickr, used with permission

To get round this I tried to update the kernel using some instructions on the openplug.org website. Unfortunately, due to a mismatch with the uboot boot loader that came installed on the device, the new kernel wouldn’t boot.

I tried following the un-bricking instructions on the openplug.org wiki and ran into the problem that not only is the source not available for the original, but neither is the binary. I tried to grab a copy from a colleague’s machine, but it didn’t want to read that area of flash properly once booted.

I finally found a nice little post on the New IT forum that had all the bits needed to update uboot, the kernel and a clean root file system using a USB stick.

The updated version of uboot should also allow the pre-built kernels out there to boot now.

So while I’m not back to the original state the Guruplug shipped in, it is now up and running again and I can get back to getting my MIMO working with it.

Linux Photo-me-booth

I’ve spent the last few days on site at a bespoke coach builder’s helping to fit out a truck. This all came about after a chat over lunch at the end of last year, when Kevin mentioned a project he was involved in to put some of our lab demos into a truck that will be touring round a number of universities as part of the Smarter Planet initiative.

Side 1

As well as building portable versions of some demos there was a new bit. The plan was to have a sort of photo booth to take pictures of the visiting students and build a custom avatar for them to use at each station and upload to sites like Flickr and Facebook.

Since we were already doing some work for the truck we said we would have a look at doing this bit as well; after about half an hour of playing that afternoon it looked like we should be able to put something together using a commodity webcam, Linux and some existing libraries.

First Pass

The first approach was to see if we could do what we needed in a reasonably simple web page. Using a streaming video server, the HTML5 <video> tag and a bit of JavaScript we had a working prototype up and running very quickly.

The only problem was the lag introduced by the video encoding and the browser buffering: most of the time it was about 12 seconds, and with a bit of tinkering we got it down to 5 seconds, but this was still far too long to ask somebody to hold a pose just to grab a single image.

Second Pass

So after getting so close with the last attempt I decided to have a look at a native solution that should remove the lag. I had a bit of a look round to see what options were available and came across the following:

  • Video4Linux
    This gives direct access to the video hardware connected to Linux.
  • GStreamer
    This is a framework that lets you build pipelines for interacting with media sources, both audio and video, from files as well as hardware devices.

As powerful as the Video4Linux API is, it seemed a bit too heavyweight for what I was looking for. While looking into GStreamer I found it had a pre-built element called CameraBin that would do pretty much exactly what I wanted.

With a little bit of Python it is possible to use the CameraBin element to show a viewfinder and then write an image on request.

<code>
	# camerabin handles the viewfinder and still-image capture
	self.camerabin = gst.element_factory_make("camerabin", "cam")
	# render the viewfinder with an X video sink
	self.sink = gst.element_factory_make("xvimagesink", "sink")
	# use the first webcam as the video source
	src = gst.element_factory_make("v4l2src", "src")
	src.set_property("device", "/dev/video0")
	self.camerabin.set_property("viewfinder-sink", self.sink)
	self.camerabin.set_property("video-source", src)
	# enable flicker reduction
	self.camerabin.set_property("flicker-mode", 1)
	# call back when a captured image has been written to disk
	self.camerabin.connect("image-done", self.image_captured)
</code>

Where self.sink renders the viewfinder into a Glade drawing area and self.image_captured is a callback to execute when the image has been captured. To set the filename to save the image to and start the viewfinder, run the following code.

<code>
	# file the next capture will be written to
	self.camerabin.set_property("filename", "foo.jpg")
	# start the pipeline, which brings up the viewfinder
	self.camerabin.set_state(gst.STATE_PLAYING)
</code>

To take a photo, emit the "capture-start" signal: self.camerabin.emit("capture-start").
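Putting those pieces together, a capture routine and the image-done handler end up looking something like this (a sketch; the handler here just prints the filename):

<code>
	def take_photo(self, filename):
		# tell camerabin where to write the next capture
		self.camerabin.set_property("filename", filename)
		# fire the capture; "image-done" is emitted once the file is written
		self.camerabin.emit("capture-start")

	def image_captured(self, camerabin, filename):
		# callback wired up with connect("image-done", ...) above
		print("Captured " + filename)
		return True
</code>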

The plan was for the avatar to be a silhouette on a supplied background; to make generating the silhouette easier the students will be standing in front of a green screen.

Green screen

The Python Imaging Library makes it easy to manipulate the captured image, extract the silhouette and then build up the final image from the background, the silhouette and finally the text.

<code>
	from PIL import Image, ImageFilter

	image = Image.open(path)
	# crop down to the area covered by the green screen
	image2 = image.crop((150,80,460,450))
	image3 = image2.convert("RGBA")
	pixels = image3.load()
	size = image2.size
	for y in range(size[1]):
		for x in range(size[0]):
			pixel = pixels[x,y]
			# green pixels become transparent, everything else
			# becomes the black silhouette
			if (pixel[1] > 135 and pixel[0] < 142 and pixel[2] < 152):
				pixels[x,y] = (0, 255, 0, 0)
			else:
				pixels[x,y] = (0, 0, 0, 255)

	# smooth out the speckle left by the thresholding
	image4 = image3.filter(ImageFilter.ModeFilter(7))
	image5 = image4.resize((465,555))
	# paste the silhouette onto the background, using its alpha as the mask
	background = Image.open('facebook-background.jpg')
	background.paste(image5,(432,173,897,728),image5)
	# overlay the caption text
	text = Image.open('facebook-text.png')
	background.paste(text,(0,0),text)
</code>
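The snippet stops at the compositing step; writing the finished avatar out for upload is just one more call, something like:

<code>
	# save the composited avatar ready for upload (filename is illustrative)
	background.save("avatar.jpg", "JPEG")
</code>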

The final result, shown on one of the plasma screens in the truck:

Silhouette wall

As well as building the Silhouette wall, ETS has provided a couple of other items to go on the truck:

  1. See It Sign It

    This application is a text-to-sign-language translation engine that uses 3D avatars to sign. There will be two laptops on the truck that can be used to have a signing conversation. There is a short video demonstration of the system hooked up to a voice-to-text system here: http://www.youtube.com/watch?v=RarMKnjqzZU

  2. Smarter Office

    This is an evolution of the Smarter Home section in the ETS demo lab at Hursley. It uses a Current Cost power meter to monitor the energy used and feeds this to an Ambient Orb to visualise the information better. It also has a watch that can recognise different gestures, which in turn can be used to turn things like the lamp and desk fan on and off; the power used by these is reflected in the change in colour of the orb.

For details of where the truck will be visiting over the year, please see the tour's Facebook page in the resources.

Resources