Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all. After poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So started the deep dive into i2c devices and Device Tree. At first it all seemed so easy, just a short little overlay to set the device’s address and to set a GPIO pin to act as an interrupt, e.g. something like this:

/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard-wired i2c device attached to a permanent system i2c bus, and this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.
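One wrinkle with a USB i2c adapter is that its bus number is assigned dynamically, so it can change depending on what else is plugged in. As a rough sketch (the /sys/class/i2c-adapter path and the “i2c-tiny-usb” name substring are assumptions worth checking against i2cdetect -l on your own system), the right bus can be found from Python like this:

import glob

# Find the bus number of the i2c_tiny_usb adapter by scanning the
# registered i2c adapters in sysfs. The path and the name substring are
# assumptions - verify them with `i2cdetect -l` on the target system.
def find_tiny_usb_bus():
  for path in glob.glob("/sys/class/i2c-adapter/i2c-*/name"):
    with open(path) as f:
      name = f.read().strip()
    if "i2c-tiny-usb" in name:
      # path looks like /sys/class/i2c-adapter/i2c-3/name
      return int(path.split("/")[-2].split("-")[1])
  return None

print(find_tiny_usb_bus())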

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.
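For what it’s worth, the kernel does let you instantiate an i2c device at runtime from userspace by writing its driver name and address to the bus’s new_device file, with no Device Tree entry needed. The catch is that this only passes the name and address, not properties like the interrupt GPIO, which is exactly the configuration I need to get to the driver. A minimal sketch, with the bus number and the “ft5406” driver name as placeholders:

# Instantiate an i2c device at runtime via sysfs, no Device Tree required.
# The bus number (3) and driver name ("ft5406") are placeholders - use the
# values for your adapter and touchscreen controller.
with open("/sys/bus/i2c/devices/i2c-3/new_device", "w") as f:
  f.write("ft5406 0x38\n")
# Only the name and address get passed; there is no way to hand the driver
# extra properties (like interrupt-gpios) through this interface.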

I tried all kinds of different overlays, but no joy. The Raspberry Pi does already have a Device Tree entry for one USB device: the onboard Ethernet is actually a permanently wired USB device, so it gets an entry in the Device Tree. I tried copying this pattern, adding an entry for the i2c_tiny_usb device and then the i2c device beneath it, but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the i2c-tiny-usb GitHub page that hopefully somebody will eventually answer.

Userspace

Having wasted a week and got nowhere, this morning I decided to take a different approach (mainly for the sake of my sanity). This is a basic i2c device with a single GPIO pin that acts as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

# Touchscreen interrupt line on GPIO 27 (BCM numbering)
GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Bus 3 is the i2c_tiny_usb adapter
bus = smbus.SMBus(3)

def callback(c):
  # Read 2 bytes (x, y) from the touchscreen at address 0x38
  ev = bus.read_i2c_block_data(0x38, 0x12, 2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))

# Run the callback each time the interrupt line is pulled low
GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()

This is a good start, but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module, which exposes an API especially for userspace input devices, and there is the python-uinput module to go with it.

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

# Touchscreen interrupt line on GPIO 27 (BCM numbering)
GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Bus 3 is the i2c_tiny_usb adapter
bus = smbus.SMBus(3)

# Virtual input device; every event we emit must be declared here
device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X,
  uinput.ABS_Y
])

def callback(c):
  # Read touch state, x and y from the touchscreen at address 0x38
  ev = bus.read_i2c_block_data(0x38, 0x12, 3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
  else:
    device.emit(uinput.BTN_TOUCH, 0)

# Run the callback each time the interrupt line is pulled low
GPIO.add_event_detect(27, GPIO.FALLING, callback=callback)
signal.pause()

This injects touchscreen coordinates directly into the /dev/input system. The syn=False on the X axis event tells the uinput code to batch the value up with the Y axis value so they show up as a single atomic update.
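python-uinput also lets you declare a range for each absolute axis by adding a (min, max, fuzz, flat) tuple to the capability. Since x and y are single bytes here, a 0–255 range seems a reasonable assumption, but it should really match whatever the controller reports. A sketch:

import uinput

# Declare the absolute axes with an explicit (min, max, fuzz, flat) range.
# The 0-255 range is an assumption based on the single-byte x/y values
# read above; adjust to match the real controller.
device = uinput.Device([
  uinput.BTN_TOUCH,
  uinput.ABS_X + (0, 255, 0, 0),
  uinput.ABS_Y + (0, 255, 0, 0)
])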

This is a bit of a hack, but it should be more than good enough for what I need it for. I’m still tempted to keep chipping away at the Device Tree stuff though, as I’m sure it will come in handy someday.

Sublime Text 3 UML tool

I’ve been using Sublime Text as my editor of choice for about a year. Every now and again I install another little plugin to help out.

For my current project we’ve been using plantuml to build sequence diagrams and UML models for the data structures.

I had a bit of a search round for a suitable plugin that would use plantuml to generate the images on the fly from within Sublime. I found this project on github that looked perfect.

It was working fine, but it had 2 small niggles. The first was that it called out to Eye of Gnome to actually display the images, which was a bit of a pain. The second was the random chars it appends to the end of the file names for the images it generates.

Sublime with EoG

There was an open issue that seemed to describe the first problem and a comment from the original author stating that he didn’t have time but would welcome contributions. I thought I’d have a look at how hard it would be to add what was needed.

It turned out to be very simple:

from .base import BaseViewer
import sublime

class Sublime3Viewer(BaseViewer):
	def load(self):
		# Only usable when running under Sublime Text 3
		if not sublime.version().startswith('3'):
			raise Exception("Not Sublime 3!")

	def view(self,diagram_files):
		# Open each generated diagram in a Sublime tab rather than an
		# external viewer
		for diagram_file in diagram_files:
			sublime.active_window().open_file(diagram_file.name)

I’ve submitted a pull request or you can pull from my fork.

Side By Side

I’ve also tweaked the code to remove the random chars in the name for my version but I’ve not checked that in yet as I don’t think the upstream author will accept that change at the moment.

EDIT:
In the meantime you can install my version direct from my git repo with the following steps

  1. Open the Command Palette, Tools -> Command Palette
  2. Select Package Control: Add Repository
  3. In the text box add the git repo address and hit return – https://github.com/hardillb/sublime_diagram_plugin
  4. Open the Command Palette again
  5. Select Package Control: Install Package
  6. Type in “sublime_diagram_plugin”

Linux Photo-me-booth

I’ve spent the last few days on site at a bespoke coach builder’s helping to fit out a truck. This all came about after a chat over lunch at the end of last year, when Kevin mentioned a project he was involved in to put some of our lab demos into a truck that will be touring round a number of universities as part of the Smarter Planet initiative.

Side 1

As well as building portable versions of some of the demos, there was a new bit. The plan was to have a sort of photo booth to take pictures of the visiting students and build a custom avatar for them to use at each station and upload to sites like Flickr and Facebook.

Since we were already doing some work for the truck we said we would have a look at doing this bit as well. After about half an hour of playing that afternoon it looked like we should be able to put something together using a commodity webcam, Linux and some existing libraries.

First Pass

The first approach was to see if we could do what we needed in a reasonably simple web page. Using a streaming video server, the HTML 5 <video> tag and a bit of javascript we had a working prototype up and running very quickly.

The only problem was the lag introduced by the video encoding and the browser buffering. Most of the time it was about 12 seconds; with a bit of tinkering we got it down to 5 seconds, but this was still far too long to ask somebody to pose for just to grab a single image.

Second Pass

So after getting so close with the last attempt I decided to have a look at a native solution that should remove the lag. I had a bit of a look round to see what options were available and I came across the following:

  • Video4Linux
    This gives direct access to the video hardware connected to Linux
  • GStreamer
    This is a framework that allows you to build flows for interacting with media sources. This can be audio or video and from files as well as hardware devices.

As powerful as the Video4Linux API is, it seemed a bit too heavyweight for what I was looking for. While looking into the GStreamer code I found it had a pre-built element that would do pretty much exactly what I wanted, called CameraBin.

With a little bit of Python it is possible to use the CameraBin element to show a viewfinder and then write an image on request.

	# Build the camerabin element, an xvimagesink for the viewfinder and a
	# v4l2src pointing at the webcam
	self.camerabin = gst.element_factory_make("camerabin", "cam")
	self.sink = gst.element_factory_make("xvimagesink", "sink")
	src = gst.element_factory_make("v4l2src","src")
	src.set_property("device","/dev/video0")
	# Wire the viewfinder and video source into camerabin
	self.camerabin.set_property("viewfinder-sink", self.sink)
	self.camerabin.set_property("video-source", src)
	self.camerabin.set_property("flicker-mode", 1)
	# Get told when a captured image has been written to disk
	self.camerabin.connect("image-done",self.image_captured)

Where self.sink is a Glade drawing area to use as a viewfinder and self.image_captured is a callback to execute when the image has been captured. To set the filename to save the image to and start the viewfinder, run the following code:

	self.camerabin.set_property("filename", "foo.jpg")
	self.camerabin.set_state(gst.STATE_PLAYING)

To take a photo, emit the capture-start signal: self.camerabin.emit("capture-start").
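When the capture completes, camerabin fires the image-done signal connected earlier. As far as I can tell the gst 0.10 signal hands the callback the saved filename, so the handler might look something like this (treat the exact signature and return value as assumptions to check against the camerabin docs):

	def image_captured(self, camerabin, filename):
		# Called by camerabin once the image file has been written;
		# "filename" should be the path set via the "filename" property.
		print("captured %s" % filename)
		# camerabin expects a boolean back; True appears to mean
		# "carry on and be ready for the next capture" (assumption)
		return True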

The plan was for the avatar to be a silhouette on a supplied background. To make generating the silhouette easier the students will be standing in front of a green screen.

Green screen

The Python Imaging Library makes it easy to manipulate the captured image to extract the silhouette and then build up the final image from the background, the silhouette and finally the text.

	from PIL import Image, ImageFilter

	# Crop the captured frame down to the subject and add an alpha channel
	image = Image.open(path)
	image2 = image.crop((150,80,460,450))
	image3 = image2.convert("RGBA")
	pixels = image3.load()
	size = image2.size
	# Anything "green enough" becomes transparent, everything else black
	for y in range(size[1]):
		for x in range(size[0]):
			pixel = pixels[x,y]
			if (pixel[1] > 135 and pixel[0] < 142 and pixel[2] < 152):
				pixels[x,y] = (0, 255, 0, 0)
			else:
				pixels[x,y] = (0, 0, 0, 255)

	# Smooth the edges, scale to fit and composite onto the background
	image4 = image3.filter(ImageFilter.ModeFilter(7))
	image5 = image4.resize((465,555))
	background = Image.open('facebook-background.jpg')
	background.paste(image5,(432,173,897,728),image5)
	text = Image.open('facebook-text.png')
	background.paste(text,(0,0),text)

The final result shown on one of the plasma screens in the truck.

Silhouette wall

As well as building the Silhouette wall, ETS has provided a couple of other items to go on the truck:

  1. See It Sign It

    This application is a text to sign language translation engine that uses 3D avatars to sign. There will be 2 laptops on the truck that can be used to have a signing conversation. There is a short video demonstration of the system hooked up to a voice to text system here: http://www.youtube.com/watch?v=RarMKnjqzZU

  2. Smarter Office

    This is an evolution of the Smarter Home section in the ETS Demo lab at Hursley. It uses a Current Cost power meter to monitor the energy used and feeds this to an Ambient Orb to better visualise the information. It also has a watch that can recognise different gestures, which in turn can be used to turn things like the lamp and desk fan on and off; the amount of power used by these is reflected in the change in colour of the orb.

For details of where the truck will be visiting over the year, please visit the tour’s Facebook page in the resources.

Resources