Tag Archives: IBM

Node-RED at Zurich developerWorks days 2013

Node-RED icon
This year I was lucky enough to be asked back to speak at developerWorks days 2013 in Zurich, after giving two presentations last year.

This year I was presenting Node-RED, a great piece of work by Nick O’Leary and Dave Conway-Jones.

Node-RED is a lightweight, edge-of-the-network event-processing engine. The main aim is to make it easy to bridge a wide variety of input and output sources and to allow logic to be applied to the events/messages that flow between them.

For my presentation I wanted to use Node-RED as much as possible, so I set about seeing if I could use it to host and control my slide deck. I started out with an impress.js presentation that Nick had written. Impress.js is a JavaScript framework for building HTML5 presentations, similar to Prezi, and it also exposes an API to drive the slide transitions from an external source. Combining this API with MQTT over WebSockets allowed me to drive things remotely.

I added the following bit of code to the end of presentation.html:

<script type="text/javascript" src="js/mqttws31.js"></script>
<script src="js/impress.js"></script>
<script src="js/impressConsole.js"></script>
<script>
    var imp = impress();
    imp.init();

    var client;
    var slide;

    function setupMQTT() {
        client = new Messaging.Client(document.location.hostname, 8181, "presentation");
        client.onConnectionLost = onConnectionLost;
        client.onMessageArrived = onMessageArrived;
        client.connect({onSuccess: onConnect});
    }

    function onConnectionLost(response) {
        // Stop publishing slide changes and try to reconnect shortly
        setTimeout(setupMQTT, 500);
        document.removeEventListener('impress:stepenter', sendStepEnter);
    }

    function onMessageArrived(message) {
        if (message.payloadString === "next") {
            imp.next();
        } else if (message.payloadString === "prev") {
            imp.prev();
        } else {
            console.log(slide);
            if (slide === "demotimeagain") {
                // Update the slide with the details of the received tweet
                console.log(message.payloadString);
                var obb = JSON.parse(message.payloadString);
                console.log(obb);
                document.getElementById('injected-twitter-screen').innerHTML = obb.sender.screen_name;
                document.getElementById('injected-twitter-id').innerHTML = obb.sender.name;
                document.getElementById('injected-twitter-tweet').innerHTML = obb.body;
            } else {
                // Anything else is treated as the id of a slide to jump to
                imp.goto(message.payloadString);
            }
        }
    }

    function onConnect() {
        client.subscribe("pres");
        document.addEventListener('impress:stepenter', sendStepEnter);
    }

    function sendStepEnter(step) {
        // Publish the id of each slide as it is displayed
        console.log(step.target.id);
        slide = step.target.id;
        var message = new Messaging.Message(step.target.id);
        message.destinationName = "slide";
        client.send(message);
    }

    setupMQTT();

</script>

This first sets up impress.js, then runs through some basic boilerplate to create an MQTT client connection over WebSockets. The onConnect function subscribes the client to the ‘pres’ topic, which is used to receive ‘next’ and ‘prev’ messages to advance the slides. It also adds an event listener that receives an event from impress.js each time a new slide is displayed.

The onMessageArrived function handles the ‘next’ and ‘prev’ messages, plus a couple of special cases: populating a slide with some data following a demonstration, and jumping directly to a named slide.

A basic Node-RED flow to make this all work can be found here.
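In outline, the flow only needs a couple of inject nodes publishing ‘next’ and ‘prev’ to the pres topic, plus an MQTT input node subscribed to slide to track where the deck is. A hand-written sketch of the shape of such a flow export (node ids, wiring and broker details are illustrative, not the original flow):

    [
        {"id":"next","type":"inject","payload":"next","topic":"pres","wires":[["out"]]},
        {"id":"prev","type":"inject","payload":"prev","topic":"pres","wires":[["out"]]},
        {"id":"out","type":"mqtt out","broker":"local-broker","wires":[]},
        {"id":"in","type":"mqtt in","topic":"slide","broker":"local-broker","wires":[["dbg"]]},
        {"id":"dbg","type":"debug","wires":[]}
    ]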

In order to get Node-RED to serve the presentation HTML and the required JavaScript libraries, you used to have to embed Node-RED into a custom application, but this requirement was removed by a new feature in Node-RED 0.4.0: a configuration setting called httpStatic, which lets you specify a directory holding a collection of static content. When you use this setting you also need to set httpRoot to move the Node-RED GUI to a different root directory.
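For example, assuming the presentation files live in /home/user/presentation, the relevant part of settings.js might look something like this (the paths and the ‘/red’ root are illustrative):

    // settings.js (snippet) - serve the presentation as static content
    module.exports = {
        // directory of static files to serve at the web root
        httpStatic: '/home/user/presentation',
        // move the Node-RED GUI out of the way so the two don't clash
        httpRoot: '/red'
    };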

The basic version of the presentation is embedded here:

If you click on the slide you can then navigate back and forth using the arrow keys.

You can access it full screen here.

developerWorks Days Zurich 2012

This week I had a day out of the office to go to Zurich to talk at this year’s IBM developerWorks Days. I had two sessions back to back in the mobile stream: the first an introduction to Android development and the second on MQTT.

The slots were only 35 minutes long (well, 45 minutes, but we had to leave 5 minutes at each end to let people move round) so there was a limit to how much detail I could go into. With this in mind I decided the best way to give people an introduction to Android development in that amount of time was to quickly walk through writing a reasonably simple application. The application had to be at least somewhat practical, but also very simple, so after a little bit of thinking I settled on an app to download the latest image from the web comic XKCD. There are a number of apps on Google Play that already do this (and a lot better), but it does show a little Activity GUI design. I got through about 95% of the app live on stage and only had to copy & paste the details of the onPostExecute method, which clears the progress dialog and updates the image, in the last minute to get it to the point where I could run it in the emulator.

Here are the slides for this session

And here is the Eclipse project for the Application I created live on stage:
http://www.hardill.me.uk/XKCD-demo-android-app.zip

The MQTT pitch was a little easier to put together; there is loads of great content on MQTT.org to use as a source, and of course I remembered to include the section on the MQTT-enabled mousetraps and twittering ferries from Andy Stanford-Clark.

Here are the slides for the MQTT session:

For the demo I used the JavaScript d3 topic tree viewer I blogged about last week, and my Raspberry Pi running a Mosquitto broker and a little script to publish the core temperature, load and uptime values. The broker was also bridged to my home broker to show the feed from my weather centre and some other sensors.
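That script boils down to reading a few files under /proc and /sys and publishing the values; a minimal sketch of the idea, using the modern paho-mqtt Python client (the topic names and publish interval are illustrative, not the originals):

    #!/usr/bin/env python
    # Sketch: publish Raspberry Pi core temperature, load and uptime over MQTT
    import os
    import time
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("localhost", 1883)
    client.loop_start()

    while True:
        # Core temperature is exposed in millidegrees Celsius
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            temperature = int(f.read().strip()) / 1000.0
        load = os.getloadavg()[0]  # 1-minute load average
        with open("/proc/uptime") as f:
            uptime = float(f.read().split()[0])  # seconds since boot

        client.publish("pi/temperature", str(temperature))
        client.publish("pi/load", str(load))
        client.publish("pi/uptime", str(uptime))
        time.sleep(5)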

Linux Photo-me-booth

I’ve spent the last few days on site at a bespoke coach builder’s, helping to fit out a truck. This all came about after a chat over lunch at the end of last year, when Kevin mentioned a project he was involved in to put some of our lab demos into a truck that will be touring round a number of universities as part of the Smarter Planet initiative.

Side 1

As well as building portable versions of some demos, there was a new bit. The plan was to have a sort of photo booth to take pictures of the visiting students and build a custom avatar for them to use at each station and upload to sites like Flickr and Facebook.

Since we were already doing some work for the truck we said we would have a look at doing this bit as well; after about half an hour of playing that afternoon it looked like we should be able to put something together using a commodity webcam, Linux and some existing libraries.

First Pass

The first approach was to see if we could do what we needed in a reasonably simple web page. Using a streaming video server, the HTML5 <video> tag and a bit of JavaScript, we had a working prototype up and running very quickly.

The only problem was the lag introduced by the video encoding and the browser buffering: most of the time it was about 12 seconds, and with a bit of tinkering we got it down to 5 seconds, but this was still far too long to ask somebody to pose for in order to grab a single image.

Second Pass

So after getting so close with the last attempt, I decided to have a look at a native solution that should remove the lag. I had a bit of a look round to see what options were available and came across the following:

  • Video4Linux
    This gives direct access to the video hardware connected to Linux.
  • GStreamer
    This is a framework that allows you to build pipelines for interacting with media sources. These can handle audio or video, from files as well as hardware devices.

As powerful as the Video4Linux API is, it seemed a bit too heavyweight for what I was looking for. While looking into the GStreamer code I found it had a pre-built element that would do pretty much exactly what I wanted, called camerabin.

With a little bit of Python it is possible to use the camerabin element to show a viewfinder and then write an image on request.

<code>
	# Assumes pygst 0.10 has already been imported: import gst
	self.camerabin = gst.element_factory_make("camerabin", "cam")
	self.sink = gst.element_factory_make("xvimagesink", "sink")
	src = gst.element_factory_make("v4l2src", "src")
	src.set_property("device", "/dev/video0")
	# Render the viewfinder to our own sink, capture from the webcam
	self.camerabin.set_property("viewfinder-sink", self.sink)
	self.camerabin.set_property("video-source", src)
	self.camerabin.set_property("flicker-mode", 1)
	# Fires once a requested image has been written to disk
	self.camerabin.connect("image-done", self.image_captured)
</code>

Where self.sink is rendered into a Glade drawing area to use as a viewfinder and self.image_captured is a callback to execute when the image has been captured. To set the filename to save the image to and start the viewfinder, run the following code.

<code>
	self.camerabin.set_property("filename", "foo.jpg")
	self.camerabin.set_state(gst.STATE_PLAYING)
</code>

To take a photo, emit the capture-start signal on the camerabin element: self.camerabin.emit("capture-start").
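Pulling those pieces together, a minimal capture helper might look like the following sketch (take_photo is my name for it; image_captured matches the callback wired to the image-done signal above):

<code>
	def take_photo(self, filename):
		# Set the target file, then ask camerabin to capture a frame
		self.camerabin.set_property("filename", filename)
		self.camerabin.emit("capture-start")

	def image_captured(self, camerabin, filename):
		# Fired once the captured image has been written to disk
		print("Saved image to %s" % filename)
</code>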

The plan was for the avatar to be a silhouette on a supplied background; to make generating the silhouette easier, the students would be standing in front of a green screen.

Green screen

The Python Imaging Library makes it easy to manipulate the captured image and extract the silhouette, and then build up the final image from the background, the silhouette and finally the text.

<code>
	# Assumes: from PIL import Image, ImageFilter
	image = Image.open(path)
	image2 = image.crop((150, 80, 460, 450))
	image3 = image2.convert("RGBA")
	pixels = image3.load()
	size = image2.size
	for y in range(size[1]):
		for x in range(size[0]):
			pixel = pixels[x, y]
			# Anything sufficiently green is the screen: make it transparent;
			# everything else becomes the solid black silhouette
			if pixel[1] > 135 and pixel[0] < 142 and pixel[2] < 152:
				pixels[x, y] = (0, 255, 0, 0)
			else:
				pixels[x, y] = (0, 0, 0, 255)

	# Smooth the mask edges, scale it, then composite onto the background
	image4 = image3.filter(ImageFilter.ModeFilter(7))
	image5 = image4.resize((465, 555))
	background = Image.open('facebook-background.jpg')
	background.paste(image5, (432, 173, 897, 728), image5)
	text = Image.open('facebook-text.png')
	background.paste(text, (0, 0), text)
</code>
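The snippet stops short of actually writing the composited image out; all that is left is a save call (the filename here is illustrative) before uploading:

<code>
	# Write the finished avatar out ready for upload
	background.save("avatar.jpg", "JPEG")
</code>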

The final result, shown on one of the plasma screens in the truck.

Silhouette wall

As well as building the Silhouette wall, ETS has provided a couple of other items to go on the truck:

  1. See It Sign It

    This application is a text-to-sign-language translation engine that uses 3D avatars to sign. There will be two laptops on the truck that can be used to have a signing conversation. There is a short video demonstration of the system hooked up to a voice-to-text system here: http://www.youtube.com/watch?v=RarMKnjqzZU

  2. Smarter Office

    This is an evolution of the Smarter Home section in the ETS demo lab at Hursley. It uses a Current Cost power meter to monitor the energy used and feeds this to an Ambient Orb to visualise the information better. It also has a watch that can recognise different gestures, which in turn can be used to turn things like the lamp and desk fan on and off; the amount of power used by these is reflected in the change in colour of the orb.

For details of where the truck will be visiting over the year, please visit the tour’s Facebook page in the resources.

Resources

Eightbar

I got added as an author for the Eightbar blog today.

Eightbar is a group of techie/creative people working in and around IBM’s Hursley Park Lab in the UK. We have regular technical community meetings, well more like a cup of tea and a chat really, about all kinds of cool stuff. One of the things we talked about is that although there are lots of cool people and projects going on in Hursley, we never really let anyone know about them. So, we decided to try and record some of the stuff that goes on here in an unofficial blog: eightbar.

The plan is to give a bit of a UK flavour to it all, but talk about the technology coming out of the lab, things people are playing with, but also some of the fun side. Hursley’s a very unusual place (compared to most technology sites), so we want to get that across. Anyway, hopefully lots of different people who work in and around Hursley will contribute.

The name Eightbar comes from the IBM logo which uses letters made up of 8 horizontal bars.

I posted my first article today about Andy Piper’s new AR.Drone that he brought to the office on Tuesday.