New Daily Driver

I got my first Dell XPS13 back in 2016 and a second one in 2020. I really like them, but with the new job I’ve been using one for both personal and work use. So I decided to grab another machine to help keep things separate; it’s easier to switch off at the end of the day if I can leave the “work” machine in the office.

Lenovo X1 Carbon

I’ve gone for a Lenovo X1 Carbon. It’s a machine I looked at when I got the second XPS13, as it was another machine that could be ordered with Linux pre-installed. Lenovo are now offering both Ubuntu and Fedora as options. In my case I knew I wouldn’t have any problems installing the OS myself, so I ordered a bare machine and installed Fedora 35. Because I got to do a clean install I could also enable LUKS out of the box to encrypt the drive.

Also, running both a deb-based and an rpm-based distro will help me stay current with both and makes testing a little easier without having to run VMs all the time.

I used to run Fedora when I was at IBM and even worked with the internal team that packaged some of the tools we needed on a day-to-day basis (like Lotus Notes and the IBM JVM). I decided it would be good to give it a try again, especially as Fedora releases tend to move a little quicker than Ubuntu LTS and are more aggressive at picking up new versions.

The main hardware differences to the XPS13 are double the RAM at 32GB and double the storage with a 1TB SSD. The screen is the same resolution but slightly larger, and without a touch screen (not something I make a lot of use of). It also comes with the Lenovo trademark TrackPoint as well as a trackpad. The CPU is still a 4 core (with Hyper-Threading) but the base clock speeds are better (Dell, Lenovo).

The only niggle I’ve found so far is that the USB-C port layout doesn’t work as well as the XPS13’s on my desk. The XPS13 has USB-C ports on both sides of the case, whereas the X1 Carbon only has 2 on the left-hand edge. But it does have a full-sized HDMI port and 2 USB 3.1 Type-A ports, which means I don’t need the little USB-C to USB-A hub I’d been using. This also makes plugging in an SD card reader a little easier, as the Lenovo doesn’t have one built in.

The keyboard feels a little nicer (I’ve just got to get used to the Ctrl and Fn keys being swapped, though there is a BIOS setting to flip them).

Determining which Linux Distro you are on to install NodeJS

I’ve recently been working on an install script for a project. As part of the install I need to check if there is a suitable version of NodeJS installed and if not install one.

The problem is that there are 2 main ways in which NodeJS can be installed using the default package management systems for different Linux Distributions. So I needed a way to work out which distro the script was running on.

The first step was to work out if it is actually Linux or if it’s macOS. Since I’m using bash as the interpreter for the script, I can check the OSTYPE environment variable.

case "$OSTYPE" in
  darwin*)
    # macOS
    MYOS=darwin
  ;;
  linux*)
    # pull the distro ID out of /etc/os-release, stripping any quotes
    MYOS=$(grep "^ID=" /etc/os-release | cut -d = -f 2 | tr -d '"')
  ;;
  *)
    # unknown OS
  ;;
esac

Once we are sure we are on Linux we can check the /etc/os-release file and cut out the ID= entry. The tr is there to strip the quotes off (Amazon Linux, I’m looking at you…)
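For example, the ID line is unquoted on Fedora but quoted on Amazon Linux, which is why the tr is needed (output below is illustrative):

# Fedora
$ grep '^ID=' /etc/os-release
ID=fedora

# Amazon Linux 2
$ grep '^ID=' /etc/os-release
ID="amzn"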

MYOS then contains one of the following:

  • debian
  • ubuntu
  • raspbian
  • fedora
  • rhel
  • centos
  • amzn

And using this we can then decide how to install NodeJS:

if [[ "$MYOS" == "debian" ]] || [[ "$MYOS" == "ubuntu" ]] || [[ "$MYOS" == "raspbian" ]]; then
      curl -sSL "https://deb.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo apt-get install -y nodejs build-essential
elif [[ "$MYOS" == "fedora" ]]; then
      sudo dnf module reset -y nodejs
      sudo dnf module install -y "nodejs:$MIN_NODEJS/default"
      sudo dnf group install -y "C Development Tools and Libraries"
elif [[ "$MYOS" == "rhel" ]] || [[ "$MYOS" == "centos" ]] || [[ "$MYOS" == "amzn" ]]; then
      curl -fsSL "https://rpm.nodesource.com/setup_$MIN_NODEJS.x" | sudo -E bash -
      sudo yum install -y nodejs
      sudo yum group install -y "Development Tools"
elif [[ "$MYOS" == "darwin" ]]; then
      echo "**************************************************************"
      echo "* On OSx you will need to manually install NodeJS            *"
      echo "* Please install the latest LTS release from:                *"
      echo "* https://nodejs.org/en/download/                            *"
      echo "**************************************************************"
      exit 1
fi

Now that’s out of the way, time to look at how to nicely set up a systemd service…

Adding a TPM to My Offline Certificate Authority

Back at the start of last year, I built an offline Certificate Authority based around a Pi Zero and an RTC module.

The idea was to run the CA on the Pi, which can only be accessed when it’s plugged into another machine via a USB cable. This means that the CA cert and private key are normally offline and only potentially accessible by an attacker while it’s plugged in.

For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.

TPM

A TPM, or Trusted Platform Module, is a dedicated CPU paired with some dedicated NVRAM. The CPU is capable of doing some pretty basic crypto functions and providing a good random number generator, while the NVRAM is used to store private keys.

TPM & RTC on a Raspberry Pi Zero

TPMs also have a feature called PCRs (Platform Configuration Registers) which can be used to validate the hardware and software stack used to boot the machine. This means you can use them to detect if the system has been tampered with at any point. This does require integration into the bootloader for the system.

You can set access policies for keys protected by the TPM to only allow access if the PCRs match a known pattern; some disk encryption systems, like LUKS on Linux and BitLocker on Windows1, can use this to automatically unlock the encrypted drive.
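As a rough illustration of what that looks like with the tpm2-tools used later in this post, you can seal a secret so the TPM will only release it while the PCRs hold known values. This is only a sketch (the exact option names vary between tpm2-tools releases, and as explained below it won’t actually work on the Pi):

# build a policy that requires PCRs 0,2,4 and 7 to hold their current values
tpm2_createpolicy --policy-pcr --pcr-list=sha256:0,2,4,7 --policy=pcr.policy

# seal a secret (e.g. a disk passphrase) under a primary key, attaching the policy
tpm2_createprimary --hierarchy=o --key-context=prim.ctx
tpm2_create --parent-context=prim.ctx --policy=pcr.policy --sealing-input=passphrase.txt --public=seal.pub --private=seal.priv

# the secret can then only be unsealed while the PCRs still match
tpm2_load --parent-context=prim.ctx --public=seal.pub --private=seal.priv --key-context=seal.ctx
tpm2_unseal --object-context=seal.ctx --auth=pcr:sha256:0,2,4,7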

You can get a TPM for the Raspberry Pi from a group called LetsTrust (available online here).

It mounts onto the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.

dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670
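With the overlay in place and after a reboot, a quick way to check the TPM is responding (assuming the tpm2-tools package is installed) is to ask it for some random bytes:

# pull 16 random bytes from the TPM's RNG and print them as hex
tpm2_getrandom 16 | xxd -p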

Since the Raspberry Pi Bootloader is not TPM aware the PCRs are not initialised in this situation, so we can’t use it to automatically unlock an encrypted volume.

Using the TPM with the CA

Even without the PCRs the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes the private key useless if anybody does manage to remotely log into the device and make a copy.

Of course, since it just pushes onto the Pi header, if anybody manages to get physical access they can just take the TPM and SD card, but as with all security mechanisms, once an attacker has physical access all bets are usually off.

There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, along with a utility called tpm2tss-genkey that can be used to create new keys, or an existing key can be imported.

Generating New Keys

You can generate a new CA certificate with the following commands

$ tpm2tss-genkey -a rsa -s 2048 ca.tss
$ openssl req -new -x509 -engine tpm2tss -key ca.tss  -keyform engine -out ca.crt

This certificate can now be used to sign CSRs

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Importing Keys

For an existing ca.key private key file.

$ tpm2_createprimary --hierarchy=o --hash-algorithm=sha256 --key-algorithm=rsa --key-context=primary_owner_key.ctx
$ HANDLE=$(tpm2_evictcontrol --hierarchy=o --object-context=primary_owner_key.ctx | cut -d ' ' -f 2 | head -n 1)
$ tpm2_import -C primary_owner_key.ctx -G rsa -i ca.key -u ca-pub.tpm -r ca.tpm
$ tpm2tss-genkey --public ca-pub.tpm --private ca.tpm --parent $HANDLE --password secret ca.tss

And we can then sign new CSRs the same way as with the generated key

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Once the key has been imported it’s important to remember to clean up the original key file (ca.key) so an attacker can’t just use that instead of the one protected by the TPM. An attacker now needs both the password for the key and the TPM device that was used to cloak it.
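Something like shred does the job here (with the usual caveat that overwriting in place isn’t guaranteed to help on flash or journalled filesystems):

# overwrite the original private key before removing it
shred -u ca.key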

Web Interface

At the moment the node-openssl-cert node that I’m using to drive the web interface to the CA doesn’t look to support passing in engine arguments, so I’m having to drive it all manually on the command line, but I’ll be looking at a way to add support to the library. I’ll try and generate a pull request when I get something working.


1 Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS13 also has one (it was an optional extra on the first version and not included in the Sputnik edition).

New Weapon of Choice

I’ve finally got round to getting myself a new personal laptop. My last one was a Lenovo U410 Ideapad I picked up back in 2012.

Lenovo U410 IdeaPad

I tried running Linux on it but it didn’t go too well; the screen size and resolution were way too low to do anything serious on it. It ended up with Windows 7 back on it, permanently on power because the battery is toast, and mainly being used to sync my Garmin watch to Strava.

Anyway it was well past time for something a little more seriously useful. I’d had my eye on the Dell XPS13 for a while and when they announced a new version last year it looked very interesting.

Dell have been shipping laptops installed with Linux for a while under a program called Project Sputnik. This project ensures that all the built in hardware is properly supported (in some cases swapping out components for ones known to work well with Linux). The first generation XPS13 was covered by Project Sputnik so I was eager to see if the 2nd generation would be as well.

It took a little while, but the 2015 model finally started to ship with Ubuntu installed at the end of 1Q 2016.

As well as comparing it to the U410, I’ve also compared the XPS13 to the other machine I use on a regular basis, my current work machine (a slightly long in the tooth Lenovo W530 ThinkPad). The table below shows some of the key stats:

         U410 IdeaPad    W530 ThinkPad     XPS13
Weight   2kg             2.6kg (+0.75kg)   1.3kg
CPU      i5-3317U        i7-3740QM         i7-6560U
Memory   6GB             16GB              16GB
Disk     1TB             512GB             512GB (SSD)
Screen   14″ 1366×768    15.6″ 1920×1080   13.3″ 3200×1800

The thing I like best is the weight; lugging the W530 around is a killer, so getting to leave it on the desk at the office a little more will be great.

Dell XPS13

As for actually running the machine, it’s been very smooth so far. It comes with Ubuntu 14.04 with a couple of Dell-specific tweaks/backports from upstream. I’m normally a Fedora user, so Ubuntu as my main machine may take a bit of getting used to, and 14.04 is a little old at the moment. 16.04 is due to ship soon so I look forward to updating to see how it fares. I’ve swapped the desktop to GNOME Shell instead of Unity, which is making things better, but I may still swap the whole thing for Fedora 24 when it ships to pick up something a lot closer to the bleeding edge.

One of the only things missing on the XPS13 is a (normal) video port. It does have a USB-C/Thunderbolt port which can support HDMI and DisplayPort, but the driver support for this on Linux is reported to still be a little brittle. While I wait for it to settle down a little I grabbed a Dell DA100 adapter. This little device plugs into one of the standard USB 3.0 ports and supplies HDMI, VGA, 100Mb Ethernet and a USB 2.0 socket. It is a DisplayLink device, but things seem to be a lot better than when I last tried to get a DisplayLink device to work. There is a new driver direct from the DisplayLink guys that seems to just work.

Flic.io Linux library

As I mentioned, I recently got my hands on a set of 4 flic.io buttons. I pretty much immediately paired one of them with my phone and started playing. It soon became obvious that, while fun, the use cases for a button paired to a phone were limited to a single-user environment and not what I had in mind.

What was needed was a way to hook the flic buttons up to something like a Raspberry Pi and Node-RED. While I was waiting for the buttons to arrive I was poking round the messages posted to the Indiegogo campaign, where one of the guys from Shortcut Labs mentioned that such a library was in the works. I reached out to their developer contact point asking about getting access to the library to build a Node-RED node around it, and said I was happy to help test any code they had. Just before Christmas I managed to get hold of an early beta release to have a play with.

From that I was able to spin up a npm module and a Node-RED node.

The Node-RED node will currently listen for any buttons that are paired with the computer and publish a message saying whether it was a single, double or long click.

Flic.io Node-RED node

I said I would sit on these nodes until the library shipped, but it appeared on GitHub yesterday, hence this post. The build includes binaries for Raspberry Pi, i386 and x86_64 and needs the very latest BlueZ packages (5.36+).

Both my nodes need a little bit of cleaning up and a decent README writing, once that is done I’ll push them to npm.

UPDATE:
Both nodes are now on npmjs.org:
https://www.npmjs.com/package/node-flic-buttons
https://www.npmjs.com/package/node-red-contrib-flic-buttons

Securing Node-RED

Node-RED added some new authentication/authorisation code in the 0.10 release that allows for a pluggable scheme. In this post I’m going to talk about how to use this to make Node-RED use an LDAP server to look up users.

HTTPS

First of all, to do this properly we will need to enable HTTPS to ensure the communication channel between the browser and Node-RED is properly protected. This is done by adding an https value to the settings.js file like this:

...
},
https: {
  key: fs.readFileSync('privkey.pem'),
  cert: fs.readFileSync('cert.pem')
},
...

You also need to un-comment the var fs = require('fs'); line at the top of settings.js.

You can generate the privkey.pem and cert.pem with the following commands in your node-red directory:

pi@raspberrypi ~/node-red $ openssl genrsa -out privkey.pem
Generating RSA private key, 1024 bit long modulus
.............................++++++
......................++++++
e is 65537 (0x10001)
pi@raspberrypi ~/node-red $ openssl req -new -x509 -key privkey.pem -out cert.pem -days 1095
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:GB
State or Province Name (full name) [Some-State]:Hampshire
Locality Name (eg, city) []:Eastleigh
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Node-RED
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:raspberrypi.local
Email Address []:

The important bit is the Common Name value; this needs to match either the name or IP address that you will use to access your Node-RED console. In my case I have avahi enabled so I can access my Pi using its host name raspberrypi with .local as the domain, but you may be more used to using an IP address like 192.168.1.12.

Since this is a self signed certificate your browser will reject it the first time you try to connect with a warning like this:

Chrome warning about unsafe cert.

This is because your certificate is not signed by one of the trusted certificate authorities. You can get past this error by clicking on Advanced then Proceed to raspberrypi.local (unsafe). With Chrome this error will be shown every time you access the page; you can avoid this by copying the cert.pem file to your client machine and importing it into Chrome:

  1. Open Chrome settings page chrome://settings
  2. Scroll to the bottom of page and click on the “+Show advanced settings” link
  3. Scroll to the HTTPS/SSL and click on “Manage certificates…”
  4. Select the Servers tab and select import
  5. Select the cert.pem you copied from your Raspberry Pi

Usernames and Passwords

In previous Node-RED releases you could set a single username and password for the admin interface, for any static content, or one that covered both. This was done by adding an object to the settings.js file containing the username and password. This was useful but could be a little limited. Since the 0.10 release there is now a pluggable authentication interface that also includes support for things like read-only access to the admin interface. Details of these updates can be found here.

To implement an authentication plugin you need to create a NodeJS module based on this skeleton:

var when = require("when");

module.exports = {
   type: "credentials",
   users: function(username){
      //returns a promise that checks the authorisation for a given user
      return when.promise(function(resolve) {
         if (username == 'foo') {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   authenticate: function(username, password) {
      //returns a promise that completes when the user has been authenticated
      return when.promise(function(resolve) {
         if (username == 'foo' && password == 'bar' ) {
            resolve({username: username, permissions: "*"});
         } else {
            resolve(null);
         }
      });
   },
   default: function() {
      // Resolve with the user object for the default user.
      // If no default user exists, resolve with null.
      return when.promise(function(resolve) {
         resolve(null);
      });
   }
};

This comprises 3 functions: one to authenticate a user against the backend, one to check the level of authorisation (used by Node-RED’s built-in OAuth mechanism once a user has been authenticated), and finally default, which matches unauthenticated users.

To make things a little easier, for my LDAP example I’m not going to implement different read-only/write levels of authorisation.

The source for the module can be found here or on npmjs here. The easiest way to install it is with:

npm install -g node-red-contrib-ldap-auth

Then edit the Node-RED settings.js to include the following:

adminAuth: require('node-red-contrib-ldap-auth').setup({
    uri:'ldap://url.to.server', 
    base: 'ou=group,o=company.com', 
    filterTemplate: 'mail={{username}}'
}),

The filterTemplate is a mustache template used to search the LDAP directory for the username; in this case the username is the mail address.
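To sanity-check the filter outside Node-RED you can run the equivalent query with the OpenLDAP ldapsearch tool; the server, base and mail address here are just placeholders matching the config above:

# does a user with this mail address exist under the configured base?
ldapsearch -x -H ldap://url.to.server -b 'ou=group,o=company.com' '(mail=user@company.com)' dn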

Once it’s all up and running Node-RED should present you with something that looks like this:

Node-RED asking for credentials

Emergency FTTC Router

On Monday I moved to a new broadband provider (A&A). The BT Openreach guy turned up and swapped over the face plate on my master socket, dropped off the FTTC modem, then went down to the green box in the street and flipped my connection over. It all would have been very uneventful except for the small problem that the new router I needed to link my kit up to the FTTC modem had not arrived.

This is because BT messed up the address for where they think my line is installed a few years ago and I’ve not been able to get them to fix it. A&A quickly sent me out a replacement router with next day delivery, but it still meant effectively 2 days without any access at home.

The router talks to the FTTC modem using a protocol called PPPoE over normal Ethernet. There is a Linux package called rp-pppoe which provides the required support. So to quickly test that the install was working properly I installed this on my laptop and plugged it directly into the FTTC modem. Things looked really good, but it did mean I was only able to get one device online and I was tied to one end of the sofa by the Ethernet cable.

PPPoE is configured much the same way PPP used to be with dial-up modems; you just need to create a /etc/ppp/pppoe.conf file that looks a bit like this:

ETH=eth0
USER=xxxxxxx
DEMAND=no
DNSTYPE=SERVER
PEERDNS=yes
DEFAULTROUTE=yes
PING="."
CONNECT_POLL=2
CF_BASE=`basename $CONFIG`
PIDFILE="/var/run/$CF_BASE-adsl.pid"
LCP_INTERVAL=20
LCP_FAILURE=3
FIREWALL=NONE
CLAMPMSS=1412
SYNCHRONOUS=no
ACNAME=
SERVICENAME=
CONNECT_TIMEOUT=30
PPPOE_TIMEOUT=80

And include your username and password in /etc/ppp/chap-secrets. Once set up you just need to run pppoe-start as root:
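sudo pppoe-start

(rp-pppoe also ships pppoe-status and pppoe-stop for checking on and tearing down the session.)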

In order to get back to something like normal I needed something else. I had a Raspberry Pi in my bag along with a USB Ethernet adapter which looked like it should fit the bill.

I installed rp-pppoe and a DHCP server, then plugged one Ethernet adapter into the FTTC modem and the other into an Ethernet hub. Into the hub went an old WiFi access point and the rest of my usual machines. After configuring the Pi to masquerade IP traffic from the hub I had everything back up and running. The only downside is that speeds are limited to 10Mbps as that is as quick as the built-in Ethernet adapter on the Pi will go.
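The masquerading part boils down to enabling IP forwarding and adding a NAT rule on the PPP interface. Roughly something like the following, although the interface names (eth1 for the LAN side, ppp0 for the PPPoE session) will depend on your setup:

# let the Pi forward packets between the LAN and the PPPoE link
sudo sysctl -w net.ipv4.ip_forward=1

# NAT everything heading out over the PPP session
sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o ppp0 -j ACCEPT
sudo iptables -A FORWARD -i ppp0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT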

Playing with touchatag reader

Touchatag reader
We got a bunch of Touchatag NFC readers in the office just after Christmas and I said I would have a play with one to see what we could use them for. I had seen one before (in fact I borrowed one from Andy Piper for a little while) but didn’t get much further than trying to read my work id card.

To get it to work on Linux you need to use PCSC Lite and download the driver from ACS (the guys that actually make the readers that Touchatag use). You can grab the download here. Once you’ve built these (standard "configure; make; sudo make install") you will need to update one of the PCSC config files. There is a patch linked to from this page: http://idefix.net/~koos/rfid-touchatag.html

PCSC Lite is packaged for Fedora and Ubuntu, but I couldn’t find libnfc in the standard repos for Ubuntu so I ended up building it myself for one of the machines in the office. Again this was a simple "configure; make; sudo make install". With that done I was able to use the nfc-mfclassic tool from the libnfc samples to read data from a tag.

$ nfc-mfclassic r a test.out
Connected to NFC reader: ACS ACR122U 00 00 / ACR122U103 - PN532 v1.6 (0x07)
Found MIFARE Classic 4k card with UID: 9e2bcaea
Reading out 256 blocks |.....................................|

This gets me a file with all the data stored on a tag (assuming I know the right keys to access all the blocks), but most of the time just having the tag ID is enough to trigger an event. After a bit more poking around I found nfc-eventd which seemed to fit the bill perfectly.

This allows you to specify commands to be run when a tag is placed on the reader and when it is removed, and it will pass the tag ID to the command. Here is the config file I used:

nfc-eventd {

  # Run in background? Implies debug=false if true
  daemon = false;

  # show debug messages?
  debug = true;
	
  # polling time in seconds
  polling_time = 1;

  # expire time in seconds
  # default = 0 ( no expire )
  expire_time = 0;
	
  device my_touchatag {
    driver = "ACR122";
    name = "ACS ACR 38U-CCID 01 00";
  }

  # which device to use ? note: if this part is commented out, 
  # nfc-eventd will try to pick up device automagically...
  #nfc_device = "my_touchatag";

  # list of events and actions
  module nem_execute {
    # Tag inserted
    event tag_insert {
      # what to do if an action fail?
      # ignore  : continue to next action
      # return  : end action sequence
      # quit    : end program
      on_error = ignore ;
	
      # You can enter several, comma-separated action entries
      # they will be executed in turn
      action = "publish 'wmqtt://nfc@broker:1883/nfc/tagid?qos=0&retain=n&debug=0' $TAG_UID";
    }
	
    # Tag has been removed
    event tag_remove { 
      on_error = ignore;
      action = "(echo -n 'Tag (uid=$TAG_UID) removed at: ' && date) >> /tmp/nfc-eventd.log";
    }
	
    # Too much time card removed
    event expire_time { 
      on_error = ignore;
      action = "/bin/false";
    }
  }

}

Here I have used the publish command from the IBM IA93 C MQTT package to publish a message with the tag id to the nfc topic. You can do something similar with mosquitto_pub like this:

mosquitto_pub -h broker -i nfc -t nfc -m $TAG_UID

The plan is to use this now to allow the guys in ETS to log into various demos in the lab with their id badges.

Next on the list is to see if I can get the reader to respond to my Galaxy Nexus when it’s in tag mode.

Timelapse photography

As I mentioned at the end of my last post I have been playing around with gphoto2 to create some time lapse videos of the assembly of one of my Christmas gifts.

I have played with making time lapse video before; when I set up my MMS CCTV system with Motion I enabled a feature that creates a video from a sample image taken every 30 seconds. Motion uses a webcam (or other Video4Linux source), and all the webcams I had access to up at my folks’ are pretty low resolution, so this wasn’t what I was looking for.

I did have my Canon 350D and the little point and shoot Canon A520 that lives on the end of my work bag so I thought I’d see what I could do with them. The 350D is 8 Megapixel and the A520 is 4 Megapixel so both will take a frame way bigger than 720p that I can crop down.

I had a bit of a look round and found an app called gphoto2 that claimed to be able to drive both cameras via the USB port to take images at a set interval. I plugged the 350D in and tried to get it to work but I kept getting the following error*:

*** Error ***
Sorry, your camera does not support generic capture
ERROR: Could not capture.
*** Error (-6: 'Unsupported operation') ***

So I swapped over to the A520 and things seemed to work fine. I set it up on the tripod and fired off the following command:

[hardillb@bagend ~]$ gphoto2 --capture-image -I 120

This triggers the camera every 2 mins which I thought should be enough time to see some progress in each frame.

Apart from having to swap the batteries twice it all went rather well; I soon started to ignore the little beep from the camera as it took each shot. At the end I copied all the images off the SD card for processing. Each image started out at 2272×1704, so they would have been big enough to use in 1080p video, but I decided to shrink them down to 720p.

The following little ImageMagick script resizes the images down and adds 2 black bars down the sides to pad them out to a full 16:9 720p frame size.

#!/bin/sh
# resize each frame down to 720p and pad with black bars to a full 16:9 frame
for x in *.JPG
do
   convert "$x" -resize 1280x720 -bordercolor black -border 160x0 "resized/$x"
done

The first bit, -resize 1280x720, resizes the original image down to 720 pixels high, and the second bit, -bordercolor black -border 160x0, adds the 2 x 160 pixel wide black bars to pad the image up to the required 1280 pixels wide before writing a copy out to the resized directory.

And this mencoder line stitches them together into a video at 2 frames per second, so each second of video is equivalent to about 4 minutes of real time:

mencoder "mf://*.JPG" -mf fps=2 -o day1.avi -ovc lavc -lavcopts \
vcodec=msmpeg4v2:vbitrate=800

Here is a sample of the output

*I have since found this gphoto bug report that mentioned changing the camera’s USB mode from PTP mode to normal mode. After finding this setting on the camera I managed to get the 350D to work with gphoto2 as well.

WIFI presence detection

Back in my very first post I talked about using Bluetooth to detect my presence at home in order to disable the CCTV system and control a few other things.

While this works well it does not scale well to multiple people, as the Bluetooth layer 2 ping takes about 5 seconds to time out if the device is not in range. This means that at most 12 different phones can be checked in a minute.
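The check is essentially a BlueZ l2ping against each phone’s Bluetooth address, something like this (the address below is obviously a placeholder):

# single layer-2 ping; exits non-zero if the phone isn't in range
sudo l2ping -c 1 AA:BB:CC:DD:EE:FF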

A couple of recent chats with a few people at work (Vaibhavi Joshi & Dale Lane and Bharat Bedi) got me thinking about this again. Modern phones tend to have WIFI as well as Bluetooth and 3G radios these days so I thought that I’d have a look at seeing if this could be used to locate devices.

After a bit of a poke around it looked like a package called Kismet should be able to do what I wanted.

Kismet

Kismet is a client/server application: the backend server reads from the network card and decodes the packets, and the UI requests data from the server over a socket connection. This also means the backend can be on a different machine; in fact several different drone backends can be consolidated into a single master backend server, with all the captured data presented to one UI. This means you could distribute a number of drones over a site and generate a map as devices move between the areas covered by the different backends.

The default client is an ncurses-based application that lists all the visible networks along with a chart showing the incoming packet rates. It’s great for getting a view of which networks are active, which can be very useful when you have to set up a new one and want to see which channels are free.

Rather than use the default client I decided to write my own to drive the backend the way I wanted and to make exposing the data easier (I’m going to publish detected devices on an MQTT topic). But first I had a bit of a play using the netcat (nc) command. Netcat basically pipes stdin/stdout to and from a given socket, which is useful because the Kismet protocol is just a set of simple text commands. For example, the following command will get the Kismet backend to return a list of all the clients it has seen to date:


echo -e '!0 enable client MAC,manuf,signal_dbm,signal_rssi' | nc localhost 2501


Returns something that looks like this:

...
*CLIENT: 00:25:69:7D:53:D9 [0x01]SagemCommu[0x01] -71 0
...

The only tricky bit about the response is that any field that can contain a space is wrapped in characters with a value of 0x01; in this case the manufacturer field could contain spaces, so we need a regexp to chop up the response each time a client is spotted.

I decided to write my client in Java (because the MQTT libraries are easy to use), so I used a regular expression to split up the response (note the doubled backslashes needed in a Java string literal):

Pattern.compile("\\*CLIENT: ([0-9A-F:]*) \\x01(.*)\\x01 (-?\\d+) (\\d+)");

By default Kismet cycles round all the available channels to try and get a full picture of all the WIFI traffic in range, but this means it can miss some packets and in turn miss clients that are not generating a lot of traffic. To help get round this I have locked Kismet to just listen on the same channel as my WIFI access point, since all my devices are likely to try and connect to it as soon as they come in range, so there is less chance of missing my phone.

!1  HOPSOURCE cab63dc8-9916-11e0-b51a-0f04751ce201 LOCK 13

cab63dc8-9916-11e0-b51a-0f04751ce201 is the UUID assigned to the wifi card by kismet and the 13 is the channel I run my WIFI access point on. You can find the UUID by running the following command:

echo -e '!1234 enable source type,username,channel,uuid' | nc localhost 2501

Which returns a string that looks like this every time the backend hops to a new channel:

*KISMET: 0.0.0 1308611701 [0x01]Kismet_2009[0x01] [0x01]alert[0x01] 0 
*PROTOCOLS: KISMET,ERROR,ACK,PROTOCOLS,CAPABILITY,TERMINATE,TIME,PACKET,STATUS, 
PLUGIN,SOURCE,ALERT,WEPKEY,STRING,GPS,BSSID,SSID,CLIENT,BSSIDSRC,CLISRC, 
NETTAG,CLITAG,REMOVE,CHANNEL,SPECTRUM,INFO,BATTERY 
*SOURCE: orinoco_cs test 3 30b9b5a4-9b93-11e0-acfe-ee054e2c7201 
*ACK: 1234 [0x01]OK[0x01]

Publishing the last seen time on the topic /WIFIWatch/<mac> allows applications to register interest in a specific device and also to build up a list of all devices ever seen and when.
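Watching these messages is then just a normal MQTT subscription; for example with mosquitto_sub (the broker name here is a placeholder):

# watch the last-seen messages for every detected device
mosquitto_sub -h broker -v -t '/WIFIWatch/#'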

WIFI Watch topics

It’s not just phones that have WIFI adapters these days; netbooks, tablets, even digital cameras (with things like Eye-Fi) all have them. Also, with multiple Kismet nodes it might be possible to track devices as they move around an area.

Next is to look at the signal strength information to see if I can judge a relative distance from the detection adapter.

Resources:

Kismet