LDAP and NFC Node-RED Nodes

About a week ago a colleague asked me to help resurrect some code I had written that used our work ID badges to look up information on the card owner, in order to log into a system for a demonstration.

The ID badges are basically MIFARE cards so can be read by an NFC reader. The content of the cards is encrypted, but each card has a unique ID. Unfortunately the security team will not share the mapping of these IDs to actual people, but since this is for a demonstration that will only be given by a relatively small number of people it’s not a problem to set up a list of mappings ourselves.

The original version of this code used nfc-eventd and some Java code to read the IDs and then do a lookup in a little database to convert them to email addresses. It worked, but was a bit of a pig to set up and move between machines as it required a number of different apps and config files, so I decided to have a go at rewriting it all in Node-RED.

NFC ID Flow

To do this I was going to need 2 new nodes, one to read the NFC card and one to look up details in the LDAP. Both of these actually proved reasonably easy and quick to write as there are existing Node.js npm modules that do most of the heavy lifting. The flow has a couple of extra bits: it uses a MongoDB database to store the ID to email address mappings, and if there is no match it uses WebSockets to populate a field in a separate web page where an email address can be entered to update the database.
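The matched/unmatched branching in the flow could be sketched as a Node-RED function node with two outputs, something like this (illustrative only, not the actual flow's code; the message shape is an assumption):

```javascript
// Hypothetical two-output function node body: messages where the MongoDB
// lookup found an email address go to output 1, unmatched tag IDs go to
// output 2 (the websocket-backed web page).
function route(msg) {
  if (msg.payload && msg.payload.email) {
    return [msg, null]; // matched: pass the email address on
  }
  return [null, msg];   // no match: send the tag ID to the web page
}
```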

NFC

I did a first pass using the nfc npm module and it worked, but there was no way to shut down the connection to the NFC reader in code, which meant I couldn’t clean up properly when Node-RED shut down or when the node needed to be restarted.

The nfc module on npmjs.org is actually a bit out of date compared to the git repository it’s hosted in, so I moved over to using the upstream version of the code. This changed the API a little but still didn’t provide a mechanism to stop the interface, so I forked the project and, after a little bit of playing, ended up with some working shutdown code.

The only callback is for when an NFC tag is detected, and it polls in a tight loop, so the stream of data from the node is really too high to feed into a Node-RED flow. The Node-RED wrapper rate-limits this, reporting the same tag at most once every 10 seconds. This is good enough for the original problem I was looking to solve, but I still think it can be done better. I’m planning on adding callbacks for when a tag is seen and when it is removed, similar to how nfc-eventd works. I also want to look at doing NDEF decoding.
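The rate limiting could look something like this (a minimal sketch; the names are illustrative and not taken from the actual node source):

```javascript
// Report a given tag at most once every 10 seconds (sketch only).
var REPORT_INTERVAL = 10 * 1000;
var lastSeen = {};

function shouldReport(tagId, now) {
  now = (now === undefined) ? Date.now() : now;
  if (lastSeen[tagId] === undefined || (now - lastSeen[tagId]) >= REPORT_INTERVAL) {
    lastSeen[tagId] = now;
    return true;
  }
  return false;
}
```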

You can install the current version of the node with:

npm install https://github.com/hardillb/node-red-contrib-nfc/archive/master.tar.gz

It depends on libnfc which should work on Linux and OS X, and I’ve even seen instructions to build it for Windows.
Once I’ve got a bit further I’ll add it to npmjs.org.

LDAP

This one was even simpler. The LDAP npm module links to the OpenLDAP libraries and does all the hard work.

It just needed a config dialog to take a base DN and a filter, and a connection setup that takes a server, port and, if needed, a bind DN and password. The filter is a mustache template so values can be passed in.
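So a filter template like `(uid={{payload}})` gets its values filled in from the incoming message. This tiny stand-in shows the idea (illustrative only, not the node's actual code, which uses the mustache module):

```javascript
// Minimal stand-in for the mustache templating the node uses (sketch):
// substitute {{name}} tokens in the filter with fields from the message.
function renderFilter(template, msg) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return msg[key] !== undefined ? msg[key] : match;
  });
}

var filter = renderFilter('(uid={{payload}})', { payload: 'bhardill' });
// filter is now '(uid=bhardill)' and can be used for the LDAP search
```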

This node is pretty much done. You can find the code on GitHub and the node can be installed with the following:

npm install node-red-node-ldap

Like the NFC node, OpenLDAP should be available for Linux and OS X, and there looks to be a Windows port.


More playing with Asterisk

I have been playing some more with Asterisk and I’ve got 2 useful bits to share.

MP3 Voicemail

The simple one first: a recent update to Android (not sure exactly which) means it won’t play back WAV files attached to emails. This is a problem, as when Asterisk records a voicemail it can be configured to email the recording as a WAV file to the mailbox owner. A little bit of googling turned this up: http://bernaerts.dyndns.org/linux/179-asterisk-voicemail-mp3. It needed updating a little to get it to work on my Raspberry Pi.

First I needed to change one of the sed lines to match WAV not wav to fix the file name, from this:

...| sed 's/.wav/.mp3/g' > stream.part3.mp3.head

to this:

...| sed 's/.WAV/.mp3/g' > stream.part3.mp3.head

Secondly, lame doesn’t like the encoding of the WAV file (I think it’s because the stored values are unsigned) so we need to run it through sox first to fix it.

sox stream.part3.wav -e signed stream.part3a.wav
lame -m m -b 24 stream.part3a.wav stream.part3.mp3

This not only means I can listen to the messages on my phone and tablet, it also makes the emails smaller so they take up less bandwidth.

Using a 3G Stick to make calls

Having used an OBi110 to hook my Asterisk VoIP rig up to a standard phone line, I was looking for a way to hook a mobile phone into the system. There are 2 options with Asterisk: chan_bluetooth and chan_dongle.

Chan_bluetooth uses a Bluetooth connection to a mobile phone to make and receive calls. I had a look at this, but it meant keeping a phone plugged into a charger and having another Bluetooth adapter plugged in.

Chan_dongle works with certain Huawei 3G USB modems. These 3G sticks are basically full phones with a USB interface. I’ve already been using one of the listed modems for my SMS/MMS project and had a spare one kicking around. It needed a firmware update to make it work, which was a bit of a challenge as it required setting up a real Windows machine because I couldn’t get it to work in a VM.

Setting up the dongle was a little tricky at first as I couldn’t get a set of udev rules to match the stick properly to ensure it always ends up with the same device names. The code does let you specify the stick using its IMEI, which helps if you have multiple sticks plugged into the same computer.

Once configured, it was easy to set up extensions.conf to allow making and receiving calls. The main reason for setting this up was to have a portable VoIP rig that I can take to different places without having to worry about a fixed phone line. There is an upcoming hackday that I have a plan for.


DIY IVR for PVR

The title is a bit of a mouthful, but it basically means being able to set up recordings on my PVR by calling my home phone number and just speaking.

This is one of the projects I wanted to play with after setting up my OBi110 and Asterisk PBX.

Setting up systems where you press digits on your phone to navigate menus in Asterisk is pretty simple, but systems that listen to what you say and then interpret that are a little trickier. To make it work you need 3 main parts:

  • A system to record the audio
  • A way to parse the audio and turn it into text
  • Something to extract the interesting bits from the text

Asterisk will record the audio if poked the right way, which leaves the last two bits to sort out.

Some of the guys at work do voice recognition projects and pointed me at some of the open source toolkits1, but these normally involve lots of training to get things accurate, plus reasonably meaty boxes to run on. Since I’m running Asterisk on a Raspberry Pi I was looking for something a little more lightweight.

A bit of searching around turned up a project on GitHub that already uses Google’s voice to text engine with Asterisk. This looked spot on for what I needed. Getting hold of a Google Speech API key is a little tricky as it’s not really a public API, but support for it is built into the Chromium web browser so following these instructions helped. The API key is limited to 50 calls a day, but that should be more than enough for this project.

Once installed, the following flow in the Asterisk dialplan lets you dial extension 200; it records any speech until there are 3 seconds of silence, then forwards the recording to the Google service. When the response returns, it puts the text into a dialplan variable called utterance, along with a value between 0 and 1 indicating how confident Google is in the transcription, in a variable called confidence.

exten => 200,1,Answer()
exten => 200,n,AGI(/opt/asterisk/asterisk-speech-recog/speech-recog.agi,en-GB,3,#,NOBEEP)
exten => 200,n,Verbose(1,The text you just said is: ${utterance})
exten => 200,n,Verbose(1,The probability to be right is: ${confidence})
exten => 200,n,Hangup()

An example output:

The text you just said is: record bbc1 at 9 p.m.
The probability to be right is: 0.82050169

Now that I had some text to work with, I needed something to make sense of it and turn it into an action that could be followed. A simple way to do this would be with some regular expressions2, but I wanted to try something a little smarter that I could also use to add support for other bits of my home automation system. This meant looking at some proper NLP and text analytics technology.
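For comparison, the regular expression approach might have looked something like this (a hypothetical sketch, and exactly why I decided against it: it only copes with one phrase shape, like the example output above):

```javascript
// Naive regex parser for utterances like "record bbc1 at 9 p.m."
// (illustrative only -- this is the approach I decided against).
function parseUtterance(text) {
  var m = text.match(/^(record)\s+(.+?)\s+at\s+(.+)$/i);
  if (!m) {
    return null;
  }
  return { action: m[1].toLowerCase(), channel: m[2], time: m[3] };
}
```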

Dale has recently written about deploying some simple text analytics tools to BlueMix, which I used as a starting point, along with this set of introductory tutorials for the IBM LanguageWare Workbench.

Following the instructions I built a number of databases, the main one of television channel names, to make them easy to match and to include multiple versions to help smooth out how the voice to text engine interprets things like “BBC One”, which could easily end up being mapped to BBC 1 or BBC1, to name but two. Then I wrote a bunch of rules to match times. It’s a little long-winded to go into here; if I get time I’ll do a separate post on writing UIMA rules. Once the rules were finished I exported them as a PEAR file and wrote a Java servlet to feed text into the pipeline and extract the useful bits from the CAS. The source for the servlet can be found on GitHub.

Now that I had a web endpoint I could send text to and get it marked up with all the interesting bits, I needed a way to forward text to it from within the Asterisk dialplan. I used the earlier voice to text example to put together this little bit of Perl:

#!/usr/bin/env perl

use warnings;
use strict;
use URI::Escape;
use LWP::UserAgent;
use JSON;

my %AGI;
my $ua;
my $url = "http://192.168.122.1:8080/PEARWebApp/Processor";
my $response;
my $temp;
my $debug = 0; # set to 1 for extra logging from checkresponse()

# Store AGI input #
($AGI{arg_1}) = @ARGV;
while (<STDIN>) {
        chomp;
        last if (!length);
        $AGI{$1} = $2 if (/^agi_(\w+)\:\s+(.*)$/);
}

$temp = "text=" . uri_escape($AGI{arg_1});

$ua = LWP::UserAgent->new;
$ua->agent("ben");
$response = $ua->post(
	"$url",
	Content_Type => "application/x-www-form-urlencoded",
	Content => "$temp",
);
if (!$response->is_success) {
	print "VERBOSE \"some error\"\n";
	checkresponse();
} else {
	print "SET VARIABLE \"action\" \"" . $response->content . "\"\n";
	checkresponse();
}
exit;

sub checkresponse {
        my $input = <STDIN>;
        my @values;

        chomp $input;
        if ($input =~ /^200/) {
                $input =~ /result=(-?\d+)\s?(.*)$/;
                if (!length($1)) {
                        warn "action.agi Command failed: $input\n";
                        @values = (-1, -1);
                } else {
                        warn "action.agi Command returned: $input\n" if ($debug);
                        @values = ("$1", "$2");
                }
        } else {
                warn "action.agi Unexpected result: $input\n";
                @values = (-1, -1);
        }
        return @values;
}

The response looks like the following, which I then fed into a script that uses the MythTV Services API to query the program guide for what is showing at that time on that channel and then schedule a recording.

{
  "time": "9:00 am",
  "action": "record",
  "channel": "BBC ONE"
}
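The record.agi script itself isn't shown here, but its first step is easy to sketch: parse the JSON and hand the fields to whatever does the MythTV guide lookup (hypothetical code, not the actual script; recordFn stands in for the guide query and scheduling call):

```javascript
// Parse the servlet response and dispatch a record action (sketch only).
function handleAction(json, recordFn) {
  var action = JSON.parse(json);
  if (action.action === 'record' && action.channel && action.time) {
    return recordFn(action.channel, action.time);
  }
  return null; // unrecognised or incomplete action
}
```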

And I included the script in the dialplan like this:

exten => 200,1,Answer()
exten => 200,n,AGI(/opt/asterisk/asterisk-speech-recog/speech-recog.agi,en-GB,3,#,NOBEEP)
exten => 200,n,Verbose(1,The text you just said is: ${utterance})
exten => 200,n,Verbose(1,The probability to be right is: ${confidence})
exten => 200,n,AGI(/opt/asterisk/uima/action.agi,"${utterance}")
exten => 200,n,AGI(/opt/asterisk/mythtv/record.agi,"${action}")
exten => 200,n,Hangup()

I need to add some more code to ask for confirmation in cases where the confidence in the extracted text is low, and also once the program lookup has happened, to ensure we are recording the correct show.

Now I have the basics working I plan to add some more actions to control and query other aspects of my home automation system.

1 Kaldi seems to be one of the interesting ones recently.
2 did I really say simple and RegExp in the same sentence?


Playing with Asterisk PBX

I’ve been meaning to get back and have a proper play with Asterisk again for a while. Last week Amazon sent me one of those emails about things you’ve looked at but not bought, and I spotted the OBi110:

It was down from £60 to £35 so I did exactly what they wanted and bought one.

Now normally I don’t use my land line at all; it’s just there to let the internets in, and it doesn’t even have a handset plugged in. But there are a few little projects kicking around the back of my mind that I’ve been thinking about for a while, and the OBi110 should let me play with them.

The first is to see if the (unused, never given to anybody but my ISP to set up the connection) number for the land line has ended up on any lists for scammers/spammers and people generally trying to sell me stuff. My mobile gets at least 1 call a week about payment protection and the like, and even my work office number has started getting recorded calls about getting my boiler replaced.

I could probably have just used the call log on the OBi110, but I wanted to be able to record these calls and a few other things, so I needed something a little smarter, which is where Asterisk comes in. Asterisk is an open source VoIP PBX, which basically means it acts like a telephone exchange for calls made over the internet. I’ve seen people run Asterisk on the old Linksys Slugs so I was sure it would run fine on a Raspberry Pi as long as it wasn’t dealing with too many calls or doing much codec transcoding. As I already had a Pi running my SMS/MMS rig it seemed like a good place to put all my telephone stuff.

Installing Asterisk on the Pi was just a case of running apt-get install asterisk. It comes with a bunch of default config files (in /etc/asterisk), but there are 2 main ones that I needed to change to make some simple things work.

sip.conf
This file is where you configure what clients can connect to your Asterisk instance via the SIP protocol. To start with I’m going to set up 2 different clients: one for a softphone running on my laptop and one for the OBi110. It sets up a few things, but the important bit for later is the context, which controls which part of the extensions.conf file we jump to when receiving a call from each client.

[general]
context=local
srvlookup=yes

[softphone]
defaultuser=softphone
type=friend
secret=password123
qualify=no
nat=no
host=dynamic
canreinvite=no
context=local
disallow=all ; only the sensible codecs
allow=ulaw
allow=alaw
allow=gsm

[obihai]
defaultuser=obihai
type=friend
secret=password123
qualify=yes
dtmfmode=rfc2833
canreinvite=no
context=external
disallow=all
allow=ulaw

extensions.conf
This file defines how Asterisk should handle calls. It has two contexts, called local and external. The local context defines 2 paths. The first is for extension 100: when this number is called from the softphone, Asterisk calls out to a small Python program called agi-mqtt, which publishes a JSON object (containing all the information Asterisk has about the call) to the calls/local MQTT topic. It then answers the call, plays an audio file containing “Hello World”, and finally hangs up. I’m mainly using this local context to test things out before copying them over to the external context.

The second path through the local context uses a special-case extension number, “_0Z.”, which matches any number that starts with 0[1-9] (so won’t match against 100). This path forwards the dialled number on to the OBi110 to place the call via the PSTN line.

The external context contains only 1 path, which matches the phone number of the PSTN line and currently does the same as the 100 extension (plays “Hello World”). At some point later I’ll set this path up to forward calls to a local softphone or a voicemail account.

[local]
exten => _0Z.,1,AGI(/opt/asterisk/agi-mqtt/mqtt,/opt/asterisk/agi-mqtt/mqtt.cfg,calls/local)
exten => _0Z.,2,Dial(SIP/${EXTEN}@obihai);
exten => _0Z.,3,Congestion()
exten => _0Z.,103,Congestion()
exten => t,1,Hangup()

exten => 100,1,AGI(/opt/asterisk/agi-mqtt/mqtt,/opt/asterisk/agi-mqtt/mqtt.cfg,calls/local)
exten => 100,2,Answer()
exten => 100,3,Playback(en_US/hello-world)
exten => 100,4,Hangup()

[external]

exten => 0123456789,1,AGI(/opt/asterisk/agi-mqtt/mqtt,/opt/asterisk/agi-mqtt/mqtt.cfg,calls/pstn-in)
exten => 0123456789,2,Answer()
exten => 0123456789,3,Playback(en_US/hello-world)
exten => 0123456789,4,Hangup()
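As a sanity check, the “_0Z.” pattern behaves like this regular expression (a JavaScript illustration of Asterisk's matching, where Z is any digit 1-9 and the trailing “.” matches one or more remaining characters):

```javascript
// JS illustration of the Asterisk "_0Z." dialplan pattern:
// a literal 0, then a digit 1-9, then at least one more character.
function matchesOutboundPattern(number) {
  return /^0[1-9].+$/.test(number);
}
```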

Now Asterisk was all working properly, I set up the OBi110 using the instructions found here.

After a bit of playing I have inbound and outbound calls working and some MQTT enabled logging. Next up is looking at using the SIP Client built into Android to allow calls to be made and received from my mobile phone.


Unpacking binary data from MQTT in Javascript

While doing a trawl of Stack Overflow for questions I might be able to help out with, I came across this interesting looking question:

Receive binary with paho mqttws31.js

The question was how to unpack binary MQTT payloads into double precision floating point numbers in javascript when using the Paho MQTT over WebSockets client.

Normally I would just send floating point numbers as strings and parse them on the receiving end, but sending them as raw binary means much smaller messages, so I thought I’d see if I could help to find a solution.

A little bit of googling turned up this link to JavaScript typed arrays, which looked like it was probably a step in the right direction. At that point I got called away to look at something else, so I stuck a quick answer in with the link and the following code snippet.

function onMessageArrived(message) {
  var payload = message.payloadByte()
  var doubleView = new Float64Array(payload);
  var number = doubleView[0];
  console.log(number);
}

Towards the end of the day I managed to look back at it, and there was a comment from the original poster saying that the sample didn’t work. At that point I decided to write a simple little test case.

First up, a quick little Java app to generate the messages.

import java.nio.ByteBuffer;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MessageSource {

  public static void main(String[] args) {
    try {
      MqttClient client = new MqttClient("tcp://localhost:1883", "doubleSource");
      client.connect();

      MqttMessage message = new MqttMessage();
      ByteBuffer buffer = ByteBuffer.allocate(8);
      buffer.putDouble(Math.PI);
      System.err.println(buffer.position() + "/" + buffer.limit());
      message.setPayload(buffer.array());
      client.publish("doubles", message);
      try {
        Thread.sleep(1000);
      } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
      }
      client.disconnect();
    } catch (MqttException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
  }
}

It turns out that using typed arrays is a little more complicated and requires a bit of work to populate the data structures properly. First you need to create an ArrayBuffer of the right size, then wrap it in a Uint8Array in order to populate it, before converting to the Float64Array. After a little bit of playing around I got to this:

function onMessageArrived(message) {
  var payload = message.payloadBytes
  var length = payload.length;
  var buffer = new ArrayBuffer(length);
  var uint = new Uint8Array(buffer);
  for (var i=0; i<length; i++) {
	  uint[i] = payload[i];
  }
  var doubleView = new Float64Array(uint.buffer);
  var number = doubleView[0];
  console.log("onMessageArrived:"+number);
};

But this was returning 3.207375630676366e-192 instead of Pi. A little more head scratching and the idea of checking the byte order kicked in:

function onMessageArrived(message) {
  var payload = message.payloadBytes
  var length = payload.length;
  var buffer = new ArrayBuffer(length);
  var uint = new Uint8Array(buffer);
  for (var i=0; i<length; i++) {
	  uint[(length-1)-i] = payload[i];
  }
  var doubleView = new Float64Array(uint.buffer);
  var number = doubleView[0];
  console.log("onMessageArrived:"+number);
};

This now gave an answer of 3.141592653589793, which looked a lot better. I still think there may be a cleaner way to do this using a DataView object, but that’s enough for a Friday night.

EDIT:

Got up this morning having slept on it and came up with this:

function onMessageArrived(message) {
  var payload = message.payloadBytes
  var length = payload.length;
  var buffer = new ArrayBuffer(length);
  var uint = new Uint8Array(buffer);
  for (var i=0; i<length; i++) {
	  uint[i] = payload[i];
  }
  var dataView = new DataView(uint.buffer);
  for (var i=0; i<length/8; i++) {
      console.log(dataView.getFloat64((i*8), false));
  }
};

This better fits the original question, in that it will decode an arbitrary-length array of doubles, and since we know that Java is big-endian we can set the little-endian flag to false to get the right conversion without having to reorder the array as we copy it into the buffer (which I’m pretty sure wouldn’t have worked for more than one value).


Random gifts

A couple of weeks ago I got back into the office on Monday morning after a long weekend off to find a message from the office’s Goods Inwards department saying there had been a package delivered for me on Friday. I was expecting a few things for various projects so I set off down through the tunnels under Hursley to see which bits had arrived.

On arrival I found a plain cardboard box held together with tape claiming it had been damaged in transit (no obvious signs of damage on the box) marked with my name, work address and desk phone number but no indication of where it had come from. Once opened it revealed a boxed pair of Sennheiser Momentum on ear headphones and a delivery note.

None of the projects I’m working on at the moment need a set of headphones and I’d not ordered any for myself, as my usual set of Philips O’Neill Headphones are still in great shape and spend most of the working day on my head.

A quick ask around the office didn’t turn up anybody who knew anything about them. The delivery note didn’t help, as it only showed 1 set of headphones and the delivery details, nothing about where they had come from; it was as if it should have been printed on headed notepaper. So, having drawn a blank as to where they may have come from, I threw out a tweet to see if anybody would own up to sending them:

This didn’t get any joy, so I was at a bit of a loss as to what to do with them. A bit of googling around seemed to imply that if something is unsolicited then it should be considered a gift. Having said that, I decided to leave them in the box for a couple of weeks just in case somebody came looking for them.

Now, some 2 weeks later, nobody has come to demand their new headphones, so I’ve decided the least I can do is review them. I’ll use them next week in the office and write up my impressions.

EDIT:
It appears somebody has come looking for them a whole month after they arrived. They caught me in a moment of weakness and I agreed to send them back (they did try to sell them to me when I mentioned they had now been used and I had to point out that it would be illegal to request payment…)


Android Wear after a week

It’s been a little over a week since I picked up an LG G Watch Android Wear device to play with.

My initial impression seems to hold: it’s OK, but it’s not going to change my world.

We got hold of some Samsung Gear Lives this week, so I’ve swapped to see if there is any difference between them.

Samsung Gear Live

The Samsung looks a bit better, but the out of the box experience was not as good: it wasn’t charged (unlike the LG), and it needed updating as soon as it was started (the same as the LG) but hid the update progress meter down in the settings, so it wasn’t obvious that it was doing something when I powered it on. The charging cradle is a fiddly little thing to fit and feels really cheap compared to the really nice magnetic tray that came with the LG.

The only extra feature the Samsung has is a heart rate monitor built into the back of the watch. This is interesting, but it does require the watch to be worn tight around the wrist. I normally like to let my watches move around a bit, so it’s taking a bit of getting used to and I’m not sure I’ll keep it that long. The only real use for the heart rate monitor is going to be during exercise, which is when I’m even more likely to want the device to be loose on my wrist.

Samsung Gear Live Charger

So far I’ve not been impressed enough with the Android Wear devices to buy one for myself, or even to borrow one from work to use for an extended period of time. I will keep an eye on the app developments to see if anybody can come up with a truly compelling use case for one. It will also be interesting to see if the Motorola Moto 360 is any different.


First Impressions – Android Wear – LG G Watch

One of the benefits of working for ETS is that we occasionally get hold of toys to play with; recently a box of LG G Watches turned up, so I grabbed one to have a play.

Previously I’ve used one of the first iteration of Android-linked “smart watches”, namely the Sony LiveView. The first version of these really was not great; the fact that only the very edges of the screen were touch sensitive didn’t help with interacting with it, and the strap wasn’t that comfy, so all in all not a good experience.

The LG G seems much better out of the box: the whole screen is a touch surface and it has a look and feel much closer to a modern digital watch. The set up process was relatively painless (once I’d overcome some local issues with the office wifi); there was the now usual immediate device update that all modern devices seem to suffer from, but it didn’t take that long.

So far I’ve just been wearing it in the office, having it pop up new mail, SMS and calendar notifications at the same time as my phone, but I’m out of the office with a research partner for the next 3 days so it will be interesting to see if it’s useful while I’m on the road. There seems to be deep integration with Google Now, which should be useful.

The biggest thing that will determine how useful the whole Android Wear idea is going to be is battery life; I’ll keep an eye on it and see how long it lasts.

I also need to have a look at the API to see if I can come up with something fun to do with it and the sensors contained in the device. I do know that the notifications from Tracks2Miles are showing up.


Running Node-RED as a Windows or OS X Service

For a recent project I needed to run Node-RED on Windows, and it became apparent that being able to run it as a service would be very useful.

After a little poking around I found an npm module called node-windows.

You install node-windows as follows:

npm install -g node-windows

followed by:

npm link node-windows

in the root directory of your project. This is a 2 stage process as node-windows works better when installed globally.

Now the npm module is installed, you configure the Windows service by writing a short Node.js app. This windows-service.js should work for Node-RED:

var Service = require('node-windows').Service;

var svc = new Service({
  name:'Node-Red',
  description: 'A visual tool for wiring the Internet of Things',
  script: require('path').join(__dirname,'red.js')
});

svc.on('install',function(){
  svc.start();
});

svc.on('uninstall',function(){
  console.log('Uninstall complete.');
  console.log('The service exists: ',svc.exists);
});

if (process.argv.length == 3) {
  if ( process.argv[2] == 'install') {
    svc.install();
  } else if ( process.argv[2] == 'uninstall' ) {
    svc.uninstall();
  }
}

Run the following to install the service:

node windows-service.js install

and to remove the service:

node windows-service.js uninstall

There is also an OS X version of node-windows called node-mac; the same script with a small change should work on both:

if (process.platform === 'win32') {
  var Service = require('node-windows').Service;
} else if (process.platform === 'darwin') {
  var Service = require('node-mac').Service;
} else {
  console.log('Not Windows or OSx');
  process.exit(1);
}

var svc = new Service({
  name:'Node-Red',
  description: 'A visual tool for wiring the Internet of Things',
  script: require('path').join(__dirname,'red.js')
});

svc.on('install',function(){
  svc.start();
});

svc.on('uninstall',function(){
  console.log('Uninstall complete.');
  console.log('The service exists: ',svc.exists);
});

if (process.argv.length == 3) {
  if ( process.argv[2] == 'install') {
    svc.install();
  } else if ( process.argv[2] == 'uninstall' ) {
    svc.uninstall();
  }
}

I have submitted a pull request to include this in the base Node-RED install.

EDIT:

I’ve added node-linux to the pull request as well to generate /etc/init.d SystemV start scripts.


Tracks2Miles & Tracks2TitanXT Sunset

This link arrived in my inbox this morning:

https://groups.google.com/forum/#!topic/mytracks-dev/qcOWjmAfGi0

It basically means that the My Tracks export to Dailymile capability will stop working with the next version of My Tracks.

You will still be able to manually post workouts just not with GPS data.


Copyright © 1996-2010 Ben's Place. All rights reserved.