Working with multiple AWS EKS instances

I’ve recently been working on a project that uses AWS EKS managed Kubernetes Service.

For various reasons, too complicated to go into here, we’ve ended up with multiple clusters owned by different AWS accounts, so flipping back and forth between them has been a little trickier than normal.

Here are my notes on how to manage the AWS credentials and the kubectl config to access each cluster.

AWS CLI

The first task is to authorise the AWS CLI to act as the user in question. We do this by creating a user with the right permissions in the IAM console and then exporting the Access key ID and Secret access key values, usually as a CSV file. We then take these values and add them to the ~/.aws/credentials file.

[dev]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy

[test]
aws_access_key_id = AKYYYYYYYYYYYYYYYYYY
aws_secret_access_key = abababababababababababababababababababab

[prod]
aws_access_key_id = AKZZZZZZZZZZZZZZZZZZ
aws_secret_access_key = nmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnm

We can pick which set of credentials the AWS CLI uses by adding the --profile option to the command line.

$ aws --profile dev sts get-caller-identity
{
    "UserId": "AIXXXXXXXXXXXXXXXXXXX",
    "Account": "111111111111",
    "Arn": "arn:aws:iam::111111111111:user/dev"
}

Instead of using the --profile option you can also set the AWS_PROFILE environment variable. Details of all the ways to switch profiles are in the docs here.

$ export AWS_PROFILE=test
$ aws sts get-caller-identity
{
    "UserId": "AIYYYYYYYYYYYYYYYYYYY",
    "Account": "222222222222",
    "Arn": "arn:aws:iam::222222222222:user/test"
}

Now that we can flip easily between different AWS accounts, we can export the EKS credentials with:

$ export AWS_PROFILE=prod
$ aws eks update-kubeconfig --name foo-bar --region us-east-1
Updated context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar in /home/user/.kube/config

The user that created the cluster should also follow these instructions to make sure the new account is added to the cluster’s internal ACL.
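For reference, that internal ACL lives in the aws-auth ConfigMap in the kube-system namespace. A minimal sketch of the mapUsers entry the cluster creator needs to add, reusing the placeholder ARNs and usernames from above (yours will differ), looks something like this:

$ kubectl edit -n kube-system configmap/aws-auth

# under the "data" section
  mapUsers: |
    - userarn: arn:aws:iam::111111111111:user/dev
      username: dev
      groups:
        - system:masters

Mapping a user into system:masters gives them full admin rights on the cluster, so a more restricted group is a better idea for anything beyond a test setup.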

Kubectl

If we run the previous command with each profile it will add the connection information for all 3 clusters to the ~/.kube/config file. We can list them with the following command

$ kubectl config get-contexts
CURRENT   NAME                                                  CLUSTER                                               AUTHINFO                                              NAMESPACE
*         arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   
          arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   
          arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar 

The star is next to the currently active context; we can change the active context with this command:

$ kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar
Switched to context "arn:aws:eks:us-east-1:222222222222:cluster/foo-bar".

Putting it all together

To automate all this I’ve put together a collection of scripts that look like this:

export AWS_PROFILE=prod
aws eks update-kubeconfig --name foo-bar --region us-east-1
kubectl config use-context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar

I then run this with the shell source ./setup-prod command (or its shortcut . ./setup-prod) rather than adding a shebang to the top and running it as a normal script. This is because environment variables set in a normal script go out of scope when the script exits. Sourcing keeps the AWS_PROFILE variable in scope, which means the AWS CLI will continue to use the correct account settings when it’s used later while working on this cluster.
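Assuming the script above has been saved as setup-prod in the current directory, a typical session then looks like this:

$ . ./setup-prod
$ aws sts get-caller-identity    # still reports the prod account
$ kubectl get nodes              # talks to the prod cluster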

Joining FlowForge Inc.

FlowForge Logo

Today is my first day working for FlowForge Inc. I’ll be employee number 2 and joining Nick O’Leary working on all things based around Node-RED and continuing to contribute to the core Open Source project.

We should be building on some of the things I’ve been playing with recently.

Hopefully I’ll be able to share some of the things I’ll be working on soon, but in the meantime here is the short post that Nick wrote when he announced FlowForge a few weeks ago and a post welcoming me to the team.

To go with this announcement Hardill Technologies Ltd will be going dormant. It’s been a good 3 months and I’ve built something interesting for my client which I hope to see go live soon.

Setting up a AWS EC2 Mac

I recently needed to debug some problems running a Kubernetes app on a Mac. The problem is I don’t have a Mac or easy access to one that I can have full control over to poke and prod at things. (I also am not the biggest fan of OSx, but that’s a separate story)

Recently AWS started to offer Mac Mini EC2 instances. These differ a little from most normal EC2 instances as they are an actual dedicated bit of hardware that you have exclusive access to rather than a VM on hardware shared with others.

Because it’s a dedicated bit of hardware, the process for setting one up is a little different.

Starting the Instance

First you will probably need to request a limit increase on your account, as the default limit for dedicated hardware looks to be 0. This limit is also per region, so you will need to ask for the increase in every region you intend to use. To request it use the AWS Support Center: use the “Create Case” button and select “Service Limit Increase”. From the drop down select “EC2 Dedicated Hosts”, then the region, request an update to the mac1 instance type and enter the number of concurrent instances you will need. It took a little time for my request to be processed, but I did submit it on Friday afternoon and it was approved on Sunday morning.

Once it has been approved you can create a new “Dedicated Host” in the EC2 console, with an “Instance Family” of mac1 and an “Instance Type” of mac1.metal. You can pick your availability zone (not all Regions and AZs have all instance types, so it might not be possible to allocate a Mac in every zone). I also suggest you tick the “Instance auto-placement” box.

Once that is complete you can actually allocate an EC2 instance on this dedicated host. You get to pick which version of OSx you want to run. Assuming you only have one dedicated host and you ticked the auto-placement box, you shouldn’t need to pick the hardware you want to run the instance on.

The other main things to pick as you walk through the wizard are the amount of disk space (the default is 60GB), which security group you want (be sure to pick one that allows SSH access) and which SSH key you’ll use to log in.
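If you prefer the CLI to the console, the equivalent steps look roughly like this; the availability zone, AMI ID, key name and security group here are placeholders for your own values:

$ aws ec2 allocate-hosts --instance-type mac1.metal \
    --availability-zone us-east-1a --auto-placement on --quantity 1
$ aws ec2 run-instances --instance-type mac1.metal \
    --image-id ami-xxxxxxxxxxxxxxxxx --key-name my-key \
    --security-group-ids sg-xxxxxxxx \
    --placement Tenancy=host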

The instances do take a while to start, but given it’s doing a fresh OSx install on the hardware this is probably not a surprise. Once the console says it’s up and both status checks are passing you’ll be able to ssh into the box.

Enabling a GUI

Once logged in you can do most things from the command line, but I needed to run Docker, and all the instructions I could find online said I needed to download Docker Desktop and install that via the GUI.

I found the following gist which helped.

  • First up, set a password for the ec2-user
    sudo passwd ec2-user
  • Second, enable VNC (the Apple Remote Desktop agent)
% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
-activate -configure -access -on \
-configure -allowAccessFor -specifiedUsers \
-configure -users ec2-user \
-configure -restart -agent -privs -all

% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
 -configure -access -on -privs -all -users ec2-user

You can then add -L 5900:localhost:5900 to the ssh command that you use to log into the mac. This will port forward the VNC port to localhost.
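With a placeholder key file and hostname it ends up looking something like this:

$ ssh -i ~/.ssh/mac-key.pem -L 5900:localhost:5900 ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com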

VNC Viewer or Remmina can be used to start a session that gives full access to the Mac’s GUI.

Expand the disk

If you have allocated more than the default 60GB then you will need to expand the disk to make full use of it.

% PDISK=$(diskutil list physical external | head -n1 | cut -d" " -f1)
APFSCONT=$(diskutil list physical external | grep "Apple_APFS" | tr -s " " | cut -d" " -f8)
% sudo diskutil repairDisk $PDISK
# Accept the prompt with "y", then paste this command
% sudo diskutil apfs resizeContainer $APFSCONT 0

Add tools

The instance comes with Homebrew pre-setup so you can install nearly anything else you might need.
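For example (these package names are just illustrative, nothing the instance actually requires):

% brew install htop jq kubernetes-cli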

Shut it down when you are done

Mac EC2 instances really are not cheap ($25.99 per day…) so remember to kill it off when you are done.
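When you are finished, terminate the instance and then release the dedicated host (substituting your own IDs for the placeholders below). Note that dedicated hosts have a minimum 24 hour allocation period before they can be released.

$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
$ aws ec2 release-hosts --host-ids h-0123456789abcdef0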

Hardill Technologies Ltd

Over the last few years I’ve had a number of people approach me to help them build things with Node-RED, but each time it’s not generally been possible to get as involved as I would have liked due to my day job.

Interest started to heat up a bit after I posted my series of posts about building Multi Tenant Node-RED systems, and some of the proposed projects sounded really interesting. So I have decided to start doing some contract work on a couple of them.

Node-RED asking for credentials

The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.

At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event driven systems.

As well as building Multi-Tenant Node-RED environments I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins, some examples include:

If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, need some custom nodes creating or just want to talk about Node-RED you can check out my CV here and please feel free to drop me a line on tech@hardill.me.uk.

Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here so keep an eye out for what I’m working on.

Looking For a New Job

I’m currently in the market for a new employer.

I’m looking for a lead developer/architect role preferably in the connectivity/IoT space but happy to talk to people about anything that they feel I might be a good fit for.

My C.V. can be found here and contains contact details.

My current position very much isn’t the job I was offered/recruited for, and having tried to get it there it appears there is little chance of it ever becoming that, so it’s time to move on.

Quick and Dirty Touchscreen Driver

I spent way too much time last week at work trying to get a Linux kernel touchscreen driver to work.

The screen vendor supplied the source for the driver with no documentation at all; after poking through the code I discovered it took its configuration parameters from a Device Tree overlay.

Device Tree

So started the deep dive into i2c devices and Device Tree. At first it all seemed so easy, just a short little overlay to set the device’s address and to set a GPIO pin to act as an interrupt, e.g. something like this:

/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        target = <&i2c1>;
        __overlay__ {
            status = "okay";
            #address-cells = <1>;
            #size-cells = <0>;

            pn547: pn547@28 {
                compatible = "nxp,pn547";
                reg = <0x28>;
                clock-frequency = <400000>;
                interrupt-gpios = <&gpio 17 4>; /* active high */
                enable-gpios = <&gpio 21 0>;
            };
        };
    };
};

All the examples are based around a hard wired i2c device attached to a permanent system i2c bus; this is where my situation differs. Due to “reasons” too complicated to go into here, I have no access to either of the normal i2c buses available on a Raspberry Pi, so I’ve ended up using an Adafruit Trinket running the i2c_tiny_usb firmware as a USB i2c adapter and attaching the touchscreen via this bus. The kernel driver for i2c_tiny_usb devices is already baked into the default Raspbian Linux kernel, which meant I didn’t have to build anything special.
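As an aside, it is worth checking which bus number the kernel has given the USB adapter once it is plugged in, as that is the number you need when talking to it from userspace (it was bus 3 in my case). The i2c-tools package will list all the buses the kernel knows about:

$ sudo apt-get install i2c-tools
$ i2cdetect -l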

The problem is that USB devices are not normally represented in the Device Tree as they can be hot plugged. After being plugged in they are enumerated to discover what modules to load to support the hardware. The trick now was to work out where to attach the touchscreen i2c device, so the interrupt configuration would be passed to the driver when it was loaded.

I tried all kinds of different overlays, but no joy. The Raspberry Pi even already has a Device Tree entry for a USB device, because the onboard Ethernet is actually a permanently wired device and has an entry in the Device Tree. I tried copying this pattern and adding an entry for the tiny_i2c_usb device and then the i2c device but still nothing worked.

I have an open Raspberry Pi Stack Exchange question and an issue on the tiny-i2c-usb github page that hopefully somebody will eventually answer.

Userspace

Having wasted a week and got nowhere this morning I decided to take a different approach (mainly for the sake of my sanity). This is a basic i2c device with a single GPIO pin to act as an interrupt when new data is available. I knew I could write userspace code that would watch the pin and read from the device, so I set about writing a userspace device driver.

Python has good i2c and GPIO bindings on the Pi so I decided to start there.

import smbus
import RPi.GPIO as GPIO
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,2)
  x = ev[0]
  y = ev[1]
  print("x=%d y=%d" % (x, y))

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()

This is a good start but it would be great to be able to use the standard /dev/input devices like a real mouse/touchscreen. Luckily there is the uinput kernel module that exposes an API especially for userspace input devices and there is the python-uinput module.

import smbus
import RPi.GPIO as GPIO
import uinput
import signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

bus=smbus.SMBus(3)

device = uinput.device([
  uinput.ABS_X,
  uinput.ABS_Y
])

def callback(c):
  ev = bus.read_i2c_block_data(0x38,0x12,3)
  down = ev[0]
  x = ev[1]
  y = ev[2]
  if down == 0:
    device.emit(uinput.BTN_TOUCH, 1, syn=False)
    device.emit(uinput.ABS_X, x, syn=False)
    device.emit(uinput.ABS_Y, y)
  else:
    device.emit(uinput.BTN_TOUCH, 0)   

GPIO.add_event_detect(27,GPIO.FALLING,callback=callback)
signal.pause()

This injects touchscreen coordinates directly into the /dev/input system, the syn=False in the X axis value tells the uinput code to batch the value up with the Y axis value so it shows up as an atomic update.

This is a bit of a hack, but it should be more than good enough for what I need it for. Still, I’m tempted to keep chipping away at the Device Tree stuff as I’m sure it will come in handy someday.

Native NodeJS callbacks with Context

As I mentioned back in September I’ve recently started a new job. Due to the nature of the work I’m not going to be able to talk about much of it. But when there are things I can I’ll try and write them up here.

One of my first tasks has been to write a Node-RED wrapper around a 3rd party native library. This library provides a 2 way communication channel to a prototyping environment, so it needs to use threads to keep track of things in both directions and make use of callbacks to pass that information back into the Javascript side. I dug around for some concrete examples of what I was trying to do, and while I found a few things that came close I didn’t find exactly what I was looking for, so here is a stripped back version of the node I created to use as a reference for later.

This is the C++ method that is called when a new instance of the native node module is created. It takes an object reference as an argument to be stored away and used as the context for the callback later.

void Test::New(const Nan::FunctionCallbackInfo<v8::Value>& info) {
  if (info.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    Test* obj = new Test();
    obj->Wrap(info.This());
    info.GetReturnValue().Set(info.This());

    v8::Local<v8::Object> context = v8::Local<v8::Object>::Cast(info[0]);
    obj->context.Reset(context);

    uv_loop_t* loop = uv_default_loop();
    uv_async_init(loop, &obj->async, asyncmsg);
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 2;
    v8::Local<v8::Value> argv[argc] = { info[0], info[1] };
    v8::Local<v8::Function> cons = Nan::New<v8::Function>(constructor);
    info.GetReturnValue().Set(Nan::NewInstance(cons,argc,argv).ToLocalChecked());
  }
}

The object is created like this on the javascript side, where this is the reference to the object to be used as the context:

function Native() {
  events.EventEmitter.call(this);
  //passes "this" to the C++ side for callback
  this._native = new native.Test(this);
}

We then make the callback from C++ here:

void Test::asyncmsg(uv_async_t* handle) {
  Nan::HandleScope scope;

  //retrieve the context object
  Test* obj = (Test*)((callbackData*)handle->data)->context;
  v8::Local<v8::Object> context = Nan::New(obj->context);

  //create object to pass back to javascript with result
  v8::Local<v8::Object> response = Nan::New<v8::Object>();
  response->Set(Nan::New<v8::String>("counter").ToLocalChecked(), Nan::New(((callbackData*)handle->data)->counter));

  v8::Local<v8::Value> argv[] = { response };
  ((Nan::Callback*)((callbackData*)handle->data)->cb)->Call(context,1,argv);
  free(handle->data);
}

Which ends up back on the javascript side of the house here:

Native.prototype._status = function(status) {
  this.emit("loop", status.counter);
}

I’ve uploaded the code to github here if you want to have a look at the whole stack and possibly use it as a base for your project.

Time for something new

It looks like the IBM Process Server workflow has run and my entry in Bluepages (IBM’s internal LDAP-backed employee directory) has been expunged. So after pretty much exactly 16 years at IBM it’s time for something new.

I started at IBM straight after I finished my masters (to the extent that I handed my thesis in on the Friday in Cranfield, drove back to Yorkshire on the Saturday morning, did as much washing as possible and then drove down to Southampton on the Sunday to check into the hotel at Marwell Zoo for the start of the induction week).

While at IBM I worked for 2 teams, firstly the Java Technology Centre and then Emerging Technologies & Services.

Java Technology Centre


Most of my time in this group was spent working in the Level 3 Support team. At the time the IBM JVM underpinned a large proportion of the IBM Software stack, which meant it was always our fault (until proven otherwise) when something broke. This was a great team to work for: every morning (and later, when the phone rang at 3am) there was a new batch of problems to solve, the team helped each other out and we were always learning. I’d like to thank Mark Bluemel, who was my original team leader, for teaching me that the customer is not always right, and that sometimes the quickest way to solve a problem was to point this out to them (just as long as you had all the evidence to back it up). It helped hone my engineering instinct to dig into problems and find the underlying cause.

As I said earlier, the JVM used to underpin a large proportion of IBM’s software offerings, and this brought me into contact with a large number of product teams and their customers based all round the world. In later years, when I became one of the two go-to guys (with Chris Bailey) for management to send on-site at really short notice to solve problems, I got to meet a lot of these folks in person and not just at the end of an IM chat window or conference call. This period also taught me the ways of airline/hotel points schemes and how to “work” a corporate travel booking system (thanks Flavio), and took me to some places I probably wouldn’t normally have chosen to visit (2.5 weeks in Seoul, a winter of Mon-Fri in the German countryside), even if on some visits I saw little more than an air conditioned office and a cookie cutter hotel.

In the end the only reason I moved on from this group was that by the time a customer reached me they were usually not the happiest camper, and the best I could do was get them back to a content state where things were working again. While there was a great deal of satisfaction in this, it did start to grate a little towards the end.

Emerging Technologies & Services


ETS was always THE place to work in Hursley; they have all the best toys and it’s hard to argue with a team that had its own armoured car (unfortunately returned a few years ago)!

It is a small team that works on just about anything going, but specialises in whatever new and interesting thing we could convince a client to pay for. We would go poke round both IBM Research and anything else in the public domain looking for something interesting, and then go looking for a client that wanted to try something on the bleeding edge. Projects vary from just one member of the team working with a client, or offering support to one of the other IBM services teams, up to 3-4 people delivering something a little bigger. Projects included things for Wimbledon like a social media analysis system and network attached light level sensors, a set of pedestals to control the video walls in the IBM Southbank Forum, controlling TVs using telepathy and a 10 year research program around Network and Information Science for the US/UK defence sector. The team also runs hackdays and innovation and design thinking workshops with clients.

This is the team that invented Node-RED (much kudos to Nick and Dave) along with a bunch of other cool tech like GaianDB and Edgware Fabric.

The team has had a bit of a shuffle round recently and now sits even closer to the IBM Research folk; hopefully this will make it easier for them to grab the latest and greatest new and shiny stuff coming down the pipe.

Next

On the whole I enjoyed my time at IBM and I’ll miss all the great people I worked with, but it was just time to try something new.

As for what that will be, I’ll let you know more once I’ve actually started (beginning of November) and worked out just how much of it I’m allowed to talk about, but given some recent public announcements it sounds like it could all be VERY interesting. Watch this space.

Multipart HTTP Post requests with Volley on Android

It’s been a little while since I’ve done any really serious Android development, but a couple of projects have brought me back to it.

Early on in one of those projects I had to make some HTTP requests. My first thought was to make use of the Apache HTTP Client classes, as I had done many times before on Android, which is why I was a little surprised when the usual ctrl-space didn’t auto complete any of the expected class names.

It turns out the classes were removed in Android 6.0 and the notice suggests using the HttpURLConnection class. A little bit more digging turned up a wrapper for this called Volley.

Volley is a wrapper round the HttpURLConnection class that provides a neat asynchronous interface; it does the IO in the background and then delivers results to the main thread so UI updates can be done without further faffing around switching threads. There is also a nice set of tutorials on the Android Developers pages.

The first few requests all worked fine, but there was one which was a little bit more tricky. The HTTP endpoint in question accepts a multipart form payload. A bit of googling/searching on Stackoverflow turned up a number of approaches to this, and the best seemed to be documented in this gist.

This was close to what I wanted but not quite what I needed, so I have taken some of the core concepts and built my own MultipartRequest object.

...
MultipartRequest request = new MultipartRequest(url, headers, 
    new Response.Listener<NetworkResponse>() {
        @Override
        public void onResponse(NetworkResponse response) {
        ...
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
        ...
        }
    });
    
request.addPart(new FormPart(fieldName,value));
request.addPart(new FilePart(fileFieldName, mimeType, fileName, data));

requestQueue.add(request);
...

I’ve stuck the code up on github here. You can include it in your Android Project by adding the following to the build.gradle in the root of the project:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
  }
}

And then this to the dependencies section of the module’s build.gradle:

dependencies {
  compile 'com.github.hardillb:MultiPartVolley:0.0.3'
}

Tinkerforge Node-RED nodes

For a recent project I’ve been working on a collection of different Node-RED nodes. Part of this set is a group of nodes to interact with Tinkerforge Bricks and Bricklets.

Tinkerforge is a platform of stackable devices and sensors/actuators that can be connected to via USB or can be attached directly to the network via Ethernet or Wifi.

Tinkerforge Stack

Collections of sensors/actuators, known as bricklets, are grouped together round a Master Brick. Each Master Brick can host up to 4 sensors/actuators, but multiple Master Bricks can be stacked to add more capacity. The Master Brick also has a Micro USB socket which can be used to connect to a host machine running a daemon called brickd; the daemon handles discovering the attached devices and exposes access to them via an API. There are also Ethernet (with and without PoE) and WiFi bricks which allow you to link the stack directly to the network.

As well as the Master Brick there is a RED Brick which contains an ARM Cortex A8 processor, SD card, USB socket and micro HDMI adapter. This runs a small Linux distribution (based on Debian) and hosts a copy of brickd.

There are bindings for the API in a number of languages including:

  • C/C++
  • C#
  • Delphi
  • Java
  • Javascript
  • PHP
  • Perl
  • Python
  • Ruby

For the Node-RED nodes I took the Javascript bindings and wrapped them as Node-RED nodes. So far the following bricklets are supported:

Adding others shouldn’t be hard, but these were the ones I had access to in order to test.

All the nodes share a config node which is configured to point to the instance of brickd the sensors are linked to, and it then provides a filtered list of available bricklets for each node.

Screenshot of the Tinkerforge config node in the Node-RED editor

The code for the nodes can be found here and is available on npmjs.org here