Quick and Dirty Finger Daemon

I’ve been listening to more Brad & Will Made a Tech Pod and the current episode triggered a bunch of nostalgia about using finger to work out what my fellow CS students at university were up to. I won’t go into too much detail about what Finger is as the podcast covers it all.

This podcast has triggered projects like this in the past, such as when I decided to make this blog (and Brad & Will’s podcast) available via Gopher.

On the podcast they had Ben Brown as a guest, who had written his own Finger daemon and linked it up to a site called Happy Net Box where users can update their plan file. Anybody can then access it using the finger command, e.g. finger hardillb@happynetbox.com. The finger command ships by default on Windows, OSx and Linux so it can be run from nearly anywhere.

I really liked the idea of resurrecting finger, so as well as having a play with Happy Net Box I decided to see if I could run my own.

I started to look at what it would take to run a finger daemon on one of my Raspberry Pis, but while there are a couple of packaged daemons they don’t appear to run on current releases as they rely on init.d rather than systemd.

Next up I thought I’d have a look at the protocol, which is documented in RFC 1288. It is incredibly basic: you just listen on port 79 and read the username, terminated with a carriage return and line feed. This seemed simple enough to implement, so I thought I’d give it a try in Go (and I needed something to do while all tonight’s TV was taken up with 22 men chasing a ball round a field).

The code is on Github here.

package main

import (
  "io"
  "os"
  "fmt"
  "net"
  "path"
  "time"
  "strings"
)

const (
  CONN_HOST = "0.0.0.0"
  CONN_PORT = "79"
  CONN_TYPE = "tcp"
)

func main () {
  l, err := net.Listen(CONN_TYPE, CONN_HOST+":"+CONN_PORT)
  if err != nil {
    fmt.Println("Error opening port: ", err.Error())
    os.Exit(1)
  }

  defer l.Close()
  for {
    conn, err := l.Accept()
    if err != nil {
      fmt.Println("Error accepting connection: ", err.Error())
      continue
    }
    go handleRequest(conn)
  }
}

func handleRequest(conn net.Conn) {
  defer conn.Close()
  currentTime := time.Now()
  // read the request, which is a username terminated by CRLF
  buf := make([]byte, 1024)
  reqLen, err := conn.Read(buf)
  fmt.Println(currentTime.Format(time.RFC3339))
  if err != nil {
    fmt.Println("Error reading from: ", err.Error())
  } else {
    fmt.Println("Connection from: ", conn.RemoteAddr())
  }

  request := strings.TrimSpace(string(buf[:reqLen]))
  fmt.Println(request)

  parts := strings.Split(request, " ")
  wide := false
  user := parts[0]

  // a leading /W flag requests "wide" (verbose) output per RFC 1288
  if parts[0] == "/W" && len(parts) == 2 {
    wide = true
    user = parts[1]
  } else if parts[0] == "/W" && len(parts) == 1 {
    conn.Write([]byte("\r\n"))
    return
  }

  if strings.Index(user, "@") != -1 {
    fmt.Println("remote")
    conn.Write([]byte("Forwarding not supported\r\n"))
  } else {
    if wide {
      //TODO
    } else {
      pwd, err := os.Getwd()
      filePath := path.Join(pwd, "plans", path.Base(user + ".plan"))
      filePath = path.Clean(filePath)
      fmt.Println(filePath)
      file, err := os.Open(filePath)
      if err != nil {
        // no matching plan file
        // conn.Write([]byte("Not Found\r\n"))
      } else {
        defer file.Close()
        io.Copy(conn,file)
        conn.Write([]byte("\r\n"))
      }
    }
  }
}

Rather than deal with the nasty security problems that come with pulling .plan files out of people’s home directories, it uses a directory called plans and loads files that match the pattern <username>.plan
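
Adding a plan for a user is then just a case of dropping a file into that directory. A quick example (the username and plan text here are made up):

$ mkdir plans
$ echo "Currently resurrecting the finger protocol" > plans/ben.plan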

I’ve also built it into a Docker container, mounting a local directory to allow me to edit and add new plan files.
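
Something like the following should work, though the image name and the path the container expects to find the plans directory at are assumptions rather than anything fixed:

$ docker build -t finger-daemon .
$ docker run -d -p 79:79 -v $(pwd)/plans:/opt/finger/plans finger-daemon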

You can test it with finger ben@hardill.me.uk

Working with multiple AWS EKS instances

I’ve recently been working on a project that uses AWS EKS managed Kubernetes Service.

For various reasons too complicated to go into here we’ve ended up with multiple clusters owned by different AWS Accounts so flipping back and forth between them has been a little trickier than normal.

Here are my notes on how to manage the AWS credentials and the kubectl config to access each cluster.

AWS CLI

The first task is to authorise the AWS CLI to act as the user in question. We do this by creating a user with the right permissions in the IAM console and then exporting the Access key ID and Secret access key values, usually as a CSV file. We then take these values and add them to the ~/.aws/credentials file.

[dev]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy

[test]
aws_access_key_id = AKYYYYYYYYYYYYYYYYYY
aws_secret_access_key = abababababababababababababababababababab

[prod]
aws_access_key_id = AKZZZZZZZZZZZZZZZZZZ
aws_secret_access_key = nmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnm

We can pick which set of credentials the AWS CLI uses by adding the --profile option to the command line.

$ aws --profile dev sts get-caller-identity
{
    "UserId": "AIXXXXXXXXXXXXXXXXXXX",
    "Account": "111111111111",
    "Arn": "arn:aws:iam::111111111111:user/dev"
}

Instead of using the --profile option you can also set the AWS_PROFILE environment variable. Details of all the ways to switch profiles are in the docs here.

$ export AWS_PROFILE=test
$ aws sts get-caller-identity
{
    "UserId": "AIYYYYYYYYYYYYYYYYYYY",
    "Account": "222222222222",
    "Arn": "arn:aws:iam::222222222222:user/test"
}

Now that we can flip easily between different AWS accounts, we can export the EKS credentials with:

$ export AWS_PROFILE=prod
$ aws eks update-kubeconfig --name foo-bar --region us-east-1
Updated context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar in /home/user/.kube/config

The user that created the cluster should also follow these instructions to make sure the new account is added to the cluster’s internal ACL.
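
In practice this boils down to adding an entry for the new user to the aws-auth ConfigMap in the kube-system namespace; a rough sketch (the ARN, username and group here are just placeholders):

$ kubectl edit -n kube-system configmap/aws-auth
# then add an entry for the user under mapUsers, e.g.
#   mapUsers: |
#     - userarn: arn:aws:iam::111111111111:user/dev
#       username: dev
#       groups:
#         - system:masters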

Kubectl

If we run the previous command with each profile it will add the connection information for all 3 clusters to the ~/.kube/config file. We can list them with the following command

$ kubectl config get-contexts
CURRENT   NAME                                                  CLUSTER                                               AUTHINFO                                              NAMESPACE
*         arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   
          arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   
          arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar 

The star marks the currently active context; we can change the active context with this command:

$ kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar
Switched to context "arn:aws:eks:us-east-1:222222222222:cluster/foo-bar".

Putting it all together

To automate all this I’ve put together a collection of scripts that look like this:

export AWS_PROFILE=prod
aws eks update-kubeconfig --name foo-bar --region us-east-1
kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar

I then run this with the shell source command, source ./setup-prod (or its shortcut . ./setup-prod), rather than adding a shebang to the top and running it as a normal script. This is because environment variables set by a normal script are lost when the script exits, as it runs in a child shell. Sourcing keeps the AWS_PROFILE variable in scope, so the AWS CLI will continue to use the correct account settings when it’s used later while working on this cluster.
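
For example (a quick sanity check, assuming the setup-prod script above has been sourced):

$ . ./setup-prod
$ aws sts get-caller-identity   # still reports the prod account
$ kubectl get nodes             # talks to the prod cluster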

Joining FlowForge Inc.

FlowForge Logo

Today is my first day working for FlowForge Inc. I’ll be employee number 2, joining Nick O’Leary to work on all things based around Node-RED and continuing to contribute to the core open source project.

We should be building on some of the things I’ve been playing with recently.

Hopefully I’ll be able to share some of the things I’ll be working on soon, but in the meantime here is the short post that Nick wrote when he announced FlowForge a few weeks ago and a post welcoming me to the team.

To go with this announcement, Hardill Technologies Ltd will be going dormant. It’s been a good 3 months and I’ve built something interesting for my client which I hope to see go live soon.

Setting up WireGuard IPv6

I’ve been having a quick play with setting up another VPN solution for getting an IPv6 address on my mobile devices this time using WireGuard.

WireGuard is a relatively new VPN tunnel implementation that has been written to be as stripped back as possible, keeping the codebase small and easier to audit.

Setup

A lot of the instructions for running WireGuard on Raspberry Pi OS talk about adding the Debian testing repos or building the code from scratch, but it looks like recent updates have included the packages needed in the core repositories.

# apt-get install wireguard

I set up UDP port forwarding on my router for port 53145 and got my ISP to route another /64 IPv6 subnet to my line; both of these are forwarded on to the Raspberry Pi that is also running my OpenVPN setup. This is useful as it’s already set up to do NAT for the 10.8.0.0/24 range I’m issuing to OpenVPN clients, so having it do it for the 10.9.0.0/24 range for WireGuard is easy enough.
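
The NAT rule for the new range is just another MASQUERADE entry, something along these lines (assuming eth0 is the Pi’s outbound interface):

# iptables -t nat -A POSTROUTING -s 10.9.0.0/24 -o eth0 -j MASQUERADE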

WireGuard on Linux is implemented as a network device driver, so it can be configured on the command line with the ip command, e.g.

# ip link add dev wg0 type wireguard
# ip address add dev wg0 10.9.0.1/24

This creates the device and assigns its IP address, but you still need to add the private key, plus the remote endpoint address and its public key, which can be done with the wg command:

# wg set wg0 listen-port 53145 private-key /path/to/private-key peer ABCDEF... allowed-ips 0.0.0.0/0 endpoint 209.202.254.14:8172

Or, more easily, it can be read from a config file:

# wg setconf wg0 myconfig.conf

Or the whole thing can be set up and configured with wg-quick:

# wg-quick up /path/to/wg.conf

Server Config

[Interface]
Address = 10.9.0.1/24, 2001:8b0:2c1:xxx::1/64
ListenPort = 53145
PrivateKey = oP3TAHBctNVcnPTxxxxxxxxzNRLSF5CwII4s8gVAXg=

#nexus
[Peer]
PublicKey = 4XcNbctkGy0s73Dvxxxxxxxxx++rs5BAzCGjYmq21UM=
AllowedIPs = 10.9.0.2/32, 2001:8b0:2c1:xxx::2/128

The Server config includes:

  • Address is the local address on the VPN tunnel, here has both IPv4 and IPv6.
  • ListenPort is the port to listen on for client connections. WireGuard doesn’t have an officially assigned port.
  • PrivateKey to identify the host.
  • There can be multiple Peers, which represent the clients that can connect; AllowedIPs lists the IP addresses each client is allowed to use.

Client Config

[Interface]
Address = 10.9.0.2/32, 2001:8b0:2c1:4b50::2/128
PrivateKey = UFIJGgtKsor6xxxxxxxxxxxbWeKmw+Bb5ODpyNblEA=
DNS = 8.8.8.8

[Peer]
PublicKey = jMB2oMu+YTKigGxxxxxxxxxxSYcTde/7HT+QlQoZFm0=
AllowedIPs = 0.0.0.0/0, ::0/0
Endpoint = hardill.me.uk:53145

The differences from the Server config are:

  • Interface has a DNS entry for the client to use while the tunnel is running.
  • Peer has an Endpoint, which is the public address and port to connect to.
  • AllowedIPs are the IPs to route over the tunnel; in this case it’s everything.

Key Generation

Both ends of the connection need a PublicKey and a PrivateKey so they can mutually authenticate each other. These are generated with the wg command

# wg genkey > privateKey
# wg pubkey < privateKey > publicKey

Sharing Config

With the WireGuard Android app you can either enter all the details from the config file manually, or it can read a config file from a QR code. This makes it really easy to set up and removes the chance of a typo in the keys and IP addresses.

You can generate QR codes from the config file as follows:

# qrencode -t png -o nexus.png < nexus.conf
# qrencode -t ansiutf8 < nexus.conf

The first generates a PNG file containing the QR code; the second prints the code directly in the terminal.

Conclusion

It all looks to be working smoothly. The main advantage over OpenVPN that I can see is that you don’t need to worry about certificate maintenance and distribution.

I’ll give it a proper work out and see how it holds up running things like SIP connections along with general access to my home network.

As well as running it on the phone, I’ll set up a client config for my laptop to use when out and about. The only issue is that the Gnome Network Manager integration for WireGuard isn’t available in the standard repos for Ubuntu 20.04, so it needs to be started/stopped from the command line.

New Monitor (BenQ EW2780U)

I finally got round to buying myself a proper monitor to use with my laptop at home (I know I’m very late to this party given the current situation of extended working from home).

I’ll be using it with my Dell XPS13, which only has 2 ports (2 USB-C/Thunderbolt) and these double as the power input, so I was looking for a monitor that can both be driven via USB-C and supply power to the laptop via USB-PD.

Having had a bit of a search round and asking for suggestions on Twitter, I found the BenQ EW2780U which looked to cover all the bases. There was a reasonable-looking review from TechRadar. 27″ was a little outside my initial size range, but given how close it will be on my desk and the amount of space I have to play with, it’s the right call.

There was a very similar 32″ model (BenQ EW3270u) on Amazon that was even slightly cheaper, but while it had support for video over USB-C, it didn’t support USB-PD to charge/power the laptop.

Technical Specs

  • 27″ Screen
  • 3840 x 2160 pixels
  • 2 HDMI ports (v2.0)
  • 1 DisplayPort (v1.4)
  • 1 USB-C
  • USB-PD up to 60W (Note I don’t think this is enough to charge a MBP)
  • Built in speakers (these work over HDMI/USB-C)

Setup

I’ve tweaked a few of the out of the box settings.

  • Turned off auto input switching, mainly because it was flipping to the Chromecast whenever the laptop went to sleep or I unplugged it. It’s pretty easy to switch inputs with the buttons on the back.
  • Set it to sleep when the USB-C connection is unplugged.

While the monitor did come with a HDMI cable in the box, I did need to buy a new USB-C cable to use it with the Dell; none of the ones I currently had would carry the video signal. This is one of the only downsides of USB-C: all the cables fit all the devices, but it’s very hard to tell visually what spec each one supports.

I had a problem with Ubuntu 18.04 not liking driving such a big desktop; if I put anything on the new monitor it would occasionally randomly crash the Gnome session, which meant all the open apps also got killed. This led to me actually getting round to the upgrade to Ubuntu 20.04 that I had been putting off, which has fixed the problem, and everything is running smoothly now.

The only thing it’s really missing is a built-in USB hub; then I wouldn’t need to plug a dongle into the remaining USB-C port to get some USB-A ports.

Google Assistant Sensors

Having built my 2 different LoRa connected temperature/humidity sensors, I was looking for something other than the Grafana instance that shows the trends.

Being able to ask Google Assistant the temperature in a room seemed like a good idea and an excuse to add the relatively new Sensor device type to my Google Assistant Bridge for Node-RED.

I’m exposing 2 options for the Sensor to start with, Temperature and Humidity. I might look at adding Air Quality later.

Once the virtual device is set up, you can feed data into the Google Home Graph using a flow similar to the following.

The join node is set to combine the 2 incoming MQTT messages into a single object based on their topics. The function node then builds the right payload to pass to the Google Home output node, and finally the result is fed through an RBE node just to make sure we only send updates when the data changes.

msg.payload = {
  params: {
    temperatureAmbientCelsius: msg.payload["bedroom/temp"],
    humidityAmbientPercent: Math.round(msg.payload["bedroom/humidity"])
  }
}

Setting up an AWS EC2 Mac

I recently needed to debug some problems running a Kubernetes app on a Mac. The problem is I don’t have a Mac, or easy access to one that I can have full control over to poke and prod at things. (I’m also not the biggest fan of OSx, but that’s a separate story.)

Recently AWS started to offer Mac Mini EC2 instances. These differ a little from most normal EC2 instances as they are an actual dedicated bit of hardware that you have exclusive access to rather than a VM on hardware shared with others.

Because of the fact it’s a dedicated bit of hardware the process for setting one up is a little different.

Starting the Instance

First you will probably need to request a limit increase on your account, as the default limit for dedicated hosts looks to be 0. This limit is also per region, so you will need to ask for the update in every region you need. To request the update use the AWS Support Center: use the “Create Case” button and select “Service Limit Increase”. From the drop down select “EC2 Dedicated Hosts”, then the region, and request an update to the mac1 instance type with the number of concurrent instances you will need. It took a little time for my request to be processed, but I did submit it on Friday afternoon and it was approved on Sunday morning.

Once it has been approved you can create a new “Dedicated Hosts” instance in the EC2 console, with an “Instance Family” of mac1 and an “Instance Type” of mac1.metal. You can pick your availability zone (not all Regions and AZs have every instance type, so it might not be possible to allocate a mac in every zone). I also suggest you tick the “Instance auto-placement” box.
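
The same allocation can be done from the AWS CLI; a rough equivalent (the availability zone here is just an example):

$ aws ec2 allocate-hosts --instance-type mac1.metal \
    --availability-zone us-east-1a --auto-placement on --quantity 1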

Once that is complete you can actually start allocating an EC2 instance on this dedicated host. You get to pick which version of OSx you want to run. Assuming you only have one dedicated host and you ticked the auto-placement box, you shouldn’t need to pick the hardware you want to run the instance on.

The other main things to pick as you walk through the wizard are the amount of disk space (the default is 60GB), which security policy you want (be sure to pick one with ssh access) and which SSH key you’ll use to log in.

The instances do take a while to start, but given they’re doing a fresh OSx install on the hardware this is probably not a surprise. Once the console says it’s up and both the status checks are passing you’ll be able to ssh into the box.

Enabling a GUI

Once logged in you can do most things from the command line, but I needed to run Docker, and all the instructions I could find online said I needed to download Docker Desktop and install that via the GUI.

I found the following gist which helped.

  • First up, set a password for the ec2-user
    sudo passwd ec2-user
  • Second, enable VNC:
% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
-activate -configure -access -on \
-configure -allowAccessFor -specifiedUsers \
-configure -users ec2-user \
-configure -restart -agent -privs -all

% sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
 -configure -access -on -privs -all -users ec2-user

You can then add -L 5900:localhost:5900 to the ssh command that you use to log into the mac. This will port forward the VNC port to localhost.
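
Something like this (the key path and hostname are placeholders for your own):

$ ssh -i ~/.ssh/mac-key.pem -L 5900:localhost:5900 ec2-user@<instance-public-dns>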

VNCViewer or Remmina can then be used to start a session that gives full access to the Mac’s GUI.

Expand the disk

If you have allocated more than the default 60GB then you will need to expand the disk to make full use of it.

% PDISK=$(diskutil list physical external | head -n1 | cut -d" " -f1)
% APFSCONT=$(diskutil list physical external | grep "Apple_APFS" | tr -s " " | cut -d" " -f8)
% sudo diskutil repairDisk $PDISK
# Accept the prompt with "y", then paste this command
% sudo diskutil apfs resizeContainer $APFSCONT 0

Add tools

The instance comes with Homebrew already set up, so you can install nearly anything else you might need.
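
For example, Docker Desktop (the reason I needed the GUI in the first place) should be installable via its cask, assuming the cask name hasn’t changed:

% brew install --cask docker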

Shut it down when you are done

Mac EC2 instances really are not cheap ($25.99 per day…) so remember to kill it off when you are done.
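
That means both terminating the instance and releasing the dedicated host, otherwise you keep paying for the host; roughly (the IDs below are placeholders):

$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
$ aws ec2 release-hosts --host-ids h-0123456789abcdef0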

Google Assistant Camera Feeds

As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.

The next step is enabling access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then ask the Google Assistant: “OK Google, show me View Camera on the Livingroom TV”.

This will create an input message in Node-RED that looks like:

{
  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
        "progressive_mp4",
        "hls",
        "dash",
        "smooth_stream"
      ],
      "online": true
    }
  }
}

The important part is mainly the SupportedStreamProtocols, which shows the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

The function node updates the msg.payload.params with the required details. In this case

msg.payload.params = {
    cameraStreamAccessUrl: "http://192.168.1.96:8080/hls/stream.m3u8",
    cameraStreamProtocol: "hls"
}
return msg;

It needs to include the cameraStreamAccessUrl which points to the video stream and the cameraStreamProtocol which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure they are secured, to prevent them being found by an IoT search engine like Shodan and left open to the world.

Working with multiple EFS file systems in EKS

I’ve been building a system recently on AWS EKS and using EFS filesystems as volumes for persistent storage.

I initially only had one container that required any storage, but as I added a second I ran into the issue that there didn’t look to be a way to bind an EFS volume to a specific PersistentVolumeClaim, so there was no way to make sure the same volume was mounted into the same container each time.

A Pod requests a volume by referencing a PersistentVolumeClaim as follows:

apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    volumeMounts:
    - name: efs-volume
      mountPath: /data
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-claim

The PersistentVolumeClaim would look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

You can bind the EFS volume to a PersistentVolume as follows

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-persistent-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-6eb2fc16

The volumeHandle points to the EFS file system you want to back it with.

If there is only one PersistentVolume then there is not a problem, as the PersistentVolumeClaim will grab the only one available. But if there is more than one, you can include the volumeName in the PersistentVolumeClaim description to bind the two together.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-persistent-volume
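
To check the claim has bound to the volume you intended (assuming the resources above have been applied):

$ kubectl get pvc efs-claim               # the VOLUME column should show efs-persistent-volume
$ kubectl get pv efs-persistent-volume    # the CLAIM column should point at efs-claim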

After a bit of poking around I found this Stack Overflow question which pointed me in the right direction.

Viewing Node-RED Credentials

A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.

Background

The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json

The content of the file will look something a little like this:

{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}

This isn’t much use on its own, as the contents are encrypted to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.

The secret that is used to encrypt/decrypt this file can be found in one of 2 locations:

  • In the settings.js file in the credentialsSecret field. The user can set this if they want to use a fixed known value.
  • In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not specifically set one in the settings.js file.

Code

In order to make use of this, we need a short Node.js script to decrypt the file:

const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
  var flows = cipher["$"];
  // the first 32 hex characters are the initialisation vector
  var initVector = Buffer.from(flows.substring(0, 32),'hex');
  flows = flows.substring(32);
  var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
  var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
  return JSON.parse(decrypted);
}

// first argument is the credentials file, second is the secret
var creds = require("./" + process.argv[2])
var secret = process.argv[3]

// the encryption key is the sha256 hash of the secret
var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds))

If you put this in a file called show-creds.js in the Node-RED userDir you can run it as follows:

$ node show-creds flows_localhost_cred.json [secret]

where [secret] is the value stored in credentialsSecret or _credentialSecret from earlier. This will then print out the decrypted JSON object holding all the passwords/tokens from the file.