Google Assistant Camera Feeds

As mentioned in a previous post, I’ve been playing with streaming camera feeds to my Chromecast.

The next step is to enable access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.

You should now be able to add a device with the type Camera and a CameraStream trait. You can then say: “OK Google, show me View Camera on the Livingroom TV”.
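Under the covers this corresponds to a device entry in the Smart Home SYNC response along the following lines (a sketch based on the Google Smart Home API documentation; the id and name values here are illustrative):

{
  "id": "camera-1",
  "type": "action.devices.types.CAMERA",
  "traits": [ "action.devices.traits.CameraStream" ],
  "name": { "name": "View Camera" },
  "willReportState": false,
  "attributes": {
    "cameraStreamSupportedProtocols": [
      "progressive_mp4", "hls", "dash", "smooth_stream"
    ],
    "cameraStreamNeedAuthToken": false,
    "cameraStreamNeedDrmEncryption": false
  }
}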

This will create an input message in Node-RED that looks like:

{
  "topic": "",
  "name": "View Camera",
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
        "progressive_mp4",
        "hls",
        "dash",
        "smooth_stream"
      ],
      "online": true
    }
  }
}

The important part is the SupportedStreamProtocols entry, which lists the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.

Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.

A function node then updates msg.payload.params with the required details, in this case:

msg.payload.params = {
    cameraStreamAccessUrl: "http://192.168.1.96:8080/hls/stream.m3u8",
    cameraStreamProtocol: "hls"
}
return msg;

The reply needs to include the cameraStreamAccessUrl, which points to the video stream, and the cameraStreamProtocol, which identifies which of the requested protocols the stream uses.

This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras you will want to make sure they are secured, so they aren’t open to the world and indexed by an IoT search engine like Shodan.
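As a minimal sketch of one way to do that, assuming the HLS files are served by a small Express app (the file path and the STREAM_TOKEN environment variable are made up for this example), you could require a token before serving anything:

const express = require('express');
const app = express();

// Hypothetical shared secret; in practice set STREAM_TOKEN to a long random value
const STREAM_TOKEN = process.env.STREAM_TOKEN;

// Reject any request for the stream that doesn't carry the expected token
app.use('/hls', (req, res, next) => {
  if (!STREAM_TOKEN || req.query.token !== STREAM_TOKEN) {
    return res.sendStatus(403);
  }
  next();
});

// Serve the HLS playlist and segment files from disk
app.use('/hls', express.static('/var/streams/hls'));

app.listen(8080);

The cameraStreamAccessUrl would then carry the token as a query parameter. Note that the Chromecast also fetches the segment files listed inside the playlist, so those requests need to carry the token too (or be validated some other way, such as signed paths).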

Working with Multiple EFS File Systems in EKS

I’ve been building a system recently on AWS EKS and using EFS filesystems as volumes for persistent storage.

I initially only had one container that required any storage, but as I added a second I ran into a problem: there didn’t appear to be a way to bind an EFS volume to a specific PersistentVolumeClaim, so there was no way to make sure the same volume was mounted into the same container each time.

A Pod requests a volume by referencing a PersistentVolumeClaim as follows:

apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    volumeMounts:
    - name: efs-volume
      mountPath: /data
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-claim

The PersistentVolumeClaim would look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

You can bind the EFS volume to a PersistentVolume as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-persistent-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-6eb2fc16

The volumeHandle points to the EFS file system you want to back the volume with.
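If you need to look up the available file system IDs, the AWS CLI can list them (assuming your credentials and region are already configured):

$ aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text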

If there is only one PersistentVolume then there is no problem, as the PersistentVolumeClaim will grab the only one available. But if there is more than one, you can include the volumeName in the PersistentVolumeClaim description to bind the two together.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-persistent-volume
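With a second EFS file system the pattern just repeats: a second PersistentVolume with its own volumeHandle, and a second claim naming it via volumeName. A sketch, where fs-XXXXXXXX is a placeholder for the second file system’s ID:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-persistent-volume-2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-XXXXXXXX # the second file system's ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-persistent-volume-2

You can then confirm each claim has bound to the volume you expect with kubectl get pvc, which lists the bound volume against each claim.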

After a bit of poking around I found this Stack Overflow question, which pointed me in the right direction.

Viewing Node-RED Credentials

A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.

Background

The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json.

The content of the file will look a little like this:

{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}

This isn’t much use on its own, as the contents are encrypted to make it harder for people to just copy the file and have access to all the stored passwords and access tokens.

The secret that is used to encrypt/decrypt this file can be found in one of two locations:

  • In the settings.js file in the credentialSecret field. The user can set this if they want to use a fixed, known value.
  • In the .config.json file (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not explicitly set one in the settings.js file (see the example below).
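For example, on a recent release you can pull the generated secret out with jq (assuming it is installed):

$ jq -r '._credentialSecret' .config.runtime.json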

Code

In order to make use of this secret to decrypt the credentials file, we can use a short Node.js script:

const crypto = require('crypto');

var encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
  var flows = cipher["$"];
  // the first 32 hex characters are the initialisation vector
  var initVector = Buffer.from(flows.substring(0, 32),'hex');
  // the rest is the base64 encoded ciphertext
  flows = flows.substring(32);
  var decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
  var decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
  return JSON.parse(decrypted);
}

// first argument is the credentials file, second is the secret
var creds = require("./" + process.argv[2])
var secret = process.argv[3]

// Node-RED derives the AES key from a SHA-256 hash of the secret
var key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds))

If you put this in a file called show-creds.js in the Node-RED userDir, you can run it as follows:

$ node show-creds.js flows_localhost_cred.json [secret]

where [secret] is the value stored in credentialSecret or _credentialSecret from earlier. This will print out the decrypted JSON object holding all the passwords/tokens from the file.

Adding a TPM to My Offline Certificate Authority

Back at the start of last year, I built an offline Certificate Authority based around a Pi Zero and an RTC module.

The idea was to run the CA on the Pi, which can only be accessed when it’s plugged into another machine via a USB cable. This means the CA cert and private key are normally offline and only potentially accessible to an attacker while plugged in.

For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.

TPM

A TPM, or Trusted Platform Module, is a dedicated processor paired with some dedicated NVRAM. The processor is capable of performing some basic crypto functions and providing a good random number generator, while the NVRAM is used to store private keys.

TPM & RTC on a Raspberry Pi Zero

TPMs also have a feature called PCRs (Platform Configuration Registers), which can be used to validate the hardware and software stack used to boot the machine. This means you can detect if the system has been tampered with at any point, though it does require integration with the system’s bootloader.

You can set access policies for keys protected by the TPM to allow access only if the PCRs match a known pattern; some disk encryption systems, like LUKS on Linux and BitLocker on Windows[1], can use this to automatically unlock the encrypted drive.

You can get a TPM for the Raspberry Pi from a group called LetsTrust (available online here).

It mounts onto the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.

dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670

Since the Raspberry Pi bootloader is not TPM-aware, the PCRs are not initialised in this situation, so we can’t use the TPM to automatically unlock an encrypted volume.

Using the TPM with the CA

Even without the PCRs the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes the private key useless if anybody does manage to remotely log into the device and make a copy.

Of course, since it just pushes onto the Pi header, anybody who manages to get physical access can simply take the TPM and SD card, but as with most security mechanisms, once an attacker has physical access all bets are usually off.

There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, and it comes with a utility called tpm2tss-genkey that can be used to create new keys; an existing key can also be imported.
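Once the engine is installed you should be able to check that OpenSSL can load it; the engine id registered by the project is tpm2tss:

$ openssl engine -t tpm2tss

which should report the engine as available.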

Generating New Keys

You can generate a new CA key and self-signed certificate with the following commands:

$ tpm2tss-genkey -a rsa -s 2048 ca.tss
$ openssl req -new -x509 -engine tpm2tss -key ca.tss  -keyform engine -out ca.crt

This certificate can now be used to sign CSRs:

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Importing Keys

For an existing ca.key private key file:

$ tpm2_createprimary --hierarchy=o --hash-algorithm=sha256 --key-algorithm=rsa --key-context=primary_owner_key.ctx
$ HANDLE=$(tpm2_evictcontrol --hierarchy=o --object-context=primary_owner_key.ctx | cut -d ' ' -f 2 | head -n 1)
$ tpm2_import -C primary_owner_key.ctx -G rsa -i ca.key -u ca-pub.tpm -r ca.tpm
$ tpm2tss-genkey --public ca-pub.tpm --private ca.tpm --parent $HANDLE --password secret ca.tss
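You can confirm that the persistent parent handle was created with tpm2_getcap (the exact invocation varies between tpm2-tools versions):

$ tpm2_getcap handles-persistent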

And we can then sign new CSRs the same way as with the generated key

$ openssl ca -config openssl.cnf -engine tpm2tss -key ca.tss -keyform engine -in cert.csr -out cert.pem

Once the key has been imported, it’s important to remember to clean up the original key file (ca.key) so an attacker can’t just use that instead of the one protected by the TPM. An attacker now needs both the password for the key and the TPM device that was used to cloak it.
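On Linux, shred is one way to do that, overwriting the file before unlinking it (bearing in mind that on flash storage or journalling filesystems the overwrite is not guaranteed to reach the underlying blocks):

$ shred -u ca.key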

Web Interface

At the moment the node-openssl-cert node that I’m using to drive the web interface to the CA doesn’t appear to support passing in engine arguments, so I’m having to drive it all manually on the command line, but I’ll be looking at a way to add support to the library. I’ll try to generate a pull request when I get something working.


[1] Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS 13 also has one (it was an optional extra on the first version and not included in the Sputnik edition).

Hardill Technologies Ltd

Over the last few years I’ve had a number of people approach me to help them build things with Node-RED, but each time it hasn’t generally been possible to get as involved as I would have liked due to my day job.

Interest started to heat up a bit after I posted my series about building Multi-Tenant Node-RED systems, and some of the proposed projects sounded really interesting, so I have decided to start doing some contract work on a couple of them.

Node-RED asking for credentials

The best way for me to do this is to set up a company and for me to work for that company. Hence the creation of Hardill Technologies Ltd.

At the moment it’s just me, but we will have to see how things go. I think there is room for a lot of growth in people embedding the Node-RED engine into solutions as a way for users to customise event-driven systems.

As well as building Multi-Tenant Node-RED environments, I’ve also built a number of custom Node-RED nodes and Authentication/Storage plugins.

If you are interested in building a multi-user/multi-tenant Node-RED solution, embedding Node-RED into an existing application, having some custom nodes created, or just want to talk about Node-RED, you can check out my CV here, and please feel free to drop me a line at tech@hardill.me.uk.

Where possible (and in line with the wishes of clients) I hope to make the work Open Source and to blog about it here, so keep an eye out for what I’m working on.