As mentioned in a previous post I’ve been playing with Streaming Camera feeds to my Chromecast.
The next step is to enable access to these feeds via the Google Assistant. To do this I’m extending my Node-RED Google Assistant Service.
You should now be able to add a device with the type Camera and a CameraStream trait. You can then ask the Google Assistant to “OK Google, show me View Camera on the Livingroom TV”
This will create an input message in Node-RED that looks like:
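The original message isn’t reproduced here, but a representative sketch of the payload for a GetCameraStream command (field values are illustrative) looks like:

```json
{
  "payload": {
    "command": "action.devices.commands.GetCameraStream",
    "params": {
      "StreamToChromecast": true,
      "SupportedStreamProtocols": [
        "hls",
        "dash",
        "smooth_stream",
        "progressive_mp4"
      ]
    }
  }
}
```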
The important part is the SupportedStreamProtocols entry, which lists the types of video stream the display device supports. In this case, because the target is a Chromecast, it shows the full list.
Since we need to reply with a URL pointing to the stream, the Node-RED input node cannot be set to Auto Acknowledge and must be wired to a Response node.
The Function node updates msg.payload.params with the required details. In this case it needs to include the cameraStreamAccessUrl, which points to the video stream, and the cameraStreamProtocol, which identifies which of the requested protocols the stream uses.
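A minimal sketch of that Function node follows. In Node-RED, `msg` is provided by the runtime; a stand-in `msg` is declared here so the snippet is self-contained, and the stream URL is a placeholder for wherever your camera’s HLS feed is served from.

```javascript
// In Node-RED this body lives in a Function node wired between the
// Google Assistant input node and the Response node, where `msg` is
// provided by the runtime. Stand-in declared here for completeness.
let msg = { payload: { params: {} } };

// Placeholder URL - point this at your own camera's HLS stream
msg.payload.params.cameraStreamAccessUrl = "http://192.168.1.96:8080/hls/stream.m3u8";
msg.payload.params.cameraStreamProtocol = "hls";
// return msg;   <- in the real Function node
```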
This works well when the cameras and the Chromecast are on the same network, but if you want to access remote cameras then you will want to make sure they are secured, to prevent them being open to the world and indexed by an IoT search engine like Shodan.
I’ve been building a system recently on AWS EKS and using EFS filesystems as volumes for persistent storage.
I initially had only one container that required any storage, but as I added a second I ran into a problem: there didn’t look to be a way to bind an EFS volume to a specific PersistentVolumeClaim, so there was no way to make sure the same volume was mounted into the same container each time.
A Pod requests a volume by referencing a PersistentVolumeClaim as follows:
The volumeHandle points to the EFS volume you want to back it.
If there is only one PersistentVolume then there is no problem, as the PersistentVolumeClaim will grab the only one available. But if there is more than one, you can include the volumeName in the PersistentVolumeClaim description to bind the two together.
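As a sketch, assuming the EFS CSI driver, the pairing looks like this (the filesystem id and resource names are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # the EFS filesystem backing this PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
  volumeName: efs-pv-1   # binds this claim to the PV above
```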
A question popped up on the Node-RED Slack yesterday asking how to recover an entry from the credentials file.
Background
The credentials file can normally be found in the Node-RED userDir, which defaults to ~/.node-red on Unix-like platforms (and is logged near the start of the output when Node-RED starts). The file has the same name as the flow file with _cred appended before the .json, e.g. flows_localhost.json will have a corresponding flows_localhost_cred.json
The content of the file will look something a little like this:
{"$":"7959e3be21a9806c5778bd8ad216ac8bJHw="}
This isn’t much use on its own, as the contents are encrypted to make it harder for people to just copy the file and gain access to all the stored passwords and access tokens.
The secret that is used to encrypt/decrypt this file can be found in one of 2 locations:
In the settings.js file in the credentialsSecret field. The user can set this if they want to use a fixed known value.
In the .config.json (or .config.runtime.json in later releases) in the _credentialSecret field. This secret is the one automatically generated if the user has not specifically set one in the settings.js file.
Code
To make use of these details, the following short Node.js script can be used to decrypt the credentials file:
const crypto = require('crypto');

const encryptionAlgorithm = "aes-256-ctr";

function decryptCreds(key, cipher) {
    let flows = cipher["$"];
    // the first 32 hex characters are the initialisation vector
    const initVector = Buffer.from(flows.substring(0, 32), 'hex');
    flows = flows.substring(32);
    const decipher = crypto.createDecipheriv(encryptionAlgorithm, key, initVector);
    const decrypted = decipher.update(flows, 'base64', 'utf8') + decipher.final('utf8');
    return JSON.parse(decrypted);
}

// usage: node show-creds.js <credentials file> <secret>
const creds = require("./" + process.argv[2]);
const secret = process.argv[3];
// the encryption key is the SHA-256 hash of the secret
const key = crypto.createHash('sha256').update(secret).digest();

console.log(decryptCreds(key, creds));
If you place this in a file called show-creds.js in the Node-RED userDir you can run it as follows:

$ node show-creds.js flows_localhost_cred.json [secret]

where [secret] is the value stored in credentialsSecret or _credentialSecret from earlier. This will print out the decrypted JSON object holding all the passwords/tokens from the file.
The idea was to run the CA on the Pi, which can only be accessed when it’s plugged in via a USB cable to another machine. This means that the CA cert and private key are normally offline and only potentially accessible by an attacker when plugged in.
For what’s at stake if my toy CA gets compromised this is already overkill, but I was looking to see what else I could do to make it even more secure.
TPM
A TPM, or Trusted Platform Module, is a dedicated processor paired with some dedicated NVRAM. The processor is capable of performing some fairly basic crypto functions and provides a good random number generator, while the NVRAM is used to store private keys.
TPMs also have a feature called PCRs which can be used to validate the hardware and software stack used to boot the machine. This means you can use this to detect if the system has been tampered with at any point. This does require integration into the bootloader for the system.
You can set access policies for keys protected by the TPM to allow access only if the PCRs match a known pattern; some disk encryption systems, such as LUKS on Linux and BitLocker on Windows1, can use this to automatically unlock the encrypted drive.
You can get a TPM for the Raspberry Pi from a group called LetsTrust (that is available online here).
It mounts onto the SPI bus pins and is enabled by adding a Device Tree Overlay to /boot/config.txt, similar to the RTC.
dtoverlay=i2c-rtc,ds1307
dtoverlay=tpm-slb9670
Since the Raspberry Pi Bootloader is not TPM aware the PCRs are not initialised in this situation, so we can’t use it to automatically unlock an encrypted volume.
Using the TPM with the CA
Even without the PCRs the TPM can be used to protect the CA’s private key so it can only be used on the same machine as the TPM. This makes the private key useless if anybody does manage to remotely log into the device and make a copy.
Of course, since it just pushes onto the Pi’s header, if anybody manages to get physical access they can just take the TPM and SD card, but as with all security mechanisms, once an attacker has physical access all bets are usually off.
There is a plugin for OpenSSL that enables it to use keys stored in the TPM. Once compiled it can be added as an OpenSSL engine, along with a utility called tpm2tss-genkey that can be used to create new keys; an existing key can also be imported.
Generating New Keys
You can generate a new CA certificate with the following commands
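As a sketch (assuming the tpm2tss-genkey utility and the tpm2tss OpenSSL engine are installed, and with a placeholder subject), generating a TPM-cloaked key and self-signed CA certificate looks something like:

```shell
# create an RSA key cloaked by the TPM
$ tpm2tss-genkey -a rsa -s 2048 ca.tss.key

# use the engine-backed key to create a self-signed CA certificate
$ openssl req -new -x509 -engine tpm2tss -keyform engine \
    -key ca.tss.key -out ca.crt -days 3650 \
    -subj "/C=GB/O=Example/CN=Example CA"
```

The resulting ca.tss.key file is only usable on the machine with that physical TPM attached.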
Once the keys have been imported, it’s important to remember to clean up the original key file (ca.key) so an attacker can’t just use that instead of the one protected by the TPM. Any attacker now needs both the password for the key and the TPM device that was used to cloak it.
Web Interface
At the moment the node-openssl-cert node that I’m using to drive the web interface to the CA doesn’t look to support passing in engine arguments, so I’m having to drive it all manually on the command line, but I’ll be looking at a way to add support to the library. I’ll try and generate a pull request when I get something working.
1Because of its use with BitLocker, a TPM is now required for all machines that want to be Windows 10 certified. This means my second Dell XPS 13 also has one (it was an optional extra on the first version and not included in the Sputnik edition)
I finally got round to setting up a new version of a Virtual Machine I had on my old laptop. Its purpose is basically to host an email client that accesses a bunch of email addresses I have set up on my domain.
It was all going smoothly until I actually got round to adding the account details to Thunderbird.
It sat and span like this for a while, then popped up the manual configuration view.
Which is fine, as I know the difference between POP3 and IMAP, but it’s the sort of thing that really confuses most users (I’ve lost count of the number of times I’ve had to talk people through this over the phone).
The problem is I thought I’d already fixed this particular problem. Back when I last set up a bunch of email addresses I remember creating DNS SRV records to point to both the inbound mail server and the IMAP server.
SRV Records
SRV records allow you to say which servers to use for a particular protocol on a given domain. The entries are made up of the protocol, followed by the transport type, and then the domain, e.g.
_submission._tcp.example.com
The mail client looks up the SRV record for this hostname to find the mail submission server for the example.com domain, and would get a response that looks like this:
_submission._tcp.example.com. 3600 IN SRV 0 1 587 mail.example.com.
where:
3600 is the Time To Live (how long, in seconds, to cache this result)
IN is the record class (Internet)
SRV is the record type
0 is the Priority (if there are multiple records, clients try the lowest value first)
1 is the Weight (used to choose between records with the same Priority; higher values are picked proportionally more often)
587 is the port number of the service
mail.example.com is the host where the service can be found.
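You can check what a client would see with dig; against a domain publishing the record above, the short-form answer would be something like:

```shell
$ dig +short SRV _submission._tcp.example.com
0 1 587 mail.example.com.
```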
I have SRV records for the following protocols enabled:
Mail Submission _submission._tcp
IMAPS _imaps._tcp
SIP _sip._tcp & _sip._udp
SIPS _sips._tcp
Using SRV records for email discovery is covered by RFC6186. SRV records are also used in the VoIP space to point to SIP servers.
So the question is why this doesn’t work. The problem looks to be that Thunderbird hasn’t implemented support for RFC6186 just yet. A bit of digging found this document, which covers the current design for Thunderbird and which bits are still to be implemented. It looks like the only option that currently works is the XML configuration file.
config-v1.1.xml file
The document lists a few locations that a file can be placed relative to the domain that holds details of how to configure the email account. This includes http://example.com/.well-known/autoconfig/mail/config-v1.1.xml where example.com is the domain part of the email address.
The schema for config-v1.1.xml can be found here. A basic minimal entry would look something like this:
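A reconstructed sketch of such an entry follows — the server hostnames are placeholders for your own:

```xml
<clientConfig version="1.1">
  <emailProvider id="example.com">
    <domain>example.com</domain>
    <displayName>Example Mail</displayName>
    <incomingServer type="imap">
      <hostname>imap.example.com</hostname>
      <port>993</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </incomingServer>
    <outgoingServer type="smtp">
      <hostname>mail.example.com</hostname>
      <port>587</port>
      <socketType>STARTTLS</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </outgoingServer>
  </emailProvider>
</clientConfig>
```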
Apart from the obvious parts that say which servers to connect to the other useful bit is found in the <username> tags. Here I’m using %EMAILADDRESS% which says to use the whole email address as the username. You can also use %EMAILLOCALPART% which is everything before the @ sign and %EMAILDOMAIN% which is everything after the @ sign.
The documentation includes options for setting up remote address books and calendar information, though it doesn’t look like Thunderbird supports all of these options just yet.
With this file now in place on my HTTP server Thunderbird now sets everything up properly.
As I hinted at in the end of my last post, I’ve been looking for a way to take the hostnames setup for Kubernetes Ingress endpoints and turn them into mDNS CNAME entries.
When I’m building things I like to spin up a local copy where possible (e.g. microk8s on a Pi 4 for the Node-RED on Kubernetes and the Docker Compose environment on another Pi 4 for the previous version). These setups run on my local network at home and while I have my own DNS server set up and running I also make extensive use of mDNS to be able to access the different services.
I’ve previously built little utilities to generate mDNS CNAME entries for both Nginx and Traefik reverse proxies using Environment Variables or Labels in a Docker environment, so I was keen to see if I can build the same for Kubernetes’ Ingress proxy.
Watching for new Ingress endpoints
The kubernetes-client node module supports watching certain endpoints, so it can be used to get notifications when an Ingress endpoint is created or destroyed.
const stream = client.apis.extensions.v1beta1.namespaces("default").ingresses.getStream({qs:{ watch: true}})
const jsonStream = new JSONStream()
stream.pipe(jsonStream)

jsonStream.on('data', async obj => {
  if (obj.type == "ADDED") {
    for (const x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  } else if (obj.type == "DELETED") {
    for (const x in obj.object.spec.rules) {
      let hostname = obj.object.spec.rules[x].host
      ...
    }
  }
})
Creating the CNAME
For the previous versions I used a python library called mdns-publish to set up the CNAME entries. It works by sending DBUS messages to the Avahi daemon which actually answers the mDNS requests on the network. For this version I decided to try and send those DBUS messages directly from the app watching for changes in K8s.
The dbus-next node module allows working directly with the DBUS interfaces that Avahi exposes.
const dbus = require('dbus-next');
const bus = dbus.systemBus()

bus.getProxyObject('org.freedesktop.Avahi', '/')
.then( async obj => {
  const server = obj.getInterface('org.freedesktop.Avahi.Server')
  const entryGroupPath = await server.EntryGroupNew()
  const entryGroup = await bus.getProxyObject('org.freedesktop.Avahi', entryGroupPath)
  const entryGroupInt = entryGroup.getInterface('org.freedesktop.Avahi.EntryGroup')

  const iface = -1    // IF_UNSPEC - publish on all interfaces
  const protocol = -1 // PROTO_UNSPEC - both IPv4 and IPv6
  const flags = 0
  const name = hostname          // the CNAME alias taken from the Ingress rule
  const clazz = 0x01             // DNS class IN
  const type = 0x05              // DNS record type CNAME
  const ttl = 60
  const rdata = encodeFQDN(host) // target: this machine's own .local hostname

  entryGroupInt.AddRecord(iface, protocol, flags, name, clazz, type, ttl, rdata)
  entryGroupInt.Commit()
})
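The encodeFQDN() helper used in the AddRecord call isn’t part of dbus-next; Avahi expects the CNAME RDATA in DNS wire format (length-prefixed labels, zero terminated), so a sketch of it looks like this:

```javascript
// Encode a hostname into DNS wire format: each label is prefixed with
// its length as a single byte, and the name ends with a zero byte.
function encodeFQDN(hostname) {
  const labels = hostname.split('.')
  // one length byte per label plus the label bytes, plus the trailing 0
  const len = labels.reduce((acc, label) => acc + label.length + 1, 1)
  const buf = Buffer.alloc(len)
  let offset = 0
  for (const label of labels) {
    buf.writeUInt8(label.length, offset++)
    buf.write(label, offset)
    offset += label.length
  }
  buf.writeUInt8(0, offset) // root label terminator
  return buf
}
```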
Finally, adding a signal handler to clean up when the app gets killed means we are pretty much good to go.
process.on('SIGTERM', cleanup)
process.on('SIGINT', cleanup)
function cleanup() {
  // Reset and Free each Avahi entry group so the CNAMEs are withdrawn
  for (const k of Object.keys(cnames)) {
    cnames[k].Reset()
    cnames[k].Free()
  }
  bus.disconnect()
  process.exit(0)
}
Having built a working example of Multi Tenant Node-RED using Docker I thought I’d have a look at how to do the same with Kubernetes as a Christmas project.
I started by installing the 64bit build of Ubuntu Server on a fresh Pi 4 with 8GB RAM and then using snapd to install microk8s. I had initially wanted to use the 64bit version of Raspberry Pi OS, but despite microk8s claiming to work on any OS that supports snapd, I found that containerd just kept crashing on Raspberry Pi OS.
Once installed, I enabled the dns and ingress plugins; this got me a minimal viable single-node Kubernetes setup working.
I also had to stand up a private docker registry to hold the containers I’ll be using. That was just a case of running docker run -d -p 5000:5000 --name registry registry on a local machine, e.g. private.example.com. This also meant adding the URL for this registry to microk8s as described here.
Since Kubernetes is another container environment I can reuse most of the parts I previously created. The only bit that really needs to change is the Manager application as this has to interact with the environment to stand up and tear down containers.
Architecture
As before, the central components are a MongoDB database and a management web app that stands up and tears down instances. The MongoDB instance holds all the flows and authentication details for each instance. I’ve deployed the database and web app as a single pod and exposed them both as services.
This Deployment descriptor basically does all the heavy lifting. It sets up the management app, MongoDB and the private NPM registry.
It also binds 2 sets of secrets: the first holds the authentication details to interact with the Kubernetes API (the ~/.kube/config file) and the settings.js for the management app; the second holds the config for the Verdaccio NPM registry.
I’m using the HostPath volume provider to store the MongoDB data and the Verdaccio registry on the filesystem of the Pi, but for a production deployment I’d probably use the NFS provider or a Cloud Storage option like AWS S3.
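The full descriptor isn’t reproduced here, but a cut-down sketch of its shape — with placeholder names, images and paths, and the secret mounts omitted — might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      containers:
        - name: manager
          image: private.example.com:5000/manager
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
        - name: registry
          image: verdaccio/verdaccio
          volumeMounts:
            - name: verdaccio-data
              mountPath: /verdaccio/storage
      volumes:
        - name: mongo-data
          hostPath:
            path: /opt/mongo-data
        - name: verdaccio-data
          hostPath:
            path: /opt/verdaccio-data
```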
The kubernetes-client node module exposes the full Kubernetes API, allowing the creation/modification/destruction of all entities.
Standing up a new instance is a little more complicated as it’s now a multi step process.
Create a Pod with the custom-node-red container
Create a Service based on that pod
Expose that service via the Ingress addon
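The three steps above can be sketched as manifest-building helpers — the names, image and port are illustrative placeholders, and the actual API calls via kubernetes-client are shown only as a comment:

```javascript
// Build the three manifests needed for a new Node-RED instance.
function buildPod(name) {
  return {
    apiVersion: 'v1', kind: 'Pod',
    metadata: { name: name, labels: { app: name } },
    spec: { containers: [{ name: name, image: 'private.example.com:5000/custom-node-red' }] }
  }
}

function buildService(name) {
  return {
    apiVersion: 'v1', kind: 'Service',
    metadata: { name: name },
    // the selector matches the label put on the Pod above
    spec: { selector: { app: name }, ports: [{ port: 1880 }] }
  }
}

function buildIngress(name, domain) {
  return {
    apiVersion: 'extensions/v1beta1', kind: 'Ingress',
    metadata: { name: name },
    spec: { rules: [{ host: `${name}.${domain}`,
      http: { paths: [{ backend: { serviceName: name, servicePort: 1880 } }] } }] }
  }
}

// e.g. await client.api.v1.namespaces('default').pods.post({ body: buildPod('r1') })
```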
I also removed the Start/Stop buttons since stopping pods is not really a thing in Kubernetes.
All the code for this version of the app is on github here.
Catalogue
In the Docker-Compose version the custom node `catalogue.json` file was hosted by the management application and had to be manually updated each time a new or updated node was pushed to the repository. For this version I’ve stood up a separate container.
This container runs a small web app that has 2 endpoints.
/catalogue.json – which returns the current version of the catalogue
/update – which is triggered by the notify function of the Verdaccio private npm registry
The registry has this snippet added to the end of the config.yml
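The snippet itself isn’t reproduced here; a sketch of a Verdaccio notify block (the endpoint URL is a placeholder for wherever the catalogue container’s /update endpoint is exposed) looks like:

```yaml
notify:
  method: POST
  headers: [{'Content-Type': 'application/json'}]
  endpoint: http://catalogue:3000/update
  content: '{"name": "{{ name }}"}'
```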
Then run the setup.sh script, passing in the base domain for instances and the host:port combination for the local container registry.
$ ./setup.sh example.com private.example.com:5000
This will update some of the container locations in the deployment and build the secrets needed to access the Kubernetes API (reads the content of ~/.kube/config)
With all the configuration files updated the containers need building and pushing to the local container registry.
Finally trigger the actual deployment with kubectl
$ kubectl apply -f ./deployment
Once up and running the management app should be available on http://manager.example.com, the private npm registry on http://registry.example.com and an instance called “r1” would be on http://r1.example.com.
A wildcard DNS entry needs to be set up to point all *.example.com hosts to the Kubernetes cluster’s Ingress IP addresses.
As usual the whole solution can be found on github here.
What’s Next
I need to work out how to set up Avahi CNAME entries for each deployment, as I had working with both nginx and traefik, so I can run it all nicely on my LAN without having to mess with /etc/hosts or the local DNS. This should be possible by using a watch call on the Kubernetes Ingress endpoint.
I also need to back port the new catalogue handling to the docker-compose version.
And finally I want to have a look at generating a Helm chart for all this to help get rid of needing the setup.sh script to modify the deployment YAML files.
p.s. If anybody is looking for somebody to do this sort of thing for them drop me a line.
After building a tool to populate a Gopher server as an excuse to learn the Go programming language, I’ve recently been wanting to try my hand at Rust.
The best way to learn a new programming language is to use it to actually solve a problem, rather than just copying exercises out of a tutorial. So this time I thought I’d try and build my own Gopher server.
Specification
The initial version of the Gopher specification is laid down in RFC1436. It’s pretty simple: the client sends a string representing the path to the document it wants, followed by \r\n. For the root document the client sends just the line-ending characters.
The server responds with either the raw content of the document or, if the path points to a directory, the content of a file called gophermap found in that directory.
The gophermap file holds a list of links, a bit like an index.html for an HTTP server.
Ben's Place - Gopher
Just a place to make notes about things I've
been playing with
0CV cv.txt
1Blog /blog
1Brad & Will Made a Tech Pod /podcast
The lines that start with 0 are direct links to a file and have the label and then the file name, whereas lines starting with 1 are links to another directory. The fields are delimited with tabs.
You can also link to files/directories on other servers by including the server and port after the filename/dir path again separated by tabs.
1Blog /blog gopher.hardill.me.uk 70
There is also something called Gopher+, an extended version that looks to never have been formally adopted as a standard, but which both my gopher client and PyGopherd look to support. A copy of the draft is here.
Rust
Similar to things like NodeJS, Rust has a package manager called cargo that can be used to create a new project; running cargo new rust_gopher will create a Cargo.toml file that looks a bit like this:
[package]
name = "rust_gopher"
version = "0.1.0"
authors = ["Ben Hardill <hardillb@gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
It also creates a src directory containing a main.rs and calls git init to start a new repository.
The src/main.rs is pre-populated with “Hello World”
fn main() {
println!("Hello, world!");
}
I’ve replaced the main() function with one that uses the clap crate to parse some command line arguments.
let matches = App::new("rust_gopher")
.version("0.1.0")
.author("Ben Hardill")
.about("Basic Gopher Server")
.arg(Arg::with_name("hostname")
.short("h")
.long("hostname")
.takes_value(true)
.help("The hostname of this server"))
.arg(Arg::with_name("port")
.short("p")
.long("port")
.takes_value(true)
.help("Port number to listen on"))
.arg(Arg::with_name("dir")
.short("d")
.long("dir")
.takes_value(true)
.help("path to gopher content"))
.get_matches();
let hostname = matches.value_of("hostname").unwrap_or("localhost");
let port :u16 = matches.value_of("port").unwrap_or("70").parse().unwrap();
let dir = matches.value_of("dir").unwrap_or("root");
Which gives really nicely formatted help output:
$ rust_gopher --help
rust_gopher 0.1.0
Ben Hardill
Basic Gopher Server
USAGE:
rust_gopher [OPTIONS]
FLAGS:
--help Prints help information
-V, --version Prints version information
OPTIONS:
-d, --dir <dir> path to gopher content
-h, --hostname <hostname> The hostname of this server
-p, --port <port> Port number to listen on
Reading the gophermap
To turn the entries in a basic gophermap file into what actually gets sent to the client, the file gets parsed into the following structure.
struct Gophermap {
row_type: char,
label: String,
path: String,
server: String,
port: u16
}
fn read_gophermap(path: &Path, config: &Config) -> Vec<Gophermap>{
let mut entries: Vec<Gophermap> = Vec::new();
let file = File::open(path).unwrap();
let reader = BufReader::new(file);
for line in reader.lines() {
let mut l = line.unwrap();
...
let entry = Gophermap {
row_type: t,
label: label,
path: p.to_string(),
server: s.to_string(),
port: port
};
entries.push(entry);
}
entries // no trailing semicolon - this is the function's return value
}
What’s next?
It still needs a bunch of work, mainly around adding a bunch of error handling. I’ll probably keep poking at it over the holidays.
An interesting question came up on Stack Overflow recently asking how to do hostname-based proxying for MQTT, for which I suggested a hypothetical answer.
In this post I’ll explore how to actually implement that hypothetical solution.
History
HTTP added the ability to do hostname-based proxying when it introduced the Host header in HTTP v1.1. This meant that a single IP address could be used for many sites, and the server would decide which content to serve based on this header. Front-end reverse proxies (e.g. nginx) can use the same header to decide which backend server to forward the traffic to.
This works well until we need to encrypt the traffic to the HTTP server using SSL/TLS, where the headers are encrypted. The solution is the SNI field in the TLS handshake, which tells the server which hostname the client is trying to connect to. The front-end proxy can then either use this information to find the right local copy of the certificate/key for that site, if it’s terminating the encryption at the front end, or forward the whole connection directly to the correct backend server.
MQTT
Since the SNI field is in the initial TLS handshake and has nothing to do with the underlying protocol, it can be used for any protocol, in this case MQTT. This means we can set up a frontend proxy that uses SNI to pick the correct backend broker to connect to.
Here is a nginx configuration file that proxies for 2 different MQTT brokers based on the hostname the client uses to connect. It is doing the TLS termination at the proxy before forwarding the clear version to the backend.
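The configuration isn’t reproduced here, but a sketch of the approach using the stream module’s map directive on $ssl_server_name — with placeholder certificate paths and backend addresses — looks like:

```nginx
# SNI-based routing for MQTT, terminating TLS at the proxy.
stream {
    # pick the backend broker based on the SNI hostname
    map $ssl_server_name $targetBackend {
        test1.example.com 192.168.1.1:1883;
        test2.example.com 192.168.1.2:1883;
    }

    server {
        listen 8883 ssl;
        # a wildcard or multi-SAN cert covering both hostnames
        ssl_certificate     /etc/nginx/certs/example.com.crt;
        ssl_certificate_key /etc/nginx/certs/example.com.key;

        # forward the decrypted MQTT traffic to the chosen broker
        proxy_pass $targetBackend;
    }
}
```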
Assuming the DNS entries for test1.example.com and test2.example.com both point to the host running nginx, then we can test this with the mosquitto_sub command as follows:
$ mosquitto_sub -v -h test1.example.com -t test --cafile ./ca-certs.pem
This will be proxied to the broker running on 192.168.1.1, where as
$ mosquitto_sub -v -h test2.example.com -t test --cafile ./ca-certs.pem
will be proxied to the broker on 192.168.1.2.
Caveats
The main drawback with this approach is that it requires all the clients to connect using TLS, but this is not a huge problem as nearly all devices are capable of this now, and for any internet-facing service it should be the default anyway.
Acknowledgment
How to do this was mainly informed by the following Gist