I’m working on a project that dynamically creates Pods/Services/Ingress objects using the Kubernetes API.
This all works pretty well until we have to support a bunch of different Ingress Controllers. Currently the code supports two options:
- Using the nginx ingress controller, with it set as the default IngressClass
- Running on AWS EKS with an ALB Ingress Controller
It does this with an if block and a settings flag that says it's running on AWS, in which case it injects a bunch of annotations into the Ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: flowforge
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  name: graceful-whiskered-tern-1370
  namespace: flowforge
spec:
  ...
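In code, the if block amounts to something like this (a sketch; the function name and settings shape are illustrative, not the real implementation):

```javascript
// Illustrative sketch of the current approach: a settings flag selects
// which controller-specific annotations get injected into the Ingress.
function buildIngressAnnotations(settings) {
  const annotations = {};
  if (settings.platform === 'aws') {
    annotations['kubernetes.io/ingress.class'] = 'alb';
    annotations['alb.ingress.kubernetes.io/group.name'] = 'flowforge';
    annotations['alb.ingress.kubernetes.io/listen-ports'] = '[{"HTTPS":443}, {"HTTP":80}]';
    annotations['alb.ingress.kubernetes.io/scheme'] = 'internet-facing';
    annotations['alb.ingress.kubernetes.io/target-type'] = 'ip';
  }
  return annotations;
}

console.log(buildIngressAnnotations({ platform: 'aws' }));
```

Every new controller means another branch in this function, which is exactly the scaling problem described below.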
While this works, it doesn't scale well as we add support for more types of Ingress Controller that require different annotations to configure them, e.g. using cert-manager to request LetsEncrypt certificates for HTTPS.
Luckily Kubernetes provides a mechanism for modifying objects as they are being created, via something called a MutatingAdmissionWebhook. This is an HTTPS endpoint hosted inside the cluster that is passed the object at specific lifecycle events and is allowed to modify it before it is instantiated by the control plane.
There are a few projects that implement this pattern and allow you to declare rules to be applied to objects, such as KubeMod and Patch Operator from RedHat. I may end up using one of these for the production solution, but this didn't sound too complex, so I thought I would first have a go at implementing a Webhook myself, just to help understand how they work.
Here are the Kubernetes docs on creating Webhooks.
So the first task was to write a simple web app to host the Webhook. I decided to use express.js to get started, as that is what I'm most familiar with.
By default the Webhook is a POST to the /mutate route; the body is a JSON AdmissionReview object, which carries the object being created in its request.object field.
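For illustration, a heavily trimmed AdmissionReview request for one of our Ingress objects might look something like this (the uid is made up and many fields are omitted):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "b7d2c1f0-0000-0000-0000-000000000000",
    "operation": "CREATE",
    "object": {
      "apiVersion": "networking.k8s.io/v1",
      "kind": "Ingress",
      "metadata": {
        "name": "graceful-whiskered-tern-1370",
        "namespace": "flowforge"
      }
    }
  }
}
```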
Modifications to the original object need to be sent back as a base64-encoded JSONPatch in the response:
{
  apiVersion: admissionReview.apiVersion,
  kind: admissionReview.kind,
  response: {
    uid: admissionReview.request.uid,
    allowed: true,
    patchType: 'JSONPatch',
    patch: Buffer.from(patchString).toString('base64')
  }
}
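Wrapped up as a helper (the function name here is mine, not from the real code), the response construction looks like this:

```javascript
// Wrap a JSONPatch into an AdmissionReview response. The patch array is
// JSON-stringified and then base64 encoded, which is what the API expects.
function buildAdmissionResponse(admissionReview, patch) {
  const patchString = JSON.stringify(patch);
  return {
    apiVersion: admissionReview.apiVersion,
    kind: admissionReview.kind,
    response: {
      uid: admissionReview.request.uid,
      allowed: true,
      patchType: 'JSONPatch',
      patch: Buffer.from(patchString).toString('base64')
    }
  };
}

// Quick check: the patch round-trips through base64
const review = {
  apiVersion: 'admission.k8s.io/v1',
  kind: 'AdmissionReview',
  request: { uid: 'abc-123' }
};
const resp = buildAdmissionResponse(review, [
  { op: 'add', path: '/metadata/annotations', value: {} }
]);
console.log(Buffer.from(resp.response.patch, 'base64').toString());
// prints [{"op":"add","path":"/metadata/annotations","value":{}}]
```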
The JSONPatch to add the AWS ALB annotations mentioned earlier looks like this (note that the / characters in the annotation keys are escaped as ~1, as required by the JSON Pointer syntax used in the path field):
[
  {
    "op": "add",
    "path": "/metadata/annotations/alb.ingress.kubernetes.io~1scheme",
    "value": "internet-facing"
  },
  {
    "op": "add",
    "path": "/metadata/annotations/alb.ingress.kubernetes.io~1target-type",
    "value": "ip"
  },
  {
    "op": "add",
    "path": "/metadata/annotations/alb.ingress.kubernetes.io~1group.name",
    "value": "flowforge"
  },
  {
    "op": "add",
    "path": "/metadata/annotations/alb.ingress.kubernetes.io~1listen-ports",
    "value": "[{\"HTTPS\":443}, {\"HTTP\":80}]"
  }
]
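To make the ~1 escaping concrete, here is a toy applier for add operations (illustration only; real code should use a proper RFC 6902 library), run against a stripped-down Ingress:

```javascript
// Toy JSON Patch "add" applier, for illustration only. It handles the
// JSON Pointer escapes (~1 for "/", ~0 for "~") used in the paths above.
function applyAddOps(obj, ops) {
  for (const { op, path, value } of ops) {
    if (op !== 'add') throw new Error('only "add" is handled in this sketch');
    // Split the pointer into segments and undo the escaping
    const parts = path.split('/').slice(1)
      .map(p => p.replace(/~1/g, '/').replace(/~0/g, '~'));
    let target = obj;
    for (const part of parts.slice(0, -1)) {
      target = target[part];
    }
    target[parts[parts.length - 1]] = value;
  }
  return obj;
}

const ingress = { metadata: { name: 'graceful-whiskered-tern-1370', annotations: {} } };
applyAddOps(ingress, [
  { op: 'add', path: '/metadata/annotations/alb.ingress.kubernetes.io~1scheme', value: 'internet-facing' },
  { op: 'add', path: '/metadata/annotations/alb.ingress.kubernetes.io~1target-type', value: 'ip' }
]);
console.log(ingress.metadata.annotations);
```

The escaped segment `alb.ingress.kubernetes.io~1scheme` comes back out as the annotation key `alb.ingress.kubernetes.io/scheme`.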
A basic hook that adds the AWS ALB annotations fits in 50 lines of code (and some of that is for HTTPS, which we will get to in a moment).
Webhooks need to be called via HTTPS, which means we need to create a server certificate for the HTTP server. Normally we could use something like LetsEncrypt to generate a certificate, but that will only issue certificates for host names that are publicly resolvable, and since we will be accessing this as a Kubernetes Service its hostname will be something like service-name.namespace. Luckily we can create our own Certificate Authority and issue certificates that match any name we need, because as part of the configuration we can upload our own CA root certificate.
The following script creates a new CA, then uses it to sign a certificate for a service called ingress-mutator, adding all the relevant SAN entries that are needed.
#!/bin/bash
cd ca
rm newcerts/* ca.key ca.crt index index.* key.pem req.pem
touch index
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -subj "/C=GB/ST=Gloucestershire/O=Hardill Technologies Ltd./OU=K8s CA/CN=CA"
openssl req -new -subj "/C=GB/CN=ingress-mutator" \
  -addext "subjectAltName = DNS.1:ingress-mutator, DNS.2:ingress-mutator.default, DNS.3:ingress-mutator.default.svc, DNS.4:ingress-mutator.default.svc.cluster.local" \
  -addext "basicConstraints = CA:FALSE" \
  -addext "keyUsage = nonRepudiation, digitalSignature, keyEncipherment" \
  -addext "extendedKeyUsage = serverAuth" \
  -newkey rsa:4096 -keyout key.pem -out req.pem \
  -nodes
openssl ca -config ./sign.conf -in req.pem -out ingress.pem -batch
If I was building more than one Webhook I could break out the last 2 lines to generate and sign multiple different certificates.
Now that we have the code and the key/certificate pair, we can bundle them up in a Docker container, push it to a suitable container registry, and then create the deployment YAML files needed to make all this work.
The Pod and Service definitions are pretty basic, but we also need a MutatingWebhookConfiguration. As well as identifying which Service hosts the Webhook, it includes the filter that decides which new objects should be passed to the Webhook to be modified.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: ingress-annotation.hardill.me.uk
webhooks:
  - name: ingress-annotation.hardill.me.uk
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: In
          values:
            - flowforge
    rules:
      - apiGroups: [ "networking.k8s.io" ]
        apiVersions: [ "v1" ]
        resources: [ "ingresses" ]
        operations: [ "CREATE" ]
        scope: Namespaced
    clientConfig:
      service:
        namespace: default
        name: ingress-mutator
        path: /mutate
      caBundle: <BASE64 encoded CA bundle>
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
    reinvocationPolicy: "Never"
The rules section says to match all new Ingress objects, and the namespaceSelector says to only apply it to objects in the flowforge namespace, to stop us stamping on anything else that might be creating new objects on the cluster.
The caBundle
value is the output of cat ca.crt | base64 -w0
This all worked as expected when deployed. The next step is to remove the hard-coded JSONPatch and make it apply a configurable set of options based on the target environment.
The code for this is all on GitHub here.