Working with multiple AWS EKS instances

I’ve recently been working on a project that uses the AWS EKS managed Kubernetes service.

For various reasons, too complicated to go into here, we’ve ended up with multiple clusters owned by different AWS accounts, so flipping back and forth between them has been a little trickier than normal.

Here are my notes on how to manage the AWS credentials and the kubectl config to access each cluster.

AWS CLI

The first task is to authorise the AWS CLI to act as the user in question. We do this by creating a user with the right permissions in the IAM console and then exporting the Access key ID and Secret access key values, usually as a CSV file. We then take these values and add them to the ~/.aws/credentials file, one profile per account.

[dev]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy

[test]
aws_access_key_id = AKYYYYYYYYYYYYYYYYYY
aws_secret_access_key = abababababababababababababababababababab

[prod]
aws_access_key_id = AKZZZZZZZZZZZZZZZZZZ
aws_secret_access_key = nmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnmnm
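
As a quick sanity check, newer versions of the AWS CLI (v2) can list the profiles they have picked up from this file; on my machine that should show something like this:

$ aws configure list-profiles
dev
test
prod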

We can pick which set of credentials the AWS CLI uses by adding the --profile option to the command line.

$ aws --profile dev sts get-caller-identity
{
    "UserId": "AIXXXXXXXXXXXXXXXXXXX",
    "Account": "111111111111",
    "Arn": "arn:aws:iam::111111111111:user/dev"
}

Instead of using the --profile option, you can also set the AWS_PROFILE environment variable. Details of all the ways to switch profiles are in the docs here.

$ export AWS_PROFILE=test
$ aws sts get-caller-identity
{
    "UserId": "AIYYYYYYYYYYYYYYYYYYY",
    "Account": "222222222222",
    "Arn": "arn:aws:iam::222222222222:user/test"
}

Now that we can flip easily between different AWS accounts, we can export the EKS credentials for each cluster with

$ export AWS_PROFILE=prod
$ aws eks update-kubeconfig --name foo-bar --region us-east-1
Updated context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar in /home/user/.kube/config

The user that created the cluster should also follow these instructions to make sure the new account is added to the cluster’s internal ACL.
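
For reference, on EKS that usually means adding an entry for the new IAM user to the aws-auth ConfigMap in the kube-system namespace. A rough sketch (the ARN and username are placeholders, and system:masters grants full admin access, so pick a group appropriate for your setup):

$ kubectl edit -n kube-system configmap/aws-auth
# then add an entry for the new user under data:
  mapUsers: |
    - userarn: arn:aws:iam::222222222222:user/test
      username: test
      groups:
        - system:masters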

Kubectl

If we run the previous command with each profile, it adds the connection information for all 3 clusters to the ~/.kube/config file. We can list them with the following command

$ kubectl config get-contexts
CURRENT   NAME                                                  CLUSTER                                               AUTHINFO                                              NAMESPACE
*         arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   arn:aws:eks:us-east-1:111111111111:cluster/foo-bar   
          arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   arn:aws:eks:us-east-1:222222222222:cluster/foo-bar   
          arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar   arn:aws:eks:us-east-1:333333333333:cluster/foo-bar 

The star marks the currently active context; we can change the active context with this command

$ kubectl config use-context arn:aws:eks:us-east-1:222222222222:cluster/foo-bar
Switched to context "arn:aws:eks:us-east-1:222222222222:cluster/foo-bar".
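
As an aside, if typing the full ARN gets tedious, newer versions of the AWS CLI let you give the context a shorter name when importing it, via the --alias option on update-kubeconfig (the name test here is just an example):

$ aws eks update-kubeconfig --name foo-bar --region us-east-1 --alias test
$ kubectl config use-context test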

Putting it all together

To automate all this I’ve put together a collection of scripts that look like this

export AWS_PROFILE=prod
aws eks update-kubeconfig --name foo-bar --region us-east-1
kubectl config use-context arn:aws:eks:us-east-1:333333333333:cluster/foo-bar

I then use the shell source ./setup-prod command (or its shortcut . ./setup-prod) rather than adding a shebang to the top and running it as a normal script. This is because a normal script runs in a child shell, so any environment variables it sets are lost when it exits. Sourcing keeps the AWS_PROFILE variable in scope, which means the AWS CLI will continue to use the correct account settings when it’s used later while working on this cluster.
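
To illustrate the difference, assuming the prod script above is saved as setup-prod (and, for the first case, given a shebang and made executable):

$ ./setup-prod        # runs in a child shell, so AWS_PROFILE is lost when it exits
$ echo $AWS_PROFILE

$ . ./setup-prod      # runs in the current shell, so AWS_PROFILE stays set
$ echo $AWS_PROFILE
prod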

Updated AWS Lambda NodeJS Version checker

I got another of those emails from Amazon this morning, telling me that the version of the NodeJS runtime I’m using in the Lambda for my Node-RED Alexa Smart Home Skill is going End of Life.

I’ve previously talked about wanting a way to automate checking what version of NodeJS was in use across all the different AWS Regions. But when I tried my old script, it didn’t work.

for r in `aws ec2 describe-regions --output text | cut -f3`; 
do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .FunctionName + " - " + .Runtime';
done

This is most likely because the output from the AWS CLI has changed slightly.

The first change looks to be in the listing of the available regions:

$ aws ec2 describe-regions --output text
REGIONS	ec2.eu-north-1.amazonaws.com	opt-in-not-required	eu-north-1
REGIONS	ec2.ap-south-1.amazonaws.com	opt-in-not-required	ap-south-1
REGIONS	ec2.eu-west-3.amazonaws.com	opt-in-not-required	eu-west-3
REGIONS	ec2.eu-west-2.amazonaws.com	opt-in-not-required	eu-west-2
REGIONS	ec2.eu-west-1.amazonaws.com	opt-in-not-required	eu-west-1
REGIONS	ec2.ap-northeast-2.amazonaws.com	opt-in-not-required	ap-northeast-2
REGIONS	ec2.ap-northeast-1.amazonaws.com	opt-in-not-required	ap-northeast-1
REGIONS	ec2.sa-east-1.amazonaws.com	opt-in-not-required	sa-east-1
REGIONS	ec2.ca-central-1.amazonaws.com	opt-in-not-required	ca-central-1
REGIONS	ec2.ap-southeast-1.amazonaws.com	opt-in-not-required	ap-southeast-1
REGIONS	ec2.ap-southeast-2.amazonaws.com	opt-in-not-required	ap-southeast-2
REGIONS	ec2.eu-central-1.amazonaws.com	opt-in-not-required	eu-central-1
REGIONS	ec2.us-east-1.amazonaws.com	opt-in-not-required	us-east-1
REGIONS	ec2.us-east-2.amazonaws.com	opt-in-not-required	us-east-2
REGIONS	ec2.us-west-1.amazonaws.com	opt-in-not-required	us-west-1
REGIONS	ec2.us-west-2.amazonaws.com	opt-in-not-required	us-west-2

This looks to have added an extra column (the opt-in status) to each line, so I need to change which field I select with the cut command from -f3 to -f4.
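
A quick way to check the new field number is to run just the region half of the pipeline on its own (the exact list will depend on your account):

$ aws ec2 describe-regions --output text | cut -f4
eu-north-1
ap-south-1
eu-west-3
...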

The next problem looks to be with the JSON that is output for the list of functions in each region.

$ aws --region $r lambda list-functions
{
    "Functions": [
        {
            "TracingConfig": {
                "Mode": "PassThrough"
            }, 
            "Version": "$LATEST", 
            "CodeSha256": "wUnNlCihqWLXrcA5/5fZ9uN1DLdz1cyVpJV8xalNySs=", 
            "FunctionName": "Node-RED", 
            "VpcConfig": {
                "SubnetIds": [], 
                "VpcId": "", 
                "SecurityGroupIds": []
            }, 
            "MemorySize": 256, 
            "RevisionId": "4f5bdf6e-0019-4b78-a679-12638412177a", 
            "CodeSize": 1080463, 
            "FunctionArn": "arn:aws:lambda:eu-west-1:434836428939:function:Node-RED", 
            "Handler": "index.handler", 
            "Role": "arn:aws:iam::434836428939:role/service-role/home-skill", 
            "Timeout": 10, 
            "LastModified": "2018-05-11T16:20:01.400+0000", 
            "Runtime": "nodejs8.10", 
            "Description": "Provides the basic framework for a skill adapter for a smart home skill."
        }
    ]
}

This time it looks like there is an extra level of nesting in the output; this can be fixed with a minor change to the jq filter.

$ aws lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'
"Node-RED - nodejs8.10"

Putting it all back together we get

for r in `aws ec2 describe-regions --output text | cut -f4`;  do
  echo $r;
  aws --region $r lambda list-functions | jq '.[] | .[] | .FunctionName + " - " + .Runtime'; 
done

eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
"Node-RED - nodejs8.10"
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
"Node-RED - nodejs8.10"
us-east-2
us-west-1
us-west-2
"Node-RED - nodejs8.10"