Set up GitHub Actions for EKS deployments

The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".

This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.

This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.

GitHub Actions is still a relatively new addition to GitHub and is a great platform for everything from small tasks and checks through to handling full builds and more.

I’ve previously written about setting up GitHub Actions in the context of using a monorepo, pushing images to a GitHub private repository and then updating deployments here.

If you are not too familiar with this process, then I highly recommend reading the previous post to get comfortable with the concepts.

In the context of using Kubernetes on AWS and hosting a private repository, there are a few caveats to consider.

The first of these is that most GitHub Actions relating to kubectl for controlling Kubernetes clusters expect to receive only a kubeconfig in order to connect. This poses a problem, as an EKS kubeconfig relies on the aws CLI tool being available to handle authentication.
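
To illustrate why, here is roughly what the `users` entry that `aws eks update-kubeconfig` generates looks like (the region, account ID and cluster name are placeholders). The credential is produced by shelling out to the aws binary, so any action that only mounts a kubeconfig will fail without it:

```yaml
users:
- name: arn:aws:eks:REGION:ACCOUNT_ID:cluster/CLUSTER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws          # requires the aws CLI on the PATH
      args:
        - --region
        - REGION
        - eks
        - get-token
        - --cluster-name
        - CLUSTER_NAME
```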

To get around this, I wrote eks-kubectl, which lets you access your EKS cluster from a GitHub Action without needing to mess around with thinning out and base64-encoding your kubeconfig, or worrying about updating the stored config whenever it changes.
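
For reference, the manual approach that the action replaces looks roughly like this (file names here are illustrative, not prescribed by any action):

```shell
# The manual approach that eks-kubectl avoids: base64-encode a (thinned-out)
# kubeconfig so it can be stored as a GitHub repository secret.
base64 < kubeconfig > kubeconfig.b64   # paste the contents into a secret

# A workflow step would then decode the secret back into a usable config:
base64 -d < kubeconfig.b64 > kubeconfig.decoded
```

Every time the cluster config changes, that secret has to be regenerated by hand, which is exactly the churn the action avoids.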

To use it (or any other action that handles your Kubernetes deployments), it is best to first create a new IAM user with restricted permissions.

I covered the creation process for a new IAM user in the previous post of this series here. When creating a ‘deploy’ style user, the same process applies as discussed in the previous post, except you can minimize the permissions.

The deployment IAM user only needs the following policy:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:aws:eks:*:*:cluster/*"
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "eks:ListClusters",
            "Resource": "*"

Next, as in the previous post, add the new IAM user to the aws-auth ConfigMap, which is done via the following command:

kubectl edit configmap aws-auth -n kube-system

Then add the new user (see the previous post for more information if required):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::500000000000:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::500000000000:user/existing-user
      username: existing-user
    - userarn: arn:aws:iam::500000000000:user/USERNAME_OF_USER_YOU_JUST_CREATED
      username: USERNAME_OF_USER_YOU_JUST_CREATED

The big difference between the previous post’s example and creating a ‘deploy’ user is the Role rules. The following allows the EKS user to affect only Deployment and Pod resources. You can obviously change this depending on what you want your actions to achieve.

I’m also using a ClusterRole here as I want my action to be able to affect deployments in any namespace. You can definitely use a Role as per my previous post if you want to grant access only to certain namespaces.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: github-action-eks-user-role
rules:
- apiGroups: ['*']
  resources: ["deployments", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

And don’t forget to add a binding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: github-action-eks-user-binding
subjects:
- kind: User
  name: USERNAME_OF_USER_YOU_JUST_CREATED
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: github-action-eks-user-role
  apiGroup: rbac.authorization.k8s.io
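
If you prefer the namespace-scoped route mentioned above, the Role and RoleBinding equivalents might look something like this (the namespace name is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: github-action-eks-user-role
  namespace: YOUR_NAMESPACE        # placeholder: the only namespace granted
rules:
- apiGroups: ['*']
  resources: ["deployments", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: github-action-eks-user-binding
  namespace: YOUR_NAMESPACE
subjects:
- kind: User
  name: USERNAME_OF_USER_YOU_JUST_CREATED
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: github-action-eks-user-role
  apiGroup: rbac.authorization.k8s.io
```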

It’s worth running the aws configure command here and setting up a profile for the new user. Once you have, the following should be true, assuming that you’ve used the same permissions:
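
For example, the resulting credentials file entry might look like this (the profile name and values are illustrative placeholders). Selecting it with AWS_PROFILE lets you test the deploy user’s permissions locally:

```ini
# ~/.aws/credentials (placeholder values)
[eks-deploy]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Then run commands as that user with, for example, `AWS_PROFILE=eks-deploy kubectl get deployments`.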

# Succeeds
$ kubectl get deployments

# Fails
$ kubectl get rc

Once this is complete, it’s time to start working on the actions. The configuration here mirrors my previous GitHub Actions post in some ways, but uses different actions to account for the different registry, and for the fact that we’re on EKS and using the eks-kubectl action.

I’m also using my own monorepo-container-build-action, which is a simple action that runs a build command in the project and then pushes the built image to the supplied registry.

name: Build and deploy PROJECT
on:
  push:
    branches:
      - main  # illustrative trigger; adjust to suit
jobs:
  build-and-deploy:
    name: Build and deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@main
      - name: Build and push to registry
        uses: ianbelcher/monorepo-container-build-action@master
        id: build
        with:
          command_to_run: '(cd containers/project &&'
          docker_registry: ${{ secrets.DOCKER_REGISTRY }}
          docker_registry_username: ${{ secrets.DOCKER_REGISTRY_USERNAME }}
          docker_registry_password: ${{ secrets.DOCKER_REGISTRY_PASSWORD }}
      - name: Update deployment
        uses: ianbelcher/eks-kubectl-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_PRIMARY_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_PRIMARY_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_PRIMARY_REGION }}
          cluster_name: YOUR_CLUSTER_NAME
          args: set image --namespace NAMESPACE --record deployment/DEPLOYMENT_NAME DEPLOYMENT_NAME=${{ }}

Apart from these differences, everything regarding these actions should be pretty similar to the previous GitHub Actions post.

This new action will build a container, push it to the registry you supplied, and then update the deployment to the new container’s URL, which will include the new hash. This causes the Deployment to roll out the new container without any downtime.
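
The tagging idea can be sketched as follows (the registry host, project name and SHA here are hypothetical placeholders, not necessarily what the build action produces): because each commit yields a distinct image URL, `kubectl set image` always sees a change and triggers a rolling update.

```shell
# Hypothetical sketch: a unique tag per commit forces a new container URL,
# so `kubectl set image` triggers a rolling update of the Deployment.
GIT_SHA="abc1234"                                # in CI this would come from ${GITHUB_SHA}
IMAGE="registry.example.com/project:${GIT_SHA}"  # hypothetical registry and project
echo "${IMAGE}"                                  # registry.example.com/project:abc1234
```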