Using Github Actions and Package Registry to build, store and deploy your containers

So far this year, Github have launched beta versions of both Github Actions (GA) and Github Package Registry (GPR). This is exciting, as historically Github has been pretty much just a code repository, albeit one which worked well and allowed developers to get their work done more easily than any other product. In my opinion though, they have really left money on the table by not capitalizing on their dominant market share. They appeared to have disregarded their ability to offer further services and leverage their network, despite the easily defensible competitive advantage of being able to provide the most frictionless integrations available.

The fact that they are finally moving to offer these new services in the software development chain is good to see, and a sign that Microsoft are doing a great job of steering the company, something many in the industry were worried about. The next step in the development chain is Kubernetes, and if they do introduce it (perhaps ‘when’ is a better choice of word here; it couldn’t be too hard to abstract AKS, right?), you will be able to run most modern day stacks without your code ever leaving Github’s servers. An interesting thought.

Background

I have a monorepo that I have historically used for about 15 different personal sites. These range from my own projects, to applications such as AsciiFlow and µGlark which I like having my own instances of, through to my own static sites.

Every instance currently runs within a Docker container. The custom instances are built on the server at deploy time rather than being hosted in a registry somewhere, while the public images built by others all come from Docker Hub.

On committing a change, I currently have to ssh to the server in question, pull the latest commits, and then build and restart the containers. This is only a single command, and the situation is perfectly fine for a project where I am the only one who interacts with the code, but it would be nice to simply push commits and have everything update automatically.
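
For the curious, that command chain looks something like the following (the host name and docker-compose usage here are illustrative, not the exact setup):

# Manual deploy: pull the latest commits, rebuild the images and restart the containers.
# (Hypothetical host and path; the actual invocation varies per site.)
$ ssh my-server 'cd ~/personal-sites && git pull && docker-compose up --build -d'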

The Plan

I’m looking to move to building these containers in a more CI/CD fashion, immediately after changes have been committed, with the resulting images stored in a registry. I’m also looking to set up a Kubernetes cluster to host these containers, as opposed to manually running them directly on a server. I’m hoping that via GA secrets I’m able to access the cluster and initiate rolling updates of the given deployments, so the entire process runs autonomously and completes with no interaction apart from pushing commits.

I’m also keen to get my hands dirty and build my own action, but for the sake of getting things working initially, I’m going to find existing actions which will work in my situation (and hopefully yours if you’re playing along at home).

To summarize, I’ll be doing the following:

  • Building containers via a GA.
  • Pushing the built images to GPR via the GA.
  • Deploying the containers to a new Kubernetes cluster.
  • Applying an updated configuration to the Kubernetes cluster to roll out new containers each time they are built.

Building containers via a Github Action and pushing built images to Github Package Registry

After looking around I found this action that appears to do exactly what I need.

For the rest of this post I’ll omit every container apart from Pa11y dashboard, which I inject custom configuration into when building, and RStudio, which I also customize.

My repo structure requires a slight change from how it is currently set up, but I settle on the standard format I’ve used in other multi-container repositories.

|-- README.md
|-- .github/
|   `-- workflows
|       `-- build-and-deploy.yml
|-- containers
|   |-- pa11y
|   |   |-- Dockerfile
|   |   `-- production.json
|   `-- rstudio
|       `-- Dockerfile

These are very simple containers which I’ve chosen for brevity, but for more involved containers I continue to keep all source code for each container within its own directory, or ‘context’.

The build-and-deploy.yml contains the following:

name: Build and deploy
on: push
jobs:
  buildAndDeployPa11y:
    runs-on: ubuntu-latest 
    steps:
      - name: Copy Repo Files
        uses: actions/checkout@master
      - name: Build and Publish to GPR
        uses: machine-learning-apps/gpr-docker-publish@master
        id: build
        with:
          USERNAME: ${{ secrets.REGISTRY_USERNAME }}
          PASSWORD: ${{ secrets.REGISTRY_TOKEN }}
          IMAGE_NAME: 'pa11y'
          DOCKERFILE_PATH: 'containers/pa11y/Dockerfile'
          BUILD_CONTEXT: 'containers/pa11y/'

  buildAndDeployRStudio:
    runs-on: ubuntu-latest
    steps:
      - name: Copy Repo Files
        uses: actions/checkout@master
      - name: Build and Publish to GPR
        uses: machine-learning-apps/gpr-docker-publish@master
        id: build
        with:
          USERNAME: ${{ secrets.REGISTRY_USERNAME }}
          PASSWORD: ${{ secrets.REGISTRY_TOKEN }}
          IMAGE_NAME: 'rstudio'
          DOCKERFILE_PATH: 'containers/rstudio/Dockerfile'
          BUILD_CONTEXT: 'containers/rstudio/'

The first step in each job here is pretty standard for all actions and checks out the code from the repository.

The next step in each job builds the individual containers. Points of note:

  • The secrets come from the repository’s settings, where you are able to add secrets which are encrypted and protected. They will not appear in build output, which is fantastic, even when used in a secondary process. For instance, the above USERNAME refers to my github handle, and in the build output it will always appear as github.com/***/repository-name.
  • The PASSWORD needs to be a Personal Access Token, used in place of a password if you have 2FA enabled (as you already do, right?). If you are getting authentication issues when trying to push containers, this is most likely the problem. The sketch after this list shows roughly what the publish step does with these credentials.
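
As an aside, the publish step is doing roughly the equivalent of the following (a sketch only, not the action’s exact code; the action decides the actual tag, and the image path follows the docker.pkg.github.com/OWNER/REPO/IMAGE:TAG form you’ll see later in the deployment configuration):

# A rough manual equivalent of the build and publish step.
$ echo "$REGISTRY_TOKEN" | docker login docker.pkg.github.com -u "$REGISTRY_USERNAME" --password-stdin
$ docker build -t "docker.pkg.github.com/ianbelcher/personal-sites/pa11y:$GITHUB_SHA" containers/pa11y/
$ docker push "docker.pkg.github.com/ianbelcher/personal-sites/pa11y:$GITHUB_SHA"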

After a few attempts, the action succeeds and I can now see the images in the Package Registry view.

Deploying containers to a new Kubernetes cluster

This is a pretty standard Kubernetes setup so I’ll go light on the details, as you likely already know the basics of the following. If you don’t, it’s probably best to start by learning how Kubernetes works and how you deploy to a cluster.

For my cluster, I’m going with Digital Ocean. These sites are not particularly critical, and given Digital Ocean’s reliability and its price compared to, say, GKE, it is a good choice for this situation.

I hadn’t set up Kubernetes on Digital Ocean previously, and the entire process is quick and easy, which is refreshing. They give you your kube config as a download which, while not exactly as secure as other Kubernetes services, suits my purposes perfectly.

I add the details to my ~/.kube/config and switch context to my new cluster. A quick kubectl get all confirms that I can connect to the cluster and see what is running, which is only the basic Kubernetes deployments at the moment as I have yet to deploy anything.
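
That sanity check is simply the following (the context name here is illustrative; Digital Ocean generates its own):

# Point kubectl at the new cluster and list everything in the default namespace.
$ kubectl config use-context my-do-cluster
$ kubectl get all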

The first thing I need to do is set up a secret so that I can pull containers from the private registry. This is easiest using the CLI command to create a docker-registry secret like so:

$ kubectl create secret docker-registry regcred \
  --docker-server="docker.pkg.github.com" \
  --docker-username="$USERNAME" \
  --docker-password="$PERSONAL_ACCESS_TOKEN_NOT_PASSWORD" \
  --docker-email="$EMAIL"

Again, use a Personal Access Token here, not your password. With 2FA enabled, your password will fail, and rightly should.
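
If you want to sanity check what was stored, the secret is just a base64-encoded Docker config which you can decode and inspect:

# Decode the stored registry credentials to confirm they are what you expect.
$ kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode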

I also need to add an Ingress controller to the cluster. To do this, I set up Helm (I’ve written about Helm previously here, and a Digital Ocean specific guide can be found here) and install the Ingress controller (more info on adding Ingresses can be found here).
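
A minimal sketch of that install, assuming Helm 2 with Tiller already initialized and the stable nginx-ingress chart (the Digital Ocean guide mentioned above covers the details):

# Install the nginx Ingress controller from the stable chart (Helm 2 syntax).
# publishService makes the controller report the load balancer IP on Ingress resources.
$ helm install stable/nginx-ingress --name nginx-ingress \
    --set controller.publishService.enabled=true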

From here I apply the configurations for each of my deployments. These are pretty similar, so here is just the configuration for pa11y (with identifying details redacted as ***).

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pa11y-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: Service
metadata:
  name: pa11y
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: pa11y
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pa11y
  labels:
    app: pa11y
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pa11y
  template:
    metadata:
      labels:
        app: pa11y
    spec:
      containers:
        - name: pa11y
          image: docker.pkg.github.com/ianbelcher/personal-sites/pa11y:a9e918dfb543
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: pa11y-mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    app: pa11y-mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pa11y-mongo
  labels:
    app: pa11y-mongo
spec:
  serviceName: "pa11y-mongo"
  replicas: 1
  selector:
    matchLabels:
      app: pa11y-mongo
  template:
    metadata:
      labels:
        app: pa11y-mongo
    spec:
      containers:
        - name: pa11y-mongo
          image: mongo:latest
          ports:
            - containerPort: 27017
          volumeMounts:
            - mountPath: "/data/db"
              name: pa11y-mongo-volume
  volumeClaimTemplates:
    - metadata:
        name: pa11y-mongo-volume
      spec:
        accessModes: 
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 5Gi
        storageClassName: do-block-storage
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pa11y
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: pa11y.***.com
    http:
      paths:
      - backend:
          serviceName: pa11y
          servicePort: 80
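
I keep each site’s configuration in its own file and apply it in the usual way (the file name here is illustrative):

# Apply the pa11y configuration and watch the resources come up.
$ kubectl apply -f pa11y.yml
$ kubectl get pods --watch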

A pretty standard setup. For those unfamiliar with Digital Ocean Kubernetes, they expose a storage class of do-block-storage which makes your Persistent Volume Claims pretty clean and easy to set up.

The other point of note is the imagePullSecrets entry, which references the secret we created previously. This is the standard way to pull images from private registries.

I wait for everything to come up, then send a request to the Ingress’ external IP for the pa11y.***.com domain and a few other domains within the project, and they all return what is expected in each case.
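
Before pointing DNS at the cluster, requests can be tested against the Ingress’ external IP directly by supplying the Host header, something like:

# Hit the Ingress controller's external IP, routing on the Host header.
# (INGRESS_EXTERNAL_IP is a placeholder for the IP shown by kubectl get svc.)
$ curl -H "Host: pa11y.***.com" "http://$INGRESS_EXTERNAL_IP/"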

Okay, everything is set up on the Kubernetes side. Now to roll out new containers automatically.

Updating a Kubernetes Deployment via a Github Action

Interestingly, there don’t seem to be many actions available yet for Kubernetes. Realistically though, setting up the ability to use kubectl is pretty much all that is required for 99% of cases.

This action does just that, and is pretty simple, allowing you to use kubectl for whatever you need.

I add two new steps to each job: one which updates the image for that container’s deployment, and one which verifies the rollout. For pa11y, this looks like the following:

jobs:
  buildAndDeployPa11y:
    runs-on: ubuntu-latest 
    steps:
      - name: Copy Repo Files
        uses: actions/checkout@master
      - name: Build and Publish to GPR
        uses: machine-learning-apps/gpr-docker-publish@master
        id: build
        with:
          USERNAME: ${{ secrets.REGISTRY_USERNAME }}
          PASSWORD: ${{ secrets.REGISTRY_TOKEN }}
          IMAGE_NAME: 'pa11y'
          DOCKERFILE_PATH: 'containers/pa11y/Dockerfile'
          BUILD_CONTEXT: 'containers/pa11y/'
      - name: Update Deployment
        uses: steebchen/kubectl@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
        with:
          args: set image --record deployment/pa11y pa11y=${{ steps.build.outputs.IMAGE_SHA_NAME }}
      - name: Verify rollout
        uses: steebchen/kubectl@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
        with:
          args: rollout status deployment/pa11y

This sends the command to update the image for the pa11y container within the pa11y deployment to IMAGE_SHA_NAME, which is output by the Build and Publish to GPR step. This is a special variable output by this action, not something that is available otherwise, so if you’re not using the machine-learning-apps/gpr-docker-publish@master action, you’ll need a different way to get the new container version.
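
If you’re building images some other way, the equivalent kubectl invocation is easy enough to issue with whatever tag your build pushed; using the tag from the deployment configuration above as an example:

# The command the step effectively runs; the tag must match the image your build pushed.
$ kubectl set image --record deployment/pa11y \
    pa11y=docker.pkg.github.com/ianbelcher/personal-sites/pa11y:a9e918dfb543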

I add the KUBE_CONFIG_DATA secret as documented on the action’s page and push my latest changes.
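
For reference, KUBE_CONFIG_DATA is, if I’m reading that documentation correctly, just the base64-encoded contents of your kube config:

# Base64-encode the kube config (stripping newlines) for pasting into the secrets UI.
$ cat ~/.kube/config | base64 | tr -d '\n'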

Within a few minutes, I check and see that the pa11y pod has updated to the new version, with a hash matching the most recent commit in the repo. Awesome stuff, the entire process is now up and running.
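
The check itself is nothing fancy:

# Confirm the deployment is now running the image tagged with the latest commit hash.
$ kubectl get deployment pa11y -o jsonpath='{.spec.template.spec.containers[0].image}'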

Wrap up

Creating a CI/CD system with Github Actions and Package Registry is a pretty simple and easy process. I’m finding that the actions are somewhat slow, and a lot of the normal caching for Docker builds appears to be missing. I’m unsure whether this is a setting, or something that will be added later under some priced tier, which I’d say is most likely.

For the moment though, for a free service (at least until the end of the beta), I definitely can’t complain. I’m interested to find out how these services will be priced (and when), but I’m guessing it will be competitive with what is already available within the industry, given the improvements to pricing that have occurred since Microsoft took the reins.

For now though, it’s all pretty impressive. Taking only a couple of hours to set up two new and unfamiliar tools such as these without any major roadblocks is fantastic.