Adding ingress-nginx to an EKS cluster

The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".

This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.

This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.

While there are a few choices for an ingress controller, a very popular one is ingress-nginx, which was created and is maintained by the Kubernetes team. It is worth pointing out that there is also the nginx ingress controller project, which was created and is maintained by the NGINX team. These two projects are not the same, so be wary of which documentation you are reading when looking things up. This can be a major source of confusion.

Setting up the ingress controller

An important early choice at this point is whether you will have a separate ingress controller per namespace or a single global one. This is something I recommend you research for your given use case, but I’ve found very few issues (on smaller projects; it could be different on larger ones) with using a single controller that handles all namespaces. It is also possible to run multiple controllers, each handling configured namespaces, but again, this will come down to the use case at hand. The following describes how to implement a single, global ingress controller that handles all namespaces.
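For comparison, if you did want a controller per namespace, the nginx-ingress Helm chart exposes scope settings for this. A hedged sketch (the release and namespace names are made up for illustration, and the controller.scope.* values are an assumption about the chart):

```shell
# Hypothetical: install a controller that only watches the team-a namespace
helm install ingress-team-a stable/nginx-ingress \
  --namespace team-a \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=team-a
```

Each scoped controller then needs its own ingress class so that Ingress resources are picked up by the right one.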

I typically like setting up the ingress controller in its own namespace to keep things more separated and to help increase visibility.

To create a new namespace you need only run the command kubectl create namespace ingress-nginx. This will add the namespace ingress-nginx, but by default kubectl will place applied configurations in the default namespace. To change this, run kubectl config set-context --current --namespace=ingress-nginx, which sets the namespace for the current context to ingress-nginx.

Installing ingress-nginx via Helm is quite easy with the following commands (I’ll assume that you have Helm installed at this point):

helm repo add stable https://charts.helm.sh/stable
helm install ingress stable/nginx-ingress

This will create an AWS ELB for receiving traffic from external sources. You can view the ELB address via the following:

$ kubectl get services -o wide -w ingress-nginx-ingress-controller
NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP                             PORT(S)                      AGE   SELECTOR
ingress-nginx-ingress-controller   LoadBalancer   <cluster-ip>   <your-elb-hostname>.elb.amazonaws.com   80:31999/TCP,443:31618/TCP   24s   app=nginx-ingress,component=controller,release=ingress
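To grab just the ELB hostname (useful for pointing DNS at it later), a jsonpath query keeps the output clean. A minimal sketch, assuming the service name shown above:

```shell
kubectl get service ingress-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```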

At this point, there is one caveat to my setup: I switch off HSTS. The default config adds HSTS headers, including for subdomains. I do a lot of things on subdomains and it doesn’t always make sense to have SSL for all of them, so an includeSubDomains HSTS header is a poor default choice in my opinion, as it allows ‘leakage’ of one project’s requirements into other, unrelated projects.

To disable HSTS, it is a case of applying the following config:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-ingress-controller
  namespace: ingress-nginx
data:
  # To disable HSTS altogether, this can be used as well
  # hsts: 'False'
  hsts-include-subdomains: 'False'

Setting up cert-manager for automatic SSL certificate management

Once ingress-nginx is set up, the next step is to add cert-manager to automatically handle the creation and updating of SSL certificates.

This can also be done by adding a new namespace and installing cert-manager via Helm. The package ships CustomResourceDefinitions, which Helm versions lower than v3.2 cannot install for you, and there are currently a few Kubernetes API issues depending on version, so it is likely best to read the cert-manager installation documentation.

In general though, the following should work with somewhat recent versions of kubectl, Helm and Kubernetes.

kubectl create namespace cert-manager
kubectl config set-context --current --namespace=cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
# Install the CRDs separately; the URL version matches the chart version below
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.1/cert-manager.crds.yaml
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.0.1

Once this is complete, you should have three pods with a Running status:

$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5bc6c5cb94-f5xdx              1/1     Running   0          11s
cert-manager-cainjector-5f845bf6c7-kctgl   1/1     Running   0          11s
cert-manager-webhook-8484675bd8-nq5hv      1/1     Running   0          11s

The next step is creating a ClusterIssuer, which handles the issuance of certificates from different sources. I’m going to use the ACME issuer (backed by Let’s Encrypt), but you can use any of the other available options.

As a solver, because I’m using Cloudflare quite extensively, I’m using the DNS01 Cloudflare option, which is documented in the cert-manager docs if you are interested. As the Cloudflare setup is quite specific to my situation, I’ll save the noise by not adding more information here about how to set up this solver, but the configuration for adding the issuer looks as follows:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: [email protected]
    # For testing and setting up, gives untrusted certificates back
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # For production
    # server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - dns01:
          cloudflare:
            email: [email protected]
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
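The DNS01 solver above references a Cloudflare API token stored in a Kubernetes Secret. A minimal sketch of that Secret (the token value is a placeholder, and for a ClusterIssuer the Secret needs to live in the cert-manager namespace):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: <your-cloudflare-api-token>
```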

Once the ClusterIssuer configuration is applied, you should be able to deploy a test deployment, service and ingress to check that the SSL issuer and manager work and create a third-party-signed SSL certificate.

The following is a good (but distracting) test for such an occasion. These resources are all added to their own namespace which means they can be deleted easily after you have tested everything.

apiVersion: v1
kind: Namespace
metadata:
  name: "2048"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "2048-deployment"
  namespace: "2048"
spec:
  selector:
    matchLabels:
      app: "2048"
  replicas: 5
  template:
    metadata:
      labels:
        app: "2048"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "2048"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-2048"
  namespace: "2048"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: "2048"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: "2048"
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
    - secretName: test-ingress
      hosts:
        - 2048.YOUR_SITE.COM
  rules:
    - host: 2048.YOUR_SITE.COM
      http:
        paths:
          - path: /
            backend:
              serviceName: service-2048
              servicePort: 80

After applying the above, as long as your DNS is correct and pointing to the load balancer that was created earlier, visiting 2048.YOUR_SITE.COM should return the 2048 game over a valid SSL connection.

At this point, adding any new hosts will automatically create a valid SSL certificate for the host and keep it from expiring automatically. No more worries about expiring certificates or setting them up in the first place!
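If a certificate does not show up straight away, cert-manager’s intermediate resources are worth inspecting. A quick sketch (the namespace assumes the test deployment above):

```shell
# Watch the Certificate created from the ingress's tls section
kubectl get certificate -n 2048

# Drill into the issuance pipeline if it appears stuck
kubectl describe certificaterequest -n 2048
kubectl describe order -n 2048
kubectl describe challenge -n 2048
```

Challenges in particular will show whether the DNS01 record was created and validated successfully.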
