The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".
This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.
This series focuses on AWS specifically, as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should apply easily to any other provider or to an on-prem environment.
While there are a few choices you can go with for an ingress controller, a very popular choice is ingress-nginx, which was created and is maintained by the Kubernetes team. It is worth pointing out at this point that there is also the nginx ingress project, which was created and is maintained by the NGINX team. These two projects are not the same, so be wary of which documentation you are reading when looking things up. This can be a major source of confusion.
An early decision point is whether you will run a separate ingress controller per namespace or a single global one. This is something I recommend you research for your given use case, but I have found very few issues (on smaller projects, at least; it could be different on larger ones) with a single controller that handles all namespaces. It is possible to run multiple controllers, each handling configured namespaces, but again, this will come down to the use case at hand. The following describes how to implement a single, global ingress controller handling all namespaces, with a sketch of a namespace-scoped install below for reference.
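If you do later decide you want a controller scoped to a single namespace, the chart used below exposes this. As a rough sketch (assuming the stable/nginx-ingress chart's controller.scope and controller.ingressClass values; my-team and internal-nginx are placeholder names), an additional scoped controller could look like:

helm install ingress-internal stable/nginx-ingress \
  --namespace my-team \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=my-team \
  --set controller.ingressClass=internal-nginx

Giving each controller a distinct ingress class is what stops multiple controllers from fighting over the same Ingress resources.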
I typically like setting up the ingress controller in its own namespace to keep things more separated and to help increase visibility.
To create a new namespace you need only run the command kubectl create namespace ingress-nginx. This adds the ingress-nginx namespace, but by default, kubectl will place applied configurations in the default namespace. To change this you can simply run kubectl config set-context --current --namespace=ingress-nginx, which sets the namespace for the current context to ingress-nginx.
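To confirm the change took effect, the following prints the namespace of the current context (a standard kubectl invocation, nothing specific to this setup):

kubectl config view --minify --output 'jsonpath={..namespace}'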
Installing ingress-nginx via Helm is quite easy with the following commands (I'll assume that you have Helm installed at this point):
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install ingress stable/nginx-ingress
This will create an AWS ELB for receiving traffic from external sources. You can view the ELB address via the following:
$ kubectl get services -o wide -w ingress-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx-ingress-controller LoadBalancer 10.100.0.0 xxx.ap-southeast-2.elb.amazonaws.com 80:31999/TCP,443:31618/TCP 24s app=nginx-ingress,component=controller,release=ingress
At this point, there is one caveat to my setup: I switch off HSTS for subdomains. The default configuration adds HSTS headers, including the includeSubDomains directive. I do a lot of things on subdomains and it doesn't always make sense to have SSL for these, so applying HSTS to subdomains is a poor default choice in my opinion, as it allows 'leakage' of one project's requirements into other, unrelated projects.
To disable this behaviour, it is a case of applying the following config:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-ingress-controller
  namespace: ingress-nginx
data:
  # To disable HSTS altogether, this can be used as well
  # hsts: 'False'
  hsts-include-subdomains: 'False'
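Assuming the ConfigMap above is saved as hsts-config.yaml (the filename is arbitrary), applying it is a single command, and the controller picks the change up without a restart. You can then verify the header against any host served through the ingress, e.g. a placeholder app.YOUR_SITE.COM:

kubectl apply -f hsts-config.yaml
curl -s -o /dev/null -D - https://app.YOUR_SITE.COM | grep -i strict-transport-security

The Strict-Transport-Security header should no longer include the includeSubDomains directive.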
Once ingress-nginx is set up, the next step is to add cert-manager to automatically handle the creation and renewal of SSL certificates.
This can also be done by adding a new namespace and installing cert-manager via Helm. This package requires the CustomResourceDefinitions API, which cannot be installed as part of the Helm release with Helm versions lower than v3.2, and there are currently a few Kubernetes API issues depending on version, so it is likely best to read the official cert-manager installation documentation. In general though, the following should work with reasonably recent versions of kubectl, Helm and Kubernetes.
kubectl create namespace cert-manager
kubectl config set-context --current --namespace=cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.1/cert-manager.crds.yaml
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.0.1
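Alternatively, if you are running Helm v3.2 or newer, the chart can install the CRDs as part of the release via the installCRDs value (this is the approach the cert-manager documentation describes for newer Helm versions), replacing the separate kubectl apply step above:

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.0.1 \
  --set installCRDs=true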
Once this is complete, you should have three pods in a Running status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cert-manager-5bc6c5cb94-f5xdx 1/1 Running 0 11s
cert-manager-cainjector-5f845bf6c7-kctgl 1/1 Running 0 11s
cert-manager-webhook-8484675bd8-nq5hv 1/1 Running 0 11s
The next step is creating a ClusterIssuer, which handles the issuance of certificates from different sources. I'm going to use the ACME Issuer (with Let's Encrypt as the certificate authority), but you can use any of the other available options.
As a solver, because I'm using Cloudflare quite extensively, I'm using the DNS01 Cloudflare option, which is documented in the cert-manager docs if you are interested. As the Cloudflare setup is quite specific to my environment, I'll save the noise of describing it in full (a sketch of the required Secret follows the issuer config below), but the configuration for adding the issuer looks as follows:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: [email protected]
    # For testing and setting up, gives untrusted certificates back
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # For production
    # server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - dns01:
          cloudflare:
            email: [email protected]
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
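The apiTokenSecretRef above refers to a Secret which needs to exist in the cert-manager namespace before the issuer can authenticate with Cloudflare. As a minimal sketch (the token value is a placeholder; the token itself needs the Zone Read and DNS Edit permissions described in the cert-manager documentation):

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: YOUR_CLOUDFLARE_API_TOKEN

Apply the Secret first, then the ClusterIssuer.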
Once the ClusterIssuer configuration is applied, you should be able to deploy a test deployment, service and ingress to verify that the SSL issuer and manager work and produce a third-party signed SSL certificate. The following is a good (but distracting) test for such an occasion. These resources are all added to their own namespace, which means they can be deleted easily after you have tested everything.
---
apiVersion: v1
kind: Namespace
metadata:
  name: "2048"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "2048-deployment"
  namespace: "2048"
spec:
  selector:
    matchLabels:
      app: "2048"
  replicas: 5
  template:
    metadata:
      labels:
        app: "2048"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "2048"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-2048"
  namespace: "2048"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: "2048"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: "2048"
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    # Must match the name of the ClusterIssuer created earlier
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - secretName: test-ingress
      hosts:
        - 2048.YOUR_SITE.COM
  rules:
    - host: 2048.YOUR_SITE.COM
      http:
        paths:
          - path: /
            backend:
              serviceName: service-2048
              servicePort: 80
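Because of the tls-acme and cluster-issuer annotations, cert-manager's ingress-shim will create a Certificate resource named after the tls secretName (test-ingress here). You can watch the issuance progress with:

kubectl get certificate -n 2048
kubectl describe certificate test-ingress -n 2048

The Certificate will report Ready once the DNS01 challenge has completed and the signed certificate has been stored in the secret.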
After applying the above, as long as your DNS is correct and pointing at the load balancer created earlier, visiting 2048.YOUR_SITE.COM should return the 2048 game over SSL. Note that while the ClusterIssuer points at the Let's Encrypt staging server, the certificate will be issued but not trusted by browsers; switch the server to the production URL (you may need to delete the test-ingress secret to force re-issuance) to get a fully valid certificate.
At this point, adding any new host will automatically create a valid SSL certificate for it and renew it before it expires. No more worrying about expiring certificates or setting them up in the first place!