The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".
This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.
This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.
By adding OpenVPN to the cluster, we can let users access the cluster in a much more traditional manner, much like an intranet. This makes it possible to quickly deploy quality tools for internal business use that are hidden from the public internet, adding an extra layer of security. The options here are vast, but some useful software that can be deployed in very little time includes error-tracking tools like Sentry, BI programs such as Metabase, and shared instances of Jupyter Notebooks; I've found that notebooks only reachable over the local VPN network can be a great way of handling documentation.
This also makes development much easier: services under development can call the existing services they depend on as if they were running inside the cluster, while not actually being available to take real traffic from other services. It is a great way of developing services, even those that are highly dependent on other services.
I've found installing OpenVPN to be easiest via Helm, which can be done with the following:
kubectl create namespace openvpn
kubectl config set-context --current --namespace=openvpn
helm install openvpn stable/openvpn
You can then create credentials by following what is given in the output, which for me ended up looking something like:
KEY_NAME=ianbelcher
POD_NAME=$(kubectl get pods --namespace "openvpn" -l "app=openvpn,release=openvpn" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_NAME=$(kubectl get svc --namespace "openvpn" -l "app=openvpn,release=openvpn" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace "openvpn" "$SERVICE_NAME" -o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}')
kubectl --namespace "openvpn" exec -it "$POD_NAME" -- /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace "openvpn" exec "$POD_NAME" -- cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"
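Before loading the exported profile into a VPN client, it can be worth a quick sanity check that the export produced a usable file. A minimal sketch, where the sample profile content below is a hypothetical stand-in for the real `$KEY_NAME.ovpn`:

```shell
# Sanity-check an exported client profile before importing it into a client.
# The profile below is hypothetical sample content, standing in for the file
# produced by the `kubectl exec ... cat` command above.
cat > example.ovpn <<'EOF'
client
remote 203.0.113.10 443
<cert>
...
</cert>
<key>
...
</key>
EOF

# A usable profile should at least name a remote endpoint and embed a
# certificate and key pair.
for directive in '^remote ' '<cert>' '<key>'; do
  if grep -q "$directive" example.ovpn; then
    echo "found: $directive"
  else
    echo "missing: $directive"
  fi
done
```

If the file is empty or missing the embedded certificate blocks, the usual culprit is a TTY being allocated on the `cat` command, which can corrupt redirected output.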
Once this config was loaded into Tunnelblick (a great option on Mac; I'm unsure of the best option on Windows or Linux), it was a simple case of choosing to connect and then checking that my IP and DNS settings had changed. After connecting, you should see your DNS server as an internal IP (likely 10.100.0.10) and your search domains including a few ending in cluster.local.
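On Linux you can verify this by inspecting the resolver configuration; a sketch, where the sample content below is hypothetical and stands in for what /etc/resolv.conf (or `scutil --dns` on macOS) might show while connected:

```shell
# Check that the resolver picked up the cluster's DNS server and search
# domains after connecting. The file below is hypothetical sample content
# standing in for the real /etc/resolv.conf.
cat > sample-resolv.conf <<'EOF'
nameserver 10.100.0.10
search openvpn.svc.cluster.local svc.cluster.local cluster.local
EOF

# The nameserver should be the in-cluster DNS service IP, and the search
# domains should include entries ending in cluster.local.
awk '/^nameserver/ { print "cluster DNS:", $2 }' sample-resolv.conf
grep -q 'cluster\.local' sample-resolv.conf && echo "search domains include cluster.local"
```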
Once up and running, Service resources can be accessed in the form SERVICE_NAME.NAMESPACE.svc.cluster.local. If the cluster is still running the 2048 example from the previous ingress post, then visiting service-2048.2048.svc.cluster.local will resolve to the service and return the 2048 game.
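The naming convention above can be captured in a small helper; the service and namespace names in the example call match the 2048 deployment from the earlier ingress post:

```shell
# Build the in-cluster DNS name for a Kubernetes Service.
cluster_dns_name() {
  local service="$1" namespace="$2"
  echo "${service}.${namespace}.svc.cluster.local"
}

cluster_dns_name service-2048 2048
# → service-2048.2048.svc.cluster.local

# With the VPN up, that name resolves directly from your machine, e.g.:
#   curl "http://$(cluster_dns_name service-2048 2048)/"
```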
At this point, it is also worth noting that this connection is made over unencrypted port 80. This is OK: all the traffic between the cluster and your local machine travels through the VPN, which is an encrypted tunnel.