Adding the Kubernetes Dashboard to a cluster

The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".

This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.

This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.

Unlike Kubernetes offerings from other providers, EKS does not include any form of web UI for managing the workloads within a cluster. While there are some great options for managing workloads, my personal favourite being Lens, there are some instances where having a web based management console available is useful.

This is where the Kubernetes Dashboard project can be a great addition to a cluster. The documentation for the project is very easy to follow, and installation is as simple as running the following command. The manifest creates its own kubernetes-dashboard namespace, so there is no need to create one beforehand if that is your usual workflow.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
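
Once applied, you can confirm the dashboard pods are up before moving on:

kubectl get pods --namespace=kubernetes-dashboard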

The above installs a minimal ClusterRoleBinding for the dashboard, which doesn't allow the dashboard service to mutate anything. Depending on how you are setting things up, this is likely the best option, as the dashboard then operates as an actual dashboard: in essence, a read only view of the cluster.

For my purposes though, as I am the only user of the cluster and it doesn't need to be locked down like a full production system, giving the dashboard service full permissions allows me to work much more quickly than doing all the CLI heroics, or worrying about generating and supplying tokens to access the dashboard.

It is extremely important at this point to acknowledge that doing this creates a large number of security issues in almost all systems. If you don’t understand the risks, it is likely best that you don’t do the following.

If you do understand the risks and want to give the dashboard full permissions, the first step is to delete the current ClusterRoleBinding for the dashboard as follows.

kubectl delete clusterrolebinding kubernetes-dashboard
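
To confirm the binding is gone, asking for it again should now return a NotFound error:

kubectl get clusterrolebinding kubernetes-dashboard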

Next, create a ClusterRole which grants full permissions to the cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-full-access
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
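
Assuming this is saved to a file such as cluster-full-access.yaml (the filename is arbitrary), apply it with:

kubectl apply -f cluster-full-access.yaml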

The next step then is to create a new ClusterRoleBinding which connects the dashboard to the ClusterRole that was just created.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-full-access
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
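
Apply this manifest in the same way. To sanity check that the new permissions are in place, you can impersonate the dashboard's service account and ask the API server what it is allowed to do:

kubectl auth can-i '*' '*' --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard

This should now answer yes.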

After doing this, it is also possible to disable the requirement to supply a token when logging in. This is done by editing the dashboard's deployment:

kubectl edit deployment/kubernetes-dashboard --namespace=kubernetes-dashboard

And adding the following:

containers:
  - args:
    - --auto-generate-certificates
    - --enable-skip-login                  # <-- add this line
    - --namespace=kubernetes-dashboard
    image: kubernetesui/dashboard:v2.0.4
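
If you would rather not edit the deployment interactively, the same change can be made with a JSON patch, which appends the flag to the first container's argument list (this assumes the default single-container deployment from recommended.yaml):

kubectl patch deployment kubernetes-dashboard --namespace=kubernetes-dashboard --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]'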

At this point the dashboard should be working with full permissions. The standard way of accessing it is to run kubectl proxy and then open the dashboard through the proxy:
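
kubectl proxy

While the proxy is running, the dashboard is served at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ (this is the standard proxy URL from the dashboard documentation).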

If you have been following along in this series of posts and have set up VPN access, it is also possible to expose the dashboard through a Service, allowing access via a URL in the browser. Note that recommended.yaml already creates a ClusterIP Service with this name, so applying the manifest below simply updates it in place:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
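
Assuming this is saved as dashboard-service.yaml (the filename is arbitrary), apply it and confirm the Service exists; port 443 on the Service maps to 8443, the port the dashboard container serves HTTPS on:

kubectl apply -f dashboard-service.yaml
kubectl get service kubernetes-dashboard --namespace=kubernetes-dashboard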

After creating this new service the dashboard should now be accessible at https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local/ when you are connected via VPN.

The connection to this service is always encrypted, but with a self signed certificate, so the browser will show an SSL warning. As a reminder, to get past this warning in Chrome, you just need to type thisisunsafe on the warning page.

You now have a management console for your cluster. Again, this should only ever be done in the very small subset of cases where it does not create a security concern.
