The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".
This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.
This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.
Kubernetes on AWS is a little more tightly integrated with the platform than other Kubernetes providers, so the setup is a little more involved. The main AWS specific steps to complete prior to creating a cluster are to set up the IAM roles for the cluster and to ensure that the aws cli tool is installed locally.
The official documentation goes over setting up the cluster IAM role, but the tl;dr is that you just need to create a role using the existing use case called EKS - Cluster.
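If you prefer the CLI to the console, a sketch along the following lines should produce an equivalent role; the role name eksClusterRole and the trust policy file name are just example values of my own.
# Trust policy letting the EKS service assume the role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eksClusterRole \
  --assume-role-policy-document file://eks-cluster-trust.json

# The EKS - Cluster use case boils down to attaching this managed policy
aws iam attach-role-policy \
  --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy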
You also need to create a node IAM role. The official docs page for this was broken at the time of writing, but I did find another page in the EKS documentation which describes how to create the node worker group role. The tl;dr here is to create a role based on the EC2 use case with the AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly policies attached.
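The CLI equivalent follows the same pattern as the cluster role above, except the trust policy allows ec2.amazonaws.com and the three managed policies get attached; the role and file names are again just examples.
# Trust policy allowing the worker node EC2 instances to assume the role
cat > eks-node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF

aws iam create-role \
  --role-name eksNodeRole \
  --assume-role-policy-document file://eks-node-trust.json

# Attach the three managed policies the worker nodes need
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name eksNodeRole \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done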
As far as installing the aws cli on macOS goes, the official documentation describes how to install the package by running the following:
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
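You can check the install worked by asking the cli for its version:
aws --version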
The aws cli isn't required for the next few sections, but it's best to get this out of the way as a prerequisite; it will be required at a later stage for authenticating and connecting to the cluster.
The next step is to actually create the new cluster. This can be done pretty easily in the AWS console using the defaults; just remember to select the cluster IAM role created earlier as the Cluster Service Role.
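If you would rather script the cluster creation, the equivalent CLI call looks roughly like the below; the cluster name, role ARN and subnet IDs are placeholders you would swap for your own values.
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111111111111:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222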
Once you've reviewed and created the cluster, it will take a few minutes to become available. At this point you will need to add a node group to the cluster, which can be done from the Compute tab.
When creating the node group, use the node IAM role that was created previously, choose the size and composition of the cluster you want, and create the node group.
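The console route is the easiest, but for completeness a rough CLI equivalent would look something like the following; the cluster name, node group name, role ARN, subnets and scaling numbers are all placeholder values.
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodes \
  --node-role arn:aws:iam::111111111111:role/eksNodeRole \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --scaling-config minSize=1,maxSize=3,desiredSize=2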
At this point, you will have the cluster running, but the only user that has access to it is the IAM user that you used to create it.
For my personal projects on AWS (and this is likely not the best way; I definitely don't claim to be an AWS expert), I use the root user to set things up and then use restricted IAM users with minimal permissions to run and access systems. The basic steps to give a less privileged user access in this way are outlined below.
1. Create a new IAM user and give it the AWS permissions it needs to see the cluster.
2. Configure the aws cli to authenticate as that user.
3. Edit the aws-auth ConfigMap to map the IAM user to a Kubernetes user.
4. Create RBAC roles and bindings so the Kubernetes user can actually do things in the cluster.
Creating a new IAM user is pretty easy so I'm not going to talk you through that process, but you do need to assign the proper permissions to the user once it has been created.
For my user, I added the AmazonEC2FullAccess policy as well as my own EKSFullAccess policy, which looks like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "eks:ListFargateProfiles",
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:DescribeFargateProfile",
        "eks:ListTagsForResource",
        "eks:ListUpdates",
        "eks:DescribeUpdate",
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
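If you want to create and attach this policy from the CLI instead of the console, something like the following should do it, assuming the JSON above has been saved to eks-full-access.json and your IAM user is called eks-user (both names are examples; the account id in the policy ARN is a placeholder for your own, which create-policy prints when it succeeds).
# Create the custom policy from the JSON document above
aws iam create-policy \
  --policy-name EKSFullAccess \
  --policy-document file://eks-full-access.json

# Attach the custom policy and the managed EC2 policy to the user
aws iam attach-user-policy \
  --user-name eks-user \
  --policy-arn arn:aws:iam::111111111111:policy/EKSFullAccess

aws iam attach-user-policy \
  --user-name eks-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess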
At this point the IAM user should be set up with permission to access the EKS cluster from the AWS side; it just needs the corresponding permissions and setup on the Kubernetes side.
I assume that you know how to create access keys for the root user. The main thing here is to remember to delete them after completing this process.
The first step is to configure the aws cli tool so that it can authenticate with AWS; the official documentation goes over authentication in more detail.
The configuration step is pretty simple: run the aws configure command and fill in the access key and secret you created temporarily, the default region, and the output format. The output format refers to the likes of json, csv, etc. and is a somewhat ambiguous prompt the first time round.
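For reference, the interactive session looks roughly like this; the values shown are obviously placeholders.
$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEACCESSKEY
AWS Secret Access Key [None]: example-secret-access-key
Default region name [None]: eu-west-2
Default output format [None]: json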
Once you have configured aws, it is simply a case of creating a new context in your kube config, which can be done with the following command:
aws eks --region CLUSTER_REGION update-kubeconfig --name CLUSTER_NAME
The docs on setting up kubectl go over this in more detail.
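Once the command has run, you can sanity check that the new context was added and selected with:
kubectl config current-context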
It's worth mentioning that if you have MFA on your root account (you do, right?), you may need to create a temporary session token in order for the update-kubeconfig command to work.
A token can be generated using the following command:
aws sts get-session-token --serial-number DEVICE_ARN --token-code CODE_FROM_DEVICE
Your DEVICE_ARN is available on the Security Credentials page in the AWS console, where you previously set up MFA, and the CODE_FROM_DEVICE is the current MFA code, like the one you would use when logging in. For example:
aws sts get-session-token --serial-number arn:aws:iam::111111111111:mfa/username --token-code 123456
This will output JSON containing a temporary Access Key Id, Secret Access Key and Session Token. As these are only temporary, it's easiest to add them to your environment like so:
export AWS_ACCESS_KEY_ID=example-access-key-as-in-previous-output
export AWS_SECRET_ACCESS_KEY=example-secret-access-key-as-in-previous-output
export AWS_SESSION_TOKEN=example-session-token-as-in-previous-output
These environment variables will be picked up by the AWS cli and will allow the update-kubeconfig command to succeed.
For more information on getting a session token, see the AWS docs.
aws-auth ConfigMap
At this point the next step is to map the new user's ARN to a user in Kubernetes. To do this you need to edit the aws-auth ConfigMap with the following command:
kubectl edit configmap aws-auth -n kube-system
You'll see the file looks something like the following.
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::500000000000:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
You just need to add a new mapUsers entry like so:
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::500000000000:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::500000000000:user/eks-user
      username: eks-user
At this point Kubernetes knows that the ARN arn:aws:iam::500000000000:user/eks-user maps to the user eks-user within the cluster. This doesn't give the user any permissions yet; that is the next step.
Kubernetes has two role approaches: cluster wide roles and bindings (ClusterRole and ClusterRoleBinding) and namespace specific ones (Role and RoleBinding).
In the case of this initial user, who is going to be the power / admin user, you definitely want to give them cluster wide (all namespaces) permissions.
To do this it's a case of first creating the role with a config such as the following:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-full-access
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
The next step is assigning the new ClusterRole to the user that you mapped in the aws-auth
ConfigMap. This can be done with the following config:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-user-cluster-access-binding
subjects:
  - kind: User
    name: eks-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-full-access
  apiGroup: rbac.authorization.k8s.io
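Assuming you have saved the two manifests above to files (the file names here are just examples), applying them is a kubectl apply away:
kubectl apply -f cluster-full-access.yaml
kubectl apply -f eks-user-cluster-access-binding.yaml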
The IAM user that was created should now have full permission to all resources within the cluster.
Switching from the root user to the new IAM user is a simple case of running aws configure again and submitting the IAM user's key details in place of the root user's. The kube config does not contain any user specific details, so you can easily switch between IAM users by just changing the aws configuration.
Once you have updated the configuration, running kubectl get all --all-namespaces should return everything that is running in the cluster. Hurrah!
Also, remember to delete the temporary root access keys created earlier. Do it now, before you forget…
At this point you should have a minimally privileged AWS user with full permissions to do anything within the Kubernetes cluster. Good work!!
The next step in setting up the cluster is to add an ingress controller, which will be covered in the next post.