The following is part of a series of posts called "Building a complete Kubernetes backed infrastructure".
This series of posts describes my approach to standing up a scalable infrastructure for hosting both internal and external software in a company setting.
This series focuses on AWS specifically as I have found EKS to be the most complicated Kubernetes provider to set up, but the principles and workflow should be easily applied to any other provider or on-prem situation.
Kubernetes allows for some pretty customizable permissions via its RBAC system. Giving all users the same privileges to resources, especially on important production systems, is a risk that just shouldn’t be accepted.
Luckily, with EKS you can bind IAM users to Kubernetes users, meaning you only have one authentication system to worry about instead of two. This does mean that you need to get into Kubernetes configs to map users, but it's not a difficult process.
The first step is to create the new IAM user if one does not already exist. This process is the same as for any other IAM user creation, but just make sure that they have the `AmazonEC2FullAccess` policy applied as well as this EKS policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "eks:ListFargateProfiles",
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:DescribeFargateProfile",
        "eks:ListTagsForResource",
        "eks:ListUpdates",
        "eks:DescribeUpdate",
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
```
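If you prefer the CLI over the console, creating the user and attaching both policies only takes a few commands. This is a minimal sketch; the user name `dev-user`, the inline policy name, and the `eks-policy.json` file name are placeholders for your own values:

```sh
# Create the new IAM user (name is a placeholder)
aws iam create-user --user-name dev-user

# Attach the managed EC2 policy mentioned above
aws iam attach-user-policy \
  --user-name dev-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess

# Attach the EKS policy above as an inline policy,
# assuming it has been saved locally as eks-policy.json
aws iam put-user-policy \
  --user-name dev-user \
  --policy-name eks-read-only \
  --policy-document file://eks-policy.json
```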
Once created, you can supply the AWS login details to the owning user, who can then create a new access key and utilize `kubectl` after running `aws configure`. For more information on setting up `kubectl`, see this post under the 'Configuring kubectl access' section.
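As a quick recap, the new user's setup looks something like the following. The cluster name and region here are assumptions; substitute your own:

```sh
# Store the new user's access key and secret locally
aws configure

# Generate a kubeconfig entry for the cluster
# (cluster name and region are placeholders)
aws eks update-kubeconfig --name my-cluster --region eu-west-2
```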
You then need to map this new IAM user to a user in Kubernetes. This was also covered in the previous post on setting up the Kubernetes cluster, but for our case, let's assume we want to give a user restricted permissions so that they can only access the 'development' namespace, primarily stopping them from changing anything in the 'production' namespace.
Firstly, you need to add this IAM user to the `aws-auth` ConfigMap, which can be edited via:

```sh
kubectl edit configmap aws-auth -n kube-system
```
This will bring up the current ConfigMap in your default editor, looking something like the below:
```yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::500000000000:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::500000000000:user/existing-user
      username: existing-user
```
You just need to add a new `mapUsers` entry like so, replacing the `arn` in this example with the correct one:
```yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::500000000000:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::500000000000:user/existing-user
      username: existing-user
    - userarn: arn:aws:iam::500000000000:user/USERNAME_OF_USER_YOU_JUST_CREATED
      username: USERNAME_OF_USER_YOU_JUST_CREATED
```
Kubernetes now knows that the user with the ARN `arn:aws:iam::500000000000:user/USERNAME_OF_USER_YOU_JUST_CREATED` is identified as `USERNAME_OF_USER_YOU_JUST_CREATED`, but at the moment there are no permissions applied to this user.
As previously mentioned, we want to give the user access to only the 'development' namespace. To do this you need to create a `Role` and a `RoleBinding`. These should not be confused with `ClusterRole` and `ClusterRoleBinding`, which are very similar in behavior but are not namespaced resources and so control global RBAC rules. If you want to apply global permissions, use the Cluster variations, but for namespace permissions such as in the given example, the non-cluster resource types will be used.
Firstly, create a `Role` that gives full access to a namespace like the following:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-full-access
  namespace: development
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
```
You can obviously change the rules here as well. As an example, if you wanted to build a dashboard that uses `kubectl` to get its data (not the best way of doing things, but an example use case), you might want to restrict this rule set to only read-only verbs such as `get`, `list`, and `watch`.
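For instance, a read-only variant of the above might look like this. A minimal sketch; the name `namespace-read-only` is just an assumption for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-read-only
  namespace: development
rules:
  # Allow viewing resources but not creating, updating, or deleting them
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
```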
Another gotcha is that, as `Role` resources are namespaced, you will need to apply the `Role` to every namespace in which you wish to use it. As good practice, keep the rules the same for every `Role` with the same name across namespaces; trust me, you'll thank me later.
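If you have more than a couple of namespaces, a small loop saves some typing. A minimal sketch, assuming the `Role` manifest is saved as `role.yaml` with its `metadata.namespace` field removed so that `-n` can supply the namespace instead, and that the namespace names are placeholders:

```sh
# Apply the same Role definition to several namespaces
for ns in development staging testing; do
  kubectl apply -f role.yaml -n "$ns"
done
```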
The next step is to bind this `Role` to the new user using a `RoleBinding`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-full-access-binding
  namespace: development
subjects:
  - kind: User
    name: USERNAME_OF_USER_YOU_JUST_CREATED
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: namespace-full-access
  apiGroup: rbac.authorization.k8s.io
```
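Apply both resources as usual (the file names here are just assumptions):

```sh
kubectl apply -f role.yaml -f rolebinding.yaml
```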
At this point, once the new user has set up their `kubectl` properly, the following should be true:

```sh
# Succeeds
$ kubectl get all --namespace development

# Fails
$ kubectl get all --namespace production
```
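You can also verify the permissions without touching any real resources using `kubectl auth can-i`:

```sh
# Should print "yes"
$ kubectl auth can-i get pods --namespace development

# Should print "no"
$ kubectl auth can-i get pods --namespace production
```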
It's best to keep these configurations committed to version control, in line with general Kubernetes best practice. This means that when you wish to add another member to the namespace with the same role bindings, you simply extend the subjects list:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-full-access-binding
  namespace: development
subjects:
  - kind: User
    name: USERNAME_OF_USER_YOU_JUST_CREATED
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: another-iam-user-that-you-are-adding-later
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: namespace-full-access
  apiGroup: rbac.authorization.k8s.io
```
It is definitely worth mentioning here as well that this type of access control only addresses access through the Kubernetes API, i.e. via `kubectl`. It does not block access to the services themselves if the caller can reach the cluster via VPN or other network routes. RBAC only dictates Kubernetes-level access control, not access at the service level.
From here, creating different `Role` configurations for user groups and assigning them to users via `RoleBinding` configurations is a straightforward process, and the rules configuration allows for some very specific control.
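As the number of users grows, it is often cleaner to map IAM users into a Kubernetes group in `aws-auth` and bind the `Role` to that group once, rather than listing every user in every `RoleBinding`. A minimal sketch; the group name `developers` is an assumption:

```yaml
# In the aws-auth ConfigMap, assign the user to a group
mapUsers: |
  - userarn: arn:aws:iam::500000000000:user/USERNAME_OF_USER_YOU_JUST_CREATED
    username: USERNAME_OF_USER_YOU_JUST_CREATED
    groups:
      - developers
```

Then bind the `Role` to the group instead of to individual users:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-full-access-developers
  namespace: development
subjects:
  # Every IAM user mapped into the "developers" group gets this Role
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: namespace-full-access
  apiGroup: rbac.authorization.k8s.io
```

With this in place, adding a new team member only requires a change to `aws-auth` rather than edits to every `RoleBinding` in every namespace.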