Kubernetes authentication with AWS IAM.

Today I am going to demonstrate how you can leverage existing AWS IAM infrastructure to enable fine-grained authentication (authN) and authorization (authZ) for your Kubernetes (k8s) cluster. We will look at how to use aws-iam-authenticator to map AWS IAM roles to k8s RBAC groups, and how to enable the authenticator with both kops and kubeadm. We will set up two groups, one with admin rights and a second with view-only access, but based on this example you will be able to create as many groups as you wish, with fine-grained control.

One of the key problems once you start using k8s in your organization is how your users are going to authenticate to the cluster.
When a k8s cluster is first provisioned, it uses x509 client certificates as the default authentication mechanism, meaning that to let users access the cluster you need to share the key and certificate with them. Apart from being insecure, this does not let you use the fine-grained RBAC authorization that comes with k8s.

k8s comes with many authN mechanisms, but if you are using AWS, chances are you already have your users divided into specific groups with specific IAM roles, like admins, ops, developers, testers, viewers, etc. So all you have to do is map those groups to k8s RBAC groups. As a result, you will have users who can only view cluster resources like pods and deployments, users who can deploy those resources, like the members of your build team, and users who can administer the cluster, like your ops and admin team.

So let’s create a list of the things we will need to do:

1. Create example IAM users, groups and roles.
2. Configure kops to use aws-iam-authenticator.
3. Map AWS roles to RBAC.
4. Set up kubectl to authN to the cluster using IAM roles.
5. Configure kubeadm to use aws-iam-authenticator.

1. Create example IAM users, groups and roles.

First things first, let's create two groups, with a single user in each:

aws iam create-group --group-name k8s-admin;
aws iam create-user --user-name k8s_admin_user;
aws iam add-user-to-group --user-name k8s_admin_user --group-name k8s-admin;

aws iam create-group --group-name k8s-view;
aws iam create-user --user-name k8s_view_user;
aws iam add-user-to-group --user-name k8s_view_user --group-name k8s-view;

Let's also create access keys for those users; you will need them at the end, when we use kubectl to authenticate to the cluster:

aws iam create-access-key --user-name k8s_view_user;
aws iam create-access-key --user-name k8s_admin_user;

Save the generated AccessKeyId and SecretAccessKey somewhere:

{
    "AccessKey": {
        "UserName": "k8s_view_user",
        "AccessKeyId": "AKIAJ7N6YA2622Z5YGFFD",
        "Status": "Active",
        "SecretAccessKey": "+fsfsfsfsfww//7iWLY4olrkWwFfp+sdfsfsf",
        "CreateDate": "2018-07-29T17:40:27.251Z"
    }
}
{
    "AccessKey": {
        "UserName": "k8s_admin_user",
        "AccessKeyId": "AKIAILYJ6DIQGKZPADSR",
        "Status": "Active",
        "SecretAccessKey": "sdsds/MK+iqnm2gWjFO1oj4t8Ihtsfsfssfs",
        "CreateDate": "2018-07-29T17:40:22.588Z"
    }
}

Now set up those users in your .aws/credentials file as two different profiles:


[profile-k8s_admin_user]
aws_access_key_id = AKIAILYJ6DIQGKZPADSR
aws_secret_access_key = sdsds/MK+iqnm2gWjFO1oj4t8Ihtsfsfssfs
region = eu-west-2


[profile-k8s_view_user]
aws_access_key_id = AKIAJ7N6YA2622Z5YGFFD
aws_secret_access_key = +fsfsfsfsfww//7iWLY4olrkWwFfp+sdfsfsf
region = eu-west-2
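
Before going further, it's worth a quick sanity check that both profiles work; get-caller-identity should return each user's ARN:

aws sts get-caller-identity --profile profile-k8s_view_user;
aws sts get-caller-identity --profile profile-k8s_admin_user;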

Next we need to create the roles:


# Replace 228426479489 with your account id.

POLICY='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::228426479489:root"},"Action":"sts:AssumeRole","Condition":{}}]}'

aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Kubernetes admin role for Heptio Authenticator" \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn';

aws iam create-role \
  --role-name KubernetesView \
  --description "Kubernetes view role for Heptio Authenticator" \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn';


 

Now we need to grant permissions to assume the roles. Create two policy files first:

cat k8s-assume-k8s-admin-role-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "123",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::228426479489:role/KubernetesAdmin"
            ]
        }
    ]
}

cat k8s-assume-k8s-view-role-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "123",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::228426479489:role/KubernetesView"
            ]
        }
    ]
}

aws iam create-policy --policy-name k8s-assume-admin-role --policy-document file://k8s-assume-k8s-admin-role-policy.json;
aws iam create-policy --policy-name k8s-assume-view-role --policy-document file://k8s-assume-k8s-view-role-policy.json;

And then attach policies to groups:

aws iam attach-group-policy --policy-arn arn:aws:iam::228426479489:policy/k8s-assume-admin-role --group-name k8s-admin;
aws iam attach-group-policy --policy-arn arn:aws:iam::228426479489:policy/k8s-assume-view-role --group-name k8s-view;

So now users in the k8s-admin group will be able to assume the KubernetesAdmin role, and users in k8s-view the KubernetesView role accordingly.
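
As a quick check that the trust policy and the group policies line up, each user should now be able to assume their own role (an AccessDenied here means one of the policies above is wrong):

aws sts assume-role \
  --role-arn arn:aws:iam::228426479489:role/KubernetesView \
  --role-session-name sanity-check \
  --profile profile-k8s_view_user;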

2. How to configure kops to use aws-iam-authenticator

This article assumes you have a k8s cluster up and running; if not, here is a very simple way to set it up using kops.

Now let's update our k8s cluster with webhook token authentication.

By default, your kops cluster config will look something like the output below after running:

kops get k8s.ifritltd.co.uk -oyaml

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-09-09T11:48:09Z
  name: k8s.ifritltd.co.uk
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://kayan-kops-state2/k8s.ifritltd.co.uk
  dnsZone: k8s.ifritltd.co.uk
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-2a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-2a
      name: a
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.10.3
  masterPublicName: api.k8s.ifritltd.co.uk
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: eu-west-2a
    type: Public
    zone: eu-west-2a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---
...

Let's update it. To do so, run:

kops edit cluster k8s.ifritltd.co.uk

and add the following lines to the spec section of the Cluster:

  kubeAPIServer:
    authenticationTokenWebhookConfigFile: /srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
  hooks:
  - name: kops-hook-authenticator-config.service
    before:
      - kubelet.service
    roles: [Master]
    manifest: |
      [Unit]
      Description=Download AWS Authenticator configs from S3
      [Service]
      Type=oneshot
      ExecStart=/bin/mkdir -p /srv/kubernetes/heptio-authenticator-aws
      ExecStart=/usr/local/bin/aws s3 cp --recursive s3://kayan-kops-state2/k8s.ifritltd.co.uk/addons/authenticator /srv/kubernetes/heptio-authenticator-aws/

Please note this won't update the cluster immediately, so before we apply these changes, let's explain what we are doing here.

The first addition, kubeAPIServer, will start your API server with the following flag:

--authentication-token-webhook-config-file=/srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml

To prove it actually works, let's check the API server flags before and after our changes. If you check the API server now:

ssh -i /Users/kayanazimov/.ssh/id_rsa_kops admin@api.k8s.ifritltd.co.uk ps aux | grep api

you should get something like below:


root      2360  2.3 26.5 444652 269976 ?       Ssl  11:51   0:41 /usr/local/bin/kube-apiserver --allow-privileged=true --anonymous-auth=false --apiserver-count=1 --authorization-mode=RBAC --basic-auth-file=/srv/kubernetes/basic_auth.csv --bind-address=0.0.0.0 --client-ca-file=/srv/kubernetes/ca.crt --cloud-provider=aws --enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota --etcd-quorum-read=false --etcd-servers-overrides=/events#http://127.0.0.1:4002 --etcd-servers=http://127.0.0.1:4001 --insecure-bind-address=127.0.0.1 --insecure-port=8080 --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP --proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.cert --proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key --requestheader-allowed-names=aggregator --requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.cert --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=443 --service-cluster-ip-range=100.64.0.0/13 --storage-backend=etcd2 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key --token-auth-file=/srv/kubernetes/known_tokens.csv --v=2

We will repeat this after the changes are applied.

What authentication-token-webhook-config-file means is: when you authenticate to your cluster with kubectl using a token generated by the client side of iam-authenticator, the API server will forward that token to the external service configured in the specified file, kubeconfig.yaml. The service will then return either a failure, or a success along with the username and RBAC groups of the authenticated user.
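
Under the hood this is the standard Kubernetes TokenReview webhook exchange. Roughly (the payloads below are an illustrative sketch, not captured traffic), the API server POSTs the bearer token to the authenticator, which replies with the identity and groups to use for RBAC:

# API server -> authenticator
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "spec": { "token": "k8s-aws-v1...." }
}

# authenticator -> API server (on success)
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      "username": "kubernetes-view:1536504570564538372",
      "groups": ["kub-view"]
    }
  }
}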

The second addition, hooks, will prepare everything the iam-authenticator service needs in order to function: a certificate, a key, and the kubeconfig.yaml file. Obviously all of these files need to exist prior to updating the cluster, so let's prepare them.

All we need to do is download the iam-authenticator binary, generate those files, and then copy them to S3:

export KOPS_STATE_STORE=s3://kayan-kops-state2
export CLUSTER_NAME=k8s.ifritltd.co.uk

wget https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_0.3.0_darwin_amd64 && \
 chmod +x heptio-authenticator-aws_0.3.0_darwin_amd64 && \
 mv heptio-authenticator-aws_0.3.0_darwin_amd64 /usr/local/bin/heptio-authenticator-aws

heptio-authenticator-aws init -i $CLUSTER_NAME
aws s3 cp cert.pem ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/cert.pem;
aws s3 cp key.pem ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/key.pem;
aws s3 cp heptio-authenticator-aws.kubeconfig ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/kubeconfig.yaml;

You can do this on your local machine, as I did: specify the kops state S3 bucket name and the cluster name, download the binary, generate the config, and copy the generated files into the kops state store.
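
If you are curious what the generated kubeconfig.yaml contains, it is a standard webhook-authentication kubeconfig that points the API server at the authenticator's local endpoint. Roughly along these lines (treat this as a sketch; the exact paths and certificate fields come from the generator):

apiVersion: v1
kind: Config
clusters:
- name: aws-iam-authenticator
  cluster:
    certificate-authority: /srv/kubernetes/heptio-authenticator-aws/cert.pem   # CA for the authenticator's serving cert
    server: https://127.0.0.1:21362/authenticate
users:
- name: apiserver
contexts:
- name: webhook
  context:
    cluster: aws-iam-authenticator
    user: apiserver
current-context: webhook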

So hopefully the changes mentioned in the hooks section now make more sense, as they copy the authenticator's config files from S3 into /srv/kubernetes/heptio-authenticator-aws/. It may still seem a bit confusing why we need all these files, as nothing is using them yet. That is because we haven't yet seen the authenticator.yaml file which will deploy iam-authenticator to the cluster. Because cluster changes take some time, now that the authenticator config is generated and ready in the S3 bucket, let's apply the cluster changes first, and then look at the iam-authenticator resources:


kops update cluster --yes
kops rolling-update cluster --yes

Once the cluster is updated, DNS changes can take a while, so I'd rather SSH using the IP address from the AWS console:

ssh -i /Users/kayanazimov/.ssh/id_rsa_kops admin@35.178.174.140 ps aux | grep authentication-token-webhook

root      2413 10.9 30.0 428036 306208 ?       Ssl  13:03   0:09 /usr/local/bin/kube-apiserver --allow-privileged=true --anonymous-auth=false --apiserver-count=1 --authentication-token-webhook-config-file=/srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml --authorization-mode=RBAC --basic-auth-file=/srv/kubernetes/basic_auth.csv --bind-address=0.0.0.0 --client-ca-file=/srv/kubernetes/ca.crt --cloud-provider=aws --enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota --etcd-quorum-read=false --etcd-servers-overrides=/events#http://127.0.0.1:4002 --etcd-servers=http://127.0.0.1:4001 --insecure-bind-address=127.0.0.1 --insecure-port=8080 --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP --proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.cert --proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key --requestheader-allowed-names=aggregator --requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.cert --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=443 --service-cluster-ip-range=100.64.0.0/13 --storage-backend=etcd2 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key --token-auth-file=/srv/kubernetes/known_tokens.csv --v=2

As you can see, the API server is now running with the authentication-token-webhook-config-file flag.

The next thing we need is to run iam-authenticator as a container on the cluster, so let's look at its resources:

# authenticator.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: heptio-authenticator-aws
  labels:
    k8s-app: heptio-authenticator-aws
data:
  config.yaml: |    
    clusterID: k8s.ifritltd.co.uk
    server:
      mapRoles:
      - roleARN: arn:aws:iam::228426479489:role/KubernetesAdmin
        username: kubernetes-admin:{{SessionName}}
        groups:
        - system:masters
      - roleARN: arn:aws:iam::228426479489:role/KubernetesView
        username: kubernetes-view:{{SessionName}}
        groups:
        - kub-view


---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: heptio-authenticator-aws
  labels:
    k8s-app: heptio-authenticator-aws
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: heptio-authenticator-aws
    spec:
      # run on the host network (don't depend on CNI)
      hostNetwork: true

      # run on each master node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists

      # run `heptio-authenticator-aws server` with three volumes
      # - config (mounted from the ConfigMap at /etc/heptio-authenticator-aws/config.yaml)
      # - state (persisted TLS certificate and keys, mounted from the host)
      # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
      containers:
      - name: heptio-authenticator-aws
        image: gcr.io/heptio-images/authenticator:v0.3.0
        args:
        - server
        - --config=/etc/heptio-authenticator-aws/config.yaml
        - --state-dir=/var/heptio-authenticator-aws
        - --kubeconfig-pregenerated

        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m

        volumeMounts:
        - name: config
          mountPath: /etc/heptio-authenticator-aws/
        - name: state
          mountPath: /var/heptio-authenticator-aws/
        - name: output
          mountPath: /etc/kubernetes/heptio-authenticator-aws/
      volumes:
      - name: config
        configMap:
          name: heptio-authenticator-aws
      - name: output
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/
      - name: state
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/

So the container needs to run next to each API server, hence it is a DaemonSet. It has the --kubeconfig-pregenerated flag set, as we prepared all the required config files in advance. The other option, generate-kubeconfig, is mentioned in the docs, but it causes an API server failure due to a circular dependency: the authenticator, as a container, requires the API server, while the API server requires the webhook file in order to start and fails otherwise. So even though you can get it working in theory, if you want to be able to provision your cluster from scratch in an immutable fashion, meaning you can destroy all masters and then provision them again, the --kubeconfig-pregenerated option is the one to go with.
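
If you want to double-check that the kops hook actually delivered the pregenerated files to the master, a quick look over SSH should show all three of them (substitute your own key and master address):

ssh -i /Users/kayanazimov/.ssh/id_rsa_kops admin@api.k8s.ifritltd.co.uk 'ls /srv/kubernetes/heptio-authenticator-aws/'
# expect: cert.pem  key.pem  kubeconfig.yaml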

3. Map AWS roles to RBAC

Finally, the ConfigMap part of the yaml file creates the mapping between the roles we created earlier and two groups, system:masters and kub-view. The first entry maps the KubernetesAdmin AWS IAM role to the system:masters group, which is bound to the default cluster-admin RBAC role, as you can see by running:

kubectl get ClusterRoleBinding cluster-admin -n kube-system -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-09-09T11:52:18Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "86"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: cda77ec1-b426-11e8-96f4-06250e6be7b6
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters

The group in the second mapping doesn't exist yet:

      - roleARN: arn:aws:iam::228426479489:role/KubernetesView
        username: kubernetes-view:{{SessionName}}
        groups:
        - kub-view

It maps the KubernetesView IAM role to the group kub-view, so before we can use that group, we first need to create a new ClusterRoleBinding:

cat heptio-kub-view-rolebinding.yaml
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heptio-kub-view-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kub-view

To do so, just run:

kubectl apply -f heptio-kub-view-rolebinding.yaml

Now we have everything in place to deploy iam-authenticator:

kubectl apply -f authenticator.yaml

And confirm it is up and running:

kubectl get pod  -n kube-system | grep authenticator
heptio-authenticator-aws-p5nt2                                       1/1       Running   0          1m

kubectl logs  heptio-authenticator-aws-p5nt2  -n kube-system
time="2018-09-09T13:29:01Z" level=info msg="mapping IAM role" groups="[system:masters]" role="arn:aws:iam::228426479489:role/KubernetesAdmin" username="kubernetes-admin:{{SessionName}}"
time="2018-09-09T13:29:01Z" level=info msg="mapping IAM role" groups="[kub-view]" role="arn:aws:iam::228426479489:role/KubernetesView" username="kubernetes-view:{{SessionName}}"
time="2018-09-09T13:29:01Z" level=info msg="loaded existing keypair" certPath=/var/heptio-authenticator-aws/cert.pem keyPath=/var/heptio-authenticator-aws/key.pem
time="2018-09-09T13:29:01Z" level=info msg="listening on https://127.0.0.1:21362/authenticate"
time="2018-09-09T13:29:01Z" level=info msg="reconfigure your apiserver with `--authentication-token-webhook-config-file=/etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml` to enable (assuming default hostPath mounts)"

All looks good!

4. Set up kubectl to authN to the cluster using IAM roles.

Now we have everything needed to authN to the cluster on the server side, but we still need to configure the client side.
Let's first confirm the authenticator can generate an STS token on behalf of both users; that will also test that the users can assume the roles created earlier:

export AWS_PROFILE=profile-k8s_view_user

heptio-authenticator-aws token -r arn:aws:iam::228426479489:role/KubernetesView -i k8s.ifritltd.co.uk | jq
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFTSUFUS0wySE82QVNGQTNZSjMzJTJGMjAxODA5M..."
  }
}

heptio-authenticator-aws token -r arn:aws:iam::228426479489:role/KubernetesAdmin -i k8s.ifritltd.co.uk | jq
could not get token: AccessDenied: User: arn:aws:iam::228426479489:user/k8s_view_user is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::228426479489:role/KubernetesAdmin
	status code: 403, request id: 5d2ddb92-b436-11e8-b3c0-fb55aff13393


export AWS_PROFILE=profile-k8s_admin_user
heptio-authenticator-aws token -r arn:aws:iam::228426479489:role/KubernetesAdmin -i k8s.ifritltd.co.uk | jq
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFTSUFUS0wySE82QTM0MlhOUEdMJTJGMjAxODA5MDklMkZ1cy1lYXN0LTElMkZzdHMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDE4MDkwOVQxMzQ0NTJaJlgtQW16LUV4cGlyZXM9NjAmWC1BbXotU2VjdXJpdHktVG9rZW49RlFvR1pYSXZZWGR6RU1mJTJGJTJGJTJGJTJGJTJGJTJGJTJGJTJGJTJGJTJGd0VhREFIcmlJJTJGNG9leWZjRTMxZkNMM0FkZTN5bUdxWUtEbU42NmFSMzRrR0dCMHdjS1A3eVFBaU5BM....."
  }
}	

All good: the view user can only assume the KubernetesView role and fails for KubernetesAdmin, as expected.

Now we need to configure kubeconfig with a new user, which will execute the heptio-authenticator-aws binary to authenticate to AWS and generate the token, which will then be passed to the API server for further processing. So far, we have been using the default kubeconfig, which should look similar to this:


cat .kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: pQThXWllGRmFBY2VLYThYalp4d2JVUmdYM3Zz.....
    server: https://api.k8s.ifritltd.co.uk
  name: k8s.ifritltd.co.uk

contexts:
- context:
    cluster: k8s.ifritltd.co.uk
    user: k8s.ifritltd.co.uk
  name: k8s.ifritltd.co.uk

current-context: k8s.ifritltd.co.uk
kind: Config
preferences: {}
users:
- name: k8s.ifritltd.co.uk
  user:
    client-certificate-data: LS0tLS1CRUdJTi....
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBL...
    password: ...
    username: admin

We can now update the contexts section so that the k8s.ifritltd.co.uk context uses a new user:

kubectl config set-context ${CLUSTER_NAME} --user=${CLUSTER_NAME}.exec

Now it will look like below:


contexts:
- context:
    cluster: k8s.ifritltd.co.uk
    user: k8s.ifritltd.co.uk.exec
  name: k8s.ifritltd.co.uk

You can also update it manually and name the user whatever you want. The most important part is configuring this user:


- name: k8s.ifritltd.co.uk.exec
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - k8s.ifritltd.co.uk
      - -r
      - arn:aws:iam::228426479489:role/KubernetesView
      command: heptio-authenticator-aws
      env: null

Add the snippet above to the users section.

Now, when using kubectl, it will use the k8s.ifritltd.co.uk.exec user when connecting to the cluster:


export AWS_PROFILE=profile-k8s_view_user

 kubectl get pod  -n kube-system
NAME                                                                 READY     STATUS    RESTARTS   AGE
dns-controller-576469b45b-wtcrk                                      1/1       Running   0          1h
etcd-server-events-ip-172-20-60-10.eu-west-2.compute.internal        1/1       Running   0          1h
etcd-server-ip-172-20-60-10.eu-west-2.compute.internal               1/1       Running   0          1h
heptio-authenticator-aws-p5nt2                                       1/1       Running   0          1h


kubectl run busybox --image=busybox --port 8080 \
     -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
     echo 'smallest http server'; } | nc -l -p  8080; done"

Error from server (Forbidden): deployments.apps is forbidden: User "kubernetes-view:1536504570564538372" cannot create deployments.apps in the namespace "default"
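
You can also check permissions up front, without attempting any change, with kubectl auth can-i (still as the view user):

kubectl auth can-i list pods
yes

kubectl auth can-i create deployments
no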

As you can see, getting the list of pods works fine, while trying to create a deployment fails when using profile-k8s_view_user with the KubernetesView role. Now let's switch to the admin user:

export AWS_PROFILE=profile-k8s_admin_user

And then update .kube/config to use the KubernetesAdmin role, as shown below, and try again.
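
Only the -r argument of the exec user changes; everything else stays the same:

- name: k8s.ifritltd.co.uk.exec
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - k8s.ifritltd.co.uk
      - -r
      - arn:aws:iam::228426479489:role/KubernetesAdmin
      command: heptio-authenticator-aws
      env: null

With that in place, rerun the deployment: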

kubectl run busybox --image=busybox --port 8080 \
     -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
     echo 'smallest http server'; } | nc -l -p  8080; done"
deployment.apps/busybox created

As you can see, users assuming the KubernetesAdmin role can successfully make any changes to the cluster.

The cool thing is that, from a client configuration point of view, you provide exactly the same config to all users; the only difference is the role they will assume, and all authentication is delegated to your existing AWS infrastructure.

5. How to configure kubeadm to use aws-iam-authenticator

The last thing I would like to show is how to configure the cluster if you are using kubeadm.
First of all, you will need to update the kubeadm MasterConfiguration with an additional API server argument:

kind: MasterConfiguration
apiServerExtraArgs:
  authentication-token-webhook-config-file: "/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml"
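
A fuller sketch might look like the following. Note this is an assumption on my part rather than a tested config: the apiVersion depends on your kubeadm release, and apiServerExtraVolumes is only there because the webhook file must be visible inside the API server static pod; check the field names against your kubeadm version.

apiVersion: kubeadm.k8s.io/v1alpha2   # use the config API version matching your kubeadm release
kind: MasterConfiguration
apiServerExtraArgs:
  authentication-token-webhook-config-file: "/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml"
apiServerExtraVolumes:
- name: aws-iam-authenticator
  hostPath: /srv/kubernetes/heptio-authenticator-aws
  mountPath: /etc/kubernetes/aws-iam-authenticator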

Obviously, you will need to generate and then copy aws-iam-authenticator's key, cert and config to the location defined in the volumes section of authenticator.yaml; in our example it was /srv/kubernetes/heptio-authenticator-aws/:

      volumes:
      - name: config
        configMap:
          name: heptio-authenticator-aws
      - name: output
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/
      - name: state
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/

So you will copy them into the /srv/kubernetes/heptio-authenticator-aws/ folder somewhere in your systemd config, before 'kubeadm init' runs:

mkdir -p /srv/kubernetes/heptio-authenticator-aws
heptio-authenticator-aws init -i k8s-cluster
mv cert.pem  /srv/kubernetes/heptio-authenticator-aws/
mv key.pem  /srv/kubernetes/heptio-authenticator-aws/
mv heptio-authenticator-aws.kubeconfig  /srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
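
Wrapped in a oneshot unit, similar in spirit to the kops hook earlier, that could look roughly like this (a sketch: it assumes init writes its output to the current directory, as in the manual steps above, and that the binary is in /usr/local/bin):

[Unit]
Description=Generate aws-iam-authenticator config
Before=kubelet.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'mkdir -p /srv/kubernetes/heptio-authenticator-aws && \
  cd /srv/kubernetes/heptio-authenticator-aws && \
  /usr/local/bin/heptio-authenticator-aws init -i k8s-cluster && \
  mv heptio-authenticator-aws.kubeconfig kubeconfig.yaml'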

As you can see, it is pretty straightforward.

That is basically it. I hope you enjoyed reading and that this post helps you set up authN and authZ for your k8s cluster.
Here is the list of documentation I was following; it may help you understand more as well:

https://ifritltd.com/2017/12/18/how-to-setup-kubernetes-cluster-on-aws-with-kops/
https://github.com/kubernetes-sigs/aws-iam-authenticator
https://kubernetes.io/docs/reference/access-authn-authz/webhook/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings