Tag: kubernetes

Kubernetes authentication with AWS IAM.

Today I am going to demonstrate how you can leverage your existing AWS IAM infrastructure to enable fine-grained authentication (authN) and authorization (authZ) for your Kubernetes (k8s) cluster. We will look at how to use aws-iam-authenticator to map AWS IAM roles to k8s RBAC groups, and how to enable authentication with kops and kubeadm. We will set up two groups, one with admin rights and one with view-only access, but based on this example you will be able to create as many groups as you wish, with fine-grained control.

One of the key problems once you start using k8s at your organization is how your users are going to authenticate to the cluster.
When a k8s cluster is first provisioned, it sets up x509 client certificates as the default authentication mechanism, meaning that in order to let your users use the cluster you need to share the key and certificate with them. Apart from being insecure, this will not let you use the fine-grained RBAC authorization
which comes with k8s.

k8s comes with many authN mechanisms, but if you are using AWS, chances are your users are already divided into groups with specific IAM roles: admins, ops, developers, testers, viewers, and so on. So all you have to do is map those groups onto k8s RBAC groups. As a result, you will have users who can only view cluster resources such as pods and deployments, users who can deploy those resources (for example, members of your build team), and users who can set up the cluster itself (your ops and admin teams).

So let’s create a list of the things we will need to do (a sketch of the role mapping follows the list):

1. Create example IAM users, groups and roles.
2. Configure kops to use aws-iam-authenticator.
3. Map AWS IAM roles to k8s RBAC groups.
4. Set up kubectl to authenticate to the cluster using IAM roles.
5. Configure kubeadm to use aws-iam-authenticator.
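
As a preview of steps 3 and 4, here is a minimal sketch of what the mapping looks like. Every concrete name below (cluster ID, AWS account ID, role names, the view-only group) is a placeholder assumption for illustration, not a value from this post:

➜  ~ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
data:
  config.yaml: |
    # placeholder cluster ID; must match the -i flag passed to the authenticator
    clusterID: my-cluster.example.com
    server:
      mapRoles:
      # anyone who can assume the admin IAM role becomes a cluster admin
      - roleARN: arn:aws:iam::123456789012:role/KubernetesAdmin
        username: kubernetes-admin
        groups:
        - system:masters
      # anyone who can assume the viewer IAM role lands in a read-only RBAC group
      - roleARN: arn:aws:iam::123456789012:role/KubernetesViewer
        username: kubernetes-viewer
        groups:
        - view-only
EOF

On the client side, kubectl is configured to call out to aws-iam-authenticator, which exchanges your IAM identity for a token the API server can verify:

➜  ~ aws-iam-authenticator token -i my-cluster.example.com -r arn:aws:iam::123456789012:role/KubernetesAdmin

Note that system:masters is recognised by k8s out of the box, while the view-only group would additionally need a ClusterRoleBinding to the built-in view ClusterRole.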


How to set up scaling and autoscaling in Kubernetes.

Today I am going to show how to scale docker containers on Kubernetes, and you will see how easy it is.
Then we will look at how pods can be autoscaled based on CPU utilisation as performance degrades.


1. Deploying a simple stack to k8s.
2. Scaling the deployment manually.
3. Autoscaling in k8s based on CPU utilisation.

1. Deploying a simple stack to k8s.

If you don’t have Kubernetes installed on your machine, the ‘Installing Kubernetes on MacOS’ article below demonstrates how easily this can be achieved; it literally takes a few minutes to set up.

So let’s create a deployment of a simple test http server container:

  
➜  ~ kubectl  run busybox --image=busybox --port 8080  \
         -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
         env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p  8080; done"
deployment "busybox" created

I have also set it up so that it returns its hostname in the response to an HTTP GET request; we will need this to distinguish
responses from different instances later on. Once deployed, we can check our deployment and pod status:

➜  ~ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
busybox   1         1         1            1           3m
➜  ~ kubectl get pod
NAME                       READY     STATUS        RESTARTS   AGE
busybox-7bcdf6684b-jnp6w   1/1       Running       0          18s
➜  ~

As you can see, its current ‘DESIRED’ state equals 1.

The next step is to expose our deployment through a service so it can be queried from outside the cluster:

➜  ~ kubectl expose deployment busybox --type=NodePort
service "busybox" exposed

This will expose our endpoint:

➜  ~ kubectl get endpoints
NAME         ENDPOINTS         AGE
busybox      172.17.0.9:8080   23s

Once that is done, we can ask our cluster manager tool to give us its API URL:

➜  ~ minikube service busybox --url
http://192.168.99.100:31623

If we query it, we will get the pod’s hostname in the response:

➜  ~ curl http://192.168.99.100:31623
busybox-7bcdf6684b-jnp6w

2. Scaling the deployment manually.
Now our deployment is ready to be scaled:
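
The post continues beyond this excerpt, but the core commands are short enough to sketch here (the replica counts and CPU threshold below are illustrative assumptions, not values from the original post):

# scale the deployment out to 3 replicas manually
➜  ~ kubectl scale deployment busybox --replicas=3

# or let k8s scale it between 1 and 5 replicas, targeting 50% CPU utilisation
➜  ~ kubectl autoscale deployment busybox --min=1 --max=5 --cpu-percent=50

Repeating the curl from above should now return different hostnames as requests land on different replicas; note that the autoscaler needs cluster metrics (heapster, at the time) to act on CPU utilisation.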


Installing Kubernetes on MacOS

I am assuming you have VirtualBox installed on your Mac.

To test most of the stuff on k8s you don’t need multiple nodes; running a one-node cluster is pretty much all you need.

First we need to install kubectl, a tool for interacting with a Kubernetes cluster:

➜  ~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl \
  && chmod +x ./kubectl \
  && sudo mv ./kubectl /usr/local/bin/kubectl
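
As a quick sanity check (assuming /usr/local/bin is on your PATH), the client should now report its version:

# prints the kubectl client version only; no cluster is needed yet
➜  ~ kubectl version --client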

Then we need Minikube – a tool that provisions and manages single-node Kubernetes clusters:

➜  ~ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-darwin-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/

Now we can start the VM:

➜  ~ minikube  start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Let’s check everything is working:
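
The excerpt ends here, but a typical smoke test at this point would be along these lines (exact output varies by version, so it is omitted):

# the single minikube node should report a Ready status
➜  ~ kubectl get nodes

# the core components in kube-system should all be Running
➜  ~ kubectl get pods -n kube-system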


Creating Kubernetes Jobs.

Sometimes you need to run a container to execute a specific task and then stop it.

Normally in Kubernetes, if you just try to run it, kubectl will actually create a deployment,
meaning your container will keep running all the time. That is because by default kubectl runs with the --restart="Always" policy.
So if you don’t want to create a yaml file where you specify the resource kind as Job, but simply want to use kubectl run, you can set the restart policy to 'OnFailure'.
Let’s run a simple container as a job. It is a simple web crawler which I wrote for one of my job interviews; it has many bugs and is incomplete,
but sometimes it actually works 🙂 So let’s run it:

➜  ~ kubectl run crawler --restart=OnFailure --image=kayan/web-crawler \
 -- http://www.gamesyscorporate.com http://www.gamesyscorporate.com 3
job "crawler" created

Now we can check the state of the pod:

➜  ~ kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
crawler-k57bh   0/1       ContainerCreating   0          2s

It will take a while, as it needs to download the image first. To check progress, run:

kubectl describe pod crawler

You should see something like below:
