Tag: k8s

Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

Today I am going to show how you can fully automate the advanced process of setting up a highly available k8s cluster in the cloud. We will go through a set of terraform and bash scripts which should be sufficient for you to literally just run terraform plan/apply and get your HA etcd and k8s cluster up and running without any hassle.

    Part 0 – Intro.
    Part 1 – Setting up HA ETCD cluster.
    Part 2 – The PKI infra.
    Part 3 – Setting up k8s cluster.

Part 0 – Intro.

If you do a bit of research on how to set up a k8s cluster, you will find quite a lot of ways it can be achieved.
But in general, all these ways can be grouped into 4 types:

1) No setup
2) Easy Set up
3) Advanced Set up
4) Hard way

By No setup I simply mean something like EKS: it is a managed service, so you don’t need to maintain or care about the details, as AWS does it all for you. I have never used it, so I can’t say much about that one.

Easy setup – tools like kops and the like make it quite easy, a “couple of commands” kind of setup:

kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro \
  --master-size=t2.micro --dns-zone=k8s.ifritltd.net --cloud aws

All you need to do is set up an S3 bucket and DNS records and run the command above, which I described two years ago in this article.

The downside is that, first of all, it is mainly for AWS only, and it generates all AWS resources as it sees fit: it would create security groups, ASGs, etc. in its own way, which means that
if you already have terraform-managed infra with your own rules, strategies and framework, it won’t fit into that model but will just be added as some kind of alien infra. Long story short, if you want fine-grained control over how your infra is managed from a single centralised terraform setup, it isn’t the best solution, yet it is still an easy and balanced tool.

Before I start explaining the Advanced Set up, I will just mention that the 4th, the Hard way, is probably only good if you want to learn how k8s works and how all the components interact with each other. As it doesn’t use any external tool to set up components, you do everything manually, so you literally get to know all the guts of the system. Obviously, it could become a nightmare to support such a system in production unless all members of the ops team are k8s experts or there are requirements not supported by other bootstrapping tools.

Finally the Advanced Set up.


Kubernetes authentication with AWS IAM.

Today I am going to demonstrate how you can leverage existing AWS IAM infrastructure to enable fine-grained authentication (authN) and authorization (authZ) to your Kubernetes (k8s) cluster. We will look at how to use aws-iam-authenticator to map AWS IAM roles to k8s RBAC and how to enable authentication with kops and kubeadm. We will set up two groups, one with admin rights and a second with view-only access, but based on the given example you will be able to create as many groups as you wish, with fine-grained control.

One of the key problems once you start using k8s at your organization is how you are going to authenticate to your cluster.
When a k8s cluster is first provisioned, it sets up x509 certificates as the default authentication mechanism, meaning that in order to let your users use the cluster you need to share the key and certificate, which, apart from being insecure, will not let you use the fine-grained RBAC authorization
that comes with k8s.

k8s comes with many authN mechanisms, but if you are using AWS, you most likely already have your users divided into specific groups with specific IAM roles, like admins, ops, developers, testers, viewers, etc. So all you have to do is map those groups to k8s RBAC groups. As a result, you will have users who can view cluster resources like pods and deployments, those who can deploy such resources, like the members of your build team, and those who can set up the cluster, like the ops and admin team.
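To make that concrete, here is a minimal sketch of what such a mapping can look like in an aws-iam-authenticator server config. The account ID, role names and clusterID below are placeholders, not values from this article:

```shell
# Sketch of an aws-iam-authenticator config; all ARNs and names are placeholders.
cat > authenticator-config-example.yaml <<'EOF'
clusterID: example.k8s.local
server:
  mapRoles:
    # anyone assuming KubernetesAdmin becomes a full admin via system:masters
    - roleARN: arn:aws:iam::123456789012:role/KubernetesAdmin
      username: kubernetes-admin
      groups:
        - system:masters
    # anyone assuming KubernetesViewer lands in a group we can bind to "view"
    - roleARN: arn:aws:iam::123456789012:role/KubernetesViewer
      username: kubernetes-viewer
      groups:
        - view-only
EOF
grep -c 'roleARN' authenticator-config-example.yaml   # -> 2
```

Note that a group like view-only has no effect by itself: it only grants anything once an RBAC binding exists for that group name.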

So let’s create a list of the things we will need to do:

1. Create example IAM users, groups and roles.
2. Configure kops to use aws-iam-authenticator.
3. Map AWS IAM roles to RBAC.
4. Set up kubectl to authN to the cluster using IAM roles.
5. Configure kubeadm to use aws-iam-authenticator.
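The RBAC half of step 3 is an ordinary binding. As a sketch (the group name here is a placeholder and must match whatever group your IAM mapping emits), binding a group to the built-in view ClusterRole looks like this:

```shell
# Sketch: bind a mapped group to the built-in read-only "view" ClusterRole.
cat > view-only-binding-example.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-only-binding
subjects:
  - kind: Group
    name: view-only                # must match a group from the IAM mapping
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                       # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
EOF
# kubectl apply -f view-only-binding-example.yaml   # requires cluster access
```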


Provisioning prepackaged stacks easily on Kubernetes with helm

Today I am going to show how to provision prepackaged k8s stacks with helm.

So what is Helm? Here is literally what its page says: a tool for managing Kubernetes charts, where charts are packages of pre-configured Kubernetes resources. So imagine you want to provision some stack, ELK for example. Although there are many ways to do it (here is one I did as an example for Jenkins logs, not on k8s but with plain docker), instead of reinventing the wheel you can just provision it using helm.

So let’s just do it instead of talking.

Go to the download page and get the right version from https://github.com/kubernetes/helm

Then untar it, move it to the right dir and run:

➜  tar -xzvf helm-v2.7.2-darwin-amd64.tar.gz

➜   mv darwin-amd64/helm /usr/local/bin


➜   helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init
..
...

So let’s do what it asks.
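After `helm init` installs Tiller into the cluster, the usual next steps look roughly like this (Helm 2-era syntax, matching the version downloaded above; the chart and release names are just examples, not from this post):

```shell
helm repo update                                    # refresh the stable chart index
helm search elasticsearch                           # look for ELK-related charts
helm install stable/elasticsearch --name logging    # install a chart as a named release
helm ls                                             # list installed releases
```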


How to setup Kubernetes cluster on AWS with kops

Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).

In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:

0. Setup a VM with CentOS Linux as a control center.
1. Install and configure AWS cli to manage AWS resources.
2. Install and configure kops to manage provisioning of k8s cluster and AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and setup ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.
5. (Update September 2018) Set up authentication in Kubernetes with AWS IAM Authenticator(heptio).
6. (Update June 2019) The advanced way: Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

0. Setup a VM with CentOS Linux

Even though I am using macOS, sometimes it is annoying that you can’t run certain commands or some arguments are different, so let’s spin up a Linux VM first. I chose CentOS this time; you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:

➜  kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|

  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end

end

Let’s start it up and log on:
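Bringing the box up and logging in is the standard Vagrant workflow, using the machine name defined in the Vagrantfile above:

```shell
vagrant up kops       # provision the centos/7 box defined above
vagrant ssh kops      # log on to it
```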


Installing Kubernetes on MacOS

I am assuming you have VirtualBox installed on your Mac.

To test most of the stuff on k8s you don’t need multiple nodes; running a one-node cluster is pretty much all you need.

First we need to install kubectl, a tool to interact with a Kubernetes cluster:

➜  ~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl \
  && chmod +x ./kubectl \
  && sudo mv ./kubectl /usr/local/bin/kubectl

Then we need Minikube – which is a tool that provisions and manages single-node Kubernetes clusters:

➜  ~ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-darwin-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/

Now we can start the VM:

➜  ~ minikube  start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Let’s check everything is working:
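A few standard checks might look like this (the exact output will vary by version):

```shell
minikube status       # VM and cluster components should be Running
kubectl get nodes     # a single node named "minikube" in Ready state
kubectl cluster-info  # addresses of the master and cluster services
```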


Creating Kubernetes Jobs.

Sometimes you need to run a container to execute a specific task and then stop it.

Normally in Kubernetes, if you just try to run it, it will actually create a deployment,
meaning your container will keep running all the time. That is because by default kubectl runs with the --restart="Always" policy.
So if you don’t want to create a yaml file where you specify the pod ‘kind’ as Job, but simply use kubectl run, you can set the restart policy to ‘OnFailure’.
Let’s run a simple container as a job. It is a simple web crawler which I wrote for one of my job interviews; it has many bugs and is incomplete,
but sometimes it actually works 🙂 So let’s run it:

➜  ~ kubectl run crawler --restart=OnFailure --image=kayan/web-crawler \
 -- http://www.gamesyscorporate.com http://www.gamesyscorporate.com 3
job "crawler" created

Now we can check the state of the pod:

➜  ~ kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
crawler-k57bh   0/1       ContainerCreating   0          2s

It will take a while, as it needs to download the image first. To check progress, run:

kubectl describe pod crawler

You should see something like below:
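Once the image is pulled and the crawler finishes, the job and its output can be inspected with standard commands (the pod name suffix will differ on every run):

```shell
kubectl get jobs              # SUCCESSFUL should become 1 when the run is done
kubectl logs crawler-k57bh    # crawler output; substitute your own pod name
kubectl delete job crawler    # clean up the job and its pod afterwards
```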


Running smallest test http server container

Sometimes we want to quickly run some container and check an http connection to it; I used to use nginx for this.
If your internet connection is not super fast, or you want something really, really quick, or nginx just doesn’t work for you
for some reason, here is what you can use – a combination of busybox and netcat:

➜  ~ docker run -d --rm -p 8080:8080 --name webserver busybox \
	 sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
	 echo 'smallest http server'; } | nc -l -p 8080; done"

031cb2f4c0ecab22b3af574ab09a28dbfcb9e654e9a2d04fb421bb7ebacdff1f

➜  ~ curl localhost:8080
smallest http server

Let’s check its size:

➜  ~ docker images nginx | grep alpine
nginx               1.13.6-alpine       5c6da346e3d6        3 weeks ago         15.5MB
➜  ~
➜  ~
➜  ~ docker images busybox
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
➜  ~

It is just 1.13MB as opposed to 15.5MB for the nginx alpine image.

You can run the same on Kubernetes as described below:
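As one possible sketch (not necessarily the exact manifest the original post used), the busybox one-liner wraps into a Pod manifest like this, written to a file so it can be inspected before applying:

```shell
# Sketch: the busybox/netcat server above as a k8s Pod; names are examples.
cat > webserver-pod-example.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
    - name: webserver
      image: busybox
      ports:
        - containerPort: 8080
      command: ["sh", "-c"]
      args:
        - |
          while true; do
            { echo -e 'HTTP/1.1 200 OK\r\n'; echo 'smallest http server'; } | nc -l -p 8080
          done
EOF
# kubectl apply -f webserver-pod-example.yaml          # requires a cluster
# kubectl port-forward pod/webserver 8080:8080 &
# curl localhost:8080
```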
