
Category: Docker

Provisioning prepackaged stacks easily on Kubernetes with helm

Today I am going to show how to provision prepackaged k8s stacks with helm.

So what is Helm? Here is literally what its page says: a tool for managing Kubernetes charts, where charts are packages of pre-configured Kubernetes resources. So imagine you want to provision some stack, ELK for example. There are many ways to do it (here is one I did as an example for Jenkins logs, although not on k8s but with plain Docker), but instead of reinventing the wheel you can just provision it with helm.

So let’s just do it instead of talking.

Go to the download page and get the right version from https://github.com/kubernetes/helm

Then untar it, move the binary to the right dir and run:

➜  tar -xzvf helm-v2.7.2-darwin-amd64.tar.gz

➜   mv darwin-amd64/helm /usr/local/bin


➜   helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init
..
...

So let’s do what it asks.
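
Running helm init installs Tiller (Helm’s server side, in Helm v2) into the cluster, after which you can search for and install charts. Here is a minimal sketch of where that leads; the elasticsearch chart is just an illustrative example and output is omitted:

➜   helm init
➜   helm repo update
➜   helm search elasticsearch
➜   helm install stable/elasticsearch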

How to set up a Kubernetes cluster on AWS with kops

Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).

In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:

0. Setup a VM with CentOS Linux as a control center.
1. Install and configure the AWS CLI to manage AWS resources.
2. Install and configure kops to manage provisioning of the k8s cluster and the AWS resources required by it.
3. Create a hosted zone in AWS Route53 and set up an ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.

0. Setup a VM with CentOS Linux

Even though I am using MacOS, it is sometimes annoying that you can’t run certain commands or that some arguments are different, so let’s spin up a Linux VM first. I chose CentOS this time; you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:

➜  kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|

  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end

end

Let’s start it up and log on:
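
A minimal sketch of that step, assuming the Vagrantfile above is in the current directory:

➜  kops vagrant up
➜  kops vagrant ssh kops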

How to set up scaling and autoscaling in Kubernetes.

Today I am going to show how to scale docker containers on Kubernetes, and you will see how easy it is.
Then we will look at how pods can be autoscaled based on performance degradation and CPU utilisation.


1. Deploy simple stack to k8s
2. Scaling the deployment manually.
3. Autoscaling in k8s based on CPU Utilisation.

1. Deploy simple stack to k8s

If you don’t have Kubernetes installed on your machine, in this article I demonstrate how easily this can be achieved on MacOS; it literally takes a few minutes to set up.

So let’s create a deployment of a simple test http server container:

  
➜  ~ kubectl  run busybox --image=busybox --port 8080  \
         -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
         env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p  8080; done"
deployment "busybox" created

I have also set it up so that it returns its hostname in the response to an HTTP GET request; we will need this to distinguish
responses from different instances later on. Once deployed, we can check our deployment and pod status:

➜  ~ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
busybox   1         1         1            1           3m
➜  ~ kubectl get pod
NAME                       READY     STATUS        RESTARTS   AGE
busybox-7bcdf6684b-jnp6w   1/1       Running       0          18s
➜  ~

As you can see, its current ‘DESIRED’ state equals 1.

The next step is to expose our deployment through a service so it can be queried from outside the cluster:

➜  ~ kubectl expose deployment busybox --type=NodePort
service "busybox" exposed

This will expose our endpoint:

➜  ~ kubectl get endpoints
NAME         ENDPOINTS         AGE
busybox      172.17.0.9:8080   23s

Once that is done, we can ask our cluster manager tool to give us its API URL:

➜  ~ minikube service busybox --url
http://192.168.99.100:31623

If we query it, we will get its hostname in the response:

➜  ~ curl http://192.168.99.100:31623
busybox-7bcdf6684b-jnp6w

2. Scaling the deployment manually.
Now our deployment is ready to be scaled:
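
Here is a minimal sketch of the manual step, together with the autoscaling command covered in section 3; the replica count and the autoscaler bounds are arbitrary:

➜  ~ kubectl scale deployment busybox --replicas=3
➜  ~ kubectl autoscale deployment busybox --min=1 --max=5 --cpu-percent=50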

Installing Kubernetes on MacOS

I am assuming you have VirtualBox installed on your Mac.

To test most of the stuff on k8s you don’t need multiple nodes; running a one-node cluster is pretty much all you need.

First we need to install kubectl, a tool to interact with a Kubernetes cluster:

➜  ~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl \
  && chmod +x ./kubectl \
  && sudo mv ./kubectl /usr/local/bin/kubectl

Then we need Minikube, a tool that provisions and manages single-node Kubernetes clusters:

➜  ~ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-darwin-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/

Now we can start the VM:

➜  ~ minikube  start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Let’s check that everything is working:
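
A quick sketch of such a check (output omitted, and it will vary by version):

➜  ~ minikube status
➜  ~ kubectl get nodes
➜  ~ kubectl cluster-info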

Creating Kubernetes Jobs.

Sometimes you need to run a container to execute a specific task and then stop it.

Normally in Kubernetes, if you just try to run a container, it will actually create a deployment,
meaning your container will keep running all the time. That is because by default kubectl runs with the ‘--restart="Always"’ policy.
So if you don’t want to create a yaml file where you specify the pod ‘kind’ as a Job, but simply want to use kubectl run, you can set the restart policy to ‘OnFailure’ (the yaml alternative is sketched below for comparison).
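
A rough sketch of that yaml alternative, assuming the same image and arguments as the kubectl run command further down:

apiVersion: batch/v1
kind: Job
metadata:
  name: crawler
spec:
  template:
    spec:
      containers:
      - name: crawler
        image: kayan/web-crawler
        args: ["http://www.gamesyscorporate.com", "http://www.gamesyscorporate.com", "3"]
      restartPolicy: OnFailure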
Let’s run a simple container as a job. It is a simple web crawler which I wrote for one of my job interviews; it has many bugs and is incomplete,
but sometimes it actually works 🙂 So let’s run it:

➜  ~ kubectl run crawler --restart=OnFailure --image=kayan/web-crawler \
 -- http://www.gamesyscorporate.com http://www.gamesyscorporate.com 3
job "crawler" created

Now we can check the state of the pod:

➜  ~ kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
crawler-k57bh   0/1       ContainerCreating   0          2s

It will take a while, as it needs to download the image first. To check on the progress, run:

kubectl describe pod crawler

You should see something like below:

Running smallest test http server container

Sometimes we want to quickly run some container and check an HTTP connection to it; I used to use nginx for this.
If your internet connection is not super fast, or if you want something really, really quick, or nginx just doesn’t work for you
for some reason, here is what you can use instead: a combination of busybox and netcat:

➜  ~ docker run -d --rm -p 8080:8080 --name webserver busybox \
	 sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
	 echo 'smallest http server'; } | nc -l -p  8080; done"
	 
031cb2f4c0ecab22b3af574ab09a28dbfcb9e654e9a2d04fb421bb7ebacdff1f

➜  ~ curl localhost:8080
smallest http server

Let’s check its size:

➜  ~ docker images nginx | grep alpine
nginx               1.13.6-alpine       5c6da346e3d6        3 weeks ago         15.5MB
➜  ~
➜  ~
➜  ~ docker images busybox
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
➜  ~

It is just 1.13MB as opposed to 15.5MB for nginx alpine.

You can run the same on Kubernetes, as described below:
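
Here is a sketch of the Kubernetes equivalent, mirroring the kubectl run command from the scaling post above:

➜  ~ kubectl run webserver --image=busybox --port 8080 \
	 -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
	 echo 'smallest http server'; } | nc -l -p 8080; done"
➜  ~ kubectl expose deployment webserver --type=NodePort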

Implementing Service Discovery with Consul, Registrator and Nginx in a Dockerized environment.

Today we are going to look at how we can benefit from modern devops tools to implement simple Service Discovery.
What is Service Discovery? To put it very simply, it is a combination of scripts or tools which help to discover certain
properties of deployable applications, like IP address, port, etc., so deployment can be automated.

I remember in one of my previous jobs, we used to come to the office at 6am for the release. It was fun…
So the ops guys would configure the reverse proxy with all the configuration required for the new app, like its ports, then add the new app, take the old application out of the reverse proxy’s pool, then restart the proxy. A very tedious process. After it was all done, they would run many tests to confirm everything was looking good. The flow would look something like the diagram below:

Old way of doing things manually

Nowadays the software development world looks different: applications run as docker containers and deployments happen multiple times a day.

Today I will try to demonstrate how to configure the reverse proxy automatically, so that no matter what IP address or port the application server is running at, everything will be wired up for us, and we only deploy or remove the application when needed:

Service Discovery with Consul, Registrator and Nginx in a Dockerized environment

Of course, this is just a concept to show how specific devops tools can be put to use; in real life, docker orchestration tools like Rancher or Kubernetes, with their built-in mechanisms, will either take care of Service Discovery or make it much easier.

But I just wanted to show how we can do it piece by piece, so we know what is going on and how things work.

So here is a list of the things we are going to do:

  1. How to Dockerize a simple NodeJs app
  2. How to use Consul as a service discovery tool for storing container data in a KV storage
  3. How to use Registrator as a service discovery tool for inspecting containers (a quick sketch of items 2 and 3 follows this list)
  4. How to use nginx as a reverse proxy
  5. How to use Consul-template for configuring nginx automatically
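
To give a taste of items 2 and 3, here is a rough sketch of how Consul and Registrator can be started as plain containers; it is based on the projects’ quickstarts, and the image tags and flags are assumptions that may vary:

➜  ~ docker run -d --name=consul -p 8500:8500 consul agent -server -bootstrap -ui -client=0.0.0.0
➜  ~ docker run -d --name=registrator --net=host \
	 -v /var/run/docker.sock:/tmp/docker.sock \
	 gliderlabs/registrator consul://localhost:8500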

We are going to start from

/var/run/docker.sock

If you have been using docker for a while you may have noticed that some containers require bind mounting /var/run/docker.sock.

Or have you ever wondered why, when the docker engine/daemon is off, you get the following message when running:

docker ps

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

So what does it mean?

The docker.sock file is a Unix socket, which is basically how processes in Unix communicate with each other to share data.
In case of docker
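
For a quick demonstration of the socket in action (a sketch; curl 7.40+ supports --unix-socket), you can query the Docker API through it directly, which is essentially what docker ps does under the hood:

curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json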

Running Ansible as Docker container

Today I am going to show how to put ansible on docker. You may ask why? Well, for many reasons: first of all, pure curiosity about how to do it; second,
you may end up in an environment where you don’t have ansible installed, nor
permission to install anything, but are free to pull docker images, a sort of immutable infrastructure.

Apart from learning how to dockerize a tool, you will also have a chance to play with ansible and ansible-playbook, which are among the most used devops tools these days.

So after a bit of googling I found out how to install ansible; it is a couple of lines of bash script. With this information in hand, all we
have to do is put the script into a Dockerfile, so I created a file called Dockerfile.ansible.cnf:

FROM ubuntu

USER root

RUN \
  apt-get update && \
  apt-get install -y software-properties-common && \
  apt-add-repository ppa:ansible/ansible && \
  apt-get update && \
  apt-get install -y --force-yes ansible

RUN mkdir /ansible
WORKDIR /ansible

Let’s create an image and tag it:

docker build -f Dockerfile.ansible.cnf -t myansible .

Time to test it:

docker run --name myansible --rm    myansible  ansible --version
ansible 2.4.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

Nice work, let’s continue.
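
As a sketch of where to go next: a playbook from the host can be run by bind mounting the current directory onto the image’s /ansible workdir. Note that site.yml here is a hypothetical playbook name:

# site.yml is a hypothetical playbook sitting in the current directory
docker run --rm -v $(pwd):/ansible myansible ansible-playbook site.yml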

Dockerizing Jenkins build logs with ELK stack (Filebeat, Elasticsearch, Logstash and Kibana)

This is 4th part of Dockerizing Jenkins series, you can find more about previous parts here:

Dockerizing Jenkins, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins, part 2: Deployment with maven and JFrog Artifactory
Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin

Today we are going to look at managing the Jenkins build logs in a dockerized environment.

Normally, in order to view the build logs in Jenkins, all you have to do is go to the particular job and check the logs. Depending on the log rotation configuration, the logs could be kept for N builds, days, etc., meaning the old jobs’ logs will eventually be lost.

Our aim in this article will be persisting the logs in a centralised fashion, just like any other application logs, so they can be searched, viewed and monitored from a single location.

We will also be running Jenkins in Docker, meaning that if the container is dropped, and no other measures are in place (like mounting a volume for the logs from the host and backing them up), the logs will be lost.

As you may have already heard, one of the best solutions when it comes to logging is called ELK stack.

The idea with the ELK stack is that you collect logs with Filebeat (or any other *beat), parse and filter them with Logstash, then send them to Elasticsearch for persistence, and view them in Kibana.

On top of that, because Logstash is a heavyweight JRuby app on the JVM, you either skip it altogether or pair it with a much smaller application called Filebeat, a Logstash log forwarder: all it does is collect the logs and send them to Logstash for further processing.

In fact, if you don’t have any filtering or parsing requirements, you can skip Logstash altogether and use Filebeat’s Elasticsearch output to send the logs directly to Elasticsearch.
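
To make the two options concrete, here is a rough sketch of a minimal filebeat.yml for either route; the Jenkins log path and the host names are assumptions, and the syntax follows the Filebeat 5.x era:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/jenkins_home/jobs/*/builds/*/log

# route 1: forward to logstash for parsing and filtering
output.logstash:
  hosts: ["logstash:5044"]

# route 2: no parsing needed, ship straight to elasticsearch instead
# output.elasticsearch:
#   hosts: ["elasticsearch:9200"]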

In our example we will try to use all of them. Plus, we won’t be running Filebeat in a separate container; instead, we will install it right inside our Jenkins image, because Filebeat is small enough. I also wanted to demonstrate how we can install anything into our Jenkins image, which makes it more interesting.

So the summary of what we are going to look at today is:

  1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline
  2. Download and install Filebeat on our Jenkins image
  3. Configure Filebeat so it knows how and where to collect the Jenkins logs and how to send them on to logstash
  4. Configure and run logstash in a docker container
  5. Configure and run elasticsearch in a docker container
  6. Configure and run kibana in a docker container

1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline

In this example we will use the Jenkins image we created earlier in part 3 of this series. First things first, let’s check out the project:

git clone https://github.com/kenych/dockerizing-jenkins && \
   cd dockerizing-jenkins && \
   git checkout dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin && \
   ./runall.sh

Let’s see what runall.sh does:
