
Tag: docker

Advanced Jenkins setup: Creating Jenkins configuration as code and setting up Kubernetes plugin

This blog post demonstrates how anything in Jenkins can be configured as code through its Java API using Groovy, and how changes can be applied right inside a Jenkins job. In particular I will demo how to configure the Kubernetes plugin and credentials, but the same concept can later be used to configure any Jenkins plugin you are interested in. We will also look at how to create custom config which can be applied either to all or only to specific Jenkins instances, so you can set up different instances differently based on security policy or any other criteria.
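
As a small taster of the Groovy route, one common way to apply such a snippet from outside is the Jenkins script console endpoint; a minimal sketch, assuming a local Jenkins and hypothetical admin credentials:

# print the short names of all installed plugins via the Jenkins Java API
curl -u admin:admin \
  --data-urlencode "script=Jenkins.instance.pluginManager.plugins.each { println it.shortName }" \
  http://localhost:8080/scriptText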

The Why…

Recently I have been working on a task to improve the deployment of our master Jenkins instances on Kubernetes.
One of the requirements was to improve the speed, as we have more than 40 Jenkins masters running in different
environments like test, dev, pre-prod, perf, prod etc., deployed on a Kubernetes cluster on AWS. The deployment job took around an hour, involved downtime and required multiple steps.


Provisioning prepackaged stacks easily on Kubernetes with helm

Today I am going to show how to provision prepackaged k8s stacks with helm.

So what is Helm? Here is literally what its page says: a tool for managing Kubernetes charts, and charts are packages of pre-configured Kubernetes resources. So imagine you want to provision some stack, ELK for example. There are many ways to do it (here is one I did as an example for Jenkins logs, although not on k8s but with plain docker), but instead of reinventing the wheel you can just provision it using helm.

So let’s just do it instead of talking.

Go to the download page and get the right version from https://github.com/kubernetes/helm

Then untar it, move the binary to the right directory and run it:

➜  tar -xzvf helm-v2.7.2-darwin-amd64.tar.gz

➜   mv darwin-amd64/helm /usr/local/bin


➜   helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init
..
...

So let's do what it asks.
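
With helm v2 that means installing the Tiller server-side component and syncing the chart repos; a minimal sketch of a first session (chart names depend on what the stable repo offers at the time):

➜  helm init
➜  helm repo update
➜  helm search elasticsearch
➜  helm install stable/elasticsearch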


How to set up a Kubernetes cluster on AWS with kops

Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).

In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:

0. Set up a VM with CentOS Linux as a control center.
1. Install and configure the AWS cli to manage AWS resources.
2. Install and configure kops to manage the provisioning of the k8s cluster and the AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and set up an ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.
5. (Update September 2018) Set up authentication in Kubernetes with the AWS IAM Authenticator (heptio).
6. (Update June 2019) The advanced way: Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

0. Set up a VM with CentOS Linux

Even though I am using MacOS, it is sometimes annoying that you can't run certain commands or some arguments are different, so let's spin up a Linux VM first. I chose CentOS this time, you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:

➜  kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|

  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end

end

Let's start it up and log on:
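
With the Vagrantfile above, that is just the standard pair of commands:

➜  kops vagrant up
➜  kops vagrant ssh kops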


Installing Kubernetes on MacOS

I am assuming you have VirtualBox installed on your Mac.

To test most of the stuff on k8s you don't need multiple nodes; running a one-node cluster is pretty much all you need.

First we need to install kubectl, the command-line tool to interact with a Kubernetes cluster:

➜  ~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl \
  && chmod +x ./kubectl \
  && sudo mv ./kubectl /usr/local/bin/kubectl

Then we need Minikube, a tool that provisions and manages single-node Kubernetes clusters:

➜  ~ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-darwin-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/

Now we can start the VM:

➜  ~ minikube  start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Let’s check everything is working:
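
A quick way to verify is to query the node; the exact output may differ, but it should be along these lines:

➜  ~ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    1m        v1.8.0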


Creating Kubernetes Jobs.

Sometimes you need to run a container to execute a specific task and then stop it.

Normally in Kubernetes, if you just try to run it, it will actually create a deployment, meaning your container will keep running all the time. That is because by default kubectl runs with the --restart="Always" policy. So if you don't want to create a yaml file where you specify the pod kind as Job, but simply want to use kubectl run, you can set the restart policy to OnFailure. Let's run a simple container as a job. It is a simple web crawler which I wrote for one of my job interviews; it has many bugs and is incomplete, but sometimes it actually works 🙂 So let's run it:

➜  ~ kubectl run crawler --restart=OnFailure --image=kayan/web-crawler \
 -- http://www.gamesyscorporate.com http://www.gamesyscorporate.com 3
job "crawler" created
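
For the record, the declarative alternative is a manifest with kind set to Job; a minimal sketch of the equivalent, piped straight to kubectl, might look like this:

cat <<EOF | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: crawler
spec:
  template:
    spec:
      containers:
      - name: crawler
        image: kayan/web-crawler
        # same arguments as passed to kubectl run above
        args: ["http://www.gamesyscorporate.com", "http://www.gamesyscorporate.com", "3"]
      restartPolicy: OnFailure
EOF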

Now we can check the state of the pod:

➜  ~ kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
crawler-k57bh   0/1       ContainerCreating   0          2s

It will take a while, as it needs to download the image first. To check on it, run:

kubectl describe pod crawler

You should see something like below:


Running smallest test http server container

Sometimes we want to quickly run some container and check the http connection to it; I used to use nginx for this. If your internet connection is not super fast, or if you want something really, really quick, or nginx just doesn't work for you for some reason, here is what you can use: a combination of busybox and netcat:

➜  ~ docker run -d --rm -p 8080:8080 --name webserver busybox \
	 sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
	 echo 'smallest http server'; } | nc -l -p  8080; done"
	 
031cb2f4c0ecab22b3af574ab09a28dbfcb9e654e9a2d04fb421bb7ebacdff1f

➜  ~ curl localhost:8080
smallest http server

Let's check its size:

➜  ~ docker images nginx | grep alpine
nginx               1.13.6-alpine       5c6da346e3d6        3 weeks ago         15.5MB
➜  ~
➜  ~
➜  ~ docker images busybox
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
➜  ~

It is just about 1MB, as opposed to 15.5MB for nginx alpine.

You can run the same on Kubernetes, as described below:
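
A minimal sketch of the idea (assumptions: --restart=Never to get a bare pod instead of a deployment, and kubectl port-forward to reach it locally):

➜  ~ kubectl run webserver --restart=Never --image=busybox \
     -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
     echo 'smallest http server'; } | nc -l -p 8080; done"
➜  ~ kubectl port-forward webserver 8080:8080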


Running Ansible as Docker container

Today I am going to show how to put ansible on docker. You may ask why? Well, many reasons: first of all, pure curiosity about how to do it; second, you may end up in an environment where you don't have ansible installed nor permission to install anything, but are free to pull docker images, a sort of immutable infrastructure.

Apart from learning how to dockerize some tool, you will also have a chance to play with ansible and ansible-playbook, one of the most used devops tools these days.

So after a bit of googling I found out how to install ansible; it is a couple of lines of bash script. With this information in hand, all we have to do is put the script into a Dockerfile, so I created a file called Dockerfile.ansible.cnf:

FROM ubuntu

USER root

# install ansible from the official PPA
RUN \
  apt-get update && \
  apt-get install -y software-properties-common && \
  apt-add-repository ppa:ansible/ansible && \
  apt-get update && \
  apt-get install -y --force-yes ansible

# working directory for playbooks
RUN mkdir /ansible
WORKDIR /ansible

Let’s create an image and tag it:

docker build -f Dockerfile.ansible.cnf -t myansible .

Time to test it:

docker run --name myansible --rm    myansible  ansible --version
ansible 2.4.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
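
As a quick usage sketch, assuming a playbook called site.yml in the current directory (a hypothetical name), you could mount it over the image's /ansible workdir and run it:

docker run --rm -v $(pwd):/ansible myansible ansible-playbook site.yml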

Nice work, let’s continue.


Dockerizing Jenkins build logs with ELK stack (Filebeat, Elasticsearch, Logstash and Kibana)

This is the 4th part of the Dockerizing Jenkins series; you can find more about the previous parts here:

Dockerizing Jenkins, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins, part 2: Deployment with maven and JFrog Artifactory
Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin

Today we are going to look at managing the Jenkins build logs in a dockerized environment.

Normally, in order to view the build logs in Jenkins, all you have to do is go to a particular job and check the logs. Depending on the log rotation configuration, the logs may be kept for N builds, days, etc., meaning the logs of older jobs will be lost.

Our aim in this article will be persisting the logs in a centralised fashion, just like any other application logs, so they can be searched, viewed and monitored from a single location.

We will also be running Jenkins in Docker, meaning that if the container is dropped, and no other measures are in place, like mounting a volume for the logs from the host and backing them up, the logs will be lost.

As you may have already heard, one of the best solutions when it comes to logging is the ELK stack.

The idea with the ELK stack is that you collect logs with Filebeat (or any other *beat), parse and filter them with Logstash, then send them to Elasticsearch for persistence, and finally view them in Kibana.

On top of that, because Logstash is a heavyweight JRuby app on the JVM, you either skip it altogether or use a much smaller application called Filebeat, a Logstash log forwarder: all it does is collect the logs and send them on to Logstash for further processing.

In fact, if you don't have any filtering or parsing requirements, you can skip Logstash altogether and use Filebeat's Elasticsearch output to send the logs directly to Elasticsearch.
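
To make that concrete, here is a minimal filebeat.yml sketch for the Filebeat-to-Logstash route (the Jenkins log path and the hostnames below are illustrative assumptions, not the exact config from this post):

cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.prospectors:
- input_type: log
  paths:
    # assumed location of Jenkins build logs inside the Jenkins image
    - /var/jenkins_home/jobs/*/builds/*/log

output.logstash:
  hosts: ["logstash:5044"]

# alternatively, with no parsing requirements, ship straight to Elasticsearch:
# output.elasticsearch:
#   hosts: ["elasticsearch:9200"]
EOF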

In our example we will try to use all of them. Also, we won't be running Filebeat in a separate container; instead we will install it right inside our Jenkins image, because Filebeat is small enough. I also wanted to demonstrate how we can install anything into our Jenkins image, so it is more interesting.

So the summary of what we are going to look at today is:

  1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline
  2. Download and install Filebeat on our Jenkins image
  3. Configure Filebeat so it knows how and where to collect the Jenkins logs and how to send them on to logstash
  4. Configure and run logstash in a docker container
  5. Configure and run elasticsearch in a docker container
  6. Configure and run kibana in a docker container

1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline

In this example we will use the Jenkins image we created earlier in part 3 of this series. First things first, let's check out the project:

git clone https://github.com/kenych/dockerizing-jenkins && \
   cd dockerizing-jenkins && \
   git checkout dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin && \
   ./runall.sh

Let’s see what runall.sh does:


Dockerizing Jenkins 2, part 2: Deployment with maven and JFrog Artifactory

In the 1st part of this tutorial we looked at how to dockerize the installation of the Jenkins plugins and the java and maven tool setup in Jenkins 2, and created a declarative build pipeline for a maven project with test and SonarQube stages. In this part we will focus on the deployment.

Couldn't we simply add another stage for deployment in part 1, you may ask? Well, in fact deployment requires quite a few steps, including maven pom and settings file configuration, artifact repository availability, repository credentials encryption, etc. Let's add them to the list and then implement them step by step like we did in the previous session.

  • Running JFrog Artifactory on Docker
  • Configuring maven pom file
  • Configuring maven settings file
  • Using Config File Provider Plugin for persistence of maven settings
  • Dockerizing the installation and configuration process

If you are already familiar with the 1st part of this tutorial, created your project from scratch and are using your own repository, then you can just follow the steps as we go further; otherwise, if you are starting now, you can just clone/fork the work we did in the last example and then add the changes as they follow in the tutorial:

git clone https://github.com/kenych/jenkins_docker_pipeline_tutorial1 && cd jenkins_docker_pipeline_tutorial1 && ./runall.sh

Please note all steps have been tested on MacOS Sierra and Docker version 17.05.0-ce, so you should change them accordingly if you are using MS-DOS, FreeBSD etc 😉

The script above is going to take a while as it is downloading the java 7, java 8, maven, sonarqube and jenkins docker images, so please be patient 🙂 Once done, you should have Jenkins and Sonar up and running as we created them in part 1:

If you get errors about some port being busy, just use free ports from your host; I explain this here. Otherwise you can use dynamic ports, which is shown a bit later.

Chapter 1. Running JFrog Artifactory on Docker

So let's look at the first step. Obviously, if we want to test the deployment in our example, we need some place to deploy our artifacts to. We are going to use the limited open source version of JFrog Artifactory called "Artifactory OSS". Let's run it on Docker to see how easy it is to have your own artifact repo. Port 8081 on my machine was busy, so I had to run it on 8082; you should adjust according to the free ports available on your machine:

docker run --rm -p 8082:8081 --name artifactory docker.bintray.io/jfrog/artifactory-oss:5.4.4

Alternatively you can use dynamic ports.
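
A sketch of that approach: the -P flag tells Docker to map every exposed container port to a random free host port, and docker port shows what was assigned:

docker run --rm -P --name artifactory docker.bintray.io/jfrog/artifactory-oss:5.4.4
docker port artifactory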


Dockerizing Jenkins 2, Part 1: Declarative Build Pipeline With SonarQube Analysis


In this part I am going to demonstrate:
  • Running Jenkins on Docker
  • Automation of Jenkins plugin installation on Docker
  • Configuring java and maven tools on Jenkins, first manually and then via the groovy scripts
  • Automating the above step with Docker
  • Running Sonarqube on Docker
  • Setting up a java maven pipeline with unit test, test coverage and sonarqube analysis steps.
Next time, in part 2 (WIP), I am going to demonstrate everything you need for deployment:
  • How to run Artifactory repository on Docker
  • How to configure POM file for deployment
  • How to configure maven settings for deployment
  • Using maven deployment plugin
  • Setting up, configuring and dockerizing a couple of Jenkins plugins for keeping deployment credentials in a safe place and applying the maven settings file in the job

This is a practical example, so be ready to get your hands dirty. You can either follow this step-by-step guide, which would be really good for learning purposes as we will create everything from scratch, or, if you are lazy, just run the command below after reading, for the demo:

git clone https://github.com/kenych/jenkins_docker_pipeline_tutorial1 && cd jenkins_docker_pipeline_tutorial1 && ./runall.sh

but by the end you should be able to run the pipeline on a fully automated Jenkins Docker container.

As you may already know, with Jenkins 2 you can actually have your build pipeline right within your java project. So you can use your own maven java project in order to follow the steps in this article, as long as it is hosted on a git repository.

Everything will obviously be running on Docker, as it is the easiest way of deploying and running these tools.

So, let's see how to run Jenkins on Docker:

docker pull jenkins:2.60.1

While it is downloading in the background, let's see what we are going to do with it once it is done.

Default Jenkins comes quite naked and shows the suggested plugins installation wizard. We will choose that, then capture all the installed plugins, and then automate this manual step in the Docker image, following this simple rule throughout all the steps:

  1. set up manually
  2. then programmatically
  3. then automate with Docker

The image we are going to download is 600M, so you can make yourself a coffee and have a couple of sips before it is finished, while I take you through the steps we need to set up a build pipeline for a java project. Let's add them to the list and then look closer later:

  • Pull the code from scm
  • Configure java and maven
  • Run unit tests
  • Run static analysis
  • Send the report to Sonarqube for further processing
  • And finally, deploy the jar file to the repository (will be covered in the next part soon)
  • Optionally, we can also release it after each commit.

Once you have your image downloaded, let's run the container:

docker run -p 8080:8080 --rm --name myjenkins jenkins:2.60.1

Please note I used a specific tag; I am not using the latest tag, which is the default if you don't specify one, as I don't want anything to break in the future.

Also note we name the container so it is easier to refer to it later, as otherwise docker will name it randomly, and we added the --rm flag to delete the container once we stop it. This will ensure we are running Jenkins in an immutable fashion with everything configured on the fly, and if we want to preserve any data we will do it explicitly.
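
As a quick sanity check (a minimal sketch), you can confirm from another terminal that the container is up and that Jenkins is answering on port 8080:

docker ps --filter name=myjenkins
curl -I localhost:8080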
