
Ifrit LTD Posts

Running the smallest test HTTP server container

Sometimes we want to quickly run a container and check an HTTP connection to it; I used to use nginx for this.
If your internet connection is not super fast, or if you want something really, really quick, or nginx just doesn't work for you
for some reason, here is what you can use instead: a combination of busybox and netcat:

➜  ~ docker run -d --rm -p 8080:8080 --name webserver busybox \
	 sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
	 echo 'smallest http server'; } | nc -l -p  8080; done"
	 
031cb2f4c0ecab22b3af574ab09a28dbfcb9e654e9a2d04fb421bb7ebacdff1f

➜  ~ curl localhost:8080
smallest http server

Let's check its size:

➜  ~ docker images nginx | grep alpine
nginx               1.13.6-alpine       5c6da346e3d6        3 weeks ago         15.5MB
➜  ~
➜  ~
➜  ~ docker images busybox
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
➜  ~

It is just 1.13 MB, as opposed to 15.5 MB for nginx alpine.

You can run the same thing on Kubernetes too.
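As a rough, hedged sketch (assuming kubectl is configured against a cluster; this is not necessarily the exact manifest from the full post), the same one-liner can run as a bare pod and be tested with a port-forward:

kubectl run webserver --image=busybox --restart=Never --port=8080 -- \
  sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; echo 'smallest http server'; } | nc -l -p 8080; done"

# forward a local port to the pod and test it
kubectl port-forward webserver 8080:8080 &
curl localhost:8080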


Setting up Firewall and network troubleshooting in Linux with UFW, lsof, tcpdump, wireshark, rsyslog and vagrant

In this article I am going to demonstrate how to restrict incoming connections to a specific port in Ubuntu Linux. Then we will see how to check the logs on the server side for rejected client connection attempts, and how to troubleshoot the issue from the client's perspective by analysing TCP packets.
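To give a flavour of the kind of rule we will end up with, here is a hedged illustration (the port number 8080 is just an assumption for this example, not taken from the article):

sudo ufw deny 8080/tcp      # block incoming connections to port 8080
sudo ufw enable
sudo ufw status verbose     # confirm the rule is active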

By the end you should be more familiar with UFW (Uncomplicated Firewall), lsof, tcpdump and TCP three-way handshake analysis, Wireshark, nc, rsyslog, telnet, and vagrant.

The article assumes you have VirtualBox and vagrant installed; if you haven't, google it, it is very easy to set up.

First things first, you will need to set up two VMs; alternatively, you can use the host as the client, provided you can
use tcpdump on it:

cat Vagrantfile
Vagrant.configure(2) do |config|

  config.vm.synced_folder ".", "/vagrant"

  config.vm.define "sensu" do |m|
          m.vm.box = "ubuntu/trusty64"
          m.vm.hostname = "sensu"
          m.vm.network "private_network", ip: "192.168.2.212"
          m.vm.provider "virtualbox" do |v|
            v.memory = 1024
          end
  end

  config.vm.define "sensuclient" do |m|
          m.vm.box = "ubuntu/trusty64"
          m.vm.hostname = "sensuclient"
          m.vm.network "private_network", ip: "192.168.2.213"
          m.vm.provider "virtualbox" do |v|
            v.memory = 512
          end
  end

end

I have two servers now: sensu will be the server side, and sensuclient will be the one connecting to the sensu server.

Now that the two VMs are defined, let's start them and then connect to the server first:
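If you are following along, bringing both machines up and getting a shell on the server looks roughly like this (a minimal sketch):

vagrant up            # boots both sensu and sensuclient as defined in the Vagrantfile
vagrant status        # confirm both VMs are running
vagrant ssh sensu     # connect to the server VM first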


Implementing Service Discovery with Consul, Registrator and Nginx in a Dockerized environment.

Today we are going to look at how we can benefit from modern devops tools to implement simple Service Discovery.
What is Service Discovery? To put it very simply, it is a combination of scripts or tools which help discover certain
properties of deployable applications, like IP address and port, so deployment can be automated.

I remember in one of my previous jobs, we used to come to the office at 6am for the release. It was fun…
The ops guys would configure the reverse proxy with all the configuration required for the new app, like its ports, then add the new app, take the old application out of the reverse proxy's pool, and restart the proxy. A very tedious process. Once all that was done, they would run many tests to confirm everything looked good. The flow looked something like this diagram:

Old way of doing things manually

Nowadays the software development world looks rather different: applications run as docker containers and deployments happen multiple times a day.

Today I will try to demonstrate how to automate the reverse proxy configuration, so that no matter what IP address or port the application server is running on, everything is configured automatically, and we only have to deploy the application or remove it when needed:

Service Discovery with Consul, Registrator and Nginx in a Dockerized environment

Of course this is just a concept to show how specific devops tools can be leveraged; in real life, docker orchestration tools like Rancher or Kubernetes, with their built-in mechanisms, will either take care of Service Discovery for you or make it much easier.

But I just wanted to show how we can do it piece by piece, so we know what is going on and how things work.

So here is a list of the things we are going to do:

  1. How to Dockerize a simple NodeJS app
  2. How to use Consul as a service discovery tool for storing container data in a KV store
  3. How to use Registrator as a service discovery tool for inspecting containers
  4. How to use nginx as a reverse proxy
  5. How to use Consul-template for configuring nginx automatically
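To make the moving parts more concrete before we start, here is a hedged sketch of how Consul and Registrator are typically started on a single host (image tags and addresses are assumptions based on the tools' public docs, not this excerpt):

# Consul in dev mode, exposing its HTTP API/KV store on port 8500
docker run -d --name=consul --net=host consul agent -dev -client=0.0.0.0

# Registrator watches the local Docker socket and registers containers in Consul
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500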

We are going to start from


/var/run/docker.sock

If you have been using docker for a while, you may have noticed that some containers require bind mounting /var/run/docker.sock.

Or have you ever wondered why, when the docker engine/daemon is off, you get the following message when running:

docker ps

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

So what does it mean?

The docker.sock file is a Unix socket, which is basically how processes in Unix communicate with each other to share data.
In the case of docker
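As a rough illustration of what talking to the daemon through this socket looks like (a hedged sketch, assuming curl 7.40+ with --unix-socket support):

# list running containers by calling the Docker Engine API over the socket,
# which is roughly what "docker ps" does under the hood
curl --unix-socket /var/run/docker.sock http://localhost/containers/json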


Running Ansible as a Docker container

Today I am going to show how to put ansible on docker. You may ask why? Well, many reasons: first of all, pure curiosity about how to do it; second,
you may end up in an environment where you don't have ansible installed, nor
permission to install anything, but are free to pull docker images, a sort of immutable infrastructure.

Apart from learning how to dockerize a tool, you will also have a chance to play with ansible and ansible-playbook, one of the most used devops tools these days.

So after a bit of googling I found how to install ansible; it is a couple of lines of bash. With this information in hand, all we
have to do is put the script into a Dockerfile, so I created a file called Dockerfile.ansible.cnf:

FROM ubuntu

USER root

RUN \
  apt-get update && \
  apt-get install -y software-properties-common && \
  apt-add-repository ppa:ansible/ansible && \
  apt-get update && \
  apt-get install -y --force-yes ansible

RUN mkdir /ansible
WORKDIR /ansible

Let’s create an image and tag it:

docker build -f Dockerfile.ansible.cnf -t myansible .

Time to test it:

docker run --name myansible --rm    myansible  ansible --version
ansible 2.4.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

Nice work, let’s continue.
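As a usage example (hedged: playbook.yml is a hypothetical playbook in your current directory, not part of this post), you can mount your playbooks into the image's /ansible workdir and run them against localhost inside the container:

docker run --rm -v "$(pwd)":/ansible myansible \
  ansible-playbook -i "localhost," -c local playbook.yml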


Dockerizing Jenkins build logs with ELK stack (Filebeat, Elasticsearch, Logstash and Kibana)

This is the 4th part of the Dockerizing Jenkins series; you can find the previous parts here:

Dockerizing Jenkins, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins, part 2: Deployment with maven and JFrog Artifactory
Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin

Today we are going to look at managing the Jenkins build logs in a dockerized environment.

Normally, in order to view the build logs in Jenkins, all you have to do is go to a particular job and check the logs. Depending on the log rotation configuration, the logs may only be kept for N builds, days, etc., meaning old job logs will eventually be lost.

Our aim in this article is to persist the logs in a centralised fashion, just like any other application logs, so they can be searched, viewed and monitored from a single location.

We will also be running Jenkins in Docker, meaning that if the container is dropped and no other measures are in place, such as mounting a volume for the logs from the host and backing them up, the logs will be lost.

As you may have already heard, one of the best solutions when it comes to logging is called ELK stack.

The idea with the ELK stack is that you collect logs with Filebeat (or any other *beat), parse and filter them with Logstash, send them to Elasticsearch for persistence, and then view them in Kibana.

On top of that, because Logstash is a heavyweight JRuby app on the JVM, you either skip it altogether or use a much smaller application called Filebeat, a Logstash log forwarder: all it does is collect the logs and send them to Logstash for further processing.

In fact, if you don't have any filtering and parsing requirements, you can skip Logstash altogether and use Filebeat's Elasticsearch output to send the logs directly to Elasticsearch.

In our example we will try to use all of them. Also, we won't be running Filebeat in a separate container; instead, we will install it right inside our Jenkins image, because Filebeat is small enough. I also wanted to demonstrate how we can install anything on our Jenkins image, which makes it more interesting.
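To give an idea of what installing Filebeat inside the image boils down to (a hedged sketch: the version and download URL are assumptions, check elastic.co for the current release), the relevant Dockerfile RUN step is essentially:

# download and install the Filebeat Debian package (version/URL assumed for illustration)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.3-amd64.deb
dpkg -i filebeat-5.6.3-amd64.deb && rm filebeat-5.6.3-amd64.deb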

So the summary of what we are going to look at today is:

  1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline
  2. Download and install Filebeat on our Jenkins image
  3. Configure Filebeat so it knows how and where to collect the Jenkins logs and how to send them on to logstash
  4. Configure and run logstash in a docker container
  5. Configure and run elasticsearch in a docker container
  6. Configure and run kibana in a docker container

1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline

In this example we will use the Jenkins image we created earlier in part 3 of this series. First things first, let's check out the project:

git clone https://github.com/kenych/dockerizing-jenkins && \
   cd dockerizing-jenkins && \
   git checkout dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin && \
   ./runall.sh

Let’s see what runall.sh does:


Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin

This is the 3rd part of the Dockerizing Jenkins series; you can find the previous parts here:
Dockerizing Jenkins 2, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins 2, part 2: Deployment with maven and JFrog Artifactory

In this part we will look at:

  1. How to use docker-compose to run containers
  2. How to use passwords in docker environment with docker-secrets
  3. How to hide sensitive information in Jenkins with credentials plugin

In part 1 we created a basic jenkins docker image in order to run a java maven pipeline with test and sonarqube analysis, and then in part 2 we looked at how to perform deployment using a maven settings file. As you remember, we saved the password in that file without any encryption, which is obviously not something you would ever do.
All code for this and the previous parts is in my GitHub repo https://github.com/kenych/dockerizing-jenkins. I decided to create a branch for every part, as the master branch changes with every part and older articles would otherwise refer to the wrong code base; for this part the code is in the branch "dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin" and you can run the command below to check it out:

git clone https://github.com/kenych/dockerizing-jenkins && \
	cd dockerizing-jenkins && \
	git checkout dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin 

In this part we will remove the password from the source code and let the credentials plugin supply credentials to the Config File Provider Plugin. But before changing any code, we will need to switch from the docker run command to docker-compose. This will give us a chance to leverage the docker secrets feature, along with many other features which you will love.

I updated the runall.sh script which we used in the two previous parts, replacing it with docker-compose and a download.sh script which just downloads the minimum stuff we will need in advance. I also removed the java 7 and java 8 installation in favour of the embedded java 8 from the jenkins container, as otherwise our download script takes too long and java comes for free in the image anyway; you can check it later once our jenkins container is running.

If you were following parts one and two, you should know how to pick up a specific java version anyway using the maven tool mechanism; if you want to play with that, just uncomment these lines in the download script, java.groovy and in the pipeline as well. Now let's run download.sh to make sure we have everything we need:

➜  ./download.sh
2.60.1: Pulling from library/jenkins
Digest: sha256:fa62fcebeab220e7545d1791e6eea6759b4c3bdba246dd839289f2b28b653e72
Status: Image is up to date for jenkins:2.60.1
6.3.1: Pulling from library/sonarqube
Digest: sha256:d5f7bb8aecaa46da054bf28d111e5a27f1378188b427db64cc9fb392e1a8d80a
Status: Image is up to date for sonarqube:6.3.1
5.4.4: Pulling from jfrog/artifactory-oss
Digest: sha256:404a3f0bfdfa0108159575ef74ffd4afaff349b856966ddc49f6401cd2f20d7d
Status: Image is up to date for docker.bintray.io/jfrog/artifactory-oss:5.4.4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8334k  100 8334k    0     0   445k      0  0:00:18  0:00:18 --:--:--  444k

Please note that if you haven't downloaded the images before, it will take some time. Now, while it is downloading the stuff we need, let's look at docker-compose.yml:

version: "3.1"

services:
  myjenkins:
    build:
      context: .
    image: myjenkins
    ports:
     - "8080:8080"
    depends_on:
     - mysonar
     - artifactory
    links:
      - mysonar
      - artifactory
    volumes:
     - "./jobs:/var/jenkins_home/jobs/"
     - "./m2deps:/var/jenkins_home/.m2/repository/"
     - "./downloads:/var/jenkins_home/downloads"
    secrets:
     - artifactoryPassword
  mysonar:
    image: sonarqube:6.3.1
    ports:
     - "9000"
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:5.4.4
    ports:
     - "8081"

secrets:
  artifactoryPassword:
    file: ./secrets/artifactoryPassword
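As an aside (a hedged note, not part of the original walkthrough): Compose mounts each secret into the container as a plain file under /run/secrets, so once the stack is up you can verify it from inside the Jenkins container:

docker-compose exec myjenkins cat /run/secrets/artifactoryPassword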

If you are curious, you may ask: why did I call the file docker-compose.yml?


Dockerizing Jenkins 2, part 2: Deployment with maven and JFrog Artifactory

In the 1st part of this tutorial we looked at how to dockerize the installation of Jenkins plugins and the java and maven tool setup in Jenkins 2, and created a declarative build pipeline for a maven project with test and SonarQube stages. In this part we will focus on deployment.

Couldn't we simply have added another stage for deployment in part 1, you may ask? Well, in fact deployment requires quite a few steps to be taken, including maven pom and settings file configuration, artifact repository availability, repository credentials encryption, etc. Let's add them to the list and then implement them step by step, like we did in the previous session.

  • Running JFrog Artifactory on Docker
  • Configuring maven pom file
  • Configuring maven settings file
  • Using Config File Provider Plugin for persistence of maven settings
  • Dockerizing the installation and configuration process

If you are already familiar with the 1st part of this tutorial, created your project from scratch and are using your own repository, then you can just follow the steps as we go further. Otherwise, if you are starting now, you can just clone/fork the work we did in the last example and then add the changes as they follow in the tutorial:

 
git clone https://github.com/kenych/jenkins_docker_pipeline_tutorial1 && cd jenkins_docker_pipeline_tutorial1 && ./runall.sh

Please note all steps have been tested on macOS Sierra and Docker version 17.05.0-ce, and you should change them accordingly if you are using MS-DOS, FreeBSD etc 😉

The script above is going to take a while as it is downloading the java 7, java 8, maven, sonarqube and jenkins docker images, so please be patient 🙂 Once done you should have Jenkins and Sonar up and running as we created them in part 1:

If you get errors about some port being busy, just use free ports from your host (I explain this here). Otherwise you can use dynamic ports, which is shown a bit later.

Chapter 1. Running JFrog Artifactory on Docker

So let's look at the first step. Obviously, if we want to test the deployment in our example, we need some place to deploy our artifacts to. We are going to use the limited open source version of JFrog Artifactory called "artifactory oss". Let's run it on Docker to see how easy it is to have your own artifact repo. Port 8081 on my machine was busy, so I had to run it on 8082; adjust according to the free ports available on your machine:

 
docker run --rm -p 8082:8081 --name artifactory docker.bintray.io/jfrog/artifactory-oss:5.4.4

Alternatively you can use dynamic ports.
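A hedged example of the dynamic ports approach (assuming the image exposes port 8081): let Docker pick a free host port with -P and then look up the mapping:

docker run -d --rm -P --name artifactory docker.bintray.io/jfrog/artifactory-oss:5.4.4
docker port artifactory 8081   # shows which host port was mapped to the container's 8081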


Dockerizing Jenkins 2, Part 1: Declarative Build Pipeline With SonarQube Analysis

 

In this part I am going to demonstrate:
  • Running Jenkins on Docker
  • Automation of Jenkins plugin installation on Docker
  • Configuring java and maven tools on Jenkins, first manually and then via the groovy scripts
  • Automating the above step with Docker
  • Running Sonarqube on Docker
  • Setting up a java maven pipeline with unit test, test coverage and sonarqube analysis steps.
Next time, in part 2 (WIP), I am going to demonstrate everything you need for deployment:
  • How to run Artifactory repository on Docker
  • How to configure POM file for deployment
  • How to configure maven settings for deployment
  • Using maven deployment plugin
  • Setting up, configuring and Dockerizing a couple of Jenkins plugins for keeping deployment credentials in a safe place and applying the maven settings file in the job

This is a practical example, so be ready to get your hands dirty. You can either follow this step-by-step guide, which would be really good for learning purposes as we will create everything from scratch, or, if you are lazy, just run the command below after reading, for the demo:

git clone https://github.com/kenych/jenkins_docker_pipeline_tutorial1 && cd jenkins_docker_pipeline_tutorial1 && ./runall.sh

Either way, by the end you should be able to run the pipeline on a fully automated Jenkins Docker container.

As you may already know, with Jenkins 2 you can have your build pipeline right within your java project. So you can use your own maven java project to follow the steps in this article, as long as it is hosted in a git repository.

Everything will obviously be running on Docker, as it is the easiest way of deploying and running these tools.

So, let's see how to run Jenkins on Docker:

docker pull jenkins:2.60.1

While it is downloading in the background let’s see what we are going to do with it once it is done.

A default Jenkins comes quite bare and shows the suggested plugins installation wizard. We will choose it, then capture all the installed plugins, and then automate this manual step in the Docker image, following this simple rule throughout all the steps:

  1. set up manually
  2. programmatically
  3. automate with Docker
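To hint at what the final "automate with Docker" step can look like for the plugins (a hedged sketch: plugins.txt is a hypothetical file of plugin:version lines, and install-plugins.sh is the helper script shipped with the official jenkins image of this era):

FROM jenkins:2.60.1
# bake the captured plugin list into the image so Jenkins starts fully provisioned
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt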

The image we are going to download is 600M, so you can make yourself a coffee and have a couple of sips before it is finished, while I take you through the steps we need to set up a build pipeline for a java project. Let's add them to the list and then look at them more closely later:

  • Pull the code from scm
  • Configuration of java and maven
  • Running​ unit tests
  • Running​ static analysis
  • Sending report to Sonarqube for further processing
  • And finally deployment​ of the jar file to repository(will be covered in the next part soon)
  • Optionally we can also release it after each commit.

Once you have the image downloaded, let's run the container:

docker run -p 8080:8080 --rm --name myjenkins jenkins:2.60.1

Please note I used a specific tag; I am not using the latest tag, which is the default if you don't specify one, as I don't want anything to break in the future.

Also note that we name the container so it is easier to refer to later, as otherwise docker will name it randomly, and we added the --rm flag to delete the container once we stop it. This ensures we are running Jenkins in an immutable fashion with everything configured on the fly; if we want to preserve any data, we will do it explicitly.
