
Dockerizing Jenkins build logs with ELK stack (Filebeat, Elasticsearch, Logstash and Kibana)

This is the 4th part of the Dockerizing Jenkins series; you can find the previous parts here:

Dockerizing Jenkins, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins, part 2: Deployment with maven and JFrog Artifactory
Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin

Today we are going to look at managing the Jenkins build logs in a dockerized environment.

Normally, in order to view the build logs in Jenkins, all you have to do is go to the particular job and check its logs. Depending on the log rotation configuration, the logs may be kept for N builds, days, etc., meaning the logs of old jobs will eventually be lost.
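For reference, in a declarative pipeline (the kind we built in part 1) that retention is usually controlled with the `buildDiscarder` option; a minimal sketch, where the numbers and stage are purely illustrative:

```groovy
pipeline {
    agent any
    options {
        // keep at most 10 builds, and build records for at most 30 days
        buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '30'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'building...'
            }
        }
    }
}
```

Whatever limits you pick, anything rotated away is gone from Jenkins, which is exactly why we want the logs shipped elsewhere.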

Our aim in this article is to persist the logs in a centralised fashion, just like any other application logs, so they can be searched, viewed and monitored from a single location.

We will also be running Jenkins in Docker, meaning that if the container is dropped and no other measures are in place (like mounting a volume for the logs from the host and backing them up), the logs will be lost.

As you may have already heard, one of the best solutions when it comes to logging is the ELK stack.

The idea with the ELK stack is that you collect logs with Filebeat (or any other *beat), parse and filter them with logstash, send them to elasticsearch for persistence, and then view them in kibana.

On top of that, because logstash is a heavyweight JRuby app running on the JVM, you either skip it altogether or pair it with a much smaller application called Filebeat, a log forwarder for logstash: all it does is collect the logs and send them to logstash for further processing.

In fact, if you don't have any filtering or parsing requirements, you can skip logstash altogether and use Filebeat's elasticsearch output to send the logs directly to elasticsearch.
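For Filebeat 1.x, that direct setup would look roughly like this: the same prospector config we will use later, with the logstash output swapped for the elasticsearch one (the `elasticsearch:9200` host assumes a linked container named `elasticsearch`, matching the compose files further down):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - "/var/jenkins_home/jobs/*/builds/*/log"

# no logstash in between: ship straight to elasticsearch
output:
  elasticsearch:
    hosts: ["elasticsearch:9200"]
```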

In our example we will use all of them. Also, we won't run Filebeat in a separate container; instead we will install it right inside our Jenkins image, because Filebeat is small enough. I also wanted to demonstrate how we can install anything on our Jenkins image, to make things more interesting.

So the summary of what we are going to look at today is:

  1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline
  2. Download and install Filebeat on our Jenkins image
  3. Configure Filebeat so it knows where to collect the Jenkins logs from and how to send them on to logstash
  4. Configure and run logstash in a docker container
  5. Configure and run elasticsearch in a docker container
  6. Configure and run kibana in a docker container

1. Prepare our dockerized dev environment with Jenkins, Sonarqube and JFrog artifactory running the declarative pipeline

In this example we will use the Jenkins image we created earlier in part 3 of this series. First things first, let's check out the project:

git clone https://github.com/kenych/dockerizing-jenkins && \
   cd dockerizing-jenkins && \
   git checkout dockerizing_jenkins_part_3_docker_compose_docker_secret_credentials_plugin && \
   ./runall.sh

Let’s see what runall.sh does:

#!/usr/bin/env bash

#get needed stuff 1st
./download.sh

#clean anything with same name to get rid of clashes
docker-compose down

#update with actual password
echo "password" > ./secrets/artifactoryPassword

#update older jenkins image, make sure it doesn't use cache
docker-compose build --no-cache

#run all
docker-compose up

First we download all the images needed to prepare our environment, and make maven available for the Jenkins container. Then we make sure no other services are running with the same names by shutting them down with the docker-compose down command; if any were left over from the previous part of this tutorial, docker-compose would fail during creation of the services. Then we set the password for artifactory; I could have committed it, but I am a bit paranoid about committing passwords to a repo. Finally we build our Jenkins image and run the holy build trinity of Jenkins, sonarqube and artifactory containers.

By now you should have everything ready to start; you can check by going to http://localhost:8080 and giving a build a try, hopefully a successful one:


2. Install Filebeat on our Jenkins image

So, now let's move forward. In order to install and run Filebeat we need to implement these steps:

  • Download and install the Filebeat Debian package
  • Configure Filebeat with input and output settings
  • Start the Filebeat service

To do so let’s add these lines to Dockerfile:

USER root

RUN curl -o /tmp/filebeat_1.0.1_amd64.deb https://download.elastic.co/beats/filebeat/filebeat_1.0.1_amd64.deb && \
    (dpkg -i /tmp/filebeat_1.0.1_amd64.deb || apt-get install -f -y)

COPY filebeat.yml /etc/filebeat/filebeat.yml

As you see, we download and install the package first, then copy the config into the image. Starting the service is shown a bit later.

3. Configure Filebeat so it knows where to collect the Jenkins logs from and how to send them further to logstash

Now let's configure Filebeat; we can do it with a yaml file:

filebeat:
  prospectors:
    -
      paths:
        - "/var/jenkins_home/jobs/*/builds/*/log"

output:
  logstash:
    hosts: ["logstash:5044"]

As you see, we specified the path where Filebeat should find the Jenkins logs and instructed it to send them on to logstash. If you are curious about the log path and why it looks like that, you can get inside your Jenkins container and have a look:

dockerizing-jenkins git:(master) ✗ docker exec -it dockerizingjenkins_myjenkins_1 bash
jenkins@3713764d9f9d:/$ ls -l  /var/jenkins_home/jobs/*/builds/*/log
-rw-r--r-- 1 jenkins jenkins 64009 Aug 18 21:08 /var/jenkins_home/jobs/maze-explorer/builds/1/log
-rw-r--r-- 1 jenkins jenkins 53397 Aug 18 21:22 /var/jenkins_home/jobs/maze-explorer/builds/2/log
-rw-r--r-- 1 jenkins jenkins 53397 Aug 18 21:22 /var/jenkins_home/jobs/maze-explorer/builds/lastFailedBuild/log
-rw-r--r-- 1 jenkins jenkins 64009 Aug 18 21:08 /var/jenkins_home/jobs/maze-explorer/builds/lastStableBuild/log
-rw-r--r-- 1 jenkins jenkins 64009 Aug 18 21:08 /var/jenkins_home/jobs/maze-explorer/builds/lastSuccessfulBuild/log
-rw-r--r-- 1 jenkins jenkins 53397 Aug 18 21:22 /var/jenkins_home/jobs/maze-explorer/builds/lastUnsuccessfulBuild/log
jenkins@3713764d9f9d:/$ exit
exit

As you can see, this is the way Jenkins keeps its build logs.
In hosts we specified "logstash:5044"; that is because we will later link logstash to Jenkins, so the Jenkins container will be able to resolve that name to the logstash container's IP.
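Since the two stacks end up in separate compose files, one hedged sketch of how that name resolution can be wired up is a shared external Docker network (the network name `elk` and the service name here are assumptions, not taken from the repo):

```yaml
# sketch: addition to the Jenkins stack's docker-compose.yml
services:
  myjenkins:
    build: .
    networks:
      - elk

networks:
  elk:
    external: true    # created once beforehand with: docker network create elk
```

The ELK compose file would attach its logstash service to the same network, so the hostname "logstash" resolves from inside the Jenkins container.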

Finally we need to start the Filebeat service, and as our startup script is getting more complex, let's move it to a separate file. Create a file called entrypoint.sh:

#!/usr/bin/env bash

/etc/init.d/filebeat start

exec bash -c /usr/local/bin/jenkins.sh

And then refer to it in Dockerfile:

COPY ["entrypoint.sh", "/"]

RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/bin/bash","-c","./entrypoint.sh"]

Our final Dockerfile should look like this:

FROM jenkins:2.60.1

MAINTAINER Kayan Azimov

ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"

COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt

COPY groovy/* /usr/share/jenkins/ref/init.groovy.d/

USER root

RUN curl -o /tmp/filebeat_1.0.1_amd64.deb https://download.elastic.co/beats/filebeat/filebeat_1.0.1_amd64.deb && \
    (dpkg -i /tmp/filebeat_1.0.1_amd64.deb || apt-get install -f -y)

COPY filebeat.yml /etc/filebeat/filebeat.yml

COPY ["entrypoint.sh", "/"]

RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/bin/bash","-c","./entrypoint.sh"]

Now let's build the image and make sure everything still works. This is a good pattern: baby steps, so you always know the last point at which things were in a working state:

docker-compose up --build

Now let's check that Filebeat is up and running:

➜  dockerizing-jenkins git:(master) ✗ docker exec -it dockerizingjenkins_myjenkins_1 bash
root@a2c965f635dd:/#
root@a2c965f635dd:/# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1 78.7 12.5 5455056 893372 ?      Ssl  21:26   0:34 java -Djenkins.install.runSetupWizard=false -jar /usr/share/jenkins/jenkins.war
root        18  0.0  0.0   9400   576 ?        Sl   21:26   0:00 /usr/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
root        19  0.2  0.1 359068 11532 ?        Sl   21:26   0:00 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
root      2022  0.3  0.0  19960  3504 ?        Ss   21:27   0:00 bash
root      2028  0.0  0.0  38380  3216 ?        R+   21:27   0:00 ps aux
root@a2c965f635dd:/# exit
exit

4. Configure and run logstash in a docker container

Now, instead of adding more and more services to our already overloaded stack, let's create another docker-compose file, so we can have separation of concerns, and put the ELK-related containers there. File docker-compose-elk.yml:

version: "3.1"

services:

  logstash:
    image: logstash:2
    volumes:
      - ./:/config
    command: logstash -f /config/logstash.conf

As you see, we created a new file and added logstash to it. It is a pretty old image; I just took it from a stack I set up a long time ago. If you want to use the latest images, you will need to make sure the versions are compatible according to Elastic's support matrix.

Now we need to configure logstash in logstash.conf:

input {
  beats {
    port => 5044
  }
}

output {
  stdout { codec => rubydebug }
}

With this config, all logstash does is print the logs to stdout so we can check that it is actually working. Another baby step; let's run the new stack:


docker-compose -f docker-compose-elk.yml up

Notice that this time we need to specify the compose file explicitly, as it doesn't have the default name, which is already taken by the trinity stack.

As you can see, logstash is now picking up messages sent by Filebeat.
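If you later do want to parse the build logs rather than just forward them, this is where a logstash filter block would go. A hedged sketch, where the grok pattern is purely illustrative (it tags the "Finished: SUCCESS"/"Finished: FAILURE" line Jenkins prints at the end of a build) and not taken from the article's config:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # extract the build result from the final log line, if present
  grok {
    match => { "message" => "Finished: %{WORD:build_result}" }
    tag_on_failure => []
  }
}

output {
  stdout { codec => rubydebug }
}
```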

5. Configure and run elasticsearch in a docker container

The next step is adding the elasticsearch bit to the stack:

version: "3.1"

services:

  logstash:
    image: logstash:2
    volumes:
      - ./:/config
    command: logstash -f /config/logstash.conf
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:2
    ports:
      - "9200:9200"

We also added a link and a dependency on elasticsearch to logstash, so that logstash can resolve its name and waits for it to start. Don't forget to configure logstash to send messages to elasticsearch:

input {
  beats {
    port => 5044
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch { hosts => ["elasticsearch:9200"] }
}

Now you can stop the ELK stack and start it again: just hit Ctrl + C, or run


docker-compose -f docker-compose-elk.yml stop

You can hit http://localhost:9200 (the port we exposed in the compose file) to check that elasticsearch is up and running:

Now let’s see if we can search for something. I took a random string from Jenkins logs:

and tried to search for it:

As you can see, we are now able to search for and find anything we want. Cool, isn't it?

6. Configure and run kibana in a docker container

But it is even cooler when you have a nice UI for searching, so let's give kibana a try now. With it we arrive at our final compose file for the ELK stack:

version: "3.1"

services:

  logstash:
    image: logstash:2
    volumes:
      - ./:/config
    command: logstash -f /config/logstash.conf
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:2
    ports:
      - "9200:9200"

  kibana:
    image: kibana:4
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch

Restart the stack and check the logs:

As we can see, after a couple of tries kibana managed to connect to elasticsearch.
Now go to http://localhost:5601, which is where you will find kibana; you should see this screen:

You will need to hit the "create" button to create the first index pattern. As the logs come from logstash, the index will have the pattern "logstash-*", where "*" is a date, as in "logstash-2017.08.22". Once that is done, kibana will pick up your logs and you can hit "Discover" to see them. Afterwards, we can tweak a couple of settings to get live logs: first change the time range to today, then set auto-refresh to 5 seconds:

 

You can also press "add" on the message field to show only the log messages.

Now let's search for the string "sonar" with kibana. I assume you have run the build a few times, so you have logs containing "sonar":

As we now have 6 containers running, you may wonder how much memory and CPU they consume. Let's run some stats while the pipeline is running, especially during the sonar stage, given that we run it in parallel:

docker stats $(docker ps --format '{{.Names}}')

That is it: we have completed our mission and are running two stacks, one for our build pipeline and a second for log management.

Finally, if you didn't follow the instructions but still want to have everything up and running with one magic command, just run this:

git clone https://github.com/kenych/dockerizing-jenkins && \
   cd dockerizing-jenkins && \
   git checkout dockerizing_jenkins_part_4_elk_stack && \
   ./runall.sh