Implementing Service Discovery with Consul, Registrator and Nginx in a Dockerized environment.

Today we are going to look at how we can benefit from modern devops tools to implement simple Service Discovery.
What is Service Discovery? To put it very simply, it is a combination of scripts or tools which help to discover certain
properties of a deployable application, such as its IP address and port, so that deployment can be automated.

I remember in one of my previous jobs, we used to come to the office at 6am for the release. It was fun…
The ops guys would configure the reverse proxy with all the configuration required for the new app, such as its ports, then add the new app, take the old application out of the reverse proxy’s pool, and restart the proxy. A very tedious process. After all that was done, they would run many tests to confirm everything was looking good. The flow would look something like this diagram:

Old way of doing things manually

Nowadays you can imagine a different software development world, with applications running as docker containers and deployments happening multiple times a day.

Today I will try to demonstrate how to configure the reverse proxy automatically, so that no matter which IP address or port the application server is running on, everything will be configured for us, and we will only deploy or remove the application when needed:

Service Discovery with Consul, Registrator and Nginx in a Dockerized environment

Of course this is just a concept to show how specific devops tools can be put to use; in real life, docker orchestration tools like Rancher or Kubernetes, with their embedded mechanisms, will either take care of Service Discovery or make it much easier.

But I just wanted to show how we can do it piece by piece, so we know what is going on and how things work.

So here is a list of the things we are going to do:

  1. How to Dockerize simple NodeJs app
  2. How to use Consul as service discovery tool for storing container data in a KV storage
  3. How to use registrator as service discovery tool for inspecting containers
  4. How to use nginx as reverse proxy
  5. How to use Consul-template for configuring nginx automatically

We are going to start by dockerizing a simple node js application, before we look at how to automate its deployment.

1. How to Dockerize simple NodeJs app

It is very simple, I just googled it and changed the code slightly so we can run the application with a static or dynamic port passed as an argument to the app.
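
For reference, the commands in this post assume a project layout roughly like this (directory names taken from the final script at the end of the post; start.sh is the repository’s entry script):

```
.
├── dockernodejs/
│   ├── Dockerfile
│   ├── package.json
│   └── server.js
├── nginx_config/
│   ├── simple.conf
│   └── simple.ctmpl
└── start.sh
```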

Dockerfile:

FROM node:boron

WORKDIR /usr/src/app

COPY package.json .

RUN npm install

COPY . .

ENTRYPOINT [ "node",  "server.js" ]

As you can see, it runs npm install to get the dependencies and then runs node with the server.js file.
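
One small addition worth making next to the Dockerfile: since COPY . . copies the whole build context, a .dockerignore keeps a locally installed node_modules folder out of the image. This file is not part of the original example, just a common convention:

```
node_modules
npm-debug.log
```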

package.json:

{
  "name": "simple_node_js_app",
  "version": "1.0.0",
  "description": "running dockerized Node.js",
  "author": "Kayan Azimov",
  "dependencies": {
    "express": "^4.13.3"
  }
}

server.js:

'use strict';

const express = require('express');
const HOST = "0.0.0.0";
const appName = typeof process.argv[2] != "undefined" ? process.argv[2] : "DEFAULT_APP";
const port = typeof process.argv[3] != "undefined" ? process.argv[3] :  8080;

const app = express();

app.get('/', (req, res) => {
    res.send(`Hello from host:  ${appName}\n`);
});

console.log(`Running app: ${appName} on port: ${port}`);
app.listen(port, HOST);

Let’s build the image:

docker build --no-cache  -t kayan/node-web-app .

And then run it:

docker run  --name nodeTest -p 8080:8080  --rm kayan/node-web-app nodeTest

Running app: nodeTest on port: 8080

Once it is running we can test it:

curl localhost:8080
Hello from host:  nodeTest

Now, let’s try to run the server on a different port than the default 8080:

docker run  --name nodeTest -p 8080:8085  --rm \
    kayan/node-web-app nodeTest 8085


Running app: nodeTest on port: 8085

Or if you want to change server.js without rebuilding the image, you can point to the updated server.js by mounting it:

docker run  --name nodeTest -p 8080:8085  --rm \
    -v `pwd`/server.js:/usr/src/app/server.js kayan/node-web-app nodeTest 8085


Running app: nodeTest on port: 8085

curl localhost:8080
Hello from host:  nodeTest

As we map container port 8085 to port 8080 on localhost, even though we are running the server on port 8085 inside the container, we can still access it on 8080 on the host.
Please note, we won’t map the ports when running the app during the Service Discovery demo; Service Discovery will take care of everything for us.
Stop the containers:

docker stop $(docker ps -aq --filter name=node)

So, time to check out Consul.

2. How to use Consul as service discovery tool for storing container data in a KV storage

docker run -p 8500:8500 --name=consul --rm \
	consul agent -server -bootstrap-expect 1 -node=myconsulnode  -client=0.0.0.0 -ui

Check your browser: as you can see, so far the services section only shows information about consul itself; later this part will get populated with information about the other containers:

As we mentioned, it is also a KV storage, so you can use it to keep some configuration information for your infra. Let’s add something and then query it:

curl -X PUT -d 'test value' http://localhost:8500/v1/kv/testKey
true
curl http://localhost:8500/v1/kv/testKey
[{"LockIndex":0,"Key":"testKey","Flags":0,"Value":"dGVzdCB2YWx1ZQ==","CreateIndex":8,"ModifyIndex":9}]
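
Note that the Value comes back base64-encoded. Decoding it gives the original string back; the decode itself is plain shell and doesn’t need Consul at all:

```shell
# consul returns KV values base64-encoded in the JSON;
# decode the Value field from the response above
decoded=$(echo 'dGVzdCB2YWx1ZQ==' | base64 -d)
echo "$decoded"    # prints: test value
```

With jq installed you could pull the field straight out of the response, something like curl -s http://localhost:8500/v1/kv/testKey | jq -r '.[0].Value' | base64 -d.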

3. How to use registrator as service discovery tool for inspecting containers

Time to deploy Registrator, which will populate consul with information about added or deleted containers.

docker run --rm --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest \
  -internal=true \
  consul://`docker inspect consul --format {{.NetworkSettings.Networks.bridge.IPAddress}}`:8500

As you can see, we use docker.sock to communicate with the docker engine; if you don’t know what that means,
please read the detailed explanation with an example here.

We also pass consul’s address, using the inspect command to find it. As soon as you run it, registrator will start watching docker and publishing any info to consul.
Obviously, first of all it will add information about consul itself, its exposed ports etc,
as you can see in the logs:

2017/10/28 17:35:28 Starting registrator v7 …
2017/10/28 17:35:28 Using consul adapter: consul://172.17.0.2:8500
2017/10/28 17:35:28 Connecting to backend (0/0)
2017/10/28 17:35:28 consul: current leader 172.17.0.2:8300
2017/10/28 17:35:28 Listening for Docker events …
2017/10/28 17:35:28 Syncing services on 1 containers
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8302:udp
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8600
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8600:udp
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8300
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8301
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8500
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8301:udp
2017/10/28 17:35:28 added: ef75c43c930a moby:consul:8302
2017/10/28 17:35:28 ignored: a6b7a8ea21de no published ports
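
To double-check what ended up in consul, you can also query the catalog API directly; a sketch (the fallback message covers the case where consul isn’t reachable on localhost:8500):

```shell
# list the services currently registered in consul's catalog;
# prints a fallback message when consul is not reachable
services=$(curl -s --max-time 2 http://localhost:8500/v1/catalog/services || echo "consul unreachable")
echo "$services"
```

If registrator did its job, the JSON should contain the consul service entries matching the log lines above.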

4. How to use nginx as reverse proxy

Time to prepare Nginx; in order to start it we will need some basic config:

simple.conf:

  server {
    listen 80;

  }

Now let’s start it:

docker run --name nginx --rm -p 80:80 -v `pwd`/nginx_config:/etc/nginx/conf.d nginx

As you can see, I bind mounted the nginx_config folder from the current directory, which contains simple.conf. With this config nginx will simply return a 404 if we send a request:

curl localhost

We get this error:

<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center>
<h1>404 Not Found</h1>

</center>

<hr>

<center>nginx/1.13.3</center>
</body>
</html>

And in the nginx logs we get:

2017/10/28 17:48:30 [error] 7#7: *1 "/etc/nginx/html/index.html" is not found (2: No such file or directory), client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"
172.17.0.1 - - [28/Oct/2017:17:48:30 +0000] "GET / HTTP/1.1" 404 169 "-" "curl/7.51.0" "-"

We just use the simple.conf file as a config which will be dynamically updated by Consul-template; its content is not really important for now.
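
It is worth knowing how that dynamic update will be applied: consul-template (in the next step) will send nginx the HUP signal, which makes nginx re-read its config without the container being restarted. You can try the same signal by hand; a sketch with a fallback for when the container isn’t running:

```shell
# reload nginx config by sending SIGHUP to the container,
# the same signal consul-template will send for us later
out=$(docker kill -s HUP nginx 2>/dev/null || echo "nginx container not running")
echo "$out"
```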

5. How to use Consul-template for configuring nginx automatically

Now we are ready to run what is probably the most complicated command here, due to its arguments and the number of actions it performs:

 docker run --rm --name consul-tpl -e CONSUL_TEMPLATE_LOG=debug  \
 -v /var/run/docker.sock:/var/run/docker.sock  \
 -v /usr/bin/docker:/usr/bin/docker  \
 -v `pwd`/nginx_config/:/tmp/nginx_config  \
 hashicorp/consul-template \
 -template "/tmp/nginx_config/simple.ctmpl:/tmp/nginx_config/simple.conf:docker  \
 kill -s HUP nginx" \
 -consul-addr `docker inspect consul --format {{.NetworkSettings.Networks.bridge.IPAddress}}`:8500

So what is going on here, you may ask:
Line 1 just names the container and sets its logging level.
Line 2 bind mounts /var/run/docker.sock for communication with the docker engine.
Line 3 adds the docker binary to the container’s path so it can run docker commands (we will use it later).
Line 4 mounts the nginx config folder.
Line 5 runs the actual command inside the container.
Line 6 passes the template argument; this is the most important part. It first takes the
simple.ctmpl file, which is a template file where we specify what needs to be done:

simple.ctmpl:

upstream testservers {
  {{range service "node-web-app" "any"}}
    server {{.Address}}:{{.Port}} ;
  {{end}}
}


  server {
    listen 80;

    location / {
      proxy_pass  http://testservers;
      proxy_next_upstream error timeout invalid_header http_500;
    }

  }

then we specify the output file for the rendered template, simple.conf, which is the nginx config.
Lastly, on line 7, we specify the command to run after the template is written, which is restarting nginx through a
docker command. This is the simplest way and perhaps the most dodgy too, I agree :) but for the sake of this test I
wanted to give it a go; there are better ways to do this obviously, you can google it.
Finally, on line 8, we specify consul’s IP address.

As soon as you run consul-template (and I assume you have all 4 containers running by now, and in this very order:
consul, registrator, nginx, consul-template), it should update simple.conf, which should look something like this now:

upstream testservers {

}


  server {
    listen 80;

    location / {
      proxy_pass  http://testservers;
      proxy_next_upstream error timeout invalid_header http_500;
    }

  }

As you can see, “upstream testservers” part is empty.

We can start our experiment now. We are going to see in real time how running our NodeJs container updates the nginx configuration;
you can run the watch command to see it for yourself:

watch cat simple.conf

Now let’s run the container:

docker run -d --name node1 --expose 8080  --rm kayan/node-web-app APP1

You will see a new server added to upstream testservers:

upstream testservers {

    server 172.17.0.5:8080 ;

}

Let’s run another container now:

docker run -d --name node2 --expose 8080  --rm kayan/node-web-app APP2

You should see another server added to upstream testservers:

upstream testservers {

    server 172.17.0.5:8080 ;

    server 172.17.0.6:8080 ;

}
...

Now if we send a request to the default http port on localhost, where our nginx is listening for requests, it should
forward the requests to the containers in round-robin fashion:

➜  ~ curl localhost
Hello from host:  APP1
➜  ~ curl localhost
Hello from host:  APP2
➜  ~ curl localhost
Hello from host:  APP1
➜  ~
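
To tally how the requests were spread, a small pipeline is enough; here it runs against sample responses standing in for the curl output above, so the stack itself isn’t needed for this sketch:

```shell
# count how often each backend answered; the variable stands in
# for the output of several curl localhost calls
responses="Hello from host:  APP1
Hello from host:  APP2
Hello from host:  APP1"
echo "$responses" | sort | uniq -c | sort -rn
```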

Now let’s run another container with a non-default port:

docker run -d --name node3 --expose 8083  --rm kayan/node-web-app APP3 8083

and check the config:

upstream testservers {

    server 172.17.0.5:8080 ;

    server 172.17.0.6:8080 ;

    server 172.17.0.7:8083 ;

}

The best way to see how all the containers interact with each other is to split your screen into multiple sections.
As you can see, every time a new application is run, Registrator first receives the event, then updates consul;
consul-template then receives this info from consul, updates the config and restarts nginx:

You can see here how all the components dynamically interact with each other.

As you can see, I also run:

 watch cat simple.conf

as it nicely demonstrates how the config is dynamically updated every time we add or remove a node app container.

As a result of this lab we deployed and undeployed many instances of the application, and our system figured out the IP address and port automatically and configured nginx accordingly. Cool, isn’t it?

Don’t forget that in a real production system you will have some or all of these components distributed across your clustered environment, and then being able to automatically discover and publish information about deployed applications in order to configure the reverse proxy, etc, will make much more sense.

The final script, which does everything, looks like this:

#!/usr/bin/env bash

echo "pulling images, might take some time..."
for image in consul:1.0.0 nginx:1.13.6 gliderlabs/registrator:v7 hashicorp/consul-template:0.19.0; do docker pull $image; done

cd dockernodejs
docker build  -t kayan/node-web-app .

echo "reset nginx config"
cd ../
cp default.conf nginx_config/simple.conf

echo "starting stack..."
docker run -p 8500:8500 --name=consul --rm \
	consul:1.0.0 agent -server -bootstrap-expect 1 -node=myconsulnode  -client=0.0.0.0 -ui &
sleep 3

docker run --rm --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:v7 \
  -internal=true \
  consul://`docker inspect consul --format {{.NetworkSettings.Networks.bridge.IPAddress}}`:8500 &
sleep 3

docker run --name nginx --rm -p 80:80 -v `pwd`/nginx_config:/etc/nginx/conf.d nginx:1.13.6 &
sleep 3

docker run --rm --name consul-tpl -e CONSUL_TEMPLATE_LOG=debug  \
 -v /var/run/docker.sock:/var/run/docker.sock  \
 -v /usr/bin/docker:/usr/bin/docker  \
 -v `pwd`/nginx_config/:/tmp/nginx_config  \
 hashicorp/consul-template:0.19.0 \
 -template "/tmp/nginx_config/simple.ctmpl:/tmp/nginx_config/simple.conf:docker  \
 kill -s HUP nginx" \
 -consul-addr `docker inspect consul --format {{.NetworkSettings.Networks.bridge.IPAddress}}`:8500 &
sleep 3

echo "deploying apps..."
docker run -d --name node1 --expose 8080  --rm kayan/node-web-app APP1
sleep 1
docker run -d --name node2 --expose 8080  --rm kayan/node-web-app APP2
sleep 1
docker run -d --name node3 --expose 8083  --rm kayan/node-web-app APP3 8083
sleep 1
docker run -d --name node4 --expose 8084  --rm kayan/node-web-app APP4 8084

sleep 3
echo "starting tests..."
for i in {1..6}; do curl localhost; done

Finally, here is the repository which you can clone to run the whole stack with one magic command:

git clone https://github.com/kenych/service_discovery \
    && cd service_discovery \
    && ./start.sh
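
When you are done experimenting, the whole demo stack can be torn down with a loop over the container names used above (since everything was started with --rm, stopping also removes the containers):

```shell
# stop every container started in this walkthrough;
# containers that are not running are just reported and skipped
containers="node1 node2 node3 node4 consul-tpl nginx registrator consul"
for c in $containers; do
  docker stop "$c" >/dev/null 2>&1 || echo "not running: $c"
done
```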