Today I am going to show how to scale Docker containers on Kubernetes, and you will see how easy it is.
Then we will look at how pods can be autoscaled under load, based on CPU utilisation.
1. Deploy simple stack to k8s
2. Scaling the deployment manually.
3. Autoscaling in k8s based on CPU Utilisation.
1. Deploy simple stack to k8s
If you don’t have Kubernetes installed on your machine, in this article I demonstrate how easily this can be achieved on macOS; it literally takes a few minutes to set up.
So let’s create a deployment of a simple test HTTP server container:
➜ ~ kubectl run busybox --image=busybox --port 8080 \
    -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
    env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
deployment "busybox" created
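If you are curious what this one-liner actually does, you can try the same container command locally with Docker before touching the cluster. This is just an optional sanity check and assumes you have Docker installed; every request is answered with the container’s hostname:

➜ ~ docker run --rm -p 8080:8080 busybox \
    sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
    env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
➜ ~ curl http://localhost:8080   # prints the container's hostname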
I have also set it up so that it returns its hostname in the response to an HTTP GET request; we will need this to distinguish
responses from different instances later on. Once deployed, we can check our deployment and pod status:
➜ ~ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
busybox   1         1         1            1           3m
➜ ~ kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
busybox-7bcdf6684b-jnp6w   1/1       Running   0          18s
As you can see, its current ‘DESIRED’ state equals 1.
The next step is to expose our deployment through a service so it can be queried from outside the cluster:
➜ ~ kubectl expose deployment busybox --type=NodePort
service "busybox" exposed
This will expose our endpoint:
➜ ~ kubectl get endpoints
NAME      ENDPOINTS         AGE
busybox   172.17.0.9:8080   23s
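Behind the scenes, expose has created a Service object with a label selector matching our pods and a randomly allocated node port. If you want to peek at what was generated (the exact values will differ in your cluster), you can describe it:

➜ ~ kubectl describe service busybox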
Once that is done, we can ask our cluster manager tool, minikube, for the service’s URL:
➜ ~ minikube service busybox --url
http://192.168.99.100:31623
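Under the hood minikube simply combines the node’s IP address with the NodePort allocated to the service, so if you prefer plain kubectl the same URL can be assembled by hand. A small sketch, assuming the service exposes a single port:

➜ ~ echo "http://$(minikube ip):$(kubectl get service busybox -o jsonpath='{.spec.ports[0].nodePort}')"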
If we query it, we will get its hostname in the response:
➜ ~ curl http://192.168.99.100:31623
busybox-7bcdf6684b-jnp6w
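At the moment every request returns the same hostname because there is only one replica. Once we scale the deployment below, a quick loop like this (a sketch; substitute your own service URL) should start showing responses from different pods:

➜ ~ for i in $(seq 1 5); do curl -s http://192.168.99.100:31623; done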
2. Scaling the deployment manually.
Now our deployment is ready to be scaled: