Sometimes you need to connect to a VPN server where the connection has been set up using multi-factor authentication, meaning you can’t save your password in…
Today I am going to show how to provision prepackaged k8s stacks with helm.
So what is Helm? Here is literally what its page says: a tool for managing Kubernetes charts, and charts are packages of pre-configured Kubernetes resources. So imagine you want to provision some stack, ELK for example. There are many ways to do it (here is one I did as an example for Jenkins logs, although not on k8s but with plain Docker), but instead of reinventing the wheel you can just provision it using Helm.
So let’s just do it instead of talking.
Go to the download page and get the right version from https://github.com/kubernetes/helm
Then untar it, move it to the right dir and run:
➜ tar -xzvf helm-v2.7.2-darwin-amd64.tar.gz
➜ mv darwin-amd64/helm /usr/local/bin
➜ helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

$ helm init
..
...
So let’s do what it asks:
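A minimal sketch of that flow with Helm v2, assuming kubectl is already pointing at a running cluster (the Elasticsearch chart is just an example of a prepackaged stack, not necessarily the one used later):

➜ helm init                           # installs Tiller into the cluster and sets up ~/.helm
➜ helm repo update                    # refresh the chart repositories
➜ helm search elasticsearch           # look for a prepackaged chart
➜ helm install stable/elasticsearch   # provision the stack from the chart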
Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).
In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:
0. Set up a VM with CentOS Linux as a control center.
1. Install and configure AWS cli to manage AWS resources.
2. Install and configure kops to manage provisioning of k8s cluster and AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and setup ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.
5. (Update September 2018) Set up authentication in Kubernetes with AWS IAM Authenticator (heptio).
6. (Update June 2019) The advanced way: Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.
0. Set up a VM with CentOS Linux
Even though I am using macOS, it is sometimes annoying that you can’t run certain commands or that some arguments are different, so let’s spin up a Linux VM first. I chose CentOS this time, but you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:
➜ kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end
end
Let’s start it up and log on:
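With the Vagrantfile above, that would typically be just:

vagrant up && \
vagrant ssh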
In the previous example, we created an EC2 instance which we wouldn’t be able to access. That is because we neither provisioned a new key pair nor used an existing one, which we can see from the state report:
➜ terraform_demo grep key_name terraform.tfstate
            "key_name": "",
➜ terraform_demo
As you can see key_name is empty.
Now, if you already have a key pair which you are using to connect to your instances, which you will find in the EC2 Dashboard under NETWORK & SECURITY – Key Pairs:
then we can specify it in the aws_instance section so the EC2 instance can be accessed with that key:
resource "aws_instance" "ubuntu_zesty" { ami = "ami-6b7f610f" instance_type = "t2.micro" key_name = "myec2key" }
Let’s create an instance:
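A minimal sketch of what that would typically involve, assuming the resource above is saved in a .tf file in the current directory and AWS credentials are already configured:

➜ terraform_demo terraform init     # download the AWS provider
➜ terraform_demo terraform plan     # preview what will be created
➜ terraform_demo terraform apply    # create the instance with the key pair attached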
Today we will look at how to set up an EC2 instance with Terraform.
- Set up Terraform
- Spin up EC2
- Externalise secrets and other resources with terraform variables.
- Set up Vault as secret repo
1. Set up Terraform
So first things first, a quick installation guide: visit https://www.terraform.io/downloads.html, pick the right version and download it:
➜ apps wget https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip\?_ga\=2.1738614.654909398.1512400028-228831855.1511115744
--2017-12-04 15:16:06--  https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Resolving releases.hashicorp.com... 151.101.17.183, 2a04:4e42:4::439
Connecting to releases.hashicorp.com|151.101.17.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15750266 (15M) [application/zip]
Saving to: ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’

terraform_0.11.1_darwin_amd64.zip?_ga=2.17386 100%[=================================================================================================>]  15.02M   499KB/s   in 30s

2017-12-04 15:16:36 (517 KB/s) - ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’ saved [15750266/15750266]
Then unzip:
➜ apps unzip terraform_0.11.1_darwin_amd64.zip\?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Archive:  terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
  inflating: terraform
Finally make sure the location is added to PATH:
➜ ~ export PATH=~/apps:$PATH
Check the installation works:
➜ ~ terraform -v
Terraform v0.11.1
2. Spin up EC2
The plan is to spin up the latest Ubuntu.
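As a rough sketch, one way to find the latest Ubuntu AMI to plug into the aws_instance resource is via the AWS CLI; the owner ID 099720109477 is Canonical’s account, and the name filter is an assumption about the Zesty image naming:

aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-zesty-17.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text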
In the previous blog I showed how to add new storage in Linux and split the disk into partitions. Today I will touch on a slightly more advanced topic and show how to create logical volumes with LVM. There are plenty of advantages to LVM:
- you can create/resize/delete partitions while your system is running, without a reboot.
- merge the space of multiple small disks together, creating a bigger logical disk
- distribute I/O across all disks, similar to RAID, but much easier to set up.
- create snapshots of a volume easily for disk backups, etc.
Last time we used Ubuntu; this time we will use CentOS, as when it comes to storage management the commands and tools we will use are pretty much the same:
[vagrant@centos ~]$ rpm -qa | grep lvm
lvm2-2.02.171-8.el7.x86_64
lvm2-libs-2.02.171-8.el7.x86_64
[vagrant@centos ~]$
ubuntu@zesty:~$ dpkg --list | grep lvm
ii  liblvm2app2.2:amd64    2.02.167-1ubuntu5   amd64   LVM2 application library
ii  liblvm2cmd2.02:amd64   2.02.167-1ubuntu5   amd64   LVM2 command library
ii  lvm2                   2.02.167-1ubuntu5   amd64   Linux Logical Volume Manager
ubuntu@zesty:~$
Let’s create a VM. Make sure the directory you are running the command in is empty, as Vagrant uses rsync to synchronise the contents of the current directory with the VM, so if you have GBs of files there it might take a while for no reason:
vagrant init centos/7 && \ vagrant up && \ vagrant ssh
If you didn’t have the CentOS box previously, it will download about 385MB:
➜ ~ du -sh ~/.vagrant.d/boxes/*
385M	/Users/kayanazimov/.vagrant.d/boxes/centos-VAGRANTSLASH-7
425M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64
269M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-xenial64
290M	/Users/kayanazimov/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-zesty64
Once inside, let’s check the existing storage devices:
[vagrant@centos ~]$ lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   40G  0 disk
├─sda1                    8:1    0    1M  0 part
├─sda2                    8:2    0    1G  0 part /boot
└─sda3                    8:3    0   39G  0 part
  ├─VolGroup00-LogVol00 253:0    0 37.5G  0 lvm  /
  └─VolGroup00-LogVol01 253:1    0  1.5G  0 lvm  [SWAP]
Now let’s exit, halt the VM, add 2 new disks of 1GB each, then start the VM and log on again.
If you don’t know how to add new disks to the VM, you can read the first part of the previous blog about storage.
Now let’s check disks again:
[vagrant@centos ~]$ lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   40G  0 disk
├─sda1                    8:1    0    1M  0 part
├─sda2                    8:2    0    1G  0 part /boot
└─sda3                    8:3    0   39G  0 part
  ├─VolGroup00-LogVol00 253:0    0 37.5G  0 lvm  /
  └─VolGroup00-LogVol01 253:1    0  1.5G  0 lvm  [SWAP]
sdb                       8:16   0    1G  0 disk
sdc                       8:32   0    1G  0 disk
As you can see, sdb and sdc have been added. Let’s ask LVM which devices are available to it:
[vagrant@centos ~]$ sudo lvmscan
sudo: lvmscan: command not found
[vagrant@centos ~]$ sudo lvmdiscan
sudo: lvmdiscan: command not found
[vagrant@centos ~]$ sudo lvmdiskscan
  /dev/VolGroup00/LogVol00 [     <37.47 GiB]
  /dev/VolGroup00/LogVol01 [       1.50 GiB]
  /dev/sda2                [       1.00 GiB]
  /dev/sda3                [     <39.00 GiB] LVM physical volume
  /dev/sdb                 [       1.00 GiB]
  /dev/sdc                 [       1.00 GiB]
  2 disks
  3 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
First we need to initialise the physical volumes for use by LVM:
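From here, a minimal sketch of the usual LVM flow would look something like the following; the volume group and logical volume names are just placeholders:

[vagrant@centos ~]$ sudo pvcreate /dev/sdb /dev/sdc            # initialise both disks as physical volumes
[vagrant@centos ~]$ sudo pvs                                   # confirm the PVs are registered
[vagrant@centos ~]$ sudo vgcreate vg_data /dev/sdb /dev/sdc    # merge both PVs into one volume group
[vagrant@centos ~]$ sudo lvcreate -n lv_data -l 100%FREE vg_data   # carve a logical volume out of it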
Sooner or later we all run out of space. Today I am going to demo how to add new storage to a Linux VM. First we will look at how to do this on a local VM with VirtualBox and Vagrant, then in AWS.
1. Adding a new volume locally.
2. Splitting disk into partitions
3. Spinning up an AWS EC2 instance and adding a new volume manually.
4. Attaching a new volume with the AWS CLI.
So, assuming you have Vagrant and VirtualBox installed, let’s spin up a new VM:
vagrant init ubuntu/trusty64 && vagrant up && vagrant ssh
You can pick a newer version of Ubuntu of course, Xenial or Zesty, or even any other Linux distro. I already have the ubuntu/trusty64 Vagrant box downloaded, so I will be using that one.
First let’s check what we have already got there with the ‘list block devices’ command:
vagrant@sensuclient:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  40G  0 disk
`-sda1   8:1    0  40G  0 part /
vagrant@sensuclient:~$
Now let’s exit VM and stop it:
vagrant halt
==> sensuclient: Attempting graceful shutdown of VM...
Then we need to go to VirtualBox and add a new disk via the VM’s storage settings.
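If you prefer the command line, roughly the same thing can be done with VBoxManage; the VM name and the storage controller name below are assumptions, so check yours with the list and showvminfo commands first:

VBoxManage list vms                                            # find the VM name
VBoxManage createmedium disk --filename sdb.vdi --size 1024    # create a 1GB disk image
VBoxManage storageattach "<vm-name>" --storagectl "SATA Controller" \
  --port 1 --device 0 --type hdd --medium sdb.vdi              # attach it to the VM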
Once it is done, we can start the VM and check the devices again:
vagrant up && vagrant ssh vagrant@sensuclient:~$ sudo lsblk -f NAME FSTYPE LABEL MOUNTPOINT sda `-sda1 ext4 cloudimg-rootfs / sdb
As you can see, a new disk, ‘sdb’, has been added to the list.
Next we need to create a filesystem:
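As a rough sketch, creating and mounting a filesystem on the new disk would go something like this; ext4 and the /data mount point are arbitrary choices:

vagrant@sensuclient:~$ sudo mkfs.ext4 /dev/sdb     # create an ext4 filesystem on the new disk
vagrant@sensuclient:~$ sudo mkdir -p /data         # create a mount point
vagrant@sensuclient:~$ sudo mount /dev/sdb /data   # mount it
vagrant@sensuclient:~$ df -h /data                 # verify the new space is available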
Today I am going to show how to scale Docker containers on Kubernetes, and you will see how easy it is.
Then we will look at how pods can be autoscaled based on performance degradation and CPU utilisation.
1. Deploy simple stack to k8s
2. Scaling the deployment manually.
3. Autoscaling in k8s based on CPU Utilisation.
1. Deploy simple stack to k8s
If you don’t have Kubernetes installed on your machine, in this article I demonstrate how easily this can be achieved on macOS; it literally takes a few minutes to set up.
So let’s create a deployment of a simple test http server container:
➜ ~ kubectl run busybox --image=busybox --port 8080 \
  -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
  env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
deployment "busybox" created
I have also set it up so it returns its hostname in the response to an HTTP GET request; we will need this to distinguish responses from different instances later on. Once deployed, we can check our deployment and pod status:
➜ ~ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
busybox   1         1         1            1           3m
➜ ~ kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
busybox-7bcdf6684b-jnp6w   1/1       Running   0          18s
➜ ~
As you can see, its current ‘DESIRED’ state equals 1.
The next step is to expose our deployment through a service so it can be queried from outside the cluster:
➜ ~ kubectl expose deployment busybox --type=NodePort
service "busybox" exposed
This will expose our endpoint:
➜ ~ kubectl get endpoints
NAME      ENDPOINTS         AGE
busybox   172.17.0.9:8080   23s
Once it is done, we can ask our cluster manager tool to give us its API URL:
➜ ~ minikube service busybox --url
http://192.168.99.100:31623
If we query it we will get its hostname in the response:
➜ ~ curl http://192.168.99.100:31623
busybox-7bcdf6684b-jnp6w
2. Scaling the deployment manually.
Now our deployment is ready to be scaled:
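A minimal sketch of the manual scaling step, and of the CPU-based autoscaling mentioned in section 3 (the replica counts and CPU target here are arbitrary, and the HPA assumes cluster metrics are available):

➜ ~ kubectl scale deployment busybox --replicas=3
➜ ~ kubectl get pod                                  # three busybox pods should now be listed
➜ ~ kubectl autoscale deployment busybox --min=1 --max=5 --cpu-percent=50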
I am assuming you have VirtualBox installed on your Mac.
To test most of the stuff on k8s you don’t need multiple nodes; running a one-node cluster is pretty much all you need.
First we need to install kubectl, a tool to interact with the Kubernetes cluster:
➜ ~ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl \
  && chmod +x ./kubectl \
  && sudo mv ./kubectl /usr/local/bin/kubectl
Then we need Minikube – which is a tool that provisions and manages single-node Kubernetes clusters:
➜ ~ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.0/minikube-darwin-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/
Now we can start the VM:
➜ ~ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Downloading Minikube ISO
 140.01 MB / 140.01 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 148.56 MB / 148.56 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Let’s check everything is working:
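A few quick checks along these lines should confirm the single-node cluster is up:

➜ ~ minikube status
➜ ~ kubectl cluster-info
➜ ~ kubectl get nodes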
Sometimes you need to run a container to execute a specific task and then stop it.
Normally in Kubernetes, if you just try to run it, it will actually create a deployment, meaning your container will keep running all the time. That is because by default kubectl run uses the --restart="Always" policy.
So if you don’t want to create a YAML file where you specify the resource ‘kind’ as a Job, but simply use kubectl run, you can set the restart policy to ‘OnFailure’.
Let’s run a simple container as a job. It is a simple web crawler which I wrote for one of my job interviews; it has many bugs and is incomplete, but sometimes it actually works 🙂 So let’s run it:
➜ ~ kubectl run crawler --restart=OnFailure --image=kayan/web-crawler \
  -- http://www.gamesyscorporate.com http://www.gamesyscorporate.com 3
job "crawler" created
Now we can check the state of the pod:
➜ ~ kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
crawler-k57bh   0/1       ContainerCreating   0          2s

It will take a while, as it needs to download the image first. To check, run:
kubectl describe pod crawler
You should see something like below: