
Ifrit LTD Posts

Supernetting explained easy

I have recently been configuring a Squid proxy behind a load balancer. In order for Squid to allow incoming PROXY protocol connections from the load balancer, I quickly decided the easiest option would be either the whole VPC CIDR range:

acl loadbalancer src 10.139.0.0/17
proxy_protocol_access allow loadbalancer

or the list of subnets from the 3 AZs where the load balancer is running:

acl loadbalancer src 10.139.64.64/28 10.139.64.96/28 10.139.64.80/28
proxy_protocol_access allow loadbalancer

Even though both configurations are valid, my pull request quickly caught the attention of a colleague of mine who is more experienced in networking (in fact, an ex-CCNP guy). But that is the beauty of modern operations teams working in a DevOps fashion: while he may catch this sort of issue, I (an ex-developer guy), for instance, can easily spot how duplication in a piece of bash or Python code could be avoided by refactoring it into a reusable function/template.

So back to our problem: as I said, while both ranges are valid, the first is in fact much wider than actually required, and the second is redundant.


Enter supernetting.

So what is that?
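In short, supernetting (route aggregation) is combining several smaller, contiguous networks into one larger prefix. Just to illustrate the idea, here is a quick check of how far those three /28s can be collapsed; this is only a sketch and assumes python3 is available on the box:

# Collapse the three /28s into the smallest set of covering prefixes
# (python3's ipaddress module is used here purely for illustration)
python3 -c '
import ipaddress
nets = [ipaddress.ip_network(n) for n in
        ("10.139.64.64/28", "10.139.64.80/28", "10.139.64.96/28")]
print(list(ipaddress.collapse_addresses(nets)))
'
# -> [IPv4Network('10.139.64.64/27'), IPv4Network('10.139.64.96/28')]

Two of the /28s merge cleanly into a /27, and covering all three with a single prefix would take the slightly wider 10.139.64.64/26.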


How to fix DNS issues when using OpenVPN.


Sometimes you successfully connect to the VPN server but nothing seems to work. Well, one of the reasons could be DNS.
Firstly, you should check your VPN logs. For instance, on macOS (Tunnelblick) they are in:
/Library/Application Support/Tunnelblick/Logs
and on Linux you can check:
journalctl -u NetworkManager.service

2019-06-11 23:30:25.110048 MANAGEMENT: >STATE:1560292225,GET_CONFIG,,,,,,
2019-06-11 23:30:25.110251 SENT CONTROL [openvpn.example.com]: 'PUSH_REQUEST' (status=1)
2019-06-11 23:30:25.252005 PUSH: Received control message: 'PUSH_REPLY,route ....
dhcp-option DOMAIN dev.example.com,dhcp-option DOMAIN prod.example.com,dhcp-option DOMAIN int.example.com'
...
....
2019-06-11 23:30:25.252374 Options error: Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:13: dhcp-option (2.4.7)

In the example above, the OpenVPN client complains about not recognising the dhcp-option, because the server pushes multiple ‘dhcp-option DOMAIN value’ config params, whereas
the client expects a single command with multiple values: ‘dhcp-option DOMAIN value1 value2’.

This normally happens when your client version doesn’t match your server version, so your client doesn’t know what to do with them.

As a result you may not get the correct settings in your ‘/etc/resolv.conf’, for example a missing or incomplete ‘nameserver’ or ‘search’ entry.
In this example we won’t get ‘search’ set up correctly, meaning that if there was a DNS record like something.int.example.com, we wouldn’t be able
to refer to it without the FQDN, i.e. as just ‘something’; that is what the ‘search’ parameter does in ‘/etc/resolv.conf’.

If ‘nameserver’ is not configured then our DNS won’t work at all.
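For reference, a healthy ‘/etc/resolv.conf’ after a successful push would look roughly like this (the domains are the ones pushed above; the nameserver IP is just an example):

cat /etc/resolv.conf
search dev.example.com prod.example.com int.example.com
nameserver 10.0.0.2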

But there is a solution.


Storing sensitive data in AWS with credstash, DynamoDB and KMS.

One of the most important problems of modern cloud infrastructure is security. You can put a lot of effort into automating the build process of your infrastructure, but it is worthless if you don’t deal with sensitive data appropriately, and sooner or later it could become a pain.

Most big organisations will probably spend some time implementing and supporting HashiCorp Vault, or something similar that is more ‘enterprisey’.
In most cases, though, something simple yet secure and reliable can be sufficient, especially if you follow YAGNI.

Today I will demonstrate how to use a tool called credstash, which leverages two AWS services for its functionality: DynamoDB and KMS. It uses DynamoDB as a key/value store to save the secrets, encrypted with a KMS master key and an encryption context; everyone who has access to the same master key and encryption context can then decrypt and read the secrets.

From the user's perspective, you don’t need to deal with either DynamoDB or KMS directly. All you do is store and read your secrets, passing a key/value pair and a context as arguments to credstash.
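To make that concrete, here is roughly what day-to-day usage looks like from the command line (a sketch; the secret name, value and context key below are made up):

# One-off: create the DynamoDB table credstash uses (by default 'credential-store')
credstash setup

# Store a secret, encrypted with the KMS master key and tagged with an encryption context
credstash put db_password 'S3cr3tPassw0rd' environment=prod

# Read it back; the same context has to be supplied to decrypt it
credstash get db_password environment=prod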

So let’s go straight to the Terraform code which we will use to provision the DynamoDB table and the KMS key.


Kubernetes authentication with AWS IAM.

Today I am going to demonstrate how you can leverage existing AWS IAM infrastructure to enable fine-grained authentication (authN) and authorization (authZ) for your Kubernetes (k8s) cluster. We will look at how to use aws-iam-authenticator to map AWS IAM roles to k8s RBAC groups and how to enable authentication with kops and kubeadm. We will set up two groups, one with admin rights and a second with view-only access, but based on the given example you will be able to create as many groups as you wish, with fine-grained control.

One of the key problems once you start using k8s in your organization is how you are going to authenticate to your cluster.
When a k8s cluster is first provisioned, it will set up x509 client certificates as the default authentication mechanism, meaning that in order to let your users use the cluster you need to share the key and certificate with them, which, apart from being insecure, will not let you use the fine-grained RBAC authorization
which comes with k8s.

k8s comes with many authN mechanisms, but if you are using AWS you most likely already have your users divided into specific groups with specific IAM roles, like admins, ops, developers, testers, viewers, etc. So all you have to do is map those groups onto k8s RBAC groups; as a result, you will have users who can only view cluster resources like pods and deployments, users who can deploy those resources, like the members of your build team, and users who can set up the cluster itself, like the ops and admin team.
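The RBAC half of that mapping is plain Kubernetes. As a rough sketch, a view-only group could be wired up like this (the group name k8s-viewers is made up here; pointing IAM roles at that group is what aws-iam-authenticator will do later):

# Bind a hypothetical 'k8s-viewers' group to the built-in read-only 'view' ClusterRole
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-can-view
subjects:
- kind: Group
  name: k8s-viewers        # the group our IAM users will be mapped into
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view               # built-in cluster-wide read-only role
  apiGroup: rbac.authorization.k8s.io
EOF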

So let’s create a list of the things we will need to do:

1. Create example IAM users, groups and roles.
2. Configure kops to use aws-iam-authenticator.
3. Map AWS IAM roles to RBAC groups.
4. Set up kubectl to authenticate to the cluster using IAM roles.
5. Configure kubeadm to use aws-iam-authenticator.


Advanced Jenkins setup: Creating Jenkins configuration as code and setting up Kubernetes plugin

This blog post demonstrates how anything in Jenkins can be configured as code through its Java API using Groovy, and how changes can be applied right from within a Jenkins job. I will specifically demo how to configure the Kubernetes plugin and credentials, but the same concept can be used later to configure any Jenkins plugin you are interested in. We will also look at how to create a custom config which can be applied either to all
or only to specific Jenkins instances, so you can set up different instances differently based on security policy or any other criteria.

The Why…

Recently I have been working on a task to improve the deployment of our master Jenkins instances on Kubernetes.
One of the requirements was to improve the speed, as we have more than 40 Jenkins masters running in different
environments like test, dev, pre-prod, perf, prod, etc., deployed on a Kubernetes cluster in AWS. The deployment job took around an hour, involved downtime and required multiple steps.


The most demanded DevOps skills stats, DIY approach.

I was curious the other day: what are the most demanded DevOps skills out there on the market?
Not that I didn’t have a clue, as for someone who has been in the industry for a while it is kinda obvious, but sometimes you are simply curious or just want to get some sort of stats. So after a couple of googling attempts, which didn’t give any reasonable results apart from boring marketing ads and silly suggestions like soft skills (who cares!), I decided that the best approach would be DIY!

So here is what I did, step by step:

1) Went to the website many have probably used to find a job and put in some search criteria.

Then switched to Classic View and changed the summary to 200 jobs per page, which is the max. Now all I needed was to find all occurrences of certain keywords on the resulting page. (This manual part would benefit from Selenium/PhantomJS if run regularly; I will probably add it later.)

2) Obviously I didn’t want to count manually, so I decided: hey, let’s fetch the page with curl and then scan the output for some predefined keywords. Initially the keywords file was too big, so I dropped some entries which turned out to be not that popular (1 or 2 occurrences). In general, though, the file needs to be maintained, as over time some new kids on the block will pop up. So here is the list in the words.txt file:

➜  trendystuff cat words.txt 
aws
azure

terraform

ansible
puppet
chef

docker
kubernetes
mesos

jenkins
ci/cd

elasticsearch
kibana 
logstash
elk

prometheus
openstack
zabbix
vault

linux
scripting
unix
bash
python
groovy
ruby

git
maven
➜  trendystuff

3) Now let’s write a super simple dummy bash script to go through the output and produce some stats:

➜  scrips cat jobstats 
#!/bin/bash

# Re-download the page unless -c (use the cached output.txt) is passed
if [[ $@ != *-c* ]]; then
	curl -s "$1" > output.txt
fi

# With -2, sort the results by the second column (the count), descending
if [[ $@ == *-2* ]]; then
	sort_arg=" -k 2 -r"
fi

rm -f result.txt
for word in `cat words.txt`; do
	echo "$word `grep -io $word output.txt \
	| wc -l`" \
	| xargs >> result.txt
done; cat result.txt | sort -n $sort_arg
➜  scrips 

4) Finally, let’s run it. We have to copy the URL from the website, which is generated once you put in your search criteria and press search, and pass it as an argument to the script:
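Something along these lines, where the URL is just a placeholder for whatever your job site generates for your search:

# Count keyword occurrences on the search results page and sort by count (-2);
# add -c on subsequent runs to reuse the already downloaded output.txt
./jobstats "https://jobsite.example.com/jobs?q=devops&perPage=200" -2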


Docker volume monitoring with Ruby, Sensu and Uchiwa.

In this post I am going to demonstrate how to monitor your docker volumes with Sensu.
I came across a problem where our Jenkins instances were running out of space and no jobs could be scheduled because of it,
so, before it was too late, it seemed very useful to have something in place that would show if some container is being too greedy and eating all the space on the volume.

In general we are going to look at the following things today:
1. Some Ruby scripting
2. Identifying disk and docker volume usage commands
3. Configuring Sensu server and client for monitoring
4. Making script run as a root
5. Running a simple Uchiwa dashboard

1. Some Ruby scripting
So first we are going to write the script which will check the volumes and report if usage is higher than what we configured.
Following Sensu best practices we will write it in Ruby; we could probably also use bash, but that really gets messy once we add more logic and lines.

#!/usr/bin/env /opt/sensu/embedded/bin/ruby

# Usage: <script> <max_volume_size_in_MB> <container_name_filter>
max_size = ARGV[0].to_i
container_name_filter = ARGV[1]
message = ""

# Disk usage (in KB) of every docker volume, biggest first
procs = `du -sk /var/lib/docker/volumes/* | sort -rn`
procs.each_line do |process|
  result = process.split(" ")
  vol_usage = result[0].to_i / 1024
  vol_name = result[1].gsub "/var/lib/docker/volumes/", ''

  if vol_usage > max_size
    # Find the running container (matching the name filter) that uses this volume
    cont_name = `docker ps --filter=volume=#{vol_name} --filter=name=#{container_name_filter} --format {{.Names}}`
    if !cont_name.empty?
      message = message + "container: #{cont_name.strip} volume exceeds max disk usage(#{max_size}MB): #{vol_usage}MB; \n"
    end
  end
end

# A non-zero exit code makes Sensu raise an alert
unless message.empty?
  puts message
  exit 1
end
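The two shell commands the script leans on can also be run by hand first to eyeball the numbers (the volume and container names below are just examples):

# Per-volume disk usage in KB, biggest volumes first
du -sk /var/lib/docker/volumes/* | sort -rn

# Which running container (if any) uses a given volume
VOLUME=jenkins_home
docker ps --filter volume="$VOLUME" --filter name=jenkins --format '{{.Names}}'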

2. Identifying disk and docker volume usage commands


Automating gmail check in your shell

If you are just like me and like doing almost everything through shell scripts rather than fancy UI apps, then here is a nice and easy way of checking for new emails in your Gmail account:


# Query Gmail's atom feed for the unread count; $1 is the account name, $2 the password
function gmail(){
	emails=$(curl -s -u "$1:$2" "https://mail.google.com/mail/feed/atom"\
	 | egrep -o '<fullcount>[0-9]*' | cut -c 12-)
	if [ "$emails" -gt 0 ] ; then echo "You have ${emails} emails in your $1 account"; fi
}
# Passwords are read from Vault rather than hardcoded in the rc file
gmail daenerys.targaryen $(vault read -field=value secret/GPASSWORD)
gmail simply.dany $(vault read -field=value secret/G2PASSWORD)

Simply add this to your .zshrc/.bashrc and you are done; next time you open a new tab you might get something like this:


Last login: Mon Feb  5 21:27:39 on ttys006
You have 1 emails in your daenerys.targaryen account
➜  ~ 