
Ifrit LTD Posts

Remote state isolation with terraform workspaces for multi-account deployments

Today we are going to look at how to use Terraform workspaces (the CLI feature, not Terraform Enterprise) to manage multi-environment, multi-account deployments in a secure way. The main issue with workspaces is that they all share a single state bucket. We will look at how to improve that by segregating access to each environment, so that when we are authenticated to the dev environment we cannot read or edit the state file of the prod environment, and so on. This makes Terraform workspaces more secure and production ready, while keeping the Terraform workflow neat:

terraform workspace new prod
..
terraform workspace new dev
..
terraform workspace select prod
terraform plan
..
terraform workspace select dev
terraform plan
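
For reference, the default S3 backend stores each non-default workspace's state under its own prefix, which is exactly what makes per-environment access segregation possible. A minimal sketch of the layout and the policy idea, assuming a hypothetical state bucket my-tf-state and backend key vpc/terraform.tfstate:

# Workspace states live under the "env:" prefix by default:
#   s3://my-tf-state/env:/dev/vpc/terraform.tfstate
#   s3://my-tf-state/env:/prod/vpc/terraform.tfstate
aws s3 ls s3://my-tf-state/env:/dev/

# So a dev role's IAM policy can be scoped to its own prefix only, e.g.
# allowing s3:GetObject/s3:PutObject on arn:aws:s3:::my-tf-state/env:/dev/*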

Protecting personal secrets in vault with encryption

One of the issues with using personal secrets in Vault is that the admin/root user is able to access everything in Vault, which makes personal secrets less secure.

To protect a personal secret from root/admin access, we can keep it encrypted, using a private key, GPG, or just a password. Below is an example of how to protect a secret with a password.
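
As a sketch of the idea (the secret path and key name are illustrative), the secret can be encrypted with openssl before it ever reaches Vault, so root only ever sees ciphertext:

# Encrypt the secret with a password before storing it:
echo -n 'my-ssh-passphrase' | \
  openssl enc -aes-256-cbc -pbkdf2 -a -salt -pass pass:MyStrongPassword > secret.enc

# Store only the ciphertext in Vault:
vault kv put secret/personal/me passphrase=@secret.enc

# Read it back and decrypt locally with the same password:
vault kv get -field=passphrase secret/personal/me | \
  openssl enc -aes-256-cbc -pbkdf2 -a -d -pass pass:MyStrongPassword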


Using hashicorp vault for personal secrets

Today I will show you how to use Vault for your personal secrets. Normally you would authenticate and get access to some path in Vault that everyone in your team has access to, but in some cases you may want to use Vault for your own secrets as well, e.g. for storing the passphrase for your SSH private key, an email password, or something similar.

So here is the list of commands that need to be run: first as an admin, to set up auth and policies, and then as a user, to authenticate and read/write secrets.

Create a policy that allows actions under one's own identity:

cat <<EOF | vault policy write identity -
path "secret/data/{{identity.entity.id}}/*" {
	capabilities = ["create", "read", "update", "delete"]
}
EOF
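
Then, on the user side, the flow is roughly as follows (a sketch assuming the userpass auth method is enabled, jq is installed, and the policy above is attached to the user's entity; the username and secret names are illustrative):

# Authenticate and find your own entity id:
vault login -method=userpass username=kayan
ENTITY_ID=$(vault token lookup -format=json | jq -r .data.entity_id)

# Write and read secrets under your identity path:
vault kv put secret/$ENTITY_ID/ssh passphrase='my-passphrase'
vault kv get secret/$ENTITY_ID/ssh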

How to remove the default route to the VPN

Quite often, especially on corporate networks, once you are connected to the company VPN all your traffic starts going via that VPN, meaning they can watch what you do.
Most people may not even suspect this, but it is quite simple to find out.
So I am going to show how to check it on a Mac with a few commands anyone can run.

OK, let's check the routing table when connected to the VPN:

netstat -nr | egrep 'default|0/1'
0/1                10.225.222.129     UGSc           70        0   utun3
default            192.168.0.1        UGSc           12       11     en0
128.0/1            10.225.222.129     UGSc            0        0   utun3
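
The two /1 routes via utun3 split the whole IPv4 space between them and are more specific than the real default route, so together they hijack all traffic. Deleting them (gateway and interface taken from the output above; yours will differ) restores the original default route while keeping VPN-specific routes intact:

# Delete the two half-default routes that send everything to the VPN:
sudo route -n delete -net 0.0.0.0/1 10.225.222.129
sudo route -n delete -net 128.0.0.0/1 10.225.222.129

# Traffic now follows the normal default route via en0 (192.168.0.1),
# while routes explicitly pointing at utun3 keep working.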

How to use echo or cat to check if a port is listening when nc, ss, netstat, curl, etc. are not available on the host

I came across this amazing way of testing whether I could reach a port on a host when literally none of the usual tools was available:

vagrant@ ~ () $ echo hi |  nc -l -p  8089 &
[1] 13651
vagrant@ ~ () $ cat < /dev/tcp/127.0.0.1/8089
hi
[1]+  Done                    echo hi | nc -l -p 8089
vagrant@ ~ () $
vagrant@ ~ () $ cat < /dev/tcp/127.0.0.1/8089
-bash: connect: Connection refused
-bash: /dev/tcp/127.0.0.1/8089: Connection refused
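
The /dev/tcp path is a bash feature rather than a real file, so it also works for remote hosts; wrapping it in timeout avoids hanging on firewalled ports (host and port here are illustrative):

# Must run under bash (not sh); the redirection only succeeds
# if the TCP connection can be established.
timeout 2 bash -c '< /dev/tcp/example.com/443' && echo open || echo closed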

How to generate self-signed certificates with openssl

This article shows how to generate a self-signed CA, then use it to generate a CSR and client/server certificates.

We will use openssl to deal with the certs and look at various command line options so we can:
1. generate the CA and certs without human intervention, which is useful for automation
2. validate the generated certificates
3. add extensions to the certs

Create CA

Generate CA key

1.1 The pass phrase is entered manually by default:

openssl genrsa -des3 -out ca.key 4096

Generating RSA private key, 4096 bit long modulus
............................++
....................++
e is 65537 (0x10001)
Enter pass phrase for ca.key:
Verifying - Enter pass phrase for ca.key:

1.2 Or read it from a file:

openssl genrsa -des3 -passout file:mypass.txt  -out ca.key 4096

1.3 Or pipe it in from elsewhere, e.g. Vault:

vault read --field=value secret/path_to_ca_key/password | openssl genrsa -des3 -passout stdin -out ca.key 4096
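
From here the rest of the flow follows the same non-interactive pattern; as a sketch (subject names are illustrative), -subj avoids the usual prompts:

# Self-signed CA certificate from the key above, valid 10 years:
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
  -passin file:mypass.txt -subj "/CN=my-test-ca" -out ca.crt

# Server key and CSR in one go:
openssl req -new -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/CN=server.example.com" -out server.csr

# Sign the CSR with the CA:
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -passin file:mypass.txt -CAcreateserial -days 365 -out server.crt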

How to preserve index based order in terraform maps

I have been adding new VPC peerings with another account today and noticed that in the terraform plan my new peering would delete the old peerings and recreate them on top of adding the new one.

Here is my peering code:

  resource "aws_vpc_peering_connection" "apples_account" {
  count = "${length(var.apples_account_vpc_ids)}"

  vpc_id = "${aws_vpc.vpc.id}"

  peer_owner_id = "${var.apples_account}"
  peer_vpc_id   = "${element(values(var.apples_account_vpc_ids),count.index)}"

  auto_accept = false
  peer_region = "eu-west-1"

  tags = "${merge(
    map(
      "Name",
      "peer-${var.environment_group}-${var.aws_account}-${element(keys(var.apples_account_vpc_ids),count.index)}-company1"),
    local.all_tags
    )}"
}

And vars:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-i": "vpc-222d4253"
  }

As you can see, I am adding the new VPC vpc-staging-i, and here is what I get:

 
terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_vpc_peering_connection.apples_account[0] (new resource required)
      id:              "pcx-00888486b31516daa" => <computed> (forces new resource)
      accept_status:   "active" => <computed>
      accepter.#:      "0" => <computed>
      auto_accept:     "false" => "false"
      peer_owner_id:   "111111111111" => "111111111111"
      peer_region:     "eu-west-1" => "eu-west-1"
      peer_vpc_id:     "vpc-111d4253" => "vpc-222d4253" (forces new resource)
      requester.#:     "1" => <computed>
      tags.%:          "9" => "9"
      tags.CostCentre: "OPS_TEAM" => "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov" => "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1" => "peer-vpc-secure-np-vpc-staging-i-company1"
      tags.Owner:      "Terraform" => "Terraform"
      tags.Product:    "PROD1" => "PROD1"
      tags.Region:     "eu-west-2" => "eu-west-2"
      tags.Role:       "secure" => "secure"
      tags.Scope:      "internal" => "internal"
      tags.SourcePath: "terraform/vpc/business/" => "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a" => "vpc-222eddef5e86fa65a"

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-111d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 2 to add, 0 to change, 1 to destroy.

As you can see, vpc-222d4253 replaces vpc-111d4253, and then vpc-111d4253 is added back as a new resource. But I don’t want to recreate my peerings!

Because the other side of each peering is in a different account and I can’t use auto_accept either, the other account will need to accept the new peerings again, and in between there is a breaking change…

So first of all, why is this happening?

This is because keys(map) in Terraform returns the list of keys sorted in alphabetical order. Let’s prove it: if I change vpc-staging-i to vpc-staging-m:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-m": "vpc-222d4253"
  }

As M comes after L (as opposed to I, which comes before L), the order now happens to be preserved:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-222d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-m-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 1 to add, 0 to change, 0 to destroy.

Indeed, it only adds the new VPC peering.

But I don’t want to juggle letters; besides, the letter actually stands for the name of the VPC (l – low risk, m – medium, etc.), not just some random character. I need another solution, and luckily there is one.
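
One possible solution (a sketch on modern Terraform, 0.12.6+, and not necessarily the fix from the original post): for_each addresses each peering by map key instead of a positional index, so adding a key can never shift the others:

resource "aws_vpc_peering_connection" "apples_account" {
  # Each instance is keyed by VPC name, so plans stay stable
  # when new entries are added to the map.
  for_each = var.apples_account_vpc_ids

  vpc_id        = aws_vpc.vpc.id
  peer_owner_id = var.apples_account
  peer_vpc_id   = each.value
  auto_accept   = false
  peer_region   = "eu-west-1"

  tags = merge(
    { "Name" = "peer-${var.environment_group}-${var.aws_account}-${each.key}-company1" },
    local.all_tags,
  )
}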


Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

Today I am going to show how you can fully automate the advanced process of setting up a highly available k8s cluster in the cloud. We will go through a set of terraform and bash scripts which should be sufficient for you to literally just run terraform plan/apply and get your HA etcd and k8s cluster up and running without any hassle.

    Part 0 – Intro.
    Part 1 – Setting up HA ETCD cluster.
    Part 2 – The PKI infra.
    Part 3 – Setting up k8s cluster.

Part 0 – Intro.

If you do some quick research on how to set up a k8s cluster, you will find quite a lot of ways it can be achieved.
But in general, all these ways can be grouped into four types:

1) No setup
2) Easy Set up
3) Advanced Set up
4) Hard way

By No setup I simply mean something like EKS: it is a managed service, so you don’t need to maintain it or care about the details, as AWS does it all for you. I have never used it, so I can’t say much about that one.

Easy setup: tools like kops and the like make it quite easy – a couple-of-commands kind of setup:

kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro \
  --master-size=t2.micro --dns-zone=k8s.ifritltd.net  --cloud aws

All you need is to set up an S3 bucket and DNS records and run the command above, which I described two years ago in this article.
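
If you want to reproduce it, the prerequisites for the kops command above boil down to a versioned state bucket and a hosted zone (names taken from the command itself; adjust for your own setup):

# State bucket for kops, versioned so cluster state changes are recoverable:
aws s3api create-bucket --bucket kayan-kops-state --region eu-west-2 \
  --create-bucket-configuration LocationConstraint=eu-west-2
aws s3api put-bucket-versioning --bucket kayan-kops-state \
  --versioning-configuration Status=Enabled

# Hosted zone for the cluster's DNS records:
aws route53 create-hosted-zone --name k8s.ifritltd.net \
  --caller-reference $(date +%s)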

The downside is that, first of all, it is mainly AWS only, and it generates the AWS resources as it sees fit: security groups, ASGs, etc. are created in its own way, which means that if you already have Terraform-managed infra with your own rules, strategies, and framework, it won’t fit into that model and will just be added as some kind of alien infra. Long story short, if you want fine-grained control over how your infra is managed from a single centralised Terraform codebase, it isn’t the best solution, yet it is still an easy and balanced tool.

Before I start explaining the Advanced setup, I will just mention that the 4th, The Hard Way, is probably only good if you want to learn how k8s works and how all the components interact with each other: since it doesn’t use any external tooling to set up the components, you do everything manually and literally get to know all the guts of the system. Obviously it could become a nightmare to support such a system in production, unless all members of the ops team are k8s experts or there are requirements not supported by other bootstrapping tools.

Finally, the Advanced setup.
