
Tag: AWS

Remote state isolation with terraform workspaces for multi-account deployments

Today we are going to look at how to use Terraform workspaces (the CLI feature, not Terraform Enterprise) to manage multi-environment and multi-account deployments securely. The main issue with Terraform workspaces is that they share a single state bucket across all environments. We will look at how to improve that by segregating access to each environment, so that when we are authenticated against the dev account we cannot read or edit the state file of the prod account, and so on. This makes Terraform workspaces more secure and production ready, while keeping the Terraform commands neat:

terraform workspace new prod
..
terraform workspace new dev
..
terraform workspace select prod
terraform plan
..
terraform workspace select dev
terraform plan
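
To make the isolation concrete: the S3 backend stores each workspace’s state under the env:/<workspace>/ prefix, so a bucket policy can deny the dev role access to anything outside its own prefix. Here is a minimal sketch of that idea, with a made-up bucket name and role ARN, and not necessarily the exact policy we will end up with:

# Deny the dev role object access to anything outside its own workspace prefix.
# "my-terraform-state" and the role ARN are placeholders.
cat > dev-state-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDevRoleOutsideDevWorkspace",
      "Effect": "Deny",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/dev-terraform" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "NotResource": "arn:aws:s3:::my-terraform-state/env:/dev/*"
    }
  ]
}
EOF

aws s3api put-bucket-policy \
  --bucket my-terraform-state \
  --policy file://dev-state-policy.json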

How to preserve index based order in terraform maps

I was adding a new VPC peering with another account today and noticed in terraform plan that, on top of adding the new peering, Terraform wanted to delete my old peerings and recreate them.

Here is my peering code:

  resource "aws_vpc_peering_connection" "apples_account" {
  count = "${length(var.apples_account_vpc_ids)}"

  vpc_id = "${aws_vpc.vpc.id}"

  peer_owner_id = "${var.apples_account}"
  peer_vpc_id   = "${element(values(var.apples_account_vpc_ids),count.index)}"

  auto_accept = false
  peer_region = "eu-west-1"

  tags = "${merge(
    map(
      "Name",
      "peer-${var.environment_group}-${var.aws_account}-${element(keys(var.apples_account_vpc_ids),count.index)}-company1"),
    local.all_tags
    )}"
}

And vars:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-i": "vpc-222d4253"
  }

As you can see, I am adding a new VPC, vpc-staging-i, and here is what I get:

 
terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_vpc_peering_connection.apples_account[0] (new resource required)
      id:              "pcx-00888486b31516daa" => <computed> (forces new resource)
      accept_status:   "active" => <computed>
      accepter.#:      "0" => <computed>
      auto_accept:     "false" => "false"
      peer_owner_id:   "111111111111" => "111111111111"
      peer_region:     "eu-west-1" => "eu-west-1"
      peer_vpc_id:     "vpc-111d4253" => "vpc-222d4253" (forces new resource)
      requester.#:     "1" => <computed>
      tags.%:          "9" => "9"
      tags.CostCentre: "OPS_TEAM" => "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov" => "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1" => "peer-vpc-secure-np-vpc-staging-i-company1"
      tags.Owner:      "Terraform" => "Terraform"
      tags.Product:    "PROD1" => "PROD1"
      tags.Region:     "eu-west-2" => "eu-west-2"
      tags.Role:       "secure" => "secure"
      tags.Scope:      "internal" => "internal"
      tags.SourcePath: "terraform/vpc/business/" => "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a" => "vpc-222eddef5e86fa65a"

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-111d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 2 to add, 0 to change, 1 to destroy.

As you can see, vpc-222d4253 replaces vpc-111d4253, which is then added back as a new resource. But I don’t want to recreate my peerings!

This matters because the other side of the peering is in a different account and I can’t use auto_accept, meaning the other account will have to accept the recreated peerings again, and in between there is a breaking change…

So first of all, why is this happening?

This is happening because keys(map) in Terraform returns the keys sorted in alphabetical order. Let’s prove it: if I change vpc-staging-i to vpc-staging-m:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-m": "vpc-222d4253"
  }

As M comes after L, as opposed to I coming before L, the order will now be artificially preserved:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-222d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-m-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 1 to add, 0 to change, 0 to destroy.

Indeed, only the new VPC peering is added.

But I don’t want to juggle letters; besides, that letter actually stands for the type of VPC (l for low risk, m for medium, etc.), not just some random character. I need another solution, and luckily there is one.
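
One way out (not necessarily the approach described further below) is to realign the state with the new alphabetical ordering before planning, so the existing peering simply keeps its resource instead of being destroyed. With terraform state mv the old staging-l peering can be moved to the index it will occupy once staging-i is added:

# staging-i sorts before staging-l, so the existing peering moves from index 0 to 1
terraform state mv \
  'aws_vpc_peering_connection.apples_account[0]' \
  'aws_vpc_peering_connection.apples_account[1]'

# the plan should now show only one create and no destroy/recreate
terraform plan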


Storing sensitive data in AWS with credstash, DynamoDB and KMS.

One of the most important problems of modern cloud infrastructure is security. You can put a lot of effort into automating the build of your infrastructure, but it is worthless if you don’t deal with sensitive data appropriately, and sooner or later that will become a pain.

Most big organisations will probably spend some time implementing and supporting HashiCorp Vault, or something similar that is more ‘enterprisy’.
In most cases, though, something simple yet secure and reliable is sufficient, especially if you follow YAGNI.

Today I will demonstrate how to use a tool called credstash, which leverages two AWS services: DynamoDB and KMS. It uses DynamoDB as a key/value store to save secrets encrypted with a KMS master key and an encryption context, and anyone who has access to the same master key and encryption context can then decrypt and read them.

From a user’s perspective, you don’t need to deal with either DynamoDB or KMS directly. All you do is store and read your secrets, passing the key, value and context as arguments to credstash.
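
For example, once the DynamoDB table and KMS key exist (credstash defaults to a table called credential-store and a key alias called credstash), storing and reading a secret looks roughly like this; the secret name and the env=dev context are purely illustrative:

# one-off: create the DynamoDB table credstash uses
credstash setup

# store a secret, encrypted with the KMS master key under an encryption context
credstash put db_password 'S3cr3t!' env=dev

# read it back -- requires the same context and access to the key
credstash get db_password env=dev

# a different context will fail to decrypt
credstash get db_password env=prod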

So let’s go straight to the Terraform code which we will use to provision the DynamoDB table and the KMS key.


Kubernetes authentication with AWS IAM.

Today I am going to demonstrate how you can leverage your existing AWS IAM infrastructure to enable fine grained authentication (authN) and authorization (authZ) for your Kubernetes (k8s) cluster. We will look at how to use aws-iam-authenticator to map AWS IAM roles to k8s RBAC groups and how to enable this authentication with kops and kubeadm. We will set up two groups, one with admin rights and a second with view-only access, but based on this example you will be able to create as many groups as you wish, with fine grained control.

One of the key problems once you start using k8s in your organisation is how you are going to authenticate to your cluster.
When a k8s cluster is first provisioned, it sets up x509 client certificates as the default authentication mechanism, meaning that in order to let your users use the cluster you need to share the key and certificate with them. Apart from being insecure, this will not let you use the fine grained RBAC authorization that comes with k8s.

k8s comes with many authN mechanisms, but if you are using AWS you most likely already have your users divided into specific groups with specific IAM roles: admins, ops, developers, testers, viewers, etc. So all you have to do is map those groups to k8s RBAC groups. As a result, you will have users who can only view cluster resources like pods and deployments, users who can deploy those resources, like the members of your build team, and users who can set up the cluster itself, like the ops and admin teams.
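
To give a feel for the client side of that flow: kubectl no longer uses certificates at all, it shells out to aws-iam-authenticator, which signs an STS request with your AWS credentials and returns a token the API server verifies and maps to an RBAC group. Something along these lines (the cluster name and role ARN are placeholders) produces a token for a given role:

# generate a bearer token for cluster "my-cluster", assuming a view-only role;
# kubectl runs an equivalent command via the exec credential plugin in kubeconfig
aws-iam-authenticator token \
  -i my-cluster \
  -r arn:aws:iam::111111111111:role/k8s-view-only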

So let’s create a list of the things we will need to do:

1. Create example IAM users, groups and roles.
2. Configure kops to use aws-iam-authenticator.
3. Map AWS IAM roles to k8s RBAC groups.
4. Set up kubectl to authenticate to the cluster using IAM roles.
5. Configure kubeadm to use aws-iam-authenticator.


How to setup Kubernetes cluster on AWS with kops

Today I am going to show how to set up a Kubernetes cluster on AWS using kops (k8s operations).

In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:

0. Set up a VM with CentOS Linux as a control center.
1. Install and configure AWS cli to manage AWS resources.
2. Install and configure kops to manage provisioning of k8s cluster and AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and setup ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.
5. (Update September 2018) Set up authentication in Kubernetes with AWS IAM Authenticator (heptio).
6. (Update June 2019) The advanced way: Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

0. Set up a VM with CentOS Linux

Even though I am using macOS, it is sometimes annoying that you can’t run certain commands or that some arguments are different, so let’s spin up a Linux VM first. I chose CentOS this time; you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:

➜  kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|

  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end

end

Let’s start it up and log on:
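
This is just the standard Vagrant pair of commands, run from the directory containing the Vagrantfile:

vagrant up && vagrant ssh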


Provisioning EC2 key pairs with terraform.

In the previous example, we created an EC2 instance which we wouldn’t be able to access. That is because we neither provisioned a new key pair nor used an existing one, as we can see from the state report:

➜  terraform_demo grep key_name terraform.tfstate
                            "key_name": "",
➜  terraform_demo

As you can see, key_name is empty.

Now, if you already have a key pair that you use to connect to your instances (you will find it in the EC2 Dashboard, under NETWORK & SECURITY – Key Pairs), we can specify it in the aws_instance section so the EC2 instance can be accessed with that key:

resource "aws_instance" "ubuntu_zesty" {
  ami           = "ami-6b7f610f"
  instance_type = "t2.micro"
  key_name      = "myec2key"
}

Let’s create an instance:
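
With key_name in place, the usual plan and apply cycle is all that is needed (output omitted here):

# preview the change -- the instance should now show key_name set to "myec2key"
terraform plan

# create the instance, confirming with "yes" when prompted
terraform apply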


Spinning up an EC2 with Terraform and Vault.

Today we will look at how to set up an EC2 instance with Terraform.

  1. Set up Terraform
  2. Spin up EC2
  3. Externalise secrets and other resources with terraform variables.
  4. Set up Vault as secret repo

1. Set up Terraform

So first things first, a quick installation guide: visit https://www.terraform.io/downloads.html, pick the right version and download it:

➜  apps wget https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip\?_ga\=2.1738614.654909398.1512400028-228831855.1511115744
--2017-12-04 15:16:06--  https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Resolving releases.hashicorp.com... 151.101.17.183, 2a04:4e42:4::439
Connecting to releases.hashicorp.com|151.101.17.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15750266 (15M) [application/zip]
Saving to: ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’

terraform_0.11.1_darwin_amd64.zip?_ga=2.17386 100%[=================================================================================================>]  15.02M   499KB/s    in 30s

2017-12-04 15:16:36 (517 KB/s) - ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’ saved [15750266/15750266]

Then unzip:

➜  apps unzip terraform_0.11.1_darwin_amd64.zip\?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Archive:  terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
  inflating: terraform

Finally, make sure the location is added to PATH:

➜  ~ export PATH=~/apps:$PATH

Check that the installation works:

➜  ~ terraform -v
Terraform v0.11.1

2. Spin up EC2

The plan is to spin up the latest Ubuntu.
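
Finding out what ‘latest Ubuntu’ means in AMI terms is half the battle. As a quick sanity check from the shell (Canonical’s owner ID 099720109477 and the Zesty name pattern are my assumptions here), something like this returns the most recent matching image:

# latest Ubuntu 17.04 (Zesty) AMI published by Canonical in the current region
aws ec2 describe-images \
  --owners 099720109477 \
  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-zesty-17.04-amd64-server-*' \
  --query 'sort_by(Images,&CreationDate)[-1].[ImageId,Name]' \
  --output text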


How to add a new storage volume to Linux VM locally and on AWS EC2.

Sooner or later we all run out of space. Today I am going to demo how to add a new storage volume to a Linux VM. First we will look at how to do this on a local VM with VirtualBox and Vagrant, then in AWS.

1. Adding a new volume locally.
2. Splitting the disk into partitions.
3. Spinning up an AWS EC2 instance and adding a new volume manually.
4. Attaching a new volume with the AWS CLI.

So, assuming you have Vagrant and VirtualBox installed, let’s spin up a new VM:

vagrant init ubuntu/trusty64 && vagrant up && vagrant ssh

You can pick a newer version of Ubuntu of course, Xenial or Zesty, or even any other Linux distro. I already have the ubuntu/trusty64 Vagrant box downloaded, so I will be using that one.

First let’s check what we have already got there with the ‘list block devices’ command:

vagrant@sensuclient:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  40G  0 disk
`-sda1   8:1    0  40G  0 part /
vagrant@sensuclient:~$

Now let’s exit the VM and stop it:


vagrant halt
==> sensuclient: Attempting graceful shutdown of VM...

Then we need to go to VirtualBox and add a new disk to the VM:

Once that is done, we can start the VM and check the devices again:

vagrant up  && vagrant ssh  

vagrant@sensuclient:~$ sudo lsblk -f
NAME   FSTYPE LABEL           MOUNTPOINT
sda
`-sda1 ext4   cloudimg-rootfs /
sdb

As you can see, a new disk, ‘sdb’, has been added to the list.

Next we need to create a filesystem:
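
Assuming we use the whole disk without partitioning it first, the next steps would look something like this (the /data mount point is just an example):

# format the new disk with ext4
sudo mkfs.ext4 /dev/sdb

# create a mount point and mount the new filesystem
sudo mkdir -p /data
sudo mount /dev/sdb /data

# verify
df -h /data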
