Category: terraform

Remote state isolation with terraform workspaces for multi-account deployments

Today we are going to look at how to use Terraform workspaces (the CLI feature, not the Enterprise product) to manage multi-environment, multi-account deployments in a secure way. The main issue with Terraform workspaces is that a single bucket holds the state for all environments. We will look at how to improve that by segregating access to each environment, so that when we are authenticated to the dev environment we can't read or edit the state file of the prod environment, and so on. This makes Terraform workspaces more secure and production ready, while keeping the Terraform commands neat:

terraform workspace new prod
..
terraform workspace new dev
..
terraform workspace select prod
terraform plan
..
terraform workspace select dev
terraform plan
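For the isolation itself, here is a minimal sketch (the bucket name, policy name and role wiring are hypothetical, not from the original article): with the S3 backend, non-default workspaces are stored under the env:/<workspace>/ prefix inside the bucket, so a per-environment IAM policy can restrict the dev role to exactly that prefix:

```terraform
# Hypothetical sketch: an IAM policy attached to the dev role that only
# allows access to the dev workspace's state objects. The S3 backend
# stores non-default workspace state under the "env:/<workspace>/" prefix.
resource "aws_iam_policy" "dev_state_access" {
  name = "dev-terraform-state-access"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-terraform-state/env:/dev/*"
    }
  ]
}
EOF
}
```

An equivalent policy for prod would swap env:/dev/* for env:/prod/*, so each role can only touch its own workspace's state.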

How to preserve index based order in terraform maps

I have been adding new VPC peerings with another account today and noticed in terraform plan that, on top of adding the new peering, Terraform wanted to delete my old peerings and recreate them.

Here is my peering code:

resource "aws_vpc_peering_connection" "apples_account" {
  count = "${length(var.apples_account_vpc_ids)}"

  vpc_id = "${aws_vpc.vpc.id}"

  peer_owner_id = "${var.apples_account}"
  peer_vpc_id   = "${element(values(var.apples_account_vpc_ids),count.index)}"

  auto_accept = false
  peer_region = "eu-west-1"

  tags = "${merge(
    map(
      "Name",
      "peer-${var.environment_group}-${var.aws_account}-${element(keys(var.apples_account_vpc_ids),count.index)}-company1"),
    local.all_tags
    )}"
}

And vars:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-i": "vpc-222d4253"
  }

As you can see, I am adding new VPC vpc-staging-i and here is what I get:

terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_vpc_peering_connection.apples_account[0] (new resource required)
      id:              "pcx-00888486b31516daa" => <computed> (forces new resource)
      accept_status:   "active" => <computed>
      accepter.#:      "0" => <computed>
      auto_accept:     "false" => "false"
      peer_owner_id:   "111111111111" => "111111111111"
      peer_region:     "eu-west-1" => "eu-west-1"
      peer_vpc_id:     "vpc-111d4253" => "vpc-222d4253" (forces new resource)
      requester.#:     "1" => <computed>
      tags.%:          "9" => "9"
      tags.CostCentre: "OPS_TEAM" => "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov" => "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1" => "peer-vpc-secure-np-vpc-staging-i-company1"
      tags.Owner:      "Terraform" => "Terraform"
      tags.Product:    "PROD1" => "PROD1"
      tags.Region:     "eu-west-2" => "eu-west-2"
      tags.Role:       "secure" => "secure"
      tags.Scope:      "internal" => "internal"
      tags.SourcePath: "terraform/vpc/business/" => "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a" => "vpc-222eddef5e86fa65a"

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-111d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 2 to add, 0 to change, 1 to destroy.

As you can see, vpc-222d4253 replaces vpc-111d4253, and then vpc-111d4253 is added back as a new resource. But I don't want to recreate my peerings!

Because the other side of the peering is in a different account, I can't use auto_accept either, which means the other account will need to accept the new peerings again, and in between this there would be a breaking change…

So first of all, why is this happening?

This is because keys(map) in Terraform returns a list sorted in alphabetical order. Let's prove it: if I change vpc-staging-i to vpc-staging-m:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-m": "vpc-222d4253"
  }

as M comes after L, as opposed to I, which comes before L, the order will now be artificially preserved:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-222d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-m-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 1 to add, 0 to change, 0 to destroy.

Indeed, it is only adding a new VPC peering.

But I don't want to juggle letters. Besides, the letter actually stands for the name of the VPC (l for low risk, m for middle, etc.), not just some random letter. I need another solution, and luckily there is one.
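For reference, here is a sketch of one possible fix, not necessarily the one the article goes on to describe: Terraform 0.12.6 and later support for_each on resources, which keys each instance by map key instead of list position, so adding a new entry cannot shift the existing ones. This assumes the code is upgraded to 0.12 syntax:

```terraform
# Hypothetical sketch using for_each (Terraform >= 0.12.6). Instances are
# addressed by map key, e.g. apples_account["vpc-staging-l"], so adding
# "vpc-staging-i" cannot shift existing instances in state.
resource "aws_vpc_peering_connection" "apples_account" {
  for_each = var.apples_account_vpc_ids

  vpc_id        = aws_vpc.vpc.id
  peer_owner_id = var.apples_account
  peer_vpc_id   = each.value

  auto_accept = false
  peer_region = "eu-west-1"

  tags = merge(
    {
      "Name" = "peer-${var.environment_group}-${var.aws_account}-${each.key}-company1"
    },
    local.all_tags,
  )
}
```

Because state addresses are now keyed by name rather than index, alphabetical ordering of keys() no longer matters at all.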


Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

Today I am going to show how you can fully automate the advanced process of setting up a highly available k8s cluster in the cloud. We will go through a set of Terraform and bash scripts which should be sufficient for you to literally just run terraform plan/apply and get your HA etcd and k8s cluster up and running without any hassle.

    Part 0 – Intro.
    Part 1 – Setting up HA ETCD cluster.
    Part 2 – The PKI infra
    Part 3 – Setting up k8s cluster.

Part 0 – Intro.

If you do a short search on how to set up a k8s cluster, you may find quite a lot of ways this can be achieved.
But in general, all these ways can be grouped into 4 types:

1) No setup
2) Easy Set up
3) Advanced Set up
4) Hard way

By No setup I simply mean something like EKS: it is a managed service, so you don't need to maintain or care about the details while AWS does it all for you. I have never used it, so I can't say much about that one.

Easy setup: tools like kops and the like make it quite easy, a couple-of-commands kind of setup:

kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro \
  --master-size=t2.micro --dns-zone=k8s.ifritltd.net  --cloud aws

All you need is to set up an S3 bucket and DNS records and run the command above, which I described two years ago in this article.

The downside is, first of all, that it is mainly for AWS only, and it generates AWS resources as it wants: it would create security groups, ASGs, etc. in its own way, which means that if you already have Terraform-managed infra with your own rules, strategies and framework, it won't fit into that model but will just be added as some kind of alien infra. Long story short, if you want fine-grained control over your infra, managed from a single centralised Terraform codebase, it isn't the best solution, yet it is still an easy and balanced tool.

Before I start explaining the Advanced Set up, I am just going to mention that the 4th, The Hard Way, is probably only good if you want to learn how k8s works and how all the components interact with each other. As it doesn't use any external tool to set up the components, you do everything manually, so you literally get to know all the guts of the system. Obviously it could become a nightmare to support such a system in production unless all members of the ops team are k8s experts or there are requirements not supported by other bootstrapping tools.

Finally the Advanced Set up.


Storing sensitive data in AWS with credstash, DynamoDB and KMS.

One of the most important problems of modern cloud infrastructure is security. You can put a lot of effort into automating the build process of your infrastructure, but it is worthless if you don't deal with sensitive data appropriately, and sooner or later that could become a pain.

Most big organisations will probably spend some time implementing and supporting HashiCorp Vault, or something similar that is more 'enterprisy'.
In most cases, though, something simple yet secure and reliable can be sufficient, especially if you follow YAGNI.

Today I will demonstrate how to use a tool called credstash, which leverages two AWS services for its functionality: DynamoDB and KMS. It uses DynamoDB as a key/value store to save secrets encrypted with a KMS master key and an encryption context, and everyone who has access to the same master key and encryption context can then decrypt the secret and read it.

From the user's perspective, you don't need to deal with either DynamoDB or KMS directly. All you do is store and read your secrets, passing the key/value and context as arguments to credstash.

So let's go straight to the terraform code which we will use to provision DynamoDB and the KMS key.
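As a preview, here is a hedged sketch of what that code might look like, based on the defaults credstash expects: a DynamoDB table called credential-store with name/version keys, and a KMS key aliased alias/credstash. The capacity values are just examples:

```terraform
# Sketch of the resources credstash uses by default: a KMS master key
# aliased "alias/credstash" and a DynamoDB table "credential-store"
# with "name" as the hash key and "version" as the range key.
resource "aws_kms_key" "credstash" {
  description = "Master key for credstash"
}

resource "aws_kms_alias" "credstash" {
  name          = "alias/credstash"
  target_key_id = "${aws_kms_key.credstash.key_id}"
}

resource "aws_dynamodb_table" "credential_store" {
  name           = "credential-store"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "name"
  range_key      = "version"

  attribute {
    name = "name"
    type = "S"
  }

  attribute {
    name = "version"
    type = "S"
  }
}
```

Once these are provisioned, storing and reading a secret is as simple as credstash put db_password s3cret and credstash get db_password.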


Provisioning EC2 key pairs with terraform.

In the previous example, we created an EC2 instance which we wouldn't be able to access. That is because we neither provisioned a new key pair nor used an existing one, as we can see from the state report:

➜  terraform_demo grep key_name terraform.tfstate
                            "key_name": "",
➜  terraform_demo

As you can see key_name is empty.

Now, if you already have a key pair which you use to connect to your instances, which you will find in the EC2 Dashboard under NETWORK & SECURITY – Key Pairs, then we can specify it in the aws_instance section so the EC2 instance can be accessed with that key:

resource "aws_instance" "ubuntu_zesty" {
  ami           = "ami-6b7f610f"
  instance_type = "t2.micro"
  key_name      = "myec2key"
}
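Alternatively, if you want Terraform to manage the key pair itself, you can register an existing public key with an aws_key_pair resource. The key name and file path below are just examples:

```terraform
# Hypothetical example: register a local public key as an EC2 key pair
# and reference it from the instance instead of a hard-coded name.
resource "aws_key_pair" "myec2key" {
  key_name   = "myec2key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

resource "aws_instance" "ubuntu_zesty" {
  ami           = "ami-6b7f610f"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.myec2key.key_name}"
}
```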

Let’s create an instance:


Spinning up an EC2 with Terraform and Vault.

Today we will look at how to set up an EC2 instance with Terraform.

  1. Set up Terraform
  2. Spin up EC2
  3. Externalise secrets and other resources with terraform variables.
  4. Set up Vault as secret repo

1. Set up Terraform

So first things first, a quick installation guide: visit https://www.terraform.io/downloads.html , pick the right version and download it:

➜  apps wget https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip\?_ga\=2.1738614.654909398.1512400028-228831855.1511115744
--2017-12-04 15:16:06--  https://releases.hashicorp.com/terraform/0.11.1/terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Resolving releases.hashicorp.com... 151.101.17.183, 2a04:4e42:4::439
Connecting to releases.hashicorp.com|151.101.17.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15750266 (15M) [application/zip]
Saving to: ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’

terraform_0.11.1_darwin_amd64.zip?_ga=2.17386 100%[=================================================================================================>]  15.02M   499KB/s    in 30s

2017-12-04 15:16:36 (517 KB/s) - ‘terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744’ saved [15750266/15750266]

Then unzip:

➜  apps unzip terraform_0.11.1_darwin_amd64.zip\?_ga=2.1738614.654909398.1512400028-228831855.1511115744
Archive:  terraform_0.11.1_darwin_amd64.zip?_ga=2.1738614.654909398.1512400028-228831855.1511115744
  inflating: terraform

Finally, make sure the location is added to PATH:

➜  ~ export PATH=~/apps:$PATH

Check installation works:

➜  ~ terraform -v
Terraform v0.11.1

2. Spin up EC2

The plan is to spin up the latest Ubuntu.
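As a sketch of how that could look (the AMI name filter and the Canonical owner id here are assumptions worth double-checking), an aws_ami data source lets us look up the latest image instead of hard-coding an AMI id:

```terraform
# Hedged sketch: look up the most recent Ubuntu 17.04 (zesty) AMI
# published by Canonical (owner id 099720109477) in the current region.
data "aws_ami" "ubuntu_zesty" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-zesty-17.04-amd64-server-*"]
  }
}

resource "aws_instance" "ubuntu_zesty" {
  ami           = "${data.aws_ami.ubuntu_zesty.id}"
  instance_type = "t2.micro"
}
```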
