
Month: June 2019

How to preserve index-based order in Terraform maps

I have been adding a new VPC peering with another account today and noticed in terraform plan that, on top of adding the new one, Terraform wanted to delete my old peering and recreate it.

Here is my peering code:

  resource "aws_vpc_peering_connection" "apples_account" {
  count = "${length(var.apples_account_vpc_ids)}"

  vpc_id = "${aws_vpc.vpc.id}"

  peer_owner_id = "${var.apples_account}"
  peer_vpc_id   = "${element(values(var.apples_account_vpc_ids),count.index)}"

  auto_accept = false
  peer_region = "eu-west-1"

  tags = "${merge(
    map(
      "Name",
      "peer-${var.environment_group}-${var.aws_account}-${element(keys(var.apples_account_vpc_ids),count.index)}-company1"),
    local.all_tags
    )}"
}

And vars:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-i": "vpc-222d4253"
  }

As you can see, I am adding the new VPC vpc-staging-i, and here is what I get:

 
terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_vpc_peering_connection.apples_account[0] (new resource required)
      id:              "pcx-00888486b31516daa" => <computed> (forces new resource)
      accept_status:   "active" => <computed>
      accepter.#:      "0" => <computed>
      auto_accept:     "false" => "false"
      peer_owner_id:   "111111111111" => "111111111111"
      peer_region:     "eu-west-1" => "eu-west-1"
      peer_vpc_id:     "vpc-111d4253" => "vpc-222d4253" (forces new resource)
      requester.#:     "1" => <computed>
      tags.%:          "9" => "9"
      tags.CostCentre: "OPS_TEAM" => "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov" => "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1" => "peer-vpc-secure-np-vpc-staging-i-company1"
      tags.Owner:      "Terraform" => "Terraform"
      tags.Product:    "PROD1" => "PROD1"
      tags.Region:     "eu-west-2" => "eu-west-2"
      tags.Role:       "secure" => "secure"
      tags.Scope:      "internal" => "internal"
      tags.SourcePath: "terraform/vpc/business/" => "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a" => "vpc-222eddef5e86fa65a"

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-111d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-l-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 2 to add, 0 to change, 1 to destroy.

As you can see, vpc-222d4253 replaces vpc-111d4253 in place, and vpc-111d4253 is then added back as a new resource. But I don’t want to recreate my peerings!

The other side of the peering is in a different account and I can’t use auto_accept either, meaning the other account will need to accept the new peerings again, and until that happens there is a breaking change…

So first of all, why is this happening?

This is because keys(map) in Terraform returns the list of keys sorted in alphabetical order. Let’s prove it: if I change vpc-staging-i to vpc-staging-m:

  "apples_account_vpc_ids" : {
    "vpc-staging-l": "vpc-111d4253",
    "vpc-staging-m": "vpc-222d4253"
  }

As M comes after L, as opposed to I which comes before L, the order is now artificially preserved:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc_peering_connection.apples_account[1]
      id:              <computed>
      accept_status:   <computed>
      accepter.#:      <computed>
      auto_accept:     "false"
      peer_owner_id:   "111111111111"
      peer_region:     "eu-west-1"
      peer_vpc_id:     "vpc-222d4253"
      requester.#:     <computed>
      tags.%:          "9"
      tags.CostCentre: "OPS_TEAM"
      tags.CreatedBy:  "kayanazimov"
      tags.Name:       "peer-vpc-secure-np-vpc-staging-m-company1"
      tags.Owner:      "Terraform"
      tags.Product:    "PROD1"
      tags.Region:     "eu-west-2"
      tags.Role:       "secure"
      tags.Scope:      "internal"
      tags.SourcePath: "terraform/vpc/business/"
      vpc_id:          "vpc-222eddef5e86fa65a"


Plan: 1 to add, 0 to change, 0 to destroy.

Indeed, it is only adding the new VPC peering.
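
You can also check the ordering directly with terraform console. Below is a rough sketch (Terraform 0.11 syntax, expected result paraphrased in the comments), evaluating keys() on the same map literal:

# feed a one-off expression to terraform console (0.11 built-in functions)
terraform console <<'EOF'
keys(map("vpc-staging-l", "vpc-111d4253", "vpc-staging-i", "vpc-222d4253"))
EOF
# roughly: the list comes back as [vpc-staging-i, vpc-staging-l], and since
# values() is ordered by the same sorted keys, element(values(...), 0) now
# resolves to the new VPC, which is what forced the replacement in the first plan.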

But I don’t want to juggle with letters; besides, the letter actually stands for the name of the VPC (l for low risk, m for medium, etc.), not just some random letter. I need another solution, and luckily there is one.


Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.

Today I am going to show how you can fully automate the advanced process of setting up a highly available k8s cluster in the cloud. We will go through a set of Terraform and bash scripts which should be sufficient for you to literally just run terraform plan/apply and get your HA etcd and k8s cluster up and running without any hassle.

    Part 0 – Intro.
    Part 1 – Setting up HA ETCD cluster.
    Part 2 – The PKI infra
    Part 3 – Setting up k8s cluster.

Part 0 – Intro.

If you do a bit of research on how to set up a k8s cluster, you may find quite a lot of ways it could be achieved.
But in general, all these ways can be grouped into four types:

1) No setup
2) Easy Set up
3) Advanced Set up
4) Hard way

By No setup I simply mean something like EKS: it is a managed service, so you don’t need to maintain or care about the details, as AWS does it all for you. I have never used it, so I can’t say much about that one.

Easy setup: tools like kops and the like make it quite easy, a couple-of-commands kind of setup:

kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro \
  --master-size=t2.micro --dns-zone=k8s.ifritltd.net  --cloud aws

All you need is to set up an S3 bucket and DNS records and then run the command above, which I described two years ago in this article.
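
For completeness, here is a rough sketch of those prerequisites with the AWS CLI (the bucket and zone names are taken from the kops command above; adjust the region and names for your own setup):

# versioned S3 bucket for the kops state store
aws s3api create-bucket --bucket kayan-kops-state --region eu-west-2 \
  --create-bucket-configuration LocationConstraint=eu-west-2
aws s3api put-bucket-versioning --bucket kayan-kops-state \
  --versioning-configuration Status=Enabled

# hosted zone for the cluster domain (delegate NS records to it from the parent zone)
aws route53 create-hosted-zone --name k8s.ifritltd.net \
  --caller-reference "kops-$(date +%s)"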

The downside is, first of all, that it is mainly for AWS, and it generates all the AWS resources as it wants: it will create security groups, ASGs, etc. in its own way, which means that if you already have Terraform-managed infra with your own rules, strategies and framework, it won’t fit into that model but will just be added as some kind of alien infra. Long story short, if you want fine-grained control over how your infra is managed from a single centralised Terraform codebase, it isn’t the best solution, yet it is still an easy and balanced tool.

Before I start explaining the Advanced Set up, I will just mention that the 4th option, the Hard way, is probably only good if you want to learn how k8s works and how all the components interact with each other: as it doesn’t use any external tool to set up the components, you do everything manually and literally get to know all the guts of the system. Obviously, it could become a nightmare to support such a system in production unless all members of the ops team are k8s experts or there are requirements not supported by other bootstrapping tools.

Finally, the Advanced Set up.


Supernetting explained easy

I have recently been configuring a squid proxy behind a loadbalancer. In order for squid to allow incoming PROXY protocol connections from the loadbalancer, I quickly decided the easiest option would be either the whole VPC CIDR range:

acl loadbalancer src 10.139.0.0/17
proxy_protocol_access allow loadbalancer

or the list of subnets from the 3 AZs where the loadbalancer is running:

acl loadbalancer src 10.139.64.64/28 10.139.64.96/28 10.139.64.80/28
proxy_protocol_access allow loadbalancer

Even though both configurations are valid, my pull request quickly caught the attention of a colleague of mine who is more experienced in networking (in fact, an ex-CCNP guy). But that is the beauty of modern operations teams working in a devops fashion: while he may catch this sort of issue, I (an ex-developer guy), for instance, can easily spot how duplication in a piece of bash or Python code could be avoided by refactoring it into a reusable function/template.

So back to our problem. As I said, while both ranges are valid, the first is in fact much wider than actually required, and the second is too redundant.


Enter supernetting.

So what is that?
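
As a quick teaser, the trick becomes visible if you write the three /28 network addresses above in binary and look for the longest common prefix (a rough check, not necessarily the exact walkthrough that follows):

# print the last octet of each /28 network address in binary
for ip in 10.139.64.64 10.139.64.80 10.139.64.96; do
  last=${ip##*.}
  bin=$(printf '%08d' "$(echo "obase=2; $last" | bc)")
  echo "$ip -> $bin"
done
# 10.139.64.64 -> 01000000
# 10.139.64.80 -> 01010000
# 10.139.64.96 -> 01100000
# The first three octets (24 bits) are identical and the last octet shares its
# first two bits, so a single 10.139.64.64/26 would cover all three /28s
# (at the cost of also including 10.139.64.112-127).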


How to fix DNS issues when using OpenVPN.

Sometimes you successfully connect to the VPN server but nothing seems to work. Well, one of the reasons could be DNS.
First of all, check your VPN logs. On macOS (Tunnelblick) they are in:
/Library/Application Support/Tunnelblick/Logs
and on Linux in:
journalctl -u NetworkManager.service

2019-06-11 23:30:25.110048 MANAGEMENT: >STATE:1560292225,GET_CONFIG,,,,,,
2019-06-11 23:30:25.110251 SENT CONTROL [openvpn.example.com]: 'PUSH_REQUEST' (status=1)
2019-06-11 23:30:25.252005 PUSH: Received control message: 'PUSH_REPLY,route ....
dhcp-option DOMAIN dev.example.com,dhcp-option DOMAIN prod.example.com,dhcp-option DOMAIN int.example.com'
...
....
2019-06-11 23:30:25.252374 Options error: Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:13: dhcp-option (2.4.7)

In the example above, the OpenVPN client complains about not recognising dhcp-option, because the server pushes multiple ‘dhcp-option DOMAIN value’ config params whereas the client expects a single option with multiple values: ‘dhcp-option DOMAIN value1 value2’.

This normally happens when your client version doesn’t match your server version, so your client doesn’t know what to do with them.

As a result, you may not get the correct settings in your ‘/etc/resolv.conf’, for example a missing or incomplete ‘nameserver’ or ‘search’ entry.
In this example we won’t get ‘search’ set up correctly, meaning that if there were a DNS record like something.int.example.com, we wouldn’t be able to refer to it as just ‘something’ without the FQDN; that is what the ‘search’ parameter does in ‘/etc/resolv.conf’.
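
For reference, here is roughly what a correctly populated ‘/etc/resolv.conf’ would look like once those pushed options are applied (the domains come from the PUSH_REPLY above; the nameserver IP is just a placeholder):

cat /etc/resolv.conf
# expected, roughly:
# search dev.example.com prod.example.com int.example.com
# nameserver 10.8.0.1   <- placeholder for the DNS server pushed by the VPN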

If ‘nameserver’ is not configured, then our DNS won’t work at all.

But there is a solution.
