Storing sensitive data in AWS with credstash, DynamoDB and KMS.

One of the most important problems of modern cloud infrastructure is security. You can put a lot of effort into automating the build process of your infrastructure, but it is worthless if you don’t handle sensitive data appropriately, and sooner or later that will become a pain.

Most big organisations will probably spend some time implementing and supporting HashiCorp Vault, or something similar and more ‘enterprisy’.
In most cases, though, something simple yet secure and reliable is sufficient, especially if you follow YAGNI.

Today I will demonstrate how to use a tool called credstash, which leverages two AWS services for its functionality: DynamoDB and KMS. It uses DynamoDB as a key/value store for secrets, which are encrypted with a KMS master key and an encryption context; everyone who has access to the same master key and encryption context can then decrypt the secret and read it.

From the user’s perspective, you don’t need to deal with either DynamoDB or KMS. All you do is store and read your secrets, passing the key/value and the context as arguments to credstash.

So let’s go straight to the Terraform code we will use to provision the DynamoDB table and the KMS key; the code is in my credstash terraform repo, main.tf:

resource "aws_kms_key" "credstash_kms_key" {
  description = "KMS key for credstash"
}

resource "aws_kms_alias" "alias" {
  name          = "alias/credstash"
  target_key_id = "${aws_kms_key.credstash_kms_key.key_id}"
}

resource "aws_dynamodb_table" "credential_store" {
  name           = "credential-store"
  read_capacity  = "10"
  write_capacity = "10"
  hash_key       = "name"
  range_key      = "version"

  attribute {
    name = "name"
    type = "S"
  }

  attribute {
    name = "version"
    type = "S"
  }
}

provider "aws" {
  region = "eu-west-1"
}

The code above creates everything credstash needs to function: a DynamoDB table called credential-store and a KMS key with the alias alias/credstash.

We also save the KMS key id as an output for later use, output.tf:

output "kms_key_id" {
  value = "${aws_kms_key.credstash_kms_key.key_id}"
}
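
Later on, the example project reads this output via a terraform_remote_state data source, so the state of this project needs to live in a shared backend. A minimal sketch assuming an S3 backend (the bucket and key names below are hypothetical):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"          # hypothetical bucket name
    key    = "credstash/terraform.tfstate" # hypothetical state key
    region = "eu-west-1"
  }
}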

You can check the result in the AWS console.

Now we are ready to use credstash. Although creating these resources is very simple, most of the effort actually goes into getting the permissions right!
So let’s create three EC2 instances, for an admin, a dev and a hypothetical qa user; the code is in my credstash_example repo, main.tf:

resource "aws_instance" "credstash_admin" {
  ami = "${data.aws_ami.ubuntu_1604.id}"
  instance_type = "t2.micro"

  key_name = "terra"
  user_data = "${file("setup.sh")}"

  vpc_security_group_ids = [
    "${aws_security_group.sg.id}",
  ]

  iam_instance_profile = "${aws_iam_instance_profile.aws_iam_instance_profile_admin.name}"

  tags {
    Name = "example credstash admin"
  }
}

resource "aws_instance" "credstash_dev" {
  ami = "${data.aws_ami.ubuntu_1604.id}"
  instance_type = "t2.micro"

  key_name = "terra"
  user_data = "${file("setup.sh")}"

  vpc_security_group_ids = [
    "${aws_security_group.sg.id}",
  ]

  iam_instance_profile = "${aws_iam_instance_profile.aws_iam_instance_profile_dev.name}"

  tags {
    Name = "example credstash dev"
  }
}

resource "aws_instance" "credstash_qa" {
  ami = "${data.aws_ami.ubuntu_1604.id}"
  instance_type = "t2.micro"

  key_name = "terra"
  user_data = "${file("setup.sh")}"

  vpc_security_group_ids = [
    "${aws_security_group.sg.id}",
  ]

  iam_instance_profile = "${aws_iam_instance_profile.aws_iam_instance_profile_qa.name}"

  tags {
    Name = "example credstash qa"
  }
}

All instances are pretty much the same; they all use the latest Ubuntu Xenial (16.04) AMI, which is retrieved with data.tf:

data "aws_ami" "ubuntu_1604" {
  most_recent = true
  name_regex = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-[0-9]*"
}
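
Depending on your AWS provider version, you may also need (and it is generally safer) to pin the AMI owner so the lookup only ever matches Canonical’s images; a hedged variant of the same data source:

data "aws_ami" "ubuntu_1604" {
  most_recent = true
  name_regex  = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-[0-9]*"

  # 099720109477 is Canonical's account id, so we never pick up a third-party AMI
  owners = ["099720109477"]
}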

Then we install Python (in which credstash is written) and credstash itself via user_data, setup.sh:


#!/bin/bash

apt-get update
locale-gen en_GB.UTF-8
# credstash is written in Python, so install pip first and then credstash itself
apt-get install -y python-pip
pip install credstash

They all use the same security group, so we can ssh to the instances and the instances can install software from the web. It could obviously have been stricter, only allowing 80/443 egress, but for our ephemeral instances, which will sadly live for only a few minutes and then die, it is fine, security-groups.tf:

resource "aws_security_group" "sg" {
  name        = "credstash_reader_sg"
  vpc_id      = "vpc-5b3efd33"
}

resource "aws_security_group_rule" "allow_all_egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  security_group_id = "${aws_security_group.sg.id}"
  cidr_blocks       = ["0.0.0.0/0"]

}

resource "aws_security_group_rule" "allow_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  # change this to your IP address
  cidr_blocks     = ["92.4.52.251/32"]

  security_group_id = "${aws_security_group.sg.id}"
}

The only difference is that each instance uses its own iam_instance_profile, because we need to grant permissions for table reads/writes and for the KMS key and encryption context individually.
So let’s see what we need for that, starting with admin, iam_admin.tf:

resource "aws_iam_instance_profile" "aws_iam_instance_profile_admin" {
  name_prefix = "credstash_aws_iam_instance_profile-"
  role        = "${aws_iam_role.assume_role_admin.name}"
}

resource "aws_iam_role" "assume_role_admin" {
  name_prefix = "credstash_assume_role_"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "kms_decrypt_role_admin" {
  name_prefix = "kms_decrypt-"
  role        = "${aws_iam_role.assume_role_admin.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:eu-west-1:${data.aws_caller_identity.current.account_id}:key/${data.terraform_remote_state.credstash.kms_key_id}"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "dynamodb_credstash_reader_admin" {
  name_prefix = "dynamodb_credstash_reader-"
  role        = "${aws_iam_role.assume_role_admin.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
                  "dynamodb:GetItem",
                  "dynamodb:Query",
                  "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:${data.aws_caller_identity.current.account_id}:table/credential-store",
      "Effect": "Allow"
    }
  ]
}
EOF
}


resource "aws_iam_role_policy" "kms_encrypt" {
  name_prefix = "kms_encrypt-"
  role        = "${aws_iam_role.assume_role_admin.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "kms:GenerateDataKey"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:eu-west-1:${data.aws_caller_identity.current.account_id}:key/${data.terraform_remote_state.credstash.kms_key_id}"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "dynamodb_credstash_writer" {
  name_prefix = "dynamodb_credstash_writer-"
  role        = "${aws_iam_role.assume_role_admin.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
                  "dynamodb:PutItem",
                  "dynamodb:DeleteItem",
                  "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:${data.aws_caller_identity.current.account_id}:table/credential-store",
      "Effect": "Allow"
    }
  ]
}
EOF
}

Admin has the most permissions: it has two IAM role policies for KMS, kms:GenerateDataKey and kms:Decrypt, so it can both encrypt and decrypt without any contextual restrictions. It also has two IAM role policies for DynamoDB, which together give it access to read, write, delete and query items in the table.
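
Note that the policy documents reference data.aws_caller_identity.current and data.terraform_remote_state.credstash, which are defined separately in the repo. A minimal sketch of those data sources, assuming the credstash project keeps its state in an S3 backend like the one sketched earlier (the bucket and key names are hypothetical):

data "aws_caller_identity" "current" {}

data "terraform_remote_state" "credstash" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"          # hypothetical bucket name
    key    = "credstash/terraform.tfstate" # hypothetical state key
    region = "eu-west-1"
  }
}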

Dev, on the other hand, can only read the table and decrypt with the role/dev KMS encryption context, iam_dev.tf:


resource "aws_iam_instance_profile" "aws_iam_instance_profile_dev" {
  name_prefix = "credstash_aws_iam_instance_profile-"
  role        = "${aws_iam_role.assume_role_dev.name}"
}

resource "aws_iam_role" "assume_role_dev" {
  name_prefix = "credstash_assume_role_"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]

}
EOF
}

resource "aws_iam_role_policy" "kms_decrypt_role_dev" {
  name_prefix = "kms_decrypt-"
  role        = "${aws_iam_role.assume_role_dev.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:eu-west-1:${data.aws_caller_identity.current.account_id}:key/${data.terraform_remote_state.credstash.kms_key_id}",
      "Condition": {
          "StringEquals": {
              "kms:EncryptionContext:role": "dev"
           }
      }
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "dynamodb_credstash_reader_dev" {
  name_prefix = "dynamodb_credstash_reader-"
  role        = "${aws_iam_role.assume_role_dev.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
                  "dynamodb:GetItem",
                  "dynamodb:Query",
                  "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:${data.aws_caller_identity.current.account_id}:table/credential-store",
      "Effect": "Allow"
    }
  ]
}
EOF
}

The only difference in iam_qa.tf is that it can decrypt with the role/qa KMS encryption context instead.
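
So the qa decrypt policy looks just like the dev one, only with ‘qa’ in the condition; a sketch based on the dev policy above (the actual resource names in the repo may differ slightly):

resource "aws_iam_role_policy" "kms_decrypt_role_qa" {
  name_prefix = "kms_decrypt-"
  role        = "${aws_iam_role.assume_role_qa.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:eu-west-1:${data.aws_caller_identity.current.account_id}:key/${data.terraform_remote_state.credstash.kms_key_id}",
      "Condition": {
          "StringEquals": {
              "kms:EncryptionContext:role": "qa"
           }
      }
    }
  ]
}
EOF
}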

Let’s spin up the instances and see them in action. In order to ssh to the instances, we are going to use output.tf to print their IPs:

output "admin_ip" {
  value = "${aws_instance.credstash_admin.public_ip}"
}
output "dev_ip" {
  value = "${aws_instance.credstash_dev.public_ip}"
}
output "qa_ip" {
  value = "${aws_instance.credstash_qa.public_ip}"
}

Once terraform is applied in credstash and then in credstash_example, we get the output:

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

admin_ip = 18.130.239.130
dev_ip = 35.177.8.190
qa_ip = 18.130.190.59

Let’s ssh to admin first:

ubuntu@ip-172-31-31-234:~$ credstash  list

An error occurred (AccessDeniedException) when calling the Scan operation:
User: arn:aws:sts::228426479489:assumed-role/credstash_assume_role_20181111145720051900000003/i-026797e277f13b05e is
not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:228426479489:table/credential-store

ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 list
ubuntu@ip-172-31-31-234:~$

By default credstash uses us-east-1, so it fails to list the secrets, as our role is not authorized to scan a credential-store table in that region; we then pass ‘eu-west-1’, the region where we created the KMS key and the DynamoDB table.

Note that this does not stop us from spinning up our instances in a different region; they actually run in ‘eu-west-2’.

Let’s create some secrets now:

ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 put db_pass_dev 'devpass' role=dev
db_pass_dev has been stored

ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 put db_pass_dev 'devpass_updated' role=dev
db_pass_dev version 0000000000000000001 is already in the credential store. Use the -v flag to specify a new version
ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 put db_pass_dev 'devpass_updated' role=dev -a
db_pass_dev has been stored
ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 put db_pass_qa 'qa_pass' role=qa -a
db_pass_qa has been stored
ubuntu@ip-172-31-31-234:~$ credstash -r eu-west-1 list
db_pass_dev -- version 0000000000000000001 -- comment
db_pass_dev -- version 0000000000000000002 -- comment
db_pass_qa  -- version 0000000000000000001 -- comment
ubuntu@ip-172-31-31-234:~$

First we create a secret with the ‘role=dev’ context. Note that the context key is arbitrarily named ‘role’; it could have been anything, like ‘env’ etc. The crucial point is to make sure readers are granted matching access when setting the ‘kms:EncryptionContext’ condition.

Next we try to update it, which fails because the secret already exists, so we put it again with the ‘-a’ argument, which appends a new record with a new version. This makes it easy to roll back to a previous version at any time. We then create another secret with the qa context and finally list them all.

Now let’s connect to the dev box and see what we can do there:

ubuntu@ip-172-31-30-127:~$ credstash -r eu-west-1 get db_pass_dev role=dev
devpass_updated
ubuntu@ip-172-31-30-127:~$ credstash -r eu-west-1 list
db_pass_dev -- version 0000000000000000001 -- comment
db_pass_dev -- version 0000000000000000002 -- comment
db_pass_qa  -- version 0000000000000000001 -- comment
ubuntu@ip-172-31-30-127:~$ credstash -r eu-west-1 get db_pass_qa role=dev
KMS ERROR: Could not decrypt hmac key with KMS. The encryption context provided may not match the one used when the credential was stored.
ubuntu@ip-172-31-30-127:~$ credstash -r eu-west-1 get db_pass_qa role=qa
KMS ERROR: Decryption error An error occurred (AccessDeniedException) when calling the Decrypt operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
ubuntu@ip-172-31-30-127:~$

We can get our ‘db_pass_dev’ password and list all secrets, but when we try to ‘sneak’ into qa’s ‘db_pass_qa’ password it fails, as that key wasn’t encrypted with the ‘role=dev’ context. We then try the correct context and fail again, this time with an AccessDeniedException, because dev has no permission to decrypt with the ‘role=qa’ context.

Now let’s finally see what qa can do; this time I won’t ssh in, but simply run the command remotely:

ssh -i terra.pem ubuntu@18.130.190.59 credstash -r eu-west-1 get db_pass_qa role=qa
qa_pass

And I could retrieve the db_pass_qa secret with the ‘role=qa’ context.

As you can see, installing and using credstash is extremely easy; the only thing required is assigning the right IAM permissions to the right users/EC2 instances and you are all done.

If you need a simple way to safely store and use sensitive data in the cloud in production, then credstash may well be the simplest way to go.
Hope you enjoyed the blog post; you can find all the code used here at https://github.com/kenych/terraform_exs