Today I am going to show how to set up a Kubernetes cluster on AWS using kops (Kubernetes Operations).
In order to provision a k8s cluster and deploy a Docker container we will need to install and set up a couple of things, so here is the list:
0. Set up a VM with CentOS Linux as a control center.
1. Install and configure the AWS CLI to manage AWS resources.
2. Install and configure kops to manage provisioning of the k8s cluster and the AWS resources required by k8s.
3. Create a hosted zone in AWS Route53 and set up an ELB to access deployed container services.
4. Install and configure kubectl to manage containers on k8s.
5. (Update September 2018) Set up authentication in Kubernetes with the AWS IAM Authenticator (Heptio).
6. (Update June 2019) The advanced way: Automating Highly Available Kubernetes and external ETCD cluster setup with terraform and kubeadm on AWS.
0. Set up a VM with CentOS Linux
Even though I am using macOS, it is sometimes annoying that you can't run certain commands or that some arguments differ, so let's spin up a Linux VM first. I chose CentOS this time, but you can go with Ubuntu if you wish. Here is what the Vagrantfile looks like:
➜ kops cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.define "kops" do |m|
    m.vm.box = "centos/7"
    m.vm.hostname = "kops"
  end
end
Let's start it up and log on:
vagrant up && vagrant ssh
Bringing machine 'kops' up with 'virtualbox' provider...
==> kops: Checking if box 'centos/7' is up to date...
==> kops: A newer version of the box 'centos/7' is available! You currently
==> kops: have version '1708.01'. The latest is version '1710.01'. Run
==> kops: `vagrant box update` to update.
==> kops: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> kops: flag to force provisioning. Provisioners marked to run always will still run.
Last login: Sun Dec 17 10:48:48 2017 from 10.0.2.2
[vagrant@kops ~]$
1. Install and configure awscli
In order to install the AWS CLI we will need pip, a tool for installing Python packages. As CentOS doesn't offer it in its
core repositories, we will need to enable EPEL first:
[vagrant@kops ~]$ sudo yum -y update
[vagrant@kops ~]$ sudo yum install epel-release -y
Now we can install pip and then the AWS CLI:
[vagrant@kops ~]$ sudo yum -y install python-pip
[vagrant@kops ~]$ sudo pip install awscli
Next we need to configure awscli:
[vagrant@kops ~]$ aws configure
AWS Access Key ID [None]:
Yeah, but we don't have any access keys yet, so let's sort that out first: go to your AWS IAM console and create
a user with admin rights, as shown below.
It doesn't matter what you call the user; the important thing is that it has programmatic access and admin rights, as kops will use it to create a whole lot of things on AWS.
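If you prefer the command line over the console, you could also create such a user with the AWS CLI from a machine that already has admin credentials. A rough sketch (the user name "kops" is just an example of mine, not something kops requires):

aws iam create-user --user-name kops
aws iam attach-user-policy --user-name kops \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# prints the AccessKeyId and SecretAccessKey to feed into 'aws configure'
aws iam create-access-key --user-name kops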
Once you have got your access key and secret, let's configure the AWS CLI:
[vagrant@kops ~]$ aws configure
AWS Access Key ID [None]: AKIAJSQKBOVAVI****
AWS Secret Access Key [None]: PsAJX*************
Default region name [None]: eu-west-2
Default output format [None]:
I used eu-west-2 as I am in London, but you can use any region you want.
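Before moving on, it is worth a quick sanity check that the credentials actually work; this should print the account ID and the ARN of the user you just created:

[vagrant@kops ~]$ aws sts get-caller-identity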
2. Install and configure kops
Now we are ready to download and configure kops.
[vagrant@kops ~]$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
[vagrant@kops ~]$ chmod +x kops-linux-amd64
[vagrant@kops ~]$ sudo mv kops-linux-amd64 /usr/local/bin/kops
[vagrant@kops ~]$ kops version
Version 1.8.0 (git-5099bc5)
It's almost ready, but before we start using it we need to configure an S3 bucket, as kops uses S3 to store cluster state. So let's create a bucket in the same region where you earlier configured your AWS CLI:
[vagrant@kops ~]$ aws s3api create-bucket --bucket kayan-kops-state --region eu-west-2 --create-bucket-configuration LocationConstraint=eu-west-2
{
    "Location": "http://kayan-kops-state.s3.amazonaws.com/"
}
[vagrant@kops ~]$
I called mine 'kayan-kops-state'; you can call yours whatever you wish, just make sure the name is globally unique, otherwise it complains:
[vagrant@kops ~]$ aws s3api create-bucket --bucket kops-state --region eu-west-2 --create-bucket-configuration LocationConstraint=eu-west-2

An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
[vagrant@kops ~]$
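One optional but sensible extra: the kops documentation recommends enabling versioning on the state bucket, so you can get back to a previous cluster state if something goes wrong. Something like:

[vagrant@kops ~]$ aws s3api put-bucket-versioning --bucket kayan-kops-state \
    --versioning-configuration Status=Enabled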
3. Create a hosted zone in AWS Route53.
Now we need to create a subdomain in Route53 so kops can use it to set up DNS records for our cluster.
If you don't have a domain, you can register one with AWS; it is quite cheap, around 10 quid. Alternatively, you can
just create a subdomain in Route53 and update your root domain's NS records. I have a domain registered with AWS, ifritltd.net, so I will just be creating a subdomain k8s.ifritltd.net as shown below:
You can also do it with a command:
aws route53 create-hosted-zone --name k8s.ifritltd.net --caller-reference 1
{
    "HostedZone": {
        "ResourceRecordSetCount": 2,
        "CallerReference": "1",
        "Config": {
            "PrivateZone": false
        },
        "Id": "/hostedzone/Z3I6D40LJGM0M0",
        "Name": "k8s2.ifritltd.net."
    },
    "DelegationSet": {
        "NameServers": [
            "ns-632.awsdns-15.net",
            "ns-1454.awsdns-53.org",
            "ns-131.awsdns-16.com",
            "ns-1630.awsdns-11.co.uk"
        ]
    },
    "Location": "https://route53.amazonaws.com/2013-04-01/hostedzone/Z3I6D40LJGM0M0",
    "ChangeInfo": {
        "Status": "PENDING",
        "SubmittedAt": "2017-12-17T14:37:30.685Z",
        "Id": "/change/C38C5CQRHYEMOS"
    }
}
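By the way, the delegation itself can also be scripted instead of clicked through in the console. A sketch along these lines should work, assuming you replace the placeholder with your root domain's hosted zone ID and use the name servers returned for your own subdomain's zone:

aws route53 change-resource-record-sets \
    --hosted-zone-id <your-root-domain-zone-id> \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "k8s.ifritltd.net",
          "Type": "NS",
          "TTL": 300,
          "ResourceRecords": [
            {"Value": "ns-632.awsdns-15.net"},
            {"Value": "ns-1454.awsdns-53.org"},
            {"Value": "ns-131.awsdns-16.com"},
            {"Value": "ns-1630.awsdns-11.co.uk"}
          ]
        }
      }]
    }'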
Now copy the NameServers, create an NS record for the subdomain in your root domain's zone, and paste them there. Once everything is OK you can check it with dig:
[vagrant@kops ~]$ dig NS k8s.ifritltd.net
-bash: dig: command not found
Obviously we first need to install it:
[vagrant@kops ~]$ sudo yum -y install dig
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ukfast.co.uk
 * epel: es-mirrors.evowise.com
 * extras: mirror.netw.io
 * updates: www.mirrorservice.org
No package dig available.
Error: Nothing to do
Of course it would be naive to expect that to work; let's see which package provides it:
[vagrant@kops ~]$ yum whatprovides '*bin/dig'
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ukfast.co.uk
 * epel: mirror.vorboss.net
 * extras: mirrors.ukfast.co.uk
 * updates: mirrors.coreix.net
32:bind-utils-9.9.4-50.el7.x86_64 : Utilities for querying DNS name servers
Repo        : base
Matched from:
Filename    : /usr/bin/dig
Now let’s install it:
[vagrant@kops ~]$ sudo yum -y install bind-utils
And finally let's check it:
[vagrant@kops ~]$ dig NS k8s.ifritltd.net

; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7_4.1 <<>> NS k8s.ifritltd.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17688
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 9

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;k8s.ifritltd.net.              IN      NS

;; ANSWER SECTION:
k8s.ifritltd.net.        300    IN      NS      ns-855.awsdns-42.net.
k8s.ifritltd.net.        300    IN      NS      ns-1094.awsdns-08.org.
k8s.ifritltd.net.        300    IN      NS      ns-1901.awsdns-45.co.uk.
k8s.ifritltd.net.        300    IN      NS      ns-29.awsdns-03.com.

;; ADDITIONAL SECTION:
ns-1094.awsdns-08.org.   300    IN      A       205.251.196.70
ns-1094.awsdns-08.org.   300    IN      AAAA    2600:9000:5304:4600::1
ns-1901.awsdns-45.co.uk. 300    IN      A       205.251.199.109
ns-1901.awsdns-45.co.uk. 300    IN      AAAA    2600:9000:5307:6d00::1
ns-29.awsdns-03.com.     300    IN      A       205.251.192.29
ns-29.awsdns-03.com.     300    IN      AAAA    2600:9000:5300:1d00::1
ns-855.awsdns-42.net.    300    IN      A       205.251.195.87
ns-855.awsdns-42.net.    300    IN      AAAA    2600:9000:5303:5700::1

;; Query time: 29 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Sun Dec 17 14:03:37 UTC 2017
;; MSG SIZE  rcvd: 357
If you messed something up, or before the NS record is added to the root domain, you will not get any answers:
[vagrant@kops ~]$ dig NS k8s.ifritltd.net

; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7_4.1 <<>> NS k8s.ifritltd.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 24172
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;k8s.ifritltd.net.              IN      NS

;; AUTHORITY SECTION:
ifritltd.net.            600    IN      SOA     ns-1462.awsdns-54.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400

;; Query time: 384 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Sun Dec 17 14:24:40 UTC 2017
;; MSG SIZE  rcvd: 130

[vagrant@kops ~]$
If you still get this after a while, it means something went wrong: go back, double-check the NS records, google the error and fix it.
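One trick that may help while debugging delegation issues is to let dig walk the chain from the root servers itself, so you can see exactly where it stops matching what you configured:

[vagrant@kops ~]$ dig NS k8s.ifritltd.net +trace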
4. Install and configure kubectl.
One of the requirements of kops is to have kubectl installed; even if it weren't, we would still need it in order to deploy our containers to k8s. So let's install it:
[vagrant@kops ~]$ curl -LO \
  https://storage.googleapis.com/kubernetes-release/release/$(curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
[vagrant@kops ~]$ chmod +x ./kubectl
[vagrant@kops ~]$ sudo mv ./kubectl /usr/local/bin/kubectl
Let's test that it is working:
[vagrant@kops ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The client is fine (the connection error is expected, as there is no cluster yet), so we are ready to provision our cluster. At least, unless you stumble across one particular issue; let's look at what that is.
So, in order to run the command that creates a k8s cluster, kops needs to know the availability zones for the cluster. To list them
I used the AWS CLI:
[vagrant@kops ~]$ aws ec2 describe-availability-zones --region eu-west-2

An error occurred (AuthFailure) when calling the DescribeAvailabilityZones operation: AWS was not able to validate the provided access credentials
[vagrant@kops ~]$
Very weird; thankfully, running another command saved me from googling further:
[vagrant@kops ~]$ aws s3 ls

An error occurred (RequestTimeTooSkewed) when calling the ListBuckets operation: The difference between the request time and the current time is too large.
Now it is obvious why the AWS CLI doesn't work: we are using a VM and its clock is out of sync. Let's fix it:
[vagrant@kops ~]$ sudo yum install -y ntp ntpdate ntp-doc
[vagrant@kops ~]$ sudo ntpdate pool.ntp.org
17 Dec 15:04:41 ntpdate[2888]: step time server 195.219.205.9 offset 1527.918257 sec
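The ntpdate call above is a one-off fix; to keep the clock in sync across reboots you probably also want the ntpd daemon running instead of CentOS 7's default chronyd. A sketch of what that could look like:

[vagrant@kops ~]$ sudo systemctl stop chronyd && sudo systemctl disable chronyd
[vagrant@kops ~]$ sudo systemctl enable ntpd && sudo systemctl start ntpd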
Let’s give it another try:
[vagrant@kops ~]$ aws ec2 describe-availability-zones --region eu-west-2
{
    "AvailabilityZones": [
        {
            "State": "available",
            "ZoneName": "eu-west-2a",
            "Messages": [],
            "RegionName": "eu-west-2"
        },
        {
            "State": "available",
            "ZoneName": "eu-west-2b",
            "Messages": [],
            "RegionName": "eu-west-2"
        }
    ]
}
Finally! So we have eu-west-2a and eu-west-2b; I will only use one of them. Let's try to create a cluster:
[vagrant@kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=k8s.ifritltd.net --cloud aws

error reading SSH key file "/home/vagrant/.ssh/id_rsa.pub": open /home/vagrant/.ssh/id_rsa.pub: no such file or directory
[vagrant@kops ~]$
Another error: this time I had forgotten that kops also needs an SSH key pair. Indeed, how else would we be able to connect to the EC2 instances once they are created?
[vagrant@kops ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:x2+CADMjZZPipCjgB0NuuQaVSSKr1jQUkLdz0c0HTAA vagrant@kops
The key's randomart image is:
+---[RSA 2048]----+
|++===Eo.*o.      |
|+BB+.o . + .     |
|*B=+* . .        |
|B.==.* .         |
|o+..o . S o      |
|o  . o .         |
| .  . o          |
|  o              |
|                 |
+----[SHA256]-----+
[vagrant@kops ~]$
Another try:
[vagrant@kops ~]$ kops create cluster \
  --name=k8s.ifritltd.net --state=s3://kayan-kops-state \
  --zones="eu-west-2a" --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=k8s.ifritltd.net --cloud aws
I1217 15:19:38.887426    2919 create_cluster.go:971] Using SSH public key: /home/vagrant/.ssh/id_rsa.pub
I1217 15:19:39.699575    2919 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet eu-west-2a
Previewing changes that will be made:

I1217 15:19:42.256098    2919 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I1217 15:19:44.006091    2919 executor.go:91] Tasks: 31 done / 73 total; 24 can run
I1217 15:19:44.577779    2919 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I1217 15:19:44.824853    2919 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I1217 15:19:44.910682    2919 executor.go:91] Tasks: 73 done / 73 total; 0 can run
Will create resources:
  AutoscalingGroup/master-eu-west-2a.masters.k8s.ifritltd.net
      MinSize               1
      MaxSize               1
      Subnets               [name:eu-west-2a.k8s.ifritltd.net]
      Tags                  {Name: master-eu-west-2a.masters.k8s.ifritltd.net, KubernetesCluster: k8s.ifritltd.net, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: master-eu-west-2a, k8s.io/role/master: 1}
      LaunchConfiguration   name:master-eu-west-2a.masters.k8s.ifritltd.net

  AutoscalingGroup/nodes.k8s.ifritltd.net
      MinSize               2
      MaxSize               2
      Subnets               [name:eu-west-2a.k8s.ifritltd.net]
      Tags                  {k8s.io/role/node: 1, Name: nodes.k8s.ifritltd.net, KubernetesCluster: k8s.ifritltd.net, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: nodes}
      LaunchConfiguration   name:nodes.k8s.ifritltd.net

  DHCPOptions/k8s.ifritltd.net
      DomainName            eu-west-2.compute.internal
      DomainNameServers     AmazonProvidedDNS

  ...
  .....
  .......
  ..........

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster k8s.ifritltd.net
 * edit your node instance group: kops edit ig --name=k8s.ifritltd.net nodes
 * edit your master instance group: kops edit ig --name=k8s.ifritltd.net master-eu-west-2a

Finally configure your cluster with: kops update cluster k8s.ifritltd.net --yes

[vagrant@kops ~]$
After a long list of resources it is going to create, kops suggests what we can do next. It is similar to Terraform's plan command, which previews the AWS resources to be provisioned; if you don't know about it, I have a nice short post about how to provision an AWS EC2 instance with Terraform here.
So, I suppose we can go ahead and finally make everything happen; let's copy-paste the suggestion from kops:
[vagrant@kops ~]$ kops update cluster k8s.ifritltd.net --yes

State Store: Required value: Please set the --state flag or export KOPS_STATE_STORE.
A valid value follows the format s3://<bucket>.
A s3 bucket is required to store cluster state information.
Ah, we need to provide the S3 bucket as well:
[vagrant@kops ~]$ kops update cluster k8s.ifritltd.net --yes --state=s3://kayan-kops-state
I1217 15:24:19.458665    2930 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I1217 15:24:20.777084    2930 vfs_castore.go:430] Issuing new certificate: "ca"
I1217 15:24:20.781929    2930 vfs_castore.go:430] Issuing new certificate: "apiserver-aggregator-ca"
I1217 15:24:21.852334    2930 executor.go:91] Tasks: 31 done / 73 total; 24 can run
I1217 15:24:23.306964    2930 vfs_castore.go:430] Issuing new certificate: "kubecfg"
I1217 15:24:23.518178    2930 vfs_castore.go:430] Issuing new certificate: "kube-controller-manager"
I1217 15:24:23.955319    2930 vfs_castore.go:430] Issuing new certificate: "apiserver-proxy-client"
I1217 15:24:23.986033    2930 vfs_castore.go:430] Issuing new certificate: "kube-proxy"
I1217 15:24:24.202773    2930 vfs_castore.go:430] Issuing new certificate: "kubelet"
I1217 15:24:24.215050    2930 vfs_castore.go:430] Issuing new certificate: "apiserver-aggregator"
I1217 15:24:24.263349    2930 vfs_castore.go:430] Issuing new certificate: "kops"
I1217 15:24:24.270783    2930 vfs_castore.go:430] Issuing new certificate: "master"
I1217 15:24:24.354607    2930 vfs_castore.go:430] Issuing new certificate: "kubelet-api"
I1217 15:24:24.569538    2930 vfs_castore.go:430] Issuing new certificate: "kube-scheduler"
I1217 15:24:26.903693    2930 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I1217 15:24:28.205377    2930 launchconfiguration.go:333] waiting for IAM instance profile "masters.k8s.ifritltd.net" to be ready
I1217 15:24:28.293855    2930 launchconfiguration.go:333] waiting for IAM instance profile "nodes.k8s.ifritltd.net" to be ready
I1217 15:24:38.827416    2930 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I1217 15:24:39.278717    2930 executor.go:91] Tasks: 73 done / 73 total; 0 can run
I1217 15:24:39.278760    2930 dns.go:153] Pre-creating DNS records
I1217 15:24:40.338846    2930 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to k8s.ifritltd.net

Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8s.ifritltd.net
   The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md

[vagrant@kops ~]$
73 tasks! Wow, well done kops. By the way, every time I use it with a newer release there are more and more tasks added; just imagine what you would have to do if there were no kops!
Let's check our cluster. If you don't want to provide --state every time, you can export it with KOPS_STATE_STORE:
[vagrant@kops ~]$ export KOPS_STATE_STORE=s3://kayan-kops-state
[vagrant@kops ~]$ kops validate cluster
Using cluster from kubectl context: k8s.ifritltd.net

Validating cluster k8s.ifritltd.net

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-2a   Master  t2.micro     1    1    eu-west-2a
nodes               Node    t2.micro     2    2    eu-west-2a

NODE STATUS
NAME                                          ROLE    READY
ip-172-20-46-134.eu-west-2.compute.internal   node    True
ip-172-20-55-66.eu-west-2.compute.internal    node    True
ip-172-20-62-164.eu-west-2.compute.internal   master  True

Your cluster k8s.ifritltd.net is ready
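If you want the variable to survive new shell sessions as well, you could also append it to your ~/.bashrc:

[vagrant@kops ~]$ echo 'export KOPS_STATE_STORE=s3://kayan-kops-state' >> ~/.bashrc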
Please note it might take some time before the cluster is fully ready, especially if you create, delete and then recreate it,
as DNS records could be cached, etc. If validation fails you may get these sorts of errors:
[vagrant@kops ~]$ kops validate cluster
Using cluster from kubectl context: k8s.ifritltd.net

Validating cluster k8s.ifritltd.net

Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.
The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.
Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.
The protokube container and dns-controller deployment logs may contain more diagnostic information.
Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Cannot reach cluster's API server: unable to Validate Cluster: k8s.ifritltd.net
or when one of the nodes is not ready:
[vagrant@kops ~]$ kops validate cluster
Using cluster from kubectl context: k8s.ifritltd.net

Validating cluster k8s.ifritltd.net

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-2a   Master  t2.micro     1    1    eu-west-2a
nodes               Node    t2.micro     2    2    eu-west-2a

NODE STATUS
NAME                                          ROLE    READY
ip-172-20-55-96.eu-west-2.compute.internal    master  True

Validation Failed
Ready Master(s) 1 out of 1.
Ready Node(s) 0 out of 2.

your nodes are NOT ready k8s.ifritltd.net
But once it is ready you can start playing with kubectl; let's check our nodes and pods and then try to deploy something:
[vagrant@kops ~]$ kubectl cluster-info
Kubernetes master is running at https://api.k8s.ifritltd.net
KubeDNS is running at https://api.k8s.ifritltd.net/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[vagrant@kops ~]$
[vagrant@kops ~]$ kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
ip-172-20-46-134.eu-west-2.compute.internal   Ready     node      8m        v1.8.4
ip-172-20-55-66.eu-west-2.compute.internal    Ready     node      7m        v1.8.4
ip-172-20-62-164.eu-west-2.compute.internal   Ready     master    9m        v1.8.4
[vagrant@kops ~]$
[vagrant@kops ~]$ kubectl get pod --namespace=kube-system
NAME                                                                  READY     STATUS    RESTARTS   AGE
dns-controller-fc68bd97b-2ldv7                                        1/1       Running   0          9m
etcd-server-events-ip-172-20-62-164.eu-west-2.compute.internal        1/1       Running   0          10m
etcd-server-ip-172-20-62-164.eu-west-2.compute.internal               1/1       Running   0          10m
kube-apiserver-ip-172-20-62-164.eu-west-2.compute.internal            1/1       Running   0          9m
kube-controller-manager-ip-172-20-62-164.eu-west-2.compute.internal   1/1       Running   0          9m
kube-dns-7f56f9f8c7-52qkx                                             3/3       Running   0          9m
kube-dns-7f56f9f8c7-pvq8h                                             3/3       Running   0          7m
kube-dns-autoscaler-f4c47db64-hg2x6                                   1/1       Running   0          9m
kube-proxy-ip-172-20-46-134.eu-west-2.compute.internal                1/1       Running   0          8m
kube-proxy-ip-172-20-55-66.eu-west-2.compute.internal                 1/1       Running   0          7m
kube-proxy-ip-172-20-62-164.eu-west-2.compute.internal                1/1       Running   0          9m
kube-scheduler-ip-172-20-62-164.eu-west-2.compute.internal            1/1       Running   0          10m
[vagrant@kops ~]$
As you can see, all the internal pods are running. Let's deploy something now:
[vagrant@kops ~]$ kubectl run busybox --image=busybox --port 8080 \
  -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
  echo 'smallest http server'; } | nc -l -p 8080; done"
deployment "busybox" created
Now we need to expose it:
[vagrant@kops ~]$ kubectl expose deployment busybox --type=NodePort
service "busybox" exposed
We need to find out the port that was allocated when the service was created:
[vagrant@kops ~]$ kubectl get svc
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
busybox   NodePort   100.70.222.39   <none>        8080:31155/TCP   7s
As you can see, we have a container running and exposing port 8080, and k8s exposed it on all nodes on port 31155 (the outputs below come from a later run of the same steps, where the allocated NodePort was 30822).
Given the SSH key we created earlier, we can now connect to our nodes and check that very port.
In the EC2 console you can see that every node has a public IP address, so we can now check the port on those hosts:
[vagrant@kops ~]$ ssh admin@35.177.142.69 -i .ssh/id_rsa curl localhost:30822
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    21    0    21    0     0   3039      0 --:--:-- --:--:-- --:--:--  3500
smallest http server
[vagrant@kops ~]$ ssh admin@35.177.179.222 -i .ssh/id_rsa curl localhost:30822
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    21    0    21    0     0   1899      0 --:--:-- --:--:-- --:--:--  2100
smallest http server
[vagrant@kops ~]$ ssh admin@52.56.234.121 -i .ssh/id_rsa curl localhost:30822
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    21    0    21    0     0   3259      0 --:--:-- --:--:-- --:--:--  3500
smallest http server
[vagrant@kops ~]$
As you can see it works.
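By the way, kubectl run and kubectl expose are handy for a quick demo, but the same deployment and service can also be described declaratively. A rough equivalent, applied from a heredoc, might look like the sketch below if you start from scratch; note that on the 1.8 cluster used here the Deployment may need apiVersion apps/v1beta2 instead of apps/v1:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      run: busybox
  template:
    metadata:
      labels:
        run: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        ports:
        - containerPort: 8080
        # same tiny "web server" loop as the kubectl run example above
        command:
        - sh
        - -c
        - |
          while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; echo 'smallest http server'; } | nc -l -p 8080; done
---
apiVersion: v1
kind: Service
metadata:
  name: busybox
spec:
  type: NodePort
  selector:
    run: busybox
  ports:
  - port: 8080
    targetPort: 8080
EOF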
Next we will expose the deployment through a LoadBalancer; at the end of the day, we didn't deploy the whole thing
just for NodePort testing, as that could easily be tested locally with minikube in 2-3 steps.
So let’s expose the deployment through LoadBalancer:
[vagrant@kops ~]$ kubectl expose deployment busybox --port 80 --target-port=8080 --type=LoadBalancer
Error from server (AlreadyExists): services "busybox" already exists
[vagrant@kops ~]$
Because we already exposed the busybox deployment, which created a service with the same name as the deployment, the name is
already taken when we try a second time, so we need to explicitly set a different name:
[vagrant@kops ~]$ kubectl expose deployment busybox --name busyboxlb --port 80 --target-port=8080 --type=LoadBalancer
service "busyboxlb" exposed
Now, let's check the load balancer address:
[vagrant@kops ~]$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)          AGE
busybox      NodePort       100.68.181.5   <none>             8080:30822/TCP   47m
busyboxlb    LoadBalancer   100.68.59.82   <pending>          80:32473/TCP     3s
kubernetes   ClusterIP      100.64.0.1     <none>             443/TCP          1h
..
...
[vagrant@kops ~]$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)          AGE
busybox      NodePort       100.68.181.5   <none>             8080:30822/TCP   54m
busyboxlb    LoadBalancer   100.68.59.82   a53bfecf2e354...   80:32473/TCP     6m
kubernetes   ClusterIP      100.64.0.1     <none>             443/TCP          1h
[vagrant@kops ~]$
At first it was still pending; a few minutes later it was ready. Let's describe the load balancer:
[vagrant@kops ~]$ kubectl describe svc busyboxlb
Name:                     busyboxlb
Namespace:                default
Labels:                   run=busybox
Annotations:              <none>
Selector:                 run=busybox
Type:                     LoadBalancer
IP:                       100.68.59.82
LoadBalancer Ingress:     a53bfecf2e35411e7965c06578e71d83-433398422.eu-west-2.elb.amazonaws.com
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32473/TCP
Endpoints:                100.96.2.2:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  7m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   7m    service-controller  Ensured load balancer
[vagrant@kops ~]$
Let's take the load balancer's fully qualified name and send a request:
[vagrant@kops ~]$ curl a53bfecf2e35411e7965c06578e71d83-433398422.eu-west-2.elb.amazonaws.com
smallest http server
[vagrant@kops ~]$
Now our service is publicly accessible! Let's create an alias, busybox.k8s.ifritltd.net, for our load balancer: go to Route53,
and in our subdomain, k8s.ifritltd.net, create a new record set of type A with Alias=Yes,
click the alias target and pick your load balancer from the list of ELB classic load balancers:
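If you would rather script that than click through the console, something along these lines should work. It is only a sketch: it assumes the busybox ELB is the only (or first) classic load balancer in the region, otherwise filter for the right one by its DNS name:

LB_DNS=$(aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[0].DNSName' --output text)
LB_ZONE=$(aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[0].CanonicalHostedZoneNameID' --output text)
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name k8s.ifritltd.net \
    --query 'HostedZones[0].Id' --output text)
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"CREATE\",
        \"ResourceRecordSet\": {
          \"Name\": \"busybox.k8s.ifritltd.net\",
          \"Type\": \"A\",
          \"AliasTarget\": {
            \"HostedZoneId\": \"$LB_ZONE\",
            \"DNSName\": \"$LB_DNS\",
            \"EvaluateTargetHealth\": false
          }
        }
      }]
    }"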
We can now query it by name:
[vagrant@kops ~]$ curl busybox.k8s.ifritltd.net.
smallest http server
[vagrant@kops ~]$
That is it; we have walked through all the basic steps required to provision a working k8s cluster on AWS.
Finally, if you don't want to pay for all those resources kops has provisioned, please don't forget to delete your cluster; it is so easy to create a new one anyway 🙂
[vagrant@kops ~]$ kops delete cluster k8s.ifritltd.net --yes --state=s3://kayan-kops-state
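And if you are done experimenting for good, you could also clean up the state bucket once the cluster is gone. This permanently deletes the stored state, so only do it when you no longer need it, and note that if you enabled versioning you will also have to remove old object versions before the bucket can be deleted:

[vagrant@kops ~]$ aws s3 rm s3://kayan-kops-state --recursive
[vagrant@kops ~]$ aws s3api delete-bucket --bucket kayan-kops-state --region eu-west-2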
5. Set up authentication in Kubernetes with the AWS IAM Authenticator (Heptio).
Now, if you are setting up a k8s cluster for production use, the default setup, which uses an x509 key/certificate for cluster authentication, obviously won't give you the level of security you want. If you are already using AWS, you can instead set up authentication for your cluster using the AWS IAM Authenticator. The details are provided in the link.
Although kops can help you spin up a cluster quickly, if you want to go further and embark on setting up an HA cluster for production use, you may well want to consider other tools, like kubeadm; in the link above you can find a more advanced way of setting up a k8s cluster.