How to Deploy Kubernetes Clusters on AWS using RKE


A few months ago, we announced the Rancher Kubernetes Engine (RKE). RKE is a new open source project we have been working on at Rancher to provide a Kubernetes installer that is simple to use, fast, and works anywhere. You can read more about the project here.

We have been adding more functionality and enabling more deployment options with each RKE release. One of the most notable features we rolled out recently was initial support for Kubernetes cloud providers. In this post, we will walk through a simple showcase of the AWS cloud provider.

Kubernetes cloud provider support

Kubernetes includes support for several core cloud providers, as well as a simple-to-implement interface that allows anyone to write their own cloud provider and link it against the cloud-controller-manager Kubernetes component.

Several cloud providers are supported in Kubernetes Core, for example AWS, Azure and GCE.

RKE cloud provider support

While implementing cloud provider support in RKE, we are focusing mainly on the core Kubernetes cloud providers.

Each of the core providers has its own required configuration, limitations, options, and features, so we made a choice at Rancher to roll them out one by one. Starting with version v0.1.3, RKE provides support for the AWS cloud provider. The latest released version as of this writing, v0.1.5, adds support for Azure as well. To see the latest release of RKE, visit our GitHub release page.

RKE deployment requirements

For the purpose of this how-to article, we will deploy a minimal cluster on AWS using the simplest configuration possible to show how to use RKE with AWS. This is not a production-grade deployment. However, we will provide pointers where possible on how to improve your cluster deployment.

To deploy Kubernetes on AWS, you simply need to:

  • Provision nodes on EC2. For the purpose of this post, we will create a small non-HA cluster with 1 control plane node and 3 worker nodes. If you would like to add HA to your Kubernetes cluster, you simply need to assign the control plane and etcd roles to 3 or more nodes.
  • The nodes need to have an instance IAM profile that allows Kubernetes to talk to the AWS API and manage resources. We will look at that shortly.
  • The nodes should be accessible using SSH and have a supported version of Docker installed. If you are not using root, the SSH user you will be using needs to be part of the docker group.

You will need a Linux machine to perform most of the operations in this post. You will also need to install the kubectl command. If you are using macOS, the process is fairly similar; you just need to download the rke_darwin-amd64 binary instead of the Linux one.

Creating an IAM profile

IAM profiles allow for very fine-grained control of access to AWS resources. How you set up and configure the IAM profile will depend on how you design your cluster and which AWS services and resources you plan to use or allow your applications to use.

You can check the IAM profiles provided by kops or by this repo for examples of some fine-grained policies.

Also, since your pods will have access to the same resources as your worker node, you should consider using projects like kube2iam.

Step 1: Create an IAM role

First, we create an IAM role and attach our trust policy to it:

$ aws iam create-role --role-name rke-role --assume-role-policy-document file://rke-trust-policy.json

The rke-trust-policy.json file will include the following lines:
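A standard EC2 trust policy, which allows EC2 instances to assume the role, looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}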

Step 2: Add our Access Policy

Next, we add our access policy to the role. For the purpose of this post, we will use this simplified and open policy for our instance profile:

$ aws iam put-role-policy --role-name rke-role --policy-name rke-access-policy --policy-document file://rke-access-policy.json

rke-access-policy.json contains the following:
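For a throwaway test cluster, a fully open policy like the following works; the wildcard statement below is an assumption matching the "simplified and open" description above, and a production cluster should instead grant only the EC2, ELB, and EBS permissions the cloud provider actually needs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}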

Step 3: Create the Instance Profile

Finally, we create the instance profile to use with our EC2 instances, and add the role we created to this instance profile:

$ aws iam create-instance-profile --instance-profile-name rke-aws
$ aws iam add-role-to-instance-profile --instance-profile-name rke-aws --role-name rke-role

Provisioning Nodes

For this, we will simply use the AWS console launch wizard to create EC2 instances. As we mentioned earlier, we will create 4 instances.

We will go with the defaults, choosing an Ubuntu 16.04 AMI and any instance type with sufficient resources to run a basic cluster; t2.large should be suitable.

When configuring Instance Details, make sure to use the instance IAM profile we created earlier.

When configuring your instances for a production setup, it’s important to carefully plan and configure your cluster’s AWS security group rules. For simplicity, we will use an all-open security group in our example.

Node configuration

Now that we have our 4 instances provisioned, we need to prepare them for installation.

RKE requirements are very simple; all you need to do is install Docker and add your user to the docker group.

In our case, we will use the latest Docker 1.12.x release, and since we used Ubuntu on AWS, our user will be ubuntu. We will need to run the following commands on each instance as root:

# curl releases.rancher.com/install-docker/1.12.sh | bash
# usermod -a -G docker ubuntu

Running RKE

This is the fun part. One of RKE’s design goals is to be as simple as possible: it’s possible to build a Kubernetes cluster using only node configurations, with the rest of the parameters and configuration options automatically set to sensible defaults.

We will configure the following cluster.yml file to use our four nodes and configure Kubernetes with the AWS cloud provider:
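A minimal cluster.yml for this setup could look like the sketch below; the node addresses are placeholders for your instances' public DNS names, and the cloud_provider block is what enables the AWS provider:

nodes:
  - address: <control-plane-public-dns>
    user: ubuntu
    role: [controlplane, etcd]
  - address: <worker-1-public-dns>
    user: ubuntu
    role: [worker]
  - address: <worker-2-public-dns>
    user: ubuntu
    role: [worker]
  - address: <worker-3-public-dns>
    user: ubuntu
    role: [worker]

cloud_provider:
  name: aws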

As of the time of this writing, the latest version of RKE is v0.1.5 (check the RKE releases page on GitHub for the latest version). Let’s download the binary and make it executable:

$ wget https://github.com/rancher/rke/releases/download/v0.1.5/rke

$ chmod +x rke

Now, let’s save the cluster.yml file in the same directory and build our test cluster:

$ ./rke up

That’s it. Our cluster should take a few minutes to be deployed. Once the deployment is completed, we can access our cluster using the kubectl command and the generated kube_config_cluster.yml file:

$ export KUBECONFIG=$PWD/kube_config_cluster.yml
$ kubectl get nodes
$ kubectl get cs

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

One thing we should point out here: you will notice that the get nodes command shows the private DNS names of the nodes, not the public DNS names we configured in our cluster.yml.

Even though RKE supports hostname overrides for configured nodes, they wouldn’t be usable here: when configured with a cloud provider, Kubernetes uses the cloud provider’s hostnames and ignores any override values passed.

Using Kubernetes on AWS

Let’s try to take our newly created cluster for a spin!

We will use a slightly modified version of the Kubernetes Guestbook example. We will update this example to achieve two goals:

  • Persist redis master data to an EBS volume.
  • Use an ELB for external access to our application.

Launching a Guestbook

Let’s clone the Kubernetes Examples repo and get to the example files:

$ git clone https://github.com/kubernetes/examples
$ cd examples/guestbook

The Guestbook example is a simple PHP application that writes updates to a Redis backend. The Redis backend is configured as a master deployment and a two-replica slave deployment.

It consists of the following manifests, which represent Kubernetes resources:

  • frontend-deployment.yaml
  • frontend-service.yaml
  • redis-master-deployment.yaml
  • redis-master-service.yaml
  • redis-slave-deployment.yaml
  • redis-slave-service.yaml

We will only modify the ones we need to enable AWS cloud resources.

Using an ELB for frontend

We will update frontend-service.yaml by adding the LoadBalancer type to the service definition:
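With that change, the service definition looks roughly like this (based on the upstream guestbook manifest with type: LoadBalancer set; the labels may differ slightly in your copy of the repo):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend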

That’s it! When this service is deployed, Kubernetes will provision an ELB dynamically and point it to the deployment.

Adding persistent storage

We will add a persistent EBS volume to our redis master. We will configure Kubernetes to dynamically provision volumes based on Storage Classes.

First, we will configure the following storage class. Note that we need to set the AWS zone to the same one that contains the instances we created:

storage-class.yaml:
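A storage class along these lines should work; the class name and the zone are assumptions, so set the zone to the one your instances actually run in:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: generic
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a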

Next, we will create a PersistentVolumeClaim:

redis-master-pvc.yaml:
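A matching claim could look like this; the claim name, storage class name, and requested size are illustrative assumptions:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-master-pvc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi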

And finally, we just need to update the redis-master-deployment.yaml manifest to configure the volume:
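The relevant part of the pod spec would look roughly like the excerpt below, with a volume backed by the claim mounted at Redis’s data directory; the volume name and mount path are assumptions, and the image should stay whatever your copy of the manifest already uses:

    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-master-data
          mountPath: /data
      volumes:
      - name: redis-master-data
        persistentVolumeClaim:
          claimName: redis-master-pvc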

Deploying the Guestbook

At this point, we should have a complete set of manifests. The current default Kubernetes version deployed by RKE is v1.8.10-rancher1, so I had to update the Deployment manifests to use apiVersion: apps/v1beta2.

Let’s deploy them to our cluster:

$ kubectl apply -f storage-class.yaml
$ kubectl apply -f redis-master-pvc.yaml
$ kubectl apply -f redis-master-service.yaml
$ kubectl apply -f redis-master-deployment.yaml
$ kubectl apply -f redis-slave-deployment.yaml
$ kubectl apply -f redis-slave-service.yaml
$ kubectl apply -f frontend-service.yaml
$ kubectl apply -f frontend-deployment.yaml

In a couple of minutes, everything should be up and running.

Examining our deployment

At this point, our guestbook example should be up and running. We can confirm that by running the command:

$ kubectl get all

As you can see, everything is up and running.

Now, let’s try to access our guestbook. We will get the ELB hostname using the following command:

$ kubectl get svc/frontend -o yaml

Using the hostname at the end of the output, you can now access your deployed Guestbook!
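If you only want the hostname itself, a jsonpath query can pull it straight out of the service status (a small convenience; the field path is the standard one for LoadBalancer services):

$ kubectl get svc frontend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'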

Now, let’s run the following command to see if a PersistentVolume was created for our Redis master:

$ kubectl get pv

As you can see, a PersistentVolume was dynamically created for our Redis master based on the PersistentVolumeClaim we configured.

Recap

In this post, we quickly introduced Kubernetes cloud provider support and talked about our plans to support different cloud providers in RKE.

We also described in as much detail as possible how to prepare resources for a simple Kubernetes deployment using the AWS cloud provider, and we configured and deployed a sample application that uses AWS resources.

As we mentioned before, the example in this article is not a production grade deployment. Kubernetes is a complex system that needs a lot of work to be deployed in production, especially in terms of preparing the infrastructure that will host it. However, we hope that we were able to provide a useful example of the available features and pointers on where to go next.

You can find all the files used in the post in this repo: https://github.com/moelsayed/k8s-guestbook

Mohamed el Sayed, DevOps Engineer, co-author of RKE