Building Microservices with Docker on the new Rancher Beta


In my last post I showed you how to deploy a Highly Available Wordpress installation using Rancher Services, a Gluster cluster for distributed storage, and a database cluster based on Percona XtraDB Cluster. Now I'm going one step further: this time we set up the Gluster and PXC clusters with Rancher Services too, using the new service features available in the Rancher beta release, namely DNS service discovery and Label Scheduling. As you will see, it's becoming even easier and cleaner to build microservices clusters with Rancher.

First, a short introduction to DNS service discovery and Label Scheduling, so you have a better idea of how they help us when creating our clusters. Thanks to DNS service discovery, each container within a cluster knows about every other container in the same cluster. This is fundamental for software that needs all cluster nodes to intercommunicate, like Gluster or PXC: through service discovery, all our containers can talk to each other in order to automatically initialize and configure the cluster. Moreover, when you scale a service up, the new containers can join the cluster automatically because they too can discover which containers are already part of it.

Label Scheduling gives us more control over where our containers are deployed. For example, you can dedicate a set of Docker hosts to a certain type of container. In our case, to show how Label Scheduling works, we use some Docker hosts only for storage containers, other hosts only for database containers, and a third group of hosts for the wordpress containers.

This post is divided into these sections:

  1. Preparing AWS environment and creating Docker hosts
  2. Deploying the storage cluster
  3. Deploying the database cluster
  4. Deploying the Wordpress containers
  5. Deploying the Load Balancers
  6. Testing the setup

Prerequisites

Preparing AWS environment

Before deploying the Wordpress environment you need to satisfy the following requirements in AWS:

  • Create an Access Key to use Rancher's AWS provisioning feature. You can get an Access Key by clicking your username in the AWS console and then choosing the Security Credentials option.
  • Go to the EC2 Console, Security Groups section, and click the Create Security Group button. Configure a Security Group named Wordpress for the default VPC with the following inbound rules:
    • Allow ports 22/tcp, 2376/tcp and 8080/tcp from any source, needed by Docker Machine to provision hosts and to reach the Rancher UI
    • Allow ports 500/udp and 4500/udp from any source, needed for the Rancher network
    • Allow ports 9345/tcp and 9346/tcp from any source, needed for UI features like graphs, viewing logs, and executing a shell
    • Allow port 80/tcp from any source, needed to publish the Wordpress site
  • Create a Linux instance running the latest Docker (I'm using RancherOS, which you can find in the Amazon Community AMIs), minimum size t2.small. Configure it to run Rancher Server by defining the following user data, and associate it with the Wordpress Security Group. Once the instance is running you can browse to the Rancher UI: http://RANCHER_INSTANCE_PUBLIC_IP:8080/
#!/bin/bash
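# Run the Rancher Server container and publish its UI/API on port 8080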
docker run -d -p 8080:8080 rancher/server:v0.23.0-rc6
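
If you prefer the command line, the Wordpress Security Group described above can also be created with the AWS CLI. This is only a minimal sketch, assuming the AWS CLI is configured with the Access Key created earlier and that the group lives in the default VPC:

aws ec2 create-security-group --group-name Wordpress \
  --description "Rancher Wordpress environment"
# TCP ports: SSH and Docker Machine provisioning, Rancher UI/agent features, and the site itself
for port in 22 2376 8080 9345 9346 80; do
  aws ec2 authorize-security-group-ingress --group-name Wordpress \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
# UDP ports for the Rancher managed network
for port in 500 4500; do
  aws ec2 authorize-security-group-ingress --group-name Wordpress \
    --protocol udp --port "$port" --cidr 0.0.0.0/0
done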

Creating Docker hosts

In the Rancher UI, click the Infrastructure menu at the top left, then click the Add Host button and confirm the Rancher IP. Select the Amazon EC2 provider and enter the Access Key and Secret Key you obtained before. Choose the same Availability Zone and Subnet where the Rancher Server is deployed, and select the Security Group named Wordpress that we created earlier. Then create two hosts with these options:

  1. Quantity: 2
  2. Name: storage
  3. Instance type: t2.micro is enough for testing purposes
  4. Add a Label and write this Key-Value pair: target.service=storage

Now you have two hosts running. We defined the target.service label so that only Gluster containers get deployed on these hosts. Clone one of the hosts to create three more hosts that we will use for the PXC containers. Be sure to use these options:

  1. Choose the same Availability Zone, Subnet and Security Group as before
  2. Quantity: 3
  3. Name: database
  4. Instance type: at least t2.small is required even for testing purposes
  5. Add a Label and write this Key-Value pair: target.service=database

Repeat this last step to create two more hosts for wordpress containers using these options:

  1. Choose the same Availability Zone, Subnet and Security Group as before
  2. Quantity: 2
  3. Name: wordpress
  4. Instance type: t2.micro is enough for testing purposes
  5. Add a Label and write this Key-Value pair: target.service=web

Finally, after everything connects with our Rancher server, you will see your seven hosts up and running in the Rancher UI.

Deploying the storage cluster

In the Rancher UI, click the Services menu option at the top left and add a new project named Wordpress, where we will deploy our Highly Available wordpress environment. Then expand the Add… menu and select the Service option. Fill in these options on the Add Service screen:

  • Name: gluster - It is important that you name this service gluster. If you prefer another name, add an environment variable named SERVICE_NAME whose value must equal the service name you type
  • Check Always run one instance of this container on every host option
  • Image: nixel/rancher-glusterfs-server:v2.3

Expand Advanced Options and fill in the following sections:

  • Command: add an environment variable named ROOT_PASSWORD whose value should be a random string used by the containers to automatically join the cluster
  • Volumes: add this volume: /gluster_volume:/gluster_volume
  • Networking: be sure to select Managed networking
  • Security/Host: be sure to check Full access to the host
  • Scheduling: add a new rule so the host must have a host label of target.service = storage. This will force gluster containers to be deployed only on the storage1 and storage2 hosts. A rough docker run equivalent of this service definition is sketched after this list.
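
For reference, the service definition above translates roughly to the following docker run command. This is only a sketch (the ROOT_PASSWORD value is a placeholder); in practice Rancher adds its managed networking and the host-label scheduling rule on top:

# Rough docker-run equivalent of the gluster service (managed networking and scheduling not shown)
docker run -d --privileged \
  -e ROOT_PASSWORD=some-random-string \
  -v /gluster_volume:/gluster_volume \
  nixel/rancher-glusterfs-server:v2.3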

Now start your newly created service. Wait for the containers to start, then go to Infrastructure again and you will see them running on the storage1 and storage2 hosts as expected.

In a previous step we checked the Always run one instance of this container on every host option. This is because we are using the /gluster_volume:/gluster_volume volume, that is, the /gluster_volume directory on the host's local disk. As a consequence we cannot start two containers on the same host, because they would use the same local directory to save their data and would collide.

You can view the logs of the gluster containers to see that the cluster has been successfully started.
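
If you want to double-check beyond the logs, the usual GlusterFS commands should confirm the cluster state. A hedged example, run from a shell inside one of the gluster containers:

# Peers discovered through the service name should show up as connected
gluster peer status
# List any volumes the image has created (if applicable)
gluster volume info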

Deploying the database cluster

Now add a second service following these instructions:

  • Name: pxc - It is important that you name this service pxc. If you prefer another name, add an environment variable named SERVICE_NAME whose value must equal the service name you type
  • Check Always run one instance of this container on every host option
  • Image: nixel/rancher-percona-xtradb-cluster:v1.1

Expand Advanced Options and fill in the following sections:

  • Command: add an environment variable named PXC_SST_PASSWORD whose value should be a random string used by the containers to automatically sync their data. Create another environment variable named PXC_ROOT_PASSWORD whose value should be a random string used by the containers to automatically join the cluster. Remember the PXC_ROOT_PASSWORD value, because you will need it later to create the wordpress database.
  • Volumes: add this volume: /var/lib/mysql:/var/lib/mysql
  • Networking: be sure to select Managed networking
  • Security/Host: be sure to check Full access to the host
  • Scheduling: add a new rule so the host must have a host label of target.service = database. This will force pxc containers to be deployed only on the database1, database2 and database3 hosts. Again, a rough docker run equivalent is sketched after this list.
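
As with the gluster service, this is roughly the docker run command the pxc definition above maps to. It is a sketch only, with placeholder passwords, and Rancher again layers its managed networking and scheduling on top:

# Rough docker-run equivalent of the pxc service (managed networking and scheduling not shown)
docker run -d --privileged \
  -e PXC_SST_PASSWORD=some-random-sst-string \
  -e PXC_ROOT_PASSWORD=some-random-root-string \
  -v /var/lib/mysql:/var/lib/mysql \
  nixel/rancher-percona-xtradb-cluster:v1.1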

After starting the pxc service your database containers are up and working. Again, view the pxc logs to see the messages confirming cluster startup. Back in Infrastructure you will see the new pxc containers deployed on the database1, database2 and database3 hosts as expected.

Deploying the Wordpress containers

Now, in order to deploy the Wordpress containers, add a new service with these parameters:

  • Name: wordpress
  • Scale: run, for example, 6 containers
  • Image: nixel/rancher-wordpress-ha:v1.1
  • Service links: this is the most important step, because the links are what let wordpress contact the storage and database containers. Add these two service links (see the note after this list):
    • Destination service: gluster, as name: storage. It is important that you name this link storage
    • Destination service: pxc, as name: db. It is important that you name this link db
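
The link names matter because, inside the wordpress containers, each linked service becomes reachable under its link name. An illustrative check from a shell inside a wordpress container might look like this:

# "storage" resolves to the gluster service, "db" to the pxc service
ping -c 1 storage
ping -c 1 db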

Expand Advanced Options and fill in the following sections:

  • Command: add an environment variable named DB_PASSWORD whose value should equal the PXC_ROOT_PASSWORD variable you defined for the pxc service.
  • Networking: be sure to select Managed networking
  • Security/Host: be sure to check Full access to the host
  • Scheduling: add a new rule so the host must have a host label of target.service = web. This will force wordpress containers to be deployed on wordpress1 and wordpress2 hosts.

Start the wordpress service and check its logs to confirm the containers have started successfully. If you take a look at the Infrastructure section you will see the new containers deployed on the wordpress1 and wordpress2 hosts. At this point you should have all your services up and running.

Deploying the Load Balancers

Now it is time to create the Wordpress Load Balancers. We will create a total of three Load Balancers. Add a new Balancer Service with this configuration:

  • Name: wordpress-lb
  • Scale: 3
  • Target: choose wordpress service
  • Listeners: map source 80 HTTP port to target 80 HTTP port
  • Health Check: GET /healthcheck.txt (HTTP/1.0). You can try this check by hand, as sketched after this list.
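
Once the balancers are running you can exercise the health-check target yourself. A minimal sketch, where LB_HOST_PUBLIC_IP is a placeholder for the public IP of any host running a balancer agent:

# A healthy backend should answer with HTTP 200
curl -i http://LB_HOST_PUBLIC_IP/healthcheck.txt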

At this point you have a total of four services up and running in the Wordpress project, including the Load Balancer.

Testing the setup

Now go to the Infrastructure section and copy the IPs of the hosts where the LB agents are running. Create a domain that resolves to these IPs, or edit your /etc/hosts file to resolve one of them. I am using the domain http://myblog.com for testing. Browse to http://myblog.com and finish the Wordpress installation.

As always, you can test the High Availability capabilities. For example, go to the Infrastructure section and follow these steps to simulate host failures:

  • Deactivate, delete and purge the storage1 host
  • Deactivate, delete and purge the database1 host
  • Deactivate, delete and purge the wordpress1 host
  • Update your /etc/hosts file if needed (see the sketch after this list)
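
For the testing domain, updating /etc/hosts simply means pointing myblog.com at one of the surviving load-balancer hosts. A sketch with a placeholder IP:

# Replace the placeholder with the public IP of a host that is still running a balancer agent
echo "203.0.113.10 myblog.com" | sudo tee -a /etc/hosts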

At this point your Wordpress project should be in DEGRADED status. Now browse to http://myblog.com/wp-admin/ and your wordpress site should still be working. Then go back to the Hosts section and create the storage1, database1 and wordpress1 hosts again. Wait for the new containers to be automatically created and started.

Conclusion

Rancher has improved and added new service features. The most important changes in the Rancher beta version are:

  • DNS service discovery, allowing containers in the same service to communicate, which is useful for clustering
  • Label Scheduling, which allows us to create rules to better control on which hosts we deploy our microservices

As you can see, it's now even easier and cleaner to deploy our applications as Rancher services. We have also used service linking to interconnect our whole application stack, so that wordpress is connected to the storage and database services. If you're interested in downloading the Rancher beta, visit the Rancher GitHub site for instructions. You can find out more about the new beta program here.

Manel Martinez is a Linux systems engineer with experience in the design and management of scalable, distributable and highly available open source web infrastructures based on products like KVM, Docker, Apache, Nginx, Tomcat, Jboss, RabbitMQ, HAProxy, MySQL and XtraDB. He lives in Spain, and you can find him on Twitter @manel_martinezg.
