Virtual Host Routing for Docker using Rancher Load Balancer



Hello, I’m Alena Prokharchyk, an engineer at Rancher Labs. In the past I’ve written a couple of articles explaining our Load Balancer functionality within Rancher: first as a standalone feature, then as a part of our Docker Service Discovery functionality. With these capabilities, we’ve developed a load balancing function that can be used not just for sharing traffic between Docker containers, but also for upgrading between software releases with no downtime for users. Today I want to walk you through “Virtual Host Routing,” a new set of functionality we added to the Rancher Load Balancer beginning with release 0.29, which allows users to route traffic between multiple services registered within the same LB service based on a target domain name and/or URL path. In this article I’m going to walk through how to configure this new functionality, as well as explain some of the use cases we envisioned when building it.

Use case

The typical use case for host-based load balancing is using a single IP address to distribute traffic to multiple services. For my example today, I’ll show how a company can host two different applications - a website and chat - on a single public IP address. In this case, a user would configure public DNS with two records pointing to the same IP address:

chat.example.com: 108.162.202.109
web.example.com: 108.162.202.109

We’ll then be using two separate applications to deliver these functions: nginx for the website and the wonderful LetsChat for the chat platform. With host-based load balancing, the traffic can be split between these two services based on the incoming request. If the request is for chat.example.com, it will be directed to the LetsChat services; if the request is for web.example.com, it will be directed to the nginx services. Here is how the end setup should look in Rancher:

[Image: the complete application setup in Rancher]

Now let’s walk through building this setup from scratch.
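Before going further, it’s worth confirming that both names resolve to the same address. A quick verification sketch, assuming the DNS records above have already been published:

dig +short chat.example.com
# 108.162.202.109
dig +short web.example.com
# 108.162.202.109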

Create stack and services

I’m going to create all of these services within a single Rancher Environment, and a single Rancher Stack. However, it is possible to set up host-based load balancing between services running in different stacks, as well as with external services. The services we’ll create will include a single “mongo” service with one container (needed for the LetsChat service), two “LetsChat” services with two containers each, and two “nginx” services with two containers each. You’ll notice in the image above I am also creating multiple versions of the nginx and LetsChat services (letschat1 and letschat2, for example). I’ve done that to show how you might handle upgrades between different releases of the same software.

[Images: the mongo, letschat1, and nginx service configurations]
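If it helps to see the moving parts outside of the Rancher UI, the containers behind these services are roughly equivalent to the following docker run commands. This is just a sketch - the image names are assumptions (mongo, sdelements/lets-chat, and nginx from Docker Hub), and in practice Rancher creates, links, and schedules the containers for you:

# MongoDB backing store for LetsChat (single container)
docker run -d --name mongo mongo

# Two LetsChat services; the sdelements/lets-chat image listens on port 8080
docker run -d --name letschat1 --link mongo:mongo sdelements/lets-chat
docker run -d --name letschat2 --link mongo:mongo sdelements/lets-chat

# Two nginx services listening on port 80
docker run -d --name nginx1 nginx
docker run -d --name nginx2 nginx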


Register services to a Load Balancer

After all of these services are created, it’s time to register them on the Load Balancer. Under “Add Service,” click on “Add Load Balancer.” The dialog looks very similar to the previous version of Rancher. Define the source port as 80 - that is the port the load balancer will be listening on - and add the target services. Then click on “Show advanced routing options”:

[Image: basic load balancer configuration]

It will expand the view so that you can set host name routing rules for each service:

[Image: advanced routing options]

I’ve set “chat.example.com” as the “Request Host” for letschat1/letschat2, so that all traffic with the host header “chat.example.com” coming to port 80 of the load balancer will get balanced across the letschat1/letschat2 services listening on port 8080. I’ve then set a similar rule for nginx1/nginx2: these services are listening on port 80, and traffic with the host header “web.example.com” coming to the load balancer will be distributed between them. To test how things work, I’m going to execute a curl request in the following format:

curl --header 'Host: <hostname>' 'http://<LB IP>:<LB source port>/login'

LB IP should be the public IP address of the host the Load Balancer instance resides on; or, if the request is internal, it can be the private IP address of the Load Balancer container. Example:

curl --header 'Host: chat.example.com' 'http://108.162.202.109:80/login'

The request should bring you to your LetsChat service. To send the request to the Nginx service, execute:

curl --header 'Host: web.example.com' 'http://108.162.202.109:80'
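If you’d rather test internally, you’ll need the Load Balancer container’s private IP. One way to look it up is docker inspect on the host running the LB container - a sketch, with the container name as a placeholder (this returns the Docker bridge address; the container’s address on Rancher’s managed network is also visible in the Rancher UI):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <LB container name>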

URL based routing

We can add even more granularity to our existing hostname routing rules. There might be a case where your application host name is the same, and you want to forward traffic to a particular service based on the URL path:

web.example.com/support -> nginx1
web.example.com/careers -> nginx2

[Image: the application setup with URL-based routing]

To support this use case, the way services get registered on the Load Balancer changes slightly:

[Image: load balancer configuration with Request Path rules]

As you can see, in addition to configuring the Request Host for nginx1 and nginx2, I’ve defined the Request Path as well. Now the traffic will not only be balanced between the nginx/letschat services based on the host name, but also between nginx1 and nginx2 using a more granular criterion - the URL path.
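You can test the path rules the same way as before. With the rules above, the first request should land on nginx1 and the second on nginx2:

curl --header 'Host: web.example.com' 'http://108.162.202.109:80/support'
curl --header 'Host: web.example.com' 'http://108.162.202.109:80/careers'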

Internal Load Balancing

Sometimes you might want to limit your application to internal use only, but you still need some Load Balancer functionality. In the latest release, you are given a choice between internal and public for the load balancer’s source ports. If you choose to use an internal source port, the load balancer’s listening port will not be published to the host.

[Image: selecting an internal source port for the load balancer]
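A quick way to see the difference - a sketch, where 10.42.x.x stands in for the Load Balancer container’s address on Rancher’s managed network:

# From another container on the managed network, the internal port is reachable:
curl --header 'Host: chat.example.com' 'http://10.42.x.x:80/login'

# From outside the host, the port is not published, so the same request fails:
curl --header 'Host: chat.example.com' 'http://<host public IP>:80/login'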

More LB stuff coming soon

I hope this walk-through helps you understand the latest work we’ve been doing to make our Docker load balancing functionality more useful. As usual, I’ll close by giving you a preview of what we’re working on next for the load balancer. In the next couple of weeks you should see us add support for SSL termination to the load balancer. Also, as always, an enormous thank you to the amazing team that works on HAProxy, which we use to build our Rancher load balancing services. Please join our August online meetup to learn more about how we use Rancher in our internal CI process. We’ll also talk about some of the new features in Rancher, and our road map, including service discovery and load balancing. Please also consider joining our beta to get some help getting started with Rancher.

Alena Prokharchyk
Senior Manager, Engineering

Alena is a Principal Software Engineer at Rancher Labs who has been working on building infrastructure services, first for virtual machines and now for containers, with a main focus on Kubernetes. She enjoys helping others make sense of problems and explore solutions together. In her free time Alena enjoys rollerblading, reading books on totally random subjects and listening to other people’s stories.