Alena is a principal software engineer at Rancher Labs. Rancher has supported Kubernetes as one of its orchestration framework options since March 2016. We've incorporated Kubernetes as an essential element within Rancher, integrated with all of the core Rancher capabilities so that both platforms benefit. Writing the Rancher ingress controller to back the Kubernetes Ingress feature is a good example of that. In this article, I will give a high-level design overview of the feature and describe, from a developer's point of view, what steps need to be taken to implement it.
When it comes to orchestrating an app that has to perform under heavy load, the Load Balancer (LB) is a key feature to consider. Three LB capabilities are of significant interest for the majority of such apps: L7 (host- and path-based) routing, SSL offload, and horizontal scalability.
The L7 LB functionality enables routing multiple domains (or subdomains) to different hosts or clusters. The Kubernetes Ingress resource lets you define LB routing rules, both host-based and path-based, in a clear and user-friendly way. Most importantly, a Kubernetes Ingress can be backed by any Load Balancer of your choice. The question that remains to be answered is: how do you bridge the Ingress resource and your LB? You have to write something called an Ingress controller, an app that watches the Kubernetes API for Ingress events, translates each Ingress's rules into your LB's configuration, and writes the LB's address back into the Ingress status.
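As a sketch, a host-based and a path-based Ingress might look like the manifests below (the `extensions/v1beta1` API version matches Kubernetes of that era; hostnames and service names are illustrative):

```yaml
# Host-based routing: requests are routed by the HTTP Host header.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: host-based-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: svc-foo
          servicePort: 80
  - host: baz.bar.com
    http:
      paths:
      - backend:
          serviceName: svc-baz
          servicePort: 80
---
# Path-based routing: requests to one host are routed by URL path.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: svc-foo
          servicePort: 80
      - path: /bar
        backend:
          serviceName: svc-bar
          servicePort: 80
```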
Assuming the Ingress controller is up and running, and you are a Kubernetes user who wants to balance traffic between apps, all you need to do is deploy the services you want traffic balanced across, then create an Ingress resource with your routing rules; the controller takes care of programming the LB and publishing its address on the Ingress.
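Under those assumptions, the user-side workflow might look like this (names are hypothetical; apply each with `kubectl create -f`):

```yaml
# 1. Deploy the backend service the Ingress will route to.
apiVersion: v1
kind: Service
metadata:
  name: svc-foo
spec:
  selector:
    app: foo
  ports:
  - port: 80
---
# 2. Create the Ingress with your routing rules; the running Ingress
#    controller picks it up and configures the load balancer for you.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: svc-foo
          servicePort: 80
```

Once the controller has done its work, `kubectl get ingress` shows the assigned LB address in the ADDRESS column.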
The Load Balancer service has always been a key feature in Rancher, and we've continued to invest in its growth. It supports host name routing and SSL offload, and it can be horizontally scaled. So choosing the Rancher LB as a provider for Kubernetes Ingress doesn't require any changes on the Rancher side; all that needs to be done is to write the Ingress controller. The Rancher Ingress controller is a containerized microservice written in golang. It is deployed as part of the Kubernetes system stack in Rancher, which means it has access to both the Rancher and Kubernetes API servers. At a high level, the controller watches the Kubernetes API for Ingress create/update/remove events, translates each Ingress into a generic Load Balancer config, and hands that config to the Rancher LB provider, which creates or updates the corresponding Rancher Load Balancer.
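The controller/provider split described above can be sketched in Go. This is a minimal, self-contained illustration, not the actual Rancher code: the type and interface names are ours, and a fake provider stands in for the Rancher LB.

```go
package main

import "fmt"

// IngressRule is a simplified view of a Kubernetes Ingress rule
// (host plus path mapped to a backend service).
type IngressRule struct {
	Host, Path, Service string
	Port                int
}

// LBConfig is the generic load balancer configuration the controller
// hands to whichever provider is plugged in.
type LBConfig struct {
	Frontends []IngressRule
}

// LBProvider is implemented by a concrete balancer (e.g. the Rancher LB).
type LBProvider interface {
	ApplyConfig(cfg LBConfig) error
	// GetPublicEndpoints returns the IPs the LB is reachable at; the
	// controller writes these back into the Ingress Address field.
	GetPublicEndpoints() []string
}

// syncIngress converts Ingress rules into the generic LB config, applies
// it via the provider, and returns the public endpoints to publish on
// the Ingress status.
func syncIngress(rules []IngressRule, p LBProvider) ([]string, error) {
	if err := p.ApplyConfig(LBConfig{Frontends: rules}); err != nil {
		return nil, err
	}
	return p.GetPublicEndpoints(), nil
}

// fakeProvider stands in for the Rancher LB provider in this sketch.
type fakeProvider struct{ applied LBConfig }

func (f *fakeProvider) ApplyConfig(cfg LBConfig) error { f.applied = cfg; return nil }
func (f *fakeProvider) GetPublicEndpoints() []string   { return []string{"10.0.0.5"} }

func main() {
	rules := []IngressRule{{Host: "foo.bar.com", Path: "/", Service: "svc-foo", Port: 80}}
	eps, _ := syncIngress(rules, &fakeProvider{})
	fmt.Println(eps) // the address(es) written back to the Ingress
}
```

The point of the interface is that the sync logic never sees Rancher-specific details; swapping in a different provider is a matter of implementing two methods.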
The provider constantly monitors the Rancher Load Balancer; if the LB instance dies and gets recreated on another host by Rancher HA, the Kubernetes Ingress endpoint is updated with the new IP address.
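That monitoring loop boils down to a small reconciliation step. A hedged sketch (function and variable names are ours, not Rancher's):

```go
package main

import "fmt"

// reconcileAddress compares the address currently published on the
// Kubernetes Ingress with the host IP the Rancher LB instance is now
// running on. If Rancher HA rescheduled the LB to a new host, the
// published address must follow it.
func reconcileAddress(published, currentLBHostIP string) (addr string, changed bool) {
	if published == currentLBHostIP {
		return published, false // LB is where we think it is; nothing to do
	}
	return currentLBHostIP, true // Ingress status needs an update
}

func main() {
	// The LB container moved from 10.0.0.5 to 10.0.0.9 (illustrative IPs).
	addr, changed := reconcileAddress("10.0.0.5", "10.0.0.9")
	fmt.Println(addr, changed)
}
```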
I've used these terms quite a lot in the previous paragraphs, but I need to elaborate on them a bit more. What is an LB public endpoint? It's an IP address by which your LB can be accessed. In the context of this blog, it is the IP address of the host where the Rancher LB container is deployed; it can also be multiple IP addresses if your LB is scaled across multiple hosts. Public endpoints get translated into the Kubernetes Ingress Address field. But what acts as the real backend Kubernetes service endpoint, the backend IP address the LB is supposed to forward traffic to? Is it the address of the host where the service is deployed, or the private IP address of the service's container? The answer is that it depends on the Load Balancer implementation. If your Ingress controller uses an external LB provider (as, for example, the GCE ingress-controller does), it is the IP address of the host, and your backend Kubernetes service has to have "type=NodePort". But if your LB provider is deployed in the Kubernetes cluster (like the Nginx ingress-controller or the Rancher ingress-controller), it can be either the service's clusterIP or the service's pod IPs. The Rancher Load Balancer balances between the Kubernetes service's pod IPs.
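Concretely, the public endpoint surfaces in the Ingress status. A sketch of what that looks like (the IP is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.5   # host IP where the Rancher LB container runs
```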
The controller has no knowledge of the provider implementation. All communication happens through the Load Balancer config, which is quite generic. I chose this model because it makes it easy to reuse the code base: a different LB provider, or a different controller, can be plugged in without touching the rest of the code.
The provider and controller are defined as arguments passed to the application entry point. Kubernetes is the default LB controller, and the Rancher Load Balancer is the default LB provider.
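Selecting the pluggable pieces at the entry point might look like the sketch below. The flag names and registry maps are illustrative, not the controller's actual flags:

```go
package main

import (
	"flag"
	"fmt"
)

// Registries of pluggable pieces. In a real controller each provider and
// controller package would register itself here; the entries are
// illustrative.
var (
	providers   = map[string]string{"rancher": "Rancher LB provider"}
	controllers = map[string]string{"kubernetes": "Kubernetes Ingress controller"}
)

func main() {
	// Defaults mirror the article: Kubernetes controller, Rancher provider.
	provider := flag.String("lb-provider", "rancher", "load balancer provider to use")
	controller := flag.String("lb-controller", "kubernetes", "LB controller to use")
	flag.Parse()

	p, ok := providers[*provider]
	if !ok {
		panic("unknown provider: " + *provider)
	}
	c, ok := controllers[*controller]
	if !ok {
		panic("unknown controller: " + *controller)
	}
	fmt.Printf("starting %s with %s\n", c, p)
}
```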
We've just released the first version of the Rancher ingress-controller, and we are planning to continue actively contributing to it, with more features to come in the next releases.
- Kubernetes Ingress doc - here you can read more on the Kubernetes Ingress resource.
- Kubernetes ingress controllers contrib GitHub repo - very helpful for developers who want to contribute to Kubernetes and learn best practices and common solutions.
- Rancher ingress controller GitHub repo.
- Rancher Ingress Controller documentation.
Alena Prokharchyk. If you have any questions or feedback, please contact me on Twitter: @lemonjet, https://github.com/alena1108