Since the Service Discovery feature was first introduced in May 2015, Rancher engineering has never stopped adding new functionality and improvements to it. Rancher Beta users and forum participants have been sharing their applications and architecture details to help us shape the product to cover more use cases. This article reviews some of those very useful features.
Through the lifecycle of your Service Discovery setup, you may need to restart a service after applying certain configuration changes. Usually, the number one requirement is to have no service outage while the restart is happening. The rolling restart feature makes this possible and is now available in Rancher Compose (UI support is coming soon). There are a couple of optional parameters that must be set for the restart action to initiate. Once these options are set, the service's containers will be restarted in batches of the configured size, waiting the configured interval after each batch completes; for example, a batch size of "1" with a "zero ms interval" restarts the containers one at a time, back to back.
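As a sketch of what invoking a rolling restart from the command line could look like (the flag and service names here are illustrative assumptions and may differ between rancher-compose versions; check `rancher-compose restart --help` for the exact options):

```shell
# Restart the "web" service one container at a time (batch size 1),
# waiting 0 ms between the completion of one batch and the start of the next.
# Flag names are assumptions; verify against your rancher-compose version.
rancher-compose restart --batch-size 1 --interval 0 web
```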
When a service is deployed to a particular port, it's important that the public ports defined in the service don't conflict with ports already in use on the host. Maintaining and tracking this information can be an unnecessary inconvenience. Random public port assignment becomes useful in this situation. Let's say you want to deploy an nginx service with scale=2 that internally listens on port 80. You know that you have at least three hosts in the system, but are not aware of which ports are already allocated. All you need to do is offload the public port selection to Rancher by omitting the public port when adding the service. Rancher will choose an available public port from the 49153-65535 range (configurable), and this port will be published on all three hosts where the service is deployed.
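In compose terms, omitting the public port means listing only the container port. A minimal sketch (the image and service name are illustrative, not from the article):

```yaml
# docker-compose.yml -- container port only, no host port specified,
# so Rancher picks a free public port from the 49153-65535 range.
nginx:
  image: nginx
  ports:
    - "80"

# rancher-compose.yml -- run two containers of the service.
# nginx:
#   scale: 2
```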
The Rancher Load Balancer service redistributes traffic across target services. Services can join and leave the load balancer at any time the user demands, and traffic is then forwarded to them based on the user's defined host name routing rules. However, we've always wanted to give the user even more flexibility in configuring the Load Balancer, as specific apps might require specific Load Balancer configuration tweaks. The Load Balancer service custom configuration feature enables this. Let's say we want to:
To do that, on the "Add Load Balancer" dialog, go to the "Custom haproxy.cfg" tab and set the following three parameters. Upon Load Balancer service startup, the custom values will be set in the Load Balancer config file. You can find more about which custom properties can be configured at http://cbonte.github.io/haproxy-dconv/configuration-1.5.html
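For illustration only, a fragment pasted into the "Custom haproxy.cfg" tab might look like the following (these particular directives and values are assumptions chosen from the haproxy 1.5 documentation linked above, not the three parameters the article refers to):

```
global
    maxconn 4096

defaults
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms
```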
By default, the service reconcile logic requires all service containers to be in the "running" state. If any container dies, right after start or some time later, it will be recreated on the spot. For most applications that is the desired behavior, but not for all of them. For example, short-lived containers are meant to start, finish their task, and exit without coming back later. Similarly, data containers just have to be created; staying in a running state is not a requirement for them. Rancher enables this sort of behavior with the "Start once" option. To enable it, on the "Add Service" dialog expand "Advanced Options" and on the very first tab pick "Start once." Containers that were created and then stopped will never go back to the running state unless the user explicitly chooses to start them via Container -> Start.
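The same option can be expressed in compose files via a Rancher label. A minimal sketch, assuming the `io.rancher.container.start_once` label used by Rancher at the time (the image and command are illustrative):

```yaml
# docker-compose.yml -- a one-shot container that runs its task and exits;
# the label tells Rancher not to recreate it after it stops.
init-data:
  image: busybox
  command: sh -c "echo 'seeding data' && exit 0"
  labels:
    io.rancher.container.start_once: 'true'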
No doubt, the health check configuration is defined by application parameters and requirements (port, health check endpoint, number of retries until the application is considered alive). But what about recovering from a failed health check? Should it just be as simple as recreating the failed container? Not necessarily; it can also be driven by your application's needs. Rancher health check strategies enable this functionality. To try this feature, go to the Applications view, click "Add Service" and expand "Advanced Options" in the dialog that opens. Navigate to the "Health Check" tab and pick either the "TCP..." or "HTTP..." setting. The "When unhealthy" setting provides you with three choices:
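The same settings can be sketched in a rancher-compose file. The key names below follow the rancher-compose `health_check` format; the specific values, endpoint, and service name are illustrative assumptions:

```yaml
# rancher-compose.yml -- HTTP health check sketch; values are examples only.
web:
  scale: 2
  health_check:
    port: 80
    interval: 2000              # ms between checks
    unhealthy_threshold: 3      # failed checks before the container is unhealthy
    healthy_threshold: 2        # passing checks before it is healthy again
    response_timeout: 2000      # ms to wait for a response
    request_line: GET /healthcheck HTTP/1.0   # omit for a plain TCP check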
Today, three strategies are available, but we are planning to add more in the future. Please share your opinions on the Rancher forums about which strategies your application could use.
If you're interested in finding out more about Rancher, please download the software from GitHub, join the Beta to get additional support, and visit our forums to view topics and ask questions. To learn more about using Rancher, please join our next online meetup.

Alena Prokharchyk

If you have any questions or feedback, please contact me on Twitter: @lemonjet https://github.com/alena1108