- This release addresses a security vulnerability in Rancher: the default admin account created when Rancher is first launched is recreated on subsequent restarts of Rancher, even if a Rancher administrator explicitly deleted it. This poses a security risk because the account is recreated with Rancher’s default username and password. You can view the official CVE here: CVE-2019-11202.
- In Rancher 2.0 and 2.1, the auto-generated certificates for Rancher-provisioned clusters expire after one year. This means that if you created a Rancher-provisioned cluster about a year ago, you need to rotate its certificates; otherwise the cluster will enter a bad state when the certificates expire. Rancher v2.2.x provides UI support for certificate rotation. In Rancher 2.1.9, the rotation can be performed through the Rancher API; more details are here.
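As a quick way to gauge whether a cluster certificate is approaching its one-year expiry, you can inspect it with openssl. This is only a sketch: the file paths are placeholders, and the throwaway certificate is generated purely for demonstration; in practice you would point openssl at a real certificate from one of your cluster nodes.

```shell
# Placeholder path; in practice, use a certificate from a cluster node.
CERT=/tmp/example-cert.pem

# For demonstration only: generate a throwaway self-signed cert valid for
# 365 days, mirroring the 1-year lifetime of the auto-generated certs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/example-key.pem \
  -out "$CERT" -days 365 -subj "/CN=example"

# Print the certificate's expiry date.
openssl x509 -enddate -noout -in "$CERT"

# Exit 0 only if the cert is still valid 30 days (2592000 s) from now.
openssl x509 -checkend 2592000 -noout -in "$CERT" \
  && echo "cert ok" || echo "rotate soon"
```

If the check fails, rotate the cluster's certificates before they expire rather than after, since an expired certificate leaves the cluster in a bad state.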
The rancher/rancher:v2.1.9 image is available in the server-chart/stable Rancher Helm repos.
Known Major Issues
- Clusters created through Rancher can sometimes get stuck in provisioning [#15970] [#15969] [#15695]
- The upgrade of the Rancher node-agent daemonset can sometimes get stuck due to a pod removal failure on the Kubernetes side [#16722]
Major Bug Fixes since v2.1.8
- Fixed an issue where the self-signed certificate used for communication with a standalone Rancher server could expire.
- Fixed an issue where the Elasticsearch password was stored as clear text in the logs.
- Fixed an issue where disk space was reported as low in the UI even when there was no disk pressure.
NOTE - Image Name Changes: Please note that as of v2.0.0, our images will be rancher/rancher and rancher/rancher-agent. If you are using v1.6, please continue to use rancher/server and rancher/agent.
Upgrades and Rollbacks
Due to the HA improvements introduced in the v2.1.0 release, the Rancher Helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher Helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.
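A minimal sketch of the Helm-based install, assuming a cluster with Helm 2 and Tiller already set up; the release name, namespace, and hostname shown here are illustrative, and the full set of options is in the HA install documentation:

```shell
# Add the Rancher stable chart repo and refresh the local index.
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Install Rancher into the cattle-system namespace (Helm 2 syntax).
# rancher.example.com is a placeholder; substitute your own hostname.
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```

Upgrades then go through `helm upgrade` against the same release, which is what makes the chart the single supported install and upgrade path.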
If you are currently using the RKE add-on install method, see Migrating from a RKE add-on install for details on how to move to the Helm chart.
When upgrading from any version prior to v2.0.3, new pods will be created when scaling up workloads [#14136]. To support updating scheduling rules for workloads [#13527], a new field was added to all workloads on update, which causes any pods in workloads from previous versions to be re-created.
Note: When rolling back, you are expected to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be preserved. When rolling back a Rancher single-node install, you must specify the exact version you want to roll back to, rather than relying on the default.
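For a single-node install, specifying the exact version means using an explicit image tag in the docker run command instead of an unqualified rancher/rancher. A sketch, with illustrative ports and tag; use the version you are actually rolling back to:

```shell
# Pin the exact Rancher version with an explicit image tag;
# do not rely on the default (floating) tag.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.1.9
```

Pinning the tag ensures the container comes back at the version that matches your pre-upgrade data, rather than whatever the untagged image currently points to.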
Note: If you had the Helm stable catalog enabled in v2.0.0, we’ve updated the catalog to point directly to the Kubernetes Helm repo instead of an internal repo. Please delete the custom catalog that now shows up and re-enable the Helm stable catalog. [#13582]