- This release addresses two security vulnerabilities found in Rancher.
The first vulnerability was found and reported by Tyler Welton from Untamed Theory and applies to Rancher versions v2.0.0 - v2.2.3. It allows project owners to inject extra fluentd logging configuration, making it possible to read files or execute arbitrary commands inside the fluentd container. You can view the official CVE here: CVE-2019-12303.
The second vulnerability applies to Rancher versions v1.6.27 and earlier (Cattle and Kubernetes users) and v2.0.0 - v2.2.3. It affects the built-in node drivers that have a file path option, which allows the machine creator to read arbitrary files, including sensitive ones such as `/root/.kube/config`, from inside the Rancher server container. This can result in the machine creator gaining unauthorized access to the Rancher management plane. You can view the official CVE here: CVE-2019-12274.
As a result, the following versions are now latest and stable:
| Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
|---|---|---|---|---|
| Latest | v2.2.4 | `rancher/rancher:latest` | server-charts/latest | v2.2.4 |
| Stable | v2.2.4 | `rancher/rancher:stable` | server-charts/stable | v2.2.4 |
Please review our version documentation for more details on versioning and tagging conventions.
- The project monitoring feature that was temporarily disabled as of v2.2.2 is added back with stability fixes and improvements. If you are upgrading from v2.2.0 or v2.2.1 directly to v2.2.4 and you had project monitoring enabled prior to the upgrade, project monitoring needs to be re-enabled manually [#20308].
- Windows experimental support is no longer maintained in v2.2.x releases. Please use the [v2.3.0-alpha3+ release], where support for Windows Server Containers 1809/2019 was made available.
- As of v2.2.0, we’ve introduced the notion of a “system catalog” for managing micro-services that Rancher deploys for specific features such as Monitoring, Global DNS, and Alerts. As of v2.2.4, the default branch of the system catalog has changed to `release-v2.2`. When upgrading Rancher, this branch change is applied automatically, except for air-gapped setups, where the branch must be adjusted manually via Global Catalog Settings.
- The cluster monitoring feature received several fixes that greatly improve its performance and stability. If you previously enabled monitoring on a cluster, we advise upgrading it to the new version. You can upgrade cluster monitoring by viewing the Monitoring settings and selecting the new updated version.
- UPDATE: If you have a Rancher provisioned cluster that uses the OpenStack cloud provider with LoadBalancer set, the cluster objects have to be modified using kubectl prior to the upgrade.
Features and Enhancements
- Addressed a major performance issue where a large number of Kubernetes objects (pods, nodes, etc.) could cause a significant delay in loading the UI
- Added support for querying monitoring metrics using the ClusterIP instead of the DNS name, resulting in better performance
- Added support for the North Central US, South Central US, Korea South, and Korea Central regions in AKS clusters
- Added extra arguments support for the CloudFlare Global DNS provider
Major Bugs Fixed Since v2.2.3
The following is a list of the major bugs fixed. Review our milestone to see the full list.
- Fixed a vulnerability that allowed project owners to inject extra fluentd logging configuration, which made it possible to read files or execute arbitrary commands inside the fluentd container (CVE-2019-12303)
- Fixed a vulnerability affecting the built-in node drivers that have a file path option, which allowed the machine creator to read arbitrary files, including sensitive ones such as `/root/.kube/config`, from inside the Rancher server container (CVE-2019-12274)
- Fixed an issue where Rancher provisioned clusters running k8s version 1.13.x could break when one of the etcd nodes goes down. The situation is caused by a [Kubernetes upstream issue], and Rancher’s fix prevents it from happening
- Fixed an issue where adding a new control plane node could break a Rancher provisioned cluster because certificates were not re-generated properly on node addition
- Fixed an issue where cluster monitoring was constantly redeploying when `system-default-registry` was configured on the cluster
- Fixed several other issues affecting cluster monitoring stability
- Fixed an issue where catalog templates weren’t updating on the Rancher slave pod in HA mode
- Fixed an issue where manual etcd snapshots were deleted based on the retention count configuration setting, and where taking a manual snapshot impacted the recurring snapshot schedule
- Fixed an issue where, if recurring etcd snapshots were disabled on a cluster, users couldn’t restore from previously taken snapshots or take manual snapshots
- Fixed an issue where restoring from etcd snapshots failed on RancherOS hosts
- Fixed an issue where Global DNS couldn’t be launched by a regular user
- Fixed an issue where deactivating or removing the creator of a cluster would prevent the monitoring feature from deploying successfully [#18787]
- Fixed an issue where catalog app answers could get out of sync when switching between the UI and YAML forms
- Fixed an issue where OpenStack volume creation failed
- Fixed an issue where updates were broken for node templates that had cloud credentials set
- Fixed an issue where activation failed for the OTC node driver
Certificate expiry on Rancher provisioned clusters
In Rancher 2.0 and 2.1, the auto-generated certificates for Rancher provisioned clusters expire after one year. This means that if you created a Rancher provisioned cluster about one year ago, you need to rotate the certificates; otherwise the cluster will go into a bad state when the certificates expire. In Rancher 2.2.x, rotation can be performed from the Rancher UI; more details are here.
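One way to check how close a certificate is to the one-year mark is to read its `notAfter` field with `openssl`. The sketch below generates a throwaway self-signed certificate as demo input; on a real node you would point the second command at a cluster certificate instead (the exact certificate path varies by setup):

```shell
# Generate a throwaway self-signed certificate valid for 365 days (demo input only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the expiry date. Run this against your cluster certificate to see
# whether it is approaching expiry and needs rotation.
openssl x509 -in /tmp/demo.crt -noout -enddate
# prints a line of the form: notAfter=<date>
```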
Additional Steps Required for Air Gap Installations and Upgrades
In v2.2.0, we’ve introduced a “system catalog” for managing micro-services that Rancher deploys for certain features such as Global DNS, Alerts, and Monitoring. These additional steps are documented as part of air gap installation instructions.
Known Major Issues
- Certificate rotation for Rancher provisioned clusters will not work for clusters whose certificates expired on Rancher v2.0.13 or earlier on the 2.0.x release line, or v2.1.8 or earlier on the 2.1.x release line. The issue does not occur if the certificates expired on later versions of Rancher. Workaround steps can be found in the issue comments
- Etcd snapshots can time out when Minio is configured as a backup target [#19496]
- Catalog app revisions are not visible to regular users; as a result, a regular user is not able to roll back the app
- Global DNS entries are not properly updated when a node that was hosting an associated ingress becomes unavailable. A records for the unavailable hosts will remain on the ingress and in the DNS entry [#18932]
- UPDATE: If you have a Rancher provisioned cluster that uses the OpenStack cloud provider with LoadBalancer set, the upgrade to v2.2.4 fails. Mitigation steps can be found in the issue comments
- UPDATE: If you set the HTTP_PROXY and HTTPS_PROXY environment variables in your rancher-server container to allow it to reach the public Internet, you will not be able to provision nodes using Rancher’s node driver functionality
System Charts Branch - For air gap installs
- System charts branch: `release-v2.2`. This is the branch used to populate the catalog items required for tools such as monitoring, logging, alerting, and Global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach, and configure Rancher to use that repository.
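A minimal sketch of the mirroring step, assuming an internal Git server reachable from your air-gapped Rancher (the `git.internal.example` URL below is a hypothetical placeholder):

```shell
# On a machine with Internet access: take a full mirror of the system-charts
# repository, which includes the release-v2.2 branch Rancher expects.
git clone --mirror https://github.com/rancher/system-charts.git
cd system-charts.git

# Push the mirror to a Git server inside your network (hypothetical URL).
git push --mirror https://git.internal.example/rancher/system-charts.git
```

Afterwards, point Rancher at the internal repository URL and the `release-v2.2` branch via the Global Catalog Settings.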
Upgrades and Rollbacks
Due to the HA improvements introduced in the v2.1.0 release, the Rancher helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.
If you are currently using the RKE add-on install method, see Migrating from an RKE add-on install for details on how to move to using a helm chart.
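For reference, a minimal install sketch using the Helm CLI of this era against the stable chart repo; `rancher.example.com` is a placeholder hostname, and a real install typically needs additional values (TLS options, certificates) described in the HA install docs:

```shell
# Add the Rancher stable chart repository (matches the table above).
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

# Install the v2.2.4 chart into the cattle-system namespace.
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --version v2.2.4 \
  --set hostname=rancher.example.com
```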
For any upgrade from a version prior to v2.0.3, new pods will be created when scaling up workloads [#14136]. In order to update scheduling rules for workloads [#13527], a new field was added to all workloads on update, which causes pods in workloads from previous versions to be re-created.
Note: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made post upgrade will not be reflected. When rolling back using a Rancher single-node install, you must specify the exact version you want to change the Rancher version to, rather than using the default tag.
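As a sketch of that single-node rollback, where `<rancher-data>` stands in for your existing Rancher data container or volume and v2.2.3 is shown purely as an example of an exact prior version:

```shell
# Stop the currently running Rancher server container.
docker stop rancher-server

# Start Rancher again, pinned to the exact version you are rolling back to -
# never the default :latest tag. <rancher-data> is a placeholder for your
# Rancher data container/volume.
docker run -d --volumes-from <rancher-data> \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.2.3
```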
Note: If you had the helm stable catalog enabled in v2.0.0, we’ve updated the catalog to point directly at the Kubernetes helm repo instead of an internal repo. Please delete the custom catalog that is now showing up and re-enable the helm stable catalog. [#13582]