Kubernetes 1.12 will be released this week, on Thursday, September 27, 2018. Version 1.12 ships just three months after Kubernetes 1.11 and is the third release this year. The short cycle is in line with the quarterly release cadence the project has followed since its GA in 2015.
Kubernetes releases in 2018

| Kubernetes Release | Date |
| --- | --- |
| 1.10 | March 26, 2018 |
| 1.11 | June 27, 2018 |
| 1.12 | September 27, 2018 |
Whether you are a developer using Kubernetes or an admin operating clusters, it’s worth getting familiar with the new features and fixes you can expect in Kubernetes 1.12.
A total of 38 features are included in this milestone. Let’s have a look at some of the highlights.
Kubelet certificate rotation was promoted to beta status. This functionality automates the renewal of the kubelet’s key and certificate as the current certificate approaches expiration. Until the official 1.12 docs have been published, you can read the beta documentation on this feature here.
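With the feature enabled, rotation is driven by the kubelet configuration. A minimal sketch of the relevant fields, assuming the `kubelet.config.k8s.io/v1beta1` configuration API (field values are illustrative):

```yaml
# Fragment of a KubeletConfiguration file passed via --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Automatically request a new client certificate as expiration approaches
rotateCertificates: true
# Request the serving certificate for the kubelet API server via the
# certificates API (approval may require a signer/controller in your cluster)
serverTLSBootstrap: true
```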
Two formerly beta network policy features have now reached stable status: One of them is the ipBlock selector, which allows specifying ingress/egress rules based on network addresses in CIDR notation. The second one adds support for filtering the traffic leaving pods by specifying egress rules. The example below illustrates the use of both features:
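A minimal sketch of such a policy (pod labels, CIDR ranges, and port are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cidr   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend             # hypothetical pod label
  policyTypes:
  - Egress
  egress:
  - to:
    # ipBlock: allow traffic to a CIDR range, with an exception
    - ipBlock:
        cidr: 10.0.0.0/24
        except:
        - 10.0.0.5/32
    ports:
    - protocol: TCP
      port: 5432
```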
As previously beta features, both egress and ipBlock are already described in the official network policies documentation.
Mount namespace propagation, i.e. the ability to mount a volume rshared so that any mounts from inside the container are reflected in the root (= host) mount namespace, has been promoted to stable. You can read more about this feature in the Kubernetes volumes docs.
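In the pod spec, propagation is controlled per volume mount. A sketch, assuming a privileged container mounting a host path (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-demo   # hypothetical name
spec:
  containers:
  - name: mounter
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true           # Bidirectional propagation requires privileged
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt
      # Bidirectional (rshared): mounts created inside the container
      # propagate back to the host mount namespace, and vice versa
      mountPropagation: Bidirectional
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt
```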
This feature, introduced in 1.8 as an early alpha, has been promoted to beta. Enabling its feature flag causes the node controller to create taints based on node conditions and the scheduler to filter nodes based on taints instead of conditions. The official documentation is available here.
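With the flag enabled, the node controller adds taints such as `node.kubernetes.io/memory-pressure` for the corresponding conditions. A pod that should still land on such nodes can tolerate the taint; a hedged PodSpec fragment:

```yaml
# Fragment of a PodSpec: tolerate the memory-pressure condition taint
# so the scheduler does not filter out affected nodes for this pod
tolerations:
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
```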
While support for custom metrics in HPA continues to be in beta status, version 1.12 adds various enhancements, such as the ability to select metrics based on the labels available in your monitoring pipeline. If you are interested in autoscaling pods based on application-level metrics provided by monitoring systems such as Prometheus, Sysdig or Datadog, I recommend checking out the design proposal for external metrics in HPA.
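Label-based metric selection can be sketched with the `autoscaling/v2beta2` API introduced in 1.12; the metric name, label, and target value below are assumptions, not part of any standard:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready   # hypothetical metric from the monitoring pipeline
        selector:
          matchLabels:
            queue: worker_tasks      # label exposed by the metrics adapter
      target:
        type: AverageValue
        averageValue: "30"
```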
If you are new to Kubernetes monitoring, check out our Introduction to Kubernetes Monitoring for a good primer; it will help you get the most out of the rest of this article.
RuntimeClass is a new cluster-scoped resource “that surfaces container runtime properties to the control plane”. In other words: This early alpha feature will enable users to select and configure (per pod) a specific container runtime (such as Docker, rkt or Virtlet) by providing the runtimeClassName field in the PodSpec. You can read more about it in these docs.
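A sketch of how this might look, assuming the `node.k8s.io/v1alpha1` API and a runtime handler named `kata` configured in the node’s CRI runtime (both names are illustrative):

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed              # hypothetical RuntimeClass name
spec:
  runtimeHandler: kata         # handler the CRI runtime maps to an actual runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed  # select the runtime per pod
  containers:
  - name: app
    image: nginx
```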
Resource quotas allow administrators to limit the resource consumption in namespaces. This is especially practical in scenarios where the available compute and storage resources in a cluster are shared by several tenants (users, teams). The beta feature Resource quota by priority allows admins to fine-tune resource allocation within the namespace by scoping quotas based on the PriorityClass of pods. You can find more details here.
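A sketch of a priority-scoped quota (namespace, limits, and the PriorityClass name `high` are assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
  namespace: team-a            # hypothetical namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
  # Apply this quota only to pods with the matching PriorityClass
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high"]         # hypothetical PriorityClass name
```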
One of the most exciting new 1.12 features for storage is the early alpha implementation of persistent volume snapshots. This feature allows users to create and restore point-in-time snapshots of volumes backed by any CSI storage provider. As part of this implementation three new API resources have been added:
VolumeSnapshotClass defines how snapshots for existing volumes are provisioned, VolumeSnapshotContent represents existing snapshots, and VolumeSnapshot allows users to request a new snapshot of a persistent volume like so:
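A sketch of such a request, assuming the `snapshot.storage.k8s.io/v1alpha1` API and an existing PVC and VolumeSnapshotClass (all names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  snapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
  source:
    kind: PersistentVolumeClaim
    name: data-pvc                   # existing PVC to snapshot
```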
For the nitty-gritty details take a look at the 1.12 documentation branch on GitHub.
Another storage related feature, topology aware dynamic provisioning, was introduced in v1.11 and has been promoted to beta in 1.12. It addresses some limitations with dynamic provisioning of volumes in clusters spread across multiple zones where single-zone storage backends are not globally accessible from all nodes.
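The key knob is the StorageClass `volumeBindingMode`, which delays volume binding until a pod is scheduled so the volume can be provisioned in the right zone. A sketch (provisioner, zones, and names are assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard       # hypothetical name
provisioner: kubernetes.io/gce-pd     # example in-tree provisioner
parameters:
  type: pd-standard
# Delay binding until a consuming pod is scheduled, so the volume
# is provisioned in a zone accessible from the pod's node
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values: ["us-central1-a", "us-central1-b"]   # hypothetical zones
```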
These two improvements regarding running Kubernetes in Azure are shipping in 1.12:
The cluster autoscaler support for Azure was promoted to stable. This will allow for automatic scaling of the number of Azure nodes in Kubernetes clusters based on global resource usage.
Kubernetes v1.12 adds alpha support for Azure availability zones (AZ). Nodes in an availability zone will be added with the label failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and topology-aware provisioning has been added for the Azure managed disks storage class.
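For AZ-aware disk provisioning, the managed-disk StorageClass can be combined with delayed binding; a sketch under the assumption that the cluster runs the Azure cloud provider (names and SKU are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-disk-zoned        # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Standard_LRS
# Provision the disk only once a pod is scheduled, in that pod's zone
volumeBindingMode: WaitForFirstConsumer
```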
Kubernetes 1.12 contains many bug fixes and improvements to internal components, clearly focusing on stabilising the core, maturing existing beta features and improving release velocity by adding more automated tests to the project’s CI pipeline. A noteworthy example of the latter is the addition of CI e2e conformance tests for the arm, arm64, ppc64, s390x and Windows platforms to the project’s test harness.
For a full list of changes in 1.12 see the release notes.
Rancher will support Kubernetes 1.12 on hosted clusters as soon as it becomes available on the particular provider. For RKE-provisioned clusters, it will be supported starting with Rancher 2.2.