Software developers are typically focused on a single application, application stack or workload that they need to run on a specific infrastructure. In production, however, a diverse set of applications built on a variety of technologies (e.g. Java, LAMP, etc.) must be deployed on heterogeneous infrastructure running on-premises, in the cloud or both. This gives rise to several challenges with running containerized applications in production:
According to the June 2016 Cloud Foundry “Hope Versus Reality: Containers in 2016” report, 45 percent of survey respondents said their biggest deployment worry is that Docker is too complex to integrate into their environments. A big reason for this is the density and volatility of containerized environments. Because operating systems and kernels do not need to be loaded for each container, containerized environments achieve better workload density on a given amount of infrastructure than traditional virtualized environments. As a result, the total volume of components that need to be created, monitored and destroyed across the production environment is dramatically larger, significantly increasing the complexity of managing container-based environments.

Not only are there more things to manage, they are also changing faster than ever before. A Datadog survey shows that, while traditional and cloud-based VMs have an average lifespan of almost 15 days, Docker containers have an average lifespan of 2.5 days. The result is an order-of-magnitude increase in the number of things that need to be individually managed and monitored.

The complexity of these dense, fast-changing environments is further compounded by the complexity of the architecture. Containers are typically deployed across highly distributed environments, whether on a single cluster or spanning multiple clusters, and the makeup of these clusters is highly disparate: they may be located on-premises, in the cloud or some combination of the two. While 60 percent of containers run on Amazon Web Services (AWS), 40 percent continue to run on-premises.

Organizations therefore need an easier way to orchestrate containers and manage the underlying infrastructure services for multi-container, multi-host applications. This is particularly important for applications with a microservices architecture, such as a web application that consists of a container cluster running web servers to host multiple instances of the frontend (for failover and load balancing), together with multiple backend services, each running in a separate container.
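To make the orchestration challenge concrete, here is a minimal sketch of how such a frontend-plus-backend application might be described with Docker Compose. The service names, images and ports are illustrative assumptions, not details from the original example.

```yaml
# docker-compose.yml -- illustrative sketch only; service names, images and
# versions are assumptions chosen for the example.
version: '2'
services:
  frontend:
    image: nginx:1.11            # web server hosting the frontend
    ports:
      - "80"                     # publish on a dynamic host port so several
                                 # instances can run behind a load balancer
    depends_on:
      - api
  api:
    image: example/api:latest    # a backend service in its own container (hypothetical image)
    environment:
      - DB_HOST=db
  db:
    image: postgres:9.5          # a second backend service (the data tier)
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Scaling the frontend for failover and load balancing is then a one-line operation on a single host (e.g. `docker-compose scale frontend=3`), but distributing those instances across hosts, load balancing between them and keeping them healthy is exactly the orchestration work described above.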
The Docker ecosystem is volatile and complex. Over the past few years a flurry of third-party tools and services has emerged to help developers deploy, configure and manage their containerized workflows as they move from development to production. Because these tools and services are based on open source technologies, the rate at which they change and the volume of new documentation make it very challenging to put together a stable technology stack for running containers in production. This also makes it hard for companies to build and maintain the engineering skills needed to take advantage of the rich ecosystem. According to RightScale’s fifth annual State of the Cloud Survey, among companies not yet using containers, lack of experience was by far the top challenge for container adoption, cited by 39 percent of respondents.
In simplifying container management, it’s important not to lose the flexibility developers require to innovate. They need to be able to pick and choose the tools and frameworks they want to use, when they need them. RedMonk refers to this as the “era of permissionless development”. When asked to solve a problem, most developers no longer ask what tools they are allowed to use; they look for the best tool for the job. They also prefer the most recent releases, which are not necessarily the most stable versions, so they can quickly take advantage of new capabilities. At the same time, developers are increasingly required to take responsibility for ensuring that the application logic they create runs in production and for fixing it quickly if it does not, which means they also need to be able to roll back a deployment if they run into issues. Developers want the freedom of root access and the ability to install any open source software they like. This is why they typically avoid traditional platform-as-a-service (PaaS) solutions: PaaS abstracts away containers so developers can focus on coding instead of managing them, but these platforms are proprietary and not as versatile as a home-grown open source stack, and they constrain developers’ ability to innovate by locking them into one vendor or infrastructure provider.
One of the primary benefits of containers is that they are portable: an application and all its dependencies can be bundled into a single container that is independent of the host’s Linux kernel version, platform distribution or deployment model. This container can be transferred to another host running Docker and executed without compatibility issues. Infrastructure services, however, vary dramatically between clouds and data centers, making real application portability almost impossible without architecting around those differences in the application itself. Using containers to make applications portable across diverse infrastructure therefore requires more than just a standardized unit for shipping code. It requires infrastructure services, which include:
Once these infrastructure services are deployed, organizations face the further challenge of monitoring them, and DevOps teams need to be able to troubleshoot issues easily. Monitoring and logging infrastructure performance, and alerting DevOps teams to any issues, are therefore essential capabilities of any container management solution.
There are security and compliance concerns related to deploying containers that must be addressed before larger enterprises will use them in production, particularly enterprises in regulated industries such as finance and healthcare. Companies such as Docker have continued to push fixes and to build new software and integrations across the toolchain to address these concerns, but there is still a lack of parity between application container security and what enterprises are used to with virtual machines. Requirements include enforcing organizational policy and securing access to containers and to cluster administration, including managing certificates for transport layer security (TLS). Users and groups need to be able to share or deny access to resources and environments (e.g. development or production) via role-based access control (RBAC), and user authentication requires integration with Active Directory, LDAP and/or GitHub.
Containers make software development easier, enabling you to write code faster and run it better. Running containers in production, however, can be hard: there is a wide variety of technologies to integrate and manage, and new tools emerge every day. Rancher makes it easy to manage all aspects of running containers, so you no longer need to develop the technical skills required to integrate a complex set of open source technologies.
Rancher includes everything you need to make Docker work in production on any infrastructure. A portable layer of infrastructure services is easily configured and integrated, an easy-to-use interface gives you a rich set of orchestration features and lets you deploy your containers with a single click, and a robust application catalog makes it simple to package configuration files as templates and share them across your organization. With over 20 million downloads and enterprise-class support, Rancher has quickly become the open source platform of choice for running containers in production.
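As a rough illustration of the application catalog, the sketch below shows how a compose-based template might be laid out in a Rancher 1.x catalog repository. The directory layout, file names and fields are assumptions based on the Rancher catalog documentation and should be verified against the docs for your version; the WordPress example itself is hypothetical.

```yaml
# Assumed catalog layout (one template, one version):
#   templates/wordpress/config.yml            # template metadata (name, description, category)
#   templates/wordpress/0/docker-compose.yml  # the services that make up the application
#   templates/wordpress/0/rancher-compose.yml # catalog metadata and user-facing questions
#
# rancher-compose.yml -- minimal sketch of the catalog section:
.catalog:
  name: "WordPress"                # display name shown in the catalog UI
  version: "4.4-0"                 # template version (illustrative)
  description: "Example blog stack packaged as a reusable template"
  questions:
    - variable: public_port        # the answer is substituted into docker-compose.yml
      label: "Public Port"
      type: "int"
      default: 80
```

Anyone in the organization can then launch the same stack from the catalog with their own answers, instead of hand-editing compose files.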
Just follow these steps:
Download - Rancher is deployed as a Docker container and is easy to run on your cluster or even your laptop (see the quick-start sketch after this list).
Get started - Deploying Rancher takes less than 5 minutes if you follow the steps in the quick start guide.
Use the docs - Rancher is incredibly easy to use. However, there’s a wealth of information in the technical documents in case you need it.
Take advantage of our awesome community of users - The forums are the best place to hear about the latest product releases and to interact with your peers and Rancher engineers.
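As a concrete starting point for the Download and Get started steps above, the commands below sketch the single-container deployment from the Rancher 1.x quick start guide; the image name and published port are assumed from that guide, and the exact tag and options for your version should be confirmed there.

```sh
# Minimal sketch of launching the Rancher server on a host that already runs Docker.
# Pin a specific rancher/server tag for production rather than relying on the default.
sudo docker run -d \
  --restart=unless-stopped \
  --name rancher-server \
  -p 8080:8080 \
  rancher/server

# When the container is up, the management UI is served on port 8080 of the host:
#   http://<host-ip>:8080
```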
Resources: