GitOps provides a single source of truth for development and operations of applications in your Kubernetes platform and works to ensure application stability and availability.
Rancher Labs CEO and Co-founder Sheng Liang talks with Swapnil Bhartiya of TFiR about where the outlook for Kubernetes, innovation at Rancher and consolidation in the Kubernetes space.
Rancher allows enterprises to adopt container and Kubernetes-related technologies incrementally. Discover how Rancher improves your container orchestration experience when running containers in Azure Kubernetes Service (AKS).
Citrix and Rancher are partnering to deliver Citrix's cloud-native stack on Rancher, the complete enterprise computing platform for running Kubernetes clusters on-premises, in the cloud or at the edge. Citrix Cloud Native Stack integrates with Rancher as pre-built, reusable application stack templates. These templates stitch together all components of the Citrix Cloud Native Stack (Citrix Ingress Controller, Citrix Node Controller, Citrix Observability Exporter, IPAM). You can modify these templates and deploy them as running application stacks via the Rancher admin console.
With the rapid development of generative AI and other frontier technologies, a new wave of technological change is quietly driving digital innovation. This change opens enormous possibilities for enterprises, helping them tackle IT challenges efficiently, raise productivity and create unprecedented business value. On the other side of the wave, however, "digital trust" has become an indispensable standard: vendors are striving to underpin enterprise digital innovation with rigorous security practices and strong data protection, so that businesses can develop steadily in a challenging digital era. As a global leader in innovative, reliable and secure enterprise open source solutions, SUSE continues to advance its technology in security, stability, reliability and interoperability to help enterprises meet ever-changing market demands.

SUSECON Shenzhen 2023, SUSE's annual digital innovation summit, recently came to a successful close. The summit clearly showed SUSE's explorations over the past year: together with partners, SUSE has made multiple forays into edge computing; and as the generative AI wave has intensified, bringing vast opportunities to the whole technology market, SUSE has keenly grasped this emerging technology's potential and invested actively in related research and development.

SUSE Greater China President Chen Yiwei delivers the keynote

Edge computing is everywhere, and it demands more from infrastructure

With the rapid development of 5G, IoT and cloud computing, edge computing has been widely adopted. As the number of IoT devices keeps growing, so does the scale of data generated and transmitted; the traditional cloud computing model can no longer meet requirements for low latency, high bandwidth and high reliability, so allocating compute tasks to devices at the network edge has become an effective answer.

Compared with cloud computing, edge computing is deployed close to the data and can be understood as cloud computing pushed downward. By computing and processing data near the device or endpoint, it greatly improves processing efficiency and real-time responsiveness while reducing the load on central servers and the network. Edge and cloud computing also need to be used together: in big data analytics, for example, the edge can process data on local devices while the cloud stores and analyzes large volumes of data. This also means edge computing places higher demands on infrastructure. Data must be processed and stored at the network edge across large numbers of small data centers and devices, making device scale, energy consumption, operations management and security key considerations; and because edge devices have more limited compute and storage than central servers, the data volumes and computational complexity they can handle are constrained, and energy and hardware efficiency remain challenges.

To overcome these challenges and realize the advantages of edge computing, SUSE partnered with Shunyou Zhihui to jointly launch the "Dingshun Peerless" real-time business collaboration platform, widely applicable in industrial manufacturing, logistics and smart cities. This is an enterprise-grade cloud-edge-endpoint collaboration platform integrating hardware and software, providing full lifecycle support from cloud through near-edge to edge, to enable real-time sensing, processing, analysis and decision-making over the massive data of business systems such as IoT endpoint devices.

Notably, the solution integrates SUSE's distributed infrastructure products, including SUSE Linux Enterprise Server, SUSE Manager, Rancher and NeuVector, providing a stable foundation for the cloud platform; SUSE Linux Enterprise Micro, Rancher, K3s and Longhorn likewise provide a highly available runtime foundation for the solution's edge intelligent gateways and near-edge all-in-one appliances.

SUSE is currently exploring further edge possibilities with industry leaders such as Lenovo and Zhongke Yungu. SUSE and Lenovo will bring together their respective strengths in technology, products, markets and ecosystems, with edge hardware, operating systems and software-defined infrastructure as the core digital foundation, to jointly build edge-native, full-stack enterprise solutions. Their edge computing cooperation has three phases:

Phase 1: Based on SUSE's existing edge computing portfolio, Lenovo provides edge hardware compute to jointly build edge computing nodes. Adaptation of the small gateway boxes has largely passed testing; server adaptation is in progress.

Phase 2: Lenovo launches its own edge computing platform, combined with SUSE's operating system, to build a commercial edge computing and software platform. Phases 1 and 2 are proceeding in parallel, with Phase 2 focusing on adaptation of edge servers and the edge cloud platform.

Phase 3: The two parties will integrate more deeply across the edge product line, bringing SUSE's operating system, Rancher, NeuVector and other components into Lenovo's edge appliance portfolio, and will build a unified management and operations framework, combining the edge management platform with SUSE's management platform for full-stack management, monitoring and operations.

In addition, at this summit SUSE and Zhongke Yungu announced a strategic partnership and jointly launched the Zhongke Yungu cloud-native technology platform. Built on Rancher container cloud, microservices, DevOps and other cloud-native technologies, the platform provides rich, general-purpose building blocks for digital transformation. More importantly, Zhongke Yungu has completed full testing of SUSE's zero-trust container security platform NeuVector and will soon take it into production; the two parties are also exploring bringing the SUSE Edge solution into the platform to help customers handle complex edge environments.
This year, Red Hat Enterprise Linux (RHEL)'s move to put its source code behind paid access set off fierce debate in the technology community, and the declaration quoted at the start is the direct response of industry heavyweights Oracle, SUSE and CIQ to the recent controversy around RHEL: they formed a new organization, the Open Enterprise Linux Association (OpenELA), to break down the barriers around RHEL and CentOS and invite everyone into an era of unconstrained, free collaboration.

Also this year, Dirk-Peter van Leeuwen ("DP"), who spent 18 years at Red Hat, joined SUSE as CEO. With 11 years of experience in the China business, he set off on a global tour together with SUSE Chief Technology and Product Officer Dr. Thomas Di Giacomo ("Thomas").

For DP and Thomas, sharing code freely is a founding principle of the open source world, and SUSE has held fully open code as an article of faith for thirty years. Recently, Zou Xin, chief content advisor of CSDN and New Programmer, sat down with the two SUSE executives for a wide-ranging, in-depth conversation on open source, AI, digitalization and education.

Keeping choice alive for open source users

Zou Xin: Red Hat's announcement that CentOS Stream will become the sole repository for publicly released true source code sparked wide discussion; SUSE published a statement, and Oracle, SUSE and CIQ have since responded as well. What key open source issues does this episode expose?

DP: Open source has always rested on the fact that source code is freely available, and that others can use it and build derivative solutions. In the past, the true sources were always provided by, and used by, participants in the market. The availability of CentOS code had already changed a few years ago; the current announcement that CentOS Stream becomes the only repository in effect conceals the fact that the true sources will no longer be available. That matters greatly, because it makes it very hard for some to keep using code that is binary-compatible with the true sources.

What SUSE has done is ensure that people can still choose between "the good" and "the rest", so that other members of the community can create the version that suits them best. SUSE has been 100% open source for more than 30 years — it is in our DNA — and SUSE's fork will always remain open source.
From September 26-28, 2023, the premier global open source and cloud-native event KubeCon + CloudNativeCon + Open Source Summit China 2023 was held in Shanghai, bringing together technology experts, open source community leaders, enterprise representatives and developers from around the world. For users and developers in China, it was a veritable feast of frontier technology and digital innovation.

As a veteran of the open source world and a leader in Kubernetes and containers, SUSE was deeply involved in the conference. SUSE Greater China President Chen Yiwei, VP of Security Product Strategy Fei Huang and APAC CTO Vishal Ghariwala attended in person, joined by many open source project maintainers and senior engineers for in-depth exchanges with developers.

SUSE's silver sponsor booth, along with the K3s Kiosk and Longhorn Kiosk, drew many developers to stop by and talk.

Driving enterprise Kubernetes adoption, deeply empowering the industry ecosystem

As the Kubernetes ecosystem keeps expanding and growing more complex, innovation, interoperability and ease of use matter more than ever. Rancher, a gem of enterprise container management and one of the first platforms to earn CNCF Kubernetes conformance certification, helps users build cloud-native architectures in distributed IT environments and manage scalable, container-based applications uniformly anywhere — from the data center to the cloud and even the edge.

The kernel and the container engine are the key underpinnings of cloud-native software. SUSE has worked in Linux and Kubernetes for many years and remains committed to a thriving cloud-native ecosystem. In 2022, SUSE created the RFO (Rancher for openEuler) SIG in the openEuler open source community, aiming to integrate the Rancher product ecosystem deeply with openEuler and build container engineering infrastructure for China's local open source community.

SUSE's commitment to open source innovation does not stop there; it continues to use a rich set of open source projects as an innovation sandbox. As a member of the CNCF Governing Board and Technical Oversight Committee, SUSE has contributed a series of open source projects:

K3s: a lightweight, highly available Kubernetes distribution that meets the need to run Kubernetes clusters in edge computing environments; a CNCF Sandbox project.

Longhorn: a cloud-native distributed block storage solution built on Kubernetes that solves the complexity of Kubernetes storage; a CNCF Incubating project.

Kubewarden: a security tool that helps Kubernetes centrally manage security policies; a CNCF Sandbox project.

At the conference, SUSE Director of Edge Business Operations Katerina Arzhayev explored with attendees how users from different cultural backgrounds maximize business value through cloud-native technology.
"To put it boldly, SUSE has a real chance of doubling its business within the next three years."

SUSE Greater China President Chen Yiwei bases this judgment on a simple observation: cloud native is the mainstream direction of cloud computing — not only the technical battleground among cloud providers, but a necessary step on every enterprise's path to digital transformation and the cloud. The whole world is now participating in, and witnessing, the shift from "moving to the cloud" to "cloud native".

SUSE Greater China President Chen Yiwei

SUSE happens to be in an excellent position, and is prepared: while continuing to round out its own cloud-native technology map, it has also sought a "helmsman" for the China market to land its cloud-native strategy locally.

For Chen Yiwei, joining SUSE felt like destiny. Before SUSE, he held positions at globally leading storage, security and data analytics companies; those experiences let him see customer needs clearly from multiple angles, which is perhaps exactly why SUSE chose him. In China's crowded, faction-filled cloud-native market, SUSE must have its own "inner discipline" to break through: outstanding technology and products, a clear and firm strategy, and above all a sharp understanding of the local market and formidable execution. Only with all of these can it go all in.

Riding the momentum

After taking office, Chen Yiwei began a relentless schedule of customer visits. "Apart from the Spring Festival holiday, more than half of my working time went to customers." In his view, the effort was worth it: through in-depth conversations he became clearer about what customers need — "digital innovation" and "cost reduction with efficiency gains" — and the answers to both problems are precisely "open source" and "cloud native".

In fact, the rise of cloud native is reshaping the trajectory of the entire IT industry: from a single container technology to a vast family of cloud-native technologies, propelled by the three core technologies of containers, microservices and DevOps toward a full-stack technology system that makes application development, deployment and operations simpler. Cloud-native applications have accordingly spread rapidly from the internet sector to every industry, including traditional sectors such as government, finance and manufacturing.

IDC predicts that by 2025, more than half of China's top 500 enterprises will become software producers, with over 90% of their applications being cloud native; and that by 2024, thanks to microservices, containers and dynamic orchestration, new production-grade cloud-native applications will grow from 10% of new applications in 2020 to 60%.

This real feedback from users reinforced Chen Yiwei's conviction: build on SUSE's 30-plus years in open source and Linux to give customers more open, more secure cloud-native capabilities.

Plan first, then act

In truth, however competitive cloud-native technology is on both efficiency and cost, it is not a "perfect plan" for every enterprise; concerns remain.

First, for enterprises that have progressed step by step from the information age through the cloud computing era to today's cloud-native era, applications scattered across data centers, private clouds, public clouds and even the edge are tightly bound to business stability and continuity. An inclusive, extensible infrastructure that can cover all of an enterprise's prior IT investment is therefore a baseline requirement for any cloud-native platform.

Second, security remains a worry. Research from the analyst firm 451 Research shows that container security is the single most important consideration when enterprises adopt cloud-native platforms, especially in heavily regulated industries.

SUSE is bringing new possibilities. As the publisher of the world's first enterprise Linux, SUSE gives enterprise users an operating system platform that stretches from the data center to the cloud and out to the edge. And with the acquisition of Rancher, enterprise users also gained a container management platform with the same reach from data center to cloud to edge.

"SUSE's Linux can support any other Kubernetes; and SUSE's Kubernetes is not tied to our own products — it runs on any form of Linux, whether SLES or RHEL," Chen Yiwei says.
This post was written by SUSE CEO Dirk-Peter van Leeuwen. The original is available at: https://www.suse.com/c/at-suse-we-make-choice-happen/

For 25 years, open source has transformed our world. From Linux to virtualization to the move to the cloud and beyond, open source has been the driving force behind many of our biggest technological advances. To me, the reason is obvious: we want as many people as possible searching for solutions together, within a collaborative framework where development work is rewarded, so that everyone benefits. After all, in plain sight, no mistake stays hidden for long.

At the heart of all this: software should be "freely accessible, usable, changeable and shareable (in modified or unmodified form) by anyone". Restricting customers from sharing the source code their vendor provides limits their ability, as users, to analyze and audit the software they depend on.

SUSE agrees completely. Proprietization should not be the basis of competition between open source companies. We have all contributed to the open source community, and we have all benefited greatly from it.

SUSE collaborates actively with the open source community, building enterprise-grade products from open source projects. Our customers do not pay for the software itself; they pay for the ability to run it in mission-critical environments — secure environments with long-term, around-the-clock support and certified stacks. That is the arena where we compete, aiming to be the best, most reliable and most cost-effective vendor for our customers.

The various restrictions on access to source code that have appeared recently are, in our view, competition gone off track. What matters most is continuing to give customers freedom of choice. As SUSE has announced, we will develop and maintain an RHEL-compatible distribution based on the publicly available Red Hat Enterprise Linux (RHEL) source code. This is what we do best: offering customers long-term compatibility and the freedom to choose.

An analogy: as a mobile phone user, you may want to switch carriers without changing your number, to get the most value for your money. Likewise, as an enterprise Linux user, you can switch to SUSE while keeping your existing Linux. SUSE can deliver value to open source software users in a highly competitive way while keeping their data assets secure.

SUSE is an industry leader here. We have more than 30 years of engineering expertise and many contributions to Linux to ensure it meets the needs of users' mission-critical workloads. Our teams are highly experienced in supporting mixed environments. Last year we launched SUSE Liberty Linux for customers who need CentOS and RHEL support, and SUSE Manager has long been recognized for its ability to efficiently manage a wide variety of Linux distributions — evidence of our determination to give users flexible choices. SUSE will unwaveringly share what we achieve, ensure users have free and open access to the source code, and ensure these projects are never restricted.

One final point. Needless to say, SUSE remains fully committed to developing the SUSE Linux Enterprise (SLE) and Adaptable Linux Platform (ALP) solutions as well as the openSUSE Linux distributions. We are dedicated to safeguarding free innovation for enterprises and communities in hybrid environments.

We welcome like-minded people to join us: Choice@SUSE.com — let's fight for freedom of choice!

Dirk-Peter van Leeuwen
SUSE CEO
This article is based on a technical talk given on the evening of January 19 by Tian Hanming, operations development engineer at 36Kr, in the Rancher Labs technical exchange group.

Tian Hanming is an operations development engineer at 36Kr, mainly responsible for operations automation, CI/CD, and driving application containerization.

Background

Founded in 2010, 36Kr is a media company focused on the tech and venture capital space. Its business scenarios are not complex: the front end mainly renders with NodeJS, mobile covers both Android and iOS, and the back end is supported almost entirely by PHP. PHP was chosen originally because, at the time of the initial technology selection, it offered high productivity for web development — and the choice simply carried forward.

Later, however, as the business grew explosively and the program design was never decoupled, many services congealed into one bloated monolith with heavy logical coupling, which in turn caused many performance problems. As the problems grew harder to fix and development deadlines tightened, fixes kept being deferred, and deferral made them harder still — a vicious circle that left a great deal of technical debt, hampered subsequent development, and made it hard to trace the root cause when something went wrong. "That's a legacy issue" became a common refrain.

B/S, C/S, monolith — a very traditional and simple architecture, but its drawbacks were fully exposed: a performance problem in one piece of business logic would often affect all the others. On the operations side, the only responses were adding machines and upgrading configurations, which consumed a lot of machine and labor cost for little gain and left us passive.

The situation became urgent, and the engineering team finally decided to rewrite in Java, decomposing the monolith into microservices to put an end to wide production outages caused by monolith failures.

Requirements analysis and selection

Some time into the rewrite, to save VM resources, we ran multiple Java programs on a single VM. Without resource isolation and a flexible scheduling system, this wasted resources too, and under high concurrency one application would occasionally affect another through resource contention. Operations therefore built a dedicated automated deployment system covering deployment, health checks, rollback on failed deployment, restarts and other basics.

As Kubernetes took off and Rancher 2.x was released, we gradually realized these tools solved essentially all the problems we faced — resource isolation, the Deployment controller model, flexible scheduling: this was the ideal automated deployment system. So the operations side decided to march toward containerization.

For selection: since our services run almost entirely on Alibaba Cloud, Alibaba Cloud came to mind first. Because we had some business dealings with Huawei, Huawei CCE was also a candidate, but given that all our service resources sat on Alibaba Cloud, the migration cost was simply too high, so Huawei Cloud was dropped.

We had previously used Rancher 1.6, though only to manage native Docker deployed on hosts — which left us with a very favorable impression of the product.

On requirements: to keep the learning curve low for our developers, the ease of use of the container management platform mattered greatly. Kubernetes fundamentals were a must, and since K8s was still evolving rapidly, we needed to keep up with releases and apply security patches immediately after vulnerabilities were found, along with basic access control. With no dedicated K8s team in the company and only two operations engineers, it was also important to have professionals to consult and a professional service team to assist when problems occurred.

All things considered, Rancher won outright: a very friendly UI that developers could pick up quickly, a fast release cadence, detailed patch guidance when vulnerabilities were discovered, authentication that perfectly supported our OpenLDAP protocol with distinct permissions for development, test and operations staff — and it was the first to support multi-cloud environments, convenient for future cross-cloud plans.

Our containerization journey involved the following considerations, and today I will share our practice on Rancher in the hope it helps you: application containerization; Rancher high availability; container operations; multi-tenant isolation.

Application containerization

Since a fair number of our developers had never touched containers, we made our images two-layered to be more developer-friendly: the main Dockerfile is written by the operations team, while the Dockerfile in each developer's code repository is as simple as possible — essentially just the code-copy step plus a few required variables, along the lines of the example below.

As you can see, the Dockerfile that developers maintain is very simple indeed, which greatly lowers their maintenance burden.

Also, because the size of the build artifact largely determines deployment time, we used the famously minimal alpine image. Alpine has many strengths: small size; a package manager with rich dependencies; and backing from major vendors, with official use by Docker Inc. and other large companies.

It has one drawback: alpine ships without the glibc library, using the small musl libc as a replacement, while Java requires glibc. Fortunately, someone long ago understood this and published a pre-built glibc package on GitHub named alpine-pkg-glibc; installing it gives perfect Java support while keeping the image very small.
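The two-layer Dockerfile split described above might look like the following sketch. The image names, tags, paths and variables here are illustrative assumptions, not the actual 36Kr files:

```dockerfile
# --- Layer 1: base image maintained by operations (hypothetical) ---
# alpine plus the pre-built alpine-pkg-glibc package, so a glibc-dependent
# JRE can run on the otherwise musl-based alpine.
FROM alpine:3.10
ARG GLIBC_VERSION=2.30-r0
RUN apk add --no-cache ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub \
      https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget -q https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk && \
    apk add glibc-${GLIBC_VERSION}.apk && \
    rm glibc-${GLIBC_VERSION}.apk
# ...install the JRE and any shared tooling here...

# --- Layer 2: the Dockerfile developers keep in their repository ---
# Only a code copy and the required variables, as the article describes.
# FROM registry.example.com/base/java:8     # hypothetical ops-maintained base
# ENV APP_NAME=demo-service                 # required variable
# COPY . /app
# CMD ["java", "-jar", "/app/app.jar"]
```

The point of the split is that developers never touch the glibc or JRE plumbing; rebuilding the base image is an operations concern, and application images stay a three-or-four-line file.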
This July, we announced that Rancher and SUSE had reached a definitive agreement for SUSE to acquire Rancher. The acquisition is now complete, and together with SUSE we step into a new journey in the cloud-native era. I am extremely excited about, and full of anticipation for, our future.

What will the combination of SUSE and Rancher bring to our customers worldwide?

Rancher gives customers computing everywhere; with SUSE, we will give customers the freedom to innovate in any scenario. Together we will help our customers innovate seamlessly from the data center to the cloud to the edge and beyond. That is our vision, and our mission.

The merger of SUSE and Rancher will make the vision of "Innovate Everywhere" a reality. SUSE is the market leader in supporting mission-critical business applications and systems; Rancher is the industry's leading Kubernetes management platform. Our independent, flexible product modules will let customers tackle today's workflow challenges efficiently while freely evolving their IT strategy for the future.

Since we announced the acquisition, I have received countless emails and calls from our customers, partners, open source community members and Rancher team members. They are passionate about Rancher and hold great expectations for the combination of SUSE and Rancher. Now, our customers worldwide can pair Rancher with SUSE's stability and rock-solid IT infrastructure to innovate without limits in any scenario. This will further strengthen the trust between us and our global customers.

Customers

If you are already a SUSE or Rancher customer, products and subscription services you have purchased remain unchanged per their terms. In addition, the SUSE CaaS Platform will in future be delivered based on the innovations Rancher provides, and we will work closely with CaaS customers to ensure a smooth platform migration.

Looking ahead, we will fully leverage our strengths in security, compliance, national policies and broad application certification. Together, SUSE and Rancher give customers the world's only enterprise Kubernetes management platform that can manage every Kubernetes distribution in the world — whatever Linux distribution the customer runs underneath, and whether the Kubernetes clusters run in a public cloud, a private data center or an edge computing environment.

Partners

SUSE One partners will benefit from the combined SUSE and Rancher solution portfolio: innovative solutions will help partners discover new market opportunities and redefine for customers how to consistently manage and scale workloads, monitor cluster health, and simplify the deployment and management of containerized applications.

Open source community

As I have said before, SUSE and Rancher remain committed to delivering 100% truly open source technology and to contributing to upstream open source projects. We will hold firm to our 100% open source promise and together deliver truly 100% open source solutions to customers worldwide.

A bright future lies before us, and our wonderful journey with SUSE has only just begun.

Sheng Liang
President of Engineering and Innovation, SUSE

Dr. Sheng Liang is co-founder and CEO of Rancher Labs. Before founding Rancher Labs, as a principal engineer at the renowned Sun Microsystems, he was the author of JNI (Java Native Interface), a core component of the Java J2SE platform, and subsequently led the design and development of the JVM (Java Virtual Machine), the very core of the Java language. In 2008 he founded the leading cloud computing company cloud.com and served as CEO, launching the well-known cloud management software CloudStack, which earned him the title "father of CloudStack". After Citrix acquired cloud.com for US$200 million in 2011, Dr. Liang served as CTO of the Citrix cloud platform — the company's first Chinese CTO. Earlier in his career, Dr. Liang co-founded the network security company Teros, which was also later acquired by Citrix, served as VP of Engineering at SEVEN Networks, and was a director of technology at Openwave Systems. He graduated from the Special Class for the Gifted Young at the University of Science and Technology of China and holds a PhD in computer science from Yale University.
On December 1, 2020, SUSE, the global leader in open source innovation, announced that it had officially completed its acquisition of Rancher Labs ("Rancher"), the market leader in Kubernetes management. The merger of the two leading open source companies brings the industry a brand-new portfolio combining a first-class Linux operating system with the market-leading Kubernetes management platform, along with many advanced capabilities to help enterprises innovate and transform.

"Our customers have made it clear that they want advanced, dependable and powerful technology to accelerate their business transformation," said SUSE CEO Melissa Di Donato. "SUSE's innovative solutions have always anticipated and fully met enterprises' transformation needs; now, with Rancher, we will make history again. With our strong modular strategy for open source software, our customers can take full advantage of its reliability and exceptional flexibility to innovate anytime, anywhere — in the data center, in the cloud or at the edge."

Powering unlimited enterprise innovation

SUSE and Rancher work with the broad open source community to bring enterprises boundless innovation and a stable experience.

SUSE has a distinguished 28-year history focused on open source innovation, supporting mission-critical applications and systems and embedded worldwide in devices ranging from automobiles to medical equipment. Rancher, recently named a standout leader in the "Forrester New Wave™: Multicloud Container Management Platforms" report, provides open source container management software that enables organizations to deploy and manage Kubernetes at scale on any infrastructure across data centers, clouds, branch offices and the network edge.

Rancher's equal commitment to the open source community is reflected in its support for all major certified Kubernetes distributions and operating systems, including RKE, K3s, Microsoft AKS, Amazon EKS, Google GKE, Alibaba Cloud ACK, Tencent Cloud TKE and Baidu Cloud CCE. With no vendor lock-in and no restriction on computing scenarios, enterprises can innovate without limits from edge to core to cloud across their business. Going forward, SUSE and Rancher will jointly develop solutions to today's complex enterprise problems, with a focus on helping enterprises innovate in edge computing.

"Merging with SUSE was the right choice for us. We share the same philosophy, outlook and principles on open source, and will bring tremendous value to enterprise leaders," said Sheng Liang, SUSE President of Engineering and Innovation and former Rancher co-founder and CEO. "Going forward, we will help enterprises worldwide transform their businesses, modernizing the cloud infrastructure behind their digital workflows through cloud solutions."

Truly "open" open source software

To honor its sincere and firm commitment to the open source community, SUSE combines the strength and capabilities of independent companies such as Rancher to help IT leaders worldwide in their business transformation — a view shared by joint customers and partners of SUSE and Rancher.

"Some open source companies are more open than others. In my experience, SUSE and Rancher are great examples; they are actively committed to delivering and deploying technology solutions that genuinely meet our business needs, with a real sense of innovation," said Frank Strecker, SVP Public Cloud Managed Services and Big Data at T-Systems.

"We need to innovate in the data center and in the cloud while staying agile; we cannot rely only on a sprawl of simple vertical stacks. The truly open open source software that SUSE and Rancher provide responds to changing digital requirements quickly and consistently," said Jason Daniels, CTO of the Fujitsu Law & Order Joint Unit.

Finally, Melissa emphasized: "The combination of SUSE and Rancher brings the IT industry innovation found nowhere else, building expertise and solutions based on a vision of the future. Our independent, flexible product modules will let customers tackle today's workflow challenges efficiently and freely evolve their IT strategy for the future."
I am very pleased to announce that Rancher and SUSE have officially reached a definitive agreement for SUSE to acquire Rancher. Rancher is the industry's most widely adopted Kubernetes management platform. SUSE is the world's largest independent open source software company and a leader in enterprise Linux. By combining Rancher with SUSE, we not only gain substantial R&D resources to further strengthen our market-leading products, but can also consistently preserve our unique 100% open source business model.

Six years ago we founded Rancher to build a next-generation computing platform on container technology. At the time we could not have foreseen how rapidly Kubernetes would develop and spread. Rancher has been able to thrive in this exciting, highly active market because we built innovative products that end users love. Broad adoption by developers, combined with our distinctive enterprise support subscription model, drove our rapid growth over the past several years. I want to thank everyone who has used our products over the past six years — thank you for your support, and for helping us build an amazing user community.

After the acquisition closes in the second half of this year, I will join SUSE to lead the combined engineering and innovation organization and further accelerate the pace of product innovation. It is worth emphasizing that over the past 28 years SUSE has built an extremely successful open source business, in the same open source spirit Rancher has always upheld; we will therefore continue, as always, to hold to our 100% open source philosophy and our firm commitment to practicing open source.

This acquisition is also very good news for Rancher's customers and partners. We have always been proud of Rancher's industry-leading customer satisfaction: earlier this year, Rancher ranked third in Nicereply's customer happiness rankings with a customer loyalty score of over 80. SUSE's global reach and enterprise strategy will further strengthen our commitment to customers who rely on Rancher for mission-critical workloads. Meanwhile, SUSE's powerful ecosystem will greatly accelerate Rancher's mission of driving enterprise adoption of cloud-native technology.

This acquisition is a brand-new starting point for Rancher's further growth. I am excited and expectant about this fresh start across the industry, the technology and the business. I am proud of our team and the work they have done over the past six years, and I look forward to continuing to work with our users, customers, partners and Rancher colleagues to build a truly amazing business by drawing on the best of Rancher and SUSE. Together, Rancher and SUSE will become an industry-changing enterprise computing company.
July 8, 2020 is one of the most important dates since Rancher China was founded. With the announcement of Rancher's global merger plan with SUSE, Rancher China is about to open a new chapter and enter a new stage of development.

Since its founding in 2016, Rancher China has pursued continuous technical innovation, committed to providing Chinese enterprise customers with an enterprise Kubernetes management platform offering the best user experience along with technical support, and has achieved remarkable results in enterprise customer support, open source community building and partner ecosystem development. Rancher China's business has maintained an average growth rate of 300% over the past three years, earning the trust and support of more than 100 well-known Chinese enterprise customers including SAIC Motor, China Life, Ping An Technology, China Unicom, Haitong Securities and CCTV.com. The Rancher China technical community — roughly 30,000 architects and senior programmers from more than 20,000 companies — is one of the top open source technical communities in China. Rancher China's open business model has also attracted nearly 100 ISV/SI partners, ensuring customers receive more attentive, fine-grained service. Rancher China has accordingly received numerous honors, including 36Kr's WISE2020 best cloud service solution award and inclusion in iAnalysis's Top 30 China cloud computing vendors.

SUSE is a globally renowned open source company with more than 25 years of Linux engineering experience. SUSE Linux Enterprise Server is the most widely used enterprise Linux distribution in the world, and SUSE's enterprise open stack spans enterprise Linux, application management, multi-cloud infrastructure management and IT infrastructure management. SUSE works closely with its partner and community ecosystem to deliver enterprise-grade, open source, software-defined infrastructure and application delivery solutions backed by exceptional service and support.

For many years, SUSE China has cultivated the Chinese market deeply, insisting on providing true open source solutions, flexible business practices and the excellent service and support digital transformation requires, while avoiding vendor lock-in. Its Chinese customers include many Fortune 500 enterprises such as China Minsheng Bank, China UnionPay, China Telecom, Lenovo and Ping An Bank — many of whom are also strategic customers of Rancher China.

As cloud-native ideas gain ever broader acceptance in the enterprise, as Kubernetes matures, and as edge computing scenarios multiply, providing enterprises with a better computing platform for the new cloud-native era and realizing "computing everywhere" is the shared goal of Rancher China and SUSE China.

Rancher China's longstanding commitment to being rooted in China and serving China has also won the full endorsement of SUSE's management team. A Silicon Valley-style engineering culture and spirit of technical innovation, products built for the needs of Chinese enterprise customers, and all-out support for Chinese enterprises' digital transformation and technological innovation will remain the shared character of the Rancher and SUSE China teams.

The Rancher China enterprise Kubernetes management platform and the Rancher China software-defined edge computing platform will continue to receive more feature enhancements and a better support experience. Through the joint efforts of SUSE China and Rancher China, our service and support commitments to Chinese enterprise customers will be strengthened and extended to a greater degree and scope. Customers of Rancher China and SUSE China will be able to obtain end-to-end solutions and unified enterprise technical support spanning the underlying operating system, container management, Kubernetes management, IT infrastructure management, application management and multi-cloud management.

In the future, with the support of SUSE's management team, we will increase the investment in and scale of our two China R&D centers in Shenzhen and Shenyang, strive to launch more innovative products and solutions for Chinese enterprise customers, and better meet enterprises' needs for domestic and localized offerings.

A shared belief in continuous technical innovation and a commitment to open source have made Rancher and SUSE kindred spirits throughout their histories, and the prospect of developing hand in hand fills us with shared hope for the future. At this moment, as we set out on a brand-new journey, on behalf of Rancher China I look forward to the support and blessings of our many enterprise users, open source community members and partners.

Let us together usher in a magnificent new era of "computing everywhere".

Qin Xiaokang, General Manager, Rancher Greater China
July 8, 2020
SUSE/Rancher to become the open source innovator of choice for enterprise Linux, Kubernetes, edge computing and AI

The combined teams will form the world's largest independent organization dedicated to driving digital transformation through open source and cloud-native solutions, giving customers and partners greater global reach and unmatched leading technology to accelerate innovation.

Beijing, July 8, 2020 — SUSE, the world's largest independent open source company, and Rancher Labs ("Rancher"), creator of the industry's most widely adopted Kubernetes management platform, jointly announced that they have officially reached a definitive agreement for SUSE to acquire Rancher. The acquisition will make SUSE/Rancher the open source company of choice for enterprise Linux, containers, Kubernetes and edge computing.

"This is a momentous moment for the IT industry, because this merger is the union of two open source leaders. SUSE is the leader in enterprise Linux, edge computing and AI, and its merger with Rancher, the leader in enterprise Kubernetes management, will inject entirely new possibilities into the global IT market and help customers accelerate their digital transformation journeys," said SUSE CEO Melissa Di Donato. "Together, SUSE and Rancher will bring the market a rarely seen, globally supported, 100% genuinely open source portfolio that includes cloud-native technologies, helping our customers achieve continuous, seamless innovation across their entire business — from the edge to the data center to the cloud."

Unlocking a cloud-native future for customers and partners

As enterprise IT departments increasingly look to the cloud to innovate and drive digital transformation, Kubernetes has rapidly become a core pillar of enterprise IT strategy. Gartner predicts that as the number of enterprises adopting cloud-native applications and infrastructure grows sharply, the proportion of large enterprises in mature economies using container management platforms will rise from 35% in 2020 to more than 75% by 2024.

SUSE is the leader in enterprise Linux and edge computing, and Rancher is the leader in Kubernetes container management; together, through the latest AI technology and the seamless deployment of containerized workloads from edge to data center to cloud, they will realize "computing everywhere".

"Together, Rancher and SUSE will help enterprises take control of their cloud-native future," said Rancher CEO Sheng Liang. "Rancher's leading Kubernetes platform and SUSE's rich matrix of open source software solutions form a powerful combination, enabling IT and operations leaders worldwide to best meet their own — or their customers' — needs in digital transformation, whether in the data center, in the cloud or in edge computing environments."

"Rancher and SUSE share the same open source spirit and DNA, and both have spent recent years cultivating the Chinese market, achieving remarkable results in enterprise customer support, open source community building and partner ecosystem development," said Qin Xiaokang, General Manager of Rancher Greater China. "Rancher China and SUSE China are both especially optimistic about the strong potential and momentum of the Chinese market. Going forward, providing enterprises with a better computing platform for the new cloud-native era and realizing 'computing everywhere' is the shared goal of Rancher China and SUSE China."

Once regulatory approval is received and the acquisition formally closes, this powerful combination will have a richer first-class product matrix, stronger innovation capabilities and broader global business coverage. Rancher — the leading player in containers, named a leader in enterprise container platform software by Forrester — will let SUSE's customers benefit from Rancher's industry-leading cloud-native technology; Rancher's customers, in turn, will benefit from SUSE's global support network and broad open source portfolio.

For SUSE's global partner ecosystem, this combination is also a major win: partners will now be able to use Rancher's products to offer their customers more comprehensive, richer solutions.

A firm commitment to the open source community

SUSE has a long tradition in open source technology and will continue to deliver 100% genuinely open source technology, keeping customers free of vendor lock-in. Rancher, which has always upheld the same open source spirit, will likewise continue its open, open source strategy of supporting multiple Kubernetes distributions and operating systems.

Rancher was designed from the outset to be infrastructure-agnostic. Precisely because of this, it was among the first Kubernetes management platforms to support every CNCF-certified Kubernetes distribution, providing an intuitive, minimal and consistent Kubernetes management experience for all major certified distributions — including RKE, K3s, Microsoft AKS, Amazon EKS, Google GKE, Alibaba Cloud ACK, Tencent Cloud TKE and Baidu Cloud CCE — as well as open source projects such as Gardener.

The first step in SUSE's acquisitive growth strategy

The acquisition of Rancher is the first step in SUSE's acquisition-led growth strategy since it became a fully independent software company in March 2019. It also follows SUSE's strong financial momentum: SUSE just reported an excellent second quarter of fiscal 2020, with ACV (annual contract value) bookings up 30% year over year and global cloud revenue up 70% year over year.
"SUSE's vision is to create a better future and measurable value for customers and partners, and it is precisely this vision that guides our decisions and drives our growth," Di Donato added. "This acquisition strengthens SUSE's ability to offer a more comprehensive portfolio, more customer choice and solutions free of vendor lock-in. It will also allow us to play a greater strategic role in our work with partners — whether cloud service providers, independent hardware vendors, systems integrators or value-added resellers — delivering more value to all of them."

The transaction is expected to close before the end of October 2020, subject to customary closing conditions, including receipt of regulatory approvals.

About SUSE

SUSE is the world's largest independent open source company, offering unmatched customer choice and driving enterprise digital transformation by simplifying, modernizing and accelerating traditional cloud-to-edge solutions. SUSE works closely with partners, communities and customers to deliver and support solutions that enable mission-critical business outcomes. SUSE's container and cloud platforms, software-defined infrastructure, and artificial intelligence and edge computing solutions let customers create, deploy and manage workloads anywhere — on premises, across multiple clouds and at the edge. For more information, visit www.suse.com.

About Rancher Labs

Rancher Labs was founded by Sheng Liang, the father of CloudStack. Its flagship product, Rancher, is an open source enterprise Kubernetes management platform that enables centralized deployment and management of Kubernetes clusters across hybrid cloud and on-premises data centers. Long favored by users for its intuitive, minimal operating experience, Rancher was named a leader in container management platforms by Forrester in 2018 and one of the coolest cloud infrastructure vendors by Gartner in 2017. Rancher today has more than 300 million core image downloads worldwide and some 40,000 enterprise customers, including globally renowned companies such as China Unicom, Ping An, China Life, SAIC Motor, Samsung, Siemens, WWK Insurance Group, Telstra, Deutsche Bahn, Xiamen Airlines and New Oriental.
As the field of machine learning develops, training one model on one machine has become hard to sustain for teams doing machine learning, and the industry consensus is that machine learning is no longer just model training.

Many activities are required before, during and after training — especially for teams producing their own ML models. The following diagram is frequently cited to illustrate this.

For many teams, taking a machine learning model from a research environment to production is fraught with difficulty and pressure. Worse, the market offers a staggering number of tools for every class of problem, each of these myriad tools promising to solve all your machine learning troubles. But having the whole team learn a new tool is usually time-consuming, and integrating those tools into your current workflow is not easy either. This is where Kubeflow may be worth considering: a machine learning platform built for teams that need to set up ML pipelines, which bundles many additional tools, such as those for serving models and tuning hyperparameters. What Kubeflow attempts to do is gather the best-of-breed ML tools and integrate them into one platform.

Source: https://www.kubeflow.org/docs/started/kubeflow-overview/

As the name suggests, Kubeflow should be deployed on Kubernetes, and since you are reading this through Rancher's platform, you most likely already have a Kubernetes cluster deployed somewhere.

It is worth noting that the "flow" in Kubeflow does not stand for TensorFlow. Kubeflow also works with PyTorch — indeed with any ML framework (though TensorFlow and PyTorch remain the best supported).

In this article, I will show you how to install Kubeflow as simply as possible. If GPUs are already set up on your cluster, the process is simpler; if not, you will need a few extra setup steps, since much machine learning must run on NVIDIA GPUs.

Setting up GPU support for Kubeflow

This assumes you already have Docker 19.x installed.

1. Install the NVIDIA container runtime

On every node with a GPU:

% distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
% curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
% curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
% sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
% sudo apt-get install nvidia-container-runtime

Now change the runtime field of the Docker daemon:

% sudo vim /etc/docker/daemon.json

Paste the following:

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

Now restart the Docker daemon:
Cluster security is a critical part of any successful Kubernetes strategy. A recent survey report published by AimPoint found that 44% of respondents had delayed moving applications into production because of Kubernetes container security concerns.

Kubernetes security, however, is a complex machine with many moving parts, integrations, knobs and levers. That makes already-challenging security work even harder.

Rancher Labs, creator of the industry's most widely adopted Kubernetes management platform, is always looking for efficient approaches for its users, so we are very pleased to introduce CIS security scanning in Rancher 2.4. This new feature for Rancher-managed clusters lets you run ad-hoc and scheduled security scans of your RKE clusters against more than 100 CIS benchmarks published by the Center for Internet Security. With CIS security scanning you can create custom test configurations and generate reports with pass/fail information, and based on the report contents you can take whatever measures are needed to ensure your cluster meets all security requirements.

The CIS Benchmark is widely accepted as the de facto standard for securing Kubernetes clusters. It provides industry-recognized metrics for measuring a cluster's security posture, combining knowledge from the information security community with a deep understanding of the APIs, interactions and overall control paths in Kubernetes. When engineers try to understand all the places they need to protect in a cluster, they can learn from the benchmark about dozens of possible attacks and how to mitigate them.

Why does IT Ops need CIS security scanning?

Manually assessing a cluster against the CIS Benchmark is a time-consuming and error-prone process, and since real systems change constantly, the assessment has to be repeated often. This is where kube-bench shines: an open source tool created by Aqua that automatically evaluates clusters against the CIS Benchmark.

Rancher 2.4 uses kube-bench as its security engine and adds several things on top. With CIS security scanning in Rancher 2.4 you can orchestrate a cluster scan with one click: Rancher takes care of fetching the kube-bench tool and connecting it to the cluster, then summarizes the results from all nodes into an easy-to-read report showing the areas where the cluster passed or failed. Rancher can also schedule periodic scans at the cluster level. The setting can be enabled at the cluster-template level so that, by default, administrators can configure templates for scheduled scans to run against every new user-created cluster in a Rancher installation. Finally, Rancher provides custom alerts and notifications for CIS security scans: when a configuration change makes a cluster non-compliant, or a cluster's configuration is non-compliant to begin with, security administrators are notified by email, WeChat and other channels.

Hands-on with CIS scanning in Rancher 2.4

Let's launch a Rancher RKE cluster.

Prerequisites: a CentOS VM (at least 2 cores) with Docker installed.

Step 1: Run Rancher Server

[root@rancher-rke ~]# sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.4.0-rc3
Unable to find image 'rancher/rancher:v2.4.0-rc3' locally
Trying to pull repository docker.io/rancher/rancher ...
v2.4.0-rc3: Pulling from docker.io/rancher/rancher
423ae2b273f4: Pull complete
de83a2304fa1: Pull complete
f9a83bce3af0: Pull complete
b6b53be908de: Pull complete
b365c90117f7: Pull complete
c939267bea55: Pull complete
7669306d1ae0: Pull complete
25e0f5e123a3: Pull complete
d6664495480f: Pull complete
99f55ceed479: Pull complete
edd7d0bc05aa: Pull complete
77e4b172baa4: Pull complete
48f474afa2cd: Pull complete
2270fe22f735: Pull complete
44c4786f7637: Pull complete
45e3db8be413: Pull complete
6be735114771: Pull complete
dfa5473bfef3: Pull complete
Digest: sha256:496bd1d204744099d70f191e86d6a35a5827f86501322b55f11c686206010b51
Status: Downloaded newer image for docker.
This article follows the earlier one on configuring custom alert rules with Prometheus. In it, we will demo the process of installing Prometheus and configuring Alertmanager so that it can send email when alerts fire — but we will do all of it in a much simpler way: installing through Rancher.

We will see how to accomplish this without extra dependencies. In this article we do not need:

- kubectl specially configured to point at the Kubernetes cluster
- Knowledge of kubectl, since we can use the Rancher UI
- Installation/configuration of the Helm binary

Prerequisites

- A Google Cloud Platform account (the free tier is enough); any other cloud works just as well
- Rancher v2.4.2 (the latest version at the time of publication)
- A Kubernetes cluster running on GKE (version 1.15.11-gke.3); EKS or AKS also work

Start a Rancher instance

First, start a Rancher instance. You can follow Rancher's guide: https://www.rancher.cn/quick-start/

Deploy a GKE cluster with Rancher

Use Rancher to set up and configure a Kubernetes cluster. The documentation is here: https://rancher2.docs.rancher.cn/docs/cluster-provisioning/_index

Deploy Prometheus

We will use Rancher's app catalog to install Prometheus. Rancher's catalog is mainly a collection of Helm charts that let users deploy applications repeatably.

Once our cluster is up and running, select the default project created for it under the "Apps" tab, and click the "Launch" button.

Now let's search for the chart we are interested in. There are many fields we could set — but for this demo we will keep the defaults. You can find useful information about those values in the Detailed Description section; don't worry about problems, feel free to look over what they do. At the bottom of the page, click Launch. Prometheus Server and Alertmanager will be installed and configured.

When the installation is complete, the page looks like this:

Next, we need to create Services to access Prometheus Server and Alertmanager. Open the workloads tab under Resources; in the load balancing section, we can see nothing is configured yet. Click Import YAML, select the prometheus namespace, paste both YAMLs at once and click Import. (Later you will learn how we knew to use these particular ports and component tags.)

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9090
      protocol: TCP
  selector:
    component: server
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9093
      protocol: TCP
  selector:
    component: alertmanager

Once this is done, the services will show as Active.
In a fast-changing technology world, delivering continuous, rapid innovation to users is critical. Kubernetes is an excellent engine for driving innovation in the cloud, on premises and at the edge. Consequently, Kubernetes and its entire ecosystem iterate very quickly, and keeping Kubernetes up to date — to stay secure and to use new features — is critical to any deployment.

What is a zero-downtime cluster upgrade?

Rancher 2.4 went GA last week, and with it we formally introduced zero-downtime cluster upgrades. In plain terms, the feature lets you swap the engines while the plane is in flight, without any disruption: developers can keep deploying applications to the cluster, and users can keep consuming services uninterrupted. At the same time, combined with Rancher's OOB (out of band) Kubernetes updates, cluster operators can safely roll out maintenance and security updates within hours of an upstream release.

In earlier Rancher versions, RKE first upgraded the etcd nodes, taking care not to break quorum. Rancher then upgraded all control plane nodes in quick succession, and immediately afterwards all worker nodes, which caused brief interruptions in API and workload availability. Moreover, once the control plane was updated, Rancher reported the cluster state as "active", so operators might not realize the worker nodes were still upgrading.

In Rancher 2.4, we reworked the entire upgrade flow to ensure CI/CD pipelines keep delivering and workloads keep serving traffic. Throughout the process, Rancher shows the cluster in an updating state, so operators can quickly see that something is happening in the cluster.

Rancher still begins with the etcd nodes, upgrading one node at a time while taking care to preserve quorum. As an extra precaution, the operator takes a snapshot of etcd and the Kubernetes configuration before the upgrade — and if you need to roll back, the entire cluster can be restored to its pre-upgrade state.

As you know, deploying applications to a cluster requires the Kubernetes API to be available. In Rancher 2.4, the Kubernetes control plane nodes are also upgraded one at a time: the first server is taken offline, upgraded and returned to the cluster, and a control plane node starts upgrading only once the previous node reports its state as healthy. This behavior ensures the API keeps responding to requests throughout the upgrade.

Two major changes to node upgrades in Rancher 2.4

Most of the activity in a cluster happens on the worker nodes, and Rancher 2.4 changes how they are upgraded in two significant ways. The first is that you can set the number of worker nodes upgraded at once: for a traditional approach or a smaller cluster, operators can choose to upgrade just one node at a time, while operators of larger clusters can adjust the setting to upgrade in larger batches. The option strikes a balance between risk and time and provides maximum flexibility. The second change is that operators can choose to drain workloads from worker nodes before they are upgraded; evicting pods from a node first minimizes the impact of pod restarts during minor-version Kubernetes upgrades.

Add-on services such as CoreDNS, NGINX Ingress and the CNI drivers are updated in step with the worker nodes. Rancher 2.4 exposes an upgrade strategy for each add-on deployment type, so that add-on upgrades can use native Kubernetes availability constructs.
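In RKE, the knobs described above surface in the cluster configuration file. The following is a minimal sketch based on RKE's upgrade_strategy options; the specific values are illustrative, and you should check your RKE version's documentation for the exact fields it supports:

```yaml
# cluster.yml (fragment) — sketch of an RKE upgrade strategy
upgrade_strategy:
  max_unavailable_worker_nodes: "10%"   # batch size: how many workers upgrade at once
  max_unavailable_controlplane: "1"     # control plane: one node at a time
  drain: true                           # drain workloads before upgrading a worker
  node_drain_input:
    ignore_daemonsets: true             # DaemonSet pods cannot be evicted anyway
    grace_period: 60                    # seconds pods get to shut down cleanly
    timeout: 120                        # give up draining a node after this long
```

Setting `max_unavailable_worker_nodes` to a percentage rather than a fixed count lets the same template scale sensibly from small to large clusters, which is the risk-versus-time trade-off the article describes.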
Foreword

Prometheus is an open source system for monitoring and alerting. Originally developed at SoundCloud, it moved to the CNCF in 2016 and became one of the most popular projects after Kubernetes. It can monitor anything from an entire Linux server to a stand-alone web server, a database service or a single process. In Prometheus terminology, the things it monitors are called targets, and each unit of a target is a metric. At a configured interval, it scrapes targets over HTTP to collect metrics and places the data in its time series database. You can query a target's metrics using the PromQL query language.

In this article, we will show step by step how to:

- Install Prometheus (using the prometheus-operator Helm chart) for monitoring/alerting based on custom events
- Create and configure custom alert rules that fire when their conditions are met
- Integrate Alertmanager to handle alerts sent by client applications (in this case, the Prometheus server)
- Integrate Alertmanager with an email account that sends alert notifications

Understanding Prometheus and its abstractions

The diagram below shows all the components that make up the Prometheus ecosystem. Here are the terms relevant to this article, for a quick orientation:

- Prometheus Server: the main component that scrapes and stores metrics in the time series database
- Scraping: a pull method for retrieving metrics, typically at intervals of 10-60 seconds
- Target: the server client that data is retrieved from
- Service discovery: enables Prometheus to identify the applications it needs to monitor and pull metrics from in dynamic environments
- Alertmanager: the component responsible for handling alerts (including silencing, inhibition, aggregating alert information, and sending notifications via email, PagerDuty, Slack and so on)
- Data visualization: scraped data is kept in local storage and queried directly with PromQL, or viewed through Grafana dashboards

Understanding the Prometheus Operator

According to CoreOS, the project owner of the Prometheus Operator, the Operator makes Prometheus configuration Kubernetes-native and can manage and operate Prometheus and Alertmanager clusters.

The Operator introduces the following Kubernetes custom resource definitions (CRDs): Prometheus, ServiceMonitor, PrometheusRule and Alertmanager. If you want to learn more, see: https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md

In our demo, we will use a PrometheusRule to define custom rules.

First, we need to install the Prometheus Operator using the stable/prometheus-operator Helm chart, available at: https://github.com/helm/charts/tree/master/stable/prometheus-operator

The default installation deploys the following components: prometheus-operator, prometheus, alertmanager, node-exporter, kube-state-metrics and grafana. By default, Prometheus will scrape the main Kubernetes components: kube-apiserver, kube-controller-manager and etcd.

Installing the Prometheus software

Prerequisites

To follow this demo smoothly, you will need:

- A Google Cloud Platform account (the free tier is enough); any other cloud works too
- Rancher v2.3.5 (the latest version at the time of publication)
- A Kubernetes cluster running on GKE (version 1.15.9-gke.12); EKS or AKS also work
- The Helm binary installed on your computer
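To make the PrometheusRule CRD mentioned above concrete, here is a minimal sketch of a custom rule. The rule name, labels and expression are illustrative examples, not the rules from this article's demo; the `release` label must match whatever `ruleSelector` your Prometheus resource uses:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: demo-custom-rules
  labels:
    release: prometheus-operator     # assumed to match the Prometheus ruleSelector
spec:
  groups:
    - name: demo.rules
      rules:
        - alert: InstanceDown
          expr: up == 0              # PromQL: the target failed its last scrape
          for: 5m                    # condition must hold 5 minutes before firing
          labels:
            severity: critical
          annotations:
            summary: "Instance {{ $labels.instance }} is down"
```

Once applied to the cluster, the Operator picks the rule up and reloads the Prometheus configuration; the firing alert is then routed to Alertmanager, which handles the email notification configured later in the series.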
Learn how to move legacy applications from Windows 2003 to Kubernetes. These applications include .Net, web, SQL and other applications that don’t have a dependency to run only on Windows 2003. You can move these applications to containers without code changes, making them portable for the future. And you’ll get the benefit of running the containers on Kubernetes, which provides orchestration, availability, increased resiliency and density.
Transport Layer Security is used to secure network communication. Find about why TLS is important and how to effectively use it for Rancher and Kubernetes management.
Kubernetes and DevOps work together to help organizations deliver fast. Read why Kubernetes is essential to your DevOps strategy.
Our customers are happy! Based on our average NPS* score of 77, Nicereply ranked us 3rd in its Customer Loyalty category in the 2020 Customer Happiness Awards. Delivering the best possible customer support is at the heart of everything we do, so this recognition is particularly gratifying. The adoption of Kubernetes is accelerating digital transformation in the enterprise, but it’s complex. Rancher eases the learning curve and enables Kubernetes adoption by ITOps and DevOps teams.
In any rapidly emerging market, consultants can be a great source of vendor-neutral insight, since they typically work with multiple technologies to help their customers make informed decisions. In that vein, Derya (Dorian) Sezen of kloia — a new-era consulting organization that helps customers move legacy workloads to frontline technologies in cloud, DevOps and microservices — recently wrote a blog post summarizing his experience with Rancher and Red Hat OpenShift. Dorian compared and scored the two Kubernetes management tools across 13 categories: ease of installation, CNCF/industry standards, open source, licensing, multi-cluster support, upgrades, Kubernetes version currency, vendor lock-in, Windows container support, support, sales, partner ecosystem and bundle options.
Security is one of the most talked-about topics for Kubernetes users. Google "Kubernetes security" and you'll find a huge number of articles, blogs and more. The reason is simple: you need to align your container and Kubernetes security with your organization's existing security profile. Kubernetes has some strong security best practices for your cluster — authentication and authorization, and encryption of secrets and objects in the etcd database, to name a few. However, you also need to be aware of other risks, such as privilege escalation and stolen secrets.
This article discusses Istio support offered out of the box through the Rancher UI. We'll see an example deployment and visualize it via the Kiali dashboard.
It’s that time of year again, the time for retrospective articles and “Top 10 of the Year” posts. We decided to focus our recap on how CVEs and changes in the threat landscape affected Kubernetes in 2019, and what changes that brought about inside of Rancher.
Having completed a series of twelve Lighthouse Roadshow events across Europe and North America in six months, Tom Callway reflects on the rapid growth of the Kubernetes ecosystem, the importance of community and personal development.
Rio Beta was released on November 19. This article gives you a broad introduction to the features and capabilities of this awesome application deployment engine for Kubernetes.
IOT and Edge systems are pushing the limits of where containers and Kubernetes can run. Bhumik Patel from Arm takes a moment to talk about why this is important and how K3s solves the challenges that these deployments face.
To get an accurate picture of the current state of Kubernetes deployments, Rancher Labs recently conducted an industry survey that included 1,106 respondents from large and small enterprises. Read the results here.
Before Rancher 2.3, upgrading to the latest K8s release required upgrading Rancher. Rancher now enables updates to the latest secure releases of Kubernetes without changing the Rancher server version. Read on to learn how!
Containers - and Kubernetes - are now a key part of enterprise growth strategy across EMEA. How are companies in this region capitalising on Kubernetes?
Today I am very excited to announce that Rancher Labs' Project Longhorn has been accepted by the Cloud Native Computing Foundation as a sandbox project.
The k3s project was started seven months ago by Darren Shepherd, Chief Architect at Rancher, and by number of GitHub stars it has already become one of the most popular Kubernetes options on the CNCF Landscape. To put this in context, k3s has more stars than OpenShift from IBM/Red Hat; only Rancher Kubernetes itself is more popular. Stars, of course, indicate interest and popularity only, and that should be noted.
Together, Rancher and Spotinst Ocean offer simple, cost-effective management of Kubernetes clusters in the cloud while eliminating the overhead of planning and managing the underlying infrastructure for your applications.
Use Terraform, Azure and RKE to provision a Rancher cluster and a Managed Windows Kubernetes cluster from scratch.
Rancher 2.3 is now generally available. Read on to learn about the primary features and their benefits.
With the release of Rancher 2.3, Rancher is the first platform to graduate Windows support to GA and can now deploy Kubernetes clusters with Windows support directly from within the user experience.
Rancher Labs and Arm are working together to help organizations take advantage of the transformative capabilities of Kubernetes for edge computing.
Support for relational databases is a growing focus for Kubernetes users, and the release of Windows Server 2019 is expanding options for .NET applications and SQL Server. SQL Server workloads, however, often rely on Active Directory and Windows Auth, and storage arrays, which will not be supported by SQL Server containers on Windows Server 2019. Fortunately, a new Rancher Labs partner, Windocks, offers new options for SQL Server on Kubernetes and Rancher.
K3s and Traefik partner to speed up cloud native application deployment.
Your team is tasked with implementing Kubernetes, but you’ve realized that your tasks go beyond Kubernetes. Never Fear, the Rancher Community is Here!
The Rancher community is 30,000+ strong and growing fast! In order to continue growing and nurturing our vibrant open source community, we’ve created the Trusted Ranch Hand Program. Learn more.
Sheng Liang, CEO of Rancher Labs, shares his thoughts about the latest VMWare announcements: Tanzu and Project Pacific, and on the future of Kubernetes in the market.
Set up k3s clusters and join other VMs as nodes running on an Amazon EC2 instance in less than 60 seconds using k3sup.
Some open source vendors think the number of code commits demonstrates their technical prowess and community commitment. I think users care much more about ‘business value’.
Rancher Labs recently hired marketing strategist Peter Smails as Vice President of Marketing. Peter explains why he joins Rancher and the market opportunity for container management software.
The RanchCast is an opportunity for engineers and operators to interact live with Rancher and Kubernetes community members.
This week Rancher Labs announced a record 161% year-on-year revenue growth, along with a 52% increase in the number of customers in the first half of 2019.
Rancher 2.3 Preview 2 dropped today, with preview support for Istio.
This is the third of a series of three articles focusing on Kubernetes security: the outside attack, the inside attack, and dealing with resource consumption or noisy neighbors.
This guide details how to rotate certificates for Rancher-launched and Rancher Kubernetes Engine CLI-provisioned Kubernetes clusters, both before expiry while the certificates are still valid and in the event that they have already expired.
This post outlines how to build a production-grade ingress solution using Citrix ADC on Rancher. Customers can confidently expose end user traffic to microservices or legacy workloads on Kubernetes clusters on Rancher using this solution.
Containers have become incredibly common in modern development workflows and production environments. But what exactly are they and why are they getting so much attention? In this article, we will talk about what containers are, how they differ from related technologies, and what primary advantages they provide for the individuals and teams who adopt them.
The primary way to administer Kubernetes clusters is through a command line utility called kubectl. In this guide, we will explain how kubectl works, how to install and configure it, and demonstrate how to use it to perform common actions on your Kubernetes clusters.
Rancher Labs announces a new project called Rio, a MicroPaaS that can be layered on any standard Kubernetes cluster.
Hashicorp's Terraform allows you to quickly provision infrastructure and other components in a scalable, repeatable way. Today, we're announcing the release of a Terraform provider for Rancher 2 to help you provision and manage your Rancher and Kubernetes clusters with ease.
Rancher 2.1.0 supported Windows containers in experimental mode. Now we have upgraded Rancher to support the latest version of Windows containers and Kubernetes.
Today we launched a new open source project called k3OS. K3OS is a Linux distro built for the sole purpose of running Kubernetes clusters. Read more.
This article walks though connecting GitLab's Auto DevOps feature to a Rancher-managed Kubernetes cluster using a Rancher feature called Authorized Cluster Endpoint.
This article will compare and contrast six operating systems commonly used in container deployments. It will present information on why the choice of operating system matters, and how differences in application may require differences in operating system.
In this article, we will speak about some basic Kubernetes concepts and its master node architecture, concentrating on Kubernetes node components.
Rancher is the first multi-cluster, multi-cloud Kubernetes management platform.
In this article, we talk about monitoring for scaling and life cycle management with the help of built-in tools like probes and horizontal pod autoscaler. A previous article covered monitoring and metrics for users using tools like the Kubernetes dashboard and cAdvisor. We will test each one of these tools to see what they offer and how they can help us.
In this article, we talk about monitoring Kubernetes with the help of built-in tools like the dashboard and cAdvisor. In part 2, we will cover scaling and life cycle management using other built-in tools like probes and horizontal pod autoscaler. We will then test each one of these to see what they offer and how they can help us.
Containerized applications have the ability to rapidly transform IT environments by enabling faster development, predictable deployments, and more flexible architectures. In spite of these advantages, it can still be difficult to communicate the value of containers to businesses. In this guide, we address some of these challenges to help make a case for container adoption within your organization.
Big data is a category of data management, processing, and storage that is primarily defined by its scale. Conventional data processing techniques and tooling are often not suitable for the volume, velocity, and variety of data generated by some modern environments, so new paradigms had to be developed. In this article, we introduce big data concepts and discuss why and how they can be useful.
Rancher 2.2 has reached GA and is available for immediate use. It's packed with features for Day 2 Kubernetes operations, designed to make clusters and their workloads more available and easier to manage. This article describes the main features in 2.2, their benefits, and when to use them.
CNI, or Container Network Interface, is a standard system for provisioning networking for containers, especially for multi-host orchestrators like Kubernetes. In this article, we'll describe what CNI is, why it's helpful, and then compare some popular CNI plugins for establishing the network for Kubernetes containers.
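For readers new to CNI, a plugin is driven by a small JSON configuration document. The following is a minimal configuration for the reference bridge plugin from the CNI project; the network name and subnet here are illustrative, not taken from the article:

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Plugins like Flannel, Calico, and Canal differ mainly in how they implement the routing and policy behind this interface; the configuration contract stays the same.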
My name is Jason van Brackel and I'm Rancher Labs' new Director of Community. In this post, I'd like to introduce myself and outline my vision of the community we can shape together.
Rancher's newest open source project, Submariner, creates a single network across all Kubernetes clusters running on premise or in the cloud.
When evaluating application and system architecture, it is important to understand your options and their implications. In recent years, highly distributed systems have become popular, in part due to an influx of sophisticated tooling and an evolution in system management practices. In this guide, we will discuss some of the historical contexts from which distributed systems emerged and offer some general advice on what to keep in mind when designing these applications.
A brief look into the critical differences of VMware vs Docker containers as a platform for application deployment.
The release of k3s has been met with enthusiasm by the Kubernetes community. Find out why k3s has become so popular so quickly and what teams are already doing with k3s one week after its launch.
Microservices are an alternative to monolithic application architecture that can help businesses adapt to modern deployment environments and increase their development velocity. In this article, we'll discuss the differences between these two approaches and the reasons organizations might want to consider microservices.
More than 20,000 production environments run Rancher, and more than 200 businesses across finance, health care, military, government, retail, manufacturing, and entertainment engage with Rancher Labs commercially because they recognize that Rancher works better than other solutions. Read on to learn why those who use Rancher are passionate about its benefits.
This article covers Kubernetes security solutions that have an eye toward keeping clusters safe from unauthorized inside access. Second in a series of articles on Kubernetes security. Read more here.
Today Rancher Labs is announcing a new open source project, k3s, which is a lightweight, easy to install Kubernetes distribution geared towards resource-constrained environments and low touch operations.
In this tutorial, we walk through using Rancher to deploy a Redis cluster within Kubernetes. After following the steps in this article, you will have a fully functional installation of Redis, and you will have tested the cluster's availability under failure conditions.
This is the first of a series of three articles focusing on Kubernetes security: the outside attack, the inside attack, and dealing with resource consumption or noisy neighbors.
This article by Rancher's Head of Product Management describes the difference between Kubernetes scale-up and scale-out, and the need for multi-cluster application solutions to handle the challenges of scaling Kubernetes in production.
As enterprises move to Kubernetes at a rapid rate, some common experiences and challenges are emerging. In this post, we look at some of the current trends in enterprise Kubernetes adoption and explain how free Rancher Rodeos can help teams learn how to manage these scenarios.
This article covers some of the major advantages and disadvantages of two of the most popular container orchestration tools: Kubernetes and Docker Swarm. We describe each piece of software and then dive in to compare across different features.
This article covers the high level details of CVE-2019-5736, mitigations and patches
Rancher's multi-cluster applications are the easiest way to add reliability to applications running in multiple Kubernetes clusters.
In this article, we differentiate between Rancher and related components like RKE and custom clusters. We talk about what each piece is responsible for and how they work together to enable better cluster management.
In this article we talk about Etcd, what it is, how it works, and how Kubernetes is using it internally. We then walk through how to use Rancher to deploy an Etcd cluster within Kubernetes. By following the steps in this article, you will have a fully functional installation of Etcd. Once Etcd is up and running, we will go over some basic Etcd commands and demonstrate Etcd's cluster availability under failure conditions.
Today we announced releases v2.1.6 and v2.0.11 to address two security vulnerabilities recently discovered in Rancher. The first vulnerability allows users in the Default project of a cluster to escalate privileges to that of a cluster admin through a service account. The second vulnerability allows members to have continued access to create, update, read, and delete namespaces in a project after they have been removed from it. You can view the official CVEs: CVE-2018-20321 and CVE-2019-6287.
In this article, we explore Kubernetes namespaces as a way to organize and manage objects within a cluster.
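As a quick illustration of the idea (this sketch is mine, not from the article): a namespace is itself an API object, and workloads are placed into it through their metadata. The names `staging` and `web` are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: staging
spec:
  containers:
  - name: web
    image: nginx:1.15
```

Objects in different namespaces can share names without conflict, which is what makes namespaces useful for separating teams and environments within one cluster.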
This article analyzes the recent CNCF article, '9 Kubernetes Security Best Practices Everyone Must Follow' and discusses how Rancher, RKE, and RancherOS satisfy these by default. I also discuss the Rancher Hardening Guide, which covers 101 more security changes that will secure your Kubernetes clusters.
This blog describes steps to migrate Rancher 2.1.x from a single node installation to a high availability installation.
This blog describes how Rancher and its managed Kubernetes clusters can be affected by the recently announced vulnerabilities in proxying external IPs and the dashboard.
Learn how to create a hybrid Kubernetes registry for any application deployed on any cluster managed by Rancher and JFrog Artifactory.
This tutorial walks through using Rancher to deploy Elasticsearch into a Kubernetes cluster. At the end of this article, you will have a fully functional 2-node Elasticsearch cluster, complete with sample data and examples of successful queries.
Swapnil Bhartiya of TFiR interviewed Rancher co-founder and CEO Sheng Liang at KubeCon China. The ensuing conversation will teach you about the fascinating ways Kubernetes enhances IT infrastructure from the ground up. Watch the video or read the transcript.
Ankur Agarwal, Rancher's Head of Product Management, describes new features in Rancher 2.2. Learn how to monitor multiple Kubernetes clusters in this step-by-step tutorial and how our new preview release process works.
Today Rancher announces a partnership with Arm to create a Kubernetes-based platform for IoT, edge, and data center nodes, all powered by Arm servers. Rancher and Arm are working jointly on a Smart City project in China. Read more here.
Monitoring a Kubernetes cluster allows engineers to observe its resource utilization and take action when something goes wrong. This article explores what you should be monitoring and how to go about it with Rancher, Prometheus, and Grafana.
Darren Shepherd, Rancher co-founder and Chief Architect, describes the Kubernetes critical CVE issue he discovered, how it came to a resolution, and what it says about the Kubernetes open-source community.
In this tutorial, we will walk through using Rancher to deploy and scale Jenkins on top of Kubernetes. By following steps from this article, you will create a fully functional installation of Jenkins with a master-agent architecture that we use to test real build jobs.
This article describes how continuous integration, delivery, and deployment can help development teams build and release software quickly and reliably.
Rancher has now added support for Alibaba Cloud Container Service for Kubernetes (ACK) and Tencent Kubernetes Engine (TKE). The integration will be available in Rancher 2.2, scheduled to ship in early 2019.
This article is a continuation of Deploying JFrog Artifactory with Rancher. In this chapter we'll demonstrate how to use JFrog Artifactory as a private repository for your own Docker images.
In this article we'll walk through using Rancher to deploy and manage JFrog Artifactory on a Kubernetes cluster. When you have finished reading this article, you will have a fully functional installation of Artifactory, and you can use the same steps to install the OSS or commercial version of Artifactory in any other Kubernetes cluster.
Rancher was identified as a leader along with Docker Enterprise Edition and RedHat OpenShift, and was specifically called out for excelling in providing multi-cluster Kubernetes management.
When traffic increases, we need to have a way to scale our application to keep up with user demand. With Kubernetes multi-cluster management through Rancher, scaling has never been easier and more efficient. Read here about scaling Kubernetes and the challenges you might be facing when managing a hybrid cloud environment.
This demonstration by Rancher Engineer Prachi Damle shows users how to migrate applications from Rancher 1.6 Cattle to Rancher 2.0 Kubernetes.
Thoughts on Kubernetes design choices, complexity, and usability
Kubernetes vs Docker: What's the difference? Read our introduction to Docker and Kubernetes, and the pressures of delivering reliable applications at scale.
In our introduction to container security, we discuss the issues surrounding this new technology and what you can do to address them. Read more at Rancher.
Learn more about setting up a basic Kubernetes cluster with ease using Rancher Kubernetes Engine (RKE). For more trainings and tutorials, visit Rancher.
A look at some of the highlights in the upcoming Kubernetes 1.12 release.
When you try to create persistent storage in Kubernetes, the first two concepts you will likely encounter are the Kubernetes PV and PVC (PersistentVolume and PersistentVolumeClaim).
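As a minimal sketch of how the two objects relate (resource names like `demo-pv` are illustrative, and `hostPath` is used only because it needs no external storage backend): an administrator provides a PV, and a workload claims matching capacity through a PVC.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/demo-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Kubernetes binds the claim to a volume whose capacity and access modes satisfy the request; pods then mount storage by referencing the PVC, never the PV directly.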
This tutorial gets you up and running with using Kubernetes deployment tools to deploy a cluster on Exoscale. Read our tips and best practices at Rancher.
This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0. Learn more about application load balancing options in Rancher 2.0
In this Kubernetes tutorial we explore the many benefits of containers for an application and how to orchestrate their lifecycles. Read more at Rancher.
This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0. Learn more about how internal service discovery works in Rancher 2.0
This introduction to Vitess explains what Vitess is and how to get started with the database clustering tool on a Kubernetes cluster. Read more here.
Rancher 1.6 is a widely used container orchestration platform that runs and manages Docker and Kubernetes in production. This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0. This article explores how to map the 1.6 scheduling options to Rancher 2.0
Project Longhorn v0.3.0 Release
When your application is user-facing, ensuring continuous availability and minimal downtime is a challenge, so monitoring the health of the application is essential to avoid outages. This article explains how to monitor the health of your applications on Kubernetes clusters in Rancher 2.0.
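The standard Kubernetes building blocks for this are liveness and readiness probes. A minimal sketch, assuming an hypothetical container serving HTTP on port 80 with a `/healthz` endpoint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.15
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```

A failing liveness probe causes the kubelet to restart the container, while a failing readiness probe only removes the pod from service endpoints until it recovers.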
This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0. It explores how to expose Kubernetes workloads publicly using port mapping in Rancher 2.0
Rancher 1.6 is a widely used container orchestration platform that runs and manages Docker and Kubernetes in production. This article is a continuation in a series on migrating from Rancher 1.6 to Rancher 2.0
This blog covers building a CI/CD Pipeline using the hosted GitLab.com solution. The Kubernetes integrations that are covered are generic and should work with any CI/CD provider that interfaces directly with Kubernetes using a service account. Tools used are Auto DevOps, Rancher, and GitLab.
One of the nicer features of Kubernetes is the ability to configure autoscaling on your running services. Without autoscaling, it's difficult to accommodate deployment scaling and meet SLAs. This article will show you how to autoscale your services on Kubernetes using the Horizontal Pod Autoscaler.
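To give a sense of what this looks like, here is a minimal HorizontalPodAutoscaler manifest (the deployment name `web` and the thresholds are hypothetical, not taken from the article):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

The controller periodically compares observed CPU utilization against the target and adjusts the deployment's replica count within the configured bounds; metrics collection (e.g. metrics-server) must be running in the cluster for this to work.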
In this blog series, we will try to explore how various features supported by Rancher 1.6 using Cattle can be mapped to their equivalents in the Kubernetes world using Rancher 2.0. Read part 1 here.
Service mesh is a new technology stack aimed at solving the connectivity problem between cloud native applications. Read an overview at the Rancher blog.
Leveraging Datadog with Rancher gives you a full stack view of your applications running on Kubernetes clusters, wherever they are hosted. Learn more.
We are excited to announce a new version of Rancher, released on July 11th, 2018. The latest release is version 2.0.6. Read an overview of the enhancements and new features in Rancher, open-source container management for running apps in production.
Datadog is a popular hosted monitoring solution for aggregating and analyzing metrics and events for distributed systems. Leveraging Datadog with Rancher can then give you a full stack view of all of your applications running on Kubernetes clusters, wherever they are hosted.
Learn about the Rancher management plane architecture, where every API resource is represented as a CustomResourceDefinition (CRD) and every functional routine runs as a Kubernetes controller.
This article series focuses on what metrics, tools, and best practices engineering teams need to know in order to successfully manage workloads on Kubernetes clusters at scale. If you're building a distributed system, releasing new features, and avoiding regression - this article is for you.
Objective: In this article, we will walk through running a distributed, production-quality database setup managed by Rancher and characterized by stable persistence. We will use Stateful Sets with a Kubernetes cluster in Rancher for the purpose of deploying a stateful distributed Cassandra database. Pre-requisites: We assume that you have a Kubernetes cluster provisioned with a cloud provider. Consult the Rancher resource if you would like to create a K8s cluster in Amazon EC2 using Rancher 2.
Rancher's Solutions Architect Jason van Brackel reviews the ExternalDNS subproject of Kubernetes. Learn what ExternalDNS is, and get a step-by-step instructions and helper code for the subproject. Read more here.
Alerting is one of the cool new features introduced in Rancher 2.0. These Kubernetes monitoring features were frequently requested under 1.x, so they were high on the feature list when we started development on 2.0. Learn how to create Kubernetes cluster-level and workload alerts in Rancher 2.0.
Learn different ways of load balancing traffic to your Kubernetes workload with Rancher.
With EKS now generally available, Rancher is excited to announce integration with the new managed Kubernetes cluster solution from AWS.
In this post we discuss how to back up etcd and how to restore a Kubernetes cluster from a backup. Etcd is a highly available distributed key-value store that provides a reliable way to store data across machines.
Live migration of virtual machines is now supported by RancherVM. Learn how to setup shared storage and run VM migration to different hosts.
The shiny new tool at KubeCon Europe in May 2018 is gVisor, a sandboxed container runtime authored by Google that acts as a user-space kernel. Read about gVisor, what it is, and how to use it.
Don't have access to Cloud infrastructure? Maybe you would like to use Rancher for local Kubernetes deployments just like you do in production? No problem, you can install Rancher 2.0 on your desktop. Learn how here.
Rancher Kubernetes Engine is a lightweight, easy to use installer for Kubernetes.
Shannon Williams discusses takeaways from KubeCon Europe 2018. As co-founder of Rancher Labs and after attending KubeCon every year since 2015, Shannon sees important lessons from the conference that affect small and large organizations who are adopting Kubernetes. Read now.
It’s sometimes not possible to use hosted services like GKE or AKS, and there are occasions where direct internet access is not possible (offline/air-gapped). In these instances it is still possible to use Rancher to manage your clusters. In this post we’ll walk through what you need to do when you want to run Rancher 2.0 in an offline/air-gapped environment. Everything Rancher related runs in a container, so a private Docker registry to store the containers in your environment is the first requirement.
Rancher 2.0 builds on top of a strong base in Kubernetes authentication and authorization. Explore the benefits we provide to organizations, admins and users.
Rancher 2.0 is an open-source, enterprise Kubernetes container orchestration platform for running containers in production.
The design of RancherVM relies heavily on Docker containers and container registries.
Rancher 1.6 and Rancher 2.0 have slightly different terms and concepts underpinning the container orchestration engine. Learn the fundamental differences between Cattle and Kubernetes. For anyone who has used Cattle or is new to Kubernetes, this article is for you. Get a container orchestrator Cattle to Kubernetes glossary cheatsheet as well.
Use our step-by-step guide to develop a continuous delivery pipeline to a Kubernetes cluster using Webhooks on Rancher.
Rancher 2.0 was built with many things in mind. You can provision and manage Kubernetes clusters, deploy user services onto them and easily control access with authentication and RBAC. One of the coolest things about Rancher 2.0 is its intuitive UI, which we’ve designed to try and demystify Kubernetes, and accelerate adoption for anyone new to it. In this tutorial I’ll walk you through that new user interface, and explain how you can use it to deploy a simple NGINX service.
I’m excited to announce that today we achieved feature freeze on Rancher 2.0. This is an important milestone in our journey towards a GA release, which we’re targeting for the end of April. We have upstreamed all of the critical features into Rancher 2.0 master branch, and we are ready to enter the final beta phase focused on testing, documentation, and scalability. We started 2.0 development more than a year ago.
Since we announced Project Longhorn last year, there has been a great deal of interest in running Longhorn storage on a Kubernetes cluster. Today, I am very excited to announce the availability of Project Longhorn v0.2, which is a persistent storage implementation for any Kubernetes cluster. Once deployed on a Kubernetes cluster, Longhorn automatically clusters all available local storage from all the nodes in the cluster to form replicated and distributed block storage.
Recently, we announced our second milestone release, Rancher 2.0 Tech Preview 2. This includes the ability to add custom nodes (nodes that are already provisioned with a Linux operating system and Docker) by running a generated docker run command to launch the rancher/agent container, or by connecting over SSH to that node. In this post, we will explore how we can automate the generation of the docker run command used to add nodes.
Jenkins has been the industry-standard CI tool for years. It contains a multitude of functionalities, and with almost 1,000 plugins in its ecosystem, it can be daunting to those who appreciate simplicity. Jenkins also came up in a world before containers, though it does fit nicely into that environment. This means there has not been a particular focus on the things that make containers great, though with the inclusion of Blue Ocean and pipelines, that is rapidly changing.
Today we released the second tech preview of Rancher 2.0, our next major Rancher product release. We’ve been hard at work since the last tech preview release in September 2017, driven by the overwhelmingly positive response to our Rancher 2.0 vision and a great deal of feedback we have received. The Tech Preview 2 release contains many significant changes and enhancements: Rancher server is now 100% written in Go and no longer requires a MySQL database.
Last month I had the great pleasure of attending Kubecon 2017, which took place in Austin, TX. The conference was super informative, and deciding on what session to join was really hard as all of them were great. But what deserves special recognition is how well the organizers respected the attendees’ diversity of Kubernetes experiences. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started.
It is not an overstatement to say that, when it comes to container technologies, 2017 was the year of Kubernetes. While Kubernetes has been steadily gaining momentum ever since it was announced in 2014, it reached escape velocity in 2017. Just this year, more than 10,000 people participated in our free online Kubernetes Training classes. A few other key data points: Our company, Rancher Labs, built a product that supported multiple container orchestrators, including Swarm, Mesos, and Kubernetes.
With the recent “container revolution,” a seemingly new idea became popular: immutable infrastructure. In fact, it wasn’t particularly new, nor did it specifically require containers. However, it was through containers that it became more practical, understandable, and got the attention of many in the industry. So, what is immutable infrastructure? I’ll attempt to define it as the practice of making infrastructure changes in production only by replacing components instead of modifying them.
In this tutorial Rancher looks at the steps of building out a highly available WordPress deployment using Kubernetes and MySQL. Visit us to learn more.
If you use Rancher 1.6, you probably already know about Rancher Catalog, which lets your Rancher system users create and share application templates without the need for any technical knowledge about the applications. In this tutorial, you’ll learn how to create productive and reproducible private catalog templates, provide templates in a self-service portal, share services between distinct development teams, manage versions and updates of your services, and offer your private templates to all Rancher users and customers. You can configure Rancher Catalog with different catalog repositories (repos) to get distinct templates.
Back in older times, B.C. as in Before Cloud, to put a service live you had to spend months figuring out how much hardware you needed, wait at least eight weeks for your hardware to arrive, allow another four weeks for installation, then configure firewall ports, and finally add servers to config management and provision them. All of this was in an organised company! The new norm is to use hosted instances.
Partnership Combines Rancher 2.0 with Canonical Kubernetes and Leading Cloud OS, Ubuntu Today, we joined Canonical in announcing the Canonical Cloud Native Platform, a new offering that provides complete support and management for Kubernetes in the Enterprise. The Cloud Native Platform combines Rancher 2.0 container management software with Canonical Ubuntu and Ubuntu Kubernetes, and will be available when Rancher 2.0 launches next spring. This announcement is an enormous accomplishment for our team here at Rancher.
Today, Amazon announced a managed Kubernetes service called Elastic Container Service for Kubernetes (EKS). This means that all three major cloud providers—AWS, Azure, and GCP—now offer managed Kubernetes services. This is great news for Kubernetes users. Even though users always have the option to stand up their own Kubernetes clusters, and new tools like Rancher Kubernetes Engine (RKE) make that process even easier, cloud-managed Kubernetes installations should be the best choice for the majority of Kubernetes users.
Today, we are announcing a new open-source project called the Rancher Kubernetes Engine (RKE), our new Kubernetes installer. RKE is extremely simple, lightning fast, and works everywhere. Why a new Kubernetes installer? In the last two years, Rancher has become one of the most popular ways to stand up and manage Kubernetes clusters. Users love Rancher as a Kubernetes installer because it is very easy to use. Rancher fully automates etcd, the Kubernetes master, and worker node operations.
Installing Kubernetes can be one of the toughest problems for operations and DevOps. Learn more about Rancher's lightweight tool for installing Kubernetes.
In this post, we'll explore the fine control you will have over your Rancher 2.0 deployments when you leverage the role-based access control (RBAC) that Kubernetes brings to the platform.
I spend a large amount of my time helping clients implement Rancher successfully. As Rancher is involved in just about every vertical, I come across a large number of different infrastructure configurations, including (but not limited to!) air-gapped, proxied, SSL, HA Rancher Server, and non-HA Rancher Server. What I wanted was a way to quickly emulate an environment to allow me to more closely test or replicate an issue.
Let’s explore the new software and features in the latest release of Rancher, which you can use by running the rancher/server:v1.6.11 image. Check our documentation on Installing Rancher Server if you need help running Rancher. NOTE: Keep in mind that Rancher v1.6.11 is tagged as latest, which means it’s not ready for production use. Our current stable version recommended for use in production is Rancher v1.6.10.
Earlier this year, I received an email invitation from Dan Kohn, Executive Director of Cloud Native Computing Foundation (CNCF), about a meeting at KubeCon Berlin to explore a Kubernetes conformance program. That effort culminated in today’s official launch of the Kubernetes Software Conformance Certification program. Rancher is among the first group of vendors and community members participating in this effort. Users of the Rancher Kubernetes distribution now have the peace of mind of running on a certified Kubernetes platform.
Rancher 2.0 is out and odds are, you’re wondering what’s so shiny and new about it. Well, here’s a huge selling point for the next big Rancher release; Kubernetes cluster adoption! That’s right, we here at Rancher wanted more kids, so we decided it was time to adopt. In all seriousness though, this feature helps make Rancher more relevant to developers who already have Kubernetes clusters deployed and are looking for a new way to manage them.
Having a cool deployment system is pretty neat, but one thing every engineer learns one way or another is that manual processes aren’t processes, they’re chores. If you have to do something more than once, you should automate it if you can. Of course, if the task of automating the process takes longer than the total projected time you’ll spend executing the process, you shouldn’t automate it. XKCD 1205 - Is It Worth the Time?
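The rule of thumb above (and XKCD 1205) reduces to simple arithmetic; here is a minimal sketch of that break-even check (the function name and parameters are illustrative, not from any real tool):

```python
def automation_pays_off(runs, minutes_saved_per_run, automation_cost_minutes):
    """Return True if automating a chore saves more time than it costs.

    Total projected savings is runs * minutes_saved_per_run; automation
    is only worth it when that exceeds the time spent building it.
    """
    return runs * minutes_saved_per_run > automation_cost_minutes

# A daily 5-minute chore over a year easily justifies a full day of scripting:
print(automation_pays_off(runs=365, minutes_saved_per_run=5,
                          automation_cost_minutes=8 * 60))  # True
```

The same check quickly rules out automating a task you'll only ever run twice.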
Containers generally deploy faster and perform better than virtual machines. Visit Rancher to explore five tips for making Docker technology faster.
One of the hallmark features of Rancher 2.0 is its ability to consume Kubernetes clusters from anywhere. In this post, I’m going to walk you through using the popular kops tool to create and manage Kubernetes clusters on AWS and then bring them under Rancher 2.0 management. This walkthrough will help you create a non-HA Kubernetes cluster, though kops does support HA configurations. With this new cluster, we will run the Rancher 2.
Rancher 2.0 is coming, and it’s amazing. In the Beginning... When Rancher released 1.0 in early 2016, the container landscape looked completely different. Kubernetes wasn’t the powerhouse that it is today. Swarm and Mesos satisfied specific use cases, and the bulk of the community still used Docker and Docker Compose with tools like Ansible, Puppet, or Chef. It was still BYOLB (bring your own load balancer), and volume management was another manual nightmare.
Container monitoring environments come in all shapes and sizes. Some are open source while others are commercial. Some are available in the Rancher Catalog while others require manual configuration. Some are general purpose while others are aimed specifically at container environments. Some are hosted in the cloud while others require installation on your own cluster hosts. In this post, we take an updated look at 10 container monitoring solutions. This effort builds on earlier work, including Ismail Usman’s Comparing 7 Monitoring Options for Docker from 2015 and The Great Container Monitoring Bake Off Meetup in October of 2016.
I just came back from DockerCon EU. I have not met a more friendly and helpful group of people than the users, vendors, and Docker employees at DockerCon. It was a well-organized event and a fun experience. I went into the event with some questions about where Docker was headed. Solomon Hykes addressed these questions in his keynote, which was the highlight of the entire show. Docker embracing Kubernetes is clearly the single biggest piece of news coming out of DockerCon.
Google Container Engine, or GKE for short (the K stands for Kubernetes), is Google’s offering in the space of Kubernetes runtime deployments. When used in conjunction with a couple of other components from the Google Cloud Platform, GKE provides a one-stop shop for creating your own Kubernetes environment, on which you can deploy all of the containers and pods that you wish without having to worry about managing Kubernetes masters and capacity.
When public clouds first began gaining popularity, it seemed that providers were quick to append the phrase “as a service” to everything imaginable, as a way of indicating that a given application, service, or infrastructure component was designed to run in the cloud. It should therefore come as no surprise that “Containers as a Service,” or CaaS, refers to a cloud-based container environment. But there is a bit more to the CaaS story than this.
I am heading to Copenhagen this week to attend DockerCon Europe 2017 (you can still register for the conference here). Because we created Rancher to serve the market needs resulting from the widespread adoption of Docker technology, we have maintained a strong presence at every DockerCon conference over the last three years. DockerCon is special—not only is it a gathering place for major industry players, it is one of the few events that brings together far more users than vendors.
I’m not gonna tell you how to live your life—that’s for your doctor to do. What I am gonna tell you is how a beautifully poetic dynamic duo of DevOps delightfulness can make your next project shine brighter than the sun and give you more marketable skills. We live in a world where everything is becoming more modular. From your phone to your Keurig coffee maker to your USB type-C laptop setup, modularity allows you to do more and rearrange components of your life to best suit your needs.
We would like to quickly explain and address the recent metasploit module, which was created to exploit Rancher servers and Docker hosts. This is not a security issue because it only works in the following two scenarios: 1. Your Rancher server does not have authentication enabled While Rancher does not require you to enable authentication, you should always enable it if you are deploying Rancher in an untrusted environment (e.
Rancher looks at what you need to know about serverless computing, how it compares to containers, and how it can figure into your IT strategy. Learn more.
Attention, Ansible users! We’ve released the first version of our Ansible playbooks for Rancher. Ansible is a configuration management system that allows you to write instruction manuals it uses to manage local and remote systems. These playbooks give full control over the installation and configuration of Rancher server and agent nodes, with features that include: static inventory; dynamic inventory via EC2 tags; detection of multiple servers and automatic configuration of HA; support for local, bind-mount, and external databases; an optional local HAProxy with SSL termination for single-server deployments; and Ansible Vault for secure storage of secrets. This first release is for Ubuntu and Debian, and it targets EC2 as a provider.
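A static inventory for playbooks like these might look as follows. This is a hedged sketch: the group names and hosts are illustrative assumptions, not the playbooks' actual interface.

```ini
; Illustrative static inventory; real group names may differ.
[rancher_server]
server01.example.com

[rancher_agent]
agent01.example.com
agent02.example.com

[rancher_agent:vars]
ansible_user=ubuntu
```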
It’s finally here: the Rancher you’ve all been waiting for. Rancher 2.0 is now in preview mode and available to deploy! Rancher 2.0 brings us a whole new Kubernetes-based structure, with new features like platform-wide multi-select, adoption of existing Kubernetes clusters, and much, much more. If you’re looking to dive in with Rancher 2.0, you’ve come to the right place. Assumptions You have a Linux host with at least 4 GB of RAM.
Ready to make the big move to containers? If you’re thinking of moving services from an existing, non-containerized system to a container-based environment, you’re probably wondering how to do it. Is there a right way? Is there a best way? Is there even a single lift-and-shift process that can be applied to all applications? The answer to those questions is—in general, yes. While the specifics of a migration to containers and microservices will vary from organization to organization, there are general principles and best practices that you should follow to achieve a seamless lift-and-shift of your apps from legacy infrastructure to a containerized environment.
If you’ve followed the container space recently, you’ve likely seen the influx of Kubernetes-related technologies being announced. So, when another one comes along, it’s easy to be less than excited about it. However, in the case of Rancher’s recent product announcement, it’s well worth your time. The engineering team at Rancher Labs has been working on some new ideas that I think will have a real influence on the way we all think about Kubernetes (K8s).
Update: Rancher 2.0 Tech Preview has since gone to GA. Read the announcement here. We achieved another significant milestone today at Rancher Labs. After months of hard work, our engineering team released a technology preview of the Rancher 2.0 container management platform. Rancher 2.0 builds on the tremendous momentum of market-leading Rancher 1.x container management software. Since we shipped Rancher 1.0 in March 2016, Rancher server and Rancher agent have been downloaded over 60 million times.
To better understand how the RBAC feature works, this post will shed light on how authentication works with the Kubernetes API, and how the RBAC authorization module works with authenticated users. Read more here.
Learn more about the steps of installing Docker on Windows, and explore the similarities and differences between Windows docker containers and Linux containers.
Update: This tutorial was updated for Rancher 2.x in 2019 here. Any time an organization, team or developer adopts a new platform, there are certain challenges during the setup and configuration process. Often installations have to be restarted from scratch and workloads are lost. This leaves adopters apprehensive about moving forward with new technologies. The cost, risk and effort are too great in the business of today. With Rancher, we’ve established a clear container installation and upgrade path so no work is thrown away.
RancherOS v1.1.0 is now available! It includes a number of key enhancements such as: VMware ESXi support; improved OS-level logging, including boot-time logs; remote Syslog logging; and built-in Logrotate and Cron services. Syslinux support has improved with the addition of a boot menu, allowing you to select debug, autologin, and recovery consoles. The reboot command can kexec into the latest and previous OS versions. With RancherOS v1.1.0 you still select Docker engines between 1.
I’m pleased to announce that Rancher has released a new Terraform module for deploying Rancher on Google Compute Engine (GCE). This complements our existing module for Amazon Web Services (AWS). Terraform is an excellent tool for managing infrastructure as code, and many of our users already make use of it elsewhere in their environments. Using this module along with either GCE or AWS to orchestrate Rancher gives you the ability to define the entirety of the stack—from the application layer being managed by Docker Compose or Kubernetes resource YML in Rancher all the way down to the servers and networks in the Terraform plan.
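Using a Terraform module like this typically looks something like the following. This is a sketch under stated assumptions: the module source path and every variable name here are hypothetical, not the published module's exact interface.

```hcl
# Illustrative only — check the module's README for its real inputs.
module "rancher" {
  source       = "github.com/rancher/terraform-rancher-gce" # hypothetical path
  project      = "my-gcp-project"
  machine_type = "n1-standard-2"
  node_count   = 3
}
```

The value of the approach is that `terraform plan` then shows the whole stack, from networks and servers up to the Rancher installation, as one reviewable change set.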
A step-by-step guide: Rancher is now available for easy deployment from the Amazon Web Services (AWS) Marketplace. While Rancher has always been easy to install, availability in the marketplace makes installing Rancher faster and easier than ever. In the article below, I provide a step-by-step guide to deploying a working Rancher environment on AWS. The process involves two distinct parts: in part I, I step through the process of installing a Rancher management node from the AWS Marketplace; in part II, I deploy a Kubernetes cluster in AWS using the Rancher management node deployed in part I. From my own experience, it is often the small details that lead to trouble.
Update: This tutorial on Istio was updated for Rancher 2.0 here. One of the recent open source initiatives that has caught our interest at Rancher Labs is Istio, the microservices development framework. It’s a great technology, combining some of the latest ideas in distributed services architecture in an easy-to-use abstraction. Istio does several things for you. Sometimes referred to as a “service mesh”, it has facilities for API authentication/authorization, service routing, service discovery, request monitoring, request rate-limiting, and more.
Since its founding in 2015, the Cloud Native Computing Foundation (CNCF) has become one of the most important movers and shakers in the open source ecosystem—especially when it comes to tools that affect containers and other “cloud-native” technologies. CNCF was established to promote and organize projects related to large-scale industry trends towards containerization, orchestration, and microservices architectures. In the time since, 10 open source projects have been added to the foundation.
It’s 8:00 PM. I just deployed to production, but nothing’s working. Oh, wait. the production Kinesis stream doesn’t exist, because the CloudFormation template for production wasn’t updated. Okay, fix that. 9:00 PM. Redeploy. Still broken. Oh, wait. The production config file wasn’t updated to use the new database. Okay, fix that. Finally, it works, and it’s time to go home. Ever been there? How about the late night when your provisioning scripts work for updating existing servers, but not for creating a brand new environment?
Join Rancher in taking a closer look at Kubernetes load balancing, and the built-in tools used for managing communication between individual pods.
Kubernetes is designed to address some of the difficulties that are inherent in managing large-scale containerized environments. However, this doesn’t mean Kubernetes can scale in all situations all on its own. There are steps you can and should take to maximize Kubernetes’ ability to scale—and there are important caveats and limitations to keep in mind when scaling Kubernetes. I’ll explain them in this article. Scale versus Performance The first thing that must be understood about scaling a Kubernetes cluster is that there is a tradeoff between scale and performance.
For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge. The decision affects not only deployment and management, but how applications are architected as well. DevOps teams need to think about details like how data is persisted, how containerized services communicate with one another, load balancing, service discovery, packaging and more. It turns out that the choice of orchestration engine is critical to all these areas.
Recently, I moved to New York City. As a new resident, I decided to take part in the NYC DeveloperWeek hackathon, where our team won the NetApp challenge. In this post, I’ll walk through the product we put together, and share how we built a CI/CD pipeline for quick, iterative product development under tight constraints. The Problem: Have you ever lived or worked in a building where it’s a pain to configure the buzzer to forward to multiple roommates or coworkers?
Container security was initially a big obstacle to many organizations in adopting Docker. However, that has changed over the past year, as many open source projects, startups, cloud vendors, and even Docker itself have stepped up to the challenge by creating new solutions for hardening Docker environments. Today, there is a wide range of security tools that cater to every aspect of the container lifecycle. Docker security tools fall into these categories:
For any team using containers – whether in development, test, or production – an enterprise-grade registry is a non-negotiable requirement. JFrog Artifactory is much beloved by Java developers, and it’s easy to use as a Docker registry as well. To make it even easier, we’ve put together a short walkthrough for setting up Artifactory in Rancher. Before you start For this article, we’ve assumed that you already have a Rancher installation up and running (if not, check out our Quick Start guide), and will be working with either Artifactory Pro or Artifactory Enterprise.
On July 25th, Luke Marsden from Weaveworks and Bill Maxwell from Rancher Labs led a webinar on ‘A Practical Toolbox to Supercharge Your Kubernetes Cluster’. In the talk they described how you can use Rancher and Weave Cloud to set up, manage and monitor an app in Kubernetes. In this blog, we’ll discuss how and why Weave developed the best-practice RED method for monitoring apps with Prometheus. What is Prometheus Monitoring?
In the world of containers, Kubernetes has become the community standard for container orchestration and management. But there are some basic elements surrounding networking that need to be considered as applications are built to ensure that full multi-cloud capabilities can be leveraged. The Basics of Kubernetes Networking: Pods. The basic unit of management inside Kubernetes is not a container; it is called a pod. A pod is simply one or more containers that are deployed as a unit.
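A minimal pod manifest makes the "one or more containers as a unit" idea concrete; both containers below share the pod's network namespace and can reach each other over localhost. The names and images are illustrative, not from the article.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:alpine      # serves traffic
    - name: sidecar            # deployed and scheduled together with app
      image: busybox
      command: ["sleep", "3600"]
```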
When deploying applications in the container world, one of the less obvious points is how to make the application available to the external world, outside of the container cluster. One option is to use the host port, which basically maps one port of the host to the container port where the application is exposed. While this option is fine for local development, it is not viable in a real cluster with many applications deployed.
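The host-port option described above can be sketched as a fragment of a pod spec; the container's port 80 is mapped straight onto port 8080 of whichever node the pod lands on (image and ports are illustrative):

```yaml
# Fine for local development; in a real cluster a Service or Ingress
# is used instead, since hostPort ties the app to a specific node port.
spec:
  containers:
    - name: app
      image: nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
```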
This article gives an introduction to Role-Based Access Control in Kubernetes and Rancher 1.6. Learn how admins can create user roles and permissions, and different best practice scenarios for authentication.
At Higher Education, we’ve tested and used quite a few CI/CD tools for our Docker CI pipeline. Using Rancher and Drone CI has proven to be the simplest, fastest, and most enjoyable experience we’ve found to date. From the moment code is pushed/merged to a deployment branch, code is tested, built, and deployed to production in about half the time of cloud-hosted solutions - as little as three to five minutes (Some apps take longer due to a larger build/test process).
Cyber security is no longer a luxury. If you need a reminder of that, just take a look at the seemingly endless number of stories appearing in the news lately about things like malware and security breaches. If you manage a Docker environment, and you want to help make sure your organization or users are not mentioned in the news stories that accompany the next big breach, you should know the tools available to you for helping to secure the Docker stack, and put them to work.
You have a complex monolithic system that is critical to your business. You’ve read articles and would love to move it to a more modern platform using microservices and containers, but you have no idea where to start. If that sounds like your situation, then this is the article for you. Below, I identify best practices and the areas to focus on as you evolve your monolithic application into a microservices-oriented application.
This is part two of our series on using GitLab and Rancher together to build a CI/CD pipeline, and follows part one from last week, which covered deploying, configuring, and securing GitLab in Rancher. We’ve also made the entire walkthrough available for download. Using GitLab CI Multi-Runner to Build Containers: GitLab CI is a powerful tool for continuous integration and continuous delivery. To use it with Rancher, we’ll deploy a runner that will execute jobs.
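A job the deployed runner would pick up might look like this minimal `.gitlab-ci.yml` sketch; the registry hostname and image name are assumptions for illustration:

```yaml
# .gitlab-ci.yml — minimal sketch of a container build job
build-image:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHA
```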
containerd is an industry-standard core container runtime that was initially released by Docker Inc. in December 2015 and contributed to CNCF in March 2017. We’ve received a number of questions about the project, so I thought I would provide you my perspective as well as some preliminary thoughts on how Rancher Labs will leverage it. Docker, Kubernetes, and containerd The containerd project represents an important step in the evolution of the Docker platform.
I am incredibly excited to be joining such a talented, diverse group at Rancher Labs as Vice President of Business Development. In this role, I’ll be building upon my experience of developing foundational and strategic relationships based on open source technology. This change is motivated by my desire to go back to my roots, working with small, promising companies with passionate teams. I joined Docker, Inc. in 2013, just as it started to bring containers out of the shadows and empower developers to write software with the tools of their choice, while redefining their relationship with infrastructure.
Note: This post is the first in a two-part series on using GitLab and Rancher together for continuous integration and deployment, and part two is now up. We’ve also made the entire walkthrough available for download. Introduction GitLab is, at its core, a tool for centrally managing Git repositories. As one might expect from a platform that provides this service, GitLab provides a robust authentication and authorization mechanism, groups, issue tracking, wiki, and snippets, along with public, internal, and private repositories.
One of the things that often surprises administrators when they first begin working with Docker containers is the fact that containers natively use non-persistent storage. When a container is removed, so too is the container’s storage. Of course, containerized applications would be of very limited use if there were no way of enabling persistent docker container storage. Fortunately, there are ways to implement persistent storage in a containerized environment. Although a container’s own native storage is non-persistent, a container can be connected to storage that is external to the container.
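Connecting a container to external storage is easiest to see in a Compose file; the named volume below outlives any container that mounts it (service and volume names are illustrative):

```yaml
# docker-compose.yml — the 'dbdata' volume persists across container removal
services:
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data   # data survives `docker rm db`
volumes:
  dbdata: {}
```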
Each time a new software technology arrives on the scene, InfoSec teams can get a little anxious. And why shouldn’t they? Their job is to assess and mitigate risk – and new software introduces unknown variables that equate to additional risk for the enterprise. It’s a tough job to make judgments about new, evolving, and complex technologies; that these teams approach unknown, new technologies with skepticism should be appreciated. This article is an appeal to the InfoSec people of the world to be optimistic when it comes to containers, as containers come with some inherent security advantages. Immutability: In a typical production environment, you have a number of things managing state on your servers.
Containers may be super cool, but at the end of the day, they’re just another kind of infrastructure. A seasoned developer is probably already familiar with several other kinds of infrastructure and approaches to deploying applications. Another is not really that big of a deal. However, when the infrastructure creates new possibilities with the way an application is architected—as containers do—that’s a huge deal. That is why the services in a microservice application are far more important than the containerized infrastructure they run on.
So you’ve decided to use microservices. To help implement them, you may have already started refactoring your app. Or perhaps refactoring is still on your to-do list. In either case, if this is your first major experience with refactoring, at some point, you and your team will come face-to-face with the very large and very obvious question: How do you refactor an app for microservices? That’s the question we’ll be considering in this post.
Are you monitoring your containers’ resources in real time? If not, then you’re probably not monitoring as effectively as possible. In a fast-moving, dynamic microservices environment, monitoring data that is even seconds old may no longer be actionable. To prevent disruptions, you need real-time monitoring. In this post, I explain why real-time monitoring of container resources is important, and which types of container metrics you should focus on monitoring in real time.
One of the great benefits of the Rancher container management platform is that it runs on any infrastructure. While it’s possible to add any Linux machine as a host using our custom setup option, using one of the machine drivers in Rancher makes it especially easy to add and manage your infrastructure. Today, we’re pleased to have a new machine driver available in Rancher, from our friends at cloud.ca. cloud.ca is a regional cloud IaaS for Canadian or foreign businesses requiring that all or some of their data remain in Canada, for reasons of compliance, performance, privacy or cost.
Discover how using containers can optimize cloud costs. Our self-contained models give you everything you need to run containers in production, on any platform.
In Kubernetes, we often hear terms like resource management, scheduling and load balancing. While Kubernetes offers many capabilities, understanding these concepts is key to appreciating how workloads are placed, managed and made resilient. In this short article, I provide an overview of each facility, explain how they are implemented in Kubernetes, and how they interact with one another to provide efficient management of containerized workloads. If you’re new to Kubernetes and seeking to learn the space, please consider reading our case for Kubernetes article.
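Resource management and scheduling meet in a container's resource stanza: the scheduler places the pod using its requests, and the runtime enforces the limits. A minimal fragment (values are illustrative):

```yaml
spec:
  containers:
    - name: app
      image: nginx:alpine
      resources:
        requests:        # what the scheduler reserves when placing the pod
          cpu: 250m
          memory: 128Mi
        limits:          # hard ceiling enforced at runtime
          cpu: 500m
          memory: 256Mi
```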
One of the more novel concepts in systems design lately has been the notion of serverless architectures. It is no doubt a bit of hyperbole as there are certainly servers involved, but it does mean we get to think about servers differently. The potential upside of serverless Imagine a simple web based application that handles requests from HTTP clients. Instead of having some number of program runtimes waiting for a request to arrive, then invoking a function to handle them, what if we could start the runtime on-demand for each function as a needed and throw it away afterwards?
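The on-demand idea above can be sketched in a few lines: instead of a long-lived runtime waiting on requests, a fresh context is created per invocation and discarded afterwards. This is a toy model of the concept, not any real platform's API.

```python
def handle_request(path):
    """The application's function: invoked on demand, holds no state."""
    return f"handled {path}"

def invoke(handler, event):
    # A serverless platform would spin up a fresh runtime (container or
    # microVM) here, run the handler once, and tear the runtime down.
    runtime_state = {}          # created on demand for this invocation...
    result = handler(event)
    del runtime_state           # ...and thrown away afterwards
    return result

print(invoke(handle_request, "/orders/42"))  # handled /orders/42
```

The trade-off, as the post goes on to discuss, is per-invocation startup cost in exchange for paying nothing while idle.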
Is service-oriented architecture, or SOA, dead? You may be tempted to think so. But that’s not really true. Yes, SOA itself may have receded into the shadows as newer ideas have come forth, yet the remnants of SOA are still providing the fuel that is propelling the microservices market forward. That’s because incorporating SOA principles into the design and build-out of microservices is the best way to ensure that your product or service offering is well positioned for the long term.
We’ve just released Rancher v1.6, the latest version of our container management platform. While we still recommend that production or mission-critical deployments use our most recent stable release, we’re excited to share what’s new in v1.6. In this release, we’ve built greater control for our users over their storage and secrets. Validating EBS Support We first implemented support for EBS before Rancher itself was even generally available, but in v1.
Fei Huang is Co-Founder and CEO of NeuVector. Managing containers spans application development, testing, and system OS preparation, and as a result, securing containers can be a broad topic with many separate areas. Taking a layered security approach works just as well for containers as it does for any IT infrastructure. There are many precautions that should be taken before running containers in production. These include:
Since Docker launched in 2013, it has brought a level of excitement and innovation to software development that’s contagious. It has rallied support from every corner—enterprises to startups, developers to IT folk, plus the open source community, ISVs, the biggest public cloud vendors, and every tool across the software stack. Since the launch of Docker, many major milestones have served to advance the container revolution. Let’s look at some of them.
At 360pi, we deliver commerce analytics that enable retailers to make sense of retail and shopper big data, which they can then use to improve their commerce strategy. Our infrastructure is all in Amazon Web Services, and up until now was simply EC2 instances built with our own AMIs. We used to maintain the traditional dev/test/master branch hierarchy in GitHub for our monolithic Python application, and we deployed those branches with Jenkins and Ansible scripts.
Why Smart Container Management is Key For anyone working in IT, the excitement around containers has been hard to miss. According to RightScale, enterprise deployments of Docker over doubled in 2016 with 29% of organizations using the software versus just 14% in 2015 [1]. Even more impressive, fully 67% of organizations surveyed are either using Docker or plan to adopt it. While many of these efforts are early stage, separate research shows that over two thirds of organizations who try Docker report that it meets or exceeds expectations [2], and the average Docker deployment quintuples in size in just nine months.
One of the first questions you are likely to come up against when deploying containers in production is the choice of orchestration framework. While it may not be the right solution for everyone, Kubernetes is a popular scheduler that enjoys strong industry support. In this short article, I’ll provide an overview of Kubernetes, explain how it is deployed with Rancher, and show some of the advantages of using Kubernetes for distributed multi-tier applications.
This week, the Moby Project was introduced with the idea of componentizing Docker into a series of assemblies. At DockerCon, a neat demo was done using the moby tool to assemble various components into customized Linux operating system images. While very cool, this seemed to have confused people – we’d like to provide some more background and explanation about the Moby Project and how it affects Rancher, RancherOS, and our users.
We’ve just returned from DockerCon 2017, which was a fantastic experience. I thought I’d share some of my thoughts and impressions of the event, including my perspective on some of the key announcements, while they are still fresh in my mind. New open source projects Container adoption for production environments is very real. The keynotes on both days included some exciting announcements that should further accelerate adoption in the enterprise as well as foster innovation in the open source community.
I’m super excited to unveil Project Longhorn, a new way to build distributed block storage for container and cloud deployment models. Following the principles of microservices, we have leveraged containers to build distributed block storage out of small independent components, and use container orchestration to coordinate these components to form a resilient distributed system. Why Longhorn? To keep up with the growing scale of cloud- and container-based deployments, distributed block storage systems are becoming increasingly sophisticated.
As a relatively new technology, Docker containers may seem like a risk when it comes to security -- and it’s true that, in some ways, Docker creates new security challenges. But if implemented in a secure way, containers can actually help to make your entire environment more secure overall than it would be if you stuck with legacy infrastructure technologies. This article builds on existing container security resources, like Security for your Container, to explain how a secured containerized environment can harden your entire infrastructure against attack.
Modern microservices applications span multiple containers, and sometimes a single app may use thousands of containers. When operating at this scale, you need a container orchestration tool to manage all of those containers. Managing them by hand is simply not feasible. This is where Kubernetes comes in. Kubernetes manages Docker containers that are used to package applications at scale. Since its launch in 2014, Kubernetes has enjoyed widespread adoption within the container ecosystem.
Your storage system should be locked down with all security and access control tools available to you as well. That is true whether the storage serves containers or any other type of application environment. How do you secure containers? That may sound like a simple question, but it actually has a six- or seven-part answer. That’s because securing containers doesn’t involve just deploying one tool or paying careful attention to one area where vulnerabilities can exist.
If you’re going to successfully deploy containers in production, you need more than just container orchestration. Kubernetes is a valuable tool: an open-source container orchestrator for deploying and managing containerized applications. Building on 15 years of experience running production workloads at Google, it provides the advantages inherent to containers, while enabling DevOps teams to build container-ready environments customized to their needs. The Kubernetes architecture is comprised of loosely coupled components combined with a rich set of APIs, making Kubernetes well-suited for running highly distributed application architectures, including microservices, monolithic web applications, and batch applications.
DevOps can now efficiently and securely deploy containers for enterprise applications As more enterprises move to a container-based application deployment model, DevOps teams are discovering the need for management and orchestration tools to automate container deployments. At the same time, production deployments of containers for business critical applications require specialized container-intelligent security tools. To address this, Rancher Labs and NeuVector today announced that they have partnered to make container security as easy to deploy as application containers.
In my prior posts, I’ve written about how to ensure highly resilient workloads using Docker, Rancher, and various open source tools. For this post, I will build on that prior knowledge to set up an AWS infrastructure for Rancher with some commonly used tools. If you check out the repository here, you should be able to follow along and set up the same infrastructure. The final output of our AWS infrastructure will look like the following picture: In case you missed the prior posts, they’re available on the Rancher blog and cover some reliability talking points.
The cloud vs. on-premises debate is an old one. It goes back to the days when the cloud was new and people were trying to decide whether to keep workloads in on-premises datacenters or migrate to cloud hosts. But the Docker revolution has introduced a new dimension to the debate. As more and more organizations adopt containers, they are now asking themselves whether the best place to host containers is on-premises or in the cloud.
MongoDB, the popular open source NoSQL database, has been in the news a lot recently—and not for reasons that are good for MongoDB admins. Early this year, reports began appearing of MongoDB databases being “taken hostage” by attackers who delete all of the data stored inside the databases, then demand ransoms to restore it. Security is always important, no matter which type of database you’re using. But the recent spate of MongoDB attacks makes it especially crucial to secure any MongoDB databases that you may use as part of your container stack.
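Many of the hijacked databases were simply reachable from the internet with no authentication enabled. As a minimal, hedged sketch (paths and bind addresses depend on your deployment), a mongod.conf that requires authentication and binds only to localhost closes that particular door:

```yaml
# mongod.conf -- minimal hardening sketch; adjust for your deployment.
security:
  authorization: enabled   # require users to authenticate before reading/writing
net:
  bindIp: 127.0.0.1        # listen on localhost only unless remote access is needed
```

With authorization enabled, you would then create an admin user and application-specific users with least-privilege roles before exposing the database to anything beyond the local host.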
On Friday we released version 1.5 of the Rancher container management platform. The enhancements in this release are designed to help ensure enterprise- as well as production-readiness. Additional webhooks improve Rancher extensibility and enable you to optimize overall infrastructure utilization. New API, networking and container scheduling policies provide fine-grained control of the container environment. Additional enhancements include metadata performance improvements and conditional logic support for catalog templates. Additional webhook drivers: With Rancher 1.
If you’re headed to Pasadena, California this weekend for Scale15x, come see us! We’re excited to be presenting, and our talks are focused on practical knowledge for running containers and Kubernetes in production. While Rancher makes it easy to deploy and manage containers and Kubernetes, building that ease of use has required specific expertise and disciplined thought on how teams are incorporating them into their projects today. We’re headed to Scale to share what we’ve learned, and to get feedback from you.
Technology is a constantly changing field, and as a result, any application can feel out of date in a matter of months. With this constant feeling of impending obsolescence, how can we work to maintain and modernize legacy applications? While rebuilding a legacy application from the ground up is an engineer’s dream, business goals and product timelines often make this impractical. It’s difficult to justify spending six months rewriting an application when the current one is working just fine, code debt be damned.
Docker containers make app development easier. But deploying them in production can be hard. Software developers are typically focused on a single application, application stack or workload that they need to run on a specific infrastructure. In production, however, a diverse set of applications run on a variety of technology (e.g. Java, LAMP, etc.), which need to be deployed on heterogeneous infrastructure running on-premises, in the cloud or both. This gives rise to several challenges with running containerized applications in production:
RancherOS v0.8.0 is now available! This release has taken a bit more time than prior versions, as we’ve been laying more groundwork to allow us to do much faster updates, and to release more often. Release highlights: the Linux 4.9.9 mainline kernel. Using the mainline stable Linux kernel should allow us to give container users access to new features faster - and will mean that RancherOS will have a simpler debug and update path for other software too.
This article is essentially a guide to getting started with Docker for people who, like me, have a strong IT background but feel a little behind the curve when it comes to containers. We live in an age where new and wondrous technologies are being introduced into the market regularly. If you’re an IT professional, part of your job is to identify which technologies are going to make it into the toolbox for the average developer, and which will be relegated to the annals of history.
Docker has been a source of excitement and experimentation among developers since March 2013, when it was released into the world as an open source project. As the platform has become more stable and achieved increased acceptance from development teams, a conversation about when and how to move from experimentation to the introduction of containers into a continuous integration environment is inevitable. What form that conversation takes will depend on the players involved and the risk to the organization.
Rancher has added a new feature in 1.4 for webhooks, with an initial driver to handle scaling. A key concept for implementing webhooks is that of a ‘Receiver’, which lets you register a webhook and provides a URL used to trigger an action inside of Rancher. We have implemented webhooks with our new microservice, called webhook-service. I will explain the feature using our current driver, scaleService. The scaleService driver allows users to create a receiver hook for scaling a service up or down.
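To illustrate the idea (the field names and flow below are a simplified sketch for explanation only, not Rancher’s actual webhook schema), a scaleService-style receiver amounts to registering a description of the desired action, then triggering it later via the hook:

```python
# Illustrative sketch of a "scaleService"-style webhook receiver.
# Field names and the trigger flow are hypothetical, for explanation only;
# in Rancher the trigger is an HTTP POST to the receiver's URL.

def make_receiver(service_id, action="up", amount=1, min_scale=1, max_scale=10):
    """Register-time step: describe what the hook should do when triggered."""
    return {"driver": "scaleService", "serviceId": service_id,
            "action": action, "amount": amount,
            "min": min_scale, "max": max_scale}

def trigger(receiver, current_scale):
    """Trigger-time step: apply the registered action, clamped to min/max."""
    delta = receiver["amount"] if receiver["action"] == "up" else -receiver["amount"]
    new_scale = current_scale + delta
    return max(receiver["min"], min(receiver["max"], new_scale))

hook = make_receiver("my-web-service", action="up", amount=2)
print(trigger(hook, 3))   # scale up from 3 by 2 -> 5
print(trigger(hook, 9))   # clamped at the configured max -> 10
```

Separating registration from triggering is what lets external systems (a monitoring alert, a CI job) drive scaling with nothing more than a URL.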
Rancher 1.4 is out today! As always, we encourage you to review the release notes. However, we’d like to run through a few notable changes, and the rationale behind them here. First, we’ve continued our move towards a friendlier Kubernetes experience by transitioning to Dashboard and Helm, which replace the Rancher Kubernetes UI and Catalog Kubernetes templates, respectively. We started this move in 1.3 as both Dashboard and Helm have matured tremendously in the past year, and we feel they’ve reached production stability and feature parity with what they’re replacing.
Infrastructure as code is the practice of codifying and automating the deployment and management of infrastructure with tooling. This allows infrastructure changes to be tested, reviewed, approved, and deployed with the same processes and tools as application code. In this blog post, we’ll walk through using Rancher and Terraform to implement infrastructure as code, using the recently built-in Rancher Terraform provider. Terraform from Hashicorp is a tool for abstracting service and provider APIs into declarative configuration files.
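As a minimal sketch of what that looks like in practice (the endpoint and credentials are placeholders, and the resource and attribute names here are illustrative rather than authoritative for any particular provider version), a declarative configuration file might declare a Rancher environment like this:

```hcl
# Sketch only: blocks below approximate the built-in Rancher Terraform
# provider; exact attribute names vary by provider version, so treat
# them as placeholders.
provider "rancher" {
  api_url    = "https://rancher.example.com"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
}

# Declares a Rancher environment; `terraform plan` previews the change
# and `terraform apply` creates it -- the same review/apply workflow
# used for application code.
resource "rancher_environment" "dev" {
  name        = "dev"
  description = "Development environment managed as code"
}
```

Because the file is plain text, it can live in version control, go through pull-request review, and be applied by CI, which is the whole point of infrastructure as code.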
What do Docker containers have to do with Infrastructure as Code (IaC)? In a word, everything. Let me explain. When you compare monolithic applications to microservices, there are a number of trade-offs. On the one hand, moving from a monolithic model to a microservices model allows the processing to be separated into distinct units of work. This lets developers focus on a single function at a time, and facilitates testing and scalability.
As one of the most disruptive technologies in recent years, container-based applications are rapidly gaining traction as a platform on which to launch applications. But as with any new technology, the security of containers in all stages of the software lifecycle must be our highest priority. This post seeks to identify some of the inherent security challenges you’ll encounter with a container environment, and suggests base elements for a docker security plan to mitigate those vulnerabilities.
Open source container management company exceeds revenue goals by twenty percent, reports 19 million software downloads. Cupertino, Calif. – January 25, 2017 – Rancher Labs, a provider of container management software, today announced momentum in 2016, doubling its employees, exceeding revenue targets by twenty percent and surpassing 19 million software downloads. This growth underscores the heavy demand for its popular open source software that simplifies the deployment and running of containers in production, on any infrastructure.
This is the last part in a series on designing resilient containerized workloads. In case you missed it, Parts 1, 2, 3, and 4 are already available online. In Part 4 last week, we covered in-service and rolling updates for single and multiple hosts. Now, let’s dive into common errors that can pop up during these updates. Below is a brief account of the supporting components required during an upgrade.
Last week we announced a partnership with EVRY, one of the leading IT companies in the Nordics, to deliver Rancher’s container management platform as a service to EVRY customers. This is an exciting moment for Rancher as the service will introduce our software to a new audience looking to embrace DevOps and transform how they deliver IT. Not surprisingly, our relationship with EVRY began last year when a couple of their cloud architects downloaded Rancher and built a small test deployment.
Which databases provide the best performance when used with containers? That’s an important question for people seeking to make the most of containerized infrastructure. In this post, I take a look at some basic performance metrics for three relational databases—PostgreSQL, MySQL, and MariaDB—when they are run as containers. Introduction For the purposes of my tests, I used the official container images available from Docker Hub to install and start the databases.
Note: this is Part 4 in a series on building highly resilient workloads. Parts 1, 2, and 3 are available already online. In Part 4 of this series on running resilient workloads with Docker and Rancher, we take a look at service updates. Generally, service updates are where the risk of downtime is the highest. It doesn’t hurt to have a grasp of how deployments work in Rancher and the options available within.
If you’re anything like me, you’ve been watching the increasing growth of container-based solutions with considerable interest, and you’ve probably been experimenting with a couple of ideas. At some point in the future, perhaps you’d like to take those experiments and actually put them out there for people to use. Why wait? It’s a new year, and there is no time like the present to take some action on that goal.
Earlier this week, we released Rancher 1.3. It includes several new features: user interface fixes, changes to our DNS engines, and improvements when using Kubernetes and associated tooling. However, Rancher 1.3 also begins addressing a frequent request we receive from users: Windows 2016 support. Windows support in Rancher 1.3 is purely experimental and limited in scope (you can read more in our docs), but it’s an important step towards serving the needs of our customers as containers become more widely adopted in enterprises.
As we start a new year, I’d like to thank the Rancher community for a great 2016. It was an awesome year for Rancher Labs, and we’ve been fortunate to have a deeply engaged community of open source users and developers, customers, and partners. In March, we shipped our 1.0 GA release, and since then Rancher has established itself as a leading product in the container ecosystem. 2016 was especially rewarding because of the tremendous amount of support we received from our users and customers.
2017 Predictions: Rapid Adoption and Innovation to Come. Rapid adoption of container orchestration frameworks: As more companies use containers in production, adoption of orchestration frameworks like Kubernetes, Mesos, Cattle and Docker Swarm will increase as well. These projects have evolved quickly in terms of stability, community and partner ecosystem, and will act as necessary and enabling technologies for enterprises using containers more widely in production. Greater innovation in container infrastructure services: Though there’s a strong set of container storage and networking solutions on the market today, more products will emerge to support the growth and scale of production container workloads, particularly as specifications like the Container Network Interface (used by Kubernetes) continue to mature.
RabbitMQ is a messaging broker that transports messages between data producers and data consumers. Data producers can be just about any application, host, or device that emits data that needs to be consumed by other applications for aggregation, processing, or analysis. RabbitMQ is easy to set up, use, and maintain. It can be scaled to handle large numbers of messages between many different data producers and consumers in a variety of application use cases.
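To make the producer/broker/consumer pattern concrete without standing up a broker, here is a stdlib-only Python sketch; a real RabbitMQ deployment would use a client library such as pika against an actual broker, and all names here are illustrative:

```python
# Broker-free sketch of the produce -> queue -> consume pattern that a
# messaging broker like RabbitMQ implements. An in-process queue stands
# in for the broker; names are illustrative only.
import queue
import threading

broker = queue.Queue()   # stands in for a RabbitMQ queue
results = []

def producer(messages):
    for m in messages:
        broker.put(m)    # "publish" a message to the queue
    broker.put(None)     # sentinel: no more messages

def consumer():
    while True:
        m = broker.get() # "consume" the next message
        if m is None:
            break
        results.append(m.upper())  # stand-in for processing/aggregation

t = threading.Thread(target=consumer)
t.start()
producer(["sensor-1: 21c", "sensor-2: 19c"])
t.join()
print(results)
```

The decoupling shown here is the key property: the producer never waits on the consumer, and either side can be scaled independently, which is what the broker buys you in a real deployment.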
One of the great things about microservices is that they allow engineering to decouple software development from the application lifecycle. Every microservice: can be written in its own language, be it Go, Java, or Python; can be contained and isolated from others; can be scaled horizontally across additional nodes and instances; is owned by a single team, rather than being a shared responsibility among many teams; communicates with other microservices through an API or a message bus; and must support a common service level agreement to be consumed by other microservices and, conversely, to consume other microservices. These are all very cool features, and most of them help to decouple various software dependencies from each other.
If you use containers as part of your day-to-day operations, you need to monitor them -- ideally, by using a docker performance monitoring solution that you already have in place, rather than implementing an entirely new tool. Containers are often deployed quickly and at a high volume, and they frequently consume and release system resources at a rapid rate. You need to have some way of measuring container performance, and the impact that container deployment has on your system.
We’re winding down for the year, but you’ll still be able to check out Rancher in a few places, live and in real-time. We always look forward to meeting users whenever we can, and hearing from you only helps make us better. Meetups and Talks: Dec 3, Montreal, CA: Architecting Distributed Applications Across Datacenters and Clouds. Join us as we discuss popular orchestrators, and strategies for operationalizing distributed applications across diverse infrastructure.
Note: since this article was posted, we’ve released Rancher 1.2.1, which addresses much of the feedback we received on the initial release. You can read more about the v1.2.1 release on Github. I am very excited to announce the release of Rancher 1.2! This release goes beyond the requisite support for the latest versions of Kubernetes, Docker, and Docker Compose, and includes major enhancements to the Rancher container management platform itself.
In less than a week, over 24,000 developers, sysadmins, and engineers will arrive in Las Vegas to attend AWS re:Invent (Nov. 28 - Dec 2). If you’re headed to the conference, we look forward to seeing you there! We’ll be onsite previewing enhancements included in our upcoming Rancher v1.2 release: Support for the latest versions of Kubernetes and Docker: As we’ve previously mentioned, we’re committed to supporting multiple container orchestration frameworks, and we’re eager to show off our latest support for Docker Native Orchestration and Kubernetes.
In this third part on data resiliency, we delve into various ways that data can be managed on Rancher (you can catch up on Part 1 and Part 2 here). We left off last time after setting up load balancers, health checks and multi-container applications for our WordPress setup. Our containers spin up and down in response to health checks, and we are able to run the same code that works on our desktops in production.
Registries are one of the key components that make working with containers, primarily Docker, so appealing to the masses. A registry hosts images that are downloaded and run on hosts in a container engine. A container is simply a running instance of a specific image. Think of an image as a ready-to-go package, like an MSI on Microsoft Windows or an RPM on Red Hat Enterprise Linux. I won’t go into the details of how registries work here, but if you want to learn more, this article is a great read.
We’re excited to announce that RancherOS is now available as a first-class operating system on Packet for all instance types. Packet is a bare metal cloud that combines the speed of physical hardware with the flexibility and ease of use of virtualized infrastructure. We’ve always been fans of Packet and we make use of it internally quite often. In fact, we’ve recently decided to move our entire CI/CD pipeline over to Packet instances.
We’ve been really fortunate at Rancher to have an enthusiastic community of users around the world, and we always look forward to seeing and meeting our users in person. Here are a few places Rancher will be in November. Please come say hi (and by the way, you can always keep track of where we’re headed at rancher.com/events)! Don’t see anything in your area? Let us know where we can meet you.
Version v0.7.0 of RancherOS, which mainly contains bug fixes and enhancements, was recently released and is now available on our releases page. Since there hasn’t been a blog post since the v0.5.0 release, this post also includes some of the key features implemented as part of v0.6.0 and v0.6.1. In addition to switching the default Docker version to 1.12.1 and kernel version to 4.4.21, the following features have been implemented.
Yesterday, Atlantis Computing announced a new converged platform for managing infrastructure and containers, which combines Rancher with their award-winning USX software-defined storage solution. This turnkey solution will make it easier for IT organizations to deliver containers as a service to their developers with enterprise-grade storage, without losing sight of the very real, bottom-line benefits that come from optimizing virtualized infrastructure. This solution will be available as a tech preview in early November.
Rancher ships with two types of catalog items for deploying applications: the Rancher certified catalog and the community catalog, which enables the community to contribute reusable, pre-built application stack templates. One of the recent interesting community catalog templates is the external load balancer for the AWS Classic Elastic Load Balancer, which keeps an existing load balancer updated with the EC2 instances running Rancher services that have one or more exposed ports and a specific label.
Note: You can find an updated comparison of Kubernetes vs. Docker Swarm in a recent blog post here. Recent versions of Rancher have added support for several common orchestration engines in addition to the standard Cattle. The three newly supported engines, Swarm (soon to be Docker Native Orchestration), Kubernetes and Mesos are the most widely used orchestration systems in the Docker community and provide a gradient of usability versus feature sets.
This morning, we’re excited to launch the Rancher Partner Network - a group of leading organizations focused on building top-notch cloud and container solutions for their customers. These are vendors with whom we collaborate, and whom we trust and endorse to help enterprises bring containers into their development workflows and production environments. The Rancher Partner Network includes consulting partners, systems integrators, resellers, and service providers from the Americas, Europe, Asia, and Australia.
Consulting and reseller partner programs expand company’s global reach; Service provider program helps partners deliver Containers-as-a-Service and other Rancher-powered offerings. Cupertino, Calif. – October 18, 2016 – Rancher Labs, a provider of container management software, today announced the launch of the Rancher Partner Network, a comprehensive partner program designed to expand the company’s global reach, increase enterprise adoption, and provide partners and customers with tools for success. The program will support consultancies and systems integrators, as well as resellers and service providers worldwide, with initial partners from North and South America, Europe, Asia and Australia.
This is a guest post by Alejandro Mesa, Full-Stack Software Engineer and Chief Architect at Pit Rho. Introduction Docker and Rancher have made it far easier to deploy and manage microservice-based applications. A key challenge, however, is managing the configuration of services that depend on other dynamic services. Imagine the following scenario: you have multiple backend containers that run your web application, and a few nginx containers that proxy all requests to the backend containers.
In a previous article in this series we looked at the basic Kubernetes concepts including namespaces, pods, deployments and services. Now we will use these building blocks in a realistic deployment. We will cover how to setup persistent volumes, how to setup claims for those volumes and then mount those claims into pods. We will also look at creating and using secrets using the Kubernetes secrets management system. Lastly, we will look at service discovery within the cluster as well as exposing services to the outside world.
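As a sketch of the claim step described here (the claim name and storage size are placeholders), a PersistentVolumeClaim looks like this:

```yaml
# Minimal PersistentVolumeClaim sketch; name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi         # Kubernetes binds this claim to a matching volume
```

A pod then mounts the claim by referencing its name under `persistentVolumeClaim.claimName` in the pod’s volume definition, which is what decouples the pod spec from the details of the underlying storage.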
In the previous part of this series, we have seen how to deploy an Elasticsearch Cluster using Rancher Catalog. Now it’s time to make good use of this catalog, right? Introduction As a reminder, Elasticsearch is the cornerstone of the ELK platform (ELK stands for Elasticsearch/Logstash/Kibana). In this article, we’ll deploy the stack using Rancher Catalog, and use it to track tags and brands on Twitter. Tracking hashtags on Twitter can be very useful for measuring the impact of a Twitter-based marketing campaign.
In Part 1: Rancher Server HA, we looked into setting up Rancher Server in HA mode to secure it against failure. There now exists a degree of engineering in our system on top of which we can iterate. So what now? In this installment, we’ll look towards building better service resiliency with Rancher Health Checks and Load Balancing. Since the Rancher documentation for Health Checks and Load Balancing are extremely detailed, Part 2 will focus on illustrating how they work, so we can become familiar with the nuances of running services in Rancher.
As everyone is aware, Amazon has EC2 Container Service, the Amazon solution for running Docker containers. I haven’t had much luck with this, so now I’m testing Rancher and Kubernetes on Amazon Web Services. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, and Rancher is a complete platform for running containers within enterprises, providing enterprise-level access control and container orchestration. I started by creating a new Virtual Private Cloud, using the default wizard.
Note: you can read Part 1 and Part 2 of this series, which describe how to deploy service stacks from a private Docker registry with Rancher. This is my third and final blog post, and follows Part 2, where I stepped through the creation of a private, password-protected Docker registry and integrated it with Rancher. In this post, we will be putting this registry to work (although for speed, I will use public images).
Most people running Docker in production use it as a way to build and move deployment artifacts. However, their deployment model is still very monolithic, or comprises a few large services. The major stumbling block in the way of using true containerized microservices is the lack of clarity on how to manage and orchestrate containerized workloads at scale. Today we are going to talk about building a Kubernetes-based microservice deployment.
Introduction: If you have been working with Docker for any length of time, you probably already know that shared volumes and data access across hosts is a tough problem. While the Docker ecosystem is maturing, implementing persistent storage across environments still seems to be a problem for most folks. Luckily, Rancher has been working on this problem and has come up with a unique solution that addresses most of these issues.
We’ve recently released v0.5.0 of RancherOS, the latest major release since v0.4.0. Since then, we’ve moved RancherOS out of an alpha state and made many changes to improve both stability and user experience. In addition to various bug fixes and support for Docker 1.11, v0.5.0 includes the following changes: Official Raspberry Pi Image: On our releases page you can now find an official Raspberry Pi image which is known to work on both Raspberry Pi 2 and 3.
Containers and orchestration frameworks like Rancher will soon allow every organization to have access to efficient cluster management. This brave new world frees operations from managing application configuration and allows development to focus on writing code; containers abstract complex dependency requirements, which enables ops to deploy immutable containerized applications and allows devs a consistent runtime for their code. If the benefits are so clear, then why do companies with existing infrastructure practices not switch?
Rancher is a complete container management solution, and to be a complete platform, we’ve placed careful consideration into how we handle networking between containers on our platform. So today, we’re posting a quick example to illustrate how networking in Rancher works. While Rancher can be deployed on a single node, or scaled to thousands of nodes, in this walkthrough, we’ll use just a handful of hosts and containers. Setting up and Launching a Containerized Application: Our first task is to set up our infrastructure, and for this exercise, we’ll use AWS.
By Timon Sotiropoulos, software engineer at SEED. SEED is a leading product development company that builds design-driven web and mobile applications for startup founders and enterprise innovators. Deployment days can be quite confronting and scary for new developers. We realized through onboarding some of our developers and introducing them to the world of DevOps that the complexity and stress of deployment days could take a toll on morale and productivity, with everyone always half dreading a deployment on the upcoming calendar.
by Stefan Thies (@seti321), DevOps evangelist at Sematext. The Rancher Community Catalog just got two new gems, SPM and Logsene: monitoring and logging tools from Sematext. If you are familiar with Logstash, Kibana, Prometheus, Grafana, and friends, this post explains what SPM and Logsene bring to the Rancher users’ table, and how they are different from other monitoring or logging solutions. Meet Sematext Docker Agent: Sematext Docker Agent is a modern, Docker-native monitoring and log collection agent.
Monitoring your container-based infrastructure is crucial to ensure good performance, identify issues early and gain the insight necessary to maximize its efficiency. When you are dealing with a large number of often short-lived containers spread over multiple hosts and even data centers, understanding the operational health of your infrastructure implies the need to aggregate performance data from both physical hosts as well as the container cluster running on top of it.
Prometheus is a modern and popular monitoring and alerting system, built at SoundCloud and eventually open sourced in 2012 – it handles multi-dimensional time series data really well, and our friends at InfinityWorks have already developed a Rancher template to deploy Prometheus at the click of a button. In hybrid cloud environments, it is likely that one might be using multiple orchestration engines, such as Kubernetes and Mesos, in which case it is helpful to have the stack or application portable across environments.
Rancher ships with a number of reusable, pre-built application stack templates. Extending these templates or creating and sharing completely new ones are great ways to participate in the Rancher user community and to help your organization effectively leverage container-based technologies. Although the Rancher documentation is fairly exhaustive, so far documentation on how to get started as a new Catalog template author has consisted of only a single high-level blog post.
Rancher has users all over the world, and for a long time we’ve wanted to internationalize our UI. Recently, this project became a reality and I embarked on the simple, yet massive, task of making our UI i18n-compliant. The task itself did not present a massive engineering challenge; short of hot loading translation scripts so as not to bloat the UI javascript, the majority of the project was moving strings around.
View the Rancher 1.1.0 release notes on GitHub After a very exciting DockerCon last week where the bulk of the engineering team was able to see all the latest and greatest innovations surrounding the Docker ecosystem, our team was able to squash the remaining issues for our Rancher 1.1 stable release. If you have been following our dev builds, we have been shipping tech preview features with each release for our open source community members who want to play with the latest Rancher has to offer.
A few months back, we launched a new feature at Rancher aptly named Rancher Catalog, and subsequently Community Catalog. This feature had been brewing in the minds of quite a few people around the office, so by the time it was placed on my plate it was highly anticipated by the team. The concept on the whole is not unfamiliar to the majority of our users: a single page through which users can search for commonly deployed applications, with sane defaults and a repeatable launch process.
We just came back from DockerCon 2016, the biggest and most exciting DockerCon yet. Rancher had a large and well-trafficked presence there - our developers even skipped attending breakout sessions in favor of staffing the booth, just to talk with all the people who were interested in Rancher. In only two days, over a thousand people stopped by to talk to us! Docker-Native Orchestration: Without a doubt, the biggest news out of DockerCon this year is the new built-in container orchestration capabilities in the upcoming Docker 1.
Rancher Labs has been developing open source projects for about two years now. We have a ton of GitHub repositories under our umbrella, and their number keeps growing. The number of external contributions to our projects keeps growing, too; Rancher has become more well-known over the past year, and structural changes to our code base have made it easier to contribute. So what are these structural changes? I would highlight three major ones:
Today, Chef announced the release of Habitat, a new approach to automating applications. Habitat shifts the focus of application management and configuration from the infrastructure to the application itself. In a nutshell, it allows users to create packages that encapsulate the application logic itself, runtime dependencies, and configuration. These packages can then auto-update according to policies set by your organization. In this article, I will show you how to leverage the runtime configuration and service member discovery capabilities of Habitat to build a Rancher Catalog template.
Elasticsearch is a Lucene-based search engine developed by the open-source vendor Elastic. With principal features like scalability, resiliency, and top-notch performance, it has overtaken Apache Solr, one of its closest competitors. Nowadays, Elasticsearch is almost everywhere a search engine is involved: it’s the E of the well-known ELK stack, which makes it straightforward for your project to process analytics (the L stands for Logstash, used to process data like logs, streams, and metrics; the K stands for Kibana, a data visualization platform; both projects are also managed by Elastic).
In my last blog post, I detailed how we can quickly and easily get the Rancher Server up and running with GitHub authentication and persistent storage to facilitate easy upgrades. In this post, I will step through creating a private, password-protected Docker registry and integrating it into Rancher. We will then tag and push an image to this registry. Finally, we will use the Rancher Server to deploy this image onto a server.
*Note: Since publishing this post, we’ve created a guide comparing Kubernetes with Docker Swarm. You can read the details in the blog post here.* Over the last six months, Rancher has grown very quickly, and now includes support for multiple orchestration frameworks in addition to Cattle, Rancher’s native orchestrator. The first framework to arrive was Kubernetes, and not long after, Docker Swarm was added. This week, the team at Rancher added support for Mesos.
I am excited to announce that Rancher officially supports Mesos, one of the most popular distributed cluster managers on the market today. Mesos is able to manage a large number of computing hosts and schedule computing jobs to these hosts according to their CPU, memory, and storage needs. But Mesos is much more than a distributed resource manager and scheduler. In the past few years, the Mesos community has developed a rich set of frameworks to automate the deployment and operations of large-scale distributed applications such as Hadoop, Elasticsearch, Spark, and Kafka.
Elasticsearch is one of the most popular analytics platforms for large datasets. It is useful for a range of use cases, from log aggregation and business intelligence to machine learning. Elasticsearch is popular because of its simple REST-based API, which makes it trivial to create indices, add data, and make complex queries. However, before you get up and running building your dataset and running queries, you need to set up an Elasticsearch cluster, which can be a somewhat daunting prospect.
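As a sketch of that REST simplicity: indexing a document and searching it back are plain HTTP calls against the cluster. A minimal match-query body (the index name `logs` and field `message` here are illustrative, not from the original post) that you would POST to `/logs/_search` looks like:

```json
{
  "query": {
    "match": {
      "message": "connection timeout"
    }
  }
}
```

The same JSON-over-HTTP shape covers index creation, document ingestion, and aggregations, which is much of why the API feels so approachable.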
Alena is a principal software engineer at Rancher Labs. Rancher has supported Kubernetes as one of our orchestration framework options since March 2016. We’ve incorporated Kubernetes as an essential element within Rancher. It is integrated with all of the core Rancher capabilities to achieve the maximum advantage for both platforms. Writing the Rancher ingress controller to back the Kubernetes Ingress feature is a good example of that. In this article, I will give a high-level design overview of the feature and describe what steps need to be taken to implement it from a developer’s point of view.
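For context, the Ingress resource that such a controller watches is declared like the sketch below (names and host are illustrative; the `extensions/v1beta1` API group matches the Kubernetes versions of that era):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress          # hypothetical name
spec:
  rules:
  - host: app.example.com    # requests for this host...
    http:
      paths:
      - path: /
        backend:
          serviceName: web   # ...route to this existing Kubernetes service
          servicePort: 80
```

A controller implementation watches for objects like this and programs its load balancer (in Rancher's case, a Rancher LB service) to satisfy the declared rules.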
Raul is a DevOps microservices architect specializing in scrum, kanban, microservices, CI/CD, open source and other new technologies. This post focuses on the Traefik “active mode” load balancer technology, which works in conjunction with Docker labels and Rancher metadata to configure itself automatically and provide access to services. Load balancers/proxies are software programs that make it possible for you to access your backend services. In microservices architectures, they face the additional challenge of managing highly dynamic environments.
In this post, we’ll discuss how we implemented Consul for service discovery with Rancher. John Patterson (@cantrobot) and Chris Lunsford run This End Out, an operations and infrastructure services company. You can find them online at https://www.thisendout.com and follow them on Twitter @thisendout. If you haven’t already, please read the previous posts in this series: Part 1: Getting started with CI/CD and Docker; Part 2: Moving to Compose blueprints; Part 3: Adding Rancher for Orchestration. In this final post of the series on building a deployment pipeline, we will explore some of the challenges we faced when transitioning to Rancher for cluster scheduling.
The ultimate goal for a developer is to have their own micro data center, enabling them to test their services in an exact live replica. However, the life of a developer is full of compromises. Data is reduced or anonymized, and companies aren’t quite ready to pay for a data center per developer. Today, I’ll provide an overview of how using Rancher and a local machine can eliminate some of these compromises.
In case you missed it, we were at EMC World a couple of weeks ago, demonstrating how to build your own container service using RackHD, ScaleIO, and REX-Ray (all from EMC) and Rancher. Since then, we’ve gotten a lot of requests to walk through things in a bit more detail. While you can read through things on the emccode blog (split into part 1 and part 2), we’ve also assembled a short video on how we built the container service: https://vimeo.
We have been using Rancher at Piel.io (the site is not up yet, but by the time I finish these blog posts for Rancher it will be... stay tuned) for several months now as we build our first microservice to release publicly in the coming months. During that time, many things have changed as Rancher moved towards the 1.0 release, so it seems appropriate that in this series of guest blog posts I will be stepping you through how we at Piel.
John Patterson (@cantrobot) and Chris Lunsford run This End Out, an operations and infrastructure services company. You can find them online at www.thisendout.com and on Twitter @thisendout. Update: All four parts of the series are now live: Part 1: Getting started with CI/CD and Docker; Part 2: Moving to Compose blueprints; Part 3: Adding Rancher for Orchestration; Part 4: Completing the Cycle with Service Discovery. In this installment of our series, we’ll explore how we came to Rancher, detailing how it solved some issues around deploying and managing containers.
Users have submitted thousands of issues to help us improve Rancher these last 18 months. This morning, we announced that Rancher Labs has raised $20 million in series B funding to accelerate our growth in response to the incredible Rancher adoption we’ve seen over the last year. It is an exciting day for our entire team, as it validates so much hard work, and gives us an opportunity to continue the work we are so passionate about.
With the release of Rancher v1.0.1, setting up and running a highly-available Rancher cluster just got a whole lot easier. Prior to this release, users were required to create and manage their own Zookeeper Ensemble, Redis Cluster, external relational database, Rancher servers and external load balancer. Monitoring of these components was a wholly manual process or required extra middleware and ramp-up time. Configuring Rancher servers to communicate with these components was another hurdle, and left more room for error and frustration.
Rancher recently shipped 1.0, which added support for the Kubernetes orchestration framework. Now, you can leverage the capabilities of Kubernetes within your Rancher environments. Kubernetes is an open-source cluster management framework for application containers started by Google; it’s designed to provide a simple way to deploy, schedule, scale, and roll out new features of applications by providing a container-centric environment without depending on the underlying infrastructure. The native support for Kubernetes in Rancher gives you the ability to launch multiple Kubernetes clusters, and Rancher will take care of deploying these clusters into your environment. It also adds several features, including:
One of the key features of the Kubernetes integration in Rancher is the application catalog that Rancher provides. Rancher provides the ability to create Kubernetes templates that give users the ability to launch sophisticated multi-node applications with the click of a button. Rancher also adds the support of Application Services to Kubernetes, which leverage the use of Rancher’s meta-data services, DNS, and Load Balancers. All of this comes with a consistent and easy to use UI.
John Patterson (@cantrobot) and Chris Lunsford run This End Out, an operations and infrastructure services company. You can find them online at https://www.thisendout.com and follow them on Twitter @thisendout. Update: All four parts of the series are now live: Part 1: Getting started with CI/CD and Docker; Part 2: Moving to Compose blueprints; Part 3: Adding Rancher for Orchestration; Part 4: Completing the Cycle with Service Discovery. In part one of our series, we left off with constructing a rudimentary build and deployment pipeline.
Raul Sanchez is a microservices and DevOps architect in the innovation department at BBVA, exploring new technologies, bringing them to the company and the production lifecycle. In his spare time, he is a developer who collaborates on open source projects. He’s spent more than 20 years working on GNU/Linux and Unix systems in different areas and sectors. Introduction: GoCD is a Java open source continuous delivery system from ThoughtWorks.
John Patterson (@cantrobot) and Chris Lunsford run This End Out, an operations and infrastructure services company. You can find them online at https://www.thisendout.com and follow them on Twitter @thisendout. Update: All four parts of the series are now live: Part 1: Getting started with CI/CD and Docker; Part 2: Moving to Compose blueprints; Part 3: Adding Rancher for Orchestration; Part 4: Completing the Cycle with Service Discovery. This post is the first in a series in which we’d like to share the story of how we implemented a container deployment workflow using Docker, Docker Compose and Rancher.
Rancher is out of beta; in our March online meetup we shared how to build a full-stack container platform. After nine months of beta, hundreds of thousands of downloads, and endless contributions to the open source community, Rancher 1.0 is now available. To celebrate, we focused our March meetup on demonstrating what’s new in 1.0 and providing an overview of how to use Rancher to deliver containers as a service for your organization.
Today we achieved a major milestone by shipping Rancher 1.0, our first generally available release. After more than one and a half years of development, Rancher has reached the quality and feature completeness for production deployment. We first unveiled a preview of Rancher to the world at the November 2014 Amazon Re:invent conference. We followed that with a Beta release in June 2015. I’d like to congratulate the entire Rancher development team for this achievement.
Apache Cassandra is a database technology that has been gaining popularity. It provides adjustable consistency guarantees, is horizontally scalable, is built to be fault-tolerant, and provides very low-latency (sub-millisecond) writes. This is why Cassandra is used heavily by large companies such as Facebook and Twitter. Furthermore, Cassandra uses application-layer replication for its data, which makes it ideal for a containerized environment. However, Cassandra, like most databases, assumes that database nodes are fairly static.
Visit Rancher for an overview of setting up and using Amazon's container registry service, plus a comparison to other hosted Docker repositories.
This week we shipped Rancher v0.63. With this latest release, we are dramatically expanding the capabilities of Rancher by including a complete distribution of Kubernetes as a container orchestration framework within Rancher. Beginning with this release, when you create an environment you’ll be able to launch a Kubernetes environment with a single click, and within 5-10 minutes you will have access to a fully deployed Kubernetes cluster. The work we’ve done, however, goes beyond simply launching a Kubernetes cluster on top of the existing Rancher platform.
Recently, Rancher released a community catalog that will contain entries of Compose templates generated by the community. By default, the catalog in Rancher UI is populated from the Rancher catalog repository under the name “library catalog”. Now, you can also see the community catalog as well. This post will introduce how to build a secure Consul cluster as a Rancher Compose template that will be an addition to the newly released Rancher community catalog.
Since the Service Discovery feature was first introduced in May 2015, Rancher engineering has never stopped adding new functionality and improvements to it. Rancher Beta users and forum participants have been sharing their applications and architecture details to help us shape the product to cover more use cases. This article reviews some of those very useful features. Rolling Restart: through the lifecycle of a service, there can be a need to restart it after applying certain configuration changes.
Over the last few months our team at Rancher Labs has been adding support for Kubernetes within Rancher. We’ve been implementing Kubernetes in a way that takes advantage of Rancher’s platform orchestration, simple UI, access control, networking and storage capabilities to deliver simple to deploy Kubernetes clusters for managing applications. In our February meetup we introduced this new support, and discussed how these environments compare with our traditional Docker environments and help users understand when and how each can be used to deploy and manage container deployments.
*Quentin Hamard is one of the founders of Octoperf, and is based in Marseille, France.* Octoperf is a full-stack cloud load testing SaaS platform. It allows developers to test the design performance limits of mobile apps and websites in a realistic virtual environment. As a startup, we are attempting to use containers to change the load testing paradigm, and deliver a platform that can run on any cloud, for a fraction of the cost of existing approaches.
So far in this series of articles we have looked at creating continuous integration pipelines using Jenkins and continuously deploying to integration environments. We also looked at using Rancher Compose to run deployments, as well as Route53 integration to do basic DNS management. Today we will cover production deployment strategies and also circle back to DNS management to cover how we can run multi-region and/or multi-data-center deployments with automatic fail-over. We will also look at some rudimentary auto-scaling so that we can automatically respond to request surges and scale back when the request rate drops again.
Recently Rancher introduced the Rancher catalog, an awesome feature that enables Rancher users to one-click deploy common applications and complex services from catalog templates on your infrastructure, and Rancher will take care of creating and orchestrating the Docker containers for you. The Rancher catalog offers a wide variety of applications out of the box, including GlusterFS and Elasticsearch, and also supports private catalogs. Today I am going to introduce a new catalog template I developed for deploying a MongoDB replica set, and show you how I built it.
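To give a feel for what building such a template involves: a catalog entry pairs a `docker-compose.yml` with a `rancher-compose.yml` whose `.catalog` section declares metadata and the questions the UI asks at launch time. A minimal sketch (all names, versions, and defaults here are illustrative, not taken from the MongoDB template itself):

```yaml
# rancher-compose.yml for a hypothetical catalog entry
.catalog:
  name: "MongoDB Replica Set"
  version: "0.1.0"
  description: "Example replica set template"
  questions:
    - variable: REPLICA_SET_NAME   # surfaced as a form field in the UI
      label: "Replica set name"
      type: "string"
      default: "rs0"
mongo-cluster:
  scale: 3   # Rancher launches and maintains three containers
```

The answers users give are substituted into the Compose files before Rancher deploys the stack, which is what makes catalog launches repeatable.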
Docker Compose is a great framework for deploying application stacks, and at Rancher we’ve been working hard to make it possible to leverage that framework to create a catalog of application blueprints that can be repeatably configured and deployed. In this recording of our January online meetup, we demonstrated the new Catalog feature in Rancher and how to create catalog items. In the meetup we demonstrated: - Using the Rancher catalog to configure, deploy and upgrade an application - Creating a private app catalog linked to a git repo - Best practices for building catalog templates - Inserting application configuration into templates We demonstrated all of this live, and answered dozens of questions about Docker, Rancher, and building application templates.
Last month we introduced a new application catalog in the latest versions of Rancher. The Rancher Catalog provides an easy to use interface that simplifies deploying Docker-based applications. Using a catalog entry it becomes simple to deploy complex applications such as Elasticsearch, Jenkins, Hadoop, as well as tools like etcd and zookeeper, storage services like GlusterFS, and databases like MongoDB. Already, companies like Sysdig and others have provided easy to use templates for deploying their services using Docker.
In previous articles we have seen how to set up a Jenkins CI system on top of Docker and leverage Docker in order to create a continuous integration pipeline. As part of that, we used Docker to create a centrally managed build environment which can be rolled out to any number of machines. We then set up the environment in Jenkins CI and automated the continuous building, packaging and testing of the source.
Over the last year we have written about getting several application stacks running on top of Docker, e.g. Magento, Jenkins, Prometheus and so forth. However, containerized deployment can be useful for more than just defining application stacks. In this series of articles we would like to cover an end-to-end development pipeline and discuss how to leverage Docker and Rancher in its various stages. Specifically, we’re going to cover: building code, running tests, packaging artifacts, continuous integration and deployment, as well as managing an application stack in production.
Amazon Web Services (AWS) is one of the most popular clouds for running Docker workloads, and we’ve seen more and more users deploy Rancher in conjunction with multiple AWS services to deliver a resilient production-grade service. In our December online meetup, we discussed best practices for running applications using Docker on AWS with Rancher. We demonstrated how to deploy, scale and manage deployments while using underlying AWS features, such as EBS, ELB, Route53, RDS, and more.
At DockerCon a couple weeks back, we announced Rancher’s new capability to manage Persistent Storage Services, and how it can be used to make it easier to manage stateful applications. Today, with the release of Rancher 0.47.0, I’m excited to finally make it available for our users to try as an experimental feature. Rancher makes this easy to deploy by allowing you to both launch a GlusterFS storage service and deploy Convoy-Gluster as the Docker volume plugin to your environment directly from Rancher’s App Catalog.
Chris Crane is VP of Product at Sysdig. Here at Sysdig we build monitoring and visibility tools, specializing in Docker monitoring and containerized infrastructures. Our open source CLI tool, sysdig, offers universal system visibility into Linux machines along with native support for Docker. And based on the same core technology, Sysdig Cloud offers the first and only comprehensive monitoring solution built from the ground up for containers and microservices.
Yesterday we hosted an online meetup that provided a detailed overview of how to automate the deployment and upgrade of complex production application stacks using Rancher and Docker. We used Ampache, an open-source music streaming platform, to demonstrate how to deploy a scalable, distributed service using Rancher. As always, Rancher co-founders Darren Shepherd and Shannon Williams answered dozens and dozens of questions, and demoed nearly constantly over more than two hours to hundreds of attendees.
We are very excited to announce a new partnership with Spotinst today to deliver intelligent management and migration of container workloads running on spot instances. With this new solution, we have developed a simple, intuitive way for using spot instances to run any container workload reliably and for a fraction of the cost of traditional applications. Since the dawn of data centers we’ve seen continuous improvements in utilization and cost efficiency.
Hyper-Converged Infrastructure is one of the greatest innovations in the modern data center. I have been a big fan ever since I heard the analogy “iPhone for the data center” from Nutanix, the company who invented hyper-converged infrastructure. In my previous roles as CEO of Cloud.com, creator of CloudStack, and CTO of Citrix’s CloudPlatform Group, I helped many organizations transform their data centers into infrastructure clouds. The biggest challenge was always how to integrate a variety of technologies from multiple vendors into a coherent and reliable cloud platform.
Today, our team at Rancher announced an exciting new feature called Persistent Storage Services. Persistent storage support builds on the work we’ve done with Rancher Convoy, and makes it dramatically easier to run stateful applications in production using Rancher. Docker volume plugins, introduced in Docker 1.8 and further enhanced in Docker 1.9, enable developers to utilize a variety of persistent storage implementations as standard Docker volumes. Our new Persistent Storage Services capability complements Docker volume plugins by providing a backend implementation of a Docker volume plugin, and is the core storage technology in our recently announced hyper-converged infrastructure stack for Docker.
Hello, I’m Alena Prokharchyk, one of the developers here at Rancher. In my previous blog posts, I’ve covered various aspects of Service Discovery, a feature we use to discover and interconnect services of user applications deployed in Rancher. This discovery is done by services registering themselves dynamically with Rancher’s internal DNS so that other services in the system can discover them by fully qualified domain name (FQDN). Services can also be registered with Rancher’s Load Balancer (LB) service, which allows it to balance traffic between all of a service’s containers.
Containerization brings several benefits to traditional CI platforms where builds share hosts: build dependencies can be isolated; applications can be tested against multiple environments (for example, testing a Java app against multiple versions of the JVM); on-demand build environments can be created with minimal stickiness to ensure test fidelity; and Docker Compose can be used to quickly bring up environments which mirror development environments. Lastly, the inherent isolation offered by Docker Compose-based stacks allows for concurrent builds -- a sticking point for traditional build environments with shared components.
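One way that per-build isolation is typically achieved (a sketch, not from the original post) is to give each CI build its own Compose project name, so concurrent builds on a shared host get separate containers and networks. `BUILD_NUMBER` here stands in for whatever build identifier your CI server exports:

```shell
# Derive a unique Compose project name per build; falls back to "local"
# when run outside CI (BUILD_NUMBER unset).
PROJECT="ci_${BUILD_NUMBER:-local}"
echo "$PROJECT"
# docker-compose -p "$PROJECT" up -d    # bring up an isolated stack for this build
# docker-compose -p "$PROJECT" down -v  # tear down (and remove volumes) after tests
```

Because Compose namespaces containers and networks by project, two builds using the same compose file no longer collide.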
Visit us to learn more about using Ansible with Docker to deploy a Wordpress service on Rancher. For more tutorials and to request a demo, visit Rancher today.
Today we are excited to announce a major release of RancherOS. The first release of RancherOS was announced just six months ago. At that time, powering an entire operating system with Docker was a really experimental concept. We had good reason to believe it was a good idea, but honestly we didn’t know how well it would play out and what issues we might encounter. I’m excited to say that it’s worked out great.
Meetup Screenshot: Bill Maxwell Demonstrates Sysdig monitoring his Rancher environment Yesterday we hosted an online meetup with the team from Sysdig, in which we discussed best practices for Docker monitoring, and some of the unique challenges around applying monitoring policies to containers. Over the course of the meetup, we introduced Rancher and Sysdig, and demonstrated how we’re using Sysdig here at Rancher to manage our containers. The meetup included a number of presentations, and we’ve included the agenda below along with direct links to that portion of the meetup if you’d like to jump ahead at all.
Last week Ivan Mikushin discussed adding system services to RancherOS using Docker Compose. Today I want to show you an example of how to deploy Linux Dash as a system service. Linux Dash is a simple, low-overhead, web-based monitoring tool for Linux; you can read more about Linux Dash here. In this post I will add Linux Dash as a system service to RancherOS version 0.3.0, which allows users to add system services using the rancherctl command.
Rancher has come a long way since its early versions, and is becoming quite good at managing Docker applications and deploying complex services. That said, as your stacks move from simple demos to production applications you quickly realize that you need to know more about your Docker environment when you configure and upgrade container services. To address that problem, we recently introduced a container metadata service with Rancher v0.38.0, similar to Amazon’s Instance Metadata service.
Hello, I’m Ivan Mikushin (@imikushin), one of the developers here at Rancher working on RancherOS. Today I wanted to walk you through the concept of RancherOS “system services.” As you may know, RancherOS was designed from the ground up to run everything above the kernel as Docker containers, allowing simple upgrades and a tiny OS footprint. The goal of RancherOS is to provide the perfect small OS for running Docker containers.
Once any application, Dockerized or otherwise, reaches production, log aggregation becomes one of the biggest concerns. We will be looking at a number of solutions for gathering and parsing application logs from Docker containers running on multiple hosts. This will include using a third-party service such as Loggly for getting set up quickly, as well as bringing up an ELK stack (Elasticsearch, Logstash, Kibana). We will look at using middleware such as FluentD to gather logs from Docker containers, which can then be routed to one of the hundreds of consumers supported by FluentD.
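The FluentD route typically relies on Docker's built-in `fluentd` logging driver, which ships each container's stdout/stderr to a collector. A minimal sketch in the Compose v1 syntax of the era (service name, image, address, and tag are illustrative):

```yaml
# docker-compose.yml (v1 syntax): send this container's logs to a
# FluentD collector listening on the host
web:
  image: nginx
  log_driver: fluentd
  log_opt:
    fluentd-address: "localhost:24224"  # assumed FluentD endpoint
    fluentd-tag: "docker.web"           # tag used for routing downstream
```

From there, FluentD match rules route the tagged events to Elasticsearch, Loggly, S3, or any other supported output.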
We just released Convoy v0.3 last week, and I’m excited to announce that it now supports Amazon Elastic Block Store (EBS) as a Convoy driver. With this release you can now create persistent Docker volumes on AWS, backed by all the performance and features of EBS. With this new feature, when users create a Convoy volume using the EBS driver, Convoy will create an EBS volume, attach it to the currently running instance, and then assign it to the Docker container.
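Consuming such a volume from a service looks roughly like the Compose v1 fragment below, assuming the Convoy daemon is already running on the EC2 instance with the EBS driver enabled (service name, image, and volume name are illustrative):

```yaml
# docker-compose.yml (v1 syntax): the named volume is created and managed
# by Convoy, so the data lives on an EBS volume rather than the host disk
db:
  image: postgres
  volume_driver: convoy
  volumes:
    - pg_data:/var/lib/postgresql/data  # "pg_data" becomes an EBS-backed volume
```

Because the volume is an EBS device, it outlives the container and can be snapshotted or reattached using EBS's own tooling.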
Container logging is a common challenge for container deployments. Logging with containers is a bit different than traditional logging, because the logs for each container are nested within the container. On September 16th, we hosted an online meetup discussing all aspects of container logging, and demonstrating how to build a scalable logging service for Docker and Rancher that uses Elasticsearch, Logstash, and Kibana (ELK), along with Logspout. In the meetup Rancher DevOps lead Bill Maxwell discussed: • Docker Logging Challenges • Options for gathering logs from containers • System and Application logging requirements • Deploying an ELK stack using Docker Compose with Rancher • Scaling and managing a production ELK deployment You can view a recording of the meetup below.
At Rancher Labs we generate a lot of logs in our internal environments. As we conduct more and more testing on these environments, we have found the need to centrally aggregate the logs from each environment. We decided to use Rancher to build and run a scalable ELK stack to manage all of these logs. For those that are unfamiliar with the ELK stack, it is made up of Elasticsearch, Logstash and Kibana.
The latest release of Docker Engine now supports volume plugins, which allow users to extend Docker capabilities by adding solutions that can create and manage data volumes for containers that need to manage and operate on persistent datasets. This is especially important for databases, and addresses one of the key limitations in Docker. Recently at Rancher we released Convoy, an open-source Docker volume driver that makes it simple to snapshot, backup, and restore Docker volumes across clouds.
Over the last few months our team at Rancher Labs has been working on building software that would allow users to create and manage persistent Docker volumes. With the release of Docker 1.8, which now officially supports Docker volume drivers, we announced Convoy, an open-source Docker volume driver that can snapshot, backup and restore Docker volumes anywhere. Convoy is designed to be a standalone Docker volume driver that runs on individual Linux hosts.
Hi, I am Sheng Yang (@yasker), an engineer here at Rancher Labs. Over the last few months our team has been working on building Docker storage software that would allow users to create and manage persistent Docker volumes. With last week’s release of Docker 1.8, which now officially supports Docker volume drivers, I am excited to announce Convoy, an open-source Docker volume driver that can snapshot, backup and restore Docker volumes anywhere.
Since the availability of Rancher’s Beta release a few weeks ago, I’ve been pretty excited about the new scheduling and service discovery capabilities in the platform. To help people understand the impact of these capabilities, today I’m going to show how to use these features to deploy a fully clustered and HA implementation of a Node.js application. I’m going to use Let’s Chat as our example application. It is an excellent open-source, Slack-like team chat application.
Running Drone as a Rancher Service for Dockerizing Builds On August 13th, Darren Shepherd and Shannon Williams hosted an online meetup demonstrating how our team at Rancher uses Drone.io, Docker and Rancher to build a scalable CI platform for builds and test environments. Rancher engineer Bill Maxwell gave a demonstration of how he built Rancher’s CI platform, and provided a Docker Compose file for anyone interested in deploying it themselves.
*This post is now a bit out of date. Since posting this article we’ve added full support for Mesos environments directly into Rancher. You can read more about it at rancher.com/mesos.* Hi, I’m Sidhartha Mani, one of the engineers here at Rancher Labs. Over the last few months I’ve been working with Apache Mesos, an open source resource manager and scheduler, which can be used to deploy workloads on infrastructure.
Yesterday I was really excited when Solomon Hykes from Docker announced libcompose as an official implementation of the Docker Compose multi-container file format. It is a project our team has been working on with Docker for a while, and I’m very glad that Docker decided to adopt the code we’ve developed as the starting point for the next version of Docker Compose. We’ve been big fans of Compose for some time, even when it was still called Fig, and we think it is critical to the long-term adoption of containers.
Hello, I’m Alena Prokharchyk, an engineer at Rancher Labs. In the past I’ve written a couple of articles explaining our Load Balancer functionality within Rancher. First, as a standalone feature, then as a part of our Docker Service Discovery functionality. With these capabilities, we’ve developed a load balancing function that could be used not just for sharing traffic between Docker containers, but also for upgrading between software releases with no downtime for users.
During the meetup, Darren Shepherd demonstrated how to deploy a complete container stack. On July 15th, Darren Shepherd and Shannon Williams hosted an online meetup demonstrating how to deploy a pilot Docker service, and teaching attendees how to implement an integrated stack that included DockerHub, GitHub, Rancher, Jenkins and Prometheus. We’ve recorded the meeting and shared it below. You can register for our next online meetup on our events page.
Hi, I’m Sidhartha Mani, one of the engineers at Rancher, and I wanted to provide a quick overview of how to get started using RancherOS. RancherOS is a micro Linux distribution that aims to provide just the right amount of OS to run Docker. It turns out that all Docker really requires to function is the kernel. RancherOS embraces this by running Docker as PID 1, and everything running inside of it is a container.
For proprietary applications, a hosted docker registry is ideal for hosting images privately in a production-grade registry. Learn more at Rancher.
I have already talked about several ways to monitor Docker containers, and also about using Prometheus to monitor Rancher deployments. However, until now it has been a manual process of launching monitoring agents on our various hosts. With the release of the Rancher beta, with scheduling and support for Docker Compose, we can begin to make monitoring a lot more automated. In today’s post we will look at using Rancher’s new “Rancher Compose” tool to bring up our deployment with a single command, using scheduling to make sure we have a monitoring agent running on every host, and using labels to isolate and present our metrics.
Hi, I’m Craig Jellick, an engineer here at Rancher Labs, and I wanted to walk you through a new set of features that we recently added to Rancher as we prepared for beta. Internally, we call it our “Native Docker Management” functionality, and it is incredibly core to our mission here at Rancher. When we built Rancher, we explicitly didn’t want to wrap Docker’s APIs with a new management layer. A number of existing tools already take that approach, and while it is an effective way of building a controlled system, we really loved the experience using the Docker CLI and API, and were sure that it would just keep getting better over time.
Our team just spent the last 4 days in San Francisco attending the DockerCon conference and participating in the Hackathon. We decided to send the entire Rancher Labs engineering team to the conference, and I’m so glad we did. There was big news and great new Docker capabilities, and it gave us a chance to meet so many Rancher friends and users at one time. First there’s the city, the venue, the party, and the food.
Today Rancher Labs joins a group of industry leaders to create the Open Container Project (OCP). With OCP, the Docker container format and container runtime form the basis for an industry standard. At Rancher Labs we decided early on to focus on developing our Rancher and RancherOS products specifically for Docker, even though the underlying technology can apply to other container formats as well. We are so excited about OCP because we can now focus on delivering the best user experience with a singular container standard knowing it will be supported by every major vendor in the industry.
This year we sponsored the DockerCon Hackathon, and had an amazing 24 hours working with people hacking on Docker, Rancher, RancherOS and more. Two of our team, Darren Shepherd and Alena Prokharchyk, were judges, so we didn’t think it would be fair to enter the contest. That said, we wanted to be involved in the hacking anyway, so we built a little tool called SherDock, a simple image management tool for garbage collection, identifying orphaned volumes and more.
In my last post I showed you how to deploy a highly available WordPress installation using Rancher Services, a Gluster cluster for distributed storage, and a database cluster based on Percona XtraDB Cluster. Now I’m going one step further and setting up the Gluster and PXC clusters using Rancher Services too, taking advantage of new service features available in the beta Rancher release, such as DNS service discovery and label scheduling.
On June 16th, Darren Shepherd and Shannon Williams hosted an online meetup demonstrating the Beta release of Rancher, and teaching attendees how to deploy Docker applications using Rancher. We’ve recorded the meeting and shared it below. If you would like to learn more about Rancher, please sign up for our Beta Program, or schedule a discussion with one of our engineers.
Today we announced the beta availability of Rancher, our open source Docker infrastructure and management software. It is an exciting day for our team, and a great opportunity to say thank you to all the people who have worked on the open source Rancher project, blogged and tweeted about using Rancher, and helped other new users on our support groups. For the last seven months, Rancher has been an alpha project, and throughout that time we’ve had a wonderful group of early users and testers trying out the product, suggesting enhancements, documenting bugs, and contributing code.
Rancher co-founder Shannon Williams provides a quick video overview on how to get started with Rancher. Getting Started with Rancher from Rancher Labs
Recently Rancher provided a disk image to be used to deploy RancherOS v0.3 on Google Compute Engine (GCE). The image supports RancherOS cloud-config functionality. Additionally, it merges the SSH keys from the project, instance and cloud-config and adds them to the rancher user. In this post, I will cover how to use the RancherOS image on GCE to set up a MongoDB replica set. I will also cover how to use one of the recent features of the Rancher platform: the load balancer.
Hi everyone, my name is Alena Prokharchyk, part of the engineering team here at Rancher, and still loving working on container infrastructure. A few months ago I wrote an article introducing Docker load balancing in Rancher. Today, I want to focus on how we’ve built a brand new service discovery capability into Rancher, as well as how we’ve integrated it with load balancing. If you’re not familiar with service discovery, it is a networking capability that allows groups of devices (or in our case containers) to be identified with a common name, and discovered by other services on the network.
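The core idea of service discovery can be sketched in a few lines of Python. This is a toy in-memory registry under invented names and addresses; Rancher’s actual implementation resolves service names via DNS rather than a dictionary:

```python
# Toy service registry: several container addresses are registered
# under one common service name, and consumers look the name up
# instead of hard-coding addresses.
registry = {}

def register(service, address):
    """Add a container address under a service name."""
    registry.setdefault(service, []).append(address)

def discover(service):
    """Return all known addresses for a service (empty if unknown)."""
    return registry.get(service, [])

register("web", "10.42.0.2")
register("web", "10.42.0.3")
register("db", "10.42.0.9")
print(discover("web"))  # ['10.42.0.2', '10.42.0.3']
```

A load balancer builds directly on this: it discovers the current set of addresses for a service name and spreads traffic across them.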
Today we announced Series A funding and officially launched our company Rancher Labs. We started Rancher Labs because we saw the benefits of running Docker containers in production and wanted to build tools to help make it happen. We are building two open source software products: Rancher and RancherOS. Rancher is a container infrastructure platform designed to make it simple to operate Docker in production. RancherOS is a minimal Linux distribution designed specifically for running Docker.
I have blogged about monitoring Docker deployments a couple of times now (here and here); however, up to this point we have been monitoring container stats without looking at the bigger picture: how do these containers fit into a larger unit, and how do we get insights into the deployment as a whole rather than individual containers? In this post I will cover leveraging Docker labels and Rancher’s projects and services support to provide monitoring information that understands the deployment structure.
Note: Rancher has come a long way since this was first published in June 2015. We’ve revised this post (as of August 2016) to reflect the updates in our enterprise container management service. Read on for the updated tutorial! Rancher supports multiple orchestration engines for its managed environments, including Kubernetes, Mesos, Docker Swarm, and Cattle (the default Rancher managed environment). The Cattle environment is rich with features like stacks, services, and load balancing, and in this post, we’ll highlight common uses for these features.
Rancher Labs Chief Architect Darren Shepherd explains how to get started with RancherOS in “RancherOS: A tiny Linux distribution ideal for running Docker,” from Rancher Labs on Vimeo. Darren also explains how to upgrade and downgrade RancherOS in “RancherOS 0.2 Install and Upgrade,” from Rancher Labs on Vimeo.
Recently, we announced RancherVM, an open source project that makes it possible to run KVM virtual machines embedded in Docker containers. Yesterday, we hosted an online meetup to demonstrate this new project and answer questions about how it works and why you might want to use it. We recorded that video, and have posted it here. You can download RancherVM from GitHub. If you’d like to speak to someone about how to get involved with RancherVM, please request a demonstration.
GlusterFS is a scalable, highly available, and distributed network file system widely used for applications that need shared storage, including cloud computing, media streaming, content delivery networks, and web cluster solutions. High availability is ensured by the fact that storage data is redundant, so if one node fails another will cover it without service interruption. In this post I’ll show you how to create a GlusterFS cluster for Docker that you can use to store your containers’ data.
A little over a month ago I wrote about setting up a Magento cluster on Docker using Rancher. At the time, I identified some shortcomings of Rancher, such as its lack of support for load balancing. Rancher released support for load balancing and Docker Machine with 0.16, and I would like to revisit our Magento deployment to cover the use of load balancers for scalability as well as availability. Furthermore, I would also like to cover how the Docker Machine integration makes it easier to launch Rancher compute nodes directly from the Rancher UI.
Renowned computer scientist Paul Hudak, one of the designers of the Haskell programming language, died of leukemia this week. There’s been an outpouring of reactions from people Paul’s life and work has touched. Paul was my Ph.D. adviser at Yale in the 1990s. He supervised my work, paid for my education, and created an environment that enabled me to learn from some of the brightest minds in the world. Paul was an influential figure in the advancement of functional programming.
On April 29th, Shannon Williams and Darren Shepherd hosted an online meetup to talk about deploying microservices-based applications using Docker Compose and Rancher. The session included demonstrations of how to build a Docker Compose file, and how to use Rancher’s upcoming services capability to deploy, scale and manage Docker environments. The first hour of the video includes overview content and the demonstrations. The rest of the recording is questions from the attendees.
Since I started playing with Docker I have been thinking that its network implementation is something that will need to be improved before I could really use it in production. It is based on container links and service discovery but it only works for host-local containers. This creates issues for a few use cases, for example when you are setting up services that need advanced network features like broadcasting/multicasting for clustering.
Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I’m going to give an overview of a new feature I’ve been working on, which was released this week with Rancher 0.16: a Docker load balancing service. One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update and scale up an integrated load balancing service to meet their application needs, using either Rancher’s UI or API.
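At its simplest, the traffic distribution a load balancer performs is round-robin across container endpoints. A minimal sketch (the backend addresses are made up, and Rancher’s actual service does much more, including health checks and scaling):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy balancer: hands out backend addresses in rotation."""
    def __init__(self, backends):
        self._backends = cycle(backends)

    def pick(self):
        """Choose the backend for the next incoming request."""
        return next(self._backends)

lb = RoundRobinBalancer(["10.0.0.2:8080", "10.0.0.3:8080"])
print([lb.pick() for _ in range(4)])
# ['10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.2:8080', '10.0.0.3:8080']
```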
I recently compared several Docker monitoring tools and services. Since the article went live we have gotten feedback about additional tools that should be included in our survey. I would like to highlight two such tools: Prometheus and Sysdig Cloud. Prometheus is a capable self-hosted solution which is easier to manage than Sensu. Sysdig Cloud, on the other hand, provides us with another hosted service much like Scout and Datadog.
Virtual machines and containers are two of my favorite technologies. I have always wondered about different ways they can work together, and it has become clear over time that these two technologies complement each other. True, there is overlap, but most people who are running containers today run them on virtual machines, and for good reason. Virtual machines provide the underlying computing resources and are typically managed by the IT operations teams. Containers, on the other hand, are managed by application developers and DevOps teams.
Over the last few months our team, with the help of Daniel Walsh (@rhatdan) from Red Hat and many other community members, have worked to add support for labels in Docker 1.6. Labels allow users to attach arbitrary key value metadata to Docker images and containers. This feature, while very simple in concept, gives us the opportunity to add many powerful features to Rancher, and will benefit everyone in the Docker ecosystem.
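To give a flavor of why simple key-value labels are so powerful, here is a sketch of label-based selection, the kind of filtering `docker ps --filter label=...` or a scheduler performs. The container records and label keys below are invented for the example:

```python
# Hypothetical container records; in real use, labels are attached via
# `docker run --label key=value ...` and read back through the API.
containers = [
    {"id": "c1", "labels": {"service": "web", "env": "prod"}},
    {"id": "c2", "labels": {"service": "db",  "env": "prod"}},
    {"id": "c3", "labels": {"service": "web", "env": "dev"}},
]

def select(containers, wanted):
    """Return ids of containers whose labels match every wanted pair."""
    return [c["id"] for c in containers
            if all(c["labels"].get(k) == v for k, v in wanted.items())]

print(select(containers, {"env": "prod"}))                   # ['c1', 'c2']
print(select(containers, {"service": "web", "env": "dev"}))  # ['c3']
```

Because the metadata travels with the image or container itself, any tool in the ecosystem can build grouping, scheduling, or policy features on top of the same labels.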
In this article, Rancher compares seven Docker monitoring options and goes over some of the common tools used to monitor containers. Visit us to learn more.
Nagios is a fantastic monitoring tool, and I wanted to see if I could get the agent to run as a system container on RancherOS, in order to monitor the host and any Docker containers running on it. It turned out to be incredibly easy. In this blog post, I’ll walk through how to launch the Nagios agent as a system container in RancherOS. Specifically, I’ll use two Vagrant boxes to cover:
Rancher Server has recently added Docker Machine support, enabling us to easily deploy new Docker hosts on multiple cloud providers via Rancher’s UI/API and automatically have those hosts registered with Rancher. For now Rancher supports DigitalOcean and Amazon EC2 clouds, and more providers will be supported in the future. Another significant feature of Rancher is its networking implementation, because it enhances and facilitates the way you connect Docker containers and those services running on them.
Recently I have been playing around with Riak and I wanted to get it running with Docker, using RancherOS and Rancher. If you’re not familiar with Riak, it is a distributed key/value store designed for high availability, fault tolerance, simplicity, and near-linear scalability. Riak is written in the Erlang programming language and runs on an Erlang virtual machine. Riak provides availability through replication, and faster operations and more capacity through partitioning, using a ring design for its cluster: hashed keys are partitioned by default into 64 partitions (or vnodes), and each vnode is assigned to one physical node (see the “From Relational to Riak” whitepaper). For example, if the cluster consists of 4 nodes, Node1 through Node4, we count around the nodes, assigning each vnode to a physical node in turn until all vnodes are accounted for, so each node ends up owning an equal share of the ring.
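The counting-around assignment described above can be sketched as follows. This is a simplification: real Riak hashes keys onto a 2^160 ring and has a more involved claim algorithm, and the node names are just labels:

```python
def build_ring(num_partitions, nodes):
    """Assign each vnode (partition index) to a physical node by
    counting around the node list until all vnodes are placed."""
    return {vnode: nodes[vnode % len(nodes)] for vnode in range(num_partitions)}

ring = build_ring(64, ["Node1", "Node2", "Node3", "Node4"])

# With 64 vnodes over 4 nodes, each node owns 64 / 4 = 16 vnodes.
owned = {}
for node in ring.values():
    owned[node] = owned.get(node, 0) + 1
print(owned)  # {'Node1': 16, 'Node2': 16, 'Node3': 16, 'Node4': 16}
```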
Yesterday we hosted our first Rancher online meetup, which was focused on how to get started with RancherOS. For those of you who weren’t able to attend our first online meetup on March 31st, we’ve posted a recording. The meetup ran for more than two hours, and included demos of RancherOS and Rancher, as well as dozens of questions about current capabilities and some of the features we’re still working on.
When we shipped Rancher 0.12 last week we added one of the more frequently requested features, support for private Docker registries. Rancher had always allowed users to provision containers from DockerHub, but many organizations run their own registries, or use private hosted registries such as Quay.io, and private DockerHub accounts. Beginning with this release, users will be able to connect their private registry directly to their Rancher environment, and deploy containers from private Docker images.
As you may have seen, Rancher recently announced our integration with docker-machine. This integration will allow users to spin up Rancher compute nodes across multiple cloud providers right from the Rancher UI. In our initial release, we supported Digital Ocean. Amazon EC2 is soon to follow and we’ll continue to add more cloud providers as interest dictates. We believe this feature will really help the Zero-to-Docker _(and Zero-to-Rancher)_ experience. But the feature itself is not the focus of this post.
This week we released Rancher 0.12, which adds support for provisioning hosts using Docker Machine. We’re really excited to get this feature out, because it makes launching Rancher-enabled Docker hosts easier than ever. If you’re not familiar with Docker Machine, it is a project that allows cloud providers to develop standard “drivers” for provisioning cloud infrastructure on the fly. You can learn more about it on the Docker website. The first cloud we’re supporting with Docker Machine is Digital Ocean.
This week we released RancherOS 0.2, which introduces support for upgrades. RancherOS is a tiny Linux distribution designed specifically to run Docker, using containers to isolate user and system processes. Given that RancherOS does just about everything with containers, it shouldn’t be a surprise that upgrading a RancherOS node is almost exactly like upgrading a Docker container. All of the upgrade procedures in RancherOS are accessed through the “rancherctl” system service.
We’re in the process of building a feature for Rancher that makes use of the Docker event stream. The stream is a useful feature of the Docker API that allows us to augment and enhance the Docker experience without wrapping or obfuscating Docker itself. Michael Crosby (@crosbymichael) gives a good overview of the Docker Events API here. If you’re looking for an introduction to Docker events, I recommend starting there. The code I’m working on in Rancher lives here: https://github.
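To give a flavor of what the event stream looks like, here is a sketch that parses a few newline-delimited JSON events of the general shape the `/events` endpoint streams. The container ID, image, and timestamps below are invented for illustration:

```python
import json

# Hypothetical sample of the newline-delimited JSON a Docker events
# endpoint streams as a container is created, started, and dies.
raw_stream = """\
{"status": "create", "id": "abc123", "from": "ubuntu:14.04", "time": 1438000000}
{"status": "start", "id": "abc123", "from": "ubuntu:14.04", "time": 1438000001}
{"status": "die", "id": "abc123", "from": "ubuntu:14.04", "time": 1438000200}
"""

def container_lifecycle(stream):
    """Parse the stream and return (status, container id) pairs in order."""
    return [(e["status"], e["id"])
            for e in map(json.loads, stream.strip().splitlines())]

print(container_lifecycle(raw_stream))
# [('create', 'abc123'), ('start', 'abc123'), ('die', 'abc123')]
```

A consumer like Rancher can react to these transitions (e.g. a `start` event) to track container state without intercepting Docker commands themselves.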
*This post is now a bit out of date. Since posting this article we’ve released the Rancher container management platform, and added full support for Mesos environments. You can read more about it at rancher.com/mesos.* In this tutorial, I will explain how to deploy a Mesos cluster in containers running on RancherOS and then make our deployment portable across different cloud platforms and virtualization systems. If you’re not familiar with Apache Mesos, it is an open-source project that provides an elastic and highly available clustering framework.
“Kubernetes running as a system service on RancherOS,” from Ivan Mikushin (@imikushin). Yesterday, Ivan Mikushin did an excellent write-up on deploying Kubernetes on RancherOS. I spent some time with it, and I think it illustrates some of the things we are most excited about with RancherOS. Specifically, in RancherOS we have a concept of system services that are deployed on a separate Docker daemon that we have called System Docker.
One of the exciting things about RancherOS is the concept of running system services as containers. It offers the chance to clearly delineate between containers running an application, and containers running agents and operating system services. This has some interesting potential implications for managing operations, such as making patching and upgrading system services simpler, setting app and organizational policies for required services, and prioritizing which services have access to system resources.
In the first part of this post, I created a full Node.js application stack using MongoDB as the application’s database and Nginx as a load balancer that distributed incoming requests to two Node.js application servers. I created the environment on Rancher using Docker containers. In this post I will go through setting up Rancher authentication with GitHub, and creating a webhook with GitHub for automatic deployments. Rancher Access Control Starting from version 0.
So last week I finally got out from my “tech” comfort zone and tried to set up a Node.js application which uses a MongoDB database; to add an extra layer of fun I used Rancher to set up the whole application stack using Docker containers. I designed a small application with Node whose only function is to calculate the number of hits on the website; you can find the code on GitHub.
Today Docker acquired SDN software maker SocketPlane. Congratulations to both the Docker and SocketPlane teams. We have worked closely with the SocketPlane team since the early Docker networking discussions and have a great amount of respect for their technical abilities. We are also happy to see Docker Inc. make a serious effort to bring SDN capabilities to the Docker platform. Many customers have told us that the lack of multi-host networking is one of the last remaining gaps that impede the widespread production use of Docker containers.
In last week’s 0.9 release we added support in Rancher for users to create new deployment environments that can be shared with colleagues. These docker environments are called projects, and are an extension of the GitHub OAuth integration we added to Rancher last month. The focus of projects is to allow teams to collaborate on Docker environments, and since our user management is connected with GitHub today, we leverage standard GitHub abstractions, such as users, teams and organizations, to support Rancher Projects.
[Usman is a server and infrastructure engineer, with experience in building large scale distributed services on top of various cloud platforms. You can read more of his work at techtraits.com, or follow him on Twitter (@usman_ismailor) or on GitHub.] Magento is an open-source content management system (CMS) offering a powerful tool-set for managing eCommerce websites. Magento is used by thousands of companies, including Nike and Office Max. Today we are going to walk through the process of setting up a Magento cluster using Docker and Rancher on the Amazon Elastic Compute Cloud (EC2).
Hi, I’m James Harris (@sir_yogi_bear), one of the engineers here @Rancher_Labs, and I am excited to announce that we added support this week for pulling and viewing Docker logs in Rancher. This feature allows users to work with their containers from the web UI in a much more involved way. Previously, there was no way to track the output of a container through Rancher. Now you can easily follow both the stdout and stderr of a container.
Thanks to Docker, Orange and Blumberg Capital for hosting a great meetup last night in San Francisco. Darren Shepherd, Chief Architect of Rancher Labs introduced RancherOS for the first time, and answered questions from the audience. Learn more about RancherOS, or download it from GitHub. If you’d like to learn more, Darren will be presenting RancherOS at an online meetup on March 31st, 2015. RancherOS Demo at Docker Meetup from Rancher Labs on Vimeo.
Today I would like to announce a new open source project called RancherOS – the smallest, easiest way to run Docker in production and at scale. RancherOS is the first operating system to fully embrace Docker, and to run all system services as Docker containers. At Rancher Labs we focus on building tools that help customers run Docker in production, and we think RancherOS will be an excellent choice for anyone who wants a lightweight version of Linux ideal for running containers.
Hi, I’m Sidhartha Mani, one of the engineers here @Rancher_Labs, and I’ve been working on the user management functionality in Rancher. This week, we released support for GitHub OAuth. I’m very excited about this, because it allows organizations to connect their GitHub org structures to Rancher and collaborate on Docker management. In this blog post I’ll show you how to set up GitHub OAuth on Rancher for your organization. Rancher-Auth 2-minute setup.
Hussein Galal is a Linux system administrator, with experience in Linux, Unix, networking, and open source technologies like Nginx, Apache, PHP-FPM, Passenger, MySQL, LXC, and Docker. You can follow Hussein on Twitter @galal_hussein. I recently used Docker and Rancher to set up a Redis cluster on DigitalOcean. Redis clustering provides a way to share data across multiple Redis instances; keys are distributed equally across instances using hash slots. Redis clusters provide a number of nice features, such as data resharding and availability between instances.
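The slot assignment itself is simple: per the Redis Cluster spec, each key is hashed with CRC16 (the XMODEM variant) and taken modulo 16384, and each instance serves a range of the resulting slots. A minimal sketch, omitting hash-tag (`{...}`) handling:

```python
def crc16_xmodem(data):
    """CRC-16/XMODEM: polynomial 0x1021, init 0, no bit reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    """Hash slot (0..16383) a key maps to in a Redis cluster."""
    return crc16_xmodem(key.encode()) % 16384

# Standard CRC-16/XMODEM check value:
print(hex(crc16_xmodem(b"123456789")))  # 0x31c3
```

Because every client computes the same slot for the same key, any node (or client) can tell which instance owns a key, which is what makes resharding a matter of moving slot ranges between instances.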
Hi everyone, I recorded a brief overview of how to launch a Rancher 0.3 environment, connect it with some resources from a few different public clouds, and then deploy an application. If you’d like to learn more about Rancher, please visit our GitHub site for information on joining the community or downloading the software. You can also schedule a demo to talk with one of our engineers about the project.
Hi Everyone, I’m Will Chan, the new VP of engineering here at Rancher, and I wanted to post an update about some of the things we’re working on here at Rancher for release later this quarter. I started at Rancher in early December, and since then I’ve been thrilled to see how many people have downloaded Rancher and are using it to manage and implement networking around Docker. I’m really excited about some of the features we’re working on, and wanted to give you a sneak peek of what’s coming over the next two months.
In my current role at Rancher Labs, we do a lot of testing and provisioning on Google Compute Engine. One of the things that we found missing were official Ubuntu and Fedora images. Fortunately, Ubuntu now has official images on GCE, and we hope that Fedora follows as well. There is an open issue to track the official progress, but in the meantime the new Fedora 21 cloud image is straightforward enough to get going.
Last week we introduced our new project, Rancher.io, at AWS Re:Invent, and it was amazing. We’d been working on the software for months, talking with good friends, old customers and former colleagues about what we were building and wondering how it would be received by users. We were anxious to share it with new people and eager to get their feedback. We were also really nervous. Four of us flew out to Vegas, set up our little booth, tested our demos and organized our piles of stickers and t-shirts.
Almost one year ago I started Stampede as an R&D project to look at the implications of Docker on cloud computing moving forward, and as such I’ve explored many ideas. After releasing Stampede, and getting so much great feedback, I’ve decided to concentrate my efforts. I’m renaming Stampede.io to Rancher.io to signify the new direction and focus the project is taking. Going forward, instead of the experimental personal project that Stampede was, Rancher will be a well-sponsored open source project focused on building a portable implementation of infrastructure services similar to EBS, VPC, ELB, and many other services.
After months of work we will be previewing Rancher at AWS Re:Invent November 11-14. Stop by Booth #455 to meet our team and get the latest on what we’re planning over the next few months.