Deploying and Scaling Jenkins on Kubernetes


Introduction

Jenkins is an open-source continuous integration and continuous delivery tool, which can be used to automate building, testing, and deploying software. It is widely considered the most popular automation server, used by more than a million users worldwide. Some advantages of Jenkins include:

  • Open-source software with extensive community support
  • Java-based codebase, making it portable to all major platforms
  • A rich ecosystem of more than 1000 plugins

Jenkins works well with all popular Source Control Management systems (Git, SVN, Mercurial and CVS), popular build tools (Ant, Maven, Grunt), shell scripts and Windows batch commands, as well as testing frameworks and report generators. Jenkins plugins provide support for technologies like Docker and Kubernetes, which enable the creation and deployment of cloud-based microservice environments, both for testing as well as production deployments.

Jenkins supports a master-agent architecture (many build agents completing work scheduled by a master server), making it highly scalable. The master’s job is to schedule build jobs, distribute them to agents for execution, monitor the agents, and collect the build results. Master servers can also execute build jobs directly.

The agents’ task is to build the job sent by the master. A job can be configured to run on a particular type of agent, or if there are no special requirements, Jenkins can simply choose the next available agent.

Jenkins scalability provides many benefits:

  • Running many build plans in parallel
  • Automatically spinning up and removing agents to save costs
  • Distributing the load

Even though Jenkins includes scalability features out of the box, the process of configuring scaling is not always straightforward. There are many ways to scale Jenkins, but one of the most powerful is to run Jenkins on Kubernetes.

What is Kubernetes?

Kubernetes is an open-source container orchestration tool. Its main purpose is to manage containerized applications on clusters of nodes by helping operators deploy, scale, update, and maintain their services, and providing mechanisms for service discovery. You can learn more about what Kubernetes is and what it can do by checking out the official documentation.

Kubernetes is one of the best tools for managing scalable, container-based workloads. Most applications, including Jenkins, can be containerized, which makes Kubernetes a very good option.

Project Goals

Before we begin, let’s take a moment and describe the system we are attempting to build by putting Jenkins on Kubernetes.

We want to start by deploying a Jenkins master instance onto a Kubernetes cluster. We will use Jenkins’ Kubernetes plugin to scale Jenkins on the cluster by provisioning dynamic agents to accommodate its current workloads. The plugin will create a Kubernetes Pod for each build by launching an agent based on a specific Docker image. When the build completes, Jenkins will remove the Pod to save resources. Agents will be launched using JNLP (Java Network Launch Protocol), so the containers will be able to automatically connect to the Jenkins master once they are up and running.

Prerequisites and Setup

To complete this guide, you will need the following:

  • A Linux box to run Rancher: We will also use this to build custom Jenkins images. Follow the Rancher installation quick start guide to install Docker and Rancher on an appropriate host.
  • Docker Hub account: We will need an account with a container image repository to push the custom images for our Jenkins master and agents.
  • GCP account: We will provision our Kubernetes cluster on GCP. The free tier of Google’s cloud platform should be enough to complete this guide.

Building Custom Images for Jenkins

Let’s start by building custom images for our Jenkins components and pushing them to Docker Hub.

Log in to the Linux server where you will be running Rancher and building images. If you haven’t already done so, install Docker and Rancher on the host by following the Rancher installation quick start guide. Once the host is ready, we can prepare our Dockerfiles.
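
For reference, a minimal sketch of that host preparation is shown below. The package names and flags are assumptions based on the quick start guide at the time of writing (newer Rancher releases may require additional options such as --privileged), so treat the official guide as authoritative:

# Install and start Docker (CentOS/RHEL example; adjust for your distribution)
sudo yum install -y docker
sudo systemctl enable --now docker

# Run Rancher as a single container, exposing the UI on ports 80/443
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher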

Writing the Jenkins Master Dockerfile

We can begin by creating a file called Dockerfile-jenkins-master in the current directory to define the Jenkins master image:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-master

Inside, include the following Dockerfile build instructions. These instructions use the main Jenkins Docker image as a base and configure the plugins we will use to deploy onto a Kubernetes cluster:

FROM jenkins/jenkins:lts

# Plugins for better UX (not mandatory)
RUN /usr/local/bin/install-plugins.sh ansicolor
RUN /usr/local/bin/install-plugins.sh greenballs

# Plugin for scaling Jenkins agents
RUN /usr/local/bin/install-plugins.sh kubernetes
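# Note: newer jenkins/jenkins images replace install-plugins.sh with jenkins-plugin-cli,
# e.g. RUN jenkins-plugin-cli --plugins ansicolor greenballs kubernetes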

USER jenkins

Save and close the file when you are finished.

Writing the Jenkins Agent Dockerfiles

Next, we can create the Dockerfiles for our Jenkins agents. We will create two agent images to demonstrate how Jenkins selects the correct agent to provision for each job.

Create an empty file in the current directory. We will copy this to the image as an identifier for each agent we are building:

[root@rancher-instance jenkins-kubernetes]# touch empty-test-file

Now, create a new Dockerfile for the first agent image:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-slave-jnlp1

This image will copy the empty file to a uniquely named location that identifies which agent is being used:

FROM jenkins/jnlp-slave

# For testing purpose only
COPY empty-test-file /jenkins-slave1

ENTRYPOINT ["jenkins-slave"]

Save and close the file when you are finished.

Finally, define a second agent. This is identical to the previous agent, but includes a different file identifier:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-slave-jnlp2
FROM jenkins/jnlp-slave

# For testing purpose only
COPY empty-test-file /jenkins-slave2

ENTRYPOINT ["jenkins-slave"]

Save the file when you are finished.

Your working directory should now look like this:

[root@rancher-instance jenkins-kubernetes]# ls -l
total 16
-rw-r--r--. 1 root root  265 Oct 21 12:58 Dockerfile-jenkins-master
-rw-r--r--. 1 root root  322 Oct 21 13:16 Dockerfile-jenkins-slave-jnlp1
-rw-r--r--. 1 root root  315 Oct 21 13:05 Dockerfile-jenkins-slave-jnlp2

Building the Images and Pushing to Docker Hub

With the Dockerfiles written, we are now ready to build and push the images to Docker Hub.

Let’s start by building the image for the Jenkins master:

Note: In the command below, replace <dockerhub_user> with your Docker Hub account name.

[root@rancher-instance jenkins-kubernetes]# docker build -f Dockerfile-jenkins-master -t <dockerhub_user>/jenkins-master .

The full output will look similar to this:

Sending build context to Docker daemon 12.29 kB
Step 1/5 : FROM jenkins/jenkins:lts
Trying to pull repository docker.io/jenkins/jenkins ... 
lts: Pulling from docker.io/jenkins/jenkins
05d1a5232b46: Pull complete 
5cee356eda6b: Pull complete 
89d3385f0fd3: Pull complete 
80ae6b477848: Pull complete 
40624ba8b77e: Pull complete 
8081dc39373d: Pull complete 
8a4b3841871b: Pull complete 
b919b8fd1620: Pull complete 
2760538fe600: Pull complete 
bcb851da81db: Pull complete 
eacbf73f87b6: Pull complete 
9a7e396a0cbd: Pull complete 
8900cde5602e: Pull complete 
c8f62fde3f4d: Pull complete 
eb91939ba069: Pull complete 
b894a41fcbe2: Pull complete 
b3c60e932390: Pull complete 
18f663576636: Pull complete 
4445e4b557b3: Pull complete 
f09e9b4be8ed: Pull complete 
e3abe5324295: Pull complete 
432eff1ecbb4: Pull complete 
Digest: sha256:d5c835407130a393becac222b979b120c675f8cd815fadd085adb76b216e4ce1
Status: Downloaded newer image for docker.io/jenkins/jenkins:lts
 ---> 9cff19ad8c8b
Step 2/5 : RUN /usr/local/bin/install-plugins.sh ansicolor
 ---> Running in ff752eeb107d

Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Using version-specific update center: https://updates.jenkins.io/2.138...
Downloading plugins...
Downloading plugin: ansicolor from https://updates.jenkins.io/2.138/latest/ansicolor.hpi
 > ansicolor depends on workflow-step-api:2.12;resolution:=optional
Skipping optional dependency workflow-step-api

WAR bundled plugins:


Installed plugins:
ansicolor:0.5.2
Cleaning up locks
 ---> a018ec9e38e6
Removing intermediate container ff752eeb107d
Step 3/5 : RUN /usr/local/bin/install-plugins.sh greenballs
 ---> Running in 3505e21268b2

Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Using version-specific update center: https://updates.jenkins.io/2.138...
Downloading plugins...
Downloading plugin: greenballs from https://updates.jenkins.io/2.138/latest/greenballs.hpi

WAR bundled plugins:


Installed plugins:
ansicolor:0.5.2
greenballs:1.15
Cleaning up locks
 ---> 0af36c7afa67
Removing intermediate container 3505e21268b2
Step 4/5 : RUN /usr/local/bin/install-plugins.sh kubernetes
 ---> Running in ed0afae3ac94

Creating initial locks...
Analyzing war...
Registering preinstalled plugins...
Using version-specific update center: https://updates.jenkins.io/2.138...
Downloading plugins...
Downloading plugin: kubernetes from https://updates.jenkins.io/2.138/latest/kubernetes.hpi
 > kubernetes depends on workflow-step-api:2.14,apache-httpcomponents-client-4-api:4.5.3-2.0,cloudbees-folder:5.18,durable-task:1.16,jackson2-api:2.7.3,variant:1.0,kubernetes-credentials:0.3.0,pipeline-model-extensions:1.3.1;resolution:=optional
Downloading plugin: workflow-step-api from https://updates.jenkins.io/2.138/latest/workflow-step-api.hpi
Downloading plugin: apache-httpcomponents-client-4-api from https://updates.jenkins.io/2.138/latest/apache-httpcomponents-client-4-api.hpi
Downloading plugin: cloudbees-folder from https://updates.jenkins.io/2.138/latest/cloudbees-folder.hpi
Downloading plugin: durable-task from https://updates.jenkins.io/2.138/latest/durable-task.hpi
Downloading plugin: jackson2-api from https://updates.jenkins.io/2.138/latest/jackson2-api.hpi
Downloading plugin: variant from https://updates.jenkins.io/2.138/latest/variant.hpi
Skipping optional dependency pipeline-model-extensions
Downloading plugin: kubernetes-credentials from https://updates.jenkins.io/2.138/latest/kubernetes-credentials.hpi
 > workflow-step-api depends on structs:1.5
Downloading plugin: structs from https://updates.jenkins.io/2.138/latest/structs.hpi
 > kubernetes-credentials depends on apache-httpcomponents-client-4-api:4.5.5-3.0,credentials:2.1.7,plain-credentials:1.3
Downloading plugin: credentials from https://updates.jenkins.io/2.138/latest/credentials.hpi
Downloading plugin: plain-credentials from https://updates.jenkins.io/2.138/latest/plain-credentials.hpi
 > cloudbees-folder depends on credentials:2.1.11;resolution:=optional
Skipping optional dependency credentials
 > plain-credentials depends on credentials:2.1.5
 > credentials depends on structs:1.7

WAR bundled plugins:


Installed plugins:
ansicolor:0.5.2
apache-httpcomponents-client-4-api:4.5.5-3.0
cloudbees-folder:6.6
credentials:2.1.18
durable-task:1.26
greenballs:1.15
jackson2-api:2.8.11.3
kubernetes-credentials:0.4.0
kubernetes:1.13.0
plain-credentials:1.4
structs:1.17
variant:1.1
workflow-step-api:2.16
Cleaning up locks
 ---> dd19890f3139
Removing intermediate container ed0afae3ac94
Step 5/5 : USER jenkins
 ---> Running in c1066861d5a3
 ---> 034e27e479c5
Removing intermediate container c1066861d5a3
Successfully built 034e27e479c5

When the command returns, check the newly created image:

[root@rancher-instance jenkins-kubernetes]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
<dockerhub_user>/jenkins-master      latest              034e27e479c5        16 seconds ago      744 MB
docker.io/jenkins/jenkins            lts                 9cff19ad8c8b        10 days ago         730 MB
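
Optionally, you can smoke-test the freshly built image on the local Docker host before pushing it. This is only a quick sanity check; the container name is an arbitrary choice, and Jenkins listens on port 8080 inside the container:

# Start the image locally (stop it with Ctrl+C or docker stop when done)
docker run --rm -p 8080:8080 --name jenkins-master-test <dockerhub_user>/jenkins-master

# In another terminal, confirm that the Jenkins web UI responds
curl -I http://localhost:8080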

Log in to Docker Hub using the credentials of your account:

[root@rancher-instance jenkins-kubernetes]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password: 
Login Succeeded

Now, push the image to your Docker Hub account:

Note: In the command below, be sure to substitute your own Docker Hub account again.

[root@rancher-instance jenkins-kubernetes]# docker push <dockerhub_user>/jenkins-master

The full output will look similar to this:

The push refers to a repository [docker.io/calinrus/jenkins-master]
b267c63b5961: Pushed 
2cd1dc56ef56: Pushed 
e99d7d8d116f: Pushed 
8d117101392a: Mounted from jenkins/jenkins 
c2607b4e8ae4: Mounted from jenkins/jenkins 
81e4bc7cb1f1: Mounted from jenkins/jenkins 
8bac294d4ee8: Mounted from jenkins/jenkins 
707f669f3d58: Mounted from jenkins/jenkins 
ac2b51b56ac6: Mounted from jenkins/jenkins 
1b2b61bef21f: Mounted from jenkins/jenkins 
efe1c25100f5: Mounted from jenkins/jenkins 
8e656983ccf7: Mounted from jenkins/jenkins 
ba000aef226d: Mounted from jenkins/jenkins 
a046c3cdf994: Mounted from jenkins/jenkins 
67e27eb293e8: Mounted from jenkins/jenkins 
bdd1835d949d: Mounted from jenkins/jenkins 
84bbcb8ef932: Mounted from jenkins/jenkins 
0d67aa2185d5: Mounted from jenkins/jenkins 
3499b696191f: Pushed 
3b2a1688b8f3: Pushed 
b7c56a9790e6: Mounted from jenkins/jenkins 
ab016c9ea8f8: Mounted from jenkins/jenkins 
2eb1c9bfc5ea: Mounted from jenkins/jenkins 
0b703c74a09c: Mounted from jenkins/jenkins 
b28ef0b6fef8: Mounted from jenkins/jenkins 
latest: digest: sha256:6b2c8c63eccd795db5b633c70b03fe1b5fa9c4a3b68e3901b10dc3af7c3549f0 size: 5552


You will need to repeat similar commands to build the two images for the Jenkins JNLP agents:

Note: Substitute your Docker Hub account name for <dockerhub_user> in the commands below.

docker build -f Dockerfile-jenkins-slave-jnlp1 -t <dockerhub_user>/jenkins-slave-jnlp1 .
docker push <dockerhub_user>/jenkins-slave-jnlp1

docker build -f Dockerfile-jenkins-slave-jnlp2 -t <dockerhub_user>/jenkins-slave-jnlp2 .
docker push <dockerhub_user>/jenkins-slave-jnlp2

If everything was successful, the three new repositories should now be visible in your Docker Hub account.

Using Rancher to Deploy a Cluster

Now that our images are published, we can use Rancher to help us deploy a GKE cluster. If you set up Rancher earlier, you should be able to log into your instance by visiting your server’s IP address with a web browser.

Next, create a new GKE cluster. You will need to log in to your Google Cloud account to create a service account with the appropriate access. Follow the Rancher documentation on creating a GKE cluster to learn how to create a service account and then provision a cluster with Rancher.
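
If you prefer to create the cluster directly with Google’s CLI and then manage it from Rancher, a rough equivalent is sketched below. The cluster name, zone, node count, and machine type are only example values for this demo:

# Create a small GKE cluster (adjust name, zone and sizing to your needs)
gcloud container clusters create jenkins \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-2

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials jenkins --zone us-central1-a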

Deploying Jenkins to the Cluster

As soon as the cluster is ready, we can deploy the Jenkins master and create some services. If you are familiar with kubectl, you can do all of this from the command line (a sketch is included after the service definitions below), but you can just as easily deploy every component you need through Rancher’s UI.

Regardless of how you choose to submit workloads to your cluster, create the following files on your local computer to define the objects you need to create.

Start by creating a file to define the Jenkins deployment:

[root@rancher-instance k8s]# vi deployment.yml 

Inside, paste the following:

Note: Make sure to change <dockerhub_user> to your Docker Hub account name in the file below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: <dockerhub_user>/jenkins-master
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}

Next, create a file to define the two services we will create.

One will be a LoadBalancer service, which will provision a public IP address allowing us to access Jenkins from the Internet. The other will be a ClusterIP service needed for internal communication between the master and the agents that will be provisioned later:

[root@rancher-instance k8s]# vi service.yml 

Inside, paste the following YAML structure:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: jenkins

---

apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp
spec:
  type: ClusterIP
  ports:
    - port: 50000
      targetPort: 50000
  selector:
    app: jenkins
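
If you would rather skip the Rancher UI import described next, a minimal kubectl sketch (assuming your kubeconfig already points at the GKE cluster) looks like this:

# Create the Deployment and both Services defined above
kubectl apply -f deployment.yml
kubectl apply -f service.yml

# Check that the Jenkins master Pod starts and wait for the LoadBalancer IP
kubectl get pods -l app=jenkins
kubectl get svc jenkins jenkins-jnlp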

From Rancher, click on your managed cluster (called jenkins in this demo). In the upper-left menu, select the Default project and then select the Workloads tab.

From here, click Import YAML. On the page that follows, click the Read from a file button in the upper-right corner. Choose the local deployment.yml file you created on your computer and click Import.

Rancher will deploy a pod based on your Jenkins master image to the cluster:

Next, we need to configure a way to access the UI on the Jenkins master.

In the Load Balancing tab, follow the same process you used to import the previous file. Click the Import YAML button, followed by the Read from a file button. Next, select the service.yml file from your computer and click the Import button.

Rancher will begin to create your services. Provisioning the load balancer may take a few minutes.

As soon as the service is marked as Active, you can find its public IP address by clicking the three vertical dots at the right end of the load balancer’s row and selecting View/Edit YAML. From here, scroll down to find the IP address under status > loadBalancer > ingress > ip.
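
Alternatively, if kubectl is configured for the cluster, the same address can be read directly from the service:

# Print the external IP assigned to the jenkins LoadBalancer service
kubectl get svc jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}'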

We can access the Jenkins UI by entering this IP address in a web browser.

Configuring Dynamic Build Agents

With the Jenkins master up and running, we can go ahead and configure dynamic build agents to automatically spin up Pods as necessary.

Disabling the Default Master Build Agents

In the Jenkins UI, under Build Executor Status on the left side, two executors are configured by default, waiting to pick up build jobs. These are provided by the Jenkins master.

The master instance should only be in charge of scheduling build jobs, distributing the jobs to agents for execution, monitoring the agents, and getting the build results. Since we don’t want our master instance to execute builds, we will disable these.

Click on Manage Jenkins followed by Manage Nodes.

Click the gear icon associated with the master row.

On the following page, set # of executors to 0 and click Save.

The two idle executors will be removed from the Build Executor Status on the left side of the UI.

Gathering Configuration Information

We need a few pieces of information to configure Jenkins to automatically provision build agents on our Kubernetes cluster: three from our GCP account and one from our ClusterIP service.

In your GCP account, select Kubernetes Engine, followed by Clusters, and then click on the name of your cluster. In the Details column, copy the Endpoint IP address for later reference. This is the URL we will give Jenkins to connect to the cluster.
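
If you prefer the command line, the same endpoint can be read with gcloud (using the example cluster name and zone assumed earlier):

# Print the cluster's API endpoint IP
gcloud container clusters describe jenkins \
  --zone us-central1-a \
  --format='value(endpoint)'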

Next, click Show credentials to the right of the Endpoint. Copy the Username and Password for later reference.

Now, switch over to the Rancher UI. In the upper-left menu, select the Default project on the Jenkins cluster. Select the Workloads tab in the upper navigation pane and then click the Service Discovery tab on the page.

Click the three vertical dots associated with the jenkins-jnlp row and select View/Edit YAML. Copy the values of spec > clusterIP and spec > ports > port for later reference.
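
The same two values can also be read with kubectl if you prefer:

# ClusterIP and JNLP port of the jenkins-jnlp service
kubectl get svc jenkins-jnlp -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}'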

Configuring the Jenkins Kubernetes Plugin

Back in the main Jenkins dashboard, click on Manage Jenkins, followed by Manage Plugins:

Click the Installed tab and check that the Kubernetes plugin is installed:

We can now configure the plugin. Go to Manage Jenkins and select Configure System:

Scroll to the Cloud section at the bottom of the page. Click on Add a new cloud and select Kubernetes.

On the form that follows, in the Kubernetes URL field, enter https:// followed by the cluster endpoint IP address you copied from your GCP account.

Under Credentials, click the Add button and select Jenkins. On the form that appears, enter the username and password you copied from your GCP account and click the Add button at the bottom.

When you return to the Kubernetes form, select the credentials you just added from the Credentials drop-down menu and click the Test Connection button. If the configuration is correct, the test will show “Connection test successful”.

Next, in the Jenkins tunnel field, enter the IP address and port that you retrieved from the jenkins-jnlp service in the Rancher UI, separated by a colon:

Now, scroll down to the Images section at the bottom of the page, click the Add Pod Template button, and select Kubernetes Pod Template. Fill out the Name and Labels fields with unique values to identify your first agent. We will use the label to specify which agent image should be used to run each build.

Next, in the Containers field, click the Add Container button and select Container Template. In the section that appears, fill out the following fields:

  • Name: jnlp (this is required by the Jenkins agent)
  • Docker image: <dockerhub_user>/jenkins-slave-jnlp1 (make sure to change the Docker Hub username)
  • Command to run: Delete the value here
  • Arguments to pass to the command: Delete the value here

The rest of the fields can be left as they are.

Next, click the Add Pod Template button and select Kubernetes Pod Template again. Repeat the process for the second agent image you created. Make sure to change the values to refer to your second image where applicable:

Click the Save button at the bottom to save your changes and continue.

Testing Dynamic Build Jobs

Now that our configuration is complete, we can create some build jobs to ensure that Jenkins can scale on top of Kubernetes. We will create five build jobs for each of our Jenkins agents.

On the main Jenkins page, click New Item on the left side. Enter a name for the first job for your first agent. Select Freestyle project and click the OK button.

On the next page, in the Label Expression field, type the label you set for your first Jenkins agent image. If you click out of the field, a message will appear indicating that the label is serviced by a cloud:

Scroll down to the Build Environment section and check Color ANSI Console Output.

In the Build section, click Add build step and select Execute shell. Paste the following script in the text box that appears:

#!/bin/bash

RED='\033[0;31m'
NC='\033[0m'

result=`ls / | grep -e jenkins-slave1 -e jenkins-slave2`
echo -e "${RED}Docker image is for $result ${NC}"

Click the Save button when you are finished.

Create another four jobs for the first agent by clicking New Item, filling out a new name, and using the Copy from field to copy from your first build. You can save each build without changes to duplicate the first build exactly.

Next, configure the first job for your second Jenkins agent. Click New Item, select a name for the first job for the second agent, and copy the job from your first agent again. This time, we will modify the fields on the configuration page before saving.

First, change the Label Expression field to match the label for your second agent.

Next, replace the script in the text box in the Build section with the following script:

#!/bin/bash

BLUE='\e[34m'
NC='\033[0m'

result=`ls / | grep -e jenkins-slave1 -e jenkins-slave2`
echo -e "${BLUE}Docker image is for $result ${NC}"

Click Save when you are finished.

Create four more builds for your second agent by copying from the job we just created.

Now, go to the home screen and start all ten of the jobs you just created by clicking the icon on the far right side of each row. As soon as you start them, they will be queued for execution, as indicated by the Build Queue section.

After a few seconds, Pods will begin to be created to execute the builds (you can verify this in Rancher’s Workloads tab). Jenkins will create one Pod for each job. As each agent starts, it connects to the master and receives a job from the queue to execute.
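
You can also follow the agent Pods from the command line; the following sketch (assuming kubectl access to the cluster) shows them being created while builds run and removed when they finish:

# Watch agent Pods appear and disappear as builds start and complete
kubectl get pods --watch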

As soon as an agent finishes processing its job, it is automatically removed from the cluster:

To check the status of our jobs, we can click on one from each agent. Click a build in the Build History and then click Console Output. Jobs executed by the first agent should specify that the jenkins-slave1 Docker image was used, while builds executed by the second agent should indicate that the jenkins-slave2 image was used.

If the console output matches, Jenkins is configured correctly and functioning as intended. You can now begin to customize your Kubernetes-backed build system to help your team test and release software.

Conclusion

In this article, we configured Jenkins to automatically provision build agents on demand by connecting it with a Kubernetes cluster managed by Rancher. To achieve this, we completed the following steps:

  • Created a cluster using Rancher
  • Created custom Docker images for the Jenkins master and agents
  • Deployed the Jenkins master and an L4 LoadBalancer service to the Kubernetes cluster
  • Configured the Jenkins Kubernetes plugin to automatically spawn dynamic agents on our cluster
  • Tested a scenario using multiple build jobs with dedicated agent images

The main purpose of this article was to highlight the basic configuration necessary to set up a Jenkins master and agent architecture. We saw how Jenkins launched agents using JNLP and how the containers automatically connected to the Jenkins master to receive instructions. To achieve this, we used Rancher to create the cluster, deploy a workload, and monitor the resulting Pods. Afterwards, we relied on the Jenkins Kubernetes plugin to glue together all of the different components.

Continue Training in Building Production-Ready CI/CD Pipelines for Kubernetes

To learn more, we recently presented a training on how to set up a build, test, and deploy pipeline using Helm, Jenkins, and JFrog Artifactory. Demo included! Watch the training here.
