Using GitLab Auto DevOps with Kubernetes Through Rancher's Authorized Cluster Endpoint

Introduction

In this post, we will walk through how to connect GitLab’s Auto DevOps feature with a Rancher-managed Kubernetes cluster, making use of a feature introduced in Rancher v2.2.0 called Authorized Cluster Endpoint. Readers can expect to walk away with an understanding of how GitLab integrates with Kubernetes and how Rancher simplifies this workflow with Authorized Cluster Endpoint. This article might be a good fit for Kubernetes administrators, DevOps engineers, or anyone interested in integrating their development and Kubernetes workflows.

Background

What is GitLab Auto DevOps?

Introduced in GitLab 10.0, Auto DevOps is a feature that allows you to set up a DevOps pipeline that automatically detects, builds, tests, and deploys your project. Paired with a Kubernetes cluster, this means you can deploy applications without the overhead of provisioning CI/CD resources and configuring those tools.

What is Rancher’s Authorized Cluster Endpoint?

Starting with v2.2.0, Rancher introduced a new feature called Authorized Cluster Endpoint to enable direct access to Kubernetes without proxying through Rancher. Prior to v2.2.0, if you wanted to communicate directly with your downstream Kubernetes clusters, you had to manually retrieve the kubeconfig file and the API server address from individual nodes. Not only was this cumbersome, it also didn’t provide a mechanism for controlling the granular permissions available when managing clusters through Rancher.

As of Rancher v2.2.0, when deploying a Rancher-managed cluster, the Authorized Cluster Endpoint (ACE) feature is enabled by default. ACE pushes down some of the Rancher authentication and authorization mechanics into the downstream Kubernetes cluster, allowing Rancher users to connect to these clusters directly while still adhering to security policies.

If you have explicitly given permissions to a user for certain projects, those permissions will apply when that user connects using an Authorized Cluster Endpoint. Now, you can ensure that security and policy guidelines are enforced whether users connect through Rancher or directly to your Kubernetes clusters.

You can find a more detailed explanation by visiting the documentation for the Authorized Cluster Endpoint feature.

Note

The Authorized Cluster Endpoint feature is only available for downstream Kubernetes clusters launched using Rancher Kubernetes Engine (RKE).

Prerequisites

To get started connecting GitLab Auto DevOps to Rancher-managed Kubernetes clusters, you will need the following:

  • A GitLab.com account or an account on a self-hosted GitLab instance with Auto DevOps enabled: GitLab.com accounts are already configured with Auto DevOps. If you are working with a self-hosted GitLab instance, you can find out how to enable Auto DevOps using the GitLab documentation.
  • A Rancher instance running version v2.2.0 or later: You can start Rancher in standalone mode using the quick start or create an HA installation using the high availability instructions in the documentation.
  • A Rancher-managed Kubernetes cluster: The Rancher-managed cluster needs to be provisioned via RKE. In addition, you will need an admin user in that cluster and, if you are using GitLab.com, access to the control plane nodes from the public internet.

Set Up Rancher and Kubernetes

We will start by preparing Rancher and Kubernetes for the integration. This first part of the process mainly involves gathering information.

Note

For simplicity's sake, these steps use the default admin account within Rancher. Best practice dictates that you use a dedicated user for procedures like this and restrict that user's permissions to the cluster being integrated with GitLab.

Log in to Rancher and navigate to the downstream cluster you wish to integrate. In this demo, we will target a cluster called testing, which is running on Amazon EC2 instances:

navigating to testing cluster

On the dashboard for the cluster, click the Kubeconfig File button at the top. This brings up the kubeconfig file for the cluster, which includes the Authorized Cluster Endpoint information.

The first entry in the kubeconfig file is the endpoint for the cluster via your Rancher server. Scroll down to identify the Authorized Cluster Endpoint for this cluster, which is listed as a separate cluster entry:

displaying cluster information

In my example, the name of this cluster is testing-testing-2, and the endpoint in the server field is a public IP address provided by AWS.

Copy the values of the server and certificate-authority-data fields, excluding the quotation marks, and save them.

Scroll down further in the kubeconfig file and locate your username and token:

user field from kubeconfig

Copy the token field, excluding quotation marks, and save it.

Next, decode the base64-encoded certificate authority data back into its original form and save it. A few options, depending on your tooling, include:

echo '<certificate_authority_data>' | base64 --decode
openssl enc -base64 -d <<< <certificate_authority_data>
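
If you would like to confirm that the Authorized Cluster Endpoint works before configuring GitLab, you can point kubectl directly at the values you just gathered. This is an optional check; the file name and placeholders below are illustrative, so substitute the server address, decoded CA file, and token you saved:

# Save the decoded certificate authority data to a file, e.g. ca.pem, then query
# the cluster directly through the Authorized Cluster Endpoint.
kubectl --server='<server>' \
  --certificate-authority=ca.pem \
  --token='<token>' \
  get nodes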

Set Up a GitLab Project

With the information we’ve gathered from Rancher, we can now focus on configuring GitLab. We will start by creating a new project within GitLab that will integrate with our Kubernetes cluster using the Auto DevOps feature.

To begin, log in to GitLab and select New Project.

On the New Project page, select the Create from template tab. This will provide you with a list of project templates to use. Select NodeJS Express, and click Use template:

nodejs express template selection

Give the project a name, and set the Visibility Level to Public. Click Create project when you are finished.

Note

As of this writing, private visibility is experimental for GitLab’s Auto DevOps feature.

In the menu pane on the left side of the project page, select Settings > CI/CD. Expand the Environment variables section, and set the following variables:

Name                           Value
POSTGRES_ENABLED               false
TEST_DISABLED                  true
CODE_QUALITY_DISABLED          true
LICENSE_MANAGEMENT_DISABLED    true
SAST_DISABLED                  true
DEPENDENCY_SCANNING_DISABLED   true
CONTAINER_SCANNING_DISABLED    true

We are disabling these features because they are not needed for our simple example and would extend the amount of time required to deploy. In a real project, you might want to keep some of these options enabled, depending on your requirements:

cicd variable selection

Click Save variables to complete your GitLab project configuration.
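
If you prefer to script this configuration, the same variables can also be created through GitLab's project-level variables API instead of the UI. This is an optional alternative; the project ID and access token below are placeholders, and you would repeat the call for each variable in the table above:

# Create one CI/CD variable via the GitLab API (repeat per variable).
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --form "key=POSTGRES_ENABLED" \
  --form "value=false" \
  "https://gitlab.com/api/v4/projects/<project_id>/variables"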

Connect GitLab and Rancher

Now, we’re ready to integrate our GitLab project with our Rancher-managed Kubernetes cluster.

In GitLab, select your newly created project. In the menu on the left, select Operations > Kubernetes. Click the green Add Kubernetes cluster button. On the next page, select the Add existing cluster tab.

Fill in the fields using the following details:

Field                     Value
Kubernetes Cluster Name   Any name you like
API URL                   The value of the server field from above
CA Certificate            The base64-decoded CA certificate from above
Token                     The token from above
Project Namespace         Optional
RBAC-enabled cluster      Checked

Click Add Kubernetes cluster. GitLab will add the cluster and create a new namespace within it. You can check the Rancher interface to confirm that the new namespace was created.

Note

The first thing that GitLab does when connecting to a cluster is create a namespace for the project. If you do not see a namespace created after a few moments, something may have gone wrong.
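
A quick way to check from the command line, using either the kubeconfig from Rancher or the Authorized Cluster Endpoint values gathered earlier, is to list the namespaces. The exact name varies; GitLab typically derives it from the project name and ID:

# Look for a namespace named after your GitLab project, e.g. <project-name>-<project-id>.
kubectl get namespaces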

Once you add the cluster to GitLab, a list of applications to install into the cluster will appear. The first of these is Helm Tiller. Go ahead and click Install to add it to the cluster.
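
If you want to verify the Tiller installation outside of the GitLab UI, a quick look at the gitlab-managed-apps namespace should show the Tiller pod running once the install completes (the pod name shown is illustrative):

# The Tiller pod is deployed into the gitlab-managed-apps namespace.
kubectl get pods -n gitlab-managed-apps
# NAME                            READY   STATUS    RESTARTS   AGE
# tiller-deploy-xxxxxxxxx-xxxxx   1/1     Running   0          1m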

Next, install Ingress, which will allow GitLab to route traffic into your application:

install ingress chart

Depending on how you have configured your cluster, your ingress endpoint may or may not populate automatically. For this tutorial, I will be using a xip.io hostname to point traffic at a single node. For your use case, you may want to set up a wildcard domain and point it at this ingress (or at your node IPs, etc.).

Once you deploy the ingress, scroll to the top of the page and locate the Base domain field. Enter the public IP address of one of your nodes, followed by .xip.io. This will create a wildcard domain that resolves to that IP address, which is sufficient for our example:

fill in the base domain
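
This works because xip.io is a wildcard DNS service: any hostname that ends in an embedded IP address resolves back to that address, so no DNS records need to be created. For example, with a placeholder node IP of 203.0.113.10:

# Any hostname under 203.0.113.10.xip.io resolves to 203.0.113.10.
dig +short myapp.203.0.113.10.xip.io
# 203.0.113.10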

Next, in the navigation pane, select Settings > CI/CD. Expand the Auto DevOps section, and check the Default to Auto DevOps pipeline box. Not only will this make Auto DevOps the default, it will also trigger a build. Leave the Deployment strategy set to Continuous deployment to production:

checking the autodevops box

Once you check the Auto DevOps box, a pipeline run will kick off. Navigate to CI/CD > Pipelines in GitLab. You should see something similar to the following image, which indicates that GitLab is in the process of deploying your application:

GitLab pipeline running

Validating the Deploy in Rancher

Let’s head back to Rancher so we can check up on our deployment and see how resources created on our behalf translate to Kubernetes objects in the Rancher interface.

In Rancher, navigate to your cluster and click on Projects/Namespaces in the navigation menu at the top.

GitLab has created two namespaces on your behalf: gitlab-managed-apps and a second, application-specific namespace. The gitlab-managed-apps namespace contains resources such as the NGINX ingress and the Helm Tiller instance used to deploy applications. The application-specific namespace contains the deployments of your app.
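
You can inspect the same objects from the command line if you prefer. The application namespace name below is a placeholder; GitLab derives it from the project name and ID, so substitute the name shown in Rancher:

# Supporting components installed by GitLab live in gitlab-managed-apps;
# the application itself lives in its own namespace.
kubectl get deployments,services,ingresses -n <project-name>-<project-id>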

To visualize these namespaces in the Rancher UI, let's move them into the Default project (any other project will also work). Click the Move button and select your desired project:

move namespaces interface

Once you have moved the namespaces, navigate to the project they now belong to and open the Workloads page. The page will show your new deployment in its application-specific namespace:

app specific namespace table

Note the 443/https link under the name of the deployment. Clicking that link will take you to the wildcard domain ingress for your deployment. If all goes well, you should see this page:

welcome to express page
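
If you would rather verify from the command line, a request to the ingress hostname should return the same page. The hostname below is illustrative and follows the wildcard base domain configured earlier; add -k to skip certificate verification if a trusted certificate has not been issued yet:

curl -k https://<app-name>.<node-ip>.xip.io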

Conclusion

Congratulations! You have just connected GitLab’s Auto DevOps with a Rancher-managed Kubernetes cluster using the Authorized Cluster Endpoint for secure, direct connectivity.

As you explore other areas of Rancher, you may notice the other objects that GitLab has created on your behalf. For instance, the Load Balancing tab displays the L7 ingress that was deployed as well as the hostnames that were created. You can also see the internal services for the application under the Service Discovery tab.

GitLab's Auto DevOps feature is designed not only to be easy to use, but also to be customizable and powerful. For our demo, we disabled some advanced features such as automatic testing, dependency scanning, and license management. These features can be re-enabled and configured to provide additional value to your development environment. Beyond Auto DevOps, GitLab also provides CI/CD with .gitlab-ci.yml files, which allow for extensive customization. Learn about these topics and more on GitLab's documentation site.

Build a CI/CD Pipeline on Kubernetes and Rancher

One of the most common uses for Kubernetes is to improve development operations, and as part of that, teams need to determine the best way to integrate their CI workflows with Kubernetes.

In our August 2018 Rancher Online Meetup, we dove into how to build a CI/CD workflow with Rancher and Kubernetes. We looked at best practices for building pipelines with containers, and some of the tools that make it easier. Download the free recorded video and slides.

Eamon Bauman
Field Engineer, Rancher Labs
Eamon Bauman is a developer and systems engineer with 10+ years in the industry. He has worked in telecommunications, software development, higher education and now the cloud-native field, where he is a field engineer at Rancher Labs.