Using Kubernetes API from Go



Last month I had the great pleasure of attending KubeCon 2017, which took place in Austin, TX. The conference was super informative, and deciding which session to join was really hard because all of them were great. But what deserves special recognition is how well the organizers respected the attendees’ diversity of Kubernetes experience. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, tools and the community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.

In this article we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in the Go language using the client-go library.

Kubernetes is a platform

Kubernetes can be liked for many reasons. As a user, you appreciate its feature richness, stability and performance. As a contributor, you find the Kubernetes open source community not only large, but approachable and responsive. But what really makes Kubernetes appealing to a third-party developer is its extensibility: the project provides many ways to add new features and extend existing ones without disrupting the main code base. And that’s what makes Kubernetes a platform.

[Kubernetes Cluster Diagram]

As the diagram shows, every Kubernetes cluster component can be extended in a certain way, whether it is the Kubelet or the API server. Today we are going to focus on the “Custom Controller” approach; I’ll refer to it as a Kubernetes Controller, or simply a Controller, from now on.

What exactly is a Kubernetes Controller?

The most common definition of a controller is “code that brings the current state of the system to the desired state”. But what exactly does that mean? Let’s look at the Ingress controller as an example. Ingress is a Kubernetes resource that lets you define external access to the services in a cluster, typically over HTTP and usually with load balancing support. But the Kubernetes core code has no ingress implementation. The implementation is left to third-party controllers, which:

  • Watch ingress/services/endpoints resource events (Create/Update/Remove)
  • Program internal or external Load Balancer
  • Update Ingress with the Load Balancer address

The “desired” state of the ingress is an IP address pointing to a functioning Load Balancer programmed with the rules the user defined in the Ingress specification, and an external ingress controller is responsible for bringing the ingress resource to this state. The implementation of the controller for the same resource, as well as the way to deploy it, can vary. You can pick the nginx controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your ingress controller outside of the Kubernetes cluster and program F5 as a Load Balancer. There are no strict rules; Kubernetes is flexible in that way.

Client-go

There are several ways to get information about a Kubernetes cluster and its resources: the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. Client-go is the most popular library used by tools written in Go, and there are clients for many other languages out there (Java, Python, etc.). But if you want to write your very first controller, I encourage you to try Go with client-go. Kubernetes is written in Go, and I find it easier to develop a plugin in the same language the main project is written in.

Let’s build…

The best way to get familiar with the platform and the tools around it is to write something. Let’s start simple, and implement a controller that:

  • Monitors Kubernetes nodes
  • Alerts when the storage occupied by images on a node changes

The code source can be found here.

Groundwork

Set up the project

As a developer, I like to sneak a peek at the tools my peers use to make their life easier. Here I’m going to share three favorite tools of mine that will help us with our very first project.

  1. go-skel - a skeleton generator for Go microservices. Just run ./skel.sh test123, and it will create the skeleton for a new Go project named test123.
  2. trash - a Go vendor management tool. There are many Go dependency management tools out there, but trash has proved to be simple to use and great when it comes to transitive dependency management.
  3. dapper - a tool to wrap any existing build tool in a consistent environment

Add client-go as a dependency

In order to use client-go code, we have to pull it in as a dependency of our project. Add it to vendor.conf and run trash: it will automatically pull all the dependencies defined in vendor.conf into the vendor folder of the project. Make sure the client-go version is compatible with the Kubernetes version of your cluster.
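For illustration, a vendor.conf sketch might look like the following; the project path is a placeholder, and the client-go tag is only an example - pick the one matching your cluster’s Kubernetes version:

    # package
    github.com/example/kube-image-monitor

    k8s.io/client-go v5.0.0 https://github.com/kubernetes/client-go.git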

Create a client

Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, your application is containerized and gets deployed as a Kubernetes Pod. That gives you certain perks - you can choose the way to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure a healthcheck for it, etc. When your application runs outside of the cluster, you have to manage it yourself. Let’s make our tool flexible and support both ways of creating the client, based on a config flag. We are going to use the out-of-cluster mode while debugging the app, as this way you do not have to build the image and redeploy it as a Kubernetes Pod every time. Once the app is tested, we can build an image and deploy it in the cluster. In both cases a config is built and passed to kubernetes.NewForConfig to generate the client, as the sketch below shows.
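Here is a minimal sketch of that flag-driven client creation; the flag name and error handling are illustrative, but rest.InClusterConfig, clientcmd.BuildConfigFromFlags and kubernetes.NewForConfig are the standard client-go entry points:

    package main

    import (
        "flag"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    // getClientset builds a clientset from the in-cluster service account
    // (when kubeconfig is empty) or from a kubeconfig file.
    func getClientset(kubeconfig string) (*kubernetes.Clientset, error) {
        var config *rest.Config
        var err error
        if kubeconfig == "" {
            // Running as a Pod inside the cluster.
            config, err = rest.InClusterConfig()
        } else {
            // Running outside the cluster, e.g. while debugging.
            config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
        }
        if err != nil {
            return nil, err
        }
        return kubernetes.NewForConfig(config)
    }

    func main() {
        kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig; empty for in-cluster")
        flag.Parse()
        clientset, err := getClientset(*kubeconfig)
        if err != nil {
            panic(err)
        }
        _ = clientset // used by the code in the following sections
    }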

Play with basic CRUDs

For our tool, we need to monitor Nodes. It is a good idea to get familiar with the way CRUD operations are done using client-go before implementing the actual logic. The sketch after this list shows how to:

  • List nodes named “minikube”, which can be achieved by passing a FieldSelector filter to the command
  • Update the node with a new annotation
  • Delete the node with gracePeriod=10 seconds - meaning that the removal will happen only 10 seconds after the command is issued
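A minimal sketch of those three operations, assuming the clientset created in the previous step; it uses the pre-context-aware client-go signatures from the era of this post, and the node name, annotation key and grace period are illustrative:

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeCRUD(clientset *kubernetes.Clientset) error {
        // List nodes named "minikube" by passing a FieldSelector filter.
        nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
            FieldSelector: "metadata.name=minikube",
        })
        if err != nil {
            return err
        }
        if len(nodes.Items) == 0 {
            return fmt.Errorf("no matching nodes found")
        }

        // Update the first matching node with a new annotation.
        node := &nodes.Items[0]
        if node.Annotations == nil {
            node.Annotations = map[string]string{}
        }
        node.Annotations["example.com/touched"] = "true" // hypothetical key
        if _, err := clientset.CoreV1().Nodes().Update(node); err != nil {
            return err
        }

        // Delete the node with gracePeriod=10 seconds: the removal only
        // happens 10 seconds after the call is issued.
        grace := int64(10)
        return clientset.CoreV1().Nodes().Delete(node.Name,
            &metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }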

All of that is done using the clientset we created in the previous step. We also need information about the images on the node; it can be retrieved by accessing the corresponding field of the node’s status, for example:
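A small helper along these lines (the function name is mine) sums up the sizes of the images the Kubelet reports in NodeStatus.Images:

    import v1 "k8s.io/api/core/v1"

    // imageStorage returns the total storage occupied by images on the
    // node, based on the image list reported in the node status.
    func imageStorage(node *v1.Node) int64 {
        var total int64
        for _, image := range node.Status.Images {
            total += image.SizeBytes
        }
        return total
    }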

Watch/Notify using Informer

Now we know how to fetch the nodes from the Kubernetes APIs and get image information from them. But how do we monitor changes to the images’ size? The simplest way would be to periodically poll the nodes, calculate the current image storage, and compare it with the result from the previous poll. The downside is that we execute a list call to fetch all the nodes whether they have changed or not, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node changes, and only then run our logic. That’s where the client-go Informer comes to the rescue.

In the sketch below, we create an Informer for the Node object by passing: a watchList instruction on how to monitor Nodes, the Node object type, and 30 seconds as a resync period, which instructs the informer to periodically re-deliver the node even when there were no changes to it - a nice way to fall back in case an update event gets dropped for some reason. As the last argument, we pass two callback functions - handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic that has to be triggered on node changes - finding out whether the storage occupied by images on the node has changed. NewInformer gives back two objects - a controller and a store. Once the controller is started, the watch on node.update and node.add starts, and the callback functions get called. The store is an in-memory cache which gets updated by the informer, and you can fetch node objects from the cache instead of calling the Kubernetes APIs directly.

As we have a single controller in our project, using the regular Informer is fine. But if your future project ends up having several controllers for the same object, using a SharedInformer is recommended: instead of creating multiple regular informers - one per controller - you can register one SharedInformer, let each controller register its own set of callbacks, and get back a shared cache in return, which reduces the memory footprint.
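A minimal sketch of that informer wiring, again with the client-go signatures of that era; the handler names match the callbacks mentioned above, while the alerting logic is a simplified assumption that reuses the imageStorage helper from the previous section:

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func watchNodes(clientset *kubernetes.Clientset, stop <-chan struct{}) cache.Store {
        // watchList tells the informer how to list and watch Node objects.
        watchList := cache.NewListWatchFromClient(
            clientset.CoreV1().RESTClient(), "nodes",
            v1.NamespaceAll, fields.Everything())

        // NewInformer returns an in-memory store (cache) and a controller
        // that keeps the store in sync and invokes our callbacks.
        store, controller := cache.NewInformer(
            watchList,
            &v1.Node{},
            30*time.Second, // resync period: re-deliver nodes periodically
            cache.ResourceEventHandlerFuncs{
                AddFunc:    handleNodeAdd,
                UpdateFunc: handleNodeUpdate,
            },
        )
        go controller.Run(stop)
        return store
    }

    func handleNodeAdd(obj interface{}) {
        node := obj.(*v1.Node)
        fmt.Printf("node %s added, images occupy %d bytes\n",
            node.Name, imageStorage(node))
    }

    func handleNodeUpdate(old, current interface{}) {
        oldNode, curNode := old.(*v1.Node), current.(*v1.Node)
        // Alert only when the storage occupied by images actually changed.
        if imageStorage(oldNode) != imageStorage(curNode) {
            fmt.Printf("node %s: image storage changed from %d to %d bytes\n",
                curNode.Name, imageStorage(oldNode), imageStorage(curNode))
        }
    }

The store returned by NewInformer is the in-memory cache mentioned above; keep a reference to it if you want to read nodes without hitting the API server.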

Deployment time

Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in out-of-cluster mode. To change the message output, deploy a pod using an image which is not yet present on the node. Once the basic functionality is tested, it is time to try running the tool in cluster mode. For that we have to create an image first: define a Dockerfile, then run docker build to generate an image you can use to deploy the pod in Kubernetes. After that, your application can run as a Pod in the Kubernetes cluster using a standard Deployment definition, as sketched below.
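A minimal Dockerfile sketch; the binary and image names are placeholders, and the binary is assumed to be statically built (CGO_ENABLED=0) so it can run on a minimal base image:

    FROM alpine:3.6
    COPY kube-image-monitor /usr/bin/kube-image-monitor
    ENTRYPOINT ["/usr/bin/kube-image-monitor"]

Build it with something like docker build -t kube-image-monitor:dev . and reference that image in your Deployment (or DaemonSet) manifest.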

So we have:

  • Created a Go project
  • Added client-go package dependencies to it
  • Created a client to talk to the Kubernetes API
  • Defined an Informer that watches node object changes and executes a callback function when they happen
  • Implemented the actual logic in the callback definition
  • Tested the code by running the binary outside of the cluster, and then deployed it inside the cluster

If you have any comments or questions on the topic, please feel free to share them with me!

Alena Prokharchyk
twitter: @lemonjet
github: https://github.com/alena1108
