Last month I had the great pleasure of attending KubeCon 2017, which took place in Austin, TX. The conference was super informative, and deciding which sessions to join was really hard, as all of them were great. But what deserves special recognition is how well the organizers respected the attendees' diversity of Kubernetes experience. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, the tools and the community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.
In this article we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in the Go language using the client-go library.
Kubernetes can be liked for many reasons. As a user, you appreciate its feature richness, stability and performance. As a contributor, you find the Kubernetes open source community not only large, but approachable and responsive. But what really makes Kubernetes appealing to a third-party developer is its extensibility. The project provides many ways to add new features and extend existing ones without disrupting the main code base, and that's what makes Kubernetes a platform. Every Kubernetes cluster component can be extended in a certain way, whether it is the Kubelet or the API server. Today we are going to focus on the "Custom Controller" way; I'll refer to it as a Kubernetes Controller or simply a Controller from now on.
The most common definition of a controller is "code that brings the current state of the system to the desired state". But what exactly does that mean? Let's look at the Ingress controller example. Ingress is a Kubernetes resource that lets you define external access to the services in a cluster, typically over HTTP and usually with load balancing support. But the Kubernetes core code has no Ingress implementation. The implementation is covered by third-party controllers that would:

- watch the Ingress resources through the Kubernetes API
- program a load balancer with the rules defined in the Ingress specification
- update the Ingress status with the address of the load balancer
The "desired" state of the Ingress is an IP address pointing to a functioning load balancer programmed with the rules defined by the user in the Ingress specification, and an external Ingress controller is responsible for bringing the Ingress resource to this state. The implementation of the controller for the same resource, as well as the way to deploy it, can vary. You can pick the nginx controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your Ingress controller outside of the Kubernetes cluster and program F5 as the load balancer. There are no strict rules; Kubernetes is flexible in that way.
There are several ways to get information about a Kubernetes cluster and its resources: the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. Client-go is the most popular library used by tools written in Go, and there are clients for many other languages out there (Java, Python, etc.). But if you want to write your very first controller, I encourage you to try client-go. Kubernetes is written in Go, and I find it easier to develop a plugin in the same language as the main project.
The best way to get familiar with the platform and the tools around it is to write something. Let's start simple and implement a controller that watches the cluster nodes and reports whenever the storage occupied by container images on a node changes.
The code source can be found here.
As a developer, I like to sneak a peek at the tools my peers use to make their lives easier. Here I'm going to share three of my favorite tools that will help us with our very first project.
In order to use client-go code, we have to pull it into our project as a dependency. Add it to vendor.conf and run trash, which will automatically pull all the dependencies defined in vendor.conf into the vendor folder of the project. Make sure the client-go version is compatible with the Kubernetes version of your cluster.
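As a sketch, a vendor.conf for this project might look like the following. The first line is the project's own package path (the repository path here is a placeholder), and each subsequent line is an import path followed by a tag or branch; the exact versions depend on your cluster's Kubernetes release (client-go v6.0.0 matches Kubernetes 1.9):

```
github.com/example/kubecon-demo

k8s.io/client-go v6.0.0
k8s.io/apimachinery release-1.9
k8s.io/api release-1.9
```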
Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, the application is containerized and deployed as a Kubernetes Pod. That gives you certain perks - you can choose how to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure a healthcheck for it, etc. When the application runs outside of the cluster, you have to manage it yourself. Let's make our tool flexible and support both ways of defining the client, based on a config flag. We will use the out-of-cluster mode while debugging the app, as this way you do not have to rebuild the image and redeploy it as a Kubernetes Pod on every change. Once the app is tested, we can build an image and deploy it in the cluster. In both modes, a config is built and passed to kubernetes.NewForConfig to generate the client.
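A minimal sketch of that client setup, using client-go's standard helpers (rest.InClusterConfig for the in-cluster case and clientcmd.BuildConfigFromFlags for the kubeconfig-based case); the function name getClient is mine:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// getClient returns a clientset, built either from the in-cluster
// service account (kubeconfig == "") or from a kubeconfig file path.
func getClient(kubeconfig string) (*kubernetes.Clientset, error) {
	var config *rest.Config
	var err error
	if kubeconfig == "" {
		// Running inside the cluster: use the service account
		// credentials mounted into the pod.
		config, err = rest.InClusterConfig()
	} else {
		// Running outside the cluster: read the kubeconfig file.
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
	}
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}
```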
For our tool, we need to monitor Nodes. It is a good idea to get familiar with the way to do CRUD operations using client-go before implementing the logic: listing objects, getting a single object by name, and updating it.
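A sketch of those basic operations against Nodes, using the clientset (client-go v6.x-era signatures; newer client-go versions also take a context.Context as the first argument, and the node name "minikube" is just a placeholder):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeCRUD(clientset *kubernetes.Clientset) error {
	// List all nodes in the cluster.
	nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}

	// Get a single node by name.
	node, err := clientset.CoreV1().Nodes().Get("minikube", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Update the node, e.g. after changing a label.
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["example/monitored"] = "true"
	_, err = clientset.CoreV1().Nodes().Update(node)
	return err
}
```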
All that is done using the clientset we created in the previous step. We need information about the images on the node, which can be retrieved by accessing the corresponding field of the node's status.
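The field in question is node.Status.Images, a list of the images cached on the node, each carrying a SizeBytes field. As a self-contained sketch with minimal stand-in types (in the real project these come from k8s.io/api/core/v1), summing the image sizes looks like this:

```go
package main

import "fmt"

// Minimal stand-ins for the client-go types v1.Node / v1.ContainerImage;
// the real definitions live in k8s.io/api/core/v1.
type ContainerImage struct {
	Names     []string
	SizeBytes int64
}

type NodeStatus struct {
	Images []ContainerImage
}

type Node struct {
	Status NodeStatus
}

// totalImageStorage sums the size of all images cached on the node.
func totalImageStorage(node Node) int64 {
	var total int64
	for _, image := range node.Status.Images {
		total += image.SizeBytes
	}
	return total
}

func main() {
	node := Node{Status: NodeStatus{Images: []ContainerImage{
		{Names: []string{"nginx:1.13"}, SizeBytes: 108000000},
		{Names: []string{"busybox:latest"}, SizeBytes: 1200000},
	}}}
	fmt.Println(totalImageStorage(node)) // prints 109200000
}
```

Comparing this total between events is all the "logic" our controller needs.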
Now we know how to fetch the nodes from the Kubernetes APIs and get image information from them. How do we monitor changes to the images' size? The simplest way would be to periodically poll the nodes, calculate the current image storage usage and compare it with the result from the previous poll. The downside is that we execute a list call to fetch all the nodes whether or not anything changed, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node changes, and only then run our logic. That's where the client-go Informer comes to the rescue.
In this example, we create the Informer for the Node object by passing it a watchList instruction on how to monitor Nodes, the object type api.Node, and a resync period of 30 seconds, which makes the informer periodically re-list the nodes even when there were no changes - a nice fallback in case an update event gets dropped for some reason. As the last argument, we pass two callback functions - handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic that has to be triggered on a node's changes - finding out whether the storage occupied by images on the node has changed. NewInformer gives back two objects - a controller and a store. Once the controller is started, the watch on node additions and updates starts, and the callback functions get called. The store is an in-memory cache which is kept up to date by the informer, and you can fetch node objects from the cache instead of calling the Kubernetes APIs directly.

As we have a single controller in our project, using a regular Informer is fine. But if your future project ends up having several controllers for the same object, using a SharedInformer is recommended: instead of creating multiple regular informers - one per controller - you register one shared informer, let each controller register its own set of callbacks, and get back a shared cache in return, which reduces the memory footprint.
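That informer setup can be sketched as follows, assuming client-go's cache package (NewListWatchFromClient and NewInformer, which returns the store first and the controller second); the handler bodies are left as placeholders:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func watchNodes(clientset *kubernetes.Clientset, stop <-chan struct{}) {
	// Instruction on how to list and watch Nodes across all namespaces.
	watchList := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "nodes",
		v1.NamespaceAll, fields.Everything())

	// Resync every 30 seconds even if no update events arrive.
	store, controller := cache.NewInformer(watchList, &v1.Node{}, 30*time.Second,
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				node := obj.(*v1.Node)
				fmt.Printf("node %s added\n", node.Name) // handleNodeAdd logic goes here
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				node := newObj.(*v1.Node)
				fmt.Printf("node %s updated\n", node.Name) // handleNodeUpdate logic goes here
			},
		})
	_ = store // the in-memory cache; read nodes from here instead of the API
	controller.Run(stop)
}
```

For the shared variant, client-go's informers.NewSharedInformerFactory(clientset, 30*time.Second) returns a factory from which each controller can obtain the same node informer and register its own event handlers.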
Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in out-of-cluster mode. To change the message output, deploy a pod using an image which is not yet present on the node. Once the basic functionality is tested, it is time to try running it in cluster mode. For that, we have to create the image first: define a Dockerfile and build the image with docker build. The resulting image can be used to deploy the app as a Pod in the Kubernetes cluster with a regular Deployment definition. So we have a controller that monitors the image storage on the cluster nodes, and that can run either outside or inside the cluster.
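A minimal Dockerfile sketch for a statically linked Go binary (the binary name monitor-app and the base image are placeholders):

```dockerfile
FROM alpine:3.7
COPY monitor-app /usr/bin/monitor-app
ENTRYPOINT ["/usr/bin/monitor-app"]
```

For a binary to run on alpine it should be built without cgo, e.g. `CGO_ENABLED=0 GOOS=linux go build -o monitor-app .`, and then packaged with `docker build -t <your-repo>/monitor-app .`.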
If you have any comments or questions on the topic, please feel free to share them with me!

Alena Prokharchyk
twitter: @lemonjet
github: https://github.com/alena1108