In the early days of Pokemon Go, we were all amazed at how Niantic managed to grow its user base at planetary scale, seamlessly adding nodes to its container cluster to accommodate additional players and environments, all made possible by using Kubernetes as a container orchestrator. Kubernetes abstracts away parts of processing and low-level dependencies from the programmer when scaling and managing a container infrastructure. This makes it a very efficient platform on which to develop and maintain application services that span multiple containers. This whitepaper will explore how we can take the very useful design parameters and service orchestration features of K8s and marry them with serverless frameworks and Functions as a Service (FaaS). In particular, we will home in on the features and functionalities, operational performance, and efficiencies of three serverless frameworks that have been architected on a K8s structure: (i) Fission; (ii) OpenFaaS; and (iii) Kubeless.
Serverless architectures refer to application architectures that abstract away server management tasks from the developer and enhance development speed and efficiency by dynamically allocating and managing compute resources. Function as a Service (FaaS) is a runtime on top of which a serverless architecture can be built. FaaS frameworks operate as ephemeral containers that have common language runtimes pre-installed and that allow code to be executed within those runtimes.
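To make the FaaS model concrete, a handler is typically just a function that the platform invokes with an event and a context. The sketch below is framework-agnostic and purely illustrative; the event/context shapes and the `invoke` helper are assumptions, not any specific platform's API:

```python
# Illustrative sketch of a FaaS-style handler; not tied to any platform's API.

def handler(event, context):
    """Receive an event dict and return a response dict."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "Hello, %s!" % name}

def invoke(fn, event):
    # A toy stand-in for the platform runtime: it builds a context
    # and calls the handler, the way a FaaS gateway would per request.
    context = {"function_name": fn.__name__}
    return fn(event, context)

if __name__ == "__main__":
    print(invoke(handler, {"name": "OpenFaaS"}))
```

The platform, not the developer, decides when and where such a handler runs; that inversion is what the frameworks below build on top of Kubernetes.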
A FaaS framework should be able to run on a variety of infrastructures to be truly useful, including public cloud, hybrid cloud, and on-premise environments. Serverless frameworks built on top of FaaS runtimes in a real production environment should be able to rely on proven and tested orchestration and management capabilities to deploy containers at scale and for distributed workloads.
For orchestration and management, serverless FaaS frameworks can rely on Kubernetes for its proven ability to deploy, scale, and manage containers across distributed workloads.
The three serverless frameworks that we will walk through in this article have their individual strengths and weaknesses. The common thread between these FaaS frameworks is that they are able to (1) turn functions into services; and (2) manage the lifecycle of these services by leveraging the Kubernetes platform. The engineering behind these frameworks differs in the exact modalities that they employ to accomplish these common goals, which we will explore in the following sections, highlighting the key differences between them.
OpenFaaS is a serverless platform that allows functions to be managed with a Docker or Kubernetes runtime, since its basic primitive is a container in OCI format. OpenFaaS can be extended to leverage enterprise functionalities such as the Docker Universal Control Plane enterprise-grade cluster management solution with Docker Enterprise, or Tectonic for Kubernetes. OpenFaaS inherits existing container security features such as read-only filesystems, privilege drop, and content trust. It is able to manage functions with a Docker or K8s scheduler/orchestrator and has their associated rich ecosystem of commercial and community vendors at its disposal. Moreover, any executable can be packaged into a function in OpenFaaS due to its polyglot nature.
SpringBoot and Vertx are very popular frameworks for developing microservices, and their ease of use has been extended to OpenFaaS via OpenFaaS templates. These templates provide a seamless way to develop and deploy serverless functions on the OpenFaaS platform. The templates are available in the github repository here. Let's walk through how to deploy a SpringBoot template on the OpenFaaS platform.
We will need to have FaaS CLI installed and configured to work with our local or remote K8s or Docker. In this exercise, we will use a local Docker client and will extend it to a cloud-based GKE cluster in the follow-up.
For the latest version of the CLI type in:
$ curl -sL https://cli.openfaas.com | sudo sh
[or via brew install faas-cli on MacOS.]
Use the following command to verify templates are installed locally:
faas-cli new --list
Before we can create a serverless function we have to install these templates on our local machine.
TL;DR: faas-cli template pull https://github.com/tmobile/faas-java-templates.git
The --help flag can be invoked for all commands.
```
$ faas-cli --help
Manage your OpenFaaS functions from the command line

Usage:
  faas-cli [flags]
  faas-cli [command]

Available Commands:
  build    Builds OpenFaaS function containers
  deploy   Deploy OpenFaaS functions
  help     Help about any command
  push     Push OpenFaaS functions to remote registry (Docker Hub)
  remove   Remove deployed OpenFaaS functions
  version  Display the client's version information

Flags:
  -h, --help          help for faas-cli
  -f, --yaml string   Path to YAML file describing function(s)

Use "faas-cli [command] --help" for more information about a command.
```
Using our function of interest from the github repository of Vertx/SpringBoot templates, we can create a function (replace the text within curly brackets with your function name; we used springboot, but you can replace it with vertx for a Vertx template):
faas-cli new {name of function} --lang springboot
Using mvnw, the command is:

```
$ faas-cli new mvnw --lang springboot
Folder: mvnw created.
Function created in folder: mvnw
Stack file written: mvnw.yml
```
The contents of mvnw.yml can now be used with the CLI.
Note: If your cluster is remote or not running on port 8080, edit this in the YAML file before continuing. A handler.java file was generated for our function. You can edit the pom.xml file, and any dependencies will be installed during the "build" step.
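For reference, a generated stack file typically looks like the sketch below. Treat this as an assumption rather than the exact generated output; the gateway address and image name will vary by setup:

```yaml
provider:
  name: faas
  gateway: http://localhost:8080   # edit if your gateway is remote or on another port

functions:
  mvnw:
    lang: springboot
    handler: ./mvnw
    image: mvnw                    # set to <dockerhub-user>/mvnw before pushing
```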
Now that we've created the function logic, we can build the function using the faas-cli build command. We will build the function into a Docker image using the local Docker client.
```
$ faas-cli build -f mvnw.yml
Building: mvnw.
Clearing temporary build folder: ./build/mvnw/
Preparing ./mvnw/ ./build/mvnw/function
Building: mvnw with node template. Please wait..
docker build -t mvnw .
Sending build context to Docker daemon  8.704kB
Step 1/19 : FROM node:6.11.2-alpine
 ---> 16566b7ed19e
...
Step 19/19 : CMD fwatchdog
 ---> Running in 53d04c1631aa
 ---> f5e1266b0d32
Removing intermediate container 53d04c1631aa
Successfully built f5e1266b0d32
Successfully tagged mvnw:latest
Image: mvnw built.
```
In order to deploy our function, we will edit the mvnw.yml file and set the "image" line to the applicable username on Docker Hub, such as hishamhasan/mvnw, and then build the function again.
```
$ faas-cli push -f mvnw.yml
Pushing: mvnw to remote repository.
The push refers to a repository [docker.io/hishamhasan/mvnw]
```
Once this is done, the image will be pushed up to the Docker Hub or a remote Docker registry and we can deploy and run the function.
```
$ faas-cli deploy -f mvnw.yml
Deploying: mvnw.
No existing service to remove
Deployed.
200 OK
URL: http://localhost:8080/function/mvnw
```
We can now invoke the function from the CLI:

```
$ faas-cli invoke -f mvnw.yml mvnw
Reading from STDIN - hit (Control + D) to stop.
This is my message
{"status":"done"}
```

We can also pipe a command into the function, such as:

```
$ date | faas-cli invoke -f mvnw.yml mvnw
{"status":"done"}
```
We are not restricted to any on-prem or cloud infrastructure in working with OpenFaaS. Now that we have deployed our template in a local Docker cluster, we can leverage the versatility of OpenFaaS by setting it up on GKE in the GCP.
gcloud components install kubectl
```
k8s_version=$(gcloud container get-server-config --format=json | jq -r '.validNodeVersions[0]')
gcloud container clusters create demo \
  --cluster-version=${k8s_version} \
  --zone=us-west1-a \
  --additional-zones=us-west1-b,us-west1-c \
  --num-nodes=1 \
  --machine-type=n1-standard-2 \
  --scopes=default,storage-rw
```
Increase the size of the default node pool to the desired number of nodes per zone (in this example we scale up by a factor of 3, from one node per zone to three, i.e. nine nodes across the three zones):

gcloud container clusters resize demo --size=3
You can carry out a host of cluster management functions by invoking the applicable SDK command as described on this page; for example, to delete the cluster:
gcloud container clusters delete demo -z=us-west1-a
Set up credentials for kubectl:
gcloud container clusters get-credentials demo -z=us-west1-a
Create a cluster admin user:
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \ --clusterrole=cluster-admin \ --user="$(gcloud config get-value core/account)"
Grant admin privileges to kubernetes-dashboard (ensure this is done in non-production environment):
```
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard
```
You can access the kubernetes-dashboard using the kubectl reverse proxy. Start the proxy on port 8080 and navigate to http://localhost:8080/ui in the browser:

```
kubectl proxy --port=8080
```

Alternatively, run the proxy in the background on another port (e.g. kubectl proxy --port=9099 &) and use http://localhost:9099/ui.
A Kubernetes cluster consists of master and node resources: masters coordinate the cluster, nodes run the applications, and the two communicate via the Kubernetes API. We built our containerized application with the OpenFaaS CLI and wrote the .yml file to build and deploy the function. By deploying the function across nodes in the Kubernetes cluster, we allow GKE to distribute and schedule our node resources. Our nodes have been provisioned with tools to handle container operations, which can be driven via the kubectl CLI.
Clone the openfaas-gke repository:
git clone https://github.com/stefanprodan/openfaas-gke.git
cd openfaas-gke
Create the openfaas and openfaas-fn namespaces to deploy OpenFaaS services in a multi-tenant setup:
kubectl apply -f ./namespaces.yaml
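The namespaces.yaml manifest is essentially two Namespace objects; the sketch below shows the shape (the repository's file may add labels or annotations):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openfaas        # OpenFaaS core services live here
---
apiVersion: v1
kind: Namespace
metadata:
  name: openfaas-fn     # deployed functions live here
```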
To deploy OpenFaaS services in the openfaas namespace:
kubectl apply -f ./openfaas
This creates K8s pods, deployments, and services for the OpenFaaS gateway, FaaS-netesd (the K8s controller), Prometheus, Alertmanager, NATS, and the queue worker.
We need to secure our gateway before exposing OpenFaaS on the Internet by setting up authentication. We can create a generic basic-auth secret with a set of credentials:
kubectl -n openfaas create secret generic basic-auth \ --from-literal=user=admin \ --from-literal=password=admin
We can then deploy Caddy for our OpenFaaS gateway which functions both as a reverse proxy and a robust load balancer and supports WebSocket connections:
kubectl apply -f ./caddy
We will then use the external IP exposed by the K8s Service object to access the OpenFaaS gateway UI with our credentials at http://<EXTERNAL-IP>. We can get the external IP by running kubectl get svc.
```
get_gateway_ip() {
  kubectl -n openfaas describe service caddy-lb | grep Ingress | awk '{ print $NF }'
}

until [[ "$(get_gateway_ip)" ]]; do
  sleep 1
  echo -n "."
done
echo "."

gateway_ip=$(get_gateway_ip)
echo "OpenFaaS Gateway IP: ${gateway_ip}"
```
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
If you haven't carried out the previous exercise, install the OpenFaaS CLI by invoking:

curl -sL cli.openfaas.com | sh
Then login with the CLI, credentials and the external IP exposed by the K8s service:
faas-cli login -u admin -p admin --gateway http://<EXTERNAL-IP>
Note: (a) You can expose the OpenFaaS gateway using a Google Cloud L7 HTTPS load balancer by creating an Ingress resource. Detailed guidelines for creating your load balancer can be found here. (b) You can create a text file with your password and use the file along with the --password-stdin flag in order to avoid having your password in the bash history.
$ faas-cli deploy -f mvnw.yml
The deploy command looks for a mvnw.yml file in the current directory and deploys all of the functions in the openfaas-fn namespace.
Note: You can set the minimum number of running pods with the com.openfaas.scale.min label and the maximum number of replicas for the autoscaler with the com.openfaas.scale.max label. The default setting for OpenFaaS is one pod running per function, scaling up to 20 pods under load.
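These scaling labels are set per function in the stack file. The sketch below shows where they go; the values and image name are illustrative assumptions:

```yaml
functions:
  mvnw:
    lang: springboot
    handler: ./mvnw
    image: hishamhasan/mvnw
    labels:
      com.openfaas.scale.min: "2"    # keep at least two replicas warm
      com.openfaas.scale.max: "10"   # cap the autoscaler at ten replicas
```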
faas-cli invoke mvnw --gateway=http://<GATEWAY-IP>

faas-cli logout --gateway http://<EXTERNAL-IP>
Fission is a serverless framework that further abstracts away container images and allows HTTP services to be created on K8s from functions alone. The container images in Fission contain the language runtime, a set of commonly used dependencies, and a dynamic loader for functions. These images can be customized, for example to package binary dependencies. Fission minimizes cold-start overhead by maintaining a running pool of containers. As new requests come in from client applications or business services, it copies the function into a container, loads it dynamically, and routes the request to that instance. It is therefore able to keep cold-start overhead to the order of 100 ms for NodeJS and Python functions.
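The warm-pool idea behind this can be sketched in a few lines. This is a toy model, not Fission's implementation: the "containers" here are plain dicts, and the class and function names are invented for illustration:

```python
# Toy sketch of the warm-pool pattern: keep generic workers running,
# then "specialize" one with the function code at request time instead
# of paying the cost of booting a fresh container.
from collections import deque

class WarmPool:
    def __init__(self, size):
        # Pre-create generic workers before any request arrives.
        self.idle = deque({"runtime": "python", "fn": None} for _ in range(size))

    def specialize(self, fn):
        # Load the function into an already-running worker; this cheap
        # step replaces the expensive container cold start.
        worker = self.idle.popleft()
        worker["fn"] = fn
        return worker

def route(worker, request):
    # The router forwards the request to the specialized worker.
    return worker["fn"](request)

pool = WarmPool(size=3)
worker = pool.specialize(lambda req: "Hello, %s!" % req)
print(route(worker, "world"))
```

In the real system the pool manager also recycles idle workers and scales the pool, but the request path is conceptually this short.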
By operating at the source level, Fission saves the user from having to deal with container image building, pushing images to registries, managing registry credentials, image versioning and other administrative tasks.
As depicted in the schematic above, Fission is architected as a set of cooperating microservices, chiefly a controller, a router, and a pool manager.
We can use the K8s cluster we created on GCP in the previous exercise to deploy an HTTP service on Fission. Let's walk through the process.
Install the Helm CLI. Helm is a Kubernetes package manager. Let's initialize Helm:
$ helm init
Install Fission in GKE namespace
$ helm install --namespace fission https://github.com/fission/fission/releases/download/0.7.0/fission-all-0.7.0.tgz
Install Fission CLI
OSX
$ curl -Lo fission https://github.com/fission/fission/releases/download/0.7.0/fission-cli-osx && chmod +x fission && sudo mv fission /usr/local/bin/
Windows Download the Windows executable here.
Create your HTTP service We will create a simple HTTP service to print Hello World.
```
$ cat > hello.py
def main(context):
    print "Hello, world!"
```
Deploy HTTP service on Fission
```
$ fission function create --name hello --env python --code hello.py --route /hello
$ curl http://<fission router>/hello
Hello, world!
```
Kubeless is a Kubernetes-native serverless framework that enables functions to be deployed on a K8s cluster while allowing users to leverage Kubernetes resources for auto-scaling, API routing, monitoring, and troubleshooting. Kubeless uses Kubernetes Custom Resource Definitions to create functions as custom Kubernetes resources. A custom resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind (for example, K8s pod objects) and that represents a customization of a particular K8s installation. Custom resources are quite useful because they can be provisioned and then deleted in a running cluster through dynamic registration, and cluster admins can update them independently of the cluster itself. Kubeless leverages these capabilities and runs an in-cluster controller that watches these custom resources and launches runtimes on demand.
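To illustrate what "functions as custom resources" means, a deployed function is stored in the API server as an object roughly like the sketch below. The apiVersion and exact field names vary across Kubeless versions, so treat every detail here as an assumption:

```yaml
apiVersion: kubeless.io/v1beta1   # version-dependent; older releases differ
kind: Function
metadata:
  name: hello
  namespace: default
spec:
  runtime: python2.7              # language runtime the controller launches
  handler: hello.main             # <file>.<function> entry point
  function: |                     # function source stored inline in the resource
    def main(context):
        return "Hello, world!"
```

The in-cluster controller watches for objects of this kind and turns each one into a Kubernetes deployment and service.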
With the K8s cluster running, we can install the Kubeless CLI.

OSX
```
$ curl -L https://github.com/kubeless/kubeless/releases/download/0.0.20/kubeless_darwin-amd64.zip > kubeless.zip
$ unzip kubeless.zip
$ sudo cp bundles/kubeless_darwin-amd64/kubeless /usr/local/bin/
```
Windows
Download the Windows executable here.
We will deploy Kubeless in our K8s cluster using a manifest found in this link. The manifest creates a kubeless Namespace, a function ThirdPartyResource, a kubeless Controller, and Kafka and Zookeeper StatefulSets. One of the main advantages of Kubeless is its Kubernetes-native nature: it can be set up in both non-RBAC and RBAC environments. The screenshot below shows how to deploy Kubeless in a non-RBAC environment using kubectl commands.
Create function
We can create a function that allows us to create a server, and pull the method, URL, headers and body out of the request.
```
const http = require('http');

http.createServer((request, response) => {
  const { headers, method, url } = request;
  let body = [];
  request.on('error', (err) => {
    console.error(err);
  }).on('data', (chunk) => {
    body.push(chunk);
  }).on('end', () => {
    body = Buffer.concat(body).toString();
    // At this point, we have the headers, method, url and body, and can now
    // do whatever we need to in order to respond to this request.
  });
}).listen(8080); // Activates this server, listening on port 8080.
```
Run functions in Kubeless environment
We can register the function with Kubeless by providing the following information: (1) the function name; (2) the trigger (here, HTTP); (3) the language runtime; (4) the handler, in file.method form; and (5) the file containing the function code.

With this information, we invoke the following command to register and deploy the function in Kubeless:
kubeless function deploy serverequest --trigger-http --runtime nodejs6 --handler serverequest.createServer --from-file /tmp/serverequest.js
Each of the serverless platforms we evaluated has its unique value proposition. With OpenFaaS, any process or container can be packaged as a serverless function for either Linux or Windows. For enterprises, the architecture used by OpenFaaS provides a seamless way to plug into a scheduler cluster and the CI/CD workflow of their existing microservices, as OpenFaaS is built around Docker and all functions are packaged into Docker images. OpenFaaS also gives enterprises a seamless way to administer and execute functions via the external API and the Gateway, and to manage the lifecycle of a function, including deployment, scaling, and secrets management, via a Provider.
Fission has an event-driven architecture which makes it ideal for short-lived, stateless applications, including REST API or webhook implementations and DevOps automation. A good use case for using Fission might be a backend for chatbot development as Fission achieves good cold-start performance and delivers fast response times when needed by keeping a running pool of containers with their runtimes.
Finally, the Kubeless architecture leverages native Kubernetes concepts to deploy and manage functions, such as Custom Resource Definitions to define a function and custom controller to manage a function, deploy it as a Kubernetes deployment and expose it via Kubernetes service. This close alignment with Kubernetes native functionalities will appeal to existing Kubernetes users, lowering the required learning curve and seamlessly plugging into an existing Kubernetes architecture.