Evaluation of Serverless Frameworks for Kubernetes (K8s)


In the early days of Pokémon GO, we were all amazed at how Niantic managed to scale its user base at planetary scale, seamlessly adding nodes to its container cluster to accommodate additional players and environments, all made possible by using Kubernetes as a container orchestrator. Kubernetes abstracts away low-level scheduling and infrastructure dependencies from the programmer when scaling and managing a container infrastructure, which makes it a very efficient platform for developing and maintaining application services that span multiple containers. This whitepaper will explore how we can take the very useful design parameters and service orchestration features of K8s and marry them with serverless frameworks and Functions as a Service (FaaS). In particular, we will home in on the features and functionality, operational performance, and efficiency of three serverless frameworks that have been architected on top of K8s: (i) Fission; (ii) OpenFaaS; and (iii) Kubeless.

A. Why Is Kubernetes an Excellent Orchestration System for Serverless Frameworks?

Serverless architecture refers to an application architecture that abstracts server management tasks away from the developer and enhances development speed and efficiency by dynamically allocating and managing compute resources. Function as a Service (FaaS) is a runtime on top of which a serverless architecture can be built. FaaS frameworks operate as ephemeral containers that have common language runtimes pre-installed and that allow code to be executed within those runtimes.

To be truly useful, a FaaS framework should be able to run on a variety of infrastructures, including public cloud, hybrid cloud, and on-premise environments. In a real production environment, serverless frameworks built on top of FaaS runtimes should be able to rely on proven and tested orchestration and management capabilities to deploy containers at scale and for distributed workloads.

For orchestration and management, serverless FaaS frameworks are able to rely on Kubernetes due to its ability to:

  • Orchestrate containers across clusters of hosts.
  • Make maximum use of the hardware resources needed for enterprise applications.
  • Manage and automate application deployments and provide declarative updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications on the fly and provide the resources to support them.
  • Declaratively manage services.
  • Check the health of apps and self-heal them with auto-placement, auto-restart, auto-replication, and autoscaling.

A serverless system can consist of a function triggered via a client request, or functions executed as part of business services. Both of these processes can be orchestrated using a container cluster manager such as Kubernetes. (Source: dzone.com)
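As a minimal illustration of this declarative, self-healing model (a sketch assuming a working cluster and a configured kubectl; the deployment name and image are placeholders), the following commands create a deployment, scale it on the fly, and attach an autoscaler:

  # Create a deployment declaratively from a container image
  kubectl create deployment hello --image=nginx

  # Scale it on the fly
  kubectl scale deployment hello --replicas=3

  # Let Kubernetes keep between 2 and 10 replicas based on CPU usage
  kubectl autoscale deployment hello --min=2 --max=10 --cpu-percent=80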

The three serverless frameworks that we will walk through in this article have individual strengths and weaknesses. The common thread between these FaaS frameworks is that they are able to (1) turn functions into services; and (2) manage the lifecycle of these services by leveraging the Kubernetes platform. The frameworks differ in the exact engineering they employ to accomplish these common goals, which we will explore in the next sections. In particular, these are some of the differences that we will highlight:

  1. Does the framework operate at the source level, at the level of Docker images, or somewhere in between, e.g. buildpacks?
  2. What is the cold-start latency, i.e. the time lag during the execution of the function while the container is initialized with the common language runtime?
  3. How does it allocate memory and resources to services?
  4. How does it access and deploy the orchestration and container management functionality of Kubernetes?

B. OpenFaaS and Deploying a Spring Boot Template

OpenFaaS is a serverless platform that allows functions to be managed with a Docker or Kubernetes runtime, since its basic primitive is a container in OCI format. OpenFaaS can be extended to leverage enterprise functionality such as Docker Universal Control Plane (the enterprise-grade cluster management solution in Docker Enterprise) or Tectonic for Kubernetes. OpenFaaS inherits existing container security features such as read-only filesystems, privilege dropping, and content trust. It is able to manage functions with a Docker or K8s scheduler/orchestrator and has their associated rich ecosystems of commercial and community vendors at its disposal. As well, any executable can be packaged into a function in OpenFaaS due to its polyglot nature.

Spring Boot and Vert.x are very popular frameworks for developing microservices, and their ease of use has been extended to OpenFaaS via OpenFaaS templates. These templates provide a seamless way to develop and deploy serverless functions on the OpenFaaS platform; they are available in the tmobile/faas-java-templates GitHub repository. Let's walk through how to deploy a Spring Boot template on the OpenFaaS platform.

Installing OpenFaaS locally

Downloading and installing templates on the local machine

We will need to have the FaaS CLI installed and configured to work with our local or remote K8s or Docker environment. In this exercise, we will use a local Docker client, and we will extend the setup to a cloud-based GKE cluster in the follow-up section.

For the latest version of the CLI, type:

  $ curl -sL https://cli.openfaas.com | sudo sh

(or via brew install faas-cli on macOS).

Before we can create a serverless function, we have to install the templates on our local machine:

  faas-cli template pull https://github.com/tmobile/faas-java-templates.git

Use the following command to verify the templates are installed locally:

  faas-cli new --list

Pull up the Help Menu

The --help flag can be invoked for all commands:

```
$ faas-cli --help

Manage your OpenFaaS functions from the command line

Usage:
  faas-cli [flags]
  faas-cli [command]

Available Commands:
  build     Builds OpenFaaS function containers
  deploy    Deploy OpenFaaS functions
  help      Help about any command
  push      Push OpenFaaS functions to remote registry (Docker Hub)
  remove    Remove deployed OpenFaaS functions
  version   Display the clients version information

Flags:
  -h, --help          help for faas-cli
  -f, --yaml string   Path to YAML file describing function(s)

Use "faas-cli [command] --help" for more information about a command.
```

Creating functions with installed templates

Using our function of interest from the GitHub repository of Vert.x/Spring Boot templates, we can create a function. Replace the text within curly brackets with the name of your function; we used springboot for the language, but you can replace it with vertx for a Vert.x template:

faas-cli new {name of function} --lang springboot

For a function named mvnw, the command (choosing vertx or springboot as the language) and its output are:

  faas-cli new mvnw --lang vertx|springboot
  Folder: mvnw created.
  Function created in folder: mvnw
  Stack file written: mvnw.yml

The contents of mvnw.yml can now be used with the CLI.

Note: If your cluster is remote or not running on port 8080, edit this in the YAML file before continuing. A handler.java file was generated for our function. You can edit the pom.xml file, and any dependencies will be installed during the "build" step.
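For reference, the generated stack file looks roughly like the following; exact fields vary by CLI version, and the image name is the one you will later prefix with your registry username (a sketch, not verbatim output):

  $ cat mvnw.yml
  provider:
    name: faas
    gateway: http://localhost:8080
  functions:
    mvnw:
      lang: springboot
      handler: ./mvnw
      image: mvnw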

Build function

Now that we've created the function logic, we can build the function using the faas-cli build command. We will build the function into a Docker image using the local Docker client.

```
$ faas-cli build -f mvnw.yml
Building: mvnw.
Clearing temporary build folder: ./build/mvnw/
Preparing ./mvnw/ ./build/mvnw/function
Building: mvnw with node template. Please wait..
docker build -t mvnw .
Sending build context to Docker daemon  8.704kB
Step 1/19 : FROM node:6.11.2-alpine
 ---> 16566b7ed19e

Step 19/19 : CMD fwatchdog
 ---> Running in 53d04c1631aa
 ---> f5e1266b0d32
Removing intermediate container 53d04c1631aa
Successfully built f5e1266b0d32
Successfully tagged mvnw:latest
Image: mvnw built.
```

Push your Function (optional as we are working on a local install)

In order to deploy our function to a remote registry, we will edit the mvnw.yml file and prefix the "image" line with the applicable Docker Hub username, such as hishamhasan/mvnw. We will then build the function again.

  $ faas-cli push -f mvnw.yml
  Pushing: mvnw to remote repository.
  The push refers to a repository [docker.io/hishamhasan/mvnw]

Once this is done, the image will be pushed up to the Docker Hub or a remote Docker registry and we can deploy and run the function.

Deploy Function

  $ faas-cli deploy -f mvnw.yml
  Deploying: mvnw.
  No existing service to remove
  Deployed.
  200 OK
  URL: http://localhost:8080/function/mvnw

Invoke Function

```
$ faas-cli invoke -f mvnw.yml mvnw
Reading from STDIN - hit (Control + D) to stop.
This is my message

{"status":"done"}
```

We can also pipe a command into the function such as:

```
$ date | faas-cli invoke -f mvnw.yml mvnw
{"status":"done"}
```
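Because every deployed function is exposed over HTTP behind the gateway, we can also call it directly with curl (a sketch, using the local gateway URL printed by the deploy step):

```
$ curl -d "This is my message" http://localhost:8080/function/mvnw
```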

Installing OpenFaaS on the Google Cloud Platform

We are not restricted to any particular on-prem or cloud infrastructure when working with OpenFaaS. Now that we have deployed our template in a local Docker environment, we can leverage the versatility of OpenFaaS by setting it up on GKE in GCP.

  1. Create a GCP project called openfaas.
  2. Download and install the Google Cloud SDK. After installing the SDK, run gcloud init and set the default project to openfaas.
  3. Install kubectl using gcloud: gcloud components install kubectl
  4. Navigate to API Manager > Credentials > Create Credentials > Service account key.
  5. Select JSON as the key type, rename the downloaded file (e.g. to openfaas.json), and place it in the project directory.
  6. Add your public SSH key under Compute Engine > Metadata > SSH Keys, creating a metadata entry named sshKeys with the public key as the value.
  7. Create a three-node Kubernetes cluster with each node in a different zone. Read up on cluster federation to find out how to select the number of clusters and the number of nodes in each cluster, which may change frequently according to load or growth.
    k8s_version=$(gcloud container get-server-config --format=json | jq -r '.validNodeVersions[0]')

    gcloud container clusters create demo \
       --cluster-version=${k8s_version} \
       --zone=us-west1-a \
       --additional-zones=us-west1-b,us-west1-c \
       --num-nodes=1 \
       --machine-type=n1-standard-2 \
       --scopes=default,storage-rw

Increase the size of the default node pool to the desired number of nodes per zone (in this example we scale up by a factor of three, to nine nodes across the three zones):

  gcloud container clusters resize demo --size=3
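Once the resize completes, you can verify the node count (a quick sanity check; kubectl credentials are configured in the administrative setup below):

  kubectl get nodes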

You can carry out a host of cluster management functions by invoking the applicable SDK command, for example deleting the cluster:

  gcloud container clusters delete demo -z=us-west1-a

Complete the administrative setup for the cluster.

Set up credentials for kubectl:

  gcloud container clusters get-credentials demo -z=us-west1-a

Create a cluster admin user:

  kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
     --clusterrole=cluster-admin \
     --user="$(gcloud config get-value core/account)"

Grant admin privileges to kubernetes-dashboard (ensure this is done only in a non-production environment). One common way is to bind the dashboard's service account to the cluster-admin role:

```
kubectl create clusterrolebinding "kubernetes-dashboard" \
   --clusterrole=cluster-admin \
   --serviceaccount=kube-system:kubernetes-dashboard
```

You can access the kubernetes-dashboard through the kubectl reverse proxy, e.g. by invoking kubectl proxy --port=8080 and navigating to http://localhost:8080/ui in the browser. To use another port, such as 9099 (then browse to http://localhost:9099/ui), run:

```
kubectl proxy --port=9099 &
```

A Kubernetes cluster consists of master and node resources: master resources coordinate the cluster, nodes run the applications, and the two communicate via the Kubernetes API. We built our containerized application with the OpenFaaS CLI and wrote the .yml file to build and deploy the function. By deploying the function across nodes in the Kubernetes cluster, we allow GKE to distribute and schedule our node resources. Our nodes have been provisioned with tools to handle container operations, which can be accessed via the kubectl CLI.


Deploy OpenFaaS with basic authentication.

Clone the openfaas-gke repository:

  git clone https://github.com/stefanprodan/openfaas-gke.git

  cd openfaas-gke

Create the openfaas and openfaas-fn namespaces to deploy OpenFaaS services in a multi-tenant setup:

  kubectl apply -f ./namespaces.yaml

To deploy OpenFaaS services in the openfaas namespace:

  kubectl apply -f ./openfaas

This will create K8s pods, deployments, and services for the OpenFaaS gateway, FaaS-netesd (the K8s controller), Prometheus, Alertmanager, NATS, and the queue worker.
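You can watch these components come up and confirm that all pods reach the Running state:

  kubectl -n openfaas get pods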

We need to secure our gateway before exposing OpenFaaS on the Internet by setting up authentication. We can create a generic basic-auth secret with a set of credentials:

  kubectl -n openfaas create secret generic basic-auth \
     --from-literal=user=admin \
     --from-literal=password=admin

We can then deploy Caddy in front of the OpenFaaS gateway; it functions as both a reverse proxy and a robust load balancer, and supports WebSocket connections:

  kubectl apply -f ./caddy

We will then use the external IP exposed by the K8s Service object to access the OpenFaaS gateway UI with our credentials at http://<EXTERNAL-IP>. We can get the external IP by running kubectl get svc, or with the small script below.

  get_gateway_ip() {
     kubectl -n openfaas describe service caddy-lb | grep Ingress | awk '{ print $NF }'
  }

  until [[ "$(get_gateway_ip)" ]]; do
     sleep 1
     echo -n "."
  done
  echo "."

  gateway_ip=$(get_gateway_ip)

  echo "OpenFaaS Gateway IP: ${gateway_ip}"

Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.

If you haven't carried out the previous exercise, install the OpenFaaS CLI by invoking:

  curl -sL https://cli.openfaas.com | sudo sh

Then log in with the CLI, using the credentials and the external IP exposed by the K8s service:

  faas-cli login -u admin -p admin --gateway http://<EXTERNAL-IP>

Note: (a) You can expose the OpenFaaS gateway using a Google Cloud L7 HTTPS load balancer by creating an Ingress resource; see the GKE documentation for detailed guidelines on creating the load balancer. (b) You can create a text file with your password and use the file along with the --password-stdin flag in order to avoid having your password in the bash history.
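For example (a sketch; the password file name is a placeholder):

  cat faas_pass.txt | faas-cli login -u admin --password-stdin --gateway http://<EXTERNAL-IP>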

You can now deploy your serverless function using the image published in the previous exercise.

  $ faas-cli deploy -f mvnw.yml

The deploy command looks for an mvnw.yml file in the current directory and deploys all of its functions into the openfaas-fn namespace.

Note: You can set the minimum number of running pods with the com.openfaas.scale.min label and the maximum number of replicas for the autoscaler with com.openfaas.scale.max. The default setting for OpenFaaS is one pod running per function, scaling up to 20 pods under load.
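These labels live under the function's entry in the stack file; a minimal sketch with hypothetical values:

  functions:
    mvnw:
      labels:
        com.openfaas.scale.min: "2"
        com.openfaas.scale.max: "10"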

Invoke your serverless function.

  faas-cli invoke mvnw --gateway http://<GATEWAY-IP>

You can log out at any time with:

faas-cli logout --gateway http://<EXTERNAL-IP>

C. Fission and Deploying a Simple HTTP Request

Fission is a serverless framework that abstracts away container images even further and allows HTTP services to be created on K8s just from functions. The container images in Fission contain the language runtime, a set of commonly used dependencies, and a dynamic loader for functions. These images can be customized, for example to package binary dependencies. Fission optimizes cold-start overhead by maintaining a running pool of containers. As new requests come in from client applications or business services, it copies the function into a container, loads it dynamically, and routes the request to that instance. It is therefore able to keep cold-start overhead low, on the order of 100 ms for NodeJS and Python functions.

By operating at the source level, Fission saves the user from having to deal with container image building, pushing images to registries, managing registry credentials, image versioning and other administrative tasks.

(Schematic source: https://kubernetes.io/blog/2017/01/fission-serverless-functions-as-service-for-kubernetes)

As depicted in the schematic above, Fission is architected as a set of microservices with the main components described below:

  1. A controller that keeps track of functions, HTTP routes, event triggers, and environment images;
  2. A pool manager that manages pools of idle environment containers, loads functions into those containers, and kills function instances periodically to manage container overhead;
  3. A router that receives HTTP requests and routes them either to fresh function instances from the pool manager or to already-running instances.

We can use the K8s cluster we created on GCP in the previous exercise to deploy an HTTP service on Fission. Let's walk through the process.

  1. Install the Helm CLI. Helm is a Kubernetes package manager. Initialize Helm:

     $ helm init
  2. Install Fission in its own namespace:

      $ helm install --namespace fission https://github.com/fission/fission/releases/download/0.7.0/fission-all-0.7.0.tgz
  3. Install Fission CLI

    OSX

    $ curl -Lo fission https://github.com/fission/fission/releases/download/0.7.0/fission-cli-osx && chmod +x fission && sudo mv fission /usr/local/bin/

    Windows: Download the Windows executable from the Fission releases page.

  4. Create your HTTP service. We will create a simple HTTP service that prints Hello World.

    $ cat > hello.py
    def main(context):
        print "Hello, world!"
  5. Deploy HTTP service on Fission

     $ fission function create --name hello --env python --code hello.py --route /hello
    
     $ curl http://<fission router>/hello
    
     Hello, world!
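Note that the fission function create step above assumes a python environment already exists in the cluster. If it does not, create one first (a sketch based on the upstream Fission announcement; the image name may vary by version):

     $ fission env create --name python --image fission/python-env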

D. Kubeless and Deploying a Node.js Function

Kubeless is a Kubernetes-native serverless framework that enables functions to be deployed on a K8s cluster while allowing users to leverage Kubernetes resources to provide auto-scaling, API routing, monitoring, and troubleshooting. Kubeless uses Kubernetes Custom Resource Definitions to create functions as custom Kubernetes resources. A custom resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind, for example K8s pod objects, and that represents a customization of a particular K8s installation. Custom resources are quite useful because they can be provisioned and then deleted in a running cluster through dynamic registration, and cluster admins can update custom resources independently of the cluster itself. Kubeless leverages these capabilities and runs an in-cluster controller that keeps track of these custom resources and launches runtimes on demand.
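Because functions become first-class API objects under this model, once Kubeless is installed (see the walkthrough below) they can be inspected with ordinary kubectl commands, for example:

  kubectl get functions
  kubectl describe function <function name>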

We can use the K8s cluster we created on GCP in the previous exercise to deploy a function on Kubeless. Let's walk through the process.

  1. Access Kubernetes Dashboard

With the K8s cluster running, we can make the dashboard available on port 8080 with kubectl:

  kubectl proxy --port=8080

The dashboard can be accessed by navigating to http://localhost:8080/ui in your browser.

  2. Install Kubeless CLI

OSX

    $ curl -L https://github.com/kubeless/kubeless/releases/download/0.0.20/kubeless_darwin-amd64.zip > kubeless.zip
    $ unzip kubeless.zip
    $ sudo cp bundles/kubeless_darwin-amd64/kubeless /usr/local/bin/

Windows

Download the Windows executable from the Kubeless releases page.

  3. Deploy Kubeless in the K8s cluster

We will deploy Kubeless in our K8s cluster using a manifest from the Kubeless releases page. The manifest creates a kubeless namespace, a function ThirdPartyResource, a kubeless controller, and Kafka and Zookeeper StatefulSets. One of the main advantages of Kubeless is its Kubernetes-native nature, and it can be set up in both non-RBAC and RBAC-specific environments. Deploying Kubeless on a non-RBAC environment takes a single kubectl command, as sketched below.
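A minimal sketch of a non-RBAC deployment; the manifest file name for the 0.0.20 release is an assumption, so check the release assets for the exact name:

  kubectl create -f https://github.com/kubeless/kubeless/releases/download/0.0.20/kubeless-0.0.20.yaml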

  4. Create function

    We can create a function that creates an HTTP server and pulls the method, URL, headers, and body out of each request.

        const http = require('http');

        http.createServer((request, response) => {
          const { headers, method, url } = request;
          let body = [];
          request.on('error', (err) => {
            console.error(err);
          }).on('data', (chunk) => {
            body.push(chunk);
          }).on('end', () => {
            body = Buffer.concat(body).toString();
            // At this point, we have the headers, method, url and body, and can now
            // do whatever we need to in order to respond to this request.
          });
        }).listen(8080); // Activates this server, listening on port 8080.
  5. Run the function in the Kubeless environment

We can register the function with Kubeless by providing the following information:

  1. The name to be used to access the function over the Web
  2. The protocol to be used to access the function
  3. The language runtime to be executed to run the code
  4. The name of the file containing the function code
  5. The name of the function inside the file

Supplying items 1-5 above, we invoke the following command to register and deploy the function in Kubeless:

  kubeless function deploy serverequest --trigger-http --runtime nodejs6 --handler serverequest.createServer --from-file /tmp/serverequest.js
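Once the function is registered, it can be listed and invoked from the CLI as well; kubeless function call proxies the request through the Kubernetes API server:

  kubeless function ls
  kubeless function call serverequest --data 'Hello from Kubeless'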

E. Evaluation of Serverless Platforms

Each of the serverless platforms we evaluated has its unique value proposition. With OpenFaaS, any process or container can be packaged as a serverless function for either Linux or Windows. For enterprises, the architecture used by OpenFaaS provides a seamless ability to plug into an existing cluster scheduler and CI/CD workflow for microservices, as OpenFaaS is built around Docker and all functions are packaged into Docker images. OpenFaaS also gives enterprises a seamless way to administer and execute functions via the external API and the Gateway, and to manage the lifecycle of a function, including deployments, scaling, and secrets management, via a Provider.

Fission has an event-driven architecture, which makes it ideal for short-lived, stateless applications, including REST API or webhook implementations and DevOps automation. A good use case for Fission might be the backend for a chatbot, as Fission achieves good cold-start performance and delivers fast response times by keeping a running pool of containers with their runtimes.

Finally, the Kubeless architecture leverages native Kubernetes concepts to deploy and manage functions, such as Custom Resource Definitions to define a function and a custom controller to manage it, deploy it as a Kubernetes deployment, and expose it via a Kubernetes service. This close alignment with native Kubernetes functionality will appeal to existing Kubernetes users, lowering the learning curve and seamlessly plugging into an existing Kubernetes architecture.

Hisham Hasan
Hisham is a consulting Enterprise Solutions Architect with experience in leveraging container technologies to solve infrastructure problems and deploy applications faster and with higher levels of security, performance and reliability. Recently, Hisham has been leveraging containers and cloud-native architecture for a variety of middleware applications to deploy complex and mission-critical services across the enterprise. Prior to entering the consulting world, Hisham worked at Aon Hewitt, Lexmark and ADP in software implementation and technical support.