Serverless with Knative

The source code for this section is contained in the 03_02 branch of the GitHub repository for this course.

Serverless is a cloud computing model and architecture that divides application logic into minimal units. These units have fast startup and shutdown times, and they scale automatically according to demand; the platform fully manages this scaling without human intervention.

Common examples of Serverless services are AWS Lambda and Azure Functions.

In the case of OpenShift, the Knative project enables Serverless functionality. You can learn more about Knative at their website: knative.dev.

Knative is an open-source project originally created by Google, and it has become a de facto standard for developing and deploying serverless services on Kubernetes.

On OpenShift, Knative is packaged as an operator called the "Red Hat OpenShift Serverless" operator. Your cluster administrator must install this operator before you can create Knative services.
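If you are unsure whether the operator is already installed, you can check from the command line. A minimal sketch, assuming the operator lives in the usual `openshift-serverless` namespace (names may differ on your cluster):

```shell
# List the installed operator versions in the openshift-serverless
# namespace; the Serverless operator should appear here if installed.
oc get csv -n openshift-serverless

# Confirm that the Knative Serving API group is available,
# which means you can create serving.knative.dev/v1 Service objects.
oc api-resources --api-group=serving.knative.dev
```

If the second command lists resources such as `services` and `revisions`, the cluster is ready for Knative workloads.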

Knative services are defined just like any other Kubernetes object, using YAML. As the example below shows, Knative services use their own custom resource definition, separate from standard Kubernetes services.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: simple-deno-api
spec:
  template:
    metadata:
      name: simple-deno-api-v1 (1)
      annotations:
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10" (2)
        autoscaling.knative.dev/target: "3"
    spec:
      containers:
      - image: registry.gitlab.com/akosma/simple-deno-api:latest (3)
1 Each revision of a Knative service has a unique name, following the naming schema shown in the example; this allows several versions of the same service to run simultaneously on the same cluster.
2 Developers configure Knative services autoscaling with annotations.
3 Knative uses standard containers as serverless units of work.
Figure 1. A Knative serverless service deployed on Red Hat OpenShift