Microservices and Service Mesh

The source code for this section is contained in the 03_03 branch of the GitHub repository for this course.

Microservices are a very common architectural pattern these days, in which complex applications are decomposed into individual services, each consisting of one or many containers. Deploying and maintaining a microservices architecture can, however, be extremely complicated.

A service mesh is a dedicated infrastructure that provides reliable and secure communication between microservices in a distributed system. It consists of proxies that intercept and manage the network traffic between the microservices, and systems responsible for configuring and monitoring those proxies. A service mesh can offer various benefits:

  • Load balancing

  • Service discovery

  • Encryption

  • Authentication and authorization

  • Observability

  • Fault tolerance

  • Resilience

OpenShift Service Mesh is a Red Hat OpenShift operator that provides service mesh functionality. It is based on the Istio project, an open-source platform that simplifies the deployment and operation of distributed systems. Be mindful that you cannot use the istioctl tool in OpenShift to interact with Istio’s control plane components. To learn more about Istio, visit its official website.

Istio also integrates with Kiali, a management console for Service Mesh. Kiali provides a graphical UI that allows you to visualize your mesh topology and health, as well as perform various actions on your services and workloads. You can use Kiali to view metrics, traces, logs, alerts, configurations, validations, and more. To learn more about Kiali, visit its official website.

Installation

Let’s install OpenShift Service Mesh using the operator provided by Red Hat:

  1. Navigate to Operators → OperatorHub from the left navigation menu.

  2. Search for "OpenShift Service Mesh" in the search box and click on it.

  3. Click on "Install" from the pop-up window.

  4. Select the default options.

  5. Click on Install again from the bottom-right corner.

  6. Wait for the operator installation to complete.

Repeat the same operation for the "Kiali Operator" and the "Red Hat OpenShift distributed tracing platform" operator, both also provided by Red Hat.
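If you prefer to script these installations instead of clicking through the console, each operator can be installed by creating an OLM Subscription with `oc apply -f`. The sketch below assumes the default channel and catalog names; verify them in OperatorHub for your cluster version before applying:

```yaml
# Sketch: Subscriptions for the three operators used in this section.
# Channel names are assumptions and may differ in your cluster version.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator        # OpenShift Service Mesh
  namespace: openshift-operators
spec:
  channel: stable                  # assumed channel name
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kiali-ossm                 # Kiali Operator
  namespace: openshift-operators
spec:
  channel: stable                  # assumed channel name
  name: kiali-ossm
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product             # Red Hat OpenShift distributed tracing platform
  namespace: openshift-operators
spec:
  channel: stable                  # assumed channel name
  name: jaeger-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Either way, the result is the same: the operators are installed cluster-wide and appear under Operators → Installed Operators.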

Istio integrates transparently with existing distributed applications by injecting an Envoy proxy into every service as a sidecar container, which intercepts and directs traffic between services.

Using Istio and Kiali

To deploy your applications using Istio’s sidecar injection feature, add the sidecar.istio.io/inject: 'true' annotation to the pod template of your deployments. This instructs Istio to inject an Envoy proxy into any pod created from those deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apitest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apitest
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: 'true' (1)
      labels:
        app: apitest
    spec:
      volumes:
        # ...
1 Annotation required to inject the sidecar proxy in your application pod.

Once you have installed Istio and deployed your applications with Envoy proxies, you can start exploring Istio’s features and capabilities using Kiali.

Deploy your application and use it for a while. You will see that your pods now have two containers instead of the usual one. Open the "istio-system" project in the OpenShift Web Console and select "Networking → Routes." Open the "Kiali" route and select your project. You should now see a graphical representation of the interactions between the various components of your microservices architecture.
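You can also confirm the injection and locate the Kiali console from the CLI. This is a sketch assuming the deployment from the example above (the `app=apitest` label and pod name are illustrative):

```shell
# Verify that the sidecar was injected: the pod should report two
# ready containers (the application plus the Envoy proxy), e.g.
#   apitest-6c7f9d8b5d-x2k4j   2/2   Running
oc get pods -l app=apitest

# Retrieve the hostname of the Kiali console from its route.
oc get route kiali -n istio-system -o jsonpath='{.spec.host}'
```

A READY column showing 1/1 instead of 2/2 usually means the annotation was missing from the pod template when the pod was created.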

Figure 1. Kiali showing routing between microservices