Logging
The source code for this section is contained in the 04_01 branch of the GitHub repository for this course.
Once your applications are up and running, it is essential to keep an eye on them. The good news is that OpenShift makes it trivial to monitor the behavior of your Cloud Native applications at runtime: it comes bundled with standard open-source tools such as Prometheus and Kibana, so there is no need to install them separately.
Let us talk about logging. Following Kubernetes' best practices and the Twelve-Factor App Principles, your Cloud Native applications should publish their log messages as a stream of events written directly to standard output, so that standard Kubernetes tools can process them. Write one event per line, directly to the "stdout" of your process.
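The course's sample application already behaves this way. As an illustration only, here is a minimal sketch in Go of what "one event per line on stdout" means in practice; the actual language and field names of the sample application are not shown in this section, so treat them as assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    // event is a hypothetical log record; the field names are illustrative.
    type event struct {
    	Time    string `json:"time"`
    	Level   string `json:"level"`
    	Message string `json:"message"`
    }

    func main() {
    	e := event{
    		Time:    time.Now().UTC().Format(time.RFC3339),
    		Level:   "info",
    		Message: "application started",
    	}
    	line, err := json.Marshal(e)
    	if err != nil {
    		panic(err)
    	}
    	// One event per line, straight to stdout: the cluster's log
    	// collector picks it up without any extra configuration.
    	fmt.Println(string(line))
    }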
Checking Pod Logs on the Web Console
The easiest way to check the logs of your pods is to use the Web Console. In the Administration perspective, select the Workloads / Pods menu, and click on any pod running in your cluster. The "Logs" tab will immediately show the current logs of your pod, which will give you a precise idea of the current status of your application.
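The same logs are available on the terminal through the oc tool; for example (the pod name below is hypothetical):

    oc logs simpleapi-5d8f7c9b4-xk2lp      # print the current logs of a pod
    oc logs -f simpleapi-5d8f7c9b4-xk2lp   # follow the log stream, like tail -f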
Checking logs pod by pod quickly becomes cumbersome, especially for large microservice applications with many individual components. Worse, Kubernetes can kill pods at any time depending on their runtime status and the current cluster conditions, which means that the log messages of any particular pod can be lost at any moment.
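If a container crashed and was restarted inside a still-existing pod, you can at least recover the output of its previous run; once the pod itself is deleted, however, its logs are gone for good (again, the pod name is hypothetical):

    oc logs --previous simpleapi-5d8f7c9b4-xk2lp   # logs of the previous container instance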
Using Elasticsearch and Kibana
Given the nature of Kubernetes deployments, it is better to aggregate application logs using Elasticsearch and Kibana, two tools that have become the standard Cloud Native way to manage logs. Both are provided as standard OpenShift operators.
If you are using OpenShift Local, please make sure to configure your CRC cluster to enable monitoring. You must delete any current instance using crc delete and then run the crc config set enable-cluster-monitoring true command. Then start a new instance with enough disk space and memory: crc start --memory 20480 --cpus 6 --disk-size 300 --nameserver 1.1.1.1 --pull-secret-file ./pull-secret
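Put together, the whole sequence on the terminal looks like this (the crc config get line is just an optional sanity check):

    crc delete
    crc config set enable-cluster-monitoring true
    crc config get enable-cluster-monitoring
    crc start --memory 20480 --cpus 6 --disk-size 300 --nameserver 1.1.1.1 --pull-secret-file ./pull-secret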
Using the web console:
- Install the "OpenShift Elasticsearch Operator" with the defaults.
- Install the "Red Hat OpenShift Logging" operator with the defaults.
- Wait until both operators show the "Succeeded" status in the Operators / Installed Operators screen.
- On the terminal, oc login as kubeadmin.
- Apply the logging/cluster-logging-instance.yaml file to your cluster using the oc tool, to create an instance of Kibana (see the sketch below).
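The exact contents of logging/cluster-logging-instance.yaml are in the course repository. For reference only, a minimal ClusterLogging instance typically looks like the sketch below; the node count, redundancy policy, and memory request are assumptions suited to a single-node OpenShift Local cluster:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: elasticsearch
        elasticsearch:
          nodeCount: 1                     # a single node is enough for a local cluster
          redundancyPolicy: ZeroRedundancy # no replicas on a single-node setup
          resources:
            requests:
              memory: 2Gi
      visualization:
        type: kibana
        kibana:
          replicas: 1
      collection:
        logs:
          type: fluentd
          fluentd: {}

Apply it with oc apply -f logging/cluster-logging-instance.yaml.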
Depending on the amount of RAM in your computer, this process can take more or less time.
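You can follow the progress from the terminal; the collector, Elasticsearch, and Kibana pods appear one after the other:

    oc get pods -n openshift-logging      # list the logging pods and their status
    oc get pods -n openshift-logging -w   # or watch them until everything is Running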
Once all the pods in the "openshift-logging" namespace are ready and running, you should be able to open the application launcher menu at the top right of the OpenShift web console, and Kibana will open in a new browser window. You can log in to Kibana using the standard username and password provided by OpenShift Local at startup. You can then define an index pattern and customize your dashboard with graphs and tables.
Create a new project, deploy the registry.gitlab.com/akosma/linkedin-learning-simpleapi:latest container in it, and call the "/unstable" path a few times. Then navigate in the OpenShift web console to the pod and select the "Logs" tab. Click the "View in Kibana" button, and the Kibana application will open with your application logs in it.
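If you prefer the terminal for the first part of this exercise, a possible session looks like this; the project and application names are arbitrary, and the route URL depends on your cluster:

    oc new-project logging-demo
    oc new-app registry.gitlab.com/akosma/linkedin-learning-simpleapi:latest --name=simpleapi
    oc expose service/simpleapi
    # call the "/unstable" path a few times to generate log entries
    curl http://$(oc get route simpleapi -o jsonpath='{.spec.host}')/unstable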
OpenShift also allows you to forward all logs outside of your cluster, if that's required or desirable.
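Log forwarding is configured through a ClusterLogForwarder resource. As a sketch only, forwarding all application logs to a hypothetical external Elasticsearch instance could look like this; the output name and URL are assumptions:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
        - name: external-es
          type: elasticsearch
          url: https://elasticsearch.example.com:9200   # hypothetical endpoint
      pipelines:
        - name: forward-app-logs
          inputRefs:
            - application        # forward application logs only
          outputRefs:
            - external-es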