In this chapter, we will learn about application logging in Kubernetes and implement it. Additionally, we will introduce a structured logging format to make it ready for parsing by log collection and analysis systems.

The application described in this chapter is not intended for use in production environments as-is; it becomes production-ready only once you complete this entire guide.

Preparing the environment

Prepare the environment using the instructions provided in the “Preparing the environment” chapter (if you have not done this already).

Please refer to these helpful resources if the environment has stopped working or the instructions in this chapter don’t work:

Is Docker running?

Let’s launch Docker Desktop. It takes some time for this application to start Docker. If there are no errors during the startup process, check that Docker is running and is properly configured:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.

Start Docker:

sudo systemctl restart docker

Make sure that Docker is running:

sudo systemctl status docker

If the Docker start is successful, you will see the following output:

● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2021-06-24 13:05:17 MSK; 13s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2013888 (dockerd)
      Tasks: 36
     Memory: 100.3M
     CGroup: /system.slice/docker.service
             └─2013888 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

dockerd[2013888]: time="2021-06-24T13:05:16.936197880+03:00" level=warning msg="Your kernel does not support CPU realtime scheduler"
dockerd[2013888]: time="2021-06-24T13:05:16.936219851+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
dockerd[2013888]: time="2021-06-24T13:05:16.936224976+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
dockerd[2013888]: time="2021-06-24T13:05:16.936311001+03:00" level=info msg="Loading containers: start."
dockerd[2013888]: time="2021-06-24T13:05:17.119938367+03:00" level=info msg="Loading containers: done."
dockerd[2013888]: time="2021-06-24T13:05:17.134054120+03:00" level=info msg="Daemon has completed initialization"
systemd[1]: Started Docker Application Container Engine.
dockerd[2013888]: time="2021-06-24T13:05:17.148493957+03:00" level=info msg="API listen on /run/docker.sock"

Now let’s check if Docker is available and its configuration is correct:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.

Have you restarted the computer after setting up the environment?

Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:

minikube start

Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:

kubectl config set-context minikube --namespace=werf-guide-app

If the minikube start command completes successfully, you will see output like the following:

😄  minikube v1.20.0 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image registry:2.7.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying registry addon...
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default

Make sure that the command output contains the following line:

Restarting existing docker container for "minikube"

Its absence means that a new minikube cluster was created instead of using the old one. In this case, repeat all the steps required to install the environment using minikube.

Now run the following command in a separate PowerShell terminal and leave it running in the background (do not close its window):

minikube tunnel --cleanup=true

Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:

minikube start --namespace werf-guide-app

Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:

kubectl config set-context minikube --namespace=werf-guide-app

If the minikube start command completes successfully, you will see output like the following:

😄  minikube v1.20.0 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image registry:2.7.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying registry addon...
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default

Make sure that the command output contains the following line:

Restarting existing docker container for "minikube"

Its absence means that a new minikube cluster was created instead of using the old one. In this case, repeat all the steps required to install the environment from scratch using minikube.

Did you accidentally delete the application's Namespace?

If you have inadvertently deleted the application's Namespace, run the following commands to proceed with the guide:

kubectl create namespace werf-guide-app
kubectl create secret docker-registry registrysecret \
  --docker-server='https://index.docker.io/v1/' \
  --docker-username='<Docker Hub username>' \
  --docker-password='<Docker Hub password>'

You will see the following output if the commands complete successfully:

namespace/werf-guide-app created
secret/registrysecret created

Nothing helps; the environment or instructions keep failing.

If nothing worked, repeat all the steps described in the “Preparing the environment” chapter and create a new environment from scratch. If creating an environment from scratch did not help either, please tell us about your problem in our Telegram chat or create an issue on GitHub. We will be happy to help you!

Preparing the repository

Update the existing repository containing the application:

Run the following commands in PowerShell:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/rails/020_logging/* .
git add .
git commit -m WIP

What changes we will make

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/rails/020_logging/. .
git add .
git commit -m WIP

What changes we will make

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Doesn’t work? Try the instructions on the “I am just starting from this chapter” tab above.

Prepare a new repository with the application:

Run the following commands in PowerShell:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
if (-not (Test-Path ~/werf-guide/guides)) {
  git clone https://github.com/werf/werf-guides $env:HOMEPATH/werf-guide/guides
}

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -Recurse -Force ~/werf-guide/app
cp -Recurse -Force ~/werf-guide/guides/examples/rails/010_basic_app ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/rails/020_logging/* .
git add .
git commit -m WIP

What changes we will make

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
test -e ~/werf-guide/guides || git clone https://github.com/werf/werf-guides ~/werf-guide/guides

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -rf ~/werf-guide/app
cp -rf ~/werf-guide/guides/examples/rails/010_basic_app ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/rails/020_logging/. .
git add .
git commit -m WIP

What changes we will make

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Redirecting logs to stdout

All applications deployed to a Kubernetes cluster must write logs to stdout/stderr. Sending logs to standard streams makes them accessible by Kubernetes and log collection systems. Such an approach keeps logs from being deleted when containers are recreated and prevents consuming all available storage on Kubernetes nodes if the output is sent to log files in containers.

By default, Rails saves logs to a file instead of stdout/stderr. Set the appropriate environment variable (RAILS_LOG_TO_STDOUT=1) to enable redirecting logs (including errors) to stdout.

Since streaming logs to stdout is all we need, we will direct them to stdout in all cases, regardless of any environment variables:

...
config.logger           = ActiveSupport::Logger.new(STDOUT)

Formatting logs

By default, Rails-based applications generate plain text logs:

=> Booting Puma
=> Rails 6.1.4 application starting in production
=> Run "bin/rails server --help" for more startup options
Puma starting in single mode...
* Puma version: 5.3.2 (ruby 2.7.4-p191) ("Sweetnighter")
*  Min threads: 5
*  Max threads: 5
*  Environment: production
*          PID: 1
* Listening on http://0.0.0.0:3000
Use Ctrl-C to stop
I, [2021-06-07T16:47:28.498897 #1]  INFO -- : Started GET "/ping" for 192.168.49.1 at 2021-07-23 15:08:16 +0000
I, [2021-06-07T16:47:28.498972 #1]  INFO -- : Processing by ApplicationController#ping as */*
I, [2021-06-07T16:47:28.498897 #1]  INFO -- : Completed 200 OK in 0ms (Views: 0.1ms | Allocations: 166)

Note how the application logs end up mixed with Rails and Puma logs. All these logs have different formats. Regular log collection and analysis systems will probably struggle with parsing all this gibberish.

You can solve this problem by shipping logs in a structured JSON-like format. Most log collection systems can both easily parse JSON and correctly process logs/messages in other (unexpected, unstructured) formats when they sneak in between JSON-formatted logs.

Let’s define an ActiveSupport::Logger::SimpleFormatter class that converts plain text logs into JSON-formatted ones:

class JSONSimpleFormatter < ActiveSupport::Logger::SimpleFormatter
  def call(severity, time, _, message)
    JSON.generate({
      type: severity,
      time: time.iso8601,
      message: message,
    }) + "\n"
  end
end

Log tagging support is beyond the scope of this guide; however, you can implement it if necessary.
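You can exercise the formatter idea outside of Rails as a sanity check. Below is a stdlib-only sketch of the same technique (it avoids the ActiveSupport dependency; the `JSON_FORMATTER` name is ours): Ruby's standard Logger accepts any callable responding to call(severity, time, progname, message), so a small proc is enough to emit JSON lines.

```ruby
require "json"
require "logger"
require "time"

# A stdlib-only analogue of JSONSimpleFormatter: convert each log record
# into a single JSON line with severity, ISO 8601 time, and message.
JSON_FORMATTER = proc do |severity, time, _progname, message|
  JSON.generate({
    type: severity,
    time: time.iso8601,
    message: message,
  }) + "\n"
end

# Attach the formatter to a stdout logger, just as the Rails config does.
logger = Logger.new($stdout, formatter: JSON_FORMATTER)
logger.info('Started GET "/ping"')
```

Running this prints a JSON line per log record instead of the default plain-text format, which is exactly what the Rails configuration below achieves application-wide.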

Now let’s make the new JSONSimpleFormatter class the default one to convert all logs to JSON:

...
config.log_formatter = JSONSimpleFormatter.new

config.logger           = ActiveSupport::Logger.new(STDOUT)
config.logger.formatter = config.log_formatter

Please note! The commands in the current section are for illustrative purposes only; they show how the basic application was generated. Only the commands in the “Checking whether the application is running” section are intended to be executed.

Managing the logging level

By default, the application logging level for the production environment is set to info. However, sometimes you may wish to change that.

For example, switching the logging level from info to debug can provide additional debugging information and help in troubleshooting problems in production. However, if the application has many replicas, switching them all to the debug level may not be the best idea. It may affect security and significantly increase the load on log collection, storage, and analysis components.

You can use an environment variable to set the logging level, thus solving the above problem. Using this approach, you can run a single Deployment replica with the debug logging level next to the existing replicas with the info logging level enabled. Moreover, you can disable centralized log collection for this new Deployment (if any). Together, all these measures prevent overloading log collection systems while keeping debug logs containing potentially sensitive information from being streamed to them.

Here is how you can set the logging level using the RAILS_LOG_LEVEL environment variable:

...
config.log_level = ENV.fetch("RAILS_LOG_LEVEL", "info").downcase.strip.to_sym

If the environment variable is omitted, the info level is used by default.
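The expression above can be wrapped in a small helper that also rejects unknown values. This is an illustrative sketch, not Rails code: the `resolve_log_level` name and the `VALID_LEVELS` whitelist are our own additions.

```ruby
# Levels accepted by the standard Rails/Ruby loggers.
VALID_LEVELS = %i[debug info warn error fatal].freeze

# Resolve the log level from an environment variable, normalizing case and
# whitespace, and falling back to :info for missing or unknown values.
def resolve_log_level(env = ENV)
  level = env.fetch("RAILS_LOG_LEVEL", "info").downcase.strip.to_sym
  VALID_LEVELS.include?(level) ? level : :info
end
```

With this helper, `RAILS_LOG_LEVEL=" DEBUG "` resolves to :debug, while an unset or misspelled variable safely falls back to :info.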

Log filtering

Let’s enable log filtering to filter sensitive parameters out of the logs:

# Be sure to restart your server when you modify this file.

# Configure sensitive parameters which will be filtered from the log file.
Rails.application.config.filter_parameters += [
  :passw, :secret, :token, :_key, :crypt, :salt, :certificate, :otp, :ssn
]

Keep in mind that you need to update this list after adding new secrets.
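Under the hood, Rails performs this filtering with ActiveSupport::ParameterFilter. Conceptually, it behaves like the simplified, stdlib-only sketch below (the `filter_params` helper and `FILTER_PATTERNS` list are illustrative names; the real implementation also handles nested hashes, regexps, and procs):

```ruby
# Substrings that mark a parameter as sensitive (mirrors the list above).
FILTER_PATTERNS = %w[passw secret token _key crypt salt certificate otp ssn].freeze

# Replace the value of any parameter whose name contains one of the
# patterns with "[FILTERED]", leaving other parameters untouched.
def filter_params(params)
  params.to_h do |key, value|
    if FILTER_PATTERNS.any? { |pattern| key.to_s.include?(pattern) }
      [key, "[FILTERED]"]
    else
      [key, value]
    end
  end
end
```

Note that matching is by substring, which is why :passw also covers "password" and "password_confirmation".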

Displaying logs in werf-based deploys

By default, when deploying, werf prints logs of all application containers until they become Ready. You can filter these logs using custom werf annotations. In this case, werf will only print lines that match the specified patterns. Additionally, you can enable log output for specific containers.

For example, here is how you can disable log output for the container_name container during the deployment:

annotations:
  werf.io/skip-logs-for-containers: container_name

The example below shows how you can enable printing lines that match a pre-defined regular expression:

annotations:
  werf.io/log-regex: ".*ERROR.*"

A list of all available annotations can be found here.

Note that these annotations only influence the way logs are shown during the werf-based deployment process. They do not affect the application being deployed or its configuration in any way. You can still use stdout/stderr streams of the container to view raw logs.
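Combined in a single manifest, the two annotations above might look as follows. This is an illustrative sketch only: the "sidecar" container name is hypothetical, and the spec section is trimmed.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: werf-guide-app
  annotations:
    # During deployment, print only log lines matching this regex...
    werf.io/log-regex: ".*ERROR.*"
    # ...and skip the logs of the (hypothetical) "sidecar" container entirely.
    werf.io/skip-logs-for-containers: sidecar
spec:
  # ... the usual selector/template fields go here ...
```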

Checking whether the application is running

Let’s deploy our application:

werf converge --repo <DOCKER HUB USERNAME>/werf-guide-app

You should see the following output:

...
┌ ⛵ image app
│ ┌ Building stage app/dockerfile
│ │ app/dockerfile  Sending build context to Docker daemon  30.72kB
│ │ app/dockerfile  Step 1/13 : FROM ruby:2.7
│ │ app/dockerfile   ---> 1faa5f2f8ca3
...
│ │ app/dockerfile  Step 13/13 : LABEL werf-version=v1.2.11+fix10
│ │ app/dockerfile   ---> Running in db6e76c3f427
│ │ app/dockerfile  Removing intermediate container db6e76c3f427
│ │ app/dockerfile   ---> d7f69fbfdedb
│ │ app/dockerfile  Successfully built d7f69fbfdedb
│ │ app/dockerfile  Successfully tagged eb50cb50-1191-4d0b-8bf2-a4d5bba18ecf:latest
│ ├ Info
│ │      name: .../werf-guide-app:...
│ │        id: d7f69fbfdedb
│ │   created: 2021-07-23 18:00:19 +0300 MSK
│ │      size: 327.3 MiB
│ └ Building stage app/dockerfile (12.72 seconds)
└ ⛵ image app (17.81 seconds)

┌ Waiting for release resources to become ready
│ ┌ Status progress
│ │ DEPLOYMENT      REPLICAS  AVAILABLE  UP-TO-DATE
│ │ werf-guide-app  1/1       1          1
│ │ │   POD                         READY  RESTARTS  STATUS
│ │ ├── guide-app-5f97776488-vwqfg  1/1    0         Terminating
│ │ └── guide-app-fcf7c4ff5-wvb62   1/1    0         Running
│ └ Status progress
└ Waiting for release resources to become ready (4.89 seconds)

Release "werf-guide-app" has been upgraded. Happy Helming!
NAME: werf-guide-app
LAST DEPLOYED: Fri Jul 23 18:00:34 2021
NAMESPACE: werf-guide-app
STATUS: deployed
REVISION: 14
TEST SUITE: None
Running time 26.67 seconds

Make several requests in order to generate some logging data:

curl http://werf-guide-app/ping       # returns "pong" + 200 OK status code
curl http://werf-guide-app/not_found  # no response body; returns 404 Not Found

While making these requests, we won’t see the status codes the server returns. However, we can find them in the logs. Let’s take a look:

kubectl logs deploy/werf-guide-app

You should see the following output:

=> Booting Puma
=> Rails 6.1.4 application starting in production
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 5.3.2 (ruby 2.7.4-p191) ("Sweetnighter")
*  Min threads: 5
*  Max threads: 5
*  Environment: production
*          PID: 1
* Listening on http://0.0.0.0:3000
Use Ctrl-C to stop
{"type":"INFO","time":"2021-07-23T15:11:36+00:00","message":"Started GET \"/ping\" for 192.168.49.1 at 2021-07-23 15:11:36 +0000"}
{"type":"INFO","time":"2021-07-23T15:11:36+00:00","message":"Processing by ApplicationController#ping as */*"}
{"type":"INFO","time":"2021-07-23T15:11:36+00:00","message":"Completed 200 OK in 0ms (Views: 0.1ms | Allocations: 135)"}
{"type":"INFO","time":"2021-07-23T15:11:30+00:00","message":"Started GET \"/not_found\" for 192.168.49.1 at 2021-07-23 15:11:30 +0000"}
{"type":"FATAL","time":"2021-07-23T15:11:30+00:00","message":"  \nActionController::RoutingError (No route matches [GET] \"/not_found\"):\n  "}

Note that application logs are now rendered in JSON format, and most log processing systems can easily parse them. At the same time, Rails and Puma logs are streamed in plain text just like before. The main advantage of this approach is that log processing systems will no longer try to parse application logs and Rails/Puma logs as if they have the same format. JSON logs will be stored separately, letting you perform searching/filtering based on the selected fields.