In this chapter, we will learn about application logging in Kubernetes and implement it. Additionally, we will introduce a structured logging format to make it ready for parsing by log collection and analysis systems.
The application described in this chapter is not intended for production use as-is. To make it production-ready, you will need to complete this entire guide.
Preparing the environment
If you haven’t prepared your environment in the previous steps, please do so using the instructions provided in the “Preparing the environment” chapter.
If your environment has stopped working or the instructions in this chapter don’t work, refer to these hints:
Let’s launch Docker Desktop. It takes some time for this application to start Docker. If there are no errors during the startup process, check that Docker is running and is properly configured:
docker run hello-world
You will see the following output if the command completes successfully:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Should you have any problems, please refer to the Docker documentation.
Start Docker:
sudo systemctl restart docker
Make sure that Docker is running:
sudo systemctl status docker
If Docker started successfully, you will see output similar to the following:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-06-24 13:05:17 MSK; 13s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 2013888 (dockerd)
Tasks: 36
Memory: 100.3M
CGroup: /system.slice/docker.service
└─2013888 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
dockerd[2013888]: time="2021-06-24T13:05:16.936197880+03:00" level=warning msg="Your kernel does not support CPU realtime scheduler"
dockerd[2013888]: time="2021-06-24T13:05:16.936219851+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
dockerd[2013888]: time="2021-06-24T13:05:16.936224976+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
dockerd[2013888]: time="2021-06-24T13:05:16.936311001+03:00" level=info msg="Loading containers: start."
dockerd[2013888]: time="2021-06-24T13:05:17.119938367+03:00" level=info msg="Loading containers: done."
dockerd[2013888]: time="2021-06-24T13:05:17.134054120+03:00" level=info msg="Daemon has completed initialization"
systemd[1]: Started Docker Application Container Engine.
dockerd[2013888]: time="2021-06-24T13:05:17.148493957+03:00" level=info msg="API listen on /run/docker.sock"
Now let’s check if Docker is available and its configuration is correct:
docker run hello-world
You will see the following output if the command completes successfully:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Should you have any problems, please refer to the Docker documentation.
Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:
minikube start
Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:
kubectl config set-context minikube --namespace=werf-guide-app
You will see the following output if the command completes successfully:
😄 minikube v1.20.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🎉 minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image registry:2.7.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying registry addon...
🔎 Verifying ingress addon...
🌟 Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄 Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default
Make sure that the command output contains the following line:
Restarting existing docker container for "minikube"
Its absence means that a new minikube cluster was created instead of using the old one. In this case, repeat all the steps required to install the environment using minikube.
Now run the following command in a separate PowerShell terminal and leave it running in the background (do not close its window):
minikube tunnel --cleanup=true
If you have inadvertently deleted the Namespace of the application, run the following commands to proceed with the guide:
kubectl create namespace werf-guide-app
kubectl create secret docker-registry registrysecret \
--docker-server='https://index.docker.io/v1/' \
--docker-username='<Docker Hub username>' \
--docker-password='<Docker Hub password>'
You will see the following output if the command completes successfully:
namespace/werf-guide-app created
secret/registrysecret created
If nothing worked, repeat all the steps described in the “Preparing the environment” chapter and create a new environment from scratch. If creating an environment from scratch did not help either, please, tell us about your problem in our Telegram chat or create an issue on GitHub. We will be happy to help you!
Preparing the repository
Update the existing repository containing the application:
Run the following commands in PowerShell:
cd ~/werf-guide/app
# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/nodejs/020_logging/* .
git add .
git commit -m WIP
# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show
Run the following commands in Bash:
cd ~/werf-guide/app
# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/nodejs/020_logging/. .
git add .
git commit -m WIP
# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show
Doesn’t work? Try the instructions on the “I am just starting from this chapter” tab above.
Prepare a new repository with the application:
Run the following commands in PowerShell:
# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
if (-not (Test-Path ~/werf-guide/guides)) {
git clone https://github.com/werf/website $env:HOMEPATH/werf-guide/guides
}
# Copy the (unchanged) application files to ~/werf-guide/app.
rm -Recurse -Force ~/werf-guide/app
cp -Recurse -Force ~/werf-guide/guides/examples/nodejs/010_basic_app ~/werf-guide/app
# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial
# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/nodejs/020_logging/* .
git add .
git commit -m WIP
# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show
Run the following commands in Bash:
# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
test -e ~/werf-guide/guides || git clone https://github.com/werf/website ~/werf-guide/guides
# Copy the (unchanged) application files to ~/werf-guide/app.
rm -rf ~/werf-guide/app
cp -rf ~/werf-guide/guides/examples/nodejs/010_basic_app ~/werf-guide/app
# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial
# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/nodejs/020_logging/. .
git add .
git commit -m WIP
# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show
Redirecting logs to stdout
All applications deployed to a Kubernetes cluster must write their logs to stdout/stderr. Sending logs to the standard streams makes them accessible to Kubernetes and log collection systems, keeps them from being lost when containers are recreated, and prevents log files inside containers from consuming all available storage on Kubernetes nodes.
By default, express writes logs to stdout, so no additional actions are required to output logs.
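For reference, here is a minimal stdlib-only sketch (not part of the example application) showing how Node.js maps its logging helpers onto the standard streams that Kubernetes captures:

```javascript
// console.log and console.error write to the process's standard streams;
// Kubernetes collects both stdout and stderr from the container's main process.
console.log('normal log line');   // goes to stdout
console.error('error log line');  // goes to stderr

// The same streams can also be written to directly:
process.stdout.write('also stdout\n');
process.stderr.write('also stderr\n');
```

Because the application writes to these streams, `kubectl logs` can show its output without any extra log files or agents inside the container.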
Formatting logs
By default, Node.js-based applications generate plain-text logs:
> app@0.0.0 start /app
> node ./bin/www
Thu, 11 Nov 2021 15:06:10 GMT app:server Listening on port 3000
GET /ping 200 6.086 ms - 5
Note that both the application start message and the request (made with curl, just like at the end of the previous section) are displayed in plain text.
Regular log collection and analysis systems will probably struggle with parsing all this gibberish.
You can solve this problem by shipping logs in a structured JSON format. Most log collection systems can both easily parse JSON and correctly process logs/messages in other (unexpected, unstructured) formats when they sneak in between JSON-formatted entries.
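To see why JSON-formatted logs are easier on collectors, here is a small illustrative sketch (not part of the example application) of the fallback logic a log collector typically applies: try to parse each line as JSON, and wrap anything that is not valid JSON in a structured record of its own so it is not lost:

```javascript
// Parse a stream of log lines: JSON lines become structured records,
// plain-text lines are wrapped so nothing is lost.
function parseLogLine(line) {
  try {
    return JSON.parse(line); // structured log: use as-is
  } catch (e) {
    return { msg: line, unstructured: true }; // plain text: wrap it
  }
}

const lines = [
  '{"level":30,"msg":"request completed","responseTime":3}',
  'Thu, 11 Nov 2021 15:06:10 GMT app:server Listening on port 3000',
];

const records = lines.map(parseLogLine);
console.log(records[0].msg);          // "request completed"
console.log(records[1].unstructured); // true
```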
To do this, you will need a logger that supports JSON output. In our example, we will use pino (don’t forget to uninstall the default logger):
$ npm uninstall morgan
$ npm install pino pino-http
Connect the logger to the server:
...
const pino = require('pino');
const pinoHttp = require('pino-http');
...
app.use(
  pinoHttp({
    // Control log level that is to be used elsewhere via res.log.
    logger: pino({ level: process.env.LOG_LEVEL || 'info' }),
    // Fixed log level for all server logs.
    useLevel: 'info',
  })
);
Thanks to the pino-http library, you can now access the pino logger via the res.log response object. This way, you only need to initialize the logger once, and then you can use it as a dependency in the code:
...
router.get('/', function (req, res, next) {
  res.log.debug('we are being pinged');
  res.send('pong\n');
});
Log tagging support is beyond the scope of this guide; however, you can implement it if necessary.
Now, if you send a request to the /ping endpoint, you will get the following two JSON lines:
{"level":20,"time":1636645957094,"pid":1,"hostname":"werf-guide-app-d47fd67f9-qln6t","req":{"id":4,"method":"GET","url":"/ping","query":{},"params":{},"headers":{"host":"werf-guide-app.test","x-request-id":"17ff378b3d2068788b21bb702856cdee","x-real-ip":"192.168.64.1","x-forwarded-for":"192.168.64.1","x-forwarded-host":"werf-guide-app.test","x-forwarded-port":"80","x-forwarded-proto":"http","x-forwarded-scheme":"http","x-scheme":"http","user-agent":"curl/7.64.1","accept":"*/*"},"remoteAddress":"::ffff:172.17.0.2","remotePort":42204},"msg":"we are being pinged"}
{"level":30,"time":1636645957097,"pid":1,"hostname":"werf-guide-app-d47fd67f9-qln6t","req":{"id":4,"method":"GET","url":"/ping","query":{},"params":{},"headers":{"host":"werf-guide-app.test","x-request-id":"17ff378b3d2068788b21bb702856cdee","x-real-ip":"192.168.64.1","x-forwarded-for":"192.168.64.1","x-forwarded-host":"werf-guide-app.test","x-forwarded-port":"80","x-forwarded-proto":"http","x-forwarded-scheme":"http","x-scheme":"http","user-agent":"curl/7.64.1","accept":"*/*"},"remoteAddress":"::ffff:172.17.0.2","remotePort":42204},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"text/html; charset=utf-8","content-length":"5","etag":"W/\"5-dv2B8tXZGrCcINvgl7S82poObJs\""}},"responseTime":3,"msg":"request completed"}
We were able to convert the server logs into JSON using pino. However, this did not affect the npm logs (we use npm to run the application in the Pod). Run the application without npm to get rid of these unnecessary logs:
...
command: ["node", "./bin/www"]
Please note! The steps in this section are for illustrative purposes only: they show the changes already applied to the application files you copied earlier. Only the steps in the “Checking whether the application is running” section are intended to be followed.
Managing the logging level
By default, the application logging level for the production environment is set to info. However, sometimes you may wish to change that.
For example, switching the logging level from info to debug can provide additional debugging information and help in troubleshooting problems in production. However, if the application has many replicas, switching them all to the debug level may not be the best idea: it may affect security and significantly increase the load on log collection, storage, and analysis components.
You can use an environment variable to set the logging level, thus solving the above problem. Using this approach, you can run a single Deployment replica with the debug logging level next to the existing replicas with the info logging level enabled. Moreover, you can disable centralized log collection for this new Deployment (if any). Together, these measures prevent overloading log collection systems while keeping debug logs containing potentially sensitive information from being streamed to them.
Here is how you can set the logging level using the LOG_LEVEL environment variable:
...
env:
  - name: NODE_ENV
    value: production
  - name: LOG_LEVEL
    value: debug
We have defined the LOG_LEVEL environment variable and set it to debug for this replica. In the application code, pino reads the variable and falls back to the info level when it is unset:
...
app.use(
  pinoHttp({
    // Control log level that is to be used elsewhere via res.log.
    logger: pino({ level: process.env.LOG_LEVEL || 'info' }),
    // Fixed log level for all server logs.
    useLevel: 'info',
  })
);
If the environment variable is omitted, the info level is used by default.
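The level fallback can be illustrated with a small stdlib-only sketch that mimics pino's numeric levels (pino maps debug to 20 and info to 30, and drops messages below the active threshold). This is an illustration of the mechanism, not pino itself:

```javascript
// pino-style numeric levels: higher number = more severe.
const LEVELS = { debug: 20, info: 30, warn: 40, error: 50 };

// Read the active level from the environment, falling back to "info",
// mirroring `pino({ level: process.env.LOG_LEVEL || 'info' })`.
const activeLevel = LEVELS[process.env.LOG_LEVEL] || LEVELS.info;

// A message is emitted only if its level reaches the active threshold.
function shouldLog(level) {
  return LEVELS[level] >= activeLevel;
}

console.log(shouldLog('debug')); // false when LOG_LEVEL is unset or "info"
console.log(shouldLog('info'));  // true when LOG_LEVEL is unset or "info"
```

Setting LOG_LEVEL=debug lowers the threshold to 20, so debug messages start passing through without any code changes.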
Displaying logs in werf-based deploys
By default, when deploying, werf prints the logs of all application containers until they become Ready.
You can filter these logs using custom werf annotations. In this case, werf will only print lines that match the specified patterns.
Additionally, you can enable log output for specific containers.
For example, here is how you can disable the log output for the container_name container during the deployment:
annotations:
  werf.io/skip-logs-for-containers: container_name
The example below shows how you can enable printing lines that match a pre-defined regular expression:
annotations:
  werf.io/log-regex: ".*ERROR.*"
A list of all available annotations can be found here.
Note that these annotations only influence the way logs are shown during the werf-based deployment process. They do not affect the application being deployed or its configuration in any way. You can still use stdout/stderr streams of the container to view raw logs.
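The effect of `werf.io/log-regex` can be sketched as a simple line filter. werf's actual implementation differs, but the matching idea is the same; the log lines below are made up for illustration:

```javascript
// A sketch of regex-based log filtering like `werf.io/log-regex: ".*ERROR.*"`.
const logRegex = /.*ERROR.*/;

const lines = [
  'INFO  starting server on :3000',
  'ERROR failed to connect to database',
  'DEBUG request id=42',
];

// Only lines matching the pattern would be shown during the deploy.
const shown = lines.filter((line) => logRegex.test(line));
console.log(shown); // [ 'ERROR failed to connect to database' ]
```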
Checking whether the application is running
Let’s deploy our application:
werf converge --repo <DOCKER HUB USERNAME>/werf-guide-app
You should see the following output:
┌ ⛵ image app
│ Use cache image for app/dockerfile
│ name: <DOCKER HUB USERNAME>/werf-guide-app:a0c674d1d0378b6c53b8694be1688c2b454e892105a6850e7d5b074c-1636644540936
│ id: 37e64591badb
│ created: 2021-11-11 18:29:00 +0000 UTC
│ size: 30.3 MiB
└ ⛵ image app (8.03 seconds)
┌ Waiting for release resources to become ready
│ ┌ Status progress
│ │ DEPLOYMENT REPLICAS AVAILABLE UP-TO-DATE
│ │ werf-guide-app 1/1 1 1
│ │ │ POD READY RESTARTS STATUS
│ │ ├── guide-app-6c6c4dbf6-wxr6k 1/1 0 Terminating
│ │ └── guide-app-d47fd67f9-hq2ds 1/1 0 Running
│ └ Status progress
└ Waiting for release resources to become ready (1.94 seconds)
Release "werf-guide-app" has been upgraded. Happy Helming!
NAME: werf-guide-app
LAST DEPLOYED: Thu Nov 11 18:30:47 2021
NAMESPACE: werf-guide-app
STATUS: deployed
REVISION: 3
TEST SUITE: None
Running time 15.69 seconds
Make several requests in order to generate some logging data:
curl http://werf-guide-app.test/ping # returns "pong" + 200 OK status code
curl http://werf-guide-app.test/not_found # no such page; returns 404 Not Found
The curl output above does not show the status codes returned by the server. However, we can find them in the logs — let’s take a look:
kubectl logs deploy/werf-guide-app
You should see the following output:
{"level":20,"time":1636647707471,"pid":1,"hostname":"werf-guide-app-d47fd67f9-qln6t","req":{"id":7,"method":"GET","url":"/ping","query":{},"params":{},"headers":{"host":"werf-guide-app.test","x-request-id":"a7bda9916010a7547682c14a7f1d3c93","x-real-ip":"192.168.64.1","x-forwarded-for":"192.168.64.1","x-forwarded-host":"werf-guide-app.test","x-forwarded-port":"80","x-forwarded-proto":"http","x-forwarded-scheme":"http","x-scheme":"http","user-agent":"curl/7.64.1","accept":"*/*"},"remoteAddress":"::ffff:172.17.0.2","remotePort":52936},"msg":"we are being pinged"}
{"level":30,"time":1636647707472,"pid":1,"hostname":"werf-guide-app-d47fd67f9-qln6t","req":{"id":7,"method":"GET","url":"/ping","query":{},"params":{},"headers":{"host":"werf-guide-app.test","x-request-id":"a7bda9916010a7547682c14a7f1d3c93","x-real-ip":"192.168.64.1","x-forwarded-for":"192.168.64.1","x-forwarded-host":"werf-guide-app.test","x-forwarded-port":"80","x-forwarded-proto":"http","x-forwarded-scheme":"http","x-scheme":"http","user-agent":"curl/7.64.1","accept":"*/*"},"remoteAddress":"::ffff:172.17.0.2","remotePort":52936},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"text/html; charset=utf-8","content-length":"5","etag":"W/\"5-dv2B8tXZGrCcINvgl7S82poObJs\""}},"responseTime":1,"msg":"request completed"}
{"level":30,"time":1636647710678,"pid":1,"hostname":"werf-guide-app-d47fd67f9-qln6t","req":{"id":8,"method":"GET","url":"/not_found","query":{},"params":{},"headers":{"host":"werf-guide-app.test","x-request-id":"abc9b62a1f9ba9a490dbfa6e82471bf8","x-real-ip":"192.168.64.1","x-forwarded-for":"192.168.64.1","x-forwarded-host":"werf-guide-app.test","x-forwarded-port":"80","x-forwarded-proto":"http","x-forwarded-scheme":"http","x-scheme":"http","user-agent":"curl/7.64.1","accept":"*/*"},"remoteAddress":"::ffff:172.17.0.2","remotePort":52958},"res":{"statusCode":404,"headers":{"x-powered-by":"Express","content-security-policy":"default-src 'self'","x-content-type-options":"nosniff","content-type":"text/html; charset=utf-8","content-length":148}},"responseTime":1,"msg":"request completed"}
Note that application logs are now rendered in JSON format, and most log processing systems can easily parse them. Any plain-text messages that end up between JSON entries will still be processed correctly instead of being confused with structured logs. JSON logs are stored in a structured way, letting you perform searching and filtering based on the selected fields.
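As a final illustration of the benefit, here is a small sketch (using simplified log records rather than the full output above) of the kind of field-based filtering that becomes trivial once logs are JSON:

```javascript
// With structured logs, filtering by a field is a simple query,
// e.g. "show all requests that returned a 4xx/5xx status code".
const rawLogs = [
  '{"req":{"url":"/ping"},"res":{"statusCode":200},"msg":"request completed"}',
  '{"req":{"url":"/not_found"},"res":{"statusCode":404},"msg":"request completed"}',
];

const failed = rawLogs
  .map((line) => JSON.parse(line))
  .filter((rec) => rec.res && rec.res.statusCode >= 400);

console.log(failed.map((rec) => rec.req.url)); // [ '/not_found' ]
```

Log analysis systems perform exactly this kind of query over indexed fields, which is impossible to do reliably over free-form text.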