In this chapter, we will deploy a database, configure our application to use it, and set up automatic DB migrations and initializations.

The application described in this chapter is not intended for use in production environments as-is. To get a production-ready application, you need to complete this guide in full.

Preparing the environment

If you haven’t prepared your environment during the previous steps, please do so using the instructions provided in the “Preparing the environment” chapter.

If your environment has stopped working, or the instructions in this chapter don’t work, refer to the hints below:

Is Docker running?

If you are using Docker Desktop, launch it. It takes some time for the application to start Docker. If there are no errors during the startup process, check that Docker is running and is properly configured:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.

On Linux, start Docker:

sudo systemctl restart docker

Make sure that Docker is running:

sudo systemctl status docker

If the Docker start is successful, you will see the following output:

● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2021-06-24 13:05:17 MSK; 13s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2013888 (dockerd)
      Tasks: 36
     Memory: 100.3M
     CGroup: /system.slice/docker.service
             └─2013888 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

dockerd[2013888]: time="2021-06-24T13:05:16.936197880+03:00" level=warning msg="Your kernel does not support CPU realtime scheduler"
dockerd[2013888]: time="2021-06-24T13:05:16.936219851+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
dockerd[2013888]: time="2021-06-24T13:05:16.936224976+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
dockerd[2013888]: time="2021-06-24T13:05:16.936311001+03:00" level=info msg="Loading containers: start."
dockerd[2013888]: time="2021-06-24T13:05:17.119938367+03:00" level=info msg="Loading containers: done."
dockerd[2013888]: time="2021-06-24T13:05:17.134054120+03:00" level=info msg="Daemon has completed initialization"
systemd[1]: Started Docker Application Container Engine.
dockerd[2013888]: time="2021-06-24T13:05:17.148493957+03:00" level=info msg="API listen on /run/docker.sock"

Now let’s check if Docker is available and its configuration is correct:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.

Have you restarted the computer after setting up the environment?

Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:

minikube start

Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:

kubectl config set-context minikube --namespace=werf-guide-app

If the minikube start command completes successfully, you will see output similar to the following:

😄  minikube v1.20.0 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image registry:2.7.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying registry addon...
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default

Make sure that the command output contains the following line:

Restarting existing docker container for "minikube"

Its absence means that a new minikube cluster was created instead of reusing the old one. In this case, repeat all the steps required to install the environment from scratch using minikube.

If you are on Windows, also run the following command in a background PowerShell terminal (do not close its window):

minikube tunnel --cleanup=true

Did you accidentally delete the application's Namespace?

If you have inadvertently deleted the Namespace of the application, run the following commands to proceed with the guide:

kubectl create namespace werf-guide-app
kubectl create secret docker-registry registrysecret \
  --docker-server='https://index.docker.io/v1/' \
  --docker-username='<Docker Hub username>' \
  --docker-password='<Docker Hub password>'

You will see the following output if the commands complete successfully:

namespace/werf-guide-app created
secret/registrysecret created

Nothing helps; the environment or instructions keep failing.

If nothing worked, repeat all the steps described in the “Preparing the environment” chapter and create a new environment from scratch. If creating an environment from scratch did not help either, please tell us about your problem in our Telegram chat or create an issue on GitHub. We will be happy to help you!

Preparing the repository

Update the existing repository containing the application:

Run the following commands in PowerShell:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/golang/040_db/* .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/golang/040_db/. .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Doesn’t work? Try the instructions on the “I am just starting from this chapter” tab above.

Prepare a new repository with the application:

Run the following commands in PowerShell:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
if (-not (Test-Path ~/werf-guide/guides)) {
  git clone https://github.com/werf/website $env:HOMEPATH/werf-guide/guides
}

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -Recurse -Force ~/werf-guide/app
cp -Recurse -Force ~/werf-guide/guides/examples/golang/030_assets ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/golang/040_db/* .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
test -e ~/werf-guide/guides || git clone https://github.com/werf/website ~/werf-guide/guides

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -rf ~/werf-guide/app
cp -rf ~/werf-guide/guides/examples/golang/030_assets ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/golang/040_db/. .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Making our application stateful

At this point, our application does not use a database and does not store any data (i.e., it is stateless). To make it stateful, we need to prepare the application to work with a MySQL database, which we will use to store the state.

The following changes have been made to our application:

  1. Dependencies for working with the MySQL database are installed.
  2. Access to the database is configured via environment variables.
  3. Routes to work with the database are added.

Adding /remember and /say endpoints to the application

Let’s add two new endpoints to our application: the /remember endpoint will store data in the database, while the /say endpoint will retrieve it.

First, let’s create an auxiliary service that retrieves the database connection parameters from the environment variables and returns them:

package services

import "os"

func GetDBCredentials() (string, string) {
	dbType := os.Getenv("DB_TYPE")
	dbName := os.Getenv("DB_NAME")
	dbUser := os.Getenv("DB_USER")
	dbPasswd := os.Getenv("DB_PASSWD")
	dbHost := os.Getenv("DB_HOST")
	dbPort := os.Getenv("DB_PORT")
	return dbType, dbUser + ":" + dbPasswd + "@tcp(" + dbHost + ":" + dbPort + ")/" + dbName
}

The resulting strings with the database type and the address to connect to will be used to initialize the connection.

Let’s create two controllers in charge of the new endpoints:

package controllers

import (
	"database/sql"
	"github.com/gin-gonic/gin"
	"net/http"
	"werf_guide_app/internal/services"

	_ "github.com/go-sql-driver/mysql"
)

func RememberController(c *gin.Context) {
	dbType, dbPath := services.GetDBCredentials()

	db, err := sql.Open(dbType, dbPath)
	if err != nil {
		panic(err)
	}

	answer := c.Query("answer")
	name := c.Query("name")
	_, err = db.Exec("INSERT INTO talkers (answer, name) VALUES (?, ?)",
		answer, name)
	if err != nil {
		panic(err)
	}

	c.String(http.StatusOK, "Got it.\n")

	defer db.Close()
}

func SayController(c *gin.Context) {
	dbType, dbPath := services.GetDBCredentials()

	db, err := sql.Open(dbType, dbPath)
	if err != nil {
		panic(err)
	}

	result, err := db.Query("SELECT * FROM talkers")
	if err != nil {
		panic(err)
	}

	count := 0
	for result.Next() {
		count++
		var id int
		var answer string
		var name string

		err = result.Scan(&id, &answer, &name)
		if err != nil {
			panic(err)
		}

		c.String(http.StatusOK, answer+", "+name+"!\n")
		break
	}
	if count == 0 {
		c.String(http.StatusOK, "I have nothing to say.\n")
	}
}

Now you need to add new routes and connect the created controllers to them:

...
route.GET("/remember", controllers.RememberController)
route.GET("/say", controllers.SayController)

New endpoints, /remember and /say, are ready.

Deploying a MySQL database and connecting to it

In real life, a database can be a part of the Kubernetes infrastructure or run outside of it. Outside of Kubernetes, you can deploy and maintain a database yourself or use a managed solution like Amazon RDS. For illustrative purposes, let’s deploy a MySQL database inside the Kubernetes cluster using the following basic StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.4.0
        args: ["--mysql-native-password=ON"]
        ports:
        - containerPort: 3306
        env:
          - name: MYSQL_DATABASE
            value: werf-guide-app
          - name: MYSQL_ROOT_PASSWORD
            value: password
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: "100Mi"

---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306

Note that you can also use a database deployed differently. In this case, you will not need the above StatefulSet, while all further steps remain unchanged.

Now let’s configure our application to use the new database:

...
- name: DB_TYPE
  value: "mysql"
- name: DB_NAME
  value: "werf-guide-app"
- name: DB_USER
  value: "root"
- name: DB_PASSWD
  value: "password"
- name: DB_HOST
  value: "mysql"
- name: DB_PORT
  value: "3306"

All the parameters are set via environment variables.
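
Since all the parameters are plain environment variables, switching to a database deployed differently (e.g., a managed service) only means changing their values; the endpoint below is a hypothetical placeholder:

```yaml
# Hypothetical values for an externally managed MySQL instance;
# substitute the real endpoint and credentials of your database.
- name: DB_HOST
  value: "mydb.example.com"
- name: DB_PORT
  value: "3306"
```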

Great, the database and the application are ready to be deployed.

Initializing and migrating the database

There are several ways to initialize and migrate a database when deploying applications to Kubernetes. We will use a straightforward yet efficient method: database migrations (and database initialization, if necessary) are carried out by a separate Job deployed simultaneously with the application and the database itself. To ensure that resources start in the proper order, we:

  1. Require that the initialization/migration Job wait for the database to become accessible before running.
  2. Require that the application wait for the database to become accessible and for the migrations to complete before running.

Thus, all K8s resources will be created simultaneously during the deployment, yet they will start in the following order:

  1. The database will start.
  2. The Job to initialize/migrate the database will run.
  3. The applications will start.

We will use the migrate tool to migrate the database. It lets you work with migrations either from the command line (CLI) or straight from Go code. We will stick with the first option.

To generate the migration files, run the following command in the project’s root:

migrate create -ext sql -dir db/migrations -seq create_talkers_table

The following two files will be created in the db/migrations directory:

db
└── migrations
    ├── 000001_create_talkers_table.down.sql
    └── 000001_create_talkers_table.up.sql

The file with the up suffix contains migrations to initialize the database:

CREATE TABLE talkers (
    id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    answer TEXT NOT NULL,
    name TEXT NOT NULL
);

The file with the down suffix contains instructions for clearing the database:

DROP TABLE IF EXISTS talkers;

Let’s modify the Dockerfile by adding the migrate tool to the final backend image:

# Here, a multi-stage build is used to assemble the image.
# The image to build the project.
FROM golang:1.18-alpine AS build
# Install curl and tar.
RUN apk add curl tar
# Copy the source code of the application.
COPY . /app
WORKDIR /app
# Download the migrate tool and unpack the archive.
RUN curl -L https://github.com/golang-migrate/migrate/releases/download/v4.16.2/migrate.linux-amd64.tar.gz | tar xvz
# Download the required Go modules.
RUN go mod download
# Build the application.
RUN go build -o /goapp cmd/main.go

# The image to deploy to the cluster.
FROM alpine:latest as backend
WORKDIR /
# Copy the project executable from the build image.
COPY --from=build /goapp /goapp
# Copy the unpacked migrate binary and the migration schemes from the build image.
COPY --from=build /app/migrate /migrations/migrate
COPY db/migrations /migrations/schemes
# Copy the asset files and templates.
COPY ./templates /templates
COPY ./static /static
EXPOSE 8080
ENTRYPOINT ["/goapp"]

# The NGINX image that contains the static files.
FROM nginx:stable-alpine as frontend
WORKDIR /www
# Copy static files.
COPY static /www/static/
# Copy NGINX configuration.
COPY .werf/nginx.conf /etc/nginx/nginx.conf

Next, let’s add a Job that performs the database initialization/migrations:

apiVersion: batch/v1
kind: Job
metadata:
  # The release revision in the Job name will cause the Job to be recreated every time.
  # This way, we can get around the fact that the Job is immutable.
  name: "setup-and-migrate-db-rev{{ .Release.Revision }}"
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: waiting-mysql
        image: alpine:3.6
        command: [ '/bin/sh', '-c', 'while ! nc -z mysql 3306; do sleep 1; done' ]
      containers:
      - name: setup-and-migrate-db
        image: {{ .Values.werf.image.backend }}
        command: ["/migrations/migrate",  "-database", "mysql://root:password@tcp(mysql:3306)/werf-guide-app", "-path", "/migrations/schemes", "up"]

Two containers are involved here:

  • The initContainer will start first and will keep pinging the MySQL server until it gets a response.
  • Then the migrate tool running in the main container (the Dockerfile’s backend image) will migrate the database.

Testing the application/database

Let’s deploy the application:

werf converge --repo <DOCKER HUB USERNAME>/werf-guide-app

You should see the following output:

┌ Concurrent build plan (no more than 5 images at the same time)
│ Set #0:
│ - ⛵ image backend
│ - ⛵ image frontend
└ Concurrent build plan (no more than 5 images at the same time)

┌ ⛵ image backend
│ Use previously built image for backend/dockerfile
│      name: <DOCKER HUB USERNAME>/werf-guide-app:4dedde8307a6aa4cecc00f99a44d19a7e716484677bf321c2ba68feb-1687373921162
│        id: e6ea855b7d02
│   created: 2023-06-21 21:58:40 +0300 MSK
│      size: 25.5 MiB
└ ⛵ image backend (2.05 seconds)

┌ ⛵ image frontend
│ Use previously built image for frontend/dockerfile
│      name: <DOCKER HUB USERNAME>/werf-guide-app:328a492a57c7aa12e8af2ead55341f51f696961c2864d83545c6518d-1687253596426
│        id: f05a34a1cdd2
│   created: 2023-06-20 12:33:15 +0300 MSK
│      size: 16.0 MiB
└ ⛵ image frontend (2.08 seconds)

┌ Build summary
│ Set #0:
│ - ⛵ image backend (2.05 seconds)
│ - ⛵ image frontend (2.08 seconds)
└ Build summary

┌ Waiting for resources to become ready
│ ┌ job/setup-and-migrate-db-rev17 po/setup-and-migrate-db-rev17-62f6d container/setup-and-migrate-db logs
│ │ 1/u create_talkers_table (129.700116ms)
│ └ job/setup-and-migrate-db-rev17 po/setup-and-migrate-db-rev17-62f6d container/setup-and-migrate-db logs
│ 
│ ┌ Status progress
│ │ DEPLOYMENT                                                                                    REPLICAS         AVAILABLE          UP-TO-DATE                       
│ │ werf-guide-app                                                                                1/1              1                  1                                
│ │ STATEFULSET                                                                                   REPLICAS         READY              UP-TO-DATE                       
│ │ mysql                                                                                         1/1              1                  1                                
│ │ JOB                                                                                           ACTIVE           DURATION           SUCCEEDED/FAILED                 
│ │ setup-and-migrate-db-rev17                                                                    0                5s                 1/0                              
│ │ │   POD                                 READY        RESTARTS          STATUS                                                                                      
│ │ └── and-migrate-db-rev17-62f6d          0/1          0                 Completed              
│ │ RESOURCE                                                              NAMESPACE             CONDITION: CURRENT (DESIRED)                                           
│ │ Service/mysql                                                         werf-guide-app        -                                                                      
│ │ Service/werf-guide-app                                                werf-guide-app        -                                                                      
│ │ Ingress/werf-guide-app                                                werf-guide-app        -                                                                 ↵
│ │      
│ └ Status progress
└ Waiting for resources to become ready (4.98 seconds)

┌ Waiting for resources elimination: jobs/setup-and-migrate-db-rev11, jobs/setup-and-migrate-db-rev12, jobs/setup-and-migrate-db-rev16, jobs/setup-and-migrate-db-re ...
└ Waiting for resources elimination: jobs/setup-and-migrate-db-rev11, jobs/setup-and-migrate-db-rev12, jobs/setup-and-migrate-db-rev16, jobs/setup-an ... (0.30 seconds)

Release "werf-guide-app" has been upgraded. Happy Helming!
NAME: werf-guide-app
LAST DEPLOYED: Wed Jun 21 21:59:06 2023
LAST PHASE: rollout
LAST STAGE: 0
NAMESPACE: werf-guide-app
STATUS: deployed
REVISION: 17
TEST SUITE: None
Running time 9.43 seconds

Don’t worry if the process seems to be stuck at this point and many errors appear in the messages. This happens while the MySQL status is being checked; just wait for it to finish (usually, it takes no more than 1–2 minutes).

Now let’s try to access the /say endpoint that retrieves the data from the database:

curl http://werf-guide-app.test/say

Since the database is still empty, it should return the following message:

I have nothing to say.

Let’s save some data to the database using /remember:

curl "http://werf-guide-app.test/remember?answer=Love+you&name=sweetie"

The application should respond with the following:

Got it.

Let’s try to retrieve the data from the database using the /say endpoint once again:

curl http://werf-guide-app.test/say

If successful, you will see the following output:

Love you, sweetie!

You can also make sure that the data is in the database by directly querying the table contents:

kubectl exec -it statefulset/mysql -- mysql -ppassword -e "SELECT * from talkers" werf-guide-app

You should see the following output:

+----+----------+---------+
| id | answer   | name    |
+----+----------+---------+
|  1 | Love you | sweetie |
+----+----------+---------+

Done!

In this chapter, we turned our application into a stateful one by connecting it to a database. We deployed the database to the Kubernetes cluster, initialized it, and performed the necessary DB migrations. Note that the above approach should work well with any relational database.

As usual, you can see all the changes made in this chapter by running the commands provided at the beginning.
