In this chapter, you will learn how to work with static files and serve them to the client correctly.

The application described in this chapter is not intended for production use as-is. To end up with a production-ready application, you need to complete this entire guide.

Preparing the environment

If you haven’t prepared your environment during the previous steps, please do so using the instructions provided in the “Preparing the environment” chapter.

If your environment has stopped working or the instructions in this chapter don’t work, refer to the following hints:

Is Docker running?

Let’s launch Docker Desktop. It takes some time for this application to start Docker. If there are no errors during the startup process, check that Docker is running and is properly configured:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.


Start Docker:

sudo systemctl restart docker

Make sure that Docker is running:

sudo systemctl status docker

If Docker started successfully, you will see the following output:

● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2021-06-24 13:05:17 MSK; 13s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2013888 (dockerd)
      Tasks: 36
     Memory: 100.3M
     CGroup: /system.slice/docker.service
             └─2013888 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

dockerd[2013888]: time="2021-06-24T13:05:16.936197880+03:00" level=warning msg="Your kernel does not support CPU realtime scheduler"
dockerd[2013888]: time="2021-06-24T13:05:16.936219851+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
dockerd[2013888]: time="2021-06-24T13:05:16.936224976+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
dockerd[2013888]: time="2021-06-24T13:05:16.936311001+03:00" level=info msg="Loading containers: start."
dockerd[2013888]: time="2021-06-24T13:05:17.119938367+03:00" level=info msg="Loading containers: done."
dockerd[2013888]: time="2021-06-24T13:05:17.134054120+03:00" level=info msg="Daemon has completed initialization"
systemd[1]: Started Docker Application Container Engine.
dockerd[2013888]: time="2021-06-24T13:05:17.148493957+03:00" level=info msg="API listen on /run/docker.sock"

Now let’s check if Docker is available and its configuration is correct:

docker run hello-world

You will see the following output if the command completes successfully:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Should you have any problems, please refer to the Docker documentation.

Have you restarted the computer after setting up the environment?

Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:

minikube start

Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:

kubectl config set-context minikube --namespace=werf-guide-app

You will see the following output if the command completes successfully:

😄  minikube v1.20.0 on Ubuntu 20.04
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
    ▪ Using image registry:2.7.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying registry addon...
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default

Make sure that the command output contains the following line:

Restarting existing docker container for "minikube"

Its absence means that a new minikube cluster was created instead of using the old one. In this case, repeat all the steps required to install the environment using minikube.

Now run the following command in a background PowerShell terminal (do not close its window):

minikube tunnel --cleanup=true


Did you accidentally delete the application's Namespace?

If you have inadvertently deleted the application’s Namespace, run the following commands to proceed with the guide:

kubectl create namespace werf-guide-app
kubectl create secret docker-registry registrysecret \
  --docker-server='https://index.docker.io/v1/' \
  --docker-username='<Docker Hub username>' \
  --docker-password='<Docker Hub password>'

You will see the following output if the commands complete successfully:

namespace/werf-guide-app created
secret/registrysecret created

Nothing helps; the environment or instructions keep failing.

If nothing worked, repeat all the steps described in the “Preparing the environment” chapter and create a new environment from scratch. If creating an environment from scratch did not help either, please, tell us about your problem in our Telegram chat or create an issue on GitHub. We will be happy to help you!

Preparing the repository

Update the existing repository containing the application:

Run the following commands in PowerShell:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/golang/030_assets/* .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

cd ~/werf-guide/app

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/golang/030_assets/. .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Doesn’t work? Try the instructions on the “I am just starting from this chapter” tab above.

Prepare a new repository with the application:

Run the following commands in PowerShell:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
if (-not (Test-Path ~/werf-guide/guides)) {
  git clone https://github.com/werf/website $env:HOMEPATH/werf-guide/guides
}

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -Recurse -Force ~/werf-guide/app
cp -Recurse -Force ~/werf-guide/guides/examples/golang/020_logging ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -Recurse -Force ~/werf-guide/guides/examples/golang/030_assets/* .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show the files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Run the following commands in Bash:

# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
test -e ~/werf-guide/guides || git clone https://github.com/werf/website ~/werf-guide/guides

# Copy the (unchanged) application files to ~/werf-guide/app.
rm -rf ~/werf-guide/app
cp -rf ~/werf-guide/guides/examples/golang/020_logging ~/werf-guide/app

# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial

# To see what changes we will make later in this chapter, let's replace all the application files
# in the repository with new, modified files containing the changes described below.
git rm -r .
cp -rf ~/werf-guide/guides/examples/golang/030_assets/. .
git add .
git commit -m WIP

What changes we will make:

# Enter the command below to show files we are going to change.
git show --stat
# Enter the command below to show the changes that will be made.
git show

Adding an /image page to the application

Let’s add a new /image endpoint to our application that returns a page using a set of static files. The JS, CSS, and media files will be served as static files by the application itself.

The application now provides only a basic API with a single /ping endpoint and no support for any HTML templates or asset distribution.

To turn this API into a basic web application, we made several changes. First, we created directories with assets and templates:

├── static
│   ├── images
│   │   └── werf-logo.svg
│   ├── javascripts
│   │   └── image.js
│   └── stylesheets
│       └── site.css
├── templates
│   ├── image.html
│   └── index.html

Then we configured the application to serve assets from the created directories:

...
route.Static("/static/stylesheets", "static/stylesheets")
route.Static("/static/javascripts", "static/javascripts")
route.Static("/static/images", "static/images")

On the left are the URL paths under which the assets are requested; on the right are the corresponding paths in the application directory. We then configured loading of the templates from the templates directory:

...
route.LoadHTMLGlob("templates/*")

Next, we prepared controllers to handle requests to the /image and / routes that use the loaded templates:

package controllers

import (
	"github.com/gin-gonic/gin"
	"net/http"
)

func MainPage(context *gin.Context) {
	context.HTML(http.StatusOK, "index.html", gin.H{})
}

func ImagePage(context *gin.Context) {
	context.HTML(http.StatusOK, "image.html", gin.H{})
}

Finally, we added new routes, binding them to the created controllers:

...
route.GET("/", controllers.MainPage)
route.GET("/image", controllers.ImagePage)

The controllers access the template files, the locations of which we specified above:

<html>
  <head>
    <title>Werf Guide App</title>
    <link rel="stylesheet" href="static/stylesheets/site.css" />
  </head>

  <body>
    <h1>Werf Guide App</h1>
    <p>Welcome to Werf guide. Visit <a href="/image">the image page</a>.</p>
  </body>
</html>
<!DOCTYPE html>
<html>
  <head>
    <title>Werf Logo</title>
    <meta name="viewport" content="width=device-width,initial-scale=1" />
    <link rel="stylesheet" href="static/stylesheets/site.css" />
    <script src="static/javascripts/image.js"></script>
  </head>

  <body>
    <div id="container">
      <!-- The "Get image" button; it will generate an Ajax request when clicked. -->
      <button type="button" id="show-image-btn">Get image</button>
    </div>
  </body>
</html>

Clicking the Get image button triggers an Ajax request that retrieves and renders the SVG image:

window.onload=function(){
  var btn = document.getElementById("show-image-btn")
  btn.addEventListener("click", _ => {
      fetch("/static/images/werf-logo.svg")
        .then((data) => data.text())
        .then((html) => {
          const svgContainer = document.getElementById("container")
          svgContainer.insertAdjacentHTML("beforeend", html)
          var svg = svgContainer.getElementsByTagName("svg")[0]
          svg.setAttribute("id", "image")
          btn.remove()
        }
      )
    }
  )
}

On top of that, the page will use the CSS file static/stylesheets/site.css.

The application now has a new endpoint called /image in addition to the /ping endpoint we made in previous chapters. This new endpoint displays a page that uses different types of static files.

The commands provided at the beginning of the chapter let you view the complete list of changes made to the application in this chapter.

Serving static files

The static files are currently served by the Go application itself, using its built-in tools. A better approach is to use a reverse proxy such as NGINX, which serves static files far more efficiently than the application’s built-in tools.

There are several ways to use a reverse proxy with a Go application in Kubernetes. We will follow a common and simple one that scales well. In it, an NGINX container is run together with the application container (in the same Pod). It proxies all requests to the application, except for static file requests. The static files are then served by the NGINX container itself.

Now, let’s get to the implementation.

Making changes to the build and deploy process

First of all, we have to make changes to the application build process. Let’s make the following changes to the Dockerfile:

  • enable copying templates and assets into the image (they should be located next to the application’s executable);
  • build an NGINX image to store static application files (in addition to the application image); NGINX will serve these static files to the clients directly.
# Here, a multi-stage build is used to assemble the image
# The image to build the project
FROM golang:1.18-alpine AS build
# Copy the source code of the application.
COPY . /app
WORKDIR /app
# Download the required packages.
RUN go mod download
# Build the application.
RUN go build -o /goapp cmd/main.go

# The image to deploy to the cluster.
FROM alpine:latest as backend
WORKDIR /
# Copy the project executable from the build image.
COPY --from=build /goapp /goapp
# Copy the asset files and templates.
COPY ./templates /templates
COPY ./static /static
EXPOSE 8080
ENTRYPOINT ["/goapp"]

# The NGINX image that contains the static files.
FROM nginx:stable-alpine as frontend
WORKDIR /www
# Copy static files.
COPY static /www/static/
# Copy NGINX configuration.
COPY .werf/nginx.conf /etc/nginx/nginx.conf

At build time, the NGINX configuration file will be added to the NGINX image:

user nginx;
worker_processes 1;
pid /run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  upstream backend {
    server 127.0.0.1:8080 fail_timeout=0;
  }

  server {
    listen 80;
    server_name _;

    root /www;

    client_max_body_size 100M;
    keepalive_timeout 10s;

    # For the /static path, serve the assets directly from the NGINX container file system.
    location /static {

      expires 1y;
      add_header Cache-Control public;
      add_header Last-Modified "";
      add_header ETag "";

      # If possible, serve pre-compressed files (instead of compressing them on the fly).
      gzip_static on;

      access_log off;

      try_files $uri =404;
    }

    # Serve media assets (images, etc.) directly from the NGINX container file system, 
    # but turn off gzip compression.
    location /static/images {
      expires 1y;
      add_header Cache-Control public;
      add_header Last-Modified "";
      add_header ETag "";

      access_log off;

      try_files $uri =404;
    }

    # All requests, except for asset requests, are sent to the Go backend.
    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      proxy_pass http://backend;
    }
  }
}

Now we need to update the werf.yaml configuration so that werf will build and save two images (backend, frontend) instead of one:

project: werf-guide-app
configVersion: 1

---
image: backend
dockerfile: Dockerfile
target: backend

---
image: frontend
dockerfile: Dockerfile
target: frontend

Let’s add a new NGINX container to the application’s Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: werf-guide-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: werf-guide-app
  template:
    metadata:
      labels:
        app: werf-guide-app
    spec:
      imagePullSecrets:
      - name: registrysecret
      containers:
      - name: backend
        image: {{ .Values.werf.image.backend }}
        ports:
        - containerPort: 8080
        env:
          - name: GIN_MODE
            value: "release"
      - name: frontend
        image: {{ .Values.werf.image.frontend }}
        ports:
        - containerPort: 80

From now on, the Service and Ingress must connect to port 80 instead of 8080 so that all requests are routed through NGINX instead of going directly to our application:

apiVersion: v1
kind: Service
metadata:
  name: werf-guide-app
spec:
  selector:
    app: werf-guide-app
  ports:
  - name: http
    port: 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: werf-guide-app
spec:
  rules:
  - host: werf-guide-app.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: werf-guide-app
            port:
              number: 80

Checking that everything works as expected

Now we have to re-deploy our application:

werf converge --repo <DOCKER HUB USERNAME>/werf-guide-app

You should see the following output:

│ ├ Info
│ │      name: <DOCKER HUB USERNAME>/werf-guide-app:328a492a57c7aa12e8af2ead55341f51f696961c2864d83545c6518d-1687253596426
│ │        id: f05a34a1cdd2
│ │   created: 2023-06-20 12:33:15 +0300 MSK
│ │      size: 16.0 MiB
│ └ Building stage frontend/dockerfile (97.50 seconds)
└ ⛵ image frontend (99.86 seconds)

┌ Build summary
│ Set #0:
│ - ⛵ image backend (85.30 seconds)
│ - ⛵ image frontend (99.86 seconds)
└ Build summary

┌ Waiting for resources to become ready
│ ┌ Status progress
│ │ DEPLOYMENT                                                                                                                            REPLICAS                 AVAILABLE                  UP-TO-DATE                                    
│ │ werf-guide-app                                                                                                                        2->1/1                   1                          1                                             
│ │ │   POD                                                 READY              RESTARTS                  STATUS                                                                                                                             
│ │ ├── guide-app-5bfbc4cdd7-lgm48                          2/2                0                         ContainerCreating->Running       
│ │ └── guide-app-5d54fdd88-vnm8s                           1/1                1                         Running->Terminating             
│ └ Status progress
└ Waiting for resources to become ready (17.26 seconds)

Release "werf-guide-app" has been upgraded. Happy Helming!
NAME: werf-guide-app
LAST DEPLOYED: Tue Jun 20 12:33:40 2023
LAST PHASE: rollout
LAST STAGE: 0
NAMESPACE: werf-guide-app
STATUS: deployed
REVISION: 5
TEST SUITE: None
Running time 127.71 seconds

Go to http://werf-guide-app.test/image in your browser and click the Get image button. The werf logo should appear on the page.

In the browser’s developer tools, note which resources were requested and which links were used (the last resource was retrieved via the Ajax request).

Now our application not only provides an API but also has the means to manage static and JavaScript files effectively.

Furthermore, our application can handle a high load, since many requests for static files will not affect the operation of the application as a whole. Note that you can quickly scale the Go application (which serves dynamic content) and NGINX (which serves static content) by increasing the number of replicas in the application’s Deployment.
