In this chapter, we will discuss the basic Kubernetes resources for deploying applications and making them accessible from inside and outside of the cluster.
Preparing the environment
If you haven’t prepared your environment during the previous steps, please do so by following the instructions in the “Preparing the environment” chapter.
If your environment has stopped working or the instructions in this chapter don’t work, refer to these hints:
Let’s launch Docker Desktop. It takes some time for this application to start Docker. If there are no errors during the startup process, check that Docker is running and is properly configured:
docker run hello-world
You will see the following output if the command completes successfully:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Should you have any problems, please refer to the Docker documentation.
Let’s launch the Docker Desktop application. It takes some time for the application to start Docker. If there are no errors during the startup process, then check that Docker is running and is properly configured:
docker run hello-world
You will see the following output if the command completes successfully:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Should you have any problems, please refer to the Docker documentation.
Start Docker:
sudo systemctl restart docker
Make sure that Docker is running:
sudo systemctl status docker
If Docker has started successfully, you will see output like the following:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-06-24 13:05:17 MSK; 13s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 2013888 (dockerd)
Tasks: 36
Memory: 100.3M
CGroup: /system.slice/docker.service
└─2013888 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
dockerd[2013888]: time="2021-06-24T13:05:16.936197880+03:00" level=warning msg="Your kernel does not support CPU realtime scheduler"
dockerd[2013888]: time="2021-06-24T13:05:16.936219851+03:00" level=warning msg="Your kernel does not support cgroup blkio weight"
dockerd[2013888]: time="2021-06-24T13:05:16.936224976+03:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
dockerd[2013888]: time="2021-06-24T13:05:16.936311001+03:00" level=info msg="Loading containers: start."
dockerd[2013888]: time="2021-06-24T13:05:17.119938367+03:00" level=info msg="Loading containers: done."
dockerd[2013888]: time="2021-06-24T13:05:17.134054120+03:00" level=info msg="Daemon has completed initialization"
systemd[1]: Started Docker Application Container Engine.
dockerd[2013888]: time="2021-06-24T13:05:17.148493957+03:00" level=info msg="API listen on /run/docker.sock"
Now let’s check if Docker is available and its configuration is correct:
docker run hello-world
You will see the following output if the command completes successfully:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Should you have any problems, please refer to the Docker documentation.
Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:
minikube start
Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:
kubectl config set-context minikube --namespace=werf-guide-app
You will see the following output if the command completes successfully:
😄 minikube v1.20.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🎉 minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image registry:2.7.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying registry addon...
🔎 Verifying ingress addon...
🌟 Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄 Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default
Make sure that the command output contains the following line:
Restarting existing docker container for "minikube"
Its absence means that a new minikube cluster was created instead of reusing the existing one. In this case, repeat all the steps required to set up the environment using minikube.
Now run the following command in a separate PowerShell terminal and keep it running in the background (do not close its window):
minikube tunnel --cleanup=true
Let’s start the minikube cluster we have already configured in the “Preparing the environment” chapter:
minikube start --namespace werf-guide-app
Set the default Namespace so that you don’t have to specify it every time you invoke kubectl:
kubectl config set-context minikube --namespace=werf-guide-app
You will see the following output if the command completes successfully:
😄 minikube v1.20.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🎉 minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image registry:2.7.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying registry addon...
🔎 Verifying ingress addon...
🌟 Enabled addons: storage-provisioner, registry, default-storageclass, ingress
🏄 Done! kubectl is now configured to use "minikube" cluster and "werf-guide-app" namespace by default
Make sure that the command output contains the following line:
Restarting existing docker container for "minikube"
Its absence means that a new minikube cluster was created instead of using the old one. In this case, repeat all the steps required to install the environment from scratch using minikube.
If you have inadvertently deleted the application’s Namespace, you must run the following commands to proceed with the guide:
kubectl create namespace werf-guide-app
kubectl create secret docker-registry registrysecret \
--docker-server='https://index.docker.io/v1/' \
--docker-username='<Docker Hub username>' \
--docker-password='<Docker Hub password>'
You will see the following output if the commands complete successfully:
namespace/werf-guide-app created
secret/registrysecret created
If nothing worked, repeat all the steps described in the “Preparing the environment” chapter and create a new environment from scratch. If that did not help either, please tell us about your problem in our Telegram chat or create an issue on GitHub. We will be happy to help you!
Preparing the repository
Switch to the existing repository (run in Bash/PowerShell):
cd ~/werf-guide/app
Problems with the repository? Try the instructions on the “I am just starting from this chapter” tab above.
Prepare a new repository with the application:
Run the following commands in PowerShell:
# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
if (-not (Test-Path ~/werf-guide/guides)) {
git clone https://github.com/werf/website $env:HOMEPATH/werf-guide/guides
}
# Copy the (unchanged) application files to ~/werf-guide/app.
rm -Recurse -Force ~/werf-guide/app
cp -Recurse -Force ~/werf-guide/guides/examples/basic/005_kubernetes_basics ~/werf-guide/app
# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial
Run the following commands in Bash:
# Clone the example repository to ~/werf-guide/guides (if you have not cloned it yet).
test -e ~/werf-guide/guides || git clone https://github.com/werf/website ~/werf-guide/guides
# Copy the (unchanged) application files to ~/werf-guide/app.
rm -rf ~/werf-guide/app
cp -rf ~/werf-guide/guides/examples/basic/005_kubernetes_basics ~/werf-guide/app
# Make the ~/werf-guide/app directory a git repository.
cd ~/werf-guide/app
git init
git add .
git commit -m initial
Templates, manifests, and resources
werf uses YAML manifests that describe Kubernetes resources. These manifests are generated from the Helm templates stored in .helm/templates and .helm/charts.
The Helm template that describes a Pod (one of the Kubernetes resources) is stored in .helm/templates/pod.yaml and has the following contents:
apiVersion: v1
kind: Pod
metadata:
name: standalone-pod
spec:
containers:
- name: main
image: {{ $.Values.image }} # Helm-templating to parameterize the image name for the container
command: ["tail", "-f", "/dev/null"]
Before deployment, werf transforms this Helm template into the following manifest:
apiVersion: v1
kind: Pod
metadata:
name: standalone-pod
spec:
containers:
- name: main
image: alpine # if the user has specified "alpine" as the image name
command: ["tail", "-f", "/dev/null"]
During deployment, Kubernetes creates a Pod resource in the cluster based on this manifest. You can use the following command to view this resource in the cluster:
kubectl get pod standalone-pod --output yaml
This is what its output looks like:
apiVersion: v1
kind: Pod
metadata:
name: standalone-pod
namespace: default
spec:
containers:
- name: main
image: alpine
command: ["tail", "-f", "/dev/null"]
# ...
status:
phase: Running
podIP: 172.17.0.7
startTime: "2021-06-02T13:17:47Z"
Running applications
The Pod resource is the simplest way to run one or more containers in Kubernetes. The Pod manifest describes the containers and their parameters. However, in practice, you do not deploy Pods themselves. Instead, you delegate this task to Controllers that create and manage Pods for you. The Deployment is one of these controllers. By creating Pods using Deployment, you greatly simplify their management.
Below are some features that Deployments provide (and which the Pods themselves do not have):
- The Deployment restarts the Pod if it gets deleted (manually or automatically);
- In most cases, you cannot update the configuration of a running Pod on the fly; to change it, you have to recreate the Pod. With a Deployment, you simply edit the Pod template, and the Deployment rolls the change out to its Pods for you;
- Updating the Pod configuration involves no downtime: Pods with the old configuration keep running until the Pods with the new configuration are up and healthy;
- A single Deployment can run several Pods simultaneously (including on different Nodes).
Different controllers have different capabilities related to creating and managing Pods. Below is the list of standard controllers and their typical usage:
- Deployment is used for deploying stateless applications;
- StatefulSet is used for deploying stateful applications;
- DaemonSet is used for deploying applications that run as exactly one instance per node (e.g., logging and monitoring agents);
- Job is used for running one-time tasks in Pods (e.g., database migration);
- CronJob is used for running repeated tasks in Pods on a schedule (e.g., to perform regular cleaning of some resource).
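As an illustration, a scheduled task might be described with a manifest like the one below. This is only a sketch: the resource name, schedule, and command are hypothetical and not part of this guide’s application (note also that on Kubernetes versions before 1.21, such as the minikube cluster in this guide, the apiVersion is batch/v1beta1 rather than batch/v1):

```yaml
apiVersion: batch/v1               # batch/v1beta1 on Kubernetes < 1.21
kind: CronJob
metadata:
  name: cleanup                    # hypothetical name
spec:
  schedule: "0 3 * * *"            # run every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never     # do not restart the container when the task finishes
          containers:
          - name: cleanup
            image: alpine
            command: ["sh", "-c", "echo cleaning up"]  # placeholder task
```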
You can get a list of resources of a specific type in the cluster using the kubectl get command (in the examples below, the commands return all Pods, Deployments, StatefulSets, Jobs, and CronJobs from all namespaces):
kubectl get --all-namespaces pod
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-8bgk7 0/1 Completed 0 11d
ingress-nginx ingress-nginx-admission-patch-8fkgl 0/1 Completed 1 11d
ingress-nginx ingress-nginx-controller-5d88495688-6lgx9 1/1 Running 1 11d
kube-system coredns-74ff55c5b-hgzzx 1/1 Running 1 13d
kube-system etcd-minikube 1/1 Running 1 13d
kube-system kube-apiserver-minikube 1/1 Running 1 13d
kube-system kube-controller-manager-minikube 1/1 Running 1 13d
kube-system kube-proxy-gtrcq 1/1 Running 1 13d
kube-system kube-scheduler-minikube 1/1 Running 1 13d
kube-system storage-provisioner 1/1 Running 2 13d
kubectl get --all-namespaces deployment
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx ingress-nginx-controller 1/1 1 1 11d
kube-system coredns 1/1 1 1 13d
kubectl get --all-namespaces statefulset
kubectl get --all-namespaces job
kubectl get --all-namespaces cronjob
Also, you can view the configuration of any resource in YAML format — just add the --output yaml parameter to the kubectl get command:
kubectl -n ingress-nginx get deployment ingress-nginx-controller --output yaml
You will see something like this:
# ...
kind: Deployment
metadata:
name: ingress-nginx-controller
# ...
The Deployment type is the most commonly used one, so let’s look at it in more detail. You can learn more about the other controller types in the Kubernetes documentation.
Deployment
Let’s look at the Deployment of our application:
apiVersion: apps/v1
kind: Deployment
metadata:
name: werf-guide-app
spec:
replicas: 1
selector:
matchLabels:
app: werf-guide-app
template:
metadata:
labels:
app: werf-guide-app
spec:
imagePullSecrets:
- name: registrysecret
containers:
- name: app
image: {{ .Values.werf.image.app }}
command: ["/app/start.sh"]
ports:
- containerPort: 8000
A more detailed description of Deployment is available in the official documentation.
Let’s deploy our application:
werf converge --repo <DOCKER HUB USERNAME>/werf-guide-app
Let’s check if our Deployment has been created:
kubectl get deployment werf-guide-app
You will see something like this:
NAME READY UP-TO-DATE AVAILABLE AGE
werf-guide-app 1/1 1 1 25s
Now let’s look at the Pod created by the Deployment:
kubectl get pod -l app=werf-guide-app
You will see something like this:
NAME READY STATUS RESTARTS AGE
werf-guide-app-8b748b85d-829j9 1/1 Running 0 25h
Service and Ingress
You can deploy your stateless application using the Deployment we created. However, for users and other applications to be able to connect to it, you have to configure two more types of resources: a Service and an Ingress.
Let’s look at our Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: werf-guide-app
spec:
rules:
- host: werf-guide-app.test
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: werf-guide-app
port:
number: 8000
… as well as our Service resource:
apiVersion: v1
kind: Service
metadata:
name: werf-guide-app
spec:
selector:
app: werf-guide-app
ports:
- name: http
port: 8000
Make sure that the Service resource has been created:
kubectl get service werf-guide-app
You will see something like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
werf-guide-app ClusterIP 10.107.19.126 <none> 8000/TCP 35s
Now let’s check that the Ingress resource has been created:
kubectl get ingress werf-guide-app
You will see something like this:
NAME CLASS HOSTS ADDRESS PORTS AGE
werf-guide-app <none> werf-guide-app.test 80 3m21s
Without getting too technical, these two resources route HTTP requests carrying the Host: werf-guide-app.test header from the NGINX Ingress Controller to port 8000 of the werf-guide-app Service, and from there to port 8000 of one of the Pods belonging to our Deployment. Note that by default, the Service distributes requests evenly between the Deployment’s Pods.
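For clarity, the Service-to-Pod part of this routing could be spelled out with an explicit targetPort. This is a sketch, not a change the guide requires — the guide’s Service omits targetPort because it defaults to the value of port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: werf-guide-app
spec:
  selector:
    app: werf-guide-app       # matches the labels of the Deployment's Pods
  ports:
  - name: http
    port: 8000                # port exposed by the Service
    targetPort: 8000          # containerPort of the Pods; defaults to "port" if omitted
```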
The general interaction between different resources within a cluster looks as follows:
Let’s connect to our application via Ingress:
curl http://werf-guide-app.test/ping
You will see the following response:
hello world
Note that the scope of Service resources is not limited to connecting an Ingress with the application; they also allow resources within the cluster to communicate with each other. When a Service is created, a <ServiceName>.<NamespaceName>.svc.cluster.local DNS name is created along with it and is accessible from within the cluster. You can also connect to the Service using its shorter names:
- <ServiceName> — if the request comes from the same namespace;
- <ServiceName>.<NamespaceName> — if the request comes from a different namespace.
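For example, an application running in a different namespace could be pointed at our Service via its fully qualified DNS name. The snippet below is a hypothetical fragment of such an application’s container spec (the APP_URL variable name is made up for illustration):

```yaml
# Fragment of another application's container spec (hypothetical):
env:
- name: APP_URL
  value: "http://werf-guide-app.werf-guide-app.svc.cluster.local:8000"
```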
Let’s create a new container that is not related to our application:
kubectl run werf-temporary-deployment --image=alpine --rm -it -- sh
In the new container, use curl to access our application via the Service we created:
apk add curl # Installing curl in the container.
curl http://werf-guide-app:8000/ping # curl to one of the pods of our application using the service.
You will see the following response:
hello world
Using Ingress resources is not the only way to access the application from outside the cluster: Services of the LoadBalancer and NodePort types also provide access from outside the cluster, bypassing the Ingress. The official Kubernetes documentation provides a detailed description of the Service and Ingress concepts.
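As a sketch, a NodePort variant of our Service might look like this. The resource name and nodePort value are hypothetical; if nodePort is omitted, Kubernetes picks a free port from the 30000–32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: werf-guide-app-nodeport   # hypothetical name, to avoid clashing with the existing Service
spec:
  type: NodePort
  selector:
    app: werf-guide-app
  ports:
  - name: http
    port: 8000
    targetPort: 8000
    nodePort: 30080               # this port is opened on every cluster node
```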