Prepare the infrastructure
Requirements
- CI system;
- Linux host to run CI jobs, with:
  - Shell Runner of your CI system of choice;
  - Bash;
  - Git version 2.18.0 or above;
  - GPG.
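The Git version requirement can be checked mechanically before setting anything else up. A sketch, assuming the usual `git version X.Y.Z` output format; the `git_version_ok` helper name is made up:

```shell
# git_version_ok: compare an installed Git version string against the
# 2.18.0 minimum from the requirements list (a sketch; assumes the
# version looks like "X.Y.Z").
git_version_ok() {
  local version="$1" major minor
  major=${version%%.*}
  minor=${version#*.}; minor=${minor%%.*}
  [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 18 ]; }
}

# On the CI host you would call it like this:
#   git_version_ok "$(git --version | awk '{print $3}')" && echo "Git is new enough"
```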
Setting up the build environment with Buildah
To install Buildah, do the following on the host that runs CI jobs:

- Install the Buildah package following the official instructions, but refrain from configuring it. If there are no pre-made Buildah packages for your distribution, follow these guidelines:
  - Install packages for `newuidmap` and `newgidmap`.
  - Make sure that `newuidmap` and `newgidmap` have the proper permissions set:

    ```shell
    sudo setcap cap_setuid+ep /usr/bin/newuidmap
    sudo setcap cap_setgid+ep /usr/bin/newgidmap
    sudo chmod u-s,g-s /usr/bin/newuidmap /usr/bin/newgidmap
    ```
  - Install the package that provides the `/etc/subuid` and `/etc/subgid` files.
  - Make sure that the `/etc/subuid` and `/etc/subgid` files have a line similar to `runner:1000000:65536`, where:
    - `runner` — name of the user that runs the CI jobs;
    - `1000000` — first subUID/subGID in the allocated range;
    - `65536` — size of the subUID/subGID range (minimum `65536`).

    Avoid conflicts with other ranges, if any. Changing these files may require a reboot. See `man subuid` and `man subgid` for details.
- (For Linux 5.12 and below) Install the package that provides the `fuse-overlayfs` utility.
- Make sure the `/home/<user to run CI jobs>/.local/share/containers` path exists and that the user running CI jobs has read and write access to it.
- The `sysctl -ne kernel.unprivileged_userns_clone` command should NOT return `0`; otherwise, run:

  ```shell
  echo 'kernel.unprivileged_userns_clone = 1' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  ```
- The `sysctl -n user.max_user_namespaces` command should return `15000` or more; otherwise, run:

  ```shell
  echo 'user.max_user_namespaces = 15000' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  ```
- (For Ubuntu 23.10 and later) Set the `kernel.apparmor_restrict_unprivileged_unconfined` and `kernel.apparmor_restrict_unprivileged_userns` values to `0` with the command:

  ```shell
  { echo "kernel.apparmor_restrict_unprivileged_userns = 0" && echo "kernel.apparmor_restrict_unprivileged_unconfined = 0"; } | sudo tee -a /etc/sysctl.d/20-apparmor-donotrestrict.conf && sudo sysctl -p /etc/sysctl.d/20-apparmor-donotrestrict.conf
  ```
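The host checks in the list above can be collected into a small verification sketch. The function names and the `runner` user are illustrative; the thresholds come from the checklist (subID range size at least `65536`, `user.max_user_namespaces` at least `15000`):

```shell
# check-rootless.sh — a sketch of the host checks above; function names
# are made up, thresholds are taken from the checklist.

# A /etc/subuid- or /etc/subgid-style line looks like user:first_id:range_size;
# require a range of at least 65536 for the given user.
check_subid_range() {
  local user="$1" file="$2" size
  size=$(awk -F: -v u="$user" '$1 == u { print $3; exit }' "$file")
  [ -n "$size" ] && [ "$size" -ge 65536 ]
}

# Decide whether user namespaces are usable from the two sysctl readings:
# kernel.unprivileged_userns_clone must not be 0 (empty output — the key
# does not exist on most distributions — is fine), and
# user.max_user_namespaces must be at least 15000.
userns_ok() {
  local clone="$1" max_ns="$2"
  [ "$clone" != "0" ] && [ "$max_ns" -ge 15000 ]
}

# On a real host you would call them like this:
#   check_subid_range runner /etc/subuid &&
#   check_subid_range runner /etc/subgid &&
#   userns_ok "$(sysctl -ne kernel.unprivileged_userns_clone)" \
#             "$(sysctl -n user.max_user_namespaces)" &&
#   echo "host looks ready for rootless Buildah"
```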
Installing werf
On the host that runs CI jobs, run the following command to install werf:

```shell
curl -sSL https://werf.io/install.sh | bash -s -- --ci
```
Configuring the container registry
Enable garbage collection for your container registry.
Preparing the system for cross-platform building (optional)
This step is only needed if you intend to build images for platforms other than the platform of the host running werf.
Register the emulators on your system using qemu-user-static:

```shell
docker run --restart=always --name=qemu-user-static -d --privileged --entrypoint=/bin/sh multiarch/qemu-user-static -c "/register --reset -p yes && tail -f /dev/null"
```
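Whether registration succeeded can be verified through `/proc/sys/fs/binfmt_misc`: each registered interpreter gets an entry there whose first line reads `enabled` or `disabled`. A sketch (the helper name is made up):

```shell
# binfmt_enabled: given the contents of a /proc/sys/fs/binfmt_misc entry
# (e.g. qemu-aarch64), report whether that interpreter is enabled — the
# first line of such a file is "enabled" or "disabled".
binfmt_enabled() {
  [ "$(printf '%s\n' "$1" | head -n 1)" = "enabled" ]
}

# On a real host, after registering the emulators:
#   binfmt_enabled "$(cat /proc/sys/fs/binfmt_misc/qemu-aarch64)" \
#     && echo "aarch64 emulation active"
```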
Configure the project
Configuring CI/CD of the project
Here is what a repository that uses werf for building and deployment might look like:
`.helm/templates/deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: {{ .Values.werf.image.app }}
```
`.helm/templates/service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - name: app
    port: 80
```
`app/Dockerfile`:

```dockerfile
FROM node
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```
`app/package-lock.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "app",
      "version": "1.0.0"
    }
  }
}
```
`app/package.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
`app/server.js`:

```javascript
const http = require('http');

// Bind to all interfaces so the container port is reachable from the
// Kubernetes Service (binding to 127.0.0.1 would only accept connections
// from inside the container itself).
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
CI/CD configuration (shown as pseudocode; adapt it to the syntax of your CI system):

```yaml
environment_variables:
  WERF_BUILDAH_MODE: auto
  WERF_REPO: registry.example.org/myrepo
  WERF_ENV: "${CI_ENVIRONMENT}"
  WERF_ENABLE_PROCESS_EXTERMINATOR: "1"

before_every_job:
  - source "$(~/bin/trdl use werf 2 stable)"
  - werf cr login -u "${REGISTRY_USER:?}" -p "${REGISTRY_PASSWORD:?}" "${WERF_REPO:?}"

jobs:
  prod:
    commands:
      - werf converge
    environment: prod
    on: master
    how: manually

  images:cleanup:
    commands:
      - werf cleanup
    on: master
    how: daily
```
`werf.yaml`:

```yaml
configVersion: 1
project: myproject
---
image: app
dockerfile: Dockerfile
context: ./app
```
Extras:

- Add authorization options for `werf cleanup` in the container registry by following the relevant instructions.