## Prepare the infrastructure
### Requirements

- CI system;
- Kubernetes for running CI jobs with your CI system’s Kubernetes Runner.
### Setting up the build environment with Buildah

(For Ubuntu 23.10 and later.) These releases use AppArmor to restrict unprivileged user namespaces, which rootless Buildah relies on. Lift the restrictions on the GitLab Runner host:

```shell
{
  echo "kernel.apparmor_restrict_unprivileged_userns = 0"
  echo "kernel.apparmor_restrict_unprivileged_unconfined = 0"
} | sudo tee -a /etc/sysctl.d/20-apparmor-donotrestrict.conf
sudo sysctl -p /etc/sysctl.d/20-apparmor-donotrestrict.conf
```
### Basic Runner configuration (no caching)

Configure your CI system’s Runner so that the Pods it spawns have the following configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: ci
  annotations:
    "container.apparmor.security.beta.kubernetes.io/build": unconfined
spec:
  containers:
    - volumeMounts:
        - name: werf-cache
          mountPath: /home/build/.werf
  volumes:
    - name: werf-cache
      emptyDir: {}
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
```
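With GitLab Runner’s Kubernetes executor, for example, this Pod configuration maps onto `config.toml` roughly as follows (a sketch; values such as the namespace are assumptions to adjust for your setup):

```toml
[[runners]]
  [runners.kubernetes]
    namespace = "ci"
    [runners.kubernetes.pod_annotations]
      "container.apparmor.security.beta.kubernetes.io/build" = "unconfined"
    [runners.kubernetes.pod_security_context]
      run_as_non_root = true
      run_as_user = 1000
      run_as_group = 1000
      fs_group = 1000
    # emptyDir volume for the werf cache
    [[runners.kubernetes.volumes.empty_dir]]
      name = "werf-cache"
      mount_path = "/home/build/.werf"
```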
### Basic Runner configuration (with caching using Persistent Volumes)

Configure your CI system’s Runner so that the Pods it spawns have the following configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: ci
  annotations:
    "container.apparmor.security.beta.kubernetes.io/build": unconfined
spec:
  initContainers:
    - name: fix-volumes-permissions
      image: alpine
      command:
        - sh
        - -ec
        - |
          chown :$(id -g) /home/build/.werf
          chmod g+rwx /home/build/.werf
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      volumeMounts:
        - mountPath: /home/build/.werf
          name: werf-cache
  containers:
    - volumeMounts:
        - name: werf-cache
          mountPath: /home/build/.werf
  volumes:
    - name: werf-cache
      persistentVolumeClaim:
        claimName: ci-kubernetes-runner-werf-cache
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
```
Create the PVC:

```shell
kubectl create -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ci-kubernetes-runner-werf-cache
  namespace: ci
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```
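Check that the claim is bound before relying on it (`ReadWriteMany` must be supported by your storage class, otherwise the claim will stay `Pending`):

```shell
kubectl -n ci get pvc ci-kubernetes-runner-werf-cache
```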
### Configure access to Kubernetes from Runner Pods

If werf runs directly in Runner Pods and deploys to the same cluster the Runner Pods run in, you need to configure a custom ServiceAccount and a ClusterRoleBinding.

Create a ServiceAccount and a ClusterRoleBinding:
```shell
kubectl create -f - <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-kubernetes-runner
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-kubernetes-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ci-kubernetes-runner
    namespace: ci
EOF
```
For greater security, consider creating a more restricted ClusterRole/Role and using it instead of the `cluster-admin` cluster role above.
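For instance, here is a sketch that grants the Runner’s ServiceAccount the built-in `admin` role only in the namespace it deploys to (the `app` namespace is a hypothetical example; repeat the RoleBinding for each target namespace):

```shell
kubectl create -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-kubernetes-runner
  namespace: app  # hypothetical deploy namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin  # built-in role, far narrower than cluster-admin
subjects:
  - kind: ServiceAccount
    name: ci-kubernetes-runner
    namespace: ci
EOF
```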
Now add these lines to the spec of Pods spawned by your Runner:

```yaml
spec:
  serviceAccountName: ci-kubernetes-runner
```
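You can verify the resulting permissions with an RBAC self-check (the `app` namespace is again just an example):

```shell
kubectl auth can-i create deployments \
  --as=system:serviceaccount:ci:ci-kubernetes-runner -n app
```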
### Allow FUSE (for Kubernetes Nodes with Linux kernel older than 5.13)

If the Kubernetes Nodes on which you are going to run Runner Pods have a Linux kernel older than 5.13, you need to allow FUSE:
```shell
kubectl create -f - <<EOF
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fuse-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fuse-device-plugin
  template:
    metadata:
      labels:
        name: fuse-device-plugin
    spec:
      hostNetwork: true
      containers:
        - image: soolaugust/fuse-device-plugin:v1.0
          name: fuse-device-plugin-ctr
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          volumeMounts:
            - name: device-plugin
              mountPath: /var/lib/kubelet/device-plugins
      volumes:
        - name: device-plugin
          hostPath:
            path: /var/lib/kubelet/device-plugins
---
apiVersion: v1
kind: LimitRange
metadata:
  name: enable-fuse
  namespace: ci
spec:
  limits:
    - type: "Container"
      default:
        github.com/fuse: 1
EOF
```
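Once the DaemonSet Pods are running, every Node should advertise a `github.com/fuse` resource, and the LimitRange injects it into containers in the `ci` namespace by default. A quick way to confirm both:

```shell
kubectl -n kube-system get pods -l name=fuse-device-plugin
kubectl describe nodes | grep github.com/fuse
```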
### Preparing Kubernetes for multi-platform building (optional)

This step is only needed to build images for platforms other than the platform of the host running werf.

Register emulators on your Kubernetes Nodes using qemu-user-static:
```shell
kubectl create -f - <<EOF
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qemu-user-static
  namespace: ci
  labels:
    app: qemu-user-static
spec:
  selector:
    matchLabels:
      name: qemu-user-static
  template:
    metadata:
      labels:
        name: qemu-user-static
    spec:
      initContainers:
        - name: qemu-user-static
          image: multiarch/qemu-user-static
          args: ["--reset", "-p", "yes"]
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: gcr.io/google_containers/pause
          resources:
            limits:
              cpu: 50m
              memory: 50Mi
            requests:
              cpu: 50m
              memory: 50Mi
EOF
```
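With the emulators registered, cross-platform builds can be requested from werf, for example via the `--platform` flag:

```shell
werf build --platform=linux/amd64,linux/arm64
```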
### Configuring the container registry
Enable garbage collection for your container registry.
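The exact procedure depends on your registry. As one example, the self-hosted open-source Docker Registry (distribution) exposes garbage collection as a CLI command run against the registry config (the config path below is the image default; adjust it to your deployment):

```shell
# Preview what would be removed
registry garbage-collect --dry-run /etc/docker/registry/config.yml

# Remove unreferenced blobs
registry garbage-collect /etc/docker/registry/config.yml
```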
## Configure the project

### Configuring CI/CD of the project

This is how a repository that uses werf to build and deploy might look:
Helm templates with the application’s Deployment and Service, e.g. in `.helm/templates/app.yaml` (the `.helm` directory is werf’s default chart location):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: {{ .Values.werf.image.app }}
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - name: app
      port: 80
```
`app/Dockerfile` (referenced from `werf.yaml` below):

```dockerfile
FROM node
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```
`app/package-lock.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "app",
      "version": "1.0.0"
    }
  }
}
```
`app/package.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
`app/server.js`:

```javascript
const http = require('http');

// Listen on all interfaces so the container port is reachable
// through the Kubernetes Service (127.0.0.1 would only accept
// connections from inside the container itself).
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
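The app can be smoke-tested locally before wiring it into CI (binding to port 80 usually requires elevated privileges on Linux, hence `sudo`):

```shell
cd app
npm ci
sudo node server.js &
curl http://127.0.0.1/   # -> Hello World
```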
image: "registry.werf.io/werf/werf:2-stable"
image_pull_policy: always
environment_variables:
WERF_REPO: registry.example.org/myrepo
WERF_ENV: "${CI_ENVIRONMENT}"
WERF_ENABLE_PROCESS_EXTERMINATOR: "1"
before_every_job:
- werf cr login -u "${REGISTRY_USER:?}" -p "${REGISTRY_PASSWORD:?}" "${WERF_REPO:?}"
jobs:
prod:
commands:
- werf converge
environment: prod
on: master
how: manually
images:cleanup:
commands:
- werf cleanup
on: master
how: daily
`werf.yaml`:

```yaml
configVersion: 1
project: myproject
---
image: app
dockerfile: Dockerfile
context: ./app
```
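Before committing, the build can be tried locally with the same repository parameter the CI jobs use (assuming you are already logged in to the registry):

```shell
werf build --repo registry.example.org/myrepo
```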
Extras:

- Add authorization options for `werf cleanup` in the container registry by following the instructions.