## Prepare the infrastructure

### Requirements

- GitLab;
- Kubernetes to run the GitLab Kubernetes Executor.

### Install GitLab Runner

Follow the official instructions to install and register the GitLab Runner. If you are going to install the GitLab Runner in Kubernetes, install it into the `gitlab-ci` namespace.
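If you install the Runner in Kubernetes, the official `gitlab-runner` Helm chart is a common option. A minimal sketch (the GitLab URL is a placeholder, and value names such as `gitlabUrl`/`runnerToken` differ between chart versions — check the documentation for your chart version):

```shell
helm repo add gitlab https://charts.gitlab.io
helm upgrade --install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-ci --create-namespace \
  --set gitlabUrl=https://gitlab.mycompany.org \
  --set runnerToken=<runner authentication token>
```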
### Setting up the build environment with Buildah

(For Ubuntu 23.10 and later) on the GitLab Runner host, run:

```shell
{ echo "kernel.apparmor_restrict_unprivileged_userns = 0" && echo "kernel.apparmor_restrict_unprivileged_unconfined = 0"; } | sudo tee -a /etc/sysctl.d/20-apparmor-donotrestrict.conf && sudo sysctl -p /etc/sysctl.d/20-apparmor-donotrestrict.conf
```
### Basic GitLab Runner configuration (no caching)

Add the following to the GitLab Runner configuration file `config.toml`:

```toml
[[runners]]
  environment = ["FF_USE_ADVANCED_POD_SPEC_CONFIGURATION=true"]

  [runners.kubernetes]
    namespace = "gitlab-ci"

    [runners.kubernetes.pod_annotations]
      "container.apparmor.security.beta.kubernetes.io/build" = "unconfined"

    [runners.kubernetes.pod_security_context]
      run_as_non_root = true
      run_as_user = 1000
      run_as_group = 1000
      fs_group = 1000

    [[runners.kubernetes.volumes.empty_dir]]
      name = "gitlab-ci-kubernetes-executor-werf-cache"
      mount_path = "/home/build/.werf"

    [[runners.kubernetes.volumes.empty_dir]]
      name = "gitlab-ci-kubernetes-executor-builds-cache"
      mount_path = "/builds"

    [[runners.kubernetes.volumes.empty_dir]]
      name = "gitlab-ci-kubernetes-executor-helper-home"
      mount_path = "/home/helper"

    [[runners.kubernetes.pod_spec]]
      name = "fix helper HOME"
      patch = '''
        containers:
        - name: helper
          env:
          - name: HOME
            value: /home/helper
      '''
      patch_type = "strategic"
```
### Basic GitLab Runner configuration (with caching using Persistent Volumes)

Add the following to the GitLab Runner configuration file `config.toml`:

```toml
[[runners]]
  environment = ["FF_USE_ADVANCED_POD_SPEC_CONFIGURATION=true"]

  [runners.kubernetes]
    namespace = "gitlab-ci"

    [runners.kubernetes.pod_annotations]
      "container.apparmor.security.beta.kubernetes.io/build" = "unconfined"

    [runners.kubernetes.pod_security_context]
      run_as_non_root = true
      run_as_user = 1000
      run_as_group = 1000
      fs_group = 1000

    [[runners.kubernetes.volumes.pvc]]
      name = "gitlab-ci-kubernetes-executor-werf-cache"
      mount_path = "/home/build/.werf"

    [[runners.kubernetes.volumes.pvc]]
      name = "gitlab-ci-kubernetes-executor-builds-cache"
      mount_path = "/builds"

    [[runners.kubernetes.volumes.pvc]]
      name = "gitlab-ci-kubernetes-executor-helper-home"
      mount_path = "/home/helper"

    [[runners.kubernetes.pod_spec]]
      name = "fix helper HOME"
      patch = '''
        containers:
        - name: helper
          env:
          - name: HOME
            value: /home/helper
      '''
      patch_type = "strategic"

    [[runners.kubernetes.pod_spec]]
      name = "fix volumes permissions"
      patch = '''
        initContainers:
        - name: fix-volumes-permissions
          image: alpine
          command:
          - sh
          - -ec
          - |
            chown :$(id -g) /home/build/.werf /builds /home/helper
            chmod g+rwx /home/build/.werf /builds /home/helper
          securityContext:
            runAsUser: 0
            runAsNonRoot: false
          volumeMounts:
          - mountPath: /home/build/.werf
            name: gitlab-ci-kubernetes-executor-werf-cache
          - mountPath: /builds
            name: gitlab-ci-kubernetes-executor-builds-cache
          - mountPath: /home/helper
            name: gitlab-ci-kubernetes-executor-helper-home
      '''
      patch_type = "strategic"
```
Create the PVCs:

```shell
kubectl create -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-ci-kubernetes-executor-werf-cache
  namespace: gitlab-ci
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-ci-kubernetes-executor-builds-cache
  namespace: gitlab-ci
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-ci-kubernetes-executor-helper-home
  namespace: gitlab-ci
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
EOF
```
### Configure access to Kubernetes from GitLab Executor Pods

werf will run in the GitLab Kubernetes Executor Pods. Usually you will deploy with werf to the same cluster in which the Executor Pods run; in that case, you need to configure a custom ServiceAccount and a ClusterRoleBinding.

Create a ServiceAccount and a ClusterRoleBinding:
```shell
kubectl create -f - <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-ci-kubernetes-executor
  namespace: gitlab-ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-ci-kubernetes-executor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-ci-kubernetes-executor
    namespace: gitlab-ci
EOF
```
For greater security, consider creating a more restricted ClusterRole/Role and using it instead of the `cluster-admin` cluster role above.
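For example, a namespace-scoped Role bound to the same ServiceAccount restricts CI deploys to a single namespace. A minimal sketch (`myproject-production` is an assumed application namespace; depending on what your charts install, werf may also need some cluster-scoped permissions):

```shell
kubectl create -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-ci-kubernetes-executor
  namespace: myproject-production
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-ci-kubernetes-executor
  namespace: myproject-production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-ci-kubernetes-executor
subjects:
  - kind: ServiceAccount
    name: gitlab-ci-kubernetes-executor
    namespace: gitlab-ci
EOF
```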
Now add this to the GitLab Runner configuration file `config.toml`:

```toml
[[runners]]
  [runners.kubernetes]
    service_account = "gitlab-ci-kubernetes-executor"
```
### Allow FUSE (for Kubernetes Nodes with Linux kernel older than 5.13)

If the Kubernetes Nodes on which you are going to run the Kubernetes Executor Pods have a Linux kernel older than 5.13, you need to allow FUSE:

```shell
kubectl create -f - <<EOF
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fuse-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fuse-device-plugin
  template:
    metadata:
      labels:
        name: fuse-device-plugin
    spec:
      hostNetwork: true
      containers:
        - image: soolaugust/fuse-device-plugin:v1.0
          name: fuse-device-plugin-ctr
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          volumeMounts:
            - name: device-plugin
              mountPath: /var/lib/kubelet/device-plugins
      volumes:
        - name: device-plugin
          hostPath:
            path: /var/lib/kubelet/device-plugins
---
apiVersion: v1
kind: LimitRange
metadata:
  name: enable-fuse
  namespace: gitlab-ci
spec:
  limits:
    - type: "Container"
      default:
        github.com/fuse: 1
EOF
```
### Install Argo CD Image Updater

Install Argo CD Image Updater with the “continuous deployment of OCI Helm chart type application” patch:

```shell
kubectl apply -n argocd -f https://raw.githubusercontent.com/werf/3p-argocd-image-updater/master/manifests/install.yaml
```
### Preparing Kubernetes for multi-platform building (optional)

This step is only needed to build images for platforms other than the host platform running werf.

Register emulators on your Kubernetes nodes using qemu-user-static:

```shell
kubectl create -f - <<EOF
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: qemu-user-static
  namespace: gitlab-ci
  labels:
    app: qemu-user-static
spec:
  selector:
    matchLabels:
      name: qemu-user-static
  template:
    metadata:
      labels:
        name: qemu-user-static
    spec:
      initContainers:
        - name: qemu-user-static
          image: multiarch/qemu-user-static
          args: ["--reset", "-p", "yes"]
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: gcr.io/google_containers/pause
          resources:
            limits:
              cpu: 50m
              memory: 50Mi
            requests:
              cpu: 50m
              memory: 50Mi
EOF
```
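With the emulators registered, non-native target platforms can be requested at build time via werf's `--platform` flag. A sketch (the platform list here is just an example):

```shell
werf build --platform linux/amd64,linux/arm64
```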
### Configuring the container registry

Enable garbage collection for your container registry.
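How you do this depends on the registry. For example, with the open-source Docker Registry (distribution), garbage collection is an explicit command run against the registry's config file (a sketch; the container name `my-registry` is a placeholder, and available flags depend on the registry version):

```shell
docker exec my-registry \
  registry garbage-collect /etc/docker/registry/config.yml --delete-untagged
```

Managed registries (GitLab Container Registry, Harbor, ECR, and others) have their own GC or cleanup-policy mechanisms; consult their documentation.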
## Configure the project

### Configuring Argo CD Application

- Apply the following Application custom resource to the target cluster to deploy a bundle from the container registry:
```shell
kubectl create -f - <<EOF
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd-image-updater.argoproj.io/chart-version: ~ 1.0
    argocd-image-updater.argoproj.io/pull-secret: pullsecret:myproject-production/myproject-regcred
  name: myproject
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: myproject-production
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: myproject
    repoURL: registry.mycompany.org/myproject
    targetRevision: 1.0.0
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```
The value `argocd-image-updater.argoproj.io/chart-version: "~ 1.0"` tells the operator to automatically deploy the latest patch version of the chart within the SemVer range `1.0.*`.
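To illustrate how the constraint resolves (version numbers here are illustrative):

```shell
# "~ 1.0" matches  1.0.0, 1.0.9, ...   (any 1.0.x patch release)
# "~ 1.0" excludes 1.1.0, 2.0.0
```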
- Create a pull secret to access the project container registry:
```shell
kubectl create -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: myproject-regcred
  namespace: myproject-production
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: BASE64_DOCKER_CONFIG_JSON
EOF
```
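The base64 `.dockerconfigjson` payload can also be generated for you by kubectl instead of being filled in by hand (registry address and credentials are placeholders):

```shell
kubectl create secret docker-registry myproject-regcred \
  --namespace myproject-production \
  --docker-server=registry.mycompany.org \
  --docker-username=<registry user> \
  --docker-password=<registry password or token>
```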
### Configuring the GitLab project

- Create and save an access token for cleaning up the no longer needed images in the container registry, with the following parameters:
  - Token name: `werf-images-cleanup`;
  - Role: `developer`;
  - Scopes: `api`.
- Add the following variable to the project variables (this can also be scripted via the GitLab API, as sketched after this list):
  - Access token to clean up the no longer needed images:
    - Key: `WERF_IMAGES_CLEANUP_PASSWORD`;
    - Value: <"werf-images-cleanup" access token you saved earlier>;
    - Protect variable: yes;
    - Mask variable: yes.
- Add a scheduled nightly pipeline to clean up the no longer needed images in the container registry, setting the `main`/`master` branch as the Target branch.
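A sketch of creating the variable through the GitLab API (the host `gitlab.mycompany.org`, the project ID `42`, and the tokens are placeholders):

```shell
curl --request POST \
  --header "PRIVATE-TOKEN: <your maintainer token>" \
  --form "key=WERF_IMAGES_CLEANUP_PASSWORD" \
  --form "value=<werf-images-cleanup token>" \
  --form "protected=true" \
  --form "masked=true" \
  "https://gitlab.mycompany.org/api/v4/projects/42/variables"
```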
### Configuring CI/CD of the project

This is what a repository that uses werf for building and deploying might look like (the file paths below follow werf's standard layout):
`.gitlab-ci.yml`:

```yaml
stages:
  - release
  - cleanup

default:
  image:
    name: registry.werf.io/werf/werf:2-stable
    pull_policy: always
  before_script:
    - source $(werf ci-env gitlab --as-file)
  tags: ["<GitLab Runner tag>"]

Build and publish release:
  stage: release
  script:
    - werf bundle publish --tag "1.0.${CI_PIPELINE_ID}"
  only:
    - main
  except:
    - schedules

Cleanup registry:
  stage: cleanup
  script:
    # Authenticate with the cleanup access token saved in the
    # WERF_IMAGES_CLEANUP_PASSWORD project variable:
    - werf cr login -u nobody -p "$WERF_IMAGES_CLEANUP_PASSWORD" "$WERF_REPO"
    - werf cleanup
  only:
    - schedules
```
`.helm/templates/deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: {{ .Values.werf.image.app }}
```
`.helm/templates/service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - name: app
      port: 80
```
`app/Dockerfile`:

```dockerfile
FROM node
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```
`app/package-lock.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "app",
      "version": "1.0.0"
    }
  }
}
```
`app/package.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
`app/server.js`:

```javascript
const http = require('http');

// Listen on all interfaces so that the Kubernetes Service can reach the app
// (binding to 127.0.0.1 would make the Pod unreachable from the Service).
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
`werf.yaml`:

```yaml
configVersion: 1
project: myproject
---
image: app
dockerfile: Dockerfile
context: ./app
```
Extras:

- Add authorization options for `werf cleanup` in the container registry by following the instructions.