Prepare the infrastructure
Important: this section describes how to prepare the infrastructure for a self-hosted GitHub Runner.
Requirements
- GitHub Actions;
- A host to run the GitHub Runner, with:
  - Bash;
  - Git version 2.18.0 or above;
  - GPG.
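You can quickly check whether a host meets these requirements; a minimal sketch (the package names and the package manager depend on your distribution):

```shell
bash --version | head -n 1
git --version              # must report 2.18.0 or above
gpg --version | head -n 1
```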
Installing and registering the GitHub Runner
Follow the official instructions to install and register the GitHub Runner on your dedicated host.
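For reference, on a Linux x64 host the procedure usually boils down to the sketch below; the exact download URL, runner version, and registration token come from your repository's Settings → Actions → Runners → "New self-hosted runner" page, so all values here are placeholders.

```shell
# Placeholders only: copy the real URL, version, and token from the GitHub UI.
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.316.1/actions-runner-linux-x64-2.316.1.tar.gz
tar xzf actions-runner-linux-x64.tar.gz
./config.sh --url https://github.com/<owner>/<repo> --token <REGISTRATION_TOKEN>
sudo ./svc.sh install   # run the runner as a systemd service
sudo ./svc.sh start
```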
Setting up the build environment with Buildah
Follow these steps on the GitHub Runner host to install Buildah:
- Install the Buildah package following the official instructions, but without configuring it. If there are no ready-made Buildah packages for your distribution, use the following guidelines:
  - Install the packages that provide `newuidmap` and `newgidmap`.
  - Make sure that `newuidmap` and `newgidmap` have the proper permissions:

    ```shell
    sudo setcap cap_setuid+ep /usr/bin/newuidmap
    sudo setcap cap_setgid+ep /usr/bin/newgidmap
    sudo chmod u-s,g-s /usr/bin/newuidmap /usr/bin/newgidmap
    ```

  - Install the package that provides the `/etc/subuid` and `/etc/subgid` files.
  - Make sure that the `/etc/subuid` and `/etc/subgid` files have a line similar to `runner:1000000:65536`, where:
    - `runner` — the name of the user the GitHub Runner runs as;
    - `1000000` — the first subUID/subGID in the range to be allocated;
    - `65536` — the size of the subUID/subGID range (`65536` at minimum).

    Make sure these ranges do not conflict with any other ranges. Changing these files may require a reboot. See `man subuid` and `man subgid` for details.
  - (Linux 5.12 and below) Install the package that provides the `fuse-overlayfs` utility.
- Make sure that the local container storage path for the user the GitHub Runner runs as (e.g. `/home/runner/.local/share/containers`) exists and that this user has read and write access to it.
- The `sysctl -ne kernel.unprivileged_userns_clone` command should NOT return `0`; otherwise, run:

  ```shell
  echo 'kernel.unprivileged_userns_clone = 1' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  ```
- The `sysctl -n user.max_user_namespaces` command should return `15000` or more; otherwise, run:

  ```shell
  echo 'user.max_user_namespaces = 15000' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  ```
- (For Ubuntu 23.10 and later) Set the `kernel.apparmor_restrict_unprivileged_unconfined` and `kernel.apparmor_restrict_unprivileged_userns` values to `0` with the command:

  ```shell
  { echo "kernel.apparmor_restrict_unprivileged_userns = 0" && echo "kernel.apparmor_restrict_unprivileged_unconfined = 0"; } | sudo tee -a /etc/sysctl.d/20-apparmor-donotrestrict.conf && sudo sysctl -p /etc/sysctl.d/20-apparmor-donotrestrict.conf
  ```
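Once these steps are done, you can sanity-check the rootless setup before running any jobs. A minimal sketch, assuming the GitHub Runner runs as a user named `runner`:

```shell
# Run as the user the GitHub Runner runs under.
getcap /usr/bin/newuidmap /usr/bin/newgidmap   # the setuid/setgid capabilities should be listed
grep "^runner:" /etc/subuid /etc/subgid        # a line like runner:1000000:65536 in both files
sysctl -ne kernel.unprivileged_userns_clone    # must not print 0 (the key may be absent)
sysctl -n user.max_user_namespaces             # should print 15000 or more
mkdir -p ~/.local/share/containers             # Buildah's local storage path
buildah info                                   # should print host/storage info without errors
```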
Configure the project
Requirements
- GitHub Actions;
- GitHub-hosted Runner or self-hosted Runner.
Setting up a GitHub project
- Create and save an access token for cleaning up the images that are no longer needed from the container registry, with the following parameters:
  - Token name: `werf-images-cleanup`;
  - Scopes: `read:packages` and `delete:packages`.
- Add the following variable to the project secrets:
  - Access token for cleaning up the images that are no longer needed:
    - Name: `REGISTRY_CLEANUP_TOKEN`;
    - Secret: <"werf-images-cleanup" access token you saved earlier>.
- Save the kubeconfig file used to access the Kubernetes cluster as the `KUBECONFIG_BASE64` encrypted secret, encoding it in Base64 beforehand (a CLI sketch follows this list).
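Both secrets can also be set from the command line with the GitHub CLI; a minimal sketch, assuming `gh` is authenticated for the repository and the kubeconfig is at `~/.kube/config`:

```shell
gh secret set REGISTRY_CLEANUP_TOKEN --body "<werf-images-cleanup access token>"
# GNU coreutils base64 shown; on macOS use `base64 -i ~/.kube/config` instead.
gh secret set KUBECONFIG_BASE64 --body "$(base64 -w0 ~/.kube/config)"
```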
Configuring CI/CD of the project
This is how a repository that uses werf for building and deploying might look (the file path precedes each listing):
.github/workflows/cleanup.yml:

```yaml
name: cleanup
on:
  schedule:
    - cron: "0 3 * * *"
jobs:
  cleanup:
    name: cleanup
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - run: git fetch --prune --unshallow
      - uses: werf/actions/install@v2
      - run: |
          source "$(werf ci-env github --as-file)"
          werf cleanup
        env:
          WERF_KUBECONFIG_BASE64: ${{ secrets.KUBECONFIG_BASE64 }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          WERF_REPO_GITHUB_TOKEN: ${{ secrets.REGISTRY_CLEANUP_TOKEN }}
```
.github/workflows/prod.yml:

```yaml
name: prod
on:
  push:
    branches:
      - main
jobs:
  prod:
    name: prod
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: werf/actions/install@v2
      - run: |
          source "$(werf ci-env github --as-file)"
          werf converge
        env:
          WERF_ENV: prod
          WERF_KUBECONFIG_BASE64: ${{ secrets.KUBECONFIG_BASE64 }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          WERF_BUILDAH_MODE: auto
```
.helm/templates/deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: {{ .Values.werf.image.app }}
```
.helm/templates/service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - name: app
      port: 80
```
app/Dockerfile:

```dockerfile
FROM node
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```
app/package-lock.json:

```json
{
  "name": "app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "app",
      "version": "1.0.0"
    }
  }
}
```
app/package.json:

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
app/server.js:

```javascript
const http = require('http');

// Listen on all interfaces so the containerized app is reachable through the Service.
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
werf.yaml:

```yaml
configVersion: 1
project: myproject
---
image: app
dockerfile: Dockerfile
context: ./app
```
Extras:
- To use a GitHub-hosted Runner, specify `ubuntu-latest` in `runs-on`;
- If you do not use ghcr as the container registry, set `WERF_REPO`, run `werf cr login`, and take the specifics of your container registry into account during cleanup (see the sketch below).
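For example, for a hypothetical private registry the `run` step of the workflows above could begin like this (the registry address and credential variables are placeholders):

```shell
source "$(werf ci-env github --as-file)"
export WERF_REPO=registry.example.com/mygroup/myapp                             # placeholder address
werf cr login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.example.com  # placeholder credentials
werf converge
```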