## Prepare the infrastructure

Important: this section describes how to prepare the infrastructure for a self-hosted GitHub Runner.
### Requirements

- GitHub Actions;
- Kubernetes cluster;
- Helm;
- A host to run GitHub Runner, with:
  - Bash.

Enable unprivileged user namespaces (required on worker nodes):

```shell
sudo sysctl -w kernel.unprivileged_userns_clone=1
echo 'kernel.unprivileged_userns_clone = 1' | sudo tee -a /etc/sysctl.conf
```
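To confirm the setting took effect on a node, you can query it back (note: this sysctl exists only on Debian-patched kernels; on most other distributions unprivileged user namespaces are enabled by default):

```shell
# Should print "1" once the setting is applied:
sysctl -n kernel.unprivileged_userns_clone
```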
### Install the GitHub Actions Runner Controller (ARC)

Install the Helm chart for the ARC controller:

```shell
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```
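After the release is installed, the controller pod should reach the Running state in the `arc-systems` namespace:

```shell
kubectl get pods --namespace arc-systems
```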
### Create Kubernetes resources
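The manifests below are created in the `arc-runners` namespace, which is only created later by `helm install --create-namespace`. If it does not exist in your cluster yet, create it first:

```shell
kubectl create namespace arc-runners
```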
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: job-template-gha-runner-werf
  namespace: arc-runners
data:
  content: |
    spec:
      serviceAccountName: arc-runner-sa
      securityContext:
        runAsUser: 1001
      containers:
        - name: \$job
EOF
```
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: arc-runner-sa
  namespace: arc-runners
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: arc-runner-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: arc-runner-sa
    namespace: arc-runners
EOF
```
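A quick check that everything was applied:

```shell
kubectl --namespace arc-runners get configmap job-template-gha-runner-werf
kubectl --namespace arc-runners get serviceaccount arc-runner-sa
kubectl get clusterrolebinding arc-runner-cluster-admin
```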
Note: for greater security, consider creating a more restricted ClusterRole/Role and using it instead of the `cluster-admin` cluster role above.
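As a sketch of such a restriction (illustrative only, assuming jobs deploy into a single application namespace, here the hypothetical `myproject`): bind the runner ServiceAccount to the built-in `admin` ClusterRole in that namespace only. Keep in mind that werf may still need additional cluster-scoped permissions, for example to create namespaces.

```shell
# Illustrative sketch: "myproject" is an assumed application namespace.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: arc-runner-admin
  namespace: myproject
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: arc-runner-sa
    namespace: arc-runners
EOF
```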
### Deploy

Create a `values.yaml` file with the following content:
```yaml
githubConfigUrl: "https://github.com/myenterprise/myorg/myrepo"
githubConfigSecret:
  github_token: "<PAT>"
template:
  spec:
    serviceAccountName: arc-runner-sa
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: ACTIONS_RUNNER_CONTAINER_HOOKS
            value: /home/runner/k8s/index.js
          - name: ACTIONS_RUNNER_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
            value: "true"
          - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE
            value: /home/runner/job-template/content
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: job-template
            mountPath: /home/runner/job-template
            readOnly: true
        resources:
          requests:
            cpu: 400m
            memory: 800Mi
    volumes:
      - name: job-template
        configMap:
          name: job-template-gha-runner-werf
      - name: work
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: "<your-storageClassName>"
              resources:
                requests:
                  storage: 1Gi
job:
  enabled: true
  resources:
    requests:
      cpu: 500m
      memory: 1024Mi
    limits:
      memory: 2048Mi
```
Then deploy the runner scale set:

```shell
helm install arc-runner-set -f values.yaml \
  --create-namespace \
  --namespace arc-runners \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```
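Once installed, the scale set's listener pod should appear in `arc-runners`; runner pods are created on demand when jobs are queued:

```shell
helm list --namespace arc-runners
kubectl get pods --namespace arc-runners
```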
## Configure the project

### Requirements

- Self-hosted runner.
### Setting up a GitHub project

1. Create and save an access token for cleaning up images that are no longer needed from the container registry, with the following parameters:
   - Token name: `werf-images-cleanup`;
   - Scopes: `read:packages` and `delete:packages`.
2. Add the following variable to the project secrets:
   - Access token to clean up the no-longer-needed images:
     - Name: `REGISTRY_CLEANUP_TOKEN`;
     - Secret: `<"werf-images-cleanup" access token you saved earlier>`.
3. Save the kubeconfig file for accessing the Kubernetes cluster as a `KUBECONFIG_BASE64` encrypted secret, pre-encoding it in Base64 (see the example after this list).
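For example, with GNU coreutils and the GitHub CLI installed (the `gh` call is just one way to create the secret; the repository settings UI works as well):

```shell
# Encode the kubeconfig without line wrapping (GNU base64; on macOS use "base64 -i <file>"):
base64 -w0 ~/.kube/config
# Or pipe it straight into a repository secret via the GitHub CLI:
base64 -w0 ~/.kube/config | gh secret set KUBECONFIG_BASE64
```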
### Configuring CI/CD of the project

This is how a repository that uses werf for build and deploy might look:
`.github/workflows/cleanup.yml`:

```yaml
name: cleanup
on:
  schedule:
    - cron: "0 3 * * *"
jobs:
  cleanup:
    name: cleanup
    runs-on: arc-runner-set
    container:
      image: ghcr.io/werf/werf:2-stable-ubuntu
    steps:
      - uses: actions/checkout@v3
      - run: git fetch --prune --unshallow
      - run: |
          . "$(werf ci-env github --as-file)"
          werf cleanup
        env:
          WERF_KUBECONFIG_BASE64: ${{ secrets.KUBECONFIG_BASE64 }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          WERF_REPO_GITHUB_TOKEN: ${{ secrets.REGISTRY_CLEANUP_TOKEN }}
```
`.github/workflows/prod.yml`:

```yaml
name: prod
on:
  push:
    branches:
      - main
jobs:
  prod:
    name: prod
    runs-on: arc-runner-set
    container:
      image: ghcr.io/werf/werf:2-stable-ubuntu
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - run: |
          . "$(werf ci-env github --as-file)"
          werf converge
        env:
          WERF_ENV: prod
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          WERF_KUBECONFIG_BASE64: ${{ secrets.KUBECONFIG_BASE64 }}
```
`.helm/templates/deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: {{ .Values.werf.image.app }}
```
`.helm/templates/service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - name: app
      port: 80
```
`app/Dockerfile`:

```dockerfile
FROM node
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```
`app/package-lock.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "app",
      "version": "1.0.0"
    }
  }
}
```
`app/package.json`:

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```
`app/server.js`:

```javascript
const http = require('http');

// Bind to all interfaces so the app is reachable through the Kubernetes Service
// (binding to 127.0.0.1 would make it unreachable from outside the pod).
const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```
`werf.yaml`:

```yaml
configVersion: 1
project: myproject
---
image: app
dockerfile: Dockerfile
context: ./app
```
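With these files in place you can also sanity-check the same build and deploy locally before pushing (a sketch: assumes you are logged in to the container registry and have kubectl access to the cluster; `ghcr.io/myorg/myrepo` stands in for your repository address):

```shell
werf converge --repo ghcr.io/myorg/myrepo --env prod
```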
### Extras

- If you're not using GitHub Container Registry as your container registry, follow these steps:
  - Set the `WERF_REPO` environment variable to your container registry address;
  - Log in to the registry using `werf cr login` (see the example after this list);
  - When performing cleanup, make sure to review specific features of your registry that may affect cleanup behavior.
- See the authentication guide for more information on accessing the Kubernetes cluster.
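For example, with a hypothetical private registry (replace `registry.example.com/myproject` and the credentials with your own):

```shell
# Hypothetical registry address used for illustration:
export WERF_REPO=registry.example.com/myproject
# Log in to the registry host; see "werf cr login --help" for all options:
werf cr login --username myuser --password mypassword registry.example.com
```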