# CI/CD Integration for GPU Containers
Andrew Mello edited this page May 2, 2025
Use GitHub Actions to verify image builds:
```yaml
name: Build GPU Image
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker image
        run: |
          docker build -t test/k3d-gpu .
```

You can also run `nvidia-smi` in a simulated or local runner with a GPU.
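As a sketch of that idea, a follow-up step like the one below (hypothetical, and only meaningful on a runner that has an NVIDIA GPU and the NVIDIA Container Toolkit installed) would smoke-test the freshly built image:

```yaml
# Hypothetical extra step for a GPU-equipped runner: run nvidia-smi
# inside the image just built to confirm the GPU is visible.
- name: GPU smoke test
  run: docker run --rm --gpus all test/k3d-gpu nvidia-smi
```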
To build against multiple CUDA versions, add a matrix strategy:

```yaml
strategy:
  matrix:
    cuda-version: ["11.8", "12.0", "12.4"]
steps:
  - run: |
      docker build \
        --build-arg CUDA_TAG=${{ matrix.cuda-version }}-base-ubuntu22.04 \
        -t my/cuda:${{ matrix.cuda-version }} .
```

To publish, tag and push the image using registry credentials stored as repository secrets:

```yaml
- name: Push Docker Image
  run: |
    docker tag my/cuda:12.0 myuser/k3d-gpu:12.0
    echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
    docker push myuser/k3d-gpu:12.0
```

Run these commands on your GPU server to self-host the GitHub runner:
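The `CUDA_TAG` build argument above is just the matrix version with a base-image suffix appended. As a sketch, a small hypothetical helper (`cuda_tag`, not part of any workflow) makes that mapping explicit:

```shell
# Hypothetical helper mirroring the --build-arg above: append the
# base-image suffix to a bare CUDA version string.
cuda_tag() {
  printf '%s-base-ubuntu22.04\n' "$1"
}

cuda_tag 12.0   # → 12.0-base-ubuntu22.04
```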
```shell
./config.sh --url https://github.com/your/repo --token TOKEN
./run.sh
```

Ensure your host GPU drivers are installed and compatible with CUDA.
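A driver/toolkit mismatch otherwise only surfaces at container runtime. As a sketch (assuming `sort -V` is available, as in GNU coreutils), a version comparison can gate the build; `supports_cuda` is a hypothetical helper, and `12.4` stands in for the maximum CUDA version reported by `nvidia-smi`:

```shell
# Hypothetical check: does the driver's maximum supported CUDA version
# (first argument, e.g. read from `nvidia-smi` output) cover the CUDA
# version the image is built with (second argument)?
supports_cuda() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

supports_cuda 12.4 12.0 && echo compatible || echo incompatible
# → compatible
```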
When using CUDA in CI pipelines, be aware of:
- DockerHub rate limits: use caching or GitHub's container registry
- Secrets handling: never hardcode credentials in workflows
- Rebuild frequency: avoid full rebuilds unless the Dockerfile or base image changes
CUDA images can exceed 3 GB. Reduce build time with:
- Multi-stage builds to remove unnecessary tools
- `--no-install-recommends` for apt-based installs
- Compressing final layers and pruning build artifacts
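A minimal Dockerfile sketch combining the first two points, assuming a project built with `make` that produces artifacts in `/src/out` (the image tags and build command are illustrative):

```dockerfile
# Build stage: compilers and dev headers stay here and never
# reach the final image.
FROM nvidia/cuda:12.0.0-devel-ubuntu22.04 AS build
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && rm -rf /var/lib/apt/lists/*
COPY . /src
RUN make -C /src   # assumes a Makefile producing /src/out

# Runtime stage: the much smaller "base" CUDA image, plus only
# the built artifacts copied from the first stage.
FROM nvidia/cuda:12.0.0-base-ubuntu22.04
COPY --from=build /src/out /app
```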