CI/CD Integration for GPU Containers

Andrew Mello edited this page May 2, 2025 · 10 revisions

Use GitHub Actions to verify image builds:

name: Build GPU Image
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker
        run: |
          docker build -t test/k3d-gpu .

You can also run nvidia-smi inside the built image on a local or self-hosted runner that has a GPU, to confirm the container can reach the driver. (GitHub-hosted runners do not provide GPUs.)
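A minimal sketch of such a verification job, assuming a self-hosted runner registered with a `gpu` label and with NVIDIA drivers plus the NVIDIA Container Toolkit installed on the host:

```yaml
# Hypothetical smoke-test job; the "gpu" runner label is an assumption.
gpu-smoke-test:
  runs-on: [self-hosted, gpu]
  steps:
    - uses: actions/checkout@v3
    - name: Build image
      run: docker build -t test/k3d-gpu .
    - name: Verify GPU access
      run: docker run --rm --gpus all test/k3d-gpu nvidia-smi
```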


πŸ”„ Advanced CI Patterns

Multi-CUDA Matrix Testing

strategy:
  matrix:
    cuda-version: ["11.8", "12.0", "12.4"]

steps:
  - run: |
      docker build \
        --build-arg CUDA_TAG=${{ matrix.cuda-version }}-base-ubuntu22.04 \
        -t my/cuda:${{ matrix.cuda-version }} .
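For the matrix to take effect, the Dockerfile must accept the tag as a build argument. A minimal sketch (the default tag is an assumption):

```dockerfile
# CUDA_TAG is supplied by the CI matrix via --build-arg.
ARG CUDA_TAG=12.0-base-ubuntu22.04
FROM nvidia/cuda:${CUDA_TAG}
# ...rest of the image definition...
```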

Image Push + Tag

- name: Push Docker Image
  run: |
    docker tag my/cuda:12.0 myuser/k3d-gpu:12.0
    echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
    docker push myuser/k3d-gpu:12.0
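An alternative sketch using the official docker/login-action instead of piping the password manually; the secret names and image tags mirror the step above and are otherwise assumptions:

```yaml
- uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
- name: Tag and push
  run: |
    docker tag my/cuda:12.0 myuser/k3d-gpu:12.0
    docker push myuser/k3d-gpu:12.0
```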

πŸ“¦ Private Runner Setup with GPU

Run these on your GPU server to self-host the GitHub runner:

./config.sh --url https://github.com/your/repo --token TOKEN
./run.sh

Ensure your host GPU drivers are installed and compatible with CUDA.
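If you want workflows to target this machine specifically, you can register it with a label at configuration time and reference that label in `runs-on`. A sketch (URL and TOKEN are the placeholders from above):

```shell
# Register the runner with a "gpu" label so jobs can target it
# via runs-on: [self-hosted, gpu].
./config.sh --url https://github.com/your/repo --token TOKEN --labels gpu
./run.sh
```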

πŸ›‘ Secure Builds for GPU Containers

When using CUDA in CI pipelines, be aware of:

  • πŸ›‘ DockerHub rate limits: use caching or GitHub's container registry
  • 🧬 Secrets handling: never hardcode credentials in workflows
  • 🧱 Rebuild frequency: avoid full rebuilds unless Dockerfile or base image changes

πŸ“ˆ Optimizing CI for GPU Image Size

CUDA images can exceed 3 GB. Reduce image size and build time with:

  • Multi-stage builds to remove unnecessary tools
  • --no-install-recommends for apt-based installs
  • Compressing final layers and pruning build artifacts
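The first two points can be combined in a multi-stage Dockerfile: compile in the full `devel` image, then copy only the artifacts into the smaller `base` image. A sketch; the specific tags, build steps, and paths are assumptions:

```dockerfile
# Build stage: full CUDA toolchain, slim apt install.
FROM nvidia/cuda:12.0.0-devel-ubuntu22.04 AS build
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    rm -rf /var/lib/apt/lists/*
COPY . /src
RUN make -C /src

# Final stage: runtime-only base image with just the built binaries.
FROM nvidia/cuda:12.0.0-base-ubuntu22.04
COPY --from=build /src/bin /usr/local/bin
```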
