
# Local Testing Guide (No GPU Required)

This guide shows how to test the `cast-slice` GPU-slice webhook in a local KinD/Minikube cluster that has no physical NVIDIA GPU.


## 1. Mock a Node with `nvidia.com/gpu` capacity

Kubernetes guards the `status` sub-resource, so `kubectl patch` cannot add arbitrary extended resources to `status.capacity` directly. The standard technique for testing is to send a JSON-Patch to the node's status sub-resource through the raw API, using `kubectl proxy`:

```shell
# Pick the first node; to list candidates: kubectl get nodes -o name
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

kubectl proxy &
PROXY_PID=$!
sleep 1  # give the proxy a moment to start listening on :8001

# "~1" is the JSON-Pointer escape for "/" in nvidia.com/gpu
curl -s -X PATCH \
  -H "Content-Type: application/json-patch+json" \
  "http://localhost:8001/api/v1/nodes/${NODE}/status" \
  --data '[
    {"op":"add","path":"/status/capacity/nvidia.com~1gpu","value":"4"},
    {"op":"add","path":"/status/allocatable/nvidia.com~1gpu","value":"4"}
  ]'

kill $PROXY_PID
```

After the patch you should see:

```shell
$ kubectl describe node ${NODE} | grep -A5 "Capacity:"
Capacity:
  nvidia.com/gpu:  4
  ...
```
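To undo the mock later (for example before re-running a clean test), the same endpoint accepts JSON-Patch `remove` operations. A hedged sketch mirroring the add patch above; the cluster-facing commands are shown commented out since they require a running proxy:

```shell
# JSON-Patch that removes the mocked capacity again ("~1" escapes "/" in nvidia.com/gpu)
REMOVE_PATCH='[
  {"op":"remove","path":"/status/capacity/nvidia.com~1gpu"},
  {"op":"remove","path":"/status/allocatable/nvidia.com~1gpu"}
]'

# Sanity-check the patch document locally before sending it
echo "$REMOVE_PATCH" | jq -e 'length == 2 and all(.[]; .op == "remove")' >/dev/null \
  && echo "patch ok"

# Against a running cluster, send it through the same proxy as before:
# kubectl proxy &
# curl -s -X PATCH -H "Content-Type: application/json-patch+json" \
#   "http://localhost:8001/api/v1/nodes/${NODE}/status" --data "$REMOVE_PATCH"
```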

## 2. Test Pod – triggers the rewrite logic

Apply `docs/test-pod.yaml` to verify the webhook mutates the Pod:

```shell
kubectl apply -f docs/test-pod.yaml
```
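For reference, a minimal `docs/test-pod.yaml` could look like the following. This is a hypothetical sketch: the pod name matches the commands below, the resource values match the expected output, and the image and command are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-pod
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04  # assumed; any image works for scheduling tests
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: "1"   # the key the webhook is expected to rewrite
          memory: 4Gi
        requests:
          memory: 4Gi
```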

Then inspect what was actually scheduled:

```shell
kubectl get pod gpu-test-pod -o jsonpath='{.spec.containers[0].resources}' | jq .
```

Expected output (after webhook mutation):

```json
{
  "limits": {
    "nvidia.com/gpu-shared": "1",
    "memory": "4Gi"
  },
  "requests": {
    "memory": "4Gi"
  }
}
```

The `nvidia.com/gpu` key is gone; `nvidia.com/gpu-shared: 1` took its place.
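The same check can be scripted. A minimal sketch that asserts the mutation with `jq`, run here against a captured copy of the expected resources block; on a live cluster, swap the `RESOURCES=` line for the commented `kubectl` command:

```shell
# Captured sample of the mutated resources block; on a live cluster use:
# RESOURCES=$(kubectl get pod gpu-test-pod -o jsonpath='{.spec.containers[0].resources}')
RESOURCES='{"limits":{"nvidia.com/gpu-shared":"1","memory":"4Gi"},"requests":{"memory":"4Gi"}}'

# The original nvidia.com/gpu key must be gone...
echo "$RESOURCES" | jq -e '.limits | has("nvidia.com/gpu") | not' >/dev/null \
  && echo "ok: nvidia.com/gpu removed"

# ...and exactly one shared slice must have replaced it.
echo "$RESOURCES" | jq -e '.limits["nvidia.com/gpu-shared"] == "1"' >/dev/null \
  && echo "ok: nvidia.com/gpu-shared == 1"
```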