The simplest microbenchmark for CuPy CUDA, containerized in NVIDIA AI Workbench
NVIDIA AI Workbench runs projects inside a containerized environment, and I wanted an environment check that verifies the container has GPU access and that Docker/Podman, the NVIDIA driver, and Workbench are all on compatible versions.
Containerization has essentially no effect on performance: the CUDA code is pretty much a direct pass-through to the card.
Environment
- NVIDIA AI Workbench
- One local GPU
- Windows 11
- Docker Desktop
- Program running in a container built by Workbench from the PyTorch/CUDA base image
Program
I access the containerized environment through a Jupyter notebook served to the browser on the Windows machine. This is a snapshot of the Jupyter notebook.
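The original notebook snapshot is an image, so here is a minimal sketch of the kind of microbenchmark it runs: a float32 matrix multiply timed on the GPU via CuPy, with a NumPy CPU fallback so the same script runs anywhere. The function name, sizes, and structure are my own illustration, not the notebook's exact code.

```python
import time

import numpy as np

try:
    import cupy as cp  # GPU path; assumes CuPy is installed in the container
    xp = cp
except ImportError:
    cp = None
    xp = np  # CPU fallback so the script still runs without a GPU


def timed_matmul(xp, n=2000, repeats=3):
    """Multiply two n x n float32 matrices and return (result, best wall time)."""
    a = xp.asarray(np.random.rand(n, n).astype(np.float32))
    b = xp.asarray(np.random.rand(n, n).astype(np.float32))
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        c = a @ b
        if xp is not np:
            cp.cuda.Device(0).synchronize()  # wait for the GPU kernel to finish
        best = min(best, time.perf_counter() - start)
    return c, best


c, seconds = timed_matmul(xp)
print(f"{'GPU' if xp is not np else 'CPU'} matmul took {seconds:.4f} s")
```

The `synchronize()` call matters: CuPy launches kernels asynchronously, so without it the timer would measure only the launch, not the multiply.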
It found a problem
The program revealed a container/driver mismatch that had recently appeared. CuPy reported that it had access to the GPU, but it turned out that it really did not, and a Docker Desktop upgrade was needed to fix something driver-related. The error message was
"CUDARuntimeError: cudaErrorSymbolNotFound: named symbol not found"
Revision History
Created 2024/07
Corrected NVIDIA capitalization 2025/08