Start to finish: running an AI Workbench project on cloud GPUs with Brev

I wanted to run an ML project in a bigger environment than my homelab provides. I don't have any cloud accounts, but I can rent GPU capacity via the Brev GPU/CPU marketplace; I just needed to get the project there. The project itself is fully containerized and available in Git as an AI Workbench project. This meant I only needed to install the AI Workbench server code on a Brev-rented GPU instance and then attach to that server over the provided network tunnel. My local AI Workbench can then manage and run the project remotely from my MacBook.

All the steps

Some of these steps will be automated in future NVIDIA AI Workbench releases. Use this diagram to follow along in the video.
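As a rough sketch of the "attach over the tunnel" step, adding the Brev instance as a remote location in local AI Workbench comes down to a standard SSH entry. The host alias, IP, user, and key path below are my placeholders, not values from Brev; take the real ones from the Brev console for your rented instance.

```shell
# Hedged sketch: give the local machine SSH access to the Brev instance so
# AI Workbench can add it as a remote location. All values are placeholders.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host brev-gpu
    HostName 203.0.113.10        # instance IP from the Brev console (placeholder)
    User ubuntu                  # instance login user (assumption)
    IdentityFile ~/.ssh/brev-key # key registered with Brev (placeholder path)
EOF
```

With an entry like this in place, the local AI Workbench app can point its new remote location at the same host, user, and key.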


Video

Revision History

Created 2024-11
