Posts

Showing posts from 2024

The simplest micro benchmark for cupy CUDA containerized in NVidia AI Workbench

NVidia AI Workbench runs inside a containerized environment, and I wanted an environment check that verifies the container actually has GPU access and that docker/podman, the NVidia driver, and Workbench are all on compatible versions. Containerization has no effect on performance; the CUDA code is pretty much a direct pass-through to the card.

Environment
NVidia AI Workbench, one local GPU, Windows 11, Docker Desktop, and a program running in a container built by the Workbench based on the PyTorch/CUDA image.

Program
I access the containerized environment via a Jupyter Notebook visible to the browser on the Windows machine. This is a snapshot of the Jupyter Notebook: numpy_cupy_sort.ipynb Gist.

It found a problem
The program demonstrated that a container adapter (or something similar) mismatch had recently happened. cupy reported that it had access to the GPU, but it turned out that it really didn't, and a Docker Desktop upgrade was needed to fix something driver-related. The error message was …
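The linked notebook is the actual check; as a rough sketch of the same idea (assuming numpy and cupy are installed in the PyTorch/CUDA container), the point is to run a real kernel on the device rather than just query it, which is what exposed the driver mismatch:

```python
# Minimal numpy-vs-cupy sort micro benchmark sketch (not the exact notebook).
# Assumes numpy and cupy are available inside the Workbench container.
import time

import numpy as np
import cupy as cp

SIZE = 10_000_000

# A device query alone can succeed even when the runtime is broken,
# so we also allocate on the GPU and run a sort kernel below.
print("GPUs visible to cupy:", cp.cuda.runtime.getDeviceCount())

data = np.random.random(SIZE).astype(np.float32)

start = time.perf_counter()
np.sort(data)
print(f"numpy sort: {time.perf_counter() - start:.3f}s")

gpu_data = cp.asarray(data)            # host -> device copy
start = time.perf_counter()
cp.sort(gpu_data)
cp.cuda.Stream.null.synchronize()      # wait for the kernel before stopping the timer
print(f"cupy sort:  {time.perf_counter() - start:.3f}s")
```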

Analyzing Possible Failure Modes - The Garage Door

We try to prevent or catch failures in all kinds of everyday tasks. The cat escapes. The dog eats the socks. Someone leaves the garage door up at night. Failure mode analysis (FMEA) works for all kinds of problems. We can apply a version of FMEA to the garage door problem, using it as an example that everyone understands while potentially solving an everyday problem.

The talk
I'm finally getting around to posting the content of this 2022 talk: Working through all the ways to fix the problem that I don't always close the garage door.

Talk content
These are the contents of the Keynote presentation used in the talk. See the talk for an explanation.

Related
This was part of a series of FMEA talks.
Blog articles: Throwing down failures http://joe.blog.freemansoft.com/2021/01/failure-mode-analysis-ste-one-throwing.html and Detection and remediation https://joe.blog.freemansoft.com/2021/02/failure-mode-analysis-step-two.html
Videos: Step 1: Throwing down failures https://youtu.be/R

Decoding common responses

Rating system for meetings, team reviews, code reviews, and disaster war rooms.
Bullshit - not true
Dogshit - terrible
Horseshit - nonsense
Apeshit - angry
Batshit - crazy
The shit - the best
Well shit - disbelief
Deep shit - big trouble

Additional meanings in other situations
No shit - you don't say?
Ain't shit - deadbeat
Good shit - this is awesome

Most of these were pulled from an @threads thread.

Revision History
Created 2024/04

Flutter messaging classes and their native platform peers

Communication between Flutter modules and mobile native code happens over platform channels. The docs are all over the place, so I created this cheat sheet for the different channel types and their class documentation. YouTube walkthrough video.

Mobile Native Communication
Communication between the mobile native code and Flutter happens over platform channels. Web communication happens via window messaging.

Flutter Channel types
There are three native platform channel types. V1 of this application uses the Message channel.
Method - invoke a method on the other side. Flutter to Native: yes. Native to Flutter: yes. Supports a return value via result.
Message - send a message to a remote listener. Flutter to Native: yes. Native to Flutter: yes. Supports a return value via reply.
Event - streams and sinks; events can flow in both directions. Flutter to Native: yes. Native to Flutter: yes. No return value.

Channel Implementation Classes
Dart/Flutter, Android, and iOS have corresponding `channel` classes for each channel type. The three platform channel implementatio

Migrating Native Applications to Flutter entrypoint by entrypoint

Organizations with significant investments in native applications will probably migrate to Flutter feature by feature or navigation flow by navigation flow. They may be able to migrate from the back of the app to the front of the application with all of the Flutter code in a single package or bound to a single ever-growing function set. The alternative approach is to migrate targets of opportunity in different locations across the existing application. Features-of-opportunity migration means we will probably end up with a set of unrelated features that get invoked at different points in the existing native application. We organize those work streams or flows into their own packages. Each of those packages is essentially its own mini Flutter program with its own main() function. Android and iOS native applications can import a single Flutter module that should contain all the Flutter functionality, so we need to include all of the various Flutter functions in that single module.

NVidia AI Studio Workbench is a containerized ML playground

NVidia AI Studio creates and manages containerized ML environments that isolate ML projects on local and remote machines. You no longer have to switch environments or remember which version of Python or Anaconda you are using in your global machine environment. NVidia simplifies the initial configuration by providing predefined image definitions containing Python, PyTorch, and other tools, to be used with or without NVidia graphics cards. The actual development is done via browser-based tools like JupyterLab notebooks. Workbench spins up local proxies that port-forward into the development container. See the videos below.

NVidia Workbench runs in a WSL instance
NVidia Workbench runs in its own WSL instance. Each project runs in its own Docker container. You can look at the NVidia main WSL instance by opening a shell into it. The following command can be run in a Windows terminal window:
wsl -d NVIDIA-Workbench
NVidia projects live in the WSL instance in /home/workbench/n
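From a JupyterLab cell inside one of these project containers, a quick sanity check (a sketch that assumes the project was built from one of the PyTorch base images) confirms the container really sees the card:

```python
# Sanity check from a notebook cell inside a Workbench project container.
# Assumes the project was built from a PyTorch/CUDA base image.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # A small matmul forces an actual kernel launch, not just a driver query.
    x = torch.rand(1024, 1024, device="cuda")
    print("Checksum:", (x @ x).sum().item())
```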

Install and use 'htop' when the Linux 'top' command isn't wide enough or is too hard to read

I was trying to troubleshoot an issue on an NVIDIA AI Workbench WSL instance, and the top program didn't show enough information. It was hard to read the monochrome output, and the command section was too narrow to show the full program launch commands with all their parameters. The NVIDIA VMs had htop installed, which colorized pieces of information and displayed at full widescreen width.

Installing htop on Ubuntu
Install with apt, in my case with sudo:
sudo apt update && sudo apt install htop

Installing htop in a WSL instance (Ubuntu)
Find the WSL instance: wsl -l -v
Shell into the VM using the wsl command: wsl -d <wsl instance name>
Install htop: sudo apt update && sudo apt install htop

Installing htop in a Docker Desktop WSL instance (Alpine)
I wanted to see what was happening in the WSL instance that runs my Docker containers, to see the CPU load and process details. Shell into the Docker Desktop WSL instance with wsl -d docker