Posts

Showing posts with the label Docker

Use an NVIDIA Container Image to verify container access to NVIDIA GPUs

Verify that your container can communicate with the GPU, and see which NVIDIA GPUs your Docker containers can access, using one of NVIDIA's container images. docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi Sample Output: The containers on this server have access to a single Titan RTX card. Source: This came from NVIDIA getting started with large language models. Revisions: Created 2024/07. Corrected NVIDIA capitalization 2025/08.
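Pulled out of the excerpt for readability, the check is a single command. It assumes the NVIDIA Container Toolkit is already installed on the Docker host; the `ubuntu` image is just a stock base image that happens to pick up the GPU tooling from the runtime:

```shell
# Requires the NVIDIA Container Toolkit on the Docker host.
# Runs nvidia-smi inside a stock Ubuntu container to list the GPUs
# that containers on this host can see.
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

If the toolkit is missing, Docker reports an unknown runtime error rather than a GPU listing, which is itself a useful diagnostic.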

A simplified version of Kubernetes Network for developing on Windows and WSL and Docker Desktop

Pictures always help. We can use this one to put together a basic description of how Windows, the WSL2 Linux distributions, Docker on WSL, and Kubernetes talk to each other over localhost. We'll cover why you have to do things like port-forwarding or proxying with a Kubernetes network. There is asymmetric behavior between the different components; I'm sure somebody knows the magical explanation. Video Walkthrough: Magical localhost networking with Docker WSL2 Windows and Linux. Heavily Edited Video Transcript: You have a Windows host with its ethernet adapter. Each Linux WSL2 instance has an eth0 network adapter. The Windows host and WSL instances are attached to a private network on the Ethernet Switch (WSL). Then you have the localhost adapter. WSL makes it look like localhost is visible to all WSL instances and to the Windows host. A port on WSL2 127.0.0.1 is also available on any other WSL2 instance and from the Windows host, all on 127.0.0.1. ...
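The port-forwarding the excerpt mentions can be sketched with kubectl; the service name and ports below are hypothetical, not taken from the post:

```shell
# Forward local port 8080 to port 80 of a (hypothetical) Kubernetes
# service. Because Windows and the WSL2 distributions share 127.0.0.1,
# http://localhost:8080 then works from Windows AND from any WSL2 shell.
kubectl port-forward service/my-service 8080:80
```

This is why port-forwarding "just works" across the Windows/WSL boundary even though the Kubernetes pod network itself is not directly reachable.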

Address Pools and Interfaces for Docker and WSL and Windows

Docker Desktop and WSL2 integration on Windows 10/11 "just works" in many situations but feels like magic. I needed a map of the networks and names to understand why I needed proxies, port forwarders, and projected ports. May this be useful to you also :-) The Windows 10/11 machine in this diagram is known as Z820. This diagram is an outside-looking-in topology. There are multiple networks and different name resolutions for the same names depending on where you are in the network. In some places, the same hostname resolves to different IPs depending on whether you use DNS or the host table, /etc/hosts. This diagram is a simplified version of the previous one with the WSL network and associated Linux installations removed. Video: YouTube Windows WSL2 Docker Node Pool and Desktop Networks and Names. The Six Networks in this diagram: The network IP ranges in the diagram above are those of the default Docker / WSL installations and can be adjusted via various mech...
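One of the adjustment mechanisms the excerpt alludes to is the Docker daemon configuration. A hedged sketch of overriding the default Docker address pools in daemon.json; the base range and subnet size here are illustrative, not the post's actual values:

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

With this in place, newly created Docker networks carve /24 subnets out of 10.200.0.0/16 instead of Docker's default pools, which helps when the defaults collide with a corporate VPN or the WSL network.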

Understanding your WSL2 RAM and swap - Changing the default 50%-25%

The Windows Subsystem for Linux operates as a virtual machine that can dynamically grow the amount of RAM to a maximum set at startup time. Microsoft sets the default maximum RAM available to 50% of the physical memory and a swap space that is 1/4 of the maximum WSL RAM. You can scale those numbers up or down to allocate more or less RAM to the Linux instance. The first drawing shows the default WSL memory and swap space sizing. The images below show a developer machine that is running a dev environment in WSL2 and Docker Desktop. Docker Desktop has two of its own WSL modules that need to be accounted for. You can see that the memory would actually be oversubscribed, 3 x 50%, if every VM used its maximum memory. The actual amount of memory used is significantly smaller, allowing every piece to fit. The second drawing shows the memory allocation on my 64GB laptop. WSL Linux defaul...
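The 50%/25% defaults can be overridden in the .wslconfig file in your Windows user profile directory. A sketch with illustrative values; pick numbers that fit your own machine:

```ini
[wsl2]
# Cap the WSL2 VM's RAM below the 50%-of-physical default (illustrative value)
memory=16GB
# Swap defaults to 25% of the WSL RAM cap; set it explicitly here (illustrative)
swap=4GB
```

A `wsl --shutdown` from Windows is needed before the new limits take effect, since the cap is read at VM startup.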

Single Node Kubernetes with Docker Desktop

You can run single-node Linux Kubernetes clusters with full Linux command line support using Docker Desktop and Windows WSL2. Docker Desktop makes this simple. This is probably obvious to everyone; I wrote this to capture the commands to use as a future reference. There is another article that discusses creating a multi-node Kubernetes cluster on a single machine using Kind: http://joe.blog.freemansoft.com/2020/07/multi-node-kubernetes-with-kind-and.html Windows 10 / 11 - WSL2 - Docker: The simplest way to run Kubernetes on Windows 10 is with the Docker Desktop and WSL2 integration. Docker will prompt to enable WSL2 integration during installation if you are running a late enough version of Windows 10 or any Windows 11. Prerequisites: Docker is installed with WSL2 integration enabled. The docker command operates correctly when you type a docker command in a WSL Linux command prompt. Install, Enable and Verify: Install Docker. Enab...
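Once Kubernetes is enabled in the Docker Desktop settings, a minimal verification from a WSL shell looks something like this (context name `docker-desktop` is what Docker Desktop creates by default):

```shell
# Point kubectl at the Docker Desktop single-node cluster
kubectl config use-context docker-desktop
# Expect a single node in the Ready state
kubectl get nodes
# System pods in kube-system should be Running
kubectl get pods --all-namespaces
```

If `kubectl` is not found inside WSL, Docker Desktop's WSL2 integration for that distribution is usually the thing to check first.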

Software Development in a Container - Coding by Copy - a Primer

Containers make it easy to set up a complex data scientist development environment. A developer can just spin up a Python, Jupyter Notebook, Spark, Hadoop, or another type of container on a local machine in minutes. Containers can be confusing when you first work with them. Here we talk a little about how you can get code and data into your container environment and how you can get it back out. I want to write code local to my laptop and run the code inside a fully configured Anaconda container. And, I'm lazy. Two ways to get code onto a container for development: Containers are standalone mini machines with private disk space, CPU, networking, and other services. They are not intended to retain state, something that we definitely want to do in a development environment. We need to get our code inside the container. We can do the same thing with data, or we can have our code pull the data in at runtime. We plan on doing all development on ...
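"Coding by copy" can be sketched with `docker cp`; the container name and paths below are illustrative, not taken from the post:

```shell
# Copy local code into a running container (names/paths are illustrative)
docker cp ./notebooks/. anaconda-dev:/home/work/notebooks

# ...edit and run inside the container, then copy results back out
docker cp anaconda-dev:/home/work/notebooks/results.ipynb ./notebooks/
```

The copy-out step matters: anything left only inside the container is lost when the container is removed, which is exactly the statelessness the excerpt warns about.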

Software Development in a Container - Mounting code into the container - A Primer

Containers make it easy to set up a complex data scientist development environment. A developer can just spin up a Python, Jupyter Notebook, Spark, Hadoop, or another type of container on a local machine in minutes. Containers can be confusing when you first work with them. Here we talk a little about how you can get code and data into your container environment and how you can get it back out. I want to write code local to my laptop and run the code inside a fully configured Anaconda container. And, I'm lazy. Two ways to get code onto a container for development: Containers are standalone mini machines with private disk space, CPU, networking, and other services. They are not intended to retain state, something that we definitely want to do in a development environment. We need to get our code inside the container. We can do the same thing with data, or we can have our code pull the data in at runtime. There are two primary ways of getting code onto a machine. We c...
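The mounting approach this post's title refers to can be sketched with a bind mount; the image and paths here are illustrative choices, not necessarily the post's:

```shell
# Mount the local source tree into the container so edits on the laptop
# are immediately visible inside it; image name and paths are illustrative.
docker run -it --rm \
  -v "$(pwd)/notebooks:/home/work/notebooks" \
  -p 8888:8888 \
  continuumio/anaconda3 /bin/bash
```

Because the directory is shared rather than copied, work survives the container being removed, sidestepping the copy-in/copy-out dance entirely.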

Capture Confluent Kafka Metrics easily with Jolokia, Telegraf, InfluxDB and Grafana (JTIG)

Kafka provides all kinds of metrics that can be used for operational support and tuning work. We can use Telegraf/Jolokia to capture metrics from the various Confluent broker nodes and put those metrics into an InfluxDB. We can then create Grafana dashboards for those metrics. Example Topology, Deployed Components: We run a custom Kafka Broker Docker Image built on top of the confluent/cp-server image. That image just adds the Jolokia JVM Agent jar file. Run a docker-compose.yml to start Kafka that enables Jolokia as a Java Agent. The agent URL is for demonstration purposes. The Telegraf standalone agent conf enables the Jolokia2 input adapter. The configuration file can be built into a new image, or the configuration file can be mounted inside the standard DockerHub Telegraf image. Telegraf runs with this new configuration file. Telegraf retrieves the data from the Jolokia REST endpoints and sends it to InfluxDB. Grafana provides visualization. It u...
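A hedged sketch of the Telegraf configuration described above, using the jolokia2_agent input plugin; the broker hostname, ports, and the sample MBean are illustrative:

```toml
# telegraf.conf fragment: poll the Jolokia JVM agent running in a Kafka broker.
# Hostnames, ports, and the sample MBean below are illustrative.
[[inputs.jolokia2_agent]]
  urls = ["http://kafka-broker:8778/jolokia"]

  [[inputs.jolokia2_agent.metric]]
    name  = "kafka_controller"
    mbean = "kafka.controller:name=ActiveControllerCount,type=KafkaController"

[[outputs.influxdb]]
  urls     = ["http://influxdb:8086"]
  database = "telegraf"
```

Telegraf polls the Jolokia REST endpoint for the listed MBeans on each collection interval and writes the resulting measurements to InfluxDB, where Grafana can query them.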

Azure PaaS is Dead - Death by Container

One of Azure's big initial innovations was their PaaS platforms. These services were a level up from most of Amazon's offerings. Microsoft appears to be in the process of killing off their traditional offerings. They are instead exposing the containerized underpinnings and forcing customers to do their own packaging and Docker image construction. This makes sense for large enterprises but means that teams need to understand the underlying O/S software modules and system components that may need to be installed in their containers. PaaS: PaaS makes it easier for less sophisticated teams to deploy sophisticated scalable applications. Azure PaaS lets customers deploy applications without worrying about the Operating System, system patching, CVE security scanning, load balancing, log aggregation, or other issues. The opacity of the underlying system meant teams had to know less and that teams couldn't know more if they wanted to. Microsoft modernized much of their Pa...

Automate pushing multiple Docker tags into DockerHub with hooks/post_push

Sometimes you have a situation where you want to push multiple tags when you push a Docker Image to Docker Hub using the Docker Hub build automation. We will override the default behavior to do this. Video links related to this blog are available down below. Default behavior: The default behavior is to build and push every time there is a change on master. That auto-built image is always tagged with :latest and pushed to the repository as :latest. Standard default build on master pushed with tag latest. Docker Hub supports overriding its build and deploy behavior with a set of hooks. Custom build files and build hooks are stored in /hooks of the code repository. Hooks are described at https://docs.docker.com/docker-hub/builds/advanced/ . You can see an example here: https://github.com/freemansoft/cp-server-jolokia Pushing Multiple Tags to Docker Hub: This is the approach we will take. The following variables are avail...
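A hedged sketch of what a hooks/post_push script can look like; Docker Hub's automated builds expose environment variables such as IMAGE_NAME and DOCKER_REPO to these hooks, though the extra tag value below is illustrative:

```shell
#!/bin/bash
# hooks/post_push -- runs after Docker Hub's automated build pushes :latest.
# IMAGE_NAME and DOCKER_REPO are supplied by the Docker Hub build environment.
docker tag "$IMAGE_NAME" "$DOCKER_REPO:7.0.1"   # extra tag value is illustrative
docker push "$DOCKER_REPO:7.0.1"
```

Committed as an executable file at hooks/post_push next to the Dockerfile, this runs automatically after the default :latest push, so the same build lands in the repository under both tags.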