
Showing posts from February, 2024

Using OpenTelemetry to send Python metrics and traces to Azure Monitor and App Insights

Microsoft has updated its Python libraries that let us send third-party library invocation metrics and traces, as well as application-specific custom metrics and traces, to Application Insights. Their OpenTelemetry exporters make it simple to route standard Python OpenTelemetry observations to Application Insights. You bootstrap the Azure configuration and credentials and then use standard OpenTelemetry calls to capture metrics and trace information. SpeedTest network metrics in Azure using OpenTelemetry: A couple of years ago I wrote a program to record the health of my home internet connection and store that information in Azure Application Insights. I did this with a Python program that leveraged Microsoft's OpenCensus Azure exporter. That library is becoming obsolete with the move from OpenCensus to OpenTelemetry. That project has now been ported to OpenTelemetry! It involved 6 hours and 40 lines of code, preceded by 20 hours of whining and internet surfing. The ...
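The wiring itself is only a few lines. Here is a minimal sketch, assuming the azure-monitor-opentelemetry distro package is installed and a connection string is available in the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable; the metric and span names are made up for illustration:

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics, trace

# Point the standard OpenTelemetry SDK at Application Insights.
# Reads the connection string from APPLICATIONINSIGHTS_CONNECTION_STRING.
configure_azure_monitor()

tracer = trace.get_tracer(__name__)
meter = metrics.get_meter(__name__)
download_mbps = meter.create_histogram("speedtest.download.mbps")  # hypothetical metric name

# Standard OpenTelemetry calls from here on; the Azure exporter handles delivery.
with tracer.start_as_current_span("speedtest.run"):
    download_mbps.record(382.5, {"server": "local-isp"})  # made-up measurement
```

Everything after configure_azure_monitor() is plain OpenTelemetry, which is the point: the Azure-specific code is limited to the bootstrap step.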

T-shirt sizing - Estimating workload and capacity without false precision

T-shirt sizing is a great way of estimating the relative size or level of effort of a set of tasks. Effort and cost discussions are significantly easier when decoupled from hours and dollars. It lets people focus on the problem definition and possible approaches without getting hung up on absolute values. Size some work items relative to each other: "This one feels maybe twice the size of this other one." Bin the sorted items into arbitrary groups (T-shirt sizes) such as Small, Medium, and Large based on the amount of work or complexity. New items are sized the same way when they are first introduced: they are compared to the existing backlog and slotted into the buckets. This process gives you a feel for work and relative complexity without resorting to the false precision of specific days and hours. Velocity and capacity can be extrapolated later, after tasks of various sizes are completed; a toy example is sketched below. Start with relative sizing and then iterate closer to absolute sizing as you know more. P...
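The extrapolation step can be as simple as counting what actually got done per iteration by size. A toy sketch of that idea in Python (my illustration, not from the original post; the sizes and counts are made up):

```python
from collections import Counter

# Completed items per iteration, recorded only by T-shirt size (hypothetical data).
completed = [
    ["S", "S", "M", "L"],       # iteration 1
    ["S", "M", "M"],            # iteration 2
    ["S", "S", "S", "M", "L"],  # iteration 3
]

# Average completions per size per iteration gives a rough, relative velocity
# without ever assigning hours or days to any individual item.
totals = Counter(size for iteration in completed for size in iteration)
velocity = {size: round(count / len(completed), 2) for size, count in totals.items()}
print(velocity)  # e.g. {'S': 2.0, 'M': 1.33, 'L': 0.67}
```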

NVIDIA Broadcast - Eye Contact - No more looking down in videos

NVIDIA added a new beta feature to NVIDIA Broadcast that manipulates the speaker's eye position to make it appear as if they are looking at the camera instead of down at the prompt or content. It isn't perfect, but it really helps. You can see the effect in the two pictures below. This has been around for a while but I never noticed: NVIDIA added support in January 2023, and Camtasia added support for NVIDIA Broadcast in Camtasia 2022 update 2 or 3. Virtual camera works with other programs like Camtasia: NVIDIA Broadcast acts as a virtual camera that can be used in programs like Camtasia 2022. You'll want the latest version of Camtasia. NVIDIA design and performance information: https://developer.nvidia.com/blog/improve-human-connection-in-video-conferences-with-nvidia-maxine-eye-contact/ Revision History: Created 2024 02

Want to run a local LLM? How much memory does your NVIDIA RTX have?

Want to run LLMs locally? The new NVIDIA Chat with RTX requires 8GB of VRAM. Other language models require even more. You can use the Windows utility 'dxdiag' to see how much memory your NVIDIA RTX card has. Press Windows-R, or click on the Windows key and type in the search bar, and enter dxdiag. Click on Display, or in my case Display 1. We're looking for the amount of Display Memory (VRAM). I have 12,086 MB, or roughly 12 GB, so I'm good to go. Machine learning models suddenly made me care how much VRAM my NVIDIA graphics card has. I can never remember the exact model, let alone how much VRAM it has. Now I have this handy article to remind me how to find the numbers I need. Revision History: 2024 02 created
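If you prefer a scripted check over the dxdiag GUI, the NVIDIA driver's nvidia-smi tool reports the same number. A minimal Python sketch (my addition; it assumes nvidia-smi is on the PATH, which is normally the case after a driver install):

```python
import subprocess

# Ask the NVIDIA driver for the card name and total VRAM in CSV form.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    name, vram = (field.strip() for field in line.split(","))
    print(f"{name}: {vram}")  # e.g. "NVIDIA GeForce RTX 3060: 12288 MiB"
```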

Supporting command line arguments in Dart programs

Dart programs, really all programs, are either hardcoded against certain targets and configurations or they are configurable at startup. Not sure about you, but the word "hardcoded" always sounds bad. The most common ways of providing configuration information are probably command-line arguments, environment variables, properties files or other accessible storage, and properties services. The approach you pick depends on the ephemeral nature of the program, the environment you run in, and whether the configuration can change while the program is running. Sometimes you use a combination: one configuration method at bootstrap, and then a different one as the program continues running. In all command-style programs, I've found that I need some way to bootstrap the process. It almost always has command-line arguments or environment variables. Environment variables are great but can be a bit opaque depending on how the program is run. My go-to is to include run-time argument suppo...
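The post itself covers Dart, but the bootstrap pattern is language-agnostic. Here is a minimal sketch of "command-line argument with environment-variable fallback" in Python (my illustration only, not the Dart code from the post; the flag, variable, and default are made up):

```python
import argparse
import os

# Bootstrap configuration: prefer a command-line flag, fall back to an
# environment variable, then to a built-in default (all names hypothetical).
parser = argparse.ArgumentParser(description="Demo of startup configuration")
parser.add_argument(
    "--api-url",
    default=os.environ.get("DEMO_API_URL", "http://localhost:8080"),
    help="Target API endpoint (falls back to $DEMO_API_URL)",
)
args = parser.parse_args()
print(f"Using API endpoint: {args.api_url}")
```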

Creating Service Accounts for programmatic access to Google Drive APIs

Google Drive is one of those cloud technologies that democratized cloud access to data storage. It lets you securely push all kinds of data into and out of the cloud via Google-provided APIs. APIs and documents are bound to permissions, roles, and identities. Programs accessing Google Docs require credentials, preferably least-privilege credentials that exist just for a single program's needs. Google IAM supports service accounts that are not tied to any human. They can be enabled and disabled without impacting individual users. Accessing Google Docs via API means you have to enable the Google Drive API in a project, create an identity and credentials for the program, and then give that identity access to the docs or the API. There are plenty of good tutorials that walk you through setting up an account, but they are often light on the overall process or how the steps tie together. It can be confusing the first time, or ten, you go through it. Google services are...
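Once the service account exists and its JSON key has been downloaded, the programmatic side is short. A minimal sketch using the google-auth and google-api-python-client libraries (my addition; the key file name and scope are examples, and the target files must be shared with the service account's email address):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Load the service account's downloaded JSON key (hypothetical file name).
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "my-service-account-key.json", scopes=SCOPES
)

# Build a Drive v3 client and list files the service account can see.
drive = build("drive", "v3", credentials=creds)
results = drive.files().list(pageSize=10, fields="files(id, name)").execute()
for f in results.get("files", []):
    print(f["name"], f["id"])
```

The listing will be empty until documents or folders are shared with the service account, which is the "give the identity access" step described above.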