Posts

What do we want out of a load or performance test?

We use performance tests to verify the raw throughput of individual subsystems and to verify the overall impact a subsystem has on the entire ecosystem. Load tests act as documentation for performance indicators and reinforce performance expectations. They are vital in identifying performance regressions. Load and performance tests are an often overlooked part of the software release lifecycle.

Load tests, at their most basic level, stress a system by dropping a lot of work onto it. Sometimes that is a percentage of expected load, sometimes it is the expected load, and sometimes it is a future expected level of load. A failure to test expected near-term load can lead to spectacular public failures.

Video

Measurements
Your business requirements determine your throughput and latency targets. Your financial requirements constrain the choices you make towards achieving those goals.
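As a minimal illustration of "dropping a lot of work onto a system", here is a sketch of a load generator using only the Python standard library. The target URL, worker count, and request total are placeholder assumptions you would replace with your own expected-load numbers; real load tests use purpose-built tools, but the shape is the same.

    # Minimal load-generator sketch: fire N requests at a URL from a thread
    # pool and report latency percentiles. All numbers are placeholders.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
    WORKERS = 20        # concurrent clients
    REQUESTS = 500      # total work dropped on the system

    def timed_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))

    print("p50 %.3fs  p95 %.3fs  max %.3fs" % (
        statistics.median(latencies),
        latencies[int(len(latencies) * 0.95)],
        latencies[-1],
    ))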

Creating Features in Python using sliding windows

The first step in using ML for intrusion detection is the creation of features that can be used in training and detection. I talk in another blog post about creating features from sliding-window-bound statistics of packet streams. We can walk through the steps using this GitHub repository, which contains Python code that creates features from Wireshark/tshark packet streams. The program accepts live tshark output or tshark streams created from .pcap files.

Network Traffic into Sliding Windows
The example program requires Python and Wireshark/tshark. The Python code uses 4 multiprocess tasks, making this essentially a 5-core process. It is 100% CPU bound on a 4-core machine, so I suspect it will run faster on a hex-core or above. There was a tshark+3-task version that ran 15% faster while consuming 85% of a 4-core machine. The Python modules/processes communicate via multiprocessing queues.
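A minimal sketch of that pipeline shape, assuming tshark is piped into stdin; the field parsing and worker logic below are placeholders, not the repository's actual code.

    # Sketch: read tshark lines on stdin, fan work out to worker processes
    # via a multiprocessing Queue. Parsing/feature logic are placeholders.
    import sys
    from multiprocessing import Process, Queue

    def worker(q):
        while True:
            line = q.get()
            if line is None:           # sentinel: no more packets
                break
            fields = line.split("\t")  # assumes tab-separated tshark -T fields output
            # ... compute per-packet statistics here ...

    if __name__ == "__main__":
        q = Queue()
        procs = [Process(target=worker, args=(q,)) for _ in range(4)]
        for p in procs:
            p.start()
        for line in sys.stdin:         # e.g. tshark -T fields ... | python this_script.py
            q.put(line.rstrip("\n"))
        for _ in procs:
            q.put(None)                # one sentinel per worker
        for p in procs:
            p.join()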

Network Intrusion Features via Sliding Time Windows

Feature creation is one of the first steps towards creating machine learning models that apply to network monitoring or other stream-oriented data processes. We massage independent variables into a form that can be used by ML models or other statistical tools. This often involves transforming source data through numerical conversion, bucketing, aggregation, and other techniques.

For this project, we'd like to try to train a machine model to detect intrusion events by having it look at network traffic. People sometimes try to directly consume events as inputs, but an individual network packet does not contain enough context to be useful on its own. A sliding time window makes it possible to create features with more context than you would get from a single message.

This GitHub repository contains Python code that creates features from Wireshark/tshark packet streams. The program accepts live tshark output or tshark streams created from .pcap files.
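A small sketch of the sliding-time-window idea, assuming packets arrive as (timestamp, byte_length) tuples; the window width and the chosen features are arbitrary illustrations, not the project's actual feature set.

    # Sketch: keep a deque of packets inside a fixed time window and emit
    # simple per-window features as each packet arrives.
    from collections import deque

    WINDOW_SECONDS = 5.0   # arbitrary window width for illustration

    def window_features(packets):
        """packets: iterable of (epoch_seconds, byte_length), time-ordered."""
        window = deque()
        for ts, length in packets:
            window.append((ts, length))
            # drop packets that have slid out of the window
            while window and ts - window[0][0] > WINDOW_SECONDS:
                window.popleft()
            total = sum(l for _, l in window)
            yield {
                "packet_count": len(window),
                "total_bytes": total,
                "mean_bytes": total / len(window),
            }

    # Example: three packets, one second apart
    for feats in window_features([(0.0, 60), (1.0, 1500), (2.0, 40)]):
        print(feats)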

Visualizing Covid Vaccinations - Python data prep and steps

We want to plot Covid vaccination rates across different countries worldwide or different states in the USA, so we need a standardized dataset that is accurate enough for our graphing purposes. The folks at Our World in Data (OWiD) gather that information to create composite datasets. Each independent entity reports data on its own schedule, so the composite dataset can be missing entire days of data for some entities, or individual data attributes on days that are actually reported. Let's look at the steps required to create reasonable comparisons and progress graphics.

Source Data and Code
Dataset courtesy of Our World in Data: GitHub Repository
Python code and scripts described here are available on GitHub
Videos: links at the bottom of this article

Data Consistency
We want time-series data that lets us exactly line up the data for each reporter. This table shows two different countries, C1 and C2. They each report data on their own schedules. C1 does…
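One common way to handle reporters on different schedules is to reindex each one onto a shared daily date range and forward-fill the gaps. Here is a hedged pandas sketch of that step; the column names and values are made up, not the OWiD schema.

    # Sketch: align two countries that report on different schedules onto a
    # shared daily index, forward-filling missing days. Columns are made up.
    import pandas as pd

    raw = pd.DataFrame({
        "country": ["C1", "C1", "C2"],
        "date": pd.to_datetime(["2021-01-01", "2021-01-04", "2021-01-02"]),
        "vaccinations": [100, 400, 250],
    })

    full_range = pd.date_range(raw["date"].min(), raw["date"].max(), freq="D")
    aligned = (
        raw.set_index("date")
           .groupby("country")["vaccinations"]
           .apply(lambda s: s.reindex(full_range).ffill())
    )
    print(aligned)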

Monitor Internet Broadband service with a Raspberry Pi 4 and some Python

You can easily automate capturing broadband connection statistics with some Python code running on a Raspberry Pi, a Mac, or a PC. I used a Raspberry Pi 4 as my test appliance because it is cheap and can support 1 Gb/s Ethernet connections. That means it is fast enough to service most residential or low-end commercial connections. I'm lazy and wanted the data to end up in a secure public cloud that could be populated and viewed from anywhere. We can send our broadband statistics from one or more locations and graph the different locations against each other. Any tool could be used.

Monitoring One or Comparing Two
We wanted to compare two different internet providers' service levels. One provider is FIOS at 1 Gb/s down / 1 Gb/s up. The other is a cable service at 1 Gb/s down / 50 Mb/s up. The providers and the technology were different. We wanted to know if the complaints about one of the providers were valid.

Relies on Speedtest.net infrastructure
We're going to leverage the popular…
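A single measurement looks roughly like the sketch below, using the third-party speedtest-cli package (pip install speedtest-cli); the call names are that package's public API as I understand it, and the print formatting is my own.

    # Sketch: one speedtest.net measurement via the speedtest-cli package.
    # Run this from cron on the Pi every N minutes to build a history.
    import speedtest

    st = speedtest.Speedtest()
    st.get_best_server()   # pick the closest/lowest-latency test server
    st.download()          # measure download, bits per second
    st.upload()            # measure upload, bits per second

    results = st.results.dict()
    print("ping %.1f ms, down %.1f Mb/s, up %.1f Mb/s" % (
        results["ping"],
        results["download"] / 1_000_000,
        results["upload"] / 1_000_000,
    ))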

Querying Python logs in Azure Application Insights

You can send your Python logs to Azure Application Insights from anywhere and then leverage the Application Insights query and dashboard capabilities to do log analysis. Getting access to the logs is trivial. I wanted to plot basic internet performance information from data generated by two different machines in two different locations. The source code is on GitHub at freemansoft/speedtest-app-insights. That project runs speedtest.net measurements and then posts them to Azure Application Insights. It logs the raw data when the --verbose switch is set. That verbose output is sent to Azure App Insights.

Execution pre-requisites
You have an Azure login
You have created an Azure Application Insights Application key https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource
You have pushed data to Application Insights. I used https://github.com/freemansoft/speedtest-app-insights with the --verbose switch

Video walkthrough not yet available

Data Capture Notes…
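For readers who want the shipping side in one glance, here is a hedged sketch using the opencensus-ext-azure exporter's AzureLogHandler; the instrumentation key is a placeholder and the custom_dimensions values are made up, not the project's actual fields.

    # Sketch: ship Python logs to Azure Application Insights with the
    # opencensus-ext-azure exporter (pip install opencensus-ext-azure).
    import logging
    from opencensus.ext.azure.log_exporter import AzureLogHandler

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    # <your-key> is a placeholder for your Application Insights key
    logger.addHandler(AzureLogHandler(
        connection_string="InstrumentationKey=<your-key>"))

    # custom_dimensions become queryable customDimensions in App Insights
    logger.info("speedtest run complete",
                extra={"custom_dimensions": {"ping_ms": 23.5, "host": "pi-basement"}})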

Querying Python Metrics custom tags as CustomDimensions in Azure Application Insights

Azure Application Insights can be a collection point for Python metrics that you can query and filter against. We can send OpenCensus metrics from any data center into Azure Application Insights. This lets us see our program events from anywhere that can reach the Azure console, providing a zero-admin performance console. We can add custom dimensions (attributes) to every metrics record we send to Azure Application Insights. The OpenCensus Azure exporter sends tags to Azure Application Insights as CustomDimensions.

Execution pre-requisites
You have an Azure login
You have created an Azure Application Insights Application key https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource
You have pushed data to Application Insights. I used https://github.com/freemansoft/speedtest-app-insights

Video walkthrough…
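A minimal sketch of recording a metric with a tag that the Azure exporter surfaces as a CustomDimension; the measure, view, tag names, and key are all illustrative assumptions, not the project's actual definitions.

    # Sketch: record an OpenCensus metric with a tag; the Azure exporter
    # surfaces tags as customDimensions you can filter on in queries.
    from opencensus.ext.azure import metrics_exporter
    from opencensus.stats import aggregation, measure, stats, view
    from opencensus.tags import tag_map

    m_ping = measure.MeasureFloat("ping_time", "speedtest ping time", "ms")
    v_ping = view.View("ping_time_view", "ping by host",
                       ["host"],                  # tag key -> customDimension
                       m_ping, aggregation.LastValueAggregation())

    stats.stats.view_manager.register_view(v_ping)
    stats.stats.view_manager.register_exporter(
        metrics_exporter.new_metrics_exporter(
            connection_string="InstrumentationKey=<your-key>"))  # placeholder

    mmap = stats.stats.stats_recorder.new_measurement_map()
    tags = tag_map.TagMap()
    tags.insert("host", "pi-basement")            # becomes customDimensions.host
    mmap.measure_float_put(m_ping, 23.5)
    mmap.record(tags)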

Displaying Python Metrics in Azure Application Insights

We can capture Python performance metrics in Azure Application Insights. This lets us see our program performance from anywhere that can reach the Azure console. I've used this to capture a variety of Python data manipulation and process timings without having to stand up any metrics databases or dashboards. I wanted to plot basic internet performance information from data generated by two different machines in two different locations. The source code is on GitHub at freemansoft/speedtest-app-insights. That project runs speedtest.net measurements and then posts them to Azure Application Insights. We can create charts for any of the data gathered as part of this process.

Target Graphic
We want to create a graphical tile that shows our connection ping time broken out per test machine. The program code above posts new ping time data every 5 minutes. This graphic shows the ping results for the last 4 hours.

Execution pre-requisites
You have an Azure login
You have created a…
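To match the chart's 5-minute cadence, the posting side is just a loop. This sketch assumes the measure, measurement map, and tag map from the metrics sketch above; take_measurement is a hypothetical callable that returns the latest ping time.

    # Sketch: post a new ping measurement every 5 minutes, matching the
    # chart's reporting interval. Dependencies are passed in as parameters.
    import time

    INTERVAL_SECONDS = 300   # the article's 5-minute reporting interval

    def run_forever(mmap, m_ping, tags, take_measurement):
        while True:
            mmap.measure_float_put(m_ping, take_measurement())  # e.g. speedtest ping
            mmap.record(tags)
            time.sleep(INTERVAL_SECONDS)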

Failure Mode Analysis - Step Two - Detection and Remediation

We evaluate the identified possible faults and issues to determine how we can detect each failure and how we can remediate it. For this discussion, we will bucket the failure modes into three types, which can help us determine how they can be detected. We will categorize failures as technical, design-time, and business failures. We can use the category to determine how we wish to remediate each failure. Some of the business rule failures will be handled "by policy" and their remediation will happen in the business departments. The other failures will be remediated via technical means.

Capturing - Detection and Remediation
We want to fill in the Detection and Remediation columns. You can tune the meanings of these columns to your use case. For this walkthrough, we sweep across all the faults to determine how each fault would actually be detected and then how we would remediate it: permanently, tactically, manually, or transiently.

Detection
Classify how this can be detected…