Wednesday, December 23, 2015

Enabling Microsoft Application Insights for Mule ESB monitoring

Microsoft Azure Application Insights requires Mule 3.7 or later. The Application Insights SDK depends on Apache HttpClient and HttpCore versions that are first bundled with Mule 3.7.

Application Insights is an Azure based application performance dashboard that can monitor applications deployed inside, or outside, Azure.  Application Insights SDKs are available for a variety of languages with a heavy focus on standard library web driven applications or services.

This blog entry describes how easy it is to enable Application Insights for a Mule ESB application that does not use any of the out-of-the-box supported web hooks. In this case, we monitor the out-of-the-box JMX beans provided by Mule. Performance information is gathered by Application Insights and displayed in the Azure Portal.

Mule exposes performance data about applications and flows via JMX.  Any of this data can be forwarded to the Application Insights dashboard.


  1. Create an Application Insights instance in Azure.
  2. Enable Application Insights in the Mule application.
  3. Bind the Mule application and Application Insights with a shared Instrumentation Key.
  4. Select the Mule JMX data to be displayed in the Application Insights dashboard.
  5. Configure the Mule application to send Mule JMX data to Application Insights.
  6. Run the application.
  7. Configure the Application Insights instance dashboard to show the enabled telemetry and metrics.

Create an Application Insights instance in Azure

Create an Azure Application Insights instance as described in the previous blog entry.

Enable Application Insights

Create a Spring bean that loads the Telemetry Configuration. Put this Spring bean in one of your XML configuration files. Instructions can be found in the previous blog entry.

Bind a Mule application to Application Insights

Create an ApplicationInsights.xml file and add the InstrumentationKey found in the Azure dashboard. Instructions can be found in the previous blog entry. You can find an example ApplicationInsights.xml file below.

Select Mule application data to send to Application Insights

Determine which JMX beans you wish to expose to Application Insights and add those to your configuration.
Use a JMX inspection tool to see the information available in a Mule application. We will use JVisualVM in this example. These screenshots are based on the example application below.
  1. Run JVisualVM from a command prompt.
  2. Add the MBeans plugin and restart JVisualVM. This has to be done only once.
  3. Select your running application in the left pane.
  4. Select the MBeans tab at the top of the right-hand pane.
  5. Select Mule.<appname> in the right tab. This will show you the various application- and flow-level metrics that are available. This example app only has one flow, so flow and application statistics will be mostly identical.
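The same kind of inspection can be scripted with the JDK's standard javax.management API. This sketch uses only the plain JDK (no Mule or Application Insights dependencies); in a running Mule application, the Mule.<appname> domain beans would appear in the same query that lists the standard java.lang beans here.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxInspect {
    public static void main(String[] args) throws Exception {
        // The platform MBean server is the same registry JVisualVM browses
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // List every registered bean; a Mule app adds Mule.<appname> entries here
        for (ObjectName name : server.queryNames(null, null)) {
            System.out.println(name);
        }

        // Read a single attribute the same way a JMX monitor would
        ObjectName threading = new ObjectName("java.lang:type=Threading");
        System.out.println("ThreadCount=" + server.getAttribute(threading, "ThreadCount"));
    }
}
```

Any ObjectName/attribute pair you find this way can be added to the Application Insights configuration.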

The target bean is nested two levels deep in the hierarchy. This makes for slightly complicated naming conventions in the JMX configuration.


We will monitor the Average Processing Time, Max Processing Time and number of Processed Events for the flow in this single-flow application.

  • ObjectName: Mule.muleappinsightexample:type=Flow,name=muleappinsightexampleflow
  • Attribute: AverageProcessingTime
  • Average: should stay within operational ranges.
  • ObjectName: Mule.muleappinsightexample:type=Flow,name=muleappinsightexampleflow
  • Attribute: MaxProcessingTime
  • Raw number: grows without bound over time.
  • ObjectName: Mule.muleappinsightexample:type=Flow,name=muleappinsightexampleflow
  • Attribute: ProcessedEvents
  • Raw number: grows without bound over time.

Configure Mule to send Mule application data

Add the monitors to the Mule application's ApplicationInsights.xml file. This example monitors basic flow data.

... ...
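The configuration itself is elided above. As a rough sketch of what the JMX monitor section of ApplicationInsights.xml might look like for these three flow attributes (the instrumentation key is a placeholder, the object and display names are assumptions based on this example application, and the &quot; entities escape the quotes that Mule puts in flow names):

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- Placeholder key; use the Instrumentation Key from your Azure dashboard -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
  <PerformanceCounters>
    <Jmx>
      <!-- Flow names are quoted in the JMX ObjectName, hence the escaped quotes -->
      <Add objectName="Mule.muleappinsightexample:type=Flow,name=&quot;muleappinsightexampleflow&quot;"
           attribute="AverageProcessingTime" displayName="Flow Average Processing Time"/>
      <Add objectName="Mule.muleappinsightexample:type=Flow,name=&quot;muleappinsightexampleflow&quot;"
           attribute="MaxProcessingTime" displayName="Flow Max Processing Time"/>
      <Add objectName="Mule.muleappinsightexample:type=Flow,name=&quot;muleappinsightexampleflow&quot;"
           attribute="ProcessedEvents" displayName="Flow Processed Events"/>
    </Jmx>
  </PerformanceCounters>
</ApplicationInsights>
```

Each Add entry turns one JMX attribute into a custom metric under the given display name.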

Sample Application Flow

This example was written against a very simple Mule HTTP echo type application.

The corresponding Mule XML contains the ApplicationInsights bootstrap Spring configuration and the flow itself. It looks like:

... ...
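The flow XML is elided above. As a sketch of what a minimal Mule 3.7 HTTP echo application with the Application Insights bootstrap bean might look like (namespaces abbreviated, schemaLocation omitted, and the listener config, port and path are assumptions, not taken from the original):

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:spring="http://www.springframework.org/schema/beans">
  <!-- schemaLocation declarations omitted for brevity -->

  <!-- Bootstrap bean: forces ApplicationInsights.xml to load at startup -->
  <spring:beans>
    <spring:bean id="telemetryConfiguration"
                 class="com.microsoft.applicationinsights.TelemetryConfiguration"
                 factory-method="getActive"/>
  </spring:beans>

  <http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081"/>

  <flow name="muleappinsightexampleflow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/echo"/>
    <!-- Echo the inbound payload back to the caller -->
    <echo-component/>
  </flow>
</mule>
```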

WARNING: Mule flow name JMX beans include double quotes. This means you must add escaped quotes to the name value by wrapping the flow name with &quot; entities.

Monitors in Application Insights

Charts can be created for any custom metric. Just create a chart and then bind one or more custom metrics to the chart by selecting them in the chart details.

Each custom metric can be added to the Application Insights dashboard. This image shows the three Mule flow JMX values exposed by this example program.

2015 12 23 Created 

Monday, December 21, 2015

Enabling Microsoft Application Insights for JMX monitoring of a Spring wired Java application

Application Insights is an Azure based application performance dashboard that can monitor applications deployed inside, or outside, Azure.  Application Insights SDKs are available for a variety of languages with a heavy focus on standard library web driven applications or services.

This blog entry describes how easy it is to enable Application Insights for a Spring wired Java application that does not use any of the out-of-the-box supported web hooks. In this case, we are enabling Java / JMX monitoring of a custom Spring application running in my home lab. Performance information is gathered by Application Insights where it is displayed in the Azure Portal.

I did this work as part of building a message-driven application that had no true web interface. The application runs in a container that does not support Tomcat or the web filters normally used to enable Application Insights.

Microsoft does a good job of describing how to monitor Java Tomcat, Struts, Spring MVC and other standard web applications on their web site.


  1. Create an Application Insights instance
  2. Enable Application Insights in a Java application, binding the application and dashboard with a shared Instrumentation Key
    1. Create a Spring Bean that enables Application Insights for a non-MVC/non-Struts application.
    2. Create an ApplicationInsights.xml file that adds monitors for standard Java VM statistics exposed via JMX.
  3. Run the Application
  4. Configure the Application Insights instance dashboard to show the enabled Telemetry and metrics.

Create an Application Insights instance.

Step 1 is to create an Application Insights instance. Use this Microsoft Azure blog article, Getting Started with Application Insights in a Java web project, to create an Application Insights instance for your new application. Application Insights gathers metrics and telemetry information from applications based on a set of monitors. You can use the same Application Insights instance for any number of components if you keep the names of the monitors unique across the application components. I am monitoring a single application for the purposes of this exercise.

Log into your Azure portal and create a new Java web application instance of Application Insights. My application isn't a true Java web application, but "Other" is experimental so I went with it. As of 2015 Dec 20, all Application Insights instances seem to be in Central US.

Find the Insights instance key.  This key binds your application to this instance/dashboard.

Do not check this key into any public repository or otherwise expose it. It can be used by any application to push events to your dashboard. Azure Application Insights is a metered service, so messages cost you money in a non-development environment.

Enable JVM metrics in a Spring Application

We are going to enable Application Insights metrics for a standalone Spring application or one that runs on a semi-custom platform like the Mule ESB, which has its own filter/web configuration. This assumes that you are not able to enable monitoring through the web filter or other methods described in the Microsoft documentation. The Application Insights team documentation describes how to monitor web applications running with a variety of containers and frameworks.

Make sure to add the ApplicationInsights jar files to your project dependencies. The previously mentioned Microsoft Azure blog article describes how to do this.

Add a Telemetry Configuration Spring bean

Create a TelemetryConfiguration instance by using Spring to call TelemetryConfiguration.getActive(). This forces the loading of the ApplicationInsights.xml configuration file that you put in src/main/resources.

... ...
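The bean definition is elided above. One minimal way to express it (a sketch; the bean id is an assumption) is a factory-method bean in your Spring XML configuration:

```xml
<!-- Calling the static TelemetryConfiguration.getActive() factory method at
     context startup loads ApplicationInsights.xml from the classpath -->
<bean id="telemetryConfiguration"
      class="com.microsoft.applicationinsights.TelemetryConfiguration"
      factory-method="getActive"/>
```

The bean itself is never injected anywhere; its only job is to trigger the configuration load when the Spring context starts.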

Create an ApplicationInsights.xml file

This sample XML config file sends basic JVM statistics to the Application Insights instance bound by the included key. This configuration gathers information about the number of loaded classes, the amount of heap memory used and the number of threads in this VM.

Microsoft examples show how to monitor heap and cpu. Any exposed JMX monitors can be added.  The following example adds an active thread count monitor.

You can use the JVisualVM MBean inspector to find other information that can be added to your ApplicationInsights dashboard. 

... ...
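The file itself is elided above. A sketch of what such an ApplicationInsights.xml might look like follows; the key is a placeholder, the display names are assumptions, and the SDKLogger section enables the console trace output whose format appears in the log excerpt further down.

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- Placeholder key; never commit your real key to a public repository -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
  <!-- CONSOLE logging at TRACE level prints each metric as it is sent -->
  <SDKLogger type="CONSOLE">
    <Level>TRACE</Level>
  </SDKLogger>
  <PerformanceCounters>
    <Jmx>
      <!-- Standard JVM beans: loaded class count and live thread count -->
      <Add objectName="java.lang:type=ClassLoading" attribute="LoadedClassCount"
           displayName="Loaded Class Count"/>
      <Add objectName="java.lang:type=Threading" attribute="ThreadCount"
           displayName="Thread Count"/>
    </Jmx>
  </PerformanceCounters>
</ApplicationInsights>
```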

WARNING: Some JMX beans include double quotes in their paths, names or attributes. Elements with quotes in their names must be surrounded by escaped quotes. This is done in the config file by wrapping the value with &quot; entities.

Run the Application.

The previous configuration enables debug output for testing. You should see trace data like the following show up in your logs if trace logging is enabled:

AI: TRACE 21-12-2015 07:52, 17: Metric JMX: Heap Memory Usage-used, 4.2598904E7
AI: TRACE 21-12-2015 07:52, 17: Metric JMX: Loaded Class Count, 8150.0
AI: TRACE 21-12-2015 07:52, 17: Metric JMX: Thread Count, 53.0
AI: TRACE 21-12-2015 07:52, 17: Sent performance counter for 'Processor % Processor Time'(Processor, % Processor Time, _Total): '3.7053394317626953'
AI: TRACE 21-12-2015 07:52, 17: Sent performance counter for 'Process IO Data Bytes/sec'(Process, IO Data Bytes/sec, javaw): '20.875185012817383'
AI: TRACE 21-12-2015 07:52, 17: Sent performance counter for 'Memory Available Bytes'(Memory, Available Bytes, ): '9.881853952E9'
AI: TRACE 21-12-2015 07:52, 17: Performance Counter: Process Private Bytes: 1.14066984E8
AI: TRACE 21-12-2015 07:52, 17: Performance Counter: Process % Processor Time: 0.06347814202308655

Configure the Dashboard

JMX beans show up in the Application Insights dashboard as custom metrics.  Add new graphs for those metrics or re-task the existing graphs to display your JMX information. The image to the right shows where JMX information appears as Custom metrics in the configuration screen.

You can pick the metrics in each graph by selecting the graph and then selecting the metrics you want for that graph in the Chart Details section to the right of the graph.

More in the future

This article is already pretty long.  I'll show how to add the metrics to the insights dashboard in a future posting.  This blog was driven by a desire to add Application Insights monitoring to a Mule ESB application.

2015 12 21 Created
2015 12 23 Added Escaped values and bean hierarchy GC example

Thursday, August 27, 2015

How TFS 2013 Calculates Story points for Velocity Graphs

TFS 2013 grafts Agile on top of its traditional task model.  The TFS web interface is significantly more functional than in previous iterations.

Sprint-to-Sprint Velocity

TFS Scrum and Agile templates automatically calculate per sprint velocity based on the Story Points of User Stories with the current iteration path. Setting the Iteration Path essentially commits the team to the User Story for that sprint. The following diagram shows the number of story points for User Stories committed to over the last 5 sprints.

  • Green represents story points for Completed stories.
  • Blue represents story points for Active stories. Previous sprints can show some blue if stories were left open at the end of those sprints.

TFS Story Point Calculation Commitment Issues

Teams commit to certain stories in the sprint by putting them in the sprint Iteration Path. They base their commitment on the sum of the story points for those stories. Many teams commit to User Stories but leave those stories in the New state until work has actually started. This makes it easy to see which stories are being swarmed on and which ones have not yet been started. Teams move User Stories to the Active state when work actually starts and finally to the Closed state when they meet the Definition of Done (DoD).

TFS shows story-point-based velocity for only Active and Completed User Stories. It ignores all stories in the New state. This can make it difficult to determine how many stories can be brought in during sprint planning. Teams either have to sum the story points for User Stories on the current Iteration Path or they need to set all the committed User Stories to Active.

Story Points vs Hours and Stories vs Tasks

Story points represent the amount of required work based on a relative scale of the types of work in the backlog. They represent a best guess at the amount of work relative to other work. TFS story points correlate to some number of hours but are used to avoid the common problem of false precision in estimates.

TFS tasks are estimated in hours. They represent a best guess at hours for general sizing. Sprint burn-down charts are based on task/hour burn-downs. Management often treats TFS task hour estimates as exact when the reality is that they are best-guess estimates.

Friday, July 31, 2015

Cloning a Mac / Bootcamp disk to a larger drive

OWC and others recommend that you do a fresh installation when moving a Mac with Windows BootCamp to a larger drive.  That is probably a good idea.

Sometimes you just decide to do things the hard way. This is what I did for my Macbook Air when I bought a larger SSD.  It had OS/X on the first partition and Windows 7 on the second partition.

  1. Backup your data.
  2. Create a Clonezilla bootable thumb drive.
  3. Get an OWC drive with an external case. Plug the new drive into the external case.
  4. Boot Clonezilla from the thumb drive. 
  5. Clone the internal disk to the new, larger, external disk.
    1. Make an exact copy of the disk using Clonezilla.
    2. This exact copy will not make use of the full drive.
  6. Power off the machine
  7. Remove the old disk. 
  8. Install the new disk inside your machine.
  9. Boot into the mac OS on the disk you just installed.
  10. Move the GPT bookkeeping data to the actual end of the disk. Download and use gdisk to copy the MBR/GPT info to the end of the new drive. You may be able to do this from inside Clonezilla.
    1. Run "sudo gdisk" from a Mac terminal.
    2. Enter expert mode. Type "x" and hit "Enter".
    3. Move the backup GPT data structures from the end of the old disk length to the end of the new disk. Type "e" and hit "Enter".
    4. Exit gdisk, writing the changes. Type "w" and hit "Enter".
  11. Reboot into the Windows OS
  12. Run the check disk utility to verify the disk.
  13. Extend the Windows partition to fill the empty portion of the disk.
    1. Download Mini Tool Partition Wizard
    2. Use MTPW to extend the windows partition to fill the disk.
  14. Restart into OS/X
  15. Use CampTune to reallocate some space from the Windows partition to the OS/X partition.
    • CampTune may tell you it needs to re-align the MBR and GPT.  Let it.
  16. Party like it's 1999.

The early steps of this process are based on

Sunday, July 12, 2015

Fix partition issues relatively safely with P2V and V2P

I foolishly decided to convert a Windows operating system drive from MBR partitioned to GPT partitioned. That conversion didn't go right. The boot portion of the drive was erased or destroyed. The partitions were still there and no data was destroyed. I tried to convert the drive back to MBR. That didn't work either. This is usually the point where you might start panicking.

The cycle time is pretty long when you try to fix the boot portion of a physical disk. There are continual physical machine restarts. There is also a risk that data will be lost while attempting the fix. I wanted to attempt drive repair without risking destruction of data on the drive I was working on. In addition, I didn't want to work on this drive without access to the internet.

My target laptop has two bootable drives in it. I realized that I could convert the trashed disk from physical to virtual using Paragon Drive Copy. Then I could hack on the virtual drive without risking losing data. Bad mistakes could be "undone" by running P2V again to rebuild my virtual disk.

Tools used:

  • Paragon Drive Copy P2V to create virtual disk images of my physical hard drive
  • Windows installation ISO for system repair
  • VMWare Player
  • Paragon Drive Copy V2P to clone the virtual disk image back onto the physical disk
  • Google to find out what kinds of commands might be needed.

Short version

  1. Convert physical disk to virtual, vmdk in my case
  2. Create Virtual machine with newly created disk and Windows installation ISO
  3. Boot the virtual machine from the ISO and use the Windows recovery tools to repair the virtual disk/machine.
  4. Clone the repaired virtual disk back to a physical disk

Longer version


Paragon Drive Copy converted the physical disk to virtual. I then created VMWare virtual machine that mounted that virtual disk as its primary and attached a Windows ISO as the boot CD.

Create the VM

  1. Create a new VMWare VM using VMWare Player using the ISO as the installation media and the VMDK created by P2V as the hard drive.  The VMWare New Machine Wizard didn't let me create a VM with an existing vmdk. 
  2. Edit the machine configuration before running it. Add the P2V vmdk and remove the empty one created by the wizard.
  3. Boot the VM.  It should boot from the ISO by default.
Note: You can give yourself time to boot into the BIOS by adding a boot delay, where the time is in milliseconds. The following vmx file entry waits 5 seconds for user input before continuing the boot process.
bios.bootDelay = "5000"


This let me boot the VM in recovery mode. The disk was fixed and bootable after about 1-2 hours of system recovery disk work. Most problems can be fixed with just the Windows recovery disk. My problems were a little more complex :-(

Note: I ended up deleting my recovery partitions, rebuilding the MBR and then letting the Windows recovery process make it bootable.

  • bootrec /fixmbr
  • bootrec /fixboot
  • ren c:\boot\bcd bcd.old
  • bootrec.exe /rebuildbcd
  • bcdboot c:\windows
"x:\sources\recovery\StartRep.exe" runs the recovery app.
"bcdedit" tells you your boot configuration.


Paragon Drive Copy documentation says it supports Virtual to Physical (V2P). It turns out you do this by mounting the virtual drive with Drive Copy. This makes the (newly repaired) virtual drive look like a physical disk. I was then able to clone the virtual disk onto the original physical drive. The clone operation copies all the partition and boot information.

Paragon Drive Copy

There are plenty of disk cloning tools out there. I used Paragon Drive Copy because I own a copy and have had good luck with it in the past.
  • P2V: There is a button on the Paragon Drive Copy ribbon bar that does this.
  • V2P: This is done by cloning a mounted virtual drive.

P2V and V2P can be useful tools when trying to fix certain types of disk issues.

Thursday, June 25, 2015

Trust No One Architectures

A Trust No One Architecture is one where each organizational unit minimizes accidental risk by owning as much of its processes as possible. Companies end up with a Trust No One architecture because each sub-organization is most likely to meet its goals if it controls as much of its development, technical and operational processes as possible. Each division / operational unit acts as an independent entity with loose coupling at the edges and just enough cooperation to meet the company goals.

I recently attended a talk by a Departmental Information Officer for a large bank who said their software process accelerated and their business deliverables came in earlier when they pulled architecture, operations and infrastructure back from the corporate level to the department level. The bank traded costs, standards and duplicated work for time to market and agility. This was in strong opposition to the previous attempt at minimizing risk by centralizing functions.

Trust No One Architectures may be analogous to the agile concept of a cross-functional team that owns all aspects of a deliverable. The team owns requirements, development, production, customer satisfaction, security and testing. The organizational team takes ownership of the project's success. Scale that up to a 100-300 person organization and you end up with a self-contained mini company that attempts to minimize issues around external dependencies.

Tool for self-preservation or sign of organizational dysfunction?

I always thought of Trust No One Architectures as a sign of dysfunction. I've come to realize it is really the only way many teams have any hope of meeting their business and technical needs without creating a system of unreasonable complexity and instability. Modern software continues to grow in essential complexity because of ever-expanding business requirements and ever more complex interactions.

Concepts like Enterprise Architecture, Enterprise Services and Centralized Operations cyclically rise up and fall as they crash upon the rocks of organizational realities. Some claim SOA is a technology issue. The public press tends to describe the failures of SOA mostly as organizational failures. SOA architectures are hard because they are technically hard. They are brutally hard because they do not align with organizational structures.

Architecture Follows Organization

Developers and organizational anthropologists have long known that software product form follows the structure of the organization that creates it. Conway said this as far back as 1968. The organization creates the accountability and incentive boundaries.

Current software shiny objects include distributed applications and microservices. People rant about how Amazon's services are the company's secret sauce. This only works if people make it completely mandatory in a no-ifs-ands-or-buts kind of way. This can only happen if companies organize around it. A forward-looking distributed architecture cannot be back-ported into an existing organizational structure where teams attempt to minimize risk by owning as much as possible.

Originally written 6/25/2015

Monday, June 1, 2015

Protect RabbitMQ data by encrypting the Mnesia database on Windows Server

RabbitMQ is one of many caching and messaging tools that use local disk either for persistent storage or as a backing store for in-memory data. These systems normally put data to disk in a format that is optimized for speed, not for security. Ex: RabbitMQ, ActiveMQ, Coherence, Gemfire, MongoDB.

This can cause issues when trying to comply with policies around protecting Personally Identifiable Information (PII), making systems Payment Card Industry Data Security Standard (PCI DSS) compliant, or implementing S/Ox controls.

RabbitMQ Installation

We assume that you are running RabbitMQ under the local system account.  Users who run RabbitMQ under different accounts or in different locations must change certain commands or settings.  The RabbitMQ team has a good set of documentation on their web site.

Directory Encryption.

We'll use the Encrypting File System (EFS), available with Microsoft operating systems, to encrypt the directories and files that contain the disk-based information. EFS directories become unreadable by anyone other than the Local System account. This means you have to decide whether you want to encrypt data directories, configuration directories or just individual files, as they become unusable by anyone other than the service account.

Portions of this document are based on the following web postings.

General Process

  1. Stop RabbitMQ
  2. Download PsExec, which is part of the awesome Sysinternals suite. It is essentially a Windows version of remote commands and sudo.
  3. Run the psexec command to encrypt the directory while acting as the Local System account.
  4. Restart RabbitMQ

Enabling Encryption

  1. Download PsExec and unpack the zip file. The files can be run from the un-zipped archive without installation. Remember the path you put them on. I unzipped PSTools into \tools.
  2. Open an administrative command prompt.
  3. Stop the RabbitMQ service using the services control panel. Find services via Windows search or run services.msc from an administrative command prompt.
  4. Use PsExec to open a command prompt that is owned by the Local System account
    • <app-path>\psexec -sid cmd.exe
  5. Change focus to the new command prompt window.
    • Verify this command worked by typing whoami in the new command prompt window. It should say "nt authority\system"
  6. CD to the directory where your mnesia database is located. 
    • The default location is the AppData directory of the user id that installed RabbitMQ.  
    • On my machine it was in C:\Users\<userid>\AppData\Roaming\RabbitMQ\db\<clusternode>-mnesia
  7. Use the cipher command to encrypt the node's mnesia directory
    • cipher /e /s:rabbit@<machinename>-mnesia  
    • Ex:   cipher /e /s:rabbit@WIN8-MACBOOK-15-mnesia
    • You should see messages listing the files and directories that are encrypted
    • The directory will show in green in the file explorer on windows 8
  8. Start the RabbitMQ  service.
  9. Verify the service is working and accepting messages.


Move the RabbitMQ data directory outside of the installer's AppData\Roaming\RabbitMQ folder.
  1. Open a rabbitmq command prompt
  2. rabbitmq-service.bat remove
  3. set RABBITMQ_BASE=<some_folder>
  4. rabbitmq-service.bat install
  5. rabbitmq-service.bat start
Note: I had to edit rabbitmq-defaults.bat to get this all configured correctly. I'm not sure why.

Reversing the process

You can revert to an unencrypted mnesia database by using the cipher command with /d in place of /e.

Performance Impact

My simple test showed no measurable performance difference when posting persistent messages to a local RabbitMQ server.  My test program publishes
  • 6800-6900 persistent 1500 byte messages per second with EFS enabled
  • 6800-6900 persistent 1500 byte messages per second with EFS disabled
using a local SSD in a quad-core MacBook 15" mid-2012 running Windows 8. My test program is probably throttled somewhere else, since the runs kept the CPUs at 15% and the disks even lower.

I was unable to meaningfully measure the true CPU impact of this change.

Operational Impacts

EFS is very easy to use in a situation where you don't expect to move files across systems outside of the applications using the data.  Backups and other system recovery tools may be rendered useless. 

The cipher command is run as the Local System account. This means the directory is encrypted and owned by that account irrespective of whose AppData directory the database is actually installed in. I recommend moving your RabbitMQ configuration, log files and mnesia database to some other location outside of any user account home directory.

Encryption certificates and recovery keys may need to be retained or managed to facilitate data recovery or migration. Microsoft documents some of the key management issues in a TechNet article. The cipher command can be used to manage certificates and recovery keys.

Created 5/31/2015

Recommendations added 8/26/2015

Sunday, May 24, 2015

iPad, Chromebook, Windows 8 Tablet long term family test

We've been conducting a long term "family" test on our coffee table for the past year or so.

Our test subjects include an iPad 2, an HP Chromebook 14 and a 10" Windows 8 tablet (WinBook). The devices are primarily used for web surfing, news reading, basic word games, social media and research when people need to prove they are right as part of a discussion. Our previous coffee-table device was a MacBook Air that has been appropriated for other uses.

Chromebook: #1 Coffee Table

We've been surprised how much we like this device. A Chromebook takes some getting used to at first because you don't install any software on it and you don't have to apply any patches. Software hacks like me are initially completely lost when getting a new Chromebook. You don't have to install software because you can't install software. It is the closest thing to an appliance.

This Wi-Fi and T-Mobile enabled Chromebook has become the house go-to device. It has a large keyboard and trackpad and all-evening battery life. The classic clamshell design makes it good for lap and table use. The Chromebook does support different user profiles through Google logins. We can each have our own environment without any type of IT administration or configuration. This is the only device without a touchscreen. We wish it had one but wouldn't give up the packaging and keyboard for that feature.

The HP is an ugly color and the case stains easily. HP was looking to do something a little different than the standard metallic silver or black.

iPad Air 2 Cellular: #1 Travel

I started this test when we had an iPad 2. Our newer iPad Air 2 has been a bit of a revelation. The 4G support makes the device useful anywhere without pairing with a phone or the need for a hotspot. Cellular iPads have GPS, which has come in handy on long car trips. We can sit down somewhere and look at a hardcover-book-sized map without having to muck with our phones. The weight is great. We've been using it without an external keyboard. This is our go-to travel device. It is also a very nice reader when not in direct sun.

iPad 2: #2 Travel and Basic Reader

We've had a Wi-Fi enabled iPad 2 for several years. The iPad 2 is surprisingly heavy even though it was a revolution when it came out. The iPad 2 has been relegated to a color Kindle app reader or a travel device for when we don't want to risk losing a more expensive device. Recent iOS 8.x updates have made this device significantly slower.

The iPad 2 has always been more popular than the Windows tablet in our house.

Windows Tablet

This device has been a bit of a disappointment. I thought it would be great because it could be a PC or a tablet/media device. The funky keyboard and stand mean that it is a lousy laptop (PC). The small graphics and PC modes for some apps mean it is a lousy tablet. The battery life on this particular device is horrible because of the power-saving modes. There really aren't any decent Modern UI applications. OneNote is confusing; I can't find some of the important features.

Windows 10 is somewhat of an improvement even though Tablet mode is probably actually worse.


We continue to use these devices and look at others.

Curse of the Project's Gold

I've worked for a few different companies, worked in pre-sales calling on lots of vendors, and taught classes and did training with teams from many different organizations. This is my current list of "curses" based on those experiences. You can find lots of project warnings using a little Google foo. Here are mine:

I worked on a team where everyone got a box of gold treasure.

Knowing when it is better to leave the gold in the chest...

  1. No one is sure who the business owner is.
  2. The original "primary business problem" is not addressed by the project.
  3. The company believes it is immune to the three legged stool of "Cost", "Quality" and "Time".
  4. More bodies are added in order to make the project go faster in the current sprint. An egregious version of Brook's Law
  5. The project expands to justify the cost.  See Escalation of Commitment
  6. The CIO says that "they have never had a project come in late".
  7. Upper management announces the project and then disappears from the staff, talking only to their direct reports.
  8. Executives believe the project can be completed on time (or at all) even though there is no empirical evidence. See Optimism Bias
  9. The project contains multiple subcontractors each with their own separate definitions of success.
  10. Explicit deliverables are planned out on a multiple year timeline.

Other signs

  1. The timeline for a new project is 1/2 the timeline of the previous project even though it includes greater complexity and new tools and platforms.
  2. It feels like management's bonus is tied to a quarterly reorganization.
  3. Management declares that automated testing should be done after the customer signs off so that there is less test rework.
  4. Chaos Monkey is used as a way to test the effectiveness of the on-call system.
  5. The team turns off Continuous Integration build notifications because they make the team look bad.
  6. Your agile team still turns in status reports.
  7. Outsourcing is done to "make failures cheaper"
  8. You hear the phrase "we can archive the code"
  9. You see your project code on
  10. Developers actually believe their code is self documenting.
Last updated 5/24/2015

Sunday, April 12, 2015

C# IOIO and I2C devices

The IOIO C# library on GitHub now has basic I2C support.  Protocol support and resource management are lifted from the Java library.  The upper API is message based rather than object based. TWI/I2C/SPI is managed via outbound messages. Data from I2C devices and message acknowledgements from the IOIO come back asynchronously via inbound messages.


This was tested with an IOIO V1, a Bluetooth module, a JeeLabs Expander Plug, and an old computer power LED.  

I2C device

The Expander Plug is based on the MicroChip MCP23008 port expander.  Programming examples are available on the Jee Labs web site. The Microchip web site has the chip programming guide.

The LED assembly already had a current limiting resistor. I just plugged it in across the "+" pin on the expansion port and the port pin next to the power pin.

The port expander's default address is 0x20.  It has 11 registers that control behavior and transfer data.
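The register addresses used later in this post can be collected as constants. This is a sketch based on the Microchip MCP23008 data sheet; the class and constant names are mine, not part of the IOIO library:

```csharp
// MCP23008 register addresses (from the Microchip data sheet).
// Only the registers exercised in this post are listed.
public static class Mcp23008
{
    public const byte DefaultAddress = 0x20; // A0-A2 address pins tied low
    public const byte IODIR = 0x00; // I/O direction: 1 = input, 0 = output
    public const byte GPIO  = 0x09; // write to drive the port pins
    public const byte OLAT  = 0x0A; // read back the output latches
}
```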

JeeLabs makes a bunch of 3.3v I2C devices that are easy to use.

IOIO Hardware

You must have pull-up resistors on DA and CL.  I wasted 3 hours convinced that I didn't really need the resistors. You can see my 10K Ohm pull-up resistors tied to the back side of a 3.3v power pin.  Each resistor is tied to its pin and the 3.3v power pin. It is not a parallel resistor even if it looks like that in the picture.

JeeLabs has their own pin naming convention. AIO corresponds to the IOIO I2C Clock pin.  DIO corresponds to the IOIO I2C Data pin.

Connect up 4 wires on the device
  • Expander DIO -> IOIO I2C Data
  • Expander AIO -> IOIO I2C Clock
  • Expander +   -> IOIO +3V
  • Expander GND -> IOIO GND
  • Expander INT -> N/C
  • Expander Pwr -> N/C
For TWI 0
  • Expander DIO -> IOIO Pin 4 (DA0)
  • Expander AIO -> IOIO Pin 5 (CL0)
  • Expander +   -> IOIO +3V
  • Expander GND -> IOIO GND
  • Expander INT -> N/C
  • Expander Pwr -> N/C

Talking with the Port Expander

The TwiI2CTest.cs in the GitHub project shows more details as to how this is set up. The test logs messages from the IOIO.  It does not show how to pick up the return messages using notifications.

Open and Configure TWI

We get two responses back when we configure one of the Twi pin sets.  I don't yet understand why the TXStatus report says 256 bytes remaining when it is actually 0.  

This code posts the TWI configuration message:
int TwiVirtualDevice = 0;
LOG.Debug("Configuring TWI");
IOIOMessageCommandFactory factory = new IOIOMessageCommandFactory();
ITwiMasterConfigureCommand startCommand = factory.CreateTwiConfigure(TwiVirtualDevice, TwiMasterRate.RATE_400KHz, false);
TwiSpec twiDef = startCommand.TwiDef;
results in this output
TwiI2CTest             - Configuring TWI
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterConfigureCommand
IOIOHandlerCaptureLog  - HandleI2cOpen i2cNum:0
IOIOHandlerCaptureLog  - HandleI2cReportTxStatus i2cNum:0 bytesRemaining:256

logging truncated to improve readability.

Configure port expander I/O pins as outputs

Set a value in the direction register to configure the pins as outputs.  We get a confirmation message back.

This code:
int JeeExpanderAddress = 0x20;
byte RegisterIoDir = 0x00;
byte[] ConfigureAllOutput = new Byte[] { RegisterIoDir, 0x00 };
LOG.Debug("Configuring port direction as all output ");
ITwiMasterSendDataCommand configureDirectionCommand = factory.CreateTwiSendData(twiDef, JeeExpanderAddress, false, ConfigureAllOutput, 0);
results in this output:
TwiI2CTest             - Configuring port direction as all output 
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterSendDataCommandTwi:0 ExpectBack:0 SendingBytes:00 00
IOIOHandlerCaptureLog  - HandleI2cResult i2cNum:0 NumBytes:0 ReceivedBytes:

logging truncated to improve readability.

Set the port expander output pins to "low" values

Note: The LED lights up when the pins sink current. This means the LED lights up when the pins are low.

This code writes to a control register and then queries the output latches for state.
LOG.Debug("Post Low");
LOG.Debug("Check Are Latches Low");
results in this output
TwiI2CTest             - Post Low
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterSendDataCommandTwi:0 ExpectBack:0 SendingBytes:09 00
IOIOHandlerCaptureLog  - HandleI2cResult i2cNum:0 NumBytes:0 ReceivedBytes:
TwiI2CTest             - Check Are Latches Low
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterSendDataCommandTwi:0 ExpectBack:1 SendingBytes:0A
IOIOHandlerCaptureLog  - HandleI2cResult i2cNum:0 NumBytes:1 ReceivedBytes:00

logging truncated to improve readability.

Set the output pins to "high" values

Write the pin output control register. The bits written to the register represent the pin states.
LOG.Debug("Post High");
LOG.Debug("Check Are Latches High");
results in this output
TwiI2CTest             - Post High
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterSendDataCommandTwi:0 ExpectBack:0 SendingBytes:09 FF
IOIOHandlerCaptureLog  - HandleI2cResult i2cNum:0 NumBytes:0 ReceivedBytes:
TwiI2CTest             - Check Are Latches High
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterSendDataCommandTwi:0 ExpectBack:1 SendingBytes:0A
IOIOHandlerCaptureLog  - HandleI2cResult i2cNum:0 NumBytes:1 ReceivedBytes:FF

logging truncated to improve readability.

Close the TWI interface

Remember to clean up after yourself. This releases the TWI pins back to the pool.
ITwiMasterCloseCommand closeCommand = factory.CreateTwiClose(twiDef);
results in
IOIOImpl               - Post: IOIOLib.MessageTo.Impl.TwiMasterCloseCommand
IOIOHandlerCaptureLog  - HandleI2cClose i2cNum:0

logging truncated to improve readability.
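Putting the fragments above together, the whole conversation looks roughly like the outline below. The factory calls are the ones shown in the log fragments above, but the code that actually posts each command to the IOIO is connection specific and elided here, so treat this as a sketch rather than a runnable program:

```csharp
// Outline of the full TWI conversation from this post.
// Each command object still has to be posted to the IOIO
// through your connection code (not shown).
int TwiVirtualDevice = 0;
int JeeExpanderAddress = 0x20;
byte RegisterIoDir = 0x00; // direction register
byte RegisterGpio = 0x09;  // port output register
byte RegisterOlat = 0x0A;  // output latch register

IOIOMessageCommandFactory factory = new IOIOMessageCommandFactory();

// 1. Open and configure TWI 0 at 400 KHz
ITwiMasterConfigureCommand startCommand =
    factory.CreateTwiConfigure(TwiVirtualDevice, TwiMasterRate.RATE_400KHz, false);
TwiSpec twiDef = startCommand.TwiDef;

// 2. Configure all expander pins as outputs
ITwiMasterSendDataCommand configureDirection =
    factory.CreateTwiSendData(twiDef, JeeExpanderAddress, false,
        new byte[] { RegisterIoDir, 0x00 }, 0);

// 3. Drive the pins low, then ask for one byte back from the latches
ITwiMasterSendDataCommand postLow =
    factory.CreateTwiSendData(twiDef, JeeExpanderAddress, false,
        new byte[] { RegisterGpio, 0x00 }, 0);
ITwiMasterSendDataCommand checkLatches =
    factory.CreateTwiSendData(twiDef, JeeExpanderAddress, false,
        new byte[] { RegisterOlat }, 1);

// 4. Release the TWI pins back to the pool
ITwiMasterCloseCommand closeCommand = factory.CreateTwiClose(twiDef);
```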


Sometimes your first impressions are correct

Sometimes you run across interesting outfits when interviewing or working with other organizations.

You know you should slide away from an opportunity when...

You think in the interview:
"If these guys were an airline..."
"The countryside would be littered with flaming wreckage."
An interviewer describes their new approach and the conversation goes something like:
Interviewer: We're moving more of our software development off shore.
You: That doesn't improve success rate if you don't change the way you do other things.
Interviewer: Yes, but our failures will cost less

They may not be as Agile as they think they are when

They describe their agile SCRUM process:
"They have one 3 month sprint per release"
"They made the managers the SCRUM managers since they assign work anyway" 

It may not be the technical opportunity they described...

The senior architect interview includes:
Writing Tower of Hanoi pseudo code in front of half a dozen people while they critique your algorithm.
The company talks about how big their projects are when:
Their automated builds take under 5 minutes.

I have been lucky with my customers and employers.  

Monday, April 6, 2015

Controlling a Servo attached to an IOIO from Windows WPF application

The IOIO C# library on GitHub now includes a very basic Windows WPF application that lets you control a Servo with your mouse.

The program assumes your IOIO is attached to your PC via Bluetooth and that you have a servo on Pin 3.

Run the program from inside Visual Studio. Step through the disconnect exception if you run this with debugging enabled.

Backup the drivers when installing Windows 10 on a Winbook TW100

Yes, you can install Windows 10 on a Winbook TW100 or other BayTrail tablet.  There are postings on the Microcenter support web site where folks describe how to do this on other WinBook tablets like the TW700. I have only a couple minor things to add.
  1. The tablet has a 64 bit processor and a 32 bit BIOS/EFI.  Use the 32 bit Windows 10 ISO available from Microsoft. Windows 10 32 bit ISOs became available in January 2015. The tablet only has 2GB of memory, so a 32 bit OS is fine. I installed the enterprise version.  The standard version didn't work for me, but it was probably pilot error.
  2. Build restore media or back up the TW100 driver directory using one of the available tools. It would be best to build restore media instead of backing up the drivers; that would let you go back to Windows 8.  I was unable to do this with any of my flash drives, so I instead used a tool off of SourceForge to back up the drivers. I am now on Windows 10+ forever.
  3. Make a copy of the file in C:\Windows\inf.  Driver tools will not back up this file, and it is critical to getting the digitizer to work.
  4. Create a bootable flash image from the ISO using Rufus, based on the guidance in the forums.
  5. Install Windows 10 per the forum instructions.
  6. Use the Device Manager control panel to update missing device drivers from your backup.
  7. Restart (again).
This zip file should include the drivers you need.


  • The tablet will ask for the three finger salute or a power+win-key press to unlock the device if it has gone to sleep while plugged in. I've never been able to get any combination to work in this situation and often just reboot.
  • There is only one power plan, "Balanced", with the March 2015 Windows 10 pre-release build.  The battery only lasts a day with this plan.
  • I've turned off the screen locker by changing one of the Interactive Logon group policies. This makes the device act like a normal tablet and lets me avoid the ctrl-alt-del screen.

    Thursday, April 2, 2015

    Message Routing using Double Dispatch with C#

    This post describes message routing inside an application. It does not describe message routing across message brokers or across systems.

    Message driven, or push based notification, systems stream Messages from message sources to interested parties (Observers).  There are often multiple message types routed to multiple observers.  Message types are represented in code as Message interfaces/classes.

    Message Observers are often interested in some subset of the messages and implement some type of interface or registration scheme that tells the message routing module which message types they are interested in.


    One use case for this is a User Interface that streams UI events (Messages) from various components to event handlers (Observers).  The message sources create messages specific to that event type. The event handlers may receive and process messages of one or more types.

    Another use case might be some type of IoT device like an IOIO that creates events at scheduled intervals or based on some type of external input. Each change event has its own message type.  Components that have interest in some subset of the messages can subscribe for notifications of those types of messages/events.  The dispatcher notifies the interested subscribers every time messages of the correct type appear.


    We want to take messages of various types and route them to handlers without losing the typing information and without resorting to casting in the handlers. This means we'd like to bind message types to some kind of Observer Notify() method that accepts that exact message type or the next closest abstract version.  This is similar to MVC or other models where inbound web requests are routed to controller methods whose signatures match the inbound object type as closely as possible.

    Message subscribers should be simple, with any dispatch complexity hidden in the messages or dispatcher.  We'd like to make it easy to add additional message types and support their associated observers.  This means we want to avoid large God classes with giant case statements that identify messages by type and route based on specific message signatures.

    Dynamic Dispatch and Double Dispatch

    Double dispatch pushes message type to observer method binding out of the core dispatch (or receiver) to the messages themselves. It is similar to a Visitor pattern.

    1. Observers register interest with the message receiver. Observers implement OnNext() methods based on the message types they are interested in. Dynamic dispatch will determine if an observer is interested in a specific message type at runtime. An observer can be interested in more than one message type.  
      1. I'll use the Microsoft Observer methods here.  Ex: OnNext(GameStart), OnNext(GameEnd). You could use other interfaces like Visit or Dispatch.
    2. A message is generated and posted to a message queue.
    3. The message dispatcher receives the message. The receiver may be listening on some message stream or queue.
    4. The dispatcher iterates across each observer.
      1. Each observer is Dispatched to Message.
      2. The message calls the Observer's  OnNext() method that is most closely typed to the message type.

    Implementation Example

    Message Types

    The Interface diagram to the right shows the Business message hierarchy. No special API or behavior is specified here.

    All message interfaces are subclassed off IMessageBase.  All the IMessage types other than IMessageBase are implemented by concrete classes.
    IMessageNotification isn't shown here. It is a notification support interface used as part of the double dispatch process.

    Concrete message classes implement two interfaces

    1. The business interface that contains the message properties and any special behavior.
    2. The message notification interface used for observer dispatch.   The dispatcher/router invokes the IMessageNotification interface one time for each registered observer.


    Observers implement two interfaces, a non-genericized IObserver  and a genericized IObserver<MessageType> that expresses interest in a specific message type or its sub types. IObserver is a marker interface.  IObserver<MessageType> is the dynamic dispatch interface. Observers implement the IObserver<MessageType> interface one time for each message type they are interested in.

    Observers register for notification using the Subscribe(IObserver) method modeled on IObservable. They do not register which message types they are interested in. This will be determined at runtime by the Message's dynamic dispatch code.

    Dispatch Mechanics

    The dispatcher/message receiver receives messages as they come in. It invokes the message's Dispatch(IObserver) one time for each observer.  The Dispatch(IObserver) method tries to find an IObserver<MessageType> interface on the Observer that can receive this message type. It then calls Dispatch<MessageType>(IObserver) on itself.  That method then invokes OnNext(MessageType) on the Observer, choosing the OnNext() whose message type is closest to the type of the message doing the processing.

    Example Code

    ...begin source code... ...end source code...
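    A minimal, self-contained sketch of the mechanics described above. The GameStart and GameEnd message types are hypothetical stand-ins for a real message hierarchy, and the observer interfaces here are simplified local versions rather than the real System.IObserver<T>:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the marker and genericized observer interfaces.
interface INotificationObserver { }
interface INotificationObserver<in T> : INotificationObserver
{
    void OnNext(T message);
}

// Messages carry their own dispatch logic (double dispatch).
interface IMessageNotification
{
    void Dispatch(INotificationObserver observer);
}

class GameStart : IMessageNotification
{
    public void Dispatch(INotificationObserver observer)
    {
        // Runtime check: is this observer interested in GameStart?
        if (observer is INotificationObserver<GameStart> typed) typed.OnNext(this);
    }
}

class GameEnd : IMessageNotification
{
    public void Dispatch(INotificationObserver observer)
    {
        if (observer is INotificationObserver<GameEnd> typed) typed.OnNext(this);
    }
}

// An observer interested in only one of the two message types.
class StartLogger : INotificationObserver<GameStart>
{
    public void OnNext(GameStart message) => Console.WriteLine("game started");
}

class Dispatcher
{
    private readonly List<INotificationObserver> observers = new List<INotificationObserver>();
    public void Subscribe(INotificationObserver observer) => observers.Add(observer);

    // The dispatcher knows nothing about concrete message types;
    // each message binds itself to the interested observers.
    public void Receive(IMessageNotification message)
    {
        foreach (var observer in observers) message.Dispatch(observer);
    }
}

class Program
{
    static void Main()
    {
        var dispatcher = new Dispatcher();
        dispatcher.Subscribe(new StartLogger());
        dispatcher.Receive(new GameStart()); // prints "game started"
        dispatcher.Receive(new GameEnd());   // no interested observer, ignored
    }
}
```

    The contravariant `in T` gives the "next closest abstract version" behavior: an observer registered for a base message type also passes the runtime check for the more specific subtypes.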


    Some of the patterns in here came from this Stack Overflow posting
    Wikipedia has good descriptions of Multiple Dispatch and Double Dispatch

    This is one of the core patterns in Smalltalk. I'm just saying this so Smalltalkers don't flood me with email telling me how they've been doing this since the 80s.

    This post created 4/2015

    Wednesday, March 4, 2015

    Extremely Rough Cut at C# based IOIO Library

    I've pushed a very rough C# port of the Java IOIO library to GitHub.  It communicates with the IOIO as a COM device over Bluetooth. IOIODotNet is a plain old Windows library. I tried making it a Portable library, but .Net portable libraries don't support serial devices.

    The library is a mess of pieces-parts right now with some unit tests that show how it works. There is also a WPF app. Tests and the WPF app poll serial ports to find a Bluetooth attached IOIO.

    This is a message based API

    • Inbound traffic is packed into messages and routed to inbound handlers. You can add your own handlers or you can poll the ones that exist.
    • Outbound traffic can be sent directly via the API or through a message based API.  I intend the message API to be the future of the library once resource management has been added.

    Look at the Integration tests to play with it. They expect at least one IOIO V5xx on a COM port with pins 31 and 32 tied together.

    Last Updated 4/22/2015

    Sunday, March 1, 2015

    log4net configuration with MSTest and Visual Studio 2013

    I use log4net because it gives me easy class level control over logging levels and it has a lot of outbound (appender) types.  Folks that dislike log4net will not find this post useful.

    Visual Studio test output can be pretty confounding. There are way too many forum discussions around viewing test logs in Visual Studio. It sort of makes sense because some of the normal logging channels don't make sense in the test environment.  Phones, web apps, console apps, and services all log in different environments.  The Visual Studio team changed the location of the logging views in VS2013 (or possibly 2012). Here is my "how did I do that last time" version of configuring log4net logging for unit tests.  

    Viewing Test Output

    Visual Studio has an Output window and an Immediate window, but my output never shows up in either. I've tried routing the console output to the Immediate window via the preferences; my output never shows up there. It seems like unit test output is only available inside Test Explorer.
    1. Run a test
    2. Highlight the test in Test Explorer
    3. Select Output on the bottom of the test results.
    You see the Output link only if you have written something to the Console or one of the Diagnostic logs.

    Logged text shows up in different sections of the test output window depending on the channel and type.

    This screen shows messages generated by log4net configured for the ConsoleAppender and Microsoft Diagnostic traces generated using

    The TraceAppender puts the output in the Debug Trace section.  The ConsoleAppender puts information in Standard Output.

    Note: Debug() and Trace() both show up in the same section.

    Configuring Log4Net in Tests

    Appender and Level Configuration

    This assumes you've added a log4net reference to your test project. 

    We need two components to configure log4net. The first is the file that configures the log4net logging levels and appenders.  The second is some piece of bootstrap markup or code to initialize log4net with that file.

    Create a log4net configuration file. Name it and put it in the root directory of your testing project.   Set its Copy to Output Directory property to "Copy always".

    I've provided a simple sample below.  You should be able to find more sophisticated examples on the Internet.
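    A minimal config along these lines gets output flowing; the appender choices and conversion pattern here are illustrative, not requirements, and the file name is whatever you chose above:

```xml
<log4net>
  <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%-22.22logger - %message%newline" />
    </layout>
  </appender>
  <appender name="TraceAppender" type="log4net.Appender.TraceAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%-22.22logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="ConsoleAppender" />
    <appender-ref ref="TraceAppender" />
  </root>
</log4net>
```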

    Logging Bootstrap

    You're supposed to be able to do this inside your AssemblyInfo.cs file.  That didn't really work for me with unit tests.

    MSTest supports running assembly initialization C# code one time when the test assembly loads, in the same way it uses AssemblyInfo.cs.  I created the log4net bootstrap code in a test-less C# unit test class that contains one start-up method marked with the [AssemblyInitialize] attribute. You only need to create one class that will initialize log4net for all your tests.  This example has some extra output I used to understand the various destinations.  It should work for other projects almost unmodified.

    using log4net.Config;
    using log4net;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System.IO;
    using System.Diagnostics;

    namespace IOIOLibDotNetTest
    {
        [TestClass]
        public class LoggingSetup
        {
            [AssemblyInitialize]
            public static void Configure(TestContext tc)
            {
                //Diag output will go to the "output" logs if you add these two lines
                //TextWriterTraceListener writer = new TextWriterTraceListener(System.Console.Out);
                //Trace.Listeners.Add(writer);
                Debug.WriteLine("Diag Called inside Configure before log4net setup");
                // pass the name of the configuration file you created above
                XmlConfigurator.Configure(new FileInfo("log4net.config"));
                // create the first logger AFTER we run the configuration
                ILog LOG = LogManager.GetLogger(typeof(LoggingSetup));
                LOG.Debug("log4net initialized for tests");
                Debug.WriteLine("Diag Called inside Configure after log4net setup");
            }
        }
    }

    I used Configure() instead of ConfigureAndWatch() because I don't manually change my logging configuration in the middle of a test run.


    This simple properties file configures log4net to write to the Console so that I can see it in Visual Studio 2013.  It also contains appenders that send traffic to the trace system.

    Note: Unit test performance can be greatly impacted by excessive logging. You should consider using one of the file Appenders if you have a lot of output or if you want log output in specific directories on build servers.


    I hope this saves others from the hassle I had making log4net output visible while running unit tests inside Visual Studio.