Monday, December 16, 2013

Running the Microsoft ALM TFS 2013 (Hyper-V) Virtual Machine in VMWare without Persisting Changes

This blog describes how to run the Microsoft TFS/ALM Hyper-V virtual machine in one of the VMWare hypervisors (Player/Workstation/Fusion). You can use the same concepts for any other virtual machine/appliance.

The 2013 RTM ALM image comes with a set of TFS and Visual Studio labs. The virtual machine and the labs have a couple of expectations:
  1. It comes as a Hyper-V VHD running an evaluation copy of Windows Server 2012.
  2. The work items and other information are "point in time" information. This means sprint, iteration or other information can only be used in the labs if the VM's time is set to specific dates.
  3. The machine resets the time to the same specific date and time on every reboot. This lets you re-run the exercises. Pay attention to the places where the time is set if you wish to use this process for virtual machines anchored at dates and times other than the ones used for the TFS 2013 RTM ALM image used in this post.
  4. The labs work best if the machine is restored back to its initial state after each lab.
We're going to run this virtual disk as a VMWare guest in a virtual machine that automatically sets the guest RTC to the correct time for the labs. We will also restore the VM to its original state every time VMWare is re-launched.

VMWare Configuration

You can create a new virtual machine that uses the ALM virtual disk. VMWare can consume the disk directly without conversion. Configure the virtual machine but do not start it until you have manually edited the VMWare vmx file to set up the clock and disk snapshots.

  1. Create a directory to hold the virtual machine with some reasonable name. We will have VMWare use that directory for all its files later.
  2. Move the vhd virtual disk into the previously created directory.
  3. Run VMWare
  4. Create a new virtual machine
  5. Select Install System Later
  6. Select Microsoft Windows on the Guest Operating System screen.
  7. Select the Windows version of Hyper-V (unsupported)
  8. Pick a location for the virtual machine.  Select the directory we created above
  9. Pick Store virtual disk as a single file.  We're not really going to use this disk. We're going to reconfigure the disks later to use the ALM VHD.
  10. Select Finish.  This should take you back to the screen that lists all the VMs including the one we just defined.
  11. Select the virtual machine and select Edit virtual machine settings
    1. Add a new IDE disk.  
    2. Select Use existing...
    3. Select the VHD
    4. Delete the old virtual disk. We only need one disk and it should be the ALM disk.
  12. Do not install VMWare tools
  13. Do not boot the machine
  14. Quit VMWare
Manually edit the VM configuration (vmx) file before booting the machine.

Manual Changes

Disabling Disk Persistence 

We want the machine to roll back all session changes so that the VM always starts in exactly the same state. We need to add three settings to the VMX file:
ide0:0.mode = "independent-nonpersistent"
snapshot.action = "autoRevert"
snapshot.disabled = "TRUE"
We can remove these settings if we want work to persist across restarts. Be careful though: you often can't push time backwards if you've already done work at the current time.

Time at Startup

The TFS 2013 RTM ALM VM resets its clock to 7/9/2013 1:30 PM every time it starts via a scheduled task. This puts the machine in a known state and makes time based labs work. Think iteration dates here: we want to be inside an iteration when using the sprint planning dashboards.

You can have a problem with this approach though. A virtual machine's RTC is set to NOW when it boots up. This means the machine boots with the current time before it is reset by the startup tasks. System or SQL changes can occur in the virtual machine before the clock reset occurs, creating timestamps later than 7/9/2013 1:30 PM. We need to set the virtual hardware clock before the operating system boots and reads the time.

The virtual RTC is used by the VM every time it runs. VMWare lets you set the virtual machine RTC at startup with a vmx setting. This web entry describes how we can set the VM time using the VM's virtual bios and RTC.

VMWare sets the RTC using the Unix epoch. There are web sites that let you calculate the epoch time from a date and time that you enter. We want to choose an RTC time just before the time that is set by the ALM VM on startup. This ensures that time moves only forward between BIOS boot, OS startup, SQL startup and task execution.

For this TFS 2013 RTM ALM virtual machine I have chosen 7/9/2013 1:30 AM, which is earlier than 7/9/2013 1:30 PM, the time the ALM machine's OS will be set to after boot. 7/9/2013 1:30 AM maps to 1373334621 in epoch time. We add the following vmx entry based on that calculation:
rtc.starttime = "1373334621"
The time here is a Unix style epoch time. There are web pages out there that can convert time between human readable and the Unix time format. The TFS 2012 and TFS 2013 RC ALM virtual machines used other dates so their rtc.starttime should be set accordingly. You can use this same reset process for other lab oriented virtual machines.
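
If you have a machine with the GNU date command handy (most Linux distributions), you can compute the epoch value yourself instead of using a web converter. A quick sketch; note that the BSD date shipped with OS X uses different flags:

date -u -d "2013-07-09 01:30" +%s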

Time Synchronization

VMWare has a set of tools that strive to synchronize time between the guest and the host. The Microsoft ALM virtual machine operates as if it is still July 2013, so we need to disable most of the host/guest time synchronization:
tools.syncTime = "FALSE"
time.synchronize.continue = "FALSE"
time.synchronize.restore = "FALSE"
time.synchronize.resume.disk = "FALSE"
time.synchronize.shrink = "FALSE"
time.synchronize.tools.startup = "FALSE"
time.synchronize.tools.enable = "FALSE"
time.synchronize.resume.host = "FALSE"
This document describes how VMWare manages time.

Sample VMX File

Here is my vmx configuration file that works with this setup. The VM described by this file operates on its own concept of time without syncing to any official time server. It also restores the C: drive disk on every restart.

You can create a completely new VM based on the vmx file described below.

  1. Create a directory
  2. Put the vhd in the directory.
  3. Create a new vmx file, put it in the created directory and paste the contents from below into the file
  4. You should be able to now just double click on the vmx file to launch the ALM virtual machine.


.encoding = "windows-1252"
config.version = "8"
virtualHW.version = "10"
vcpu.hotadd = "TRUE"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"
sata0.present = "TRUE"
memsize = "4096"
mem.hotadd = "TRUE"
sata0:1.present = "TRUE"
sata0:1.autodetect = "TRUE"
sata0:1.deviceType = "cdrom-raw"
ethernet0.present = "TRUE"
ethernet0.connectionType = "nat"
ethernet0.virtualDev = "e1000"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.addressType = "generated"
usb.present = "TRUE"
ehci.present = "TRUE"
ehci.pciSlotNumber = "35"
usb_xhci.present = "TRUE"
sound.present = "TRUE"
sound.virtualDev = "hdaudio"
sound.fileName = "-1"
sound.autodetect = "TRUE"
serial0.present = "TRUE"
serial0.fileType = "thinprint"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
usb.vbluetooth.startConnected = "TRUE"
displayName = "2013 ALM Hyper-V (unsupported)"
guestOS = "winhyperv"
nvram = "2013 ALM Hyper-V (unsupported).nvram"
virtualHW.productCompatibility = "hosted"
gui.exitOnCLIHLT = "FALSE"
powerType.powerOff = "soft"
powerType.powerOn = "soft"
powerType.suspend = "soft"
powerType.reset = "soft"
extendedConfigFile = "2013 ALM Hyper-V (unsupported).vmxf"
ide0:0.present = "TRUE"
ide0:0.fileName = "TD02WS12SFx64.vhd"
scsi0:0.present = "FALSE"
floppy0.present = "FALSE"
uuid.bios = "56 4d 6b 23 25 1b 6f 2d-b7 ca e8 22 47 3b ad a3"
uuid.location = "56 4d 6b 23 25 1b 6f 2d-b7 ca e8 22 47 3b ad a3"
replay.supported = "FALSE"
replay.filename = ""
ide0:0.redo = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "160"
usb.pciSlotNumber = "32"
ethernet0.pciSlotNumber = "33"
sound.pciSlotNumber = "34"
usb_xhci.pciSlotNumber = "192"
vmci0.pciSlotNumber = "36"
sata0.pciSlotNumber = "37"
scsi0.sasWWID = "50 05 05 63 25 1b 6f 20"
ethernet0.generatedAddress = "00:0c:29:3b:ad:a3"
ethernet0.generatedAddressOffset = "0"
vmci0.id = "1195093411"
vmotion.checkpointFBSize = "33554432"
cleanShutdown = "TRUE"
softPowerOff = "TRUE"
usb_xhci:1.speed = "2"
usb_xhci:1.present = "TRUE"
usb_xhci:1.deviceType = "hub"
usb_xhci:1.port = "1"
usb_xhci:1.parent = "-1"
usb_xhci:3.speed = "4"
usb_xhci:3.present = "TRUE"
usb_xhci:3.deviceType = "hub"
usb_xhci:3.port = "3"
usb_xhci:3.parent = "-1"
toolsInstallManager.updateCounter = "8"
sata0:1.startConnected = "FALSE"
unity.wasCapable = "FALSE"
tools.remindInstall = "TRUE"

ide0:0.mode = "independent-nonpersistent"
snapshot.action = "autoRevert"
snapshot.disabled = "TRUE"

rtc.starttime = "1373362200"
tools.syncTime = "FALSE"
time.synchronize.continue = "FALSE"
time.synchronize.restore = "FALSE"
time.synchronize.resume.disk = "FALSE"
time.synchronize.shrink = "FALSE"
time.synchronize.tools.startup = "FALSE"
time.synchronize.tools.enable = "FALSE"
time.synchronize.resume.host = "FALSE"

usb_xhci:4.present = "TRUE"
usb_xhci:4.deviceType = "hid"
usb_xhci:4.port = "0"
usb_xhci:4.parent = "1"

Conclusion

You can use these techniques to create and manage virtual machines that need to restore back to a previous state on every restart. You can configure these same machines to put the hardware clock back to that same time, or to other times, on every boot.

Last edited 12/29/2013

Sunday, December 8, 2013

Microsoft Code Analysis results differ based on Configuration

Background

Code Analysis (CA) is a very useful Visual Studio feature that applies good-practice static code analysis to your code base. You can run CA manually at any time or configure it to run automatically on each build. Automatic execution can be configured per configuration per project. This means you can enable automatic CA on Release builds while ignoring it during builds in Debug mode. Some teams do this to speed up the debug compilation cycle. I'm not sure what in CA makes it so slow that you can't run it all the time :-(

You can view which CA rules apply for any configuration or CPU type via the Solution Properties window:




You can set the automatic execution of CA on a per project basis in the Code Analysis pane of the Project Properties:


The location of the Code Analysis menu item and the position of the results vary by Visual Studio version. The menu item that executes CA is located on the Build menu in VS2013.

Unexpected Behavior

Microsoft Code Analysis will sometimes generate different results based on the configuration. This may be the result of rule configuration, flag values and/or the level of optimization. I've used static analysis tools on Java for years but this is the first time I've seen configuration-dependent results. That may be because we only compiled Java one way.

This very simple method generates different results with Microsoft All Rules depending on the configuration.

        public void DummyMethod(){
            HttpStyleUriParser foo = new HttpStyleUriParser();
        }

You can see here that the last Code Analysis error is different. Make sure you are testing your code with CA in the proper configuration for your project's standards.

(Screenshots: Release Configuration vs. Debug Configuration results)


We found this when using nested C# using blocks to ensure the correct clean up behavior in stream and memory buffer operations. A CA rule fires in Debug configuration that flags the way using blocks map to Close() and Dispose() behavior. This Code Analysis error disappears in Release configuration, possibly because the system intelligently calculates the correct behavior.
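
For illustration, here is the general shape of that nested using pattern. This is a minimal sketch rather than our actual code; CA2202 ("Do not dispose objects multiple times") is the rule that typically fires on it, because disposing the StreamWriter also disposes the underlying stream, which the outer using block then disposes a second time.

        using System;
        using System.IO;

        public static class UsingExample
        {
            public static string WriteGreeting()
            {
                // StreamWriter takes ownership of the MemoryStream, so the
                // stream ends up disposed twice: once by the writer and once
                // by the outer using block.
                using (MemoryStream buffer = new MemoryStream())
                using (StreamWriter writer = new StreamWriter(buffer))
                {
                    writer.Write("hello");
                    writer.Flush();
                    return Convert.ToBase64String(buffer.ToArray());
                }
            }
        }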

Conclusion

Make sure you run your CA rules in the same configuration as your build servers.  You don't want to think you are clean and then have the CI builds fail because of a CA related issue.

Wednesday, November 20, 2013

Importing a bootcamp partition into VMWare Fusion when it has Hyper-V enabled.

This is a quick note on importing a Windows 8 or Server 2012 Boot Camp partition that had the Hyper-V hypervisor enabled.

By default, you cannot have Hyper-V booted inside of Fusion because of hardware virtualization conflicts. You know you have this problem if you see the following messages in the guest machine's window in VMWare Fusion.
Your PC ran into a problem and needs to restart.  We're just collecting some error info, and then you can restart. (100% complete)
If you'd like to know more, you can search online later for this error:  SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (winhv.sys)

Edit the Fusion VMX configuration file directly.

You can fix this by editing the vmx file directly.


  1. Shutdown Fusion
  2. Find your virtual machine definition. Navigate to /Users/<username>/Library/Application Support/VMWare Fusion/VirtualMachines/Boot Camp.
  3. Right Click on the Boot Camp package and select Show Package Contents
  4. Edit the file that ends in .vmx using TextEdit or your default file editor
  5. Add a line at the end containing
    hypervisor.cpuid.v0 = "FALSE"
  6. Save the file.
  7. Launch Fusion 
  8. Run the imported BootCamp virtual machine


Using the GUI

You're supposed to be able to do this via the Fusion 6 GUI but that did not work for me. This section is here to provide breadcrumbs for future searchers with this problem.

  1. Bring down the crashed imported Boot Camp virtual machine
  2. Edit the Virtual machine settings via the menu Virtual Machine Settings --> Processors & Memory --> Advanced
  3. Check "Enable hypervisor applications in this virtual machine"
  4. Close the settings
  5. Restart the virtual machine






Monday, November 18, 2013

Micro Cloud Foundry V2 - PaaS with Stackato - Languages and Applications

A previous blog article described how to set up a developer micro PaaS using the Cloud Foundry based Stackato from ActiveState.  This article will describe some of the languages available in this environment.

Application Languages

Cloud Foundry and ActiveState Stackato clusters support the following languages "out of the box". You can use any of these languages in the Micro Cloud Foundry instance we just configured:
  1. Java, using the Tomcat server
  2. Node.js
  3. Perl
  4. PHP
  5. Python
  6. Ruby
.Net languages are supported through either the Mono runtime or the integrated .Net support provided by Iron Foundry. ActiveState has several wiki/blog pages on this.

Other languages, including Clojure, can be added by importing Heroku BuildPacks.

Demo Apps

Stackato demo programs are available at http://community.activestate.com/stackato/demos. Some are also available through the app store in the browser based command console.

Hopefully I'll put together another blog article with some demo deployments.







Micro Cloud Foundry V2 - PaaS on a stick the easy way with Stackato

Cloud Foundry is an open source Platform as a Service (PaaS). It was initially driven by VMWare and later given to Pivotal when it was spun off from VMWare and EMC. The original version of Cloud Foundry was available as a hosted service and in a Micro developer version that ran on a single VM that could be hosted on a developer workstation. A community grew around this and there are now multiple versions of Cloud Foundry available.



Cloud Foundry supports a Multi-tenant architecture where a cloud can support multiple organizations each of which can have multiple spaces to partition their work.  This architecture is available across all CF PaaS installations.


Cloud Foundry updated to a new V2 architecture and implementation in June of 2013. The CF "on a stick" micro version was abandoned at that time. The Micro-CF codebase was apparently a separate code base from the main line. The CF developers do build and test on local (to their machine) virtual machines that run miniature versions of CF. Documentation on running locally is available on the Cloud Foundry docs site using either Vagrant or Nise-Bosh. I found this really complicated for someone who just wants to write apps.

Developer CF the Simpler Way with ActiveState Stackato

ActiveState implements a PaaS based on Cloud Foundry that can be hosted by them, deployed in VMWare, hosted in Amazon EC2, hosted onsite or other custom configurations. Their 3.x release, currently 3.0.0 beta, is based on the new Cloud Foundry V2 code base. They provide really nice pre-built virtual machines for developers that you can download from their website. ActiveState provides great 3.0 release notes that can give you a feel for how the Micro version is configured and what the differences are between Micro and Enterprise.

ActiveState provides developer images in 4 different formats: VirtualBox, VMWare (Workstation/Fusion/Player), VMWare vSphere and KVM. The virtual machine is a 1.2GB download that uses 5.6GB when unpacked. The VM's grow-on-demand virtual disk can expand to 58GB as you use it. Plan accordingly.

Setup

You should know the following before installation

Documentation Highlights

  • Stackato has nice documentation on their web site
  • The downloaded Micro PaaS VM uses bridged networking making it visible to all machines on your network. NAT is recommended if you want the VM to be visible only from the machine it is installed on.
  • The VM's network configuration uses Zero-Configuration networking with the Avahi stack. Machines using zero-configuration networking are registered in the .local domain. Browsers can browse to .local machines and the ping command can "ping" them.
    • Zero-configuration networking is known as Bonjour Services on Apple machines. It and mDNS work "right out of the box" on OS X.
    • Microsoft does not support this out of the box. A quick test shows that Apple's Bonjour services for Windows is not enough to get the VM to work right on a Windows machine. Help can "sort of" be found in the ActiveState docs:
      • The <machine>.local name is picked up and can be found by the ping command.
      • The api.<machine>.local name is not picked up and cannot be found by the ping command. You may have to edit c:\windows\System32\Drivers\etc\hosts to add api.<machine>.local (see the example entries after this list).
      • You will also have to add an entry for each deployed application.  
  • The VM comes with a default user: stackato with password: stackato.  This is used until you set up your own account during the setup process
  • The password for the stackato user will be changed to match the password of the user id you create as part of the setup process.
  • The Micro CF instance is configured for bridged networking by default.  You can use NAT if you wish to hide the VM from machines on your network. 
    • VMNet8 is the NAT network in a VMWare system.
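
For example, on a Windows client the hosts file entries might look something like the following. The IP address is whatever your Stackato VM reports on its console, and the application name is illustrative:

192.168.1.50  stackato-aw7c.local
192.168.1.50  api.stackato-aw7c.local
192.168.1.50  myapp.stackato-aw7c.local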

Installation Process

We can now walk through creating a CF Micro VM on our developer workstation based on the Stackato VM. This setup is based on the Stackato documents provided by ActiveState.

  1. Download the VM for your Hypervisor from the ActiveState web site.
  2. Unzip the file.
  3. Move the VM to some directory, probably the default place for virtual machines for your Virtualization software.
  4. Double click on the vm. It can take a while to set up.
  5. You should see a VM console start screen with the machine's .local machine name. Notice the local address that is the admin console address.
  6. Point a browser to the https://<hostname>.local address you see in the console. In my case it is https://api.stackato-aw7c.local/console/
  7. Create your first user / tenant and log in. 
  8. You should see some help links and be able to browse around to see how your micro-cloud is configured

The ActiveState Stackato (CF Micro) instance comes with some services pre-enabled but without any service instances created. You can see all of the components in a given node through the Cluster Nodes admin screen. This screen shows the internal components and the PaaS services.
You can also see just the PaaS services on the Services screen.  These represent services that can be instantiated to support your applications.



Stackato CLI Installation and Configuration

ActiveState provides their own command line program (CLI) that takes the place of the CloudFoundry vmc Ruby command line.  Note: CloudFoundry is also replacing vmc with a new Go based command shell.
  1. Download the client (CLI) from the link on the quick start page. It looks like the latest client is still 2.0.2 even though we're running the V3 3.0.0 beta virtual machine.
  2. Unzip the client.
  3. Put the executable somewhere on your $PATH (or %PATH%). I copied it to /usr/local/bin on my Mac and Linux machines.
  4. Make sure the program has the executable flag set on a Linux machine: chmod +x /usr/local/bin/stackato
  5. Launch a terminal window.
  6. Run the client, stackato.
  7. You have to target your CF (Stackato) virtual machine and login from the command line. The Quick Start guide describes this.
    1. Run stackato target
    2. Run stackato login
  8. Notice "Error: No spaces available to chose" in the previous image.  That's because I hadn't created a space for my organization "FreemanSoft Inc".  You can create a space on the organization web screen using a web browser or via the stackato create-space <name> command in the command line client.  Your userid will automatically be bound to the space when you create it.
    • Example: I ran this command:
      stackato create-space dev
      to create a "dev" space in my organization.

Services, Service-Plans, Provisioned Services and Service Instances

Cloud Foundry applications are made up of the application executables plus some set of PaaS services.  Those services can be caching, database, messaging or other 3rd party programs.  Cloud Foundry uses the following terms.

  • Services: The service types that are available in a Cloud Foundry cluster. Standard types include memcached, MongoDB, RabbitMQ, Redis, etc. A service definition includes charging algorithms and default configuration for instantiated services.
  • Service-Plans: ActiveState's name for Services. You will see this name in the CLI responses.
  • Service Instance: A deployed service that can be bound to by an application. There can be multiple instances of a service in a Cloud Foundry cluster.  Each instance has a unique name that is used as part of the binding process.
  • Provisioned Services: Another name for a Service Instance.


Stackato comes with support for several open source packages. Each service has a default configuration that is applied to any created instance. You can reach the settings via the web interface. The following menu shows all the features available "out of the box" in an ActiveState Stackato Micro instance.

The Micro edition comes with three default service-plans (roles) enabled: file system, MySQL, Postgres. You can find these in the web console

or the command line with the stackato services command:
This matches with the 3.0 release notes which mention:
  • [97164] Micro cloud starts with Memcached, Redis, PostrgreSQL, RabbitMQ, and MongoDB roles disabled by default (enable via Managment Console).

Enabling Additional Roles / Services

You can enable additional roles ("service-plans") via the Admin -> Cluster admin screen.

Click on the configuration icon on the panel and the role admin floating pane will come up showing the default/current list of services.

Select any of the services you'd like to add. Here we enable MongoDB, RabbitMQ 3.x and Redis.

You can also see the newly enabled services with the stackato services command in the Service Plans section.

Service (Plan) / Role Notes

  1. Harbor (http://www.activestate.com/blog/2013/04/java-debugging-stackato-harbor) and the load balancer can be used to provide external access to components inside the cloud. This is useful when opening up non-http ports for debuggers and other non-web monitors. Normally you only open http/https and run everything else internal to the micro "data center"
  2. RabbitMQ is rabbit 2.x (2.8.7 as of 11/2013)
  3. RabbitMQ3 is rabbit 3.x (3.1.3 as of 11/2013)
  4. You can add new services to a Cloud Foundry Cluster.  See this web page for details

PaaS Service Instance Creation

You create service instances that can be used by applications. The applications bind to services by name using custom Cloud Foundry configuration. This decouples the application from the location of the PaaS service. You can have multiple services of the same service type with different names. Different cluster-deployed applications can use different instances or they can share one.
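
When an application binds to an instance, Cloud Foundry style platforms typically hand the connection details to the application through the VCAP_SERVICES environment variable. Here is a rough sketch of what a bound MongoDB instance might look like; the exact structure and field names vary by platform version, so treat this as illustrative:

VCAP_SERVICES={"mongodb":[{"name":"mongodb1",
  "credentials":{"host":"192.168.x.x","port":25001,
    "username":"...","password":"...","db":"db1"}}]}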

Example MongoDB Instance Creation

Here we create two MongoDB services, mongodb1 and mongodb2, using the Command Line Interface (CLI):
  1. stackato create-service mongodb mongodb1
  2. stackato create-service mongodb mongodb2

We then list the services plans and instances from the command line:
  1. stackato services

Cloud services are not visible from outside the cluster. They are private to the cluster applications.  Cloud Foundry does provide tunneling capability that can make services visible for debugging, data loading or other needs.
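
If memory serves, the Stackato client exposes that tunneling capability through a tunnel command; something like the following, though you should check stackato help for the exact syntax on your client version:

stackato tunnel mongodb1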

You can get a feel for the internal deployment model of Cloud Foundry by using the virtual machine console. Here we see the two instances of MongoDB deployed above.



Cloud Foundry Command Line Tools

Cloud Foundry's command line tools can also be used with Stackato in the same way they can be used with any true Cloud Foundry based PaaS. Cloud Foundry V2 (June 2013) replaces the Cloud Foundry V1 vmc Ruby modules with new Ruby cf modules. The latest version at the time of this article is cf-V5. The Cloud Foundry team has already proposed replacing cf-V5 with cf V6 written in Golang, the Go programming language. In the meantime this section describes how to connect to a CF instance and do basic operations with the Ruby based cf-v5.
  1. Install Ruby on your machine.
  2. Add "cf" ruby gems with gem install cf
  3. Connect to the Cloud Foundry Instance with cf connect <hostname>.local
  4. Log into the Cloud Foundry Instance with cf login <username>
  5. Get a list of all the configured service instances using the cf services command.
  6. I couldn't find a simple way of seeing all the available services (service-plans). You can cheat and find a list by running cf create-service with no service type. It will list all the available service types. Press control-c to end the command.

Writing and Deploying Applications to Micro CF

Future blog articles will describe the languages available in Cloud Foundry in general and the ActiveState PaaS in particular.  Click to read a description of the languages available in this micro cloud. 

Saturday, October 26, 2013

Viewing all files in OSX Mavericks

I was having a problem with VMWare after my Windows 8.1 upgrade.  Windows 8.1 had no network connectivity.  I ran into the same problem after upgrading a Windows 8 guest to Windows 8.1 on a Windows host.

I wanted to look at my VMWare Fusion configuration for my bootcamp partition. VMWare stores its information in ~/Library/Application Support/VMware Fusion/Virtual Machines, which is normally not visible in the Mac Finder. There are a bunch of pages on the interweb that describe how to make all directories and files show up in Finder. Many of them are incorrect for Mavericks, which is now case sensitive. This posting describes how to show all files in Finder.

You can show all files by typing the following in a terminal window

defaults write com.apple.finder AppleShowAllFiles -boolean true
killall Finder

You can go back to the default view by typing the following in a terminal window

defaults delete com.apple.finder AppleShowAllFiles
killall Finder

Saturday, August 10, 2013

Monitoring Azure from a Raspberry Pi?

We have an Azure cloud network that we would like to monitor with a standalone status board. One option is to do the "all Microsoft" Powershell thing with a Windows device. Another option is to use the Linux and Mac Azure management tools released in 2012. There is also a Node.js based set of tools. An older MSDN blog entry is also useful for understanding this. Another option is to directly consume the Azure REST services used by the Node.js library. I decided to try the Node.js tools. REST service information is available on MSDN.

Hardware

Sometimes when you have stuff lying around you just have to come up with some way to use it. Sometimes it works out fine and sometimes not so much. I have a 700MHz 512MB Raspberry Pi computer sitting on the shelf with nothing to do. It's that nifty Linux ARM based computer with built-in Ethernet, USB, video and a hardware extension bus. It hosts our monitoring scripts.

I also have the Adafruit LCD Keypad Kit for the Raspberry Pi. It gives you a 2 line LCD display with an RGB backlight.

Raspberry Pi setup

Raspberry Pi setup is pretty straightforward. Follow the instructions in the quick start guide http://www.raspberrypi.org/quick-start-guide to install and boot the operating system. I had some additional config to do:

  1. Use raspi-config to set the timezone
  2. Use raspi-config to set the locale and keyboard type
  3. Install the chromium browser so that we don't have to download Azure publishing configuration onto a different machine. The default browser can't download the settings.
    • sudo apt-get install chromium-browser

Azure Command Line Interface (CLI) setup

The library is written on Node.js
  1. Install Node.js. I found this article useful. Later pre-compiled ARM versions of Node.js are available on the nodejs.org web site. I used v0.10.13. Install it into /usr/local.
  2. Install the Azure Node.js based command line tools using the npm command as documented on windowsazure web site.  
    1. Download the Azure publishing settings per the windowsazure.com instructions.  I found I had to use a browser for this because of the way credentials are handled. The default Raspberry Pi browser is not powerful enough. Use something like chromium or use a different machine and transfer over the network or via thumb drive.
    2. Load those Azure publishing settings into the Azure Node.js configuration so that it can use those credentials to talk with the Azure cloud.
  3. At this point you can run Azure command line commands from a terminal window. Every command starts with azure. Run something like azure vm list to get a list of all virtual machines under that Azure subscription.

Performance Problems with Azure CLI on Raspberry Pi

The main problem is that it takes about 9.5 seconds to run even the azure help command. This is a pretty simple command with no obvious network or Azure dependencies. Here are some more timing numbers.

azure help                    9.7 seconds
azure vm list                 21 seconds
azure vm show <machineName>   21 seconds

Azure CLI Limitations

The Node.js based tools can control VMs and check VM status. They cannot retrieve any metrics or watch the load or behavior in the Azure network. This is a showstopper for our monitoring project.

Adding an LCD panel to the Raspberry Pi

The Adafruit folks have a great tutorial on how to buy, build and write software for their 2x20 LCD display.  It describes all of the things you have to do to enable and configure the I2C based shield on a stock Raspberry Pi.

Conclusion

This project failed for two reasons:
  1. The CLI API does not provide the information needed for monitoring.
  2. The Azure Node.js library has too much overhead to perform on a Raspberry Pi.

Sunday, August 4, 2013

Azure Point to Site VPN - private access to your cloud environment

You don't really have to worry about connectivity when you have a single in-house data center. All your proprietary data is on "your" network that you manage. Your firewall protects your sensitive information from internet intruders. The internal network provides routing and name lookup services.

You don't really worry about connectivity when you are consuming publicly available resources on the internet. Your internal network allows outbound connections to the internet. Your gateway knows which DNS servers provide name support.

Note: IPV4 network numbers in the diagrams are just examples. They happen to be how my internal and Azure networks are configured.

Azure a Cloud Provider

Cloud providers give you the ability to spin up off-site data centers that are visible and reachable from the internet. The actual remote data center organization and configuration is somewhat opaque to you since it is managed and controlled by the cloud provider. The cloud provides internal network connectivity between your virtual hosts. PaaS services handle internal cloud-specific configuration and in-cloud machine-to-machine connectivity, usually through proprietary APIs. Public endpoints are usually web services or web sites.

There is limited management connectivity from the outside. Allowing management access via the public internet increases the number of attack vectors and odds of system intrusion.


Azure users get around the remote nature of the environment by enabling Remote Desktop Protocol access to their Windows machines. This lets any machine on the internet RDP into the Azure machines as long as they can provide the right credentials. This is an obvious security risk. Azure users add remote Powershell, remote profiling, remote management and non-web accesses by enabling those additional public endpoints. This increases the number of attack vectors against the Azure machines. We want to minimize the number of ports/services that are visible to the internet while providing as much corporate/owner and operational access as possible.

Corporate Azure users get around the remote nature by extending a site-to-site VPN tunnel that joins the cloud and the internal corporate network. Some companies will not allow this type of network configuration because they are worried about the bi-directional open nature of the connection. Site-to-site has additional issues, like the fact that it does not help off-site developers and operational folks, because only one site-to-site connection is supported.

Note that this picture shows an Azure DNS service. This is used for internal name service for processes that run inside your Azure account. They could always connect by IP address if Azure DNS has not been set up. Azure DNS is not required for external access. That all happens on cloudapp.net and is supported by the cloudapp.net DNS servers.

See this document for a description of how to set up Azure point to site VPN networks and connections.

Network Organization as a First Step, VLANs

Azure users must first organize their networks in a way that makes it possible to provide both public access to web sites and services while making it also possible to provide secure non-public access to the management functions.  The default Azure configuration throws all VMs and Cloud Services into a single large network.  
The first step is to create a Virtual LAN (VLAN). VLANs provide a way to organize a network into different segments and are the base level network construct for creating subnets, including remote connection tunnels (VPNs). Pretty much everything you previously created will have to be deleted and recreated sitting on a VLAN instead of your default Azure network. You will want to subdivide the VLAN namespace into subnets. Normally they are just syntactic sugar because Azure doesn't support firewalls or filtering between subnets. They are important to the point-to-site VPN because Azure VPN will use one of the subnets as the landing zone for VPN connections.

Initial Azure Configuration, Preparing for VPN

Azure VPN requires a VLAN subnet of its own that acts as a type of DMZ network between your on-prem network and the Azure VLAN that you configured. Systems administrators use the Azure Management Portal to create the external connectivity subnet. It allows you to pick the number of addresses on the VPN subnet. Each connected VPN client consumes an address on this VPN subnet. The VPN public gateway also takes two addresses. You should size appropriately for the number of point-to-site connections that you expect.
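
As a purely illustrative sketch, a VLAN might be carved up something like this, with the last subnet reserved for the VPN gateway and its connected clients:

Azure VLAN address space:  10.0.0.0/16
Application subnet:        10.0.1.0/24
Data subnet:               10.0.2.0/24
VPN gateway subnet:        10.0.254.0/27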


The second piece of the VPN connection is the creation of the actual public IP gateway for the VPN connection subnet.  This is another option on the Azure Management Portal.  Microsoft creates a public IP address and creates internal routing from that address to the VPN subnet. It also creates VPN virtual appliances that sit behind that VPN public IP address.  

The VPN is protected by certificates. I'm not going to go into details here. You can use the Microsoft MSDN point-to-site VPN configuration page for this information.

Azure Point-to-Site Remote Back Channel Connectivity

A VPN connection makes the local machine part of a remote connection using a secure tunnel. Microsoft provides a VPN client side configuration program that is customized to a specific Azure VPN public address and network.  This program is dynamically created in the Azure Management Portal and makes use of built-in Windows 7 / 8 VPN capabilities. 

A point-to-site VPN connection creates an additional network interface on the local machine that is part of the network defined in the VPN connection subnet. Essentially it puts a machine "on the network" in the VPN subnet portion of your VLAN. This means the local machine has access to all private resources on the VLAN. The local user can connect to non-internet-public ports that are unavailable to others outside the Azure cloud. This is because the local machine is part of the Azure network when connected to the VPN tunnel.


The default Windows VPN configuration does not isolate the local machine to the VPN network. It leaves all the other network connections active. This means a point-to-site connected machine has access to the internal corporate network, the Azure VLAN and the internet. Users must keep anti-virus, rootkit, and malware protection software up to date to stop attackers from attacking the Azure or corporate network through the local machine. It is possible to disable all other network connections while attached to the VPN. You see this a lot with VPN connections that are restricted due to corporate policies.

Conclusion

<to be written>

Thursday, August 1, 2013

The simple but awesome NeoPixel Shield with an Arduino

The folks at Adafruit have put out a nice "NeoPixel shield" which is essentially an 8x5 addressable RGB LED strip built into an Arduino shield. They have created a nice library available on github. You can see the project on their product web page. Here is a picture of the board mounted on an Arduino Uno. The LED in the bottom right corner is LED 0. The LED in the bottom left corner is LED 7. The second row up is LEDs 8-15 and so on. The LED in the upper left corner is LED 39.



This picture shows the LED panel on my desktop. It totally overwhelmed the camera to the point that the rest of the room looks dark.


Firmware

I've created simple Arduino firmware that lets you send LED blinky commands over the serial port via USB. You can set each pixel color individually along with one of 10 blink patterns. Pattern 0 is off and pattern 1 is solid on, so there are 8 actual blink patterns. The firmware is located on github.
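
For example, once the firmware is loaded you can type commands like these into the Arduino serial monitor (9600 baud, carriage return line ending):

rgb 0 255 0 0 1     set LED 0 to solid red
rgb -1 0 0 128 4    blink all 40 LEDs blue using pattern 4
blank               turn everything off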

The LEDs are daisy chained into a single giant shift register. The far end of the strand is updated by serially shifting the values through all the other LEDs. The entire strand is updated on each push operation. The WS2811/2812 controllers have fairly strict timing requirements when shifting data through and latching data into the strand. Adafruit has implemented a nice library that meets all the timing requirements through hand coding and interrupt control. This means that interrupts are disabled for long enough periods of time that I had to turn down the serial port rate to 9600 to avoid character loss.

TFS Build Watcher Application Driver

I've also added a driver to the TFS Build Watcher software on github.  Individual build statuses are mapped to individual LEDs so you can get an overall feel of your various builds.




/*  
  Written by Freemansoft Inc.  
  Exercise Neopixel (WS2811 or WS2812) shield using the adafruit NeoPixel library  
  You need to download the Adafruit NeoPixel library from github,   
  unzip it and put it in your arduino libraries directory  
  commands include  
  rgb  <led 0..39> <red 0..255> <green 0..255> <blue 0..255> <pattern 0..9>: set RGB pattern to pattern <0:off, 1:continuous>  
  rgb  <all -1>  <red 0..255> <green 0..255> <blue 0..255> <pattern 0..9>: set RGB pattern to pattern <0:off, 1:continuous>  
  debug <true|false> log all input to serial  
  blank clears all  
  demo random colors   
  */  
 #include <Adafruit_NeoPixel.h>  
 #include <MsTimer2.h>  
 boolean logDebug = false;  
 // pin used to talk to NeoPixel  
 #define PIN 6  
 // Parameter 1 = number of pixels in strip  
 // Parameter 2 = pin number (most are valid)  
 // Parameter 3 = pixel type flags, add together as needed:  
 //  NEO_RGB   Pixels are wired for RGB bitstream  
 //  NEO_GRB   Pixels are wired for GRB bitstream  
 //  NEO_KHZ400 400 KHz bitstream (e.g. FLORA pixels)  
 //  NEO_KHZ800 800 KHz bitstream (e.g. High Density LED strip)  
 // using the arduino shield which is 5x8  
 const int NUM_PIXELS = 40;  
 Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_PIXELS, PIN, NEO_GRB + NEO_KHZ800);  
 typedef struct  
 {  
  uint32_t activeValues;     // packed 32 bit created by Strip.Color  
  uint32_t lastWrittenValues;  //for fade in the future  
  byte currentState;       // used for blink  
  byte pattern;  
  unsigned long lastChangeTime; // where are we timewise in the pattern  
 } pixelDefinition;  
 // should these be 8x5 instead of linear 40 ?  
 volatile pixelDefinition lightStatus[NUM_PIXELS];  
 volatile boolean stripShowRequired = false;  
 ///////////////////////////////////////////////////////////////////////////  
 // time between steps  
 const int STATE_STEP_INTERVAL = 10; // in milliseconds - all our table times are even multiples of this  
 const int MAX_PWM_CHANGE_SIZE = 32; // used for fading at some later date  
 /*================================================================================  
  *  
  * bell pattern buffer programming pattern lifted from http://arduino.cc/playground/Code/MultiBlink  
  *  
  *================================================================================*/  
 typedef struct  
 {  
  boolean isActive;     // digital value for this state to be active (on off)  
  unsigned long activeTime;  // time to stay active in this state stay in milliseconds   
 } stateDefinition;  
 // the number of pattern steps in every blink pattern   
 const int MAX_STATES = 4;  
 typedef struct  
 {  
  stateDefinition state[MAX_STATES];  // can pick other numbers of slots  
 } ringerTemplate;  
 const int NUM_PATTERNS = 10;  
 const ringerTemplate ringPatterns[] =  
 {  
   // state0 state1 state2 state3   
   // the length of these times also limits how quickly changes will occur. color changes are only picked up when a true transition happens  
  { /* no variable before stateDefinition*/ {{false, 1000}, {false, 1000}, {false, 1000}, {false, 1000}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{true, 1000}, {true, 1000}, {true, 1000}, {true, 1000}}  /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{true , 300}, {false, 300}, {false, 300}, {false, 300}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{false, 300}, {true , 300}, {true , 300}, {true , 300}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{true , 200}, {false, 100}, {true , 200}, {false, 800}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{false, 200}, {true , 100}, {false, 200}, {true , 800}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{true , 300}, {false, 400}, {true , 150}, {false, 400}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{false, 300}, {true , 400}, {false, 150}, {true , 400}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{true , 100}, {false, 100}, {true , 100}, {false, 800}} /* no variable after stateDefinition*/ },  
  { /* no variable before stateDefinition*/ {{false, 100}, {true , 100}, {false, 100}, {true , 800}} /* no variable after stateDefinition*/ },  
 };  
 ///////////////////////////////////////////////////////////////////////////  
 void setup() {  
  // 50usec for 40pix @ 1.25usec/pixel : 19200 is .5usec/bit or 5usec/character  
  // there is a 50usec quiet period between updates   
  //Serial.begin(19200);  
  // don't want to lose characters if interrupt handler too long  
  // serial interrupt handler can't run so arduino input buffer length is no help  
  Serial.begin(9600);  
  strip.begin();  
  strip.show(); // Initialize all pixels to 'off'  
  stripShowRequired = false;  
  // initialize our buffer as all LEDS off  
  go_dark();  
  //show a quickcolor pattern  
  configureForDemo();  
  delay(3000);  
  go_dark();  
  MsTimer2::set(STATE_STEP_INTERVAL, process_blink);  
  MsTimer2::start();  
 }  
 void loop(){  
  const int READ_BUFFER_SIZE = 4*6; // rgb_lmp_red_grn_blu_rng where ringer is only 1 digit but need place for \0  
  char readBuffer[READ_BUFFER_SIZE];  
  int readCount = 0;  
  char newCharacter = '\0';  
  while((readCount < READ_BUFFER_SIZE) && newCharacter !='\r'){  
   // did the timer interrupt handler make changes that require a strip.show()?  
   // note: strip.show() attempts to unmask interrupts as much as possible  
   // must be inside character read while loop  
   if (stripShowRequired) {  
    stripShowRequired = false;  
    strip.show();  
   }  
   if (Serial.available()){  
    newCharacter = Serial.read();  
    if (newCharacter != '\r'){  
     readBuffer[readCount] = newCharacter;  
     readCount++;  
    }  
   }  
  }  
  if (newCharacter == '\r'){  
   readBuffer[readCount] = '\0';  
   // this has to be before process_Command because buffer is wiped  
   if (logDebug){  
     Serial.print("received ");  
     Serial.print(readCount);  
     Serial.print(" characters, command: ");  
     Serial.println(readBuffer);  
   }  
   // got a command so parse it  
   process_command(readBuffer,readCount);  
  }   
  else {  
   // while loop exited because too many characters so start over  
  }  
 }  
 /*  
  * blank out the LEDs and buffer  
  */  
 void go_dark(){  
  unsigned long ledLastChangeTime = millis();  
  for ( int index = 0 ; index < NUM_PIXELS; index++){  
   lightStatus[index].currentState = 0;  
   lightStatus[index].pattern = 0;  
   lightStatus[index].activeValues = strip.Color(0,0,0);  
   lightStatus[index].lastChangeTime = ledLastChangeTime;  
   strip.setPixelColor(index, lightStatus[index].activeValues);  
  }  
  // force them all dark immediately so they go out at the same time  
  // could have waited for timer but different blink rates would go dark at slightly different times  
  strip.show();  
 }  
 //////////////////////////// handler //////////////////////////////  
 //  
 /*  
  Interrupt handler that handles all blink operations  
  */  
 void process_blink(){  
  boolean didChangeSomething = false;  
  unsigned long now = millis();  
  for ( int index = 0 ; index < NUM_PIXELS; index++){  
   byte currentPattern = lightStatus[index].pattern;   
    if (currentPattern < NUM_PATTERNS){ // range check for a valid pattern  
    if (now >= lightStatus[index].lastChangeTime   
      + ringPatterns[currentPattern].state[lightStatus[index].currentState].activeTime){  
     // calculate next state with rollover/repeat  
     int currentState = (lightStatus[index].currentState+1) % MAX_STATES;  
     lightStatus[index].currentState = currentState;  
     lightStatus[index].lastChangeTime = now;  
     // will this cause slight flicker if already showing led?  
     if (ringPatterns[currentPattern].state[currentState].isActive){  
      strip.setPixelColor(index, lightStatus[index].activeValues);  
     } else {  
      strip.setPixelColor(index,strip.Color(0,0,0));  
     }  
     didChangeSomething = true;  
    }  
   }  
  }  
  // don't show in the interrupt handler because interrupts would be masked  
  // for a long time.   
  if (didChangeSomething){  
   stripShowRequired = true;  
  }  
 }  
 // first look for commands without parameters then with parameters  
 boolean process_command(char *readBuffer, int readCount){  
  int indexValue;  
  byte redValue;  
  byte greenValue;  
  byte blueValue;  
  byte patternValue;  
  // use string tokenizer to simplify parameters -- could be faster by only running if needed  
  char *command;  
  char *parsePointer;  
  // First strtok iteration  
  command = strtok_r(readBuffer," ",&parsePointer);  
  boolean processedCommand = false;  
  if (strcmp(command,"h") == 0 || strcmp(command,"?") == 0){  
   help();  
   processedCommand = true;  
  } else if (strcmp(command,"rgb") == 0){  
   char * index  = strtok_r(NULL," ",&parsePointer);  
   char * red   = strtok_r(NULL," ",&parsePointer);  
   char * green  = strtok_r(NULL," ",&parsePointer);  
   char * blue  = strtok_r(NULL," ",&parsePointer);  
   char * pattern = strtok_r(NULL," ",&parsePointer);  
   if (index == NULL || red == NULL || green == NULL || blue == NULL || pattern == NULL){  
    help();  
   } else {  
    // this code shows how lazy I am.  
    int numericValue;  
    numericValue = atoi(index);  
    if (numericValue < 0) { numericValue = -1; }  
    else if (numericValue >= NUM_PIXELS) { numericValue = NUM_PIXELS-1; };  
    indexValue = numericValue;  
    numericValue = atoi(red);  
    if (numericValue < 0) { numericValue = 0; }  
    else if (numericValue > 255) { numericValue = 255; };  
    redValue = numericValue;  
    numericValue = atoi(green);  
    if (numericValue < 0) { numericValue = 0; }  
    else if (numericValue > 255) { numericValue = 255; };  
    greenValue = numericValue;  
    numericValue = atoi(blue);  
    if (numericValue < 0) { numericValue = 0; }  
    else if (numericValue > 255) { numericValue = 255; };  
    blueValue = numericValue;  
    numericValue = atoi(pattern);  
    if (numericValue < 0) { numericValue = 0; }  
     else if (numericValue >= NUM_PATTERNS) { numericValue = NUM_PATTERNS-1; };  
    patternValue = numericValue;  
    /*  
    Serial.println(indexValue);  
    Serial.println(redValue);  
    Serial.println(greenValue);  
    Serial.println(blueValue);  
    Serial.println(patternValue);  
    */  
    if (indexValue >= 0){  
     lightStatus[indexValue].activeValues = strip.Color(redValue,greenValue,blueValue);  
     lightStatus[indexValue].pattern = patternValue;  
    } else {  
     for (int i = 0; i < NUM_PIXELS; i++){  
      lightStatus[i].activeValues = strip.Color(redValue,greenValue,blueValue);  
      lightStatus[i].pattern = patternValue;  
     }  
    }  
    processedCommand = true;    
   }  
  } else if (strcmp(command,"blank") == 0){  
   go_dark();  
   processedCommand = true;  
  } else if (strcmp(command,"debug") == 0){  
   char * shouldLog  = strtok_r(NULL," ",&parsePointer);  
   if (strcmp(shouldLog,"true") == 0){  
    logDebug = true;  
   } else {  
    logDebug = false;  
   }  
   processedCommand = true;  
  } else if (strcmp(command,"demo") == 0){  
   configureForDemo();  
   processedCommand = true;  
  } else {  
   // completely unrecognized  
  }  
  if (!processedCommand){  
   Serial.print(command);  
   Serial.println(" not recognized ");  
  }  
  return processedCommand;  
 }  
 /*  
  * Simple method that displays the help  
  */  
 void help(){  
  Serial.println("h: help");  
  Serial.println("?: help");  
  Serial.println("rgb  <led 0..39> <red 0..255> <green 0..255> <blue 0..255> <pattern 0..9>: set RGB pattern to pattern <0:off, 1:continuous>");  
  Serial.println("rgb  <all -1>  <red 0..255> <green 0..255> <blue 0..255> <pattern 0..9>: set RGB pattern to pattern <0:off, 1:continuous>");  
  Serial.println("debug <true|false> log all input to serial");  
  Serial.println("blank clears all");  
  Serial.println("demo color and blank wheel");  
  Serial.flush();  
 }  
 //////////////////////////// demo //////////////////////////////  
 /*  
  * show the various blink patterns  
  */  
 void configureForDemo(){  
  unsigned long ledLastChangeTime = millis();  
  for ( int index = 0 ; index < NUM_PIXELS; index++){  
   lightStatus[index].currentState = 0;  
   lightStatus[index].pattern = (index%8)+1; // the shield is 8x5 so we do 8 patterns and we know pattern 0 is off  
   lightStatus[index].activeValues = Wheel(index*index & 255);  
   lightStatus[index].lastChangeTime = ledLastChangeTime;  
  }  
  uint16_t i;   
  for(i=0; i<strip.numPixels(); i++) {  
   strip.setPixelColor(i,lightStatus[i].activeValues);  
  }     
  strip.show();  
 }  
 //////////////////////////// stuff from the Adafruit NeoPixel sample //////////////////////////////  
 // Input a value 0 to 255 to get a color value.  
 // The colours are a transition r - g - b - back to r.  
 uint32_t Wheel(byte WheelPos) {  
  if(WheelPos < 85) {  
   return strip.Color(WheelPos * 3, 255 - WheelPos * 3, 0);  
  } else if(WheelPos < 170) {  
   WheelPos -= 85;  
   return strip.Color(255 - WheelPos * 3, 0, WheelPos * 3);  
  } else {  
   WheelPos -= 170;  
   return strip.Color(0, WheelPos * 3, 255 - WheelPos * 3);  
  }  
 }  

Saturday, July 13, 2013

Examining how NServiceBus configures and uses RabbitMQ / AMQP

NServiceBus acts as a .Net (C#) distributed messaging abstraction layer for message driven systems. It runs on top of a variety of transports including MSMQ, Azure Storage Queues, ActiveMQ and RabbitMQ. This blog describes how NServiceBus configures and uses RabbitMQ using the Video Store sample program available in the NServiceBus github repository.


Background


Message driven systems are asynchronous distributed systems that have two main message concepts, at least in our project. They have commands, targeted messages, where the sender knows the intended destination. The producer sends the messages to a target. They also have publish/subscribe or event based messages where the sender's target is unknown. The sender posts the event, which is then received by parties that have registered interest (subscribed).
  • Producers: Programs that send messages or post events 
  • Consumers: Programs that receive messages or capture events 
  • Queues: A place where messages are collected and distributed. Messages are put in queues where they are read by consumers.

AMQP Background

AMQP is an open standard based messaging system. Some of the fundamental building blocks leveraged by NServiceBus are

  • Channels: These are the paths that producers communicate over when sending messages 
  • Queues: Consumers attach to these to read messages. Queues exist to serve consumers 
  • Exchanges: Aggregation and distribution points used to route incoming messages from the channels to the queues. There can be multiple upstream producers and multiple consumers for the data coming out of an exchange. There are a variety of exchange types discussed in the RabbitMQ documentation. Exchanges can have upstream and downstream exchanges meaning a message can pass through multiple exchanges before getting to a queue.
All AMQP messages flow through some type of exchange on their way to a queue. The standard mandates a default exchange that binds to every queue by routing key. Messages targeted at a specific queue route through the default exchange with the routing key set to the target queue name.
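
As a sketch of what that looks like outside of NServiceBus, here is how a producer using the raw RabbitMQ .Net client could send a message to a specific queue through the default exchange. The queue name is borrowed from the VideoStore sample discussed below; everything else is illustrative:

        using System.Text;
        using RabbitMQ.Client;

        public static class DefaultExchangeExample
        {
            public static void Main()
            {
                var factory = new ConnectionFactory { HostName = "localhost" };
                using (var connection = factory.CreateConnection())
                using (var channel = connection.CreateModel())
                {
                    // make sure the destination queue exists
                    channel.QueueDeclare("VideoStore.Sales", true, false, false, null);
                    byte[] body = Encoding.UTF8.GetBytes("SubmitOrder payload");
                    // "" is the default exchange; the routing key is the queue name
                    channel.BasicPublish("", "VideoStore.Sales", null, body);
                }
            }
        }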

Useful rabbit tutorials can be found here http://www.rabbitmq.com/getstarted.html

NServiceBus Background

NServiceBus uses a type driven model similar to MVC that is familiar to .Net developers. NServiceBus messages are explicitly defined concrete classes or interfaces. Each message or event type is a different C# type. It supports message routing and topic subscriptions based on explicit C# types, namespaces or assemblies. Content based routing is not supported. Classes and interfaces can be declared as NServiceBus messages and events with marker interfaces or through a config file. The latter is nice when you don't want any NServiceBus dependencies in your message model.

  • Commands: Point to point messages sent by a client with an expected target. The target is configured in the executable’s properties config file. Commands may or may not expect a reply or an event. The caller must know the target and routing is configured on the sender’s end. 
  • Messages: Usually these are the types of the asynchronous replies to commands. The system provides auto-correlation support so that the originator receives the reply. These are different from Events, where all interested parties can see messages sent as a result of some action. The process replying doesn’t have to know the target. NServiceBus takes care of the routing. 
  • Events: These are pub/sub messages that are broadcast to interested parties. The message producer has no idea who might be interested in the message. Routing is driven off of the endpoints that are interested in the message.
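
In the marker interface style, these three categories map to NServiceBus's ICommand, IMessage and IEvent interfaces. A minimal sketch (the type names are borrowed from the VideoStore domain; the properties are illustrative):

   // A targeted command sent to a specific endpoint
   public class SubmitOrder : NServiceBus.ICommand
   {
       public int OrderNumber { get; set; }
   }

   // A broadcast event; interfaces work as well as classes here
   public interface OrderAccepted : NServiceBus.IEvent
   {
       int OrderNumber { get; set; }
   }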
Message producers know where they want to send commands. Commands, direct messages, are routed to explicit endpoints or queues.  Message target queues are defined in an XML configuration file, usually web.config or app.config.  Here is an example from the VideoStore example that binds a namespace to an outbound queue name:

     <add Assembly="VideoStore.Messages" Namespace="VideoStore.Messages.Events" Endpoint="VideoStore.Sales"/>
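
That line lives inside the standard NServiceBus UnicastBusConfig section; in context it looks roughly like this (only the mapping shown above is taken from the sample):

     <UnicastBusConfig>
       <MessageEndpointMappings>
         <add Assembly="VideoStore.Messages" Namespace="VideoStore.Messages.Events" Endpoint="VideoStore.Sales"/>
       </MessageEndpointMappings>
     </UnicastBusConfig>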

The pub/sub model is different.  Message publishers do not know who is interested in a message.  It is the consumers that declare their interest in messages of various types.  NServiceBus discovers a program's interest in specific event types when it scans assemblies for classes with certain interfaces and signatures. This example declares a handler class interested in the OrderAccepted event type:

   class OrderAcceptedHandler : IHandleMessages<OrderAccepted>

The actual handler code itself follows other .Net MVC-like conventions. This handler processes OrderAccepted events:

       public void Handle(OrderAccepted message)
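
Put together, a complete handler is just a class with that interface and method. A sketch; the body and the OrderNumber property are illustrative:

   class OrderAcceptedHandler : IHandleMessages<OrderAccepted>
   {
       public void Handle(OrderAccepted message)
       {
           // NServiceBus instantiates this class and calls Handle()
           // whenever an OrderAccepted event arrives on the endpoint's queue
           Console.WriteLine("Order {0} accepted", message.OrderNumber);
       }
   }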

The VideoStore Example
NServiceBus has a Video Store example that has five (5) executables including a web interface, saga processor and hosted NServiceBus endpoints:
  • ContentManagement: Simulates a download service
  • CustomerRelations: Tracks customers; publishes the ClientBecamePreferred event
  • ECommerce: A simulated web store
  • Operations: Handles the ProvisionDownloadRequest command sent by ContentManagement
  • Sales: A sales management saga 
There are three (3) targeted messages, commands:  
  • ProvisionDownloadRequest: Sent from Content Management to Operations
  • SubmitOrder: Sent from ECommerce to Sales
  • CancelOrder: Sent from ECommerce to Sales. 
There are five (5) published events that are processed in a publish/subscribe type way.
  • ClientBecamePreferred: Published by CustomerRelations. Subscribed to by CustomerRelations.
  • DownloadIsReady: Published by ContentManagement. Subscribed to by ECommerce
  • OrderAccepted: Published by the Sales saga. Subscribed to by ContentManagement and CustomerRelations
  • OrderCancelled: Published by the Sales saga. Subscribed to by ECommerce
  • OrderPlaced: Published by the Sales saga. Subscribed to by ECommerce
Here is the command/message/event configuration from the VideoStore example that tells NServiceBus how to categorize the various message types.

   Configure.Instance
       .DefiningCommandsAs(t => t.Namespace != null
            && t.Namespace.StartsWith("VideoStore")
            && t.Namespace.EndsWith("Commands"))
       .DefiningEventsAs(t => t.Namespace != null
            && t.Namespace.StartsWith("VideoStore")
            && t.Namespace.EndsWith("Events"))
       .DefiningMessagesAs(t => t.Namespace != null
            && t.Namespace.StartsWith("VideoStore")
            && t.Namespace.EndsWith("RequestResponse"))
       .DefiningEncryptedPropertiesAs(p => p.Name.StartsWith("Encrypted"));

NServiceBus on RabbitMQ

NServiceBus endpoints all have a single inbound queue.  All messages sent to that endpoint go over that single queue where they are then distributed to the individual handlers.  NServiceBus configures the message and exchange routing so that all Commands and Events destined for an endpoint end up in that endpoint’s single inbound queue.  RabbitMQ creates that inbound queue on endpoint startup when NServiceBus binds that endpoint to a queue name in RabbitMQ. Note: AMQP brokers create queues when a receiver binds to a queue name, not when a message is sent to a queue name.

NServiceBus makes use of the default message exchange for all Command routing. It explicitly creates RabbitMQ exchanges for Event pub/sub traffic.
 
Commands
NServiceBus uses the AMQP default exchange that is automatically configured in every RabbitMQ VHost.  The VideoStore example has two Command producers with three commands between them. There are two endpoints that receive these commands.

  • ContentManagement sends the ProvisionDownloadRequest command to the Operations endpoint through the Operations queue 
  • ECommerce sends the SubmitOrder and CancelOrder commands to the Sales endpoint through the Sales queue.


Commands are sent with Bus.Send. NServiceBus knows to set the AMQP routing key to the endpoint name before pushing the message onto the channel and into the default exchange.  The default exchange naturally uses the routing key to find a queue with the same name.
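
From the sender's perspective this is a one-liner; the destination queue never appears in code because it comes from the endpoint mapping in the config file. A sketch; the property names and values are illustrative:

   // In the ECommerce endpoint: routed to VideoStore.Sales via the
   // MessageEndpointMappings entry, not by anything in this code
   Bus.Send<SubmitOrder>(order =>
   {
       order.OrderNumber = 42;                  // illustrative values; the sample's
       order.VideoIds = new[] { "intro2013" };  // SubmitOrder carries similar fields
   });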


Events

NServiceBus uses AMQP exchange-to-exchange and exchange-to-queue routing to build a delivery map for published events. This lets NServiceBus support many producers, each with many event types and many interested parties.

The NServiceBus publish command matches the C# Type of the event to an exchange name and sends the event to that exchange. The exchange points at all the endpoint exchanges that front the individual endpoint queues. This diagram shows how the Video Store’s three (3) event producers post the five (5) different event types to three (3) different consumers.



It’s pretty straightforward once you see the diagram.

  1. NServiceBus scans for all the Event types based on either the marker interfaces or the unobtrusive mode configuration. It builds a Fanout exchange for each event type. 
  2. NServiceBus scans every endpoint to see if it handles any events.  It builds a Queue Proxy exchange for each identified endpoint with the same name as the queue.   
    1. You can see in the picture above that two of the endpoint queues do not have matching exchanges. This means they don’t subscribe to any events. 
  3. NServiceBus uses the same scan to build a binding between the Event exchanges and the appropriate Queue proxy exchanges (sketched in raw AMQP terms after this list). 
  4. The endpoints listen on the queues for messages.
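
For one event type and one subscribing endpoint, the topology NServiceBus builds is roughly equivalent to these raw RabbitMQ .Net client declarations. A sketch; the exact exchange names the transport generates may differ:

   // Fanout exchange named for the event type
   channel.ExchangeDeclare("VideoStore.Messages.Events.OrderAccepted", "fanout", durable: true);

   // Queue proxy exchange and inbound queue share the endpoint's name
   channel.ExchangeDeclare("VideoStore.ContentManagement", "fanout", durable: true);
   channel.QueueDeclare("VideoStore.ContentManagement", durable: true, exclusive: false, autoDelete: false, arguments: null);

   // Exchange-to-exchange binding: event exchange -> queue proxy exchange
   channel.ExchangeBind(destination: "VideoStore.ContentManagement", source: "VideoStore.Messages.Events.OrderAccepted", routingKey: "");

   // Exchange-to-queue binding: queue proxy exchange -> inbound queue
   channel.QueueBind("VideoStore.ContentManagement", "VideoStore.ContentManagement", "");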

Events are posted using Bus.Publish(). The publish process looks like:

  1. NServiceBus sends them to the exchange with the same name as the event type. 
  2. The exchange then passes the message to all bound queue (proxy) exchanges. 
  3. The queue proxy exchanges forward all the gathered messages onto their bound queues.
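
The publishing code itself is symmetric with Bus.Send. A sketch; the property assignments are illustrative, and in the sample the publisher is the Sales saga:

   // In the Sales saga: broadcast to whoever subscribed to OrderAccepted
   Bus.Publish<OrderAccepted>(accepted =>
   {
       accepted.OrderNumber = Data.OrderNumber;  // Data is the saga's state object
       accepted.VideoIds = Data.VideoIds;
   });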

Wrapup


NServiceBus has a type based messaging and routing model that it implements on top of the RabbitMQ transport by leveraging the AMQP exchange and queue architecture.  Beyond the default exchange used for Commands, it only creates AMQP fanout exchanges.  AMQP header and content based routing is not used because it is not needed to support the NServiceBus routing and distribution design.