Sunday, January 25, 2015

Building a development cloud using nested virtualization.

This article is not for you if you are happy developing and testing with at most two machines: your developer machine and a machine under test.  It is also not for you if all of your product evaluation, training, and testing must be done in Azure or some other cloud environment that doesn't support nested virtualization.

This is really about building your own virtual labs or data centers to simulate larger installations or to use as training environments.  Microsoft application stacks or machine clusters often include Active Directory, a database, and some application servers.

You can sometimes install all of this on a single machine.  Multiple machines make sense if you are working on clustering or wish to reuse portions of your setup for future projects.

Microsoft often provides VHDs for some of their more complicated products that save you configuration time. These machines are often more interesting when integrated into some type of application environment including Active Directory, databases, or other tools.  I've used their TFS and SCOM evaluation servers. I always needed Active Directory and some other machines to monitor, build against, or otherwise build out my micro data center.

Virtual Data Center Concept

The normal "build a couple of virtual machines on my dev machine or server" approach makes for a muddy system diagram for the machines under test. They are dependent on each other and on the host machine, which is also used in a non-server role, possibly as a development or monitoring machine.  It is hard to simulate two separate data centers in this configuration since the main host machine can't be "cloned" into the second environment.

We'd really like a "micro cloud" or virtual data center that can be moved from machine to machine unmodified.  This implies that all of the machines reside inside a virtual container that can be cloned or moved.  We can create as many class or lab clusters as we need by cloning the virtual container.

This lets us create a portable micro data center that stays coherent and intact across copy and move operations.

N Level Virtual Nesting

The above diagrams describe a minimum of 3 levels of virtual containers and machines. This implies that we want to run nested virtualized environments.
  1. The container that hosts the Virtual Lab environment.
  2. The inner host representing a Virtual Lab instance, a container that contains all of the virtual guest machines for a given environment.
  3. The guest machines themselves that do all the work.
VMware has pretty much always supported nested virtual machines using their lightweight hypervisor.  This is the lowest-resource way of handling the problem.  It won't work in this case because we have Microsoft-provided VHDs that have hard Hyper-V dependencies.

Many Microsoft example machines come in VHD format. They can have unexpected VM dependencies that force a Hyper-V virtual host into the mix.  Microsoft Hyper-V doesn't support nested virtual machines, making our target configuration impossible in a Hyper-V-only environment.

VMware supports nested virtual host environments and also supports Windows Server Hyper-V as a guest machine. This means we can run VMware virtualization on our server or development machine with a Windows Server Hyper-V host inside it. You could add one more layer of VMs by using another layer of VMware hypervisor.

We can slightly simplify this diagram when running this on a developer workstation if we assume that each micro-cloud Hyper-V host can run in its own virtual host. Replace the outside hypervisor with a "virtualization environment."  We run Windows guest machines inside a Windows Server Hyper-V host inside a Windows virtual environment.

Infrastructure: Networking

The virtual-lab / micro-center environment works because the internal part of the network always looks the same no matter where the outer host moves.  The external network can be DHCP with no reverse DNS capability.

I usually have two network switches, one internal and one external.  I then bind the various network adapters to those two switches to get the networking behavior I want.

Stable Private Networking

  1. The internal network addresses are either fixed IPs or retrieved from a DHCP server that is on the same internal network.  The machines cannot grab their internal network addresses from a server outside the network.
  2. The internal network has its own DNS so that all network lookups inside the virtual cloud always resolve no matter what happens outside the environment. One of the machines should be a DNS server. I usually make the AD server act as the DNS server.
This internal network switch can be of type "internal" or "private" in Hyper-V environments.  I use "internal" because it lets me have a common file share with the Hyper-V host machine.
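On a Hyper-V host, the internal switch and a host-side fixed address can be set up with a few PowerShell cmdlets. This is a minimal sketch; the switch name, VM name, and 192.168.100.0/24 address range are example values, not from this article.

```powershell
# Create an internal switch for the lab's private network (name is an example).
New-VMSwitch -Name "LabInternal" -SwitchType Internal

# Give the host's virtual adapter a fixed address on the internal subnet,
# which also makes a file share on the host reachable from the guests.
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 `
    -InterfaceAlias "vEthernet (LabInternal)"

# Attach a guest VM's network adapter to the internal switch.
Connect-VMNetworkAdapter -VMName "lab-ad01" -SwitchName "LabInternal"
```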

Accessing the Outside World

Machines can operate in a totally isolated environment or they can have access to the internet for updates and other functions. You have a couple of choices for providing outside access:

  1. Provide each guest VM an external network address.
  2. Designate one of the machines to act as a gateway for the others.
  3. Use the virtualization container as a gateway.
You can enable the RAS remote client NAT feature and have the Hyper-V host act as a NAT gateway for the internal network switch. This gives you a firmly fixed internal network while still providing pass-through network access to the outside.  You can turn off this feature to isolate the network without having to make any real network changes.
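On newer Windows versions (Windows 10 / Server 2016 and later), the host-as-gateway option can also be sketched with the built-in NAT cmdlets instead of RAS; the NAT name and address prefix below are example assumptions, and they presume an internal switch whose host adapter already owns 192.168.100.1/24.

```powershell
# NAT the lab's internal subnet out through the host's external connection.
New-NetNat -Name "LabNAT" -InternalIPInterfaceAddressPrefix "192.168.100.0/24"

# Tear the NAT down to isolate the lab without touching any guest configuration.
Remove-NetNat -Name "LabNAT" -Confirm:$false
```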

Infrastructure: Active Directory
Active Directory is the primary system, user, DNS, and network configuration component. Every environment should have an Active Directory server or have access to one.  Developers often don't have enough access to the corporate Active Directory to set up virtual labs and micro clouds. All worker, application, or business applications run on guest machines in both configurations.

The Hypervisor is a thin container with no higher level business functions. You don't add or remove features.

Active Directory servers are guest systems like any other guest in a hypervisor-hosted cloud.  This provides a very regular pattern, making the network look like a network of physical machines.

Hyper-V servers can be treated like a Hypervisor, hosting no higher level functions.

I often add Active Directory and DNS features to the Hyper-V host, making that my "hub" machine.   My lab environments are often small enough that I don't need a sole-purpose AD machine.
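Adding those roles to a Windows Server Hyper-V host can be sketched in PowerShell. The domain name below is an example, and Install-ADDSForest prompts for a safe-mode administrator password and reboots the machine when it finishes.

```powershell
# Add the AD DS and DNS roles to the Hyper-V host (run as administrator).
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote the host to a domain controller for a new lab forest.
# "lab.local" is an example domain name.
Install-ADDSForest -DomainName "lab.local" -InstallDns
```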

You are paying the fully loaded cost of running a Windows Server to host the virtual machines in terms of attack surface, patching, disk space, and CPU. You might as well add network/administrative functions to the host that are not true parts of your application.

Sunday, January 4, 2015

Using files with embedded Mule Expression Language for better looking HTML

Our team returns an HTML home page when anyone makes a GET request at the root of our API or monitoring web endpoints.  This service help page includes a combination of static and dynamic content.

We struggled to build decent-looking pages until we started using the Mule Parse Template component and a Groovy component that invokes the Mule Expression Language (MEL) processor against the markup. The example to the right shows how the default behavior in our web choice router processes a web template.

Sample Code

You can find sample code in the Coda Hale exception metrics counter demo on GitHub.

Parse Template

The Parse Template component loads a file into the Mule payload. You can use this to return any raw file to the caller based on the request path. This lets you return html, css, or js type files from inside your application.  We will use this feature to load an HTML file that includes embedded MEL into the payload.  The sample program sets the Location property of Parse Template to point at the index.html file in docroot in the ${app.home} directory.
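In Mule 3 XML configuration this looks roughly like the sketch below; the exact location value depends on how your application resolves ${app.home}, so treat the path as an assumption.

```xml
<!-- Load docroot/index.html (with embedded MEL) into the message payload. -->
<parse-template location="${app.home}/docroot/index.html" doc:name="Parse Template"/>
```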

Processing MEL embedded in a payload using Groovy

Mule uses the current Expression Manager to process MEL expressions used in normal components.  We can use the same Expression Manager to parse and process our parameterized payload. I use a Groovy script component for this because it is simple and because the component lets us set the outbound MIME type without an additional component.  The magic Groovy script that takes the payload, applies the expression processor, and puts the result into the outbound message is:
return muleContext.getExpressionManager().parse(payload,message)
 Here is how this looks in Anypoint Studio.

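Wired into the flow as a Mule 3 scripting component, that one-liner looks roughly like this sketch (it assumes the standard Mule scripting namespace is declared in the config):

```xml
<!-- Run the payload through the MEL expression manager. -->
<scripting:component doc:name="Groovy">
    <scripting:script engine="Groovy"><![CDATA[
return muleContext.getExpressionManager().parse(payload, message)
    ]]></scripting:script>
</scripting:component>
```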

Sample Markup

This HTML fragment displays various http properties in an HTML list.

<ul>
    <li>http.context.path: #[message.inboundProperties['http.context.path']]</li>
    <li>http.context.uri: #[message.inboundProperties['http.context.uri']]</li>
    <li>http.relative.path: #[message.inboundProperties['http.relative.path']]</li>
    <li>http.request.path: #[message.inboundProperties['http.request.path']]</li>
    <li>http.query.params: #[message.inboundProperties['http.query.params']]</li>
</ul>

Last Edited 2015 Jan 04