Thursday, January 31, 2013

Run a Microsoft virtual lab on a Mac with Fusion and Hyper-V

An integrated Microsoft environment provides a lot of standard infrastructure for your project.  It also requires a certain amount of standard infrastructure irrespective of the size of the project; Active Directory is the most obvious example of this.  Developers who want to test out Microsoft tools or subsystems almost always need a multi-system environment because most Microsoft products work best when bound to Active Directory, a tool that needs to run on its own system.  Virtualization can help here, making it easy to build virtual labs or "personal clouds" on your own hardware, or to use something like Azure.

I've been working on a Microsoft project where all of our virtualization is done using Microsoft's Hyper-V.  This pushes me towards Hyper-V as my virtualization platform when sharing work with others on the team.  My most powerful laptop is my 16GB quad-core MacBook Pro.  Hyper-V doesn't run on OS X, but we can borrow a technique used by a lot of VMware users: run a virtualized environment inside another virtualized environment.  You can snapshot, or back up and restore, your entire environment by manipulating a single virtual drive.  All we need to do is figure out how to run Hyper-V on a Mac.

Virtualized Lab Network Topology

You have two main networking choices when it comes to a virtual lab on your desktop or laptop: bridged or NAT networking.  Bridging connects the virtual machines to your network, putting them on the same backbone as your primary machine.  NAT networking isolates the virtual machines on their own network (VLAN) that is reachable from the host.

Bridged networking uses address space on the main network and makes the machines routable from other machines on the network.  Active Directory requires a fixed IP address, so this approach doesn't work well when the host machine moves from network to network or when multiple developers need to run their own virtual labs on the same network/project.


NAT networking creates a private VLAN that cannot be seen from any machine on the main network.  Machines on the VLAN do have outbound access to the main network, including DNS.  Services on the VLAN virtual machines can be made visible through port forwarding.  Port forwarding can be set up on the host machine to forward requests from the main network to the VLAN, the same way reverse proxy servers or firewalls forward to protected networks for "real" systems.
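As a sketch of how that forwarding might look: Fusion's NAT device keeps its settings in a nat.conf file, and TCP forwards go in its [incomingtcp] section.  The path and the guest address below are examples and may differ across Fusion versions.

```
# /Library/Preferences/VMware Fusion/vmnet8/nat.conf  (path may vary by version)
[incomingtcp]
# Forward host port 8080 to port 80 on a VLAN guest (example address)
8080 = 172.16.144.101:80
```

Fusion's networking has to be restarted before the forward takes effect.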

The isolated, non-routable nature of the network means that the fixed IP addresses can be used no matter what external network addresses are used.  It also means that each developer can have an identical private network with the same addresses without any conflicts.

I'm going to put the lab on a private network.  VMware Fusion installs and uses the network device vmnet8 to support NAT connections.  You can see it on your Mac by running the command ifconfig in a terminal window.
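On my machine that looks something like the following.  The 172.16.144.x subnet is an example; Fusion picks a private range at install time, and the host itself normally takes the .1 address.

```
$ ifconfig vmnet8
vmnet8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	inet 172.16.144.1 netmask 0xffffff00 broadcast 172.16.144.255
```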

DNS

VMware Fusion's default DNS behavior is to make the DNS servers used by the (Mac) host available to the virtual machines.  This means the virtual machines can see the same hostnames that the Mac itself can.  It doesn't mean that the VMs can find each other or that anyone else can find them.

Registering Virtual Machines in DNS 

Virtual machines can be added to a domain's DNS when running in bridged network mode because they are routable from the outside.  They cannot be added to global DNS when running on a private network via NAT because they are not routable from the public network.

Virtual machines on the private LAN, in a NAT configuration, are visible to each other but are not registered by name even on the VLAN.  The usual way to handle this is to install a DNS server on the VLAN, assign fixed IPs on the VLAN, and register those VMs with that DNS server.  The private LAN VMs then all use that DNS server for the subdomain that is the VLAN.  Normally that DNS server also forwards or passes requests to the public server for machines not on the VLAN.

Active Directory and DNS

Microsoft servers are tied together through Active Directory, which acts as the controller for a standalone domain or subdomain.  The Domain Controller provides security authentication, common configuration, and a common management point, and acts as a DNS server for members of the domain.  We need an AD machine in our virtual lab, and we want all of the other machines to use that AD machine for both AD and DNS.  We normally install and configure the AD Domain Controller and then tell all other machines that the DC is their DNS server.  They will then be able to bind to the AD domain and have DNS access for all VMs that are part of the domain.  We can either manually configure each VM to point at this "non-default" DNS server, or configure the DNS address returned by the Fusion DHCP server to point at the AD/DNS server.  This makes it easy to configure the machines to bind to the domain.
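Manually pointing a Windows VM at the AD/DNS server can be done from an elevated command prompt.  The interface name and address here are examples from this lab, not fixed values.

```
REM Point this VM's NIC at the AD/DNS server on the VLAN (example address)
netsh interface ip set dns name="Local Area Connection" source=static addr=172.16.144.100
REM Clear any cached lookups
ipconfig /flushdns
```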

Non-domain machines on the same private network will want name-based access to the subdomain's DNS services.  They can either be manually configured to use the AD/DNS server, or you can configure the Mac's resolver to use the AD machine for that subdomain.  The non-domain machines will pick up DNS support for that AD domain when they make requests through the main Mac's DNS service, which will route those requests to the AD controller.
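On OS X the per-subdomain resolver trick is done with a file under /etc/resolver named after the subdomain.  The domain name and server address below are examples for this lab, not defaults.

```
$ sudo mkdir -p /etc/resolver
$ sudo sh -c 'echo "nameserver 172.16.144.100" > /etc/resolver/lab.local'
$ scutil --dns    # verify the new resolver entry shows up
```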

Virtual Machine Organization

There are a couple of different ways of organizing the virtual machines and virtual disks (VMDKs/VHDs) when building a lab environment.  Network topology does not affect this decision.  You can either run your virtual lab as a set of independent machines, each rooted on the host file system, or you can nest your virtual machines inside a hypervisor installed as a virtual machine.  The former is simpler, but the latter is a more powerful "virtual lab" or "private cloud" tool.

Independent VMs

This is the deployment diagram for a configuration where each guest runs as its own standalone VM.  Each VM is started and stopped manually, though this could probably be scripted.  Each VM has its own virtual drive.


Virtual in Virtual

The virtual machines can instead all run inside a nested hypervisor.  This lets us simulate an isolated/firewalled network.  People build whole sophisticated test networks using this technique.  Nested virtual machines can use virtual drives nested inside the hypervisor's drive, or they can use an external SAN.

VMware supports running ESXi inside of its other hypervisors.  This is how the VMware lab environments work.  ESXi acts like it does on physical hardware while really running on virtual hardware.  This diagram shows a deployed set of Microsoft services running on the ESXi hypervisor, all wrapped inside a Fusion virtual environment.  You can actually run multiple hypervisors on top of the Fusion hypervisor.



This is essentially the same story, with the Fusion VM simulating a physical box that we deploy Windows Server onto.  That Windows Server thinks it is running on physical hardware that can run Hyper-V.  We can then deploy our app and SQL Server inside the Hyper-V environment.  This is a great option for someone wanting to do multi-machine Microsoft development on a Mac.
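Getting Hyper-V to install inside a Fusion VM generally requires exposing hardware virtualization to the guest and hiding the outer hypervisor from it.  These are the .vmx settings commonly suggested for that; option names can vary by Fusion version, so treat this as a starting point rather than a recipe.

```
# Added to the Windows Server VM's .vmx file while the VM is powered off
vhv.enable = "TRUE"              # expose VT-x/EPT to the guest
hypervisor.cpuid.v0 = "FALSE"    # hide the outer hypervisor from the guest
```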




Making use of Fusion dhcpd.conf to Provide Static IP and DNS

AD requires a fixed IP address, as does the private network DNS server.  Application and database servers are easier to configure and scale up if they have fixed IP addresses and are configured in DNS.  We can manually configure all the machines with static IP and DNS addresses.  Or, we can modify the DHCP server's dhcpd.conf file to have it provide the same functionality.  We just need to know the MAC addresses of each of the virtual machines that will be on our network.  Here is a sample that shows how to configure a machine to have a fixed address and use a DNS server on the private network.


host AppServer1 {
    # Match the VM by the MAC address of its virtual NIC
    hardware ethernet 00:15:5D:90:84:00;
    # Always hand this VM the same private-network address
    fixed-address 172.16.144.101;
    # Point the VM at the AD/DNS server on the VLAN
    option domain-name-servers 172.16.144.100;
}
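On my setup the file to edit lives under the vmnet8 device directory, and Fusion's networking has to be bounced before new leases reflect the change.  The paths and the vmnet-cli tool location may differ between Fusion versions.

```
$ sudo vi "/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf"
$ sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --stop
$ sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --start
```

Custom host entries should go outside the auto-generated "DO NOT MODIFY" section of the file, which Fusion can rewrite.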


Sunday, January 13, 2013

cctray.xml Continuous Integration Build Monitor Update

I've updated the sample Java-based build-monitoring program to add support for some of the LED feedback lights I've built in the last year.  The program obtains build status from Hudson, Jenkins, CruiseControl, and others by parsing the very common cctray.xml format originally provided by CruiseControl.
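For reference, a cctray.xml feed is just a list of projects with status attributes, along these lines (the project name and URL are placeholders):

```xml
<Projects>
  <Project name="my-app"
           activity="Sleeping"
           lastBuildStatus="Success"
           lastBuildLabel="42"
           lastBuildTime="2013-01-13T10:15:00"
           webUrl="http://ci.example.com/job/my-app/" />
</Projects>
```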

Code is available on GitHub.  It was a messy combination of older hacks; now it's marginally improved.  Configuration happens via properties files instead of with bunches of command-line arguments.  Properties files provide the flexibility to increase the number of configuration parameters for various devices.  The best way to fix this is to use Dependency Injection like I did for the C#/Spring.Net TFS build monitor I built last year.  In the meantime you can configure the properties file in the tree or create an external one that you pass in on the command line.
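A properties file for a serial-port device might look roughly like this.  These keys are hypothetical stand-ins; the real ones are defined in the project's properties files on GitHub.

```
# Hypothetical keys for illustration only -- see the real files in the repo
cctray.url=http://ci.example.com/cc.xml
cctray.poll.seconds=30
device.type=ambientorb
device.port=/dev/tty.usbserial
```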

Device support includes:

  • Ambient Orb, controlled via serial port
  • Arduino Dual RGB, discussed in an older blog article, controlled via serial port
  • Arduino Ethernet 3-wire LED strip, controlled via HTTP POST
  • Arduino Quint RGB, an I2C-port-enhanced version of the Dual RGB, controlled via serial port
  • MSP430 low-cost single light, controlled via serial port
  • Seeed Studio 4x4x4 LED cube, controlled via serial port


Hopefully I'll get some time in the near future to really turn the crank on improving this code.