Tuesday, October 30, 2012

Demonstrating Gemfire Components Configured using spring-data

Gemfire, and the applications that use it, are made up of several components. There is the data caching tier itself, which can exist as one or more data nodes. The data tier can be extended to provide database write-behind, write-through and on-demand read-through. The data tier can also be replicated to remote data clusters through the WAN gateway; WAN replication is not currently part of this example. There is also a client tier that can be a pure client, a client with local caching, a notification client or a combination of the three. Client applications can consume data as they would a no-SQL store or database, or they can register for data change notifications with the appropriate callback handlers. Gemfire relies on and makes use of other components including the Gemfire locator and JMXAgent. Gemfire clusters will almost always be coordinated and linked through the use of Gemfire locator processes. JMXAgents add JMX bean access to a Gemfire cluster for management and monitoring through either GFMon or some other JMX monitoring tool.


Demonstration Program

You can access this Gemfire demonstration source on github: https://github.com/freemansoft/fire-samples. Gemfire is a set of Java programs or modules. Gemfire components can be run as standalone programs using the scripts included with the Gemfire product download, or they can be run as part of some other executable. This demonstration provides support for running all of the PoC/demonstration components as part of a custom Java main() executable or via a WAR file. The Java main() programs provide a simple testing framework for running inside Eclipse. The WAR files provide a method of deploying Gemfire components in environments where all Java programs must run inside a Java Servlet or J2EE container.



Gemfire Components and Functionality

This is a very basic demo of some of VMware's Gemfire product. It is demonstrable as both standalone components and as deployable war files. The following is an incomplete list of demonstrated components.
Locator: A locator process that shows how the ports are selected
Server Cache nodes: A standalone data server or clustered server that includes listeners and continuous queries
Cache Client: A standalone client program that has a local cache
Continuous Query Clients: A standalone client program that receives callbacks when data changes in certain regions
WAN Gateway: A cluster node that could be used to replicate data to another site. It currently logs batch data inserts and updates as they become available from the cluster nodes.
Cache Listener: A component that runs in the server and logs listener events
Cache Loader: Two examples, one that just logs when there is a cache miss and another that is configured to load cache misses from the demo H2 database.
Cache Writer (write-through): Two examples, one that just logs when data changes in the cache and could be written to some backend system. The other is configured to write to a DB when data is saved.
Replicated regions: Gemfire regions where the data is replicated across all nodes
JmxAgent: A JMXAgent wrapper that lets it be run in the same way as the other components.
Disk write-through persistence(?): One of the regions is configured for disk-based persistence
Disk write-behind hooks: Demonstrated via the WAN gateway and a WAN gateway listener
The @Cacheable annotation: One of the commands and regions is dedicated to the @Cacheable annotation. See the "power" command.
Spring Batch: Used to load data files and to auto-map flat entities to the backing H2 database.

Some things that are not yet, or only partially, in the demonstration
  • WAN Gateway: Basic Spring-Data-Gemfire configuration for a WAN gateway that is used, in this case, to log activity.
  • Database Write-behind: This is built on top of the WAN gateway. The demonstration includes logging, mostly because I couldn't use Spring Batch to do no-code writes the way I could do reads.
  • Database Write-through: The demonstration includes logging write-through. It will eventually have fully functional write-through.
  • Partitioned regions: This requires that users run at least two nodes to be useful.
  • Co-located data in queries: These make use of data whose node location is picked based on its relationship to other data. All regions are currently fully replicated without any partitioning.
  • Security hooks: TBA
  • Data morphing based on permissions: This is dependent on identity management.
  • Index based queries: Queries on indices with logging to show that Gemfire can search without deserializing the data on the server nodes.
  • Properties/meta-data region: An example of how Gemfire can use itself to store cluster behavior information for Java procedures or listeners.

Demonstration Components


Shell

The demo has a lightweight menu system available on stdin/stdout/log4j when running as standalone applications. The commands are available in both client and server processes. This provides a simple way to experiment with Gemfire features without building your own application. See the menu in the individual programs for the current list of available commands. The command line supports the following functions:

Save a note to the notes region. Saves a command line entered note to the NOTES-REGION-WITH-GATEWAY region which is hooked to an asynchronous listener via the WAN gateway. Usage: <cmd> <double quoted message>
Generate 2^n as a demo of @Cacheable. Calls a method that calculates 2^n and caches or uses a previously calculated value. That method is marked with the @Cacheable annotation. Uses the cache version if you repeat the same number more than once. Cache entries expire 15 seconds after creation. Usage: <cmd> <n>
Return cache sizes for all regions. Clients with local caches return the local cache size. Usage: <cmd>
Create random data and stuff it into the Gemfire cluster. This creates 400,000 rows in all tables. Usage: <cmd>
Load a region from a CSV file (absolute or relative to project root). <cmd> <region_name> <csv_file_name>
Load a region from the DB using pre-defined automapping (assumes a 1-1 mapping). Uses the Spring Batch auto-binding layer. <cmd> <region_name>
Load all possible regions from the DB using pre-defined automapping (assumes a 1-1 mapping). Runs one thread per region. Usage: <cmd>
Modify a string attribute. Retrieves the object from the cache and changes its value. <cmd> <region_name> <pk> <property_name> <attribute_name>
Retrieve entire cacheName and show the first entries (requires template). Calls toString() <cmd> <cacheName> <showRowCount>
Retrieve all data in all Regions. Does not display. May run parallel fetch, one per region (uses templates) <cmd>
Retrieve from cacheName the object specified by key. <cmd> <cache_name> <key>
Rebalance cache. Only works on data/server node <cmd>
Set a property in the property region. Ex 'cacheWriterEnable|cacheLoaderEnable' 'true|false'. Useful for testing properties. <cmd> <property-name> <new-value>
Purge all data from cache: WARNING! <cmd>
Exit this program without hesitation. Terminates command line programs <cmd>

Executables

The demonstration is made up of several executables that exist as main() programs that can be run from inside Eclipse with RunAs-->JavaApplication. Some of these can also be run as standalone Java programs with their own bootstrap. They were wrapped in the demonstration to provide a common run interface. Java application class files containing the main() entry point live in demo.vmware.app to make them easier to explore. These programs are all in the src/main/java/demo/vmware/app folder; a sketch of how one of these wrappers bootstraps its Spring context follows the list.
DB.java: This is a demonstration database that can be used to test bulk loading and cache-loader code when there is a cache miss. Run this first if you have any of the loaders and listeners enabled.
Locator.java: Run this to start the Gemfire locator that is required before any client or server can start up. This code supports an arbitrary number of locators per machine but is currently configured for 2 through spring wiring.
Server.java: Run this to start a Gemfire server process. You can run multiple copies of this on the same machine. The caches are configured through spring so you can convert the regions from simple replicated to partitioned through those files. The demo command menu is available in this process type.
JmxAgent.java: GFMon, the Gemfire monitoring tool, communicates with a cluster via JMX. This program starts the Gemfire/JMX bridge. Run this agent before running GFMon.
Client.java: Clients can be standalone or caching clients. This is a caching client that runs the same command menu as the server. Clients should be launched after all the server and services components have been started.
ClientCq.java: This is a standalone non-caching client that registers a continuous query for data changes of a certain type. Clients should be launched after all the server and services components have been started.
WanGatewayWritebehind.java: This runs a WAN gateway process that records all data that would either be written to a database as database write-behind or replicated across the WAN gateway to another data center. Only the NOTES-REGION-WITH-GATEWAY region is currently configured for this, so you must use the "add a note" command in a server or client to insert data that is sent to this process.
Server, Client, Locator and JmxAgent can also be run inside a container using the separate war files defined in src/main/resources. The WAR files are built as part of the standard maven build.
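Each of these wrappers is little more than a Spring context bootstrap. A minimal sketch of what one might look like follows; the class name and exact wiring are assumptions rather than a copy of the repository code:

    // Hypothetical wrapper in the spirit of Server.java; the real classes in
    // demo.vmware.app may differ.
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class ServerMain {
        public static void main(String[] args) {
            // Loading the spring-data-gemfire configuration starts the cache,
            // creates the regions and wires in the command processor.
            ClassPathXmlApplicationContext context =
                    new ClassPathXmlApplicationContext("spring-cache-server.xml");
            context.registerShutdownHook();
        }
    }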

Spring Data Gemfire

The entire demonstration is configured using spring-data-gemfire and spring configuration files. They are located in src/main/resources. No external or custom Gemfire XML configuration files are needed. The files are listed here; a sketch of a typical region definition follows the list.
spring-cache-client-core.xml: Configures the locator and pdx serialization for Gemfire clients. It is shared by both cq and region-only clients.
spring-cache-client-cq-only.xml: Configures the continuous query and callback for the Gemfire CQ client.
spring-cache-client-region-only.xml: Configures the client cache regions for the Gemfire client application. Adds support for multiple clients on the same machine.
spring-cache-gateway-writebehind.xml: Configures the WAN gateway for a single replicated region, in this case to simulate write-behind.
spring-cache-jmxagent.xml: Wires the JMX agent into the wrapper so we can run it as a Java application without using the Gemfire supplied scripts.
spring-cache-locator.xml: Configures a Gemfire locator service including support for running multiple locators on the same machine.
spring-cache-server.xml: Configures the Gemfire server including multiple nodes on the same machine. Defines all the server regions.
spring-cache-templates.xml: Defines Spring-data-gemfire cache templates. These act as cache (brand) agnostic proxies and support data insert and retrieval.
spring-cache-command-processor.xml: Configures the stdin/stdout command processor that is available in demonstration clients and servers.
spring-cache-datasync.xml: Creates the mappings for Gemfire region and database table synchronization. Used by the Spring Batch framework that the demonstration leverages for some I/O.
spring-cache-db.xml: Configures the H2 database connection.
spring-cache-pdx-serializer: Enables pdx serialization as the object serialization model.
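As an illustration of what these files contain, a replicated region and its GemfireTemplate can be declared roughly as follows. This is a hedged sketch using the spring-data-gemfire namespace, not a copy of the repository files, and the bean names are assumptions:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:gfe="http://www.springframework.org/schema/gemfire"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
               http://www.springframework.org/schema/gemfire
               http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

        <!-- the server side cache -->
        <gfe:cache id="gemfireCache"/>

        <!-- a region replicated across all server nodes -->
        <gfe:replicated-region id="ATTRIBUTE" cache-ref="gemfireCache"/>

        <!-- a cache (brand) agnostic template used for insert and retrieval -->
        <bean id="attributeTemplate"
              class="org.springframework.data.gemfire.GemfireTemplate">
            <property name="region" ref="ATTRIBUTE"/>
        </bean>
    </beans>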

Quick-Start Build & Demo

Logging levels are set in log4j.properties. Logging level changes require a restart to be picked up. Note: The demonstration programs can be run from inside Eclipse. Server and Client executables have command interpreters embedded in them. You can view and enter commands for them in the Eclipse console panes.

Demonstrate basic cache server functionality

  1. Download the code from github https://github.com/freemansoft/fire-samples
  2. Run mvn clean install to pull down all the dependencies and to build the class files and war packages. You will not be deploying the war packages as part of this quick-start.
  3. Start the H2 database using DB.java. This acts as the read/write-through database for one of the Gemfire regions. The database is optional if you disable or remove the database connected cache loaders.
  4. Connect to H2 on http://localhost:8082 with your browser. Use the default H2 username (sa) and password (sa).
  5. Paste the contents of src/test/resources/sql/create-tables.sql into the H2 command window and execute it. This creates the schema and loads a couple of rows of test data. You could add hundreds or thousands of rows to the sql file to increase the DB size.
  6. Start a Gemfire locator instance using Locator.java. This can be done from inside Eclipse using runAs Java Application. You can run more than one Gemfire Locator on the same machine. The startup code will automatically bind to an unused port for each instance
  7. Start a Gemfire server using Server.java. This can be done from inside Eclipse with RunAs Java Application. Run Server.java with larger memory settings if you want to load the generated big data in one of the later steps. You can run more than one Gemfire server on the same machine. The startup code will automatically bind to an unused port for each Server instance. Load will be distributed across them via the locators.
  8. Verify the size of each cache using the command console of one of the Server instances. One of the commands prints the region sizes. They should all have zero elements in them because no cache operations have been processed yet.

Demonstrate database read/write-through

  1. Each table in the database backs a region in the cache cluster. Query the database for a key, then request the object with that key from one of the regions using a command in the server console. The cache loader will fetch the row from the database before returning it to you (a sketch of such a loader follows this list).
  2. Query the region sizes again to see that the region used in the previous operation now has an element in it.
  3. Copy the contents of all of the tables from the database into the cache using the server command console.
  4. Verify the size of each cache using the server command console. Several of the caches should now have more than one object in them.
  5. Look at the server logs to see what happened. The logs may be intermixed with your command output. If you have more than one server running then you may see the log output on the other server(s) also.
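The read-through behavior in these steps comes from a cache loader attached to the region. A minimal sketch of such a loader is shown below; the class name is hypothetical and the demo's real loaders, including their H2 lookup code, may differ:

    import com.gemstone.gemfire.cache.CacheLoader;
    import com.gemstone.gemfire.cache.CacheLoaderException;
    import com.gemstone.gemfire.cache.LoaderHelper;

    // Hypothetical logging/DB cache loader; called on every cache miss.
    public class LoggingDbCacheLoader implements CacheLoader<String, Object> {
        @Override
        public Object load(LoaderHelper<String, Object> helper) throws CacheLoaderException {
            // helper.getKey() is the key that missed; a real implementation
            // would look the matching row up in the H2 database here.
            System.out.println("Cache miss in region " + helper.getRegion().getName()
                    + " for key " + helper.getKey());
            return null; // null tells Gemfire the loader found nothing
        }

        @Override
        public void close() {
            // release any JDBC resources here
        }
    }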

Loading more Data

  1. Load some data from a csv file using the command console. The command asks you for the file name and the table to be loaded. The path is relative to the eclipse project root, so it would be src/test/resources/datafiles. The file names are not exactly the same as the region names, so pay attention when entering the command. The companies file loads the ATTRIBUTE table and the relationships file is for the RELATIONSHIP table.
  2. You can create a 2 million record cache through the auto-generate command in the command console. You will receive an "out of memory" error if you ran the servers with the default memory settings.
  3. Start a client by running Client.java using the Eclipse RunAs Java Application. You will have to use the larger heap settings described in Client.java if you want to pull down big data in the next step.
  4. You can pull the 2 million rows down to the client via the command menu. It runs one thread per region and takes about 13 seconds on a 2011 MacBook.
  5. Pull down the content of one of the regions using the command console and have it print a couple of objects using the command options.

Continuous Query

  1. Start a continuous query (cq) client, ClientCq.java, using the Eclipse RunAs Java Application.
  2. Modify one of the objects that is being monitored by the cq client; you should see the client log that it received a callback (a sketch of such a callback handler follows).
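The callback inside the CQ client is a CqListener registered with the continuous query. Assuming the demo simply logs the events, a listener looks roughly like this sketch; the class name is hypothetical:

    import com.gemstone.gemfire.cache.query.CqEvent;
    import com.gemstone.gemfire.cache.query.CqListener;

    // Hypothetical logging listener for a continuous query.
    public class LoggingCqListener implements CqListener {
        @Override
        public void onEvent(CqEvent event) {
            // Fired when an object matching the continuous query is created,
            // updated or destroyed on the servers.
            System.out.println("CQ event " + event.getQueryOperation()
                    + " key=" + event.getKey() + " value=" + event.getNewValue());
        }

        @Override
        public void onError(CqEvent event) {
            System.out.println("CQ error: " + event.getThrowable());
        }

        @Override
        public void close() {
        }
    }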

WAN Gateway

  1. Start the WAN gateway write-behind process WanGatewayWritebehind.java using the Eclipse RunAs Java Application. The WAN gateway is only bound to the "NOTES-REGION-WITH-GATEWAY" region. Operations on that region will be logged to the WAN gateway console (a sketch of a gateway listener follows this list).
  2. Create a note with the note command from one of the server (or client) command consoles. Double quote the note string to include spaces.
  3. Watch the logs on the WAN gateway to see the gateway pick up the created note. The WAN gateway is "write-behind" so it will receive the data after some delay.
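On the gateway side, queued batches are handed to a listener. Assuming the Gemfire 6.x GatewayEventListener interface, a logging listener in the spirit of the demo's write-behind hook looks roughly like this sketch; the class name is hypothetical:

    import java.util.List;

    import com.gemstone.gemfire.cache.util.GatewayEvent;
    import com.gemstone.gemfire.cache.util.GatewayEventListener;

    // Hypothetical batch logger standing in for real write-behind code.
    public class LoggingGatewayListener implements GatewayEventListener {
        @Override
        public boolean processEvents(List<GatewayEvent> events) {
            // Called with a batch of region operations queued by the gateway.
            // A database write-behind implementation would persist them here.
            for (GatewayEvent event : events) {
                System.out.println("gateway event: " + event);
            }
            return true; // true tells the gateway the batch was handled
        }

        @Override
        public void close() {
        }
    }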

Using the Spring @Cacheable

  1. Find the power 2^n command in the command menu. The command calculates powers of 2 and stores them in the cache, where they are purged after 15 seconds (a sketch of the annotated method follows this list).
  2. Run the power of two command with some number, say 3, on one of the servers. Note that the output says "calculating".
  3. Run the same command within 15 seconds. Note that the same results are given without the "calculating" message.
  4. Run the same command after 15 seconds. Note that the same results are given and the output says "calculating" again.
  5. Try the same with other numbers.
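The behavior in these steps comes from a single method marked with Spring's @Cacheable annotation, with the cache name mapped to the POWERTABLE region. A minimal sketch, with hypothetical class and method names, looks like this:

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Component;

    @Component
    public class PowerOfTwoService {
        // Results land in the POWERTABLE region keyed by the exponent; repeat
        // calls inside the 15 second TTL return the cached value and skip the
        // "calculating" message.
        @Cacheable("POWERTABLE")
        public long powerOfTwo(int n) {
            System.out.println("calculating 2^" + n);
            return 1L << n;
        }
    }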
The cache listener and partitioning behavior can be changed via the spring config files in the src/main/resources directory. You can leave the locator and database running across Server and Client restarts.

Data Organization

Gemfire Regions

The Gemfire regions exist to demonstrate a couple of different ideas and capabilities.
Data Model Regions
These were created for a specific demonstration and have self and cross reference joins that make them less than ideal candidates for partitioning. Most data models have good breakpoints for partitioning. These regions are replicated across all nodes. All of the data loading and creation happens in these regions. These are the regions we created GemfireTemplates for.
  • ATTRIBUTE
  • RELATIONSHIP
  • ACTIVITYLEGAL
  • CONTACT
  • TRANSFORM
Properties regions.
These regions act as meta-data for some of the cluster-side custom code. This lets us create state that can turn features in our custom code on and off. They can also be used for application wide properties that might otherwise be pushed to multiple file systems or a relational database.
  • PROPERTIES
"Cache-able" regions.
This codebase demonstrates the @Cacheable annotation and how it can be used with Gemfire to manage objects. This is a very easy way to push objects into and out of a cache without knowing the cache exists; all other uses of the cache in this codebase "know" about the cache. The example is very simple, using only a single region. Applications could share types of objects in a single region or they could have one region per object type for these caches. These tables have a 15 second TTL so entries expire 15 seconds after they are created.
  • POWERTABLE
Regions supporting write-behind activities
This region is hooked into the WAN gateway so that any notes saved to it are asynchronously logged. This shows where database persistence code would go if you wanted to do database write-behind. The timeouts and batch sizes for the WAN gateway are configured in its XML file.
  • NOTES-REGION-WITH-GATEWAY
Regions for future demonstrations.
Other regions exist in the config to show alternate cache configurations including disk-persistent or partitioned regions.
  • NOTES-ON-DISK

Data Model

Data model objects exist for the Data Model Regions. The demonstration data model is very simple, with one entity per cache element. It was done this way to simplify database write-through and read-through. Demonstration data entities all have single attribute keys. This isn't a requirement but simplifies the demonstration. Arbitrary, not necessarily uniform, or primitive data can be stored in the other regions. The properties region, for instance, could hold serialized objects or complex graphs. The Properties region only stores/retrieves strings in the demo but they could be arbitrary property model objects.
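A data model entity along these lines, with a single attribute acting as the region key, can be as simple as the following sketch; the field names are hypothetical and the real classes behind the ATTRIBUTE region may differ:

    import java.io.Serializable;

    // Hypothetical entity; one instance per cache element.
    public class Attribute implements Serializable {
        private String attributeId; // single-attribute key used as the region key
        private String name;
        private String value;

        public String getAttributeId() { return attributeId; }
        public void setAttributeId(String attributeId) { this.attributeId = attributeId; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
    }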

Demonstration data

The example contains three sets of demonstration data:
  1. SQL data that can be loaded into the H2 database. It is located in src/test/resources/sql.
  2. CSV formatted data that can be loaded into tables via the command console in the server or client. It is located in src/test/resources/datafiles. Data exists for the ATTRIBUTE and RELATIONSHIP regions
  3. Program generated data is available through the command console. This can generate millions of rows in some of the tables in the server. The server must be launched with larger than default memory settings if you intend to use the data generator command.

Auto-mapping data and file interaction

To be written

Sunday, October 28, 2012

Spring.net configuration in App.config

I recently (10/2012) converted my TFS build monitor program to be dependency injected using the Spring.Net container. I did this because I wanted a no-code way of setting certain parameters and of specifying a build device management class without having to convert a configuration parameter into a class name. Instead I inject the spring object with the same name as the configuration parameter, using Spring.Net's built-in facilities.

Spring.Net is moving towards more annotation-based configuration but the XML configuration is still widely used and powerful. You can easily configure Spring.Net in XML with markup inserted into App.config. The first thing to do is to tell the system that there are going to be some Spring.Net sections added to App.config. This defines two sections, context and objects. context is the section that describes where to find the other pieces. objects is the section that contains the actual object wiring. Additional sections might be present if you are using the AOP runtime injection facilities in Spring.Net.
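A minimal sketch of that declaration, using the standard Spring.Net section handler types (verify the details against the full file on github), looks like this:

    <configSections>
      <sectionGroup name="spring">
        <!-- where to find the object definitions -->
        <section name="context" type="Spring.Context.Support.ContextHandler, Spring.Core"/>
        <!-- the object definitions themselves -->
        <section name="objects" type="Spring.Context.Support.DefaultSectionHandler, Spring.Core"/>
      </sectionGroup>
    </configSections>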


The spring/context section describes the object configuration location, in this case the spring/objects section of the same config file (App.config). The first object defined in objects is the VariablePlaceholderConfigurer. This class implements variable replacement from properties for Spring.Net object parameters.
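A sketch of the two sections, assuming the placeholder values come from appSettings (the real file may pull from a different name-value section), looks roughly like this:

    <spring>
      <context>
        <!-- the object definitions live in the spring/objects section below -->
        <resource uri="config://spring/objects"/>
      </context>
      <objects xmlns="http://www.springframework.net">
        <!-- replaces ${...} placeholders in object definitions with values
             from the named configuration sections -->
        <object type="Spring.Objects.Factory.Config.VariablePlaceholderConfigurer, Spring.Core">
          <property name="VariableSources">
            <list>
              <object type="Spring.Objects.Factory.Config.ConfigSectionVariableSource, Spring.Core">
                <property name="SectionNames" value="appSettings"/>
              </object>
            </list>
          </property>
        </object>
        <!-- application objects such as myBuildServerConnection go here -->
      </objects>
    </spring>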

The following configuration creates a TfsBuildConnection instance under the name myBuildServerConnection. The "${Tfs.Url}" notation is the placeholder syntax; the VariablePlaceholderConfigurer replaces it with the value of the Tfs.Url application property.
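A sketch of that object definition follows; the assembly name and constructor argument are assumptions, so check the file on github for the real signature:

    <object id="myBuildServerConnection"
            type="BuildWatcher.TfsBuildConnection, BuildWatcher">
      <!-- ${Tfs.Url} is replaced with the Tfs.Url property when the context starts -->
      <constructor-arg name="url" value="${Tfs.Url}"/>
    </object>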


Consumers of a spring wired object can either have it directly injected through auto-wiring or they can retrieve the object from the spring context in a Service Locator pattern. The following code retrieves the build server connection created above. The IApplicationContext should only be created once and can support multiple hierarchical context files. This context creation syntax uses the spring configuration in App.config (or Web.config) to find the locations of the spring defined objects.
     IApplicationContext ctx = ContextRegistry.GetContext();
    TfsBuildConnection myBuildConnect =
        ctx.GetObject("myBuildServerConnection") as TfsBuildConnection;

More complete example

A more complete sample App.config is available on github: https://github.com/freemansoft/build-monitors/tree/master/build-lights-net. That file contains additional log4net and other configuration, including a lazy-init object described as "An example that demonstrates simple IoC features".

Thanks for reading...

Simple log4net configuration in C# applications

Log4net is the .Net cousin of the very popular Java-based log4j logging package. This simple-to-use package logs information to a variety of data sinks (appenders) with fine grain control over the amount of output generated by each logging source. log4net is available for download from the Apache Software Foundation and is part of the Apache Logging Project, which includes logging libraries for several languages.

Log messages are generated by calls through named instances of the ILog interface. Each class that wants to log creates an instance of the ILog interface based on the class's fully qualified class name, including namespace. This means each class has its own entry point into the logging system. In some sense each namespace has its own common entry point by virtue of the fact that the ILog instances for that namespace all share the same base name/category-name. Callers pass a logging level and message, along with an optional Exception object for certain logging levels. The system then filters the log messages, passing the unfiltered messages to the message sinks, called Appenders.


Logging Levels

log4net supports six logging levels. Teams create their own policies around when the different levels should be used but the general approach is:
Fatal: This is the highest severity level. Some type of system error or some type of non-recoverable user error. I've used this when a user is disconnected from the system because of a user error. These usually include an enclosed exception or stack trace.
Error: Some type of error that the system recognizes. The system usually can roll back or somehow salvage some of the work. This might also be used when a database lookup fails but there is some type of fallback action that can be taken. These usually include an enclosed stack trace or exception.
Warn: Something unexpected that the system handled but that someone should probably look at. This is often the lowest level retained for third party libraries in production configurations.
Info: This is the minimum level enabled in most systems. These are usually used for checkpoint messages or other information that should be retained in log files, event logs, or other external data stores.
Debug: Developer output that helps debug a system. This level is only enabled when troubleshooting. Messages at this level are often wrapped in a debug-enabled check to save the system from creating logging messages that will never appear in production.
Trace: Verbose debugging output. Probably a major performance hog. Messages at this level are also usually wrapped in an is-enabled check. I've used this in unit tests where we log at a trace level with a Trace level enabled appender. This lets us track tests through the code without having to modify the code for tests and production.

Filtering messages based on category and logging level

Filters set the pass-through level for logging names/categories based on full or partial category names. Classes set up their loggers with their class names using typeof. The following creates two loggers with the category names BuildWatcher.BuildWatchDriver and BuildWatcher.TfsBuildAdapter.


    // inside namespace BuildWatcher, one static logger per class
    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(typeof(BuildWatchDriver));   // category BuildWatcher.BuildWatchDriver
    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(typeof(TfsBuildAdapter));    // category BuildWatcher.TfsBuildAdapter
The following configuration sets the default logging level to DEBUG except for loggers in the BuildWatcher namespace, which log at WARN and above. Note that BuildWatcher WARN level messages are also routed to their own appender; log4net supports multiple appenders.
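A minimal sketch of that configuration, with placeholder appender names, looks roughly like this:

    <log4net>
      <root>
        <level value="DEBUG"/>
        <appender-ref ref="ConsoleAppender"/>
      </root>
      <!-- everything under the BuildWatcher namespace logs at WARN and above
           and is also routed to its own appender -->
      <logger name="BuildWatcher">
        <level value="WARN"/>
        <appender-ref ref="BuildWatcherAppender"/>
      </logger>
    </log4net>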


Routing messages to sinks using Appenders

Log4net output is routed, formatted and delivered through Appenders. Applications can use any number of Appenders to route information to different places. The two most common are ConsoleAppender and RollingFileAppender. Others exist for message buses, event logs and data stores. Here is a simple console appender configuration.
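A sketch of a ConsoleAppender with a typical pattern layout (the pattern itself is only an example):

    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
      </layout>
    </appender>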





Configuring log4net using App.config or Web.config

You can configure log4net using an external config file, through code using the API, or through your already existing config file. The following shows log4net configured as part of App.config. This sample comes from my TFS build monitoring program on github https://github.com/freemansoft/build-monitors.

This configuration supports logging of all messages to the console and to a file that is rolled over on regular intervals. The default logging level for this configuration is DEBUG which means DEBUG and above all appear in both logging locations.
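A condensed sketch of that App.config markup, with placeholder file names and patterns (see the github repository for the real file):

    <configSections>
      <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
    </configSections>

    <log4net>
      <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
        </layout>
      </appender>
      <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
        <file value="build-watcher.log"/>
        <appendToFile value="true"/>
        <rollingStyle value="Date"/>
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
        </layout>
      </appender>
      <root>
        <level value="DEBUG"/>
        <appender-ref ref="ConsoleAppender"/>
        <appender-ref ref="RollingFileAppender"/>
      </root>
    </log4net>

The application also has to tell log4net to read this section, typically with [assembly: log4net.Config.XmlConfigurator(Watch = true)] in AssemblyInfo.cs.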


Thanks for reading...

Tuesday, October 9, 2012

Bending NuGet to Your Will by Manipulating the AppData Cache

NuGet decides which versions of dependencies to pull in based on a combination of the way dependencies are specified in the dependent packages and the packages that are available on your feeds. NuGet drills down, pulling all of a component's direct dependencies and its indirect dependencies based on the nuspec specifications of each pulled-in component. This lets you pull in some OSS component and then its dependencies down to the bottom level, which might be something like a Microsoft component. NuGet compares the version specifications when resolving those to NuGet packages on a feed. Here are some example version specifications for the Microsoft MVC package (a nuspec fragment showing where these go follows the list):

  • "4.0.20505.0" MVC 4.0 RC or later
  • "[4.0.20505.0]" MVC 4.0 RC only
  • "[4.0.20505.0, 4.1]" MVC 4.0 RC through any version of 4.1
  • "[4.0.20505.0, 4.0.20710.0)" MVC 4.0 RC and all up to but not including MVC 4 RTM
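In a package's nuspec file these specifications appear on the dependency elements, roughly like the following sketch (the package id is an example):

    <dependencies>
      <!-- floating: 4.0.20505.0 or anything newer is acceptable -->
      <dependency id="Microsoft.AspNet.Mvc" version="4.0.20505.0" />
      <!-- pinned alternative: exactly the 4.0 RC build, nothing newer
      <dependency id="Microsoft.AspNet.Mvc" version="[4.0.20505.0]" />
      -->
    </dependencies>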
You can run into problems when new versions of dependencies appear on feeds and dependencies are incorrectly bounded.

Example Problem

Let's say you have a library based on MVC 4.0 RC, 4.0.20505.0, that isn't yet ready to update to MVC 4.0 RTM, 4.0.20710.0. Now Microsoft updates MVC on the official NuGet feed from 4.0.20505.0 to 4.0.20710.0, making it available to you. You then update a consuming application to pull in this version of your library.

If the library specified its NuGet dependency as "4.0.20505.0" then NuGet will automatically delete 4.0.20505.0 from your packages folder and add version 4.0.20710.0. You now have a component that was built against RC (4.0.20505) but which is sitting in an app with MVC RTM (4.0.20710). One option is to fix any compile errors resulting from the inclusion of 4.0.20710.0. In large projects that is complicated because you might need a multi-solution roll-up to get everything updated and built. "Web Component C" in the diagram would require 3 consuming updates to carry this 3rd party version upgrade to the top.



The right way to fix this problem is to change the library's description of its MVC dependency to "[4.0.20505.0]", which tells NuGet to ignore any other versions on the feed. This works well, but in a big project it may mean that you have to update dozens of nuspec definitions to include the square brackets. This can get complicated in a larger project. I saw one 50-solution project where it took almost a week to roll this type of upgrade from bottom to top.

Microsoft themselves have some trouble with this. The MVC/WebApi 4.0.20505.0 RC NuGet internal dependencies are specified as "4.0.20505" for things like Razor, etc. This means you can potentially end up in a mixed RC/RTM environment where the later dependencies were all pulled in as RTM even though the earlier dependencies were RC.

Hacking NuGet to "Use the Cache, Luke"

Another option is to hack NuGet to provide the versions you want.

NuGet caches all of its downloaded dependencies in %HOME%\AppData\NuGet\cache.  This includes any packages that have been updated by the NuGet Package Manager. We make use of this and the fact that this cache looks like a "local" feed.  
  1. Check the cache to see if your packages are already there. If not, create a temporary project and use the package manager to create dependencies that are automatically downloaded to your NuGet cache directory. Sometimes you need this seeding and sometimes you don't.
  2. Turn off your normal feeds, including NuGet.org and any local/hosted feeds.
  3. Create a new NuGet feed pointing at the AppData\NuGet\cache directory.
  4. Purge any conflicting NuGet packages from the cache directory.
  5. Run the NuGet package manager. It will assume that your cache contains the correct versions and it will install from the cache instead of from the actual feeds.

NuGet Package Restore

I'm not sure how this works with Package Restore. You may be able to use a similar technique.

Sunday, October 7, 2012

Why the Installer is My Least Favorite Piece of the Microsoft Platform

I worked on a project where we were contractually restricted to Microsoft products after years of using Java and Open Source tools. Microsoft tooling is kind of interesting because you get an integrated environment where the MS teams have taken on the burden of implementation. It's not a confused jungle of choices where some big pieces of integration are left to you. The MS world moves at a pace driven by Microsoft itself, often faster than JCP driven Java. OSS and Java either barely move or go really fast but in a whole bunch of different directions all at once. Which is better is pretty much a matter of personal taste.

Microsoft and its tooling are installed and managed through GUI installers that bind the installed software/tool to the target machine. It's not as if a directory is created and some stuff is registered: files are copied to different places and system libraries can be updated or overwritten. This installer and its implications have turned out to be the most annoying thing about the Microsoft environment. It makes it hard to create and "make available" customized tools across a team. In the Microsoft world this almost always involves software distribution through a centralized team or the work of creating a custom installer to do what you want.

Java requires that you install the JRE/JDK, but every Java tool I've used is installed just by unpacking a jar file or by installation as a plugin to an IDE. Tools can be distributed via any fileshare, for instance checked into a Version Control System (VCS) or dropped onto a shared drive in their runnable format. Developers or users then update their VCS workspace or point at the shared drive and run the updated tools, often without any configuration work. Force-updating a new version of Eclipse to 100 users can be done by the dev team without "operations" support. The tool owner can check the update into VCS and then tell everyone to "get latest". In the Microsoft world, this requires an installer, a rollout plan and help from the software distribution team.

A side benefit of the Java/Linux/OSS route is that you can run multiple versions of the same tool on the same machine without worrying that it conflicts with a previous version or that a later installation will overwrite something. You can usually install different versions out of sequence to their original release dates. Microsoft Windows products often require that software products be installed in the order of their release dates because they sometimes overwrite or update system components.

I worked on a Microsoft project where everyone had to have AppFabric, SQL Server and Visual Studio installed.  All of these required an installer to get the software onto the developer machines.  They also required custom configuration to get them to work together.  I worked on a Java project with Coherence, H2 and Eclipse.  All components were distributed, pre-configured, through VCS updates.

The de-installation process in the Microsoft world is even worse. You often find out that you have to uninstall several, or many, packages to remove something. This gets very complex by the time you layer on the OS and feature patches. Abandoning a development VM followed by a reinstallation is sometimes the simpler way to upgrade a product. Recently I had 3 virtual and physical machines with two versions of Visual Studio and/or two versions of SQL Server. The uninstall process for the earlier version of SQL Server ended up being different on each machine. I tried to uninstall the older versions of SQL Server with up to 10 different un-installer packages. This turned out to be a confusing and error prone process. In the Mac world you just delete the .app directory, and in the Java world you delete the tools from the file system.

Maybe it all goes back to the desktop productivity roots where everything is optimized for space and a user takes all actions through a GUI. The other tools and platforms seem to take a less intrusive approach that simplifies unattended distribution. I currently find the Microsoft installer, its implications and its side effects the most annoying thing about this environment.