Wednesday, November 29, 2017

Browsers implement CORS to protect users, not servers

CORS is designed to protect users from cross-site attacks where one site has the browser execute code that connects to another site without the user recognizing that it happened. CORS relies on the web browser to recognize and block disallowed cross-origin requests. Server-side CORS headers do not block service requests or protect a web site from direct interrogation and/or programmatic attack.

Applications and Internet-facing services cannot rely on CORS for general site protection. They can rely on CORS to protect a site only when the site is accessed through a web browser. Companies should not consider CORS any kind of secure authorization model. They should implement CORS policies that provide least privilege where possible.

Specified by Server, Implemented by Browser

Application servers can say that they allow connections from any browser no matter where that original browser session / page originally resided. This makes it simple for any page to aggregate information from multiple back-end services on multiple sites.

Application servers can say that they allow connections from a browser only when the original browser session / page was started on an approved web site. The basic flow looks like this:

  1. A user connects to a web site.
  2. The browser downloads HTML and Javascript.
  3. The browser makes a call for data from the original domain. The request is allowed because the page and data are from the same origin. 
  4. The browser requests data from a 2nd site.
  5. The browser sends a CORS preflight request to the 2nd site asking which origins are allowed to call the services on that site.
  6. The 2nd site's CORS response describes which origins are allowed to connect for this cross-origin scenario.
  7. If the page's origin is not allowed, the browser aborts the call to services outside the page's origin domain and surfaces a CORS error; the page never sees the response.
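For requests that trigger a preflight, the exchange between the browser and the 2nd site looks roughly like this (host names and paths are hypothetical):

```http
OPTIONS /api/data HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: GET

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST
```

The browser compares the page's origin against the `Access-Control-Allow-Origin` value before letting the real request proceed.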

NOTE: The browser implements cross-site authorization protection. The server specifies the desired protection level and the browser tries to implement it.

Specified by Server, Ignored by Applications

Remember that CORS is implemented in the browser to protect the user. CORS restrictions are not honored by non-browser clients such as curl, scripts, or the HTTP client libraries that applications sit on.

CORS Specification: Allowing Cross Origin Requests

There are really only three levels of cross-origin access that can be allowed. They are specific and different: 

  • All: A server can say that it allows requests from pages and applications that originate in all other domains, by returning "Access-Control-Allow-Origin: *". Example uses include shared services, advertising services and other public endpoints.
  • None: A server can say that no cross-origin requests are allowed. This tells the browser that the service is only available to content/pages that originate on the same web server (FQDN).
  • Specified Exactly: A server can enable cross-site requests from specific hosts. Companies may use this setting to white-list their own cooperative applications while disabling requests from outside organizations. Note that each host has to be listed here. Partial wild cards are not supported; "*.example.com" does not work.
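The three levels boil down to a small server-side decision about what to put in the `Access-Control-Allow-Origin` response header. This is a minimal sketch (the policy names and allow-list are hypothetical; real frameworks implement this for you):

```python
def cors_allow_origin(request_origin, policy, allowed_origins=()):
    """Return the Access-Control-Allow-Origin header value, or None to send no header."""
    if policy == "all":
        return "*"                 # public endpoint: any origin may connect
    if policy == "none":
        return None                # same-origin only: no CORS header at all
    if policy == "exact":
        # Each origin must be listed exactly; the matching origin is echoed back.
        return request_origin if request_origin in allowed_origins else None
    raise ValueError("unknown policy: " + policy)
```

A browser that sees `None` (no header) for a cross-origin page blocks the response; a non-browser client ignores the header entirely.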

Scenario: Multiple Enterprise Applications

A company could have a set of applications that exist as static content on web servers. The same company could put out a set of services to be shared across the organization.

Every web service host would list all internal Static Content host names in their CORS definitions.

This would let all applications share services while blocking consumption by non-firm web applications.

Working around CORS

CORS protection is all about "cross origin". It has no impact if everything looks like it comes from the same origin. An alternative to explicitly listing hosts when building cooperative applications is to proxy all static content and HTTP/HTTPS web services through a single proxy. This makes all requests look like they came from the same domain.
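A single-origin proxy can be sketched with a reverse proxy such as nginx (all host and upstream names here are hypothetical):

```nginx
# Static content and back-end services share one public domain,
# so the browser never issues a cross-origin request.
server {
    listen 443 ssl;
    server_name app.example.com;                 # the single public origin

    location / {
        proxy_pass http://static-content-host;   # static HTML/JS
    }
    location /api/ {
        proxy_pass http://service-host;          # HTTP/HTTPS web services
    }
}
```

The tradeoff is that all traffic now funnels through the proxy tier, which becomes a scaling and availability concern.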

    This post on YouTube


    Created 2017/11

    Tuesday, November 21, 2017

    Another computer is using the printer MG5500 wireless

    Printing over my wireless connection from my Windows 10 machine often results in the confusing error:
    Another computer is using the printer
     This is weird because no other computer is using the printer at that time :-(  I found the fix in the Canon forums: restart the Windows print spooler.

    1. Search for "Services" on the Windows 10 Start Menu
    2. Scroll down to "Print Spooler"
    3. Click on restart

    Tuesday, October 3, 2017

    Validate your Spring yml properties files with a unit test in your CI build

    Protect yourself!

    Validate YAML configuration files for syntax errors before deploying your application. Don't wait until you fail a deployment to recognize simple copy/paste errors and typos.

    Unit Test Code

    Create JUnit tests that run as part of every build.


    Find the source code on GitHub in freemanSoft/ValidateSpringYml

    Source Code

    The following code validates application.yml. You can pass in any file name or the wildcard "*".

    The unit test exercises a utility method that validates every file matching the passed-in pattern, where "*" means all yml files.


    Original Post 2017 Oct 3

    Saturday, September 30, 2017

    The value curve for new hires in skills positions

    What is the relative value of a new hire in their first year?  The value is impacted by many factors: the hire's motivation, their prior experience, the hiring company's onboarding process, culture fit and others.  Good processes can dramatically improve the contributions made by new hires.  

    This posting is really not about the interview and hiring process's impact on the quality and eventual capabilities of new hires.  This posting is about the general rate at which new team members contribute, measured against their eventual capabilities.  My main area of experience with this is with technical teams: software developers, testers and technical analysts.  I suspect it is also true for other skill positions and integrated teams.

    Understanding the Learning and Networking Curve

    The graph shows my gut feel for the rate at which team members contribute within their first year relative to their capabilities. It doesn't rate the new team member against others.

    I like to use this picture when I talk with a motivated new hire who is a couple months onto our new team.

    The graph moves up and down based on motivation, and company processes. Flat spots represent perceived stalling in improvement even though improvement is continual.

    1. New team members are always a drag on the team for at least several months.  They don't know anything about the company, culture, business problem, technology or processes.  They have no cross-team support network. 
    2. The 2nd quarter is where a new team member starts to contribute at some level.  People can start leveraging previous experience at this point.  They may make suggestions that don't make sense because they don't know the global context of the problem they are discussing. I call this the "copy/paste" phase for software developers.   This is often the point at which the new hire is nervous about their position or unhappy with the way they are contributing. 
    3. The 3rd quarter is the first time new team members feel like they are making a real contribution. This period can also be frustrating because it often doesn't feel like they are ramping fast enough.  Management may expect 100% capabilities at this point.  Newer teammates often ask questions of the 3rd quarter veteran because of their relative experience.
    4. The 4th quarter is the point where other team members start to rely on a person.  They have enough experience here to run their own efforts in conjunction with others.  I don't feel a person is fully enabled until the end of the first year.

    Continually Communicate with New Team Members

    Don't be that team that assumes hiring extra people converts into immediate productivity.  Plot their personal graph in your head.  Set expectations at regular intervals and review with people to make sure you and they are on the right track.  Make onboarding last beyond the first months.

    Final Thoughts

    I had an advisor in graduate school.  I talked to him immediately after getting several job offers.  He looked at me and said something like the following. My pride said he was wrong but experience says that he was pretty accurate.
    "You think that diploma means you know something.  That diploma means you have the ability to know something in the future.  Those companies are offering you this good money based on the future contribution that the diploma implies you are capable of."

    Posted 2017 09

    Sunday, September 24, 2017

    Black Swan IT Projects: The Loan Servicing mainframe replacement

    This post discusses "the Mainframe Servicing System Migration", a project that should be considered a Black Swan.

    A Black Swan Event  
    The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying which presumed black swans did not exist, but the saying was rewritten after black swans were discovered in the wild. 
    The Fannie Mae loan processing servicing system replacement was
    1. Initially budgeted for 18 months and $75M. 
    2. Eventually cost about 72 months and > $800M.
    The project turned out to be a black swan that could have bankrupted other less stable companies.

    In the Mid 2000s

    Fannie Mae closed out either Q3 or Q4 in that year with a recorded profit of $1B. This was a "peak profit" period that stood out. They decided to push some of this excess cash into IT modernization targeting the Mainframe Mortgage Servicing system, which handled all inbound Fannie Mae mortgage payments from servicers and outbound servicing payments to servicers and Collateralized Mortgage Securities.

    Fannie Mae announced that they were going to rewrite all mainframe applications and eliminate all Mainframe positions in 18 months by a total system replacement of all mainframe systems. 1/2 the staff were told they would be allowed to re-interview for new positions at the end of the 18 months. 1/2 of the staff were told they would be laid off at that time and receive termination packages. All of those positions and the systems they supported still existed 6 years later.

    Software Project Complexity

    There is a lot of data that shows the complexity and difficulty of estimation and construction in software projects.  Success rates drop dramatically with increases in size and scope until a point at which almost all projects of a certain size are considered failures when measured against cost, time and the needs of the business.  The most well known is probably the Standish Chaos Report.  You can find publicly accessible versions in various places.  Gartner has its own versions of these types of reports that back up the notion of high software project failure rates.  You can also find competing claims that there is no software crisis.  My personal experience tells me the opposite: the Standish Group may actually be optimists. 

    Mortgage Servicing System

    Fannie Mae and Freddie Mac owned approximately 40% of the US home mortgage market around this time. The Fannie Mae mainframe systems handled loan acquisition from the sales side, loan servicing on the payment side and other financial operational processes.  The Fannie Mae servicing system was essentially run/managed by a core team of about 6-10 people.  The two senior team members were near or past retirement age.  Fannie Mae was essentially living on borrowed time.    

    The Project Announcement

    Fannie Mae executives announced that they were going to replace the mainframe systems in 18 months at a cost of $75M, starting within a couple months.  They declared that they were moving folks from other projects and bidding out the core work to a major integrator.


    Project Initiation

    A large (3 letter) computer company/integrator won the project.  Their winning bid included a quick staff ramp-up targeting bringing 400 people "on site" to work on this project.  Note that this project was targeted at 18 months including the spin-up phase.

    SOA custom transactional application

    The first attempt was an individual transaction oriented system based on asynchronous Service Oriented Architecture.  It was a completely "modern" approach for that time period. 

    The initial system was started with partial requirements.  It had up to 400 new contractors, many of whom had no enterprise experience in the tool suite.  The team picked an architecture that no one on the team had ever built to that scale before.  A data center was populated based on the assumption that the system would go live within the project window. The system was complicated by the fact that it was replacing the heart of the entire servicing operation while not impacting current operations.  

    Phase 1 ran for 2+ years and was a complete failure when based on cost, features and meeting the business needs.  Many of the hardware leases expired before any production software was ever installed. The $100M+ spent was not a complete waste in that the time was used to shake out the requirements and get a better feel for the operational needs. I'd guess Fannie Mae had already spent over $200M at this point on a $75M project.


    ETL batch system

    The second attempt was basically an ETL-based system with large data import/export operations bookending batch-type processing.  Think of this as a slightly more mainframe-type approach. Staff was reorganized. Management shuffled.  New technology was used.  

    Phase 2 ran for 2+ years and was pretty much a failure based on the criteria of cost, schedule and features.  The project was > 100% over time and > 400% over budget at this point.

    Shrink the Project.  Shove it all in the Database

    The teams realized, by the third attempt, that moving massive amounts of data in and out of relational databases was too slow for their operations.  Mainframes were optimized for this type of work.  The project went with an approach where all the data stayed in the database.  They also reduced the scope of the project to certain transaction types,  reducing the amount of data to less than half of the previous system attempts.

    Phase 3 ran for several years and "went to production".   

    Final Analysis

    Is a project successful if it makes it to production? Let us standardize on a PMI definition of success:
    “Achieving project objectives within schedule and within budget, to satisfy the stakeholder and deliver business value"
    Some will adjust the definition to declare a project successful to be just the fact that it made it to production.  This is a false definition that makes it impossible to measure progress and compare projects.

    I would normally say that managing scope on a project to keep it under 18 months would be a way of increasing the odds of success (cost, time and business value).  This is only true if you adjust one of the axes, cost, time or features, to make the project fit.  Projects fail when they are too large or when they are force-fitted into political realities.  

    This particular project was like many others.  It would eventually "go live" making it a success at some level.  It was a failure in the sense that the entire team turned over several times, that it was over budget, over time and had a smaller feature set than envisioned.

    Disclaimer: This represents my personal observations and does not reflect the opinions of anyone working for Fannie Mae or any of its contracted employees or companies during that time.

    Saturday, September 23, 2017

    Sales Engineer Guide: Hunter or Farmer

    Enterprise-level sales representatives are a whole other breed of person from their pre-sales engineers. Enterprise sales representatives execute and help formulate corporate sales strategies and programs.  They must be extremely self-confident, sometimes carrying entire companies on their backs. Sales representatives' performance directly impacts the job stability of everyone else in the company. Pre-sales engineers do best when they understand the personalities and styles of their partner representatives.  Two major personality types are hunters and farmers. Most people are a mix of the two but some are hard-core hunter or farmer.

    A Note on the Danger of Stereotypes

    Hunters and Farmers are descriptive stereotypes.  You rarely run into someone who is completely anything.  Think of this as you would any other personality classifications. It is a useful way of reminding yourself that you may need different approaches with different people in the same jobs.


    Hunters

    These folks seek out, track down and kill whatever deal they can find.  They tend to be more aggressive in planning meetings and in sales calls. Hunters have a shorter, more aggressive attention span than farmers.  Their sales situations can be very fluid with more aggressive posturing and positioning. 

    Sales engineers may struggle when working with their first Hunter sales reps.  Sales engineers tend to be more cautious, wanting to provide right answers.  Hunters can truly partner with their sales engineers but often use them as accessories to back their stories.  

    Startup companies tend to use Hunters when spinning up.  They also tend to appear in tech companies looking for an IPO or acquisition.


    Farmers

    These folks tend to cultivate accounts, taking the longer view. They build broader relationships and take approaches that can really frustrate their Hunter brethren. Sometimes they end up with huge deals and sometimes they have crop failure and go hungry. I've seen farmers sell nothing for 9 months and still end up on stage at the sales meeting because they closed their long-term projects.

    Farmers tend to do better with customer companies that actually want a partner.  Their approach is generally more holistic.  Consultative farming can be wasted on prospects that treat all vendors as adversaries.

    Know Your Partner

    Sales representatives are the ones that are held accountable for performance.  They carry the quota, earn the bigger rewards and take the lead on customer interaction.  Sales engineers act as their adjunct. Both parties are smart and driven.  If you are lucky, this creates a dynamic relationship where the boundary between the two parties' core competencies shifts based on need.

    The best sales reps know how to leverage their team resources, including pre-sales engineers.  They know when and where to coach sales engineers in their operating style and the role they want the engineers to play.  The best reps know when and where to let their pre-sales resources operate independently to increase account coverage and visibility.  

    Pre-sales engineers are engineers at heart and tend to focus on hard skills like product knowledge. The best sales engineers study how their various representatives operate and adjust to their styles. They learn where their boundaries are and when to jump in or stay back. 

    Wrap Up

    Sales representatives own the deal and the pre-sales engineers own making sure the customer has no reason to say no.  There is variation in how this works because both the sales team and customers are people.  I've seen pre-sales engineers carry big deals across multiple account realignments. I've seen sales representatives do great jobs at answering detailed product discussions.  Sometimes those both work and sometimes they end in smoking craters.  Know your representatives, know your limits and continually improve. Working in a technical sales environment can sometimes be the best job in the world.

    Created 9/2017

    Sunday, September 17, 2017

    Playing with Web Apps in Azure? Create a Resource Group and App Service plan first.

    I dabbled in Windows web app Azure deployments for 3 or 4 years before I realized I needed to pay attention to the Resource Group and App Service Plans I was using.   This became especially expensive when deploying CI/CD pipelines while teaching classes or when doing random operations while trying to understand how stuff worked.  I partially blame the great Visual Studio integration / wizards for this.  They made it easy to "start clean" every time I created a new project.

    Resource Groups let you bundle all the components that make up an application or composite system.  See the Azure Resource Manager overview for more information.  

    App Service plans are specific to web and task-type deployments.  They describe the compute resources that will be used by one or more web application deployments. You can think of a plan as a PaaS or Docker-type container that is filled with deployments.  Multiple deployments can run in one plan.  A plan should generally not be shared across application stacks.

    Create a resource group for related work

    My personal policy is to create a single prototyping/demo Resource Group that is my ad-hoc default.  I tend to name them in a way that I can figure out my intention for them from the main Azure dashboard and include my subscription, general usage area and the targeted region.  

    This image shows the Resource Group creation screen.  Pay attention to the "Resource Group Location"
    This shows three resource groups in two different regions.  My MSDN free Visual Studio VSTS instance is in one resource group.  The other two resource groups are for application work.

    Demos and prototypes can share the same resource group. More serious work requires more serious thought. You will have to decide which system components should be in the same resource group. Think about the lifecycle and retention policies of each component of a system.  Long lived components like data stores may be managed by other teams or have other management policies.

    Create Application Service plan(s) into which you can deploy applications

    Azure billing for PaaS style web applications is based on the App Service plan cost.  You can save money, especially in a prototyping or demonstration environment, by deploying multiple web applications into the same Application Service plan.  This is a good way of stretching MSDN Azure credits or saving your company money.  Enterprises tend to create Service Plans along business unit or billing code boundaries.

    I keep a Free plan and a shared plan live for my projects. The free plan lets me do basic stuff without charge as long as SSL isn't required.  The shared plan runs $10/mo/app and adds SSL and 4X the CPU minutes. Free (F1) and Shared (D1) tiers have compute limits that may be good enough for demos and low volume sites.  

    Create one or two plans: one Free with 60 CPU minutes and one Shared with 240 CPU minutes per month. 
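Pre-staging the group and plans can be scripted with the Azure CLI instead of the portal. This is a sketch (not runnable without an authenticated `az` session; all resource names are hypothetical):

```shell
# One ad-hoc resource group for demo/prototype work, named by purpose and region
az group create --name demo-eastus-rg --location eastus

# One Free (F1) and one Shared (D1) App Service plan inside it
az appservice plan create --name demo-free-plan   --resource-group demo-eastus-rg --sku F1
az appservice plan create --name demo-shared-plan --resource-group demo-eastus-rg --sku D1

# Later deployments target an existing plan instead of defaulting to Standard
az webapp create --name my-demo-app --resource-group demo-eastus-rg --plan demo-free-plan
```

Creating the plans up front means the Visual Studio wizard can simply select an existing plan rather than creating a Standard one.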

    Azure always defaults to the "Standard" pricing tier when creating a new App Service plan.  It does this in the Azure portal AND in the Visual Studio wizard. The picture at right shows a new plan using "D1 Shared". You must change the pricing tier if you want Free or Shared.  This is another good reason to create your App Service plan prior to coding.

    You can fit one always-live Basic (B) or Standard (S) into an MSDN subscription if you want more performance or want to play with autoscale. 

    Enterprises probably have standard App Service plan sizes starting with Basic (B1/B2/...) in development environments and Standard (S1/S2/...) in semi-production environments.  

    Microsoft has great documentation on App Service plans that can get you started.

    Wrap Up

    Deploying Applications in Azure is easy.  Cost and resource management is also easy with just a little bit of pre-staging of Resource Groups and Application Service plans. 

    These suggestions may not apply in situations where automation completely tears down and builds environments or where they are in conflict with larger enterprise policies. 

    Sunday, August 13, 2017

    Setting Mac iTerm tab titles to the current directory

    It is easy to set iTerm tab titles to the final part of the current working directory and the iTerm window title to the full path of the current directory.

    Start a new terminal window or tab after making the following changes.  New tabs and iTerm windows create new login sessions that read these file contents.

    Modify ~/.bashrc

    1. Edit  ~/.bashrc.  Create ~/.bashrc if it doesn't exist.
    2. Add the following text to the file.  Note that this text has comments that document where I found this on the internet

    # Set iTerm2 tab titles to the last directory in PWD
    tabTitle() { echo -ne "\033]0;$*\007"; }
    # Set iTerm2 window titles to the full directory of PWD
    winTitle() { echo -ne "\033]2;$*\007"; }

    # Alias 'cd' to list the directory and set the titles
    cd() { builtin cd "$@"; ls -lFah; tabTitle "${PWD##*/}"; winTitle "${PWD/#$HOME/~}"; }

    Modify ~/.bash_profile

    Mac terminal emulators start a login session for every new tab.  This means we need to update ~/.bash_profile to invoke ~/.bashrc:
    1. Edit ~/.bash_profile.  Create ~/.bash_profile if it doesn't exist.
    2. Add the following text to the file.  Note that this text has comments that document where I found this info on the internet.
    if [ -f ~/.bashrc ]; then . ~/.bashrc; fi 

    Sunday, April 30, 2017

    Raspberry Pi, Z-Wave and Domoticz: Setup Part 2

    This article is about using Z-Wave with a Raspberry Pi.  Z-Wave and ZigBee are the two big wireless players in the home automation market.  A single Z-Wave wireless controller can communicate with a large number of devices.  These devices include outlet switches, power meters, alarm sensors, remote-controlled light bulbs and other accessories.

    The USB stick on the left is a Z-Wave Z-Stick S2 that acts as an interface between a computer and a network of wireless devices. It can be controlled via COTS software or open source libraries like openzwave.  The outlet on the right is a Z-Wave wireless-controlled outlet that reports back power consumption and state.

    I received this controller / switch pair at the Microsoft Build conference a couple years back.  They were one of the "prizes" you could buy when you earned conference credits for running through the labs.  I really had no idea what they were for a couple years until I took the time to do some research.

    The Raspberry Pi can be an ideal Home Automation host that lets people use web interfaces or mobile apps to control devices via the USB Z-Wave controller.  I recommend using the Raspberry Pi 3 as a control device because it has the most capacity headroom.  

    The picture at the right is a Raspberry Pi V1 Running Domoticz Home Automation software with a Z-Wave USB control dongle and a PiFace digital I/O daughter board.

    A Z-Wave stick can control dozens of wireless devices.  The PiFace Digital I/O daughter-board supports direct connection of up to 8 digital input and 8 digital output devices.

    About Domoticz

    Domoticz is an open source home automation dashboard that can be installed on a Raspberry Pi.  It is one of several automation platforms that run on the Raspberry Pi under Linux. Domoticz supports dozens of automation hardware controllers: Z-Wave, hardware boards, LAN-based controllers, etc. Each controller can manage one or multiple individual devices, depending on the controller type and the device.  Each device can have multiple switches, monitors, outputs or inputs.

    Domoticz provides a web interface that can configure new hardware (controllers), identify associated devices and their features.  Those mappings are then managed and controlled through a set of dashboards.

    Main Menu

    The Domoticz UI consists of three main parts:
    1. Setup: The main utility menu with some interesting sub menus.
      1. Hardware: Attached controllers like the ZWave USB stick or a PiFace.
      2. Devices: Controlled endpoints like switches and utility monitors.
    2. Configured Devices
      1. Switches: Exactly what it sounds like
      2. Temperature: Thermometers.
      3. Weather: Weather stations
      4. Utility: power and current monitoring
    3. Dashboard: Unified view across types

    Installation Preconditions

    1. You have a Raspberry Pi running the standard Raspberry Pi O/S.  I installed the full version mostly because I'm too lazy to put together the thinnest custom package needed.
    2. You have a Z-Wave controller stick or add-on board for the Raspberry Pi.
    3. Your Z-Wave controller is paired with one or more devices.  There are instructions online that show how to pair devices without using a computer. 
    4. Your Raspberry Pi has an internet connection that can be used to install and update software.
    5. Your Raspberry Pi has had all updates installed.
    6. Decide if you want the Raspberry Pi to use static or dynamic IP addresses.  The Domoticz wiki recommends static IPs to make the web UI easier to find.  The system will work with either.

    Installing Domoticz

    I tend to use an HDMI display for this but you could do everything through SSH or command line instead. You need to know the Raspberry Pi's IP address.  Either set a static IP, use some type of dynamic DNS or look up the address on the screen hooked to the Pi's HDMI port. 
    1. Plug the Z-Stick into a USB port on the Raspberry Pi.
      1. The "first" Z-Stick is accessible via device /dev/ttyUSB0 .
    2. Install Domoticz using the Domoticz project guide. Use "the easy way". The installer will do most of the heavy lifting. You don't have to manually compile anything.
      1. The recommended install command looks something like
        sudo curl -L | sudo bash
    3. Record the URL provided at the end of the installation process. That is the web address of the Domoticz Admin page.
    4. Connect to Domoticz using the address you got at the end of the installation.   My Raspberry Pi on a dynamic IP can currently be reached with the following 

    Configuring Z-Wave 

    Hardware Controllers

    Let's configure a Z-Wave stick hardware device in Domoticz and identify device nodes like switches and outlets.
    • Connect to Domoticz with your web browser using the URL found above.  Your screen should look like this:
    • Click on the "setup your Hardware" link.
      • Select "OpenZWave USB" for the "Type".
      • Select "/dev/ttyUSB0 for the Z-Wave control device
      • Enter an optional name
    • You should see the ZWave Devices controller on the next page.  
      • Click on the Setup button.

    • This will take you to the "Nodes" screen where you can see devices controlled by this Z-Wave stick. The picture below shows two digital switches and their controller.  These particular switch "nodes" are multi-function. We will see later that they can measure power usage in addition to acting as switches.

    Controlled Devices

    We next add the switches and other controlled nodes to their appropriate sections.
    • Select the "Setup" menu and then "Devices" (Setup-->Devices), or click on the Dashboard links.  Click on Devices if you see this screen.
    • The next picture shows two switches with Power Meter and Voltage monitoring capability.  Functionality is enabled and disabled by clicking on the lightbulb icons on the left hand side
    • Add the functions to the appropriate Domoticz panel (Switch, Temperature, Weather or Utility) by clicking on the green arrow on the right-hand side of that device's row.  Domoticz will automatically select the correct menu.  The green arrows turn blue for functions/features that have been enabled in the GUI.

    • Enter the name of the device in the panel that pops up after clicking on the green arrow. I took the default settings in this picture.

    • Click on the Switches main menu item. You should see the just-added switches in the panel.  Switches are operated by clicking on the light bulb icon.

    Accessing a Switch

    The previous sections configured a switch socket.  That device can be enabled and disabled in the GUI via

    1. The Setup-->Devices screen
    2. The Switches menu
    3. The Dashboard

    Useful links

    Created 2017 04 30

    Sunday, April 9, 2017

    Maven Lifecycle Phases - Fitting in Code Analysis and Other Tools

    The build management portion of Maven operates on a type of Template Pattern. Maven moves from lifecycle phase to lifecycle phase until there is a step failure or until all steps are complete. The following diagram lists the build lifecycle phases. The orange squares represent the main targets that people run. Every phase is executed, starting with Validate, until the requested end phase is reached.

    For example
    • "mvn validate" runs just the Validate phase.
    • "mvn compile" runs Validate, Initialize, Generate Sources, Process Sources, Generate Resources, Process Resources and Compile.
    Each Maven plugin executes within a phase. The Surefire unit test plugin, as an example, typically runs the tests in the Test phase.  This means that unit tests don't run if Validation, Compilation, class processing or any other preceding phase fails.
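    This cumulative phase behavior can be sketched in a few lines of Java. The phase names below are real default-lifecycle phases, but the list is abbreviated rather than complete:

```java
import java.util.List;

public class MavenPhases {
    // Abbreviated ordering of Maven's default build lifecycle phases
    static final List<String> PHASES = List.of(
            "validate", "initialize", "generate-sources", "process-sources",
            "generate-resources", "process-resources", "compile",
            "process-classes", "test-compile", "test",
            "package", "verify", "install", "deploy");

    // "mvn <phase>" runs every phase from validate up to and including the
    // requested one; a failure in any phase stops the run there
    static List<String> phasesRunFor(String requestedPhase) {
        return PHASES.subList(0, PHASES.indexOf(requestedPhase) + 1);
    }

    public static void main(String[] args) {
        System.out.println(phasesRunFor("compile"));
    }
}
```

    Running `phasesRunFor("compile")` returns the seven phases from validate through compile, which matches the "mvn compile" example above.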

    Maven plugins can execute in their default phase or in any phase of your choosing.  Lifecycle phase selection affects if and when the plugin runs.  You have to decide which mvn command you wish the plugin to run in and which types of earlier failures obviate the need for that plugin.

    Code Quality

    Teams can use Maven to generate code quality metrics with a variety of plugins. These plugins can be used to fail the build based on minimum quality values.  A failed metrics check aborts the build in whatever lifecycle phase it runs in.  Most code quality Maven plugins have default phases, but teams may wish to move them based on their priorities. Code quality tools can be run early, prior to completing major operations, or they can be run late so that some of the standard operations complete or fail prior to getting into code quality analysis.

    Static Analysis Phase Selection

    Static analysis tools include Checkstyle and PMD.  They can run prior to or after compilation. Some static analysis tools, like Findbugs, run against the compiled binaries. Those should be run after the compilation phase.

    When should source code static analysis be run? It is really up to the team based on the priority order for failing a build.
    • It can be run prior to compilation, possibly in the Validate phase.  This lets a build skip a long compilation cycle when the code doesn't meet a minimum bar.  
    • It can be run after compilation, possibly in the Process Classes phase. This lets the analyzer run only against code that is known to be syntactically correct because the code is known to compile.

    Code Coverage Phase Selection

    Code coverage tools work by instrumenting the source code or binaries prior to test execution. This means you will usually want to run the code coverage plugin last, after other quality plugins, especially if you have code generated by JAXB, GWT or some other tool.

    Selecting a code coverage maven phase can take some thought.
    • Some teams just bind it to the Test phase. This has the advantage of only running the tests once.  It has the disadvantage of subtly changing the code under tests as the code coverage instrumentation makes minor changes to the test binaries.
    • Some teams bind it to the Verify phase.  This has the advantage of failing the build on other checks prior to code coverage generation.  It also has the advantage of executing the unit tests once without any binary changes. Code coverage in the Verify phase has the downside of requiring the tests to be run twice, once un-instrumented in the Test phase and once instrumented after it.

    Generating Reports in the Site Targets

    Metrics are usually generated in the Build lifecycle.  They are usually aggregated into web page reports as part of the mvn site web site generation cycle.  Maven metrics reports almost always require two executions: metrics are generated in one pass and then formatted for viewing in another pass:
    • mvn clean install
    • mvn site  


    Maven profiles let you change the plugins executed based on a profile name.  You can use the default profile for normal developer builds and use some custom profile to execute different plugins.

    • Teams that execute quality metrics as part of all builds don't have a lot of work to do.
    • Teams that execute policy metrics in only some special builds may wish to create a custom CI profile that only executes as part of the CI build.

    Maven and Gradle 

    Gradle has a completely different approach. It eschews configuration-based build pipelines in favor of script-based ones.  You can find more information about the differences between Gradle and Maven online.

    Sunday, March 26, 2017

    Static Analysis from IDE to CI with IntelliJ

    Static program analysis is the analysis of computer software that is performed without actually executing programs (analysis performed on executing programs is known as dynamic analysis).[1] In most cases the analysis is performed on some version of the source code, and in the other cases, some form of the object code.[2] 

    Static analysis provides a low cost way of automating code review of certain types of source code errors and standards.  Static code analysis, automated tests and code coverage are staples of the Continuous Integration process replacing manual effort with automation.

    Full featured IDEs implement their own integrated static analysis and test measurement tools. IntelliJ comes with a comprehensive set of integrated static analysis tools and rules.  It can run the rules in an incremental fashion updating results as code is edited.  Rule violations are immediately reflected in the user interface.

    CI servers and IDEs each have their own system for running code analysis, tests and test analysis. There are sometimes differences in the results generated by IDEs and build servers.  This makes it hard for developers to reliably implement coding standards, pass static analysis and generate expected test results and coverage. IDE code quality extensions, or plugins, can run the CI tools inside the IDE so that developers can troubleshoot inconsistencies and predict results in the CI builds.

    The above diagram shows a possible workflow where developers start with the integrated IDE tools. They then use IDE extensions to run the batch/CI automated tools. Finally the project code is then analyzed again by the CI build tools.

    Example Static Analysis Plugins within IntelliJ

    The next three videos walk through the installation and usage of several 3rd party IntelliJ plugins that let developers run standard Java static analysis and code coverage tools.  We are really interested in
    1. Checkstyle:  A code format and style tool.
    2. PMD:  A static analysis tool
    3. Findbugs: A static analysis tool out of the University of Maryland.
    4. Clover: A test code coverage tool.
    5. Cobertura:  A test code coverage tool.
    6. Emma:  A test code coverage tool.
    The following IntelliJ demos highlight Checkstyle, Findbugs and PMD.


    QAPlug

    This single unified plugin supports running Checkstyle, Findbugs and PMD with custom configuration files.  This means teams can use the same custom rulesets in IntelliJ that they use in a Hudson, Bamboo, TFS or other CI build.

    QAPlug is run via the Analyze context menu available on a right mouse click.  It is configured in the IntelliJ preferences screen and can be connected to one or more IntelliJ profiles.

    The video on the right walks through installation and usage of the Checkstyle and PMD modules for this plugin.

    FindBugs - IDEA

    This single purpose plugin supports running Findbugs with custom rule files.  It can share the same rule files used in a CI build.  Findbugs-IDEA can be invoked via the "Findbugs" right mouse context menu. 

    The video at the right walks through installation and usage of the Findbugs plugin

    Checkstyle - IDEA

    This single purpose plugin supports running Checkstyle from the right mouse Analyze context menu.  It can share Checkstyles with those used on the CI server.

    The video at the right walks through installation and usage of the Checkstyle-IDEA IntelliJ extension. 


    1. Wichmann, B. A.; Canning, A. A.; Clutterbuck, D. L.; Winsbarrow, L. A.; Ward, N. J.; Marsh, D. W. R. (Mar 1995). "Industrial Perspective on Static Analysis" (PDF). Software Engineering Journal: 69–75. Archived from the original (PDF) on 2011-09-27.
    2. Wikipedia, "Static Program Analysis".

    Sunday, February 26, 2017

    Time Warp: Business Cycle Testing

    "Let's do the time warp again..."


    A video version of this blog

    Business Cycle with Time dependencies?

    What is a business cycle and why do I need to test it?  I'm really talking about any type of business process that has time based business rules.  The rules can be periodic, in that they fire on a regular basis, or they can be one-time, based on some time based criteria.  Most of the ones I've worked with are contract oriented or billing cycle oriented. Examples include telecom contracts, home mortgage servicing systems and term based insurance, to name just a few.  They usually have some time based sequence of operations, date based rules and may have some type of state machine.

    Testing is complicated by the fact that data may need to be of a certain age before processing begins.  Loan payments may need to be delinquent.  An insurance policy may start the renewal process some time before expiration.  Collateralized debt may have payment, reimbursement and equity distribution segments based on the time of the month.

    Compressing the clock

    The length of the business process and the length of a test are different.  In this picture we see that we have 30 hours to run simulated processing for a 30 day month. We are compressing a 30 day process into a 30 hour test without changing any of the business rules or system behavior.  
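    The compression arithmetic can be sketched as a simple linear mapping; 30 simulated days in 30 real hours works out to a factor of 24. The class and method names below are hypothetical, not from any particular test harness:

```java
import java.time.Duration;
import java.time.Instant;

public class CompressedClock {
    // 30 simulated days in 30 real hours => 24x compression
    static final long COMPRESSION_FACTOR = 24;

    // Map elapsed wall-clock test time onto the simulated business timeline
    static Instant simulatedNow(Instant testStart, Instant realNow) {
        Duration realElapsed = Duration.between(testStart, realNow);
        return testStart.plus(realElapsed.multipliedBy(COMPRESSION_FACTOR));
    }
}
```

    One real hour into the test corresponds to one simulated business day; the full 30 hour run covers the 30 day month.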

    Understanding dates and time

    Computer software can involve a lot of dates. They are used to represent action in the system, record the time of events in log output and are analyzed as part of business rules.  Three relatively common date concepts are
    • "Current Date", meaning "today".
    • "Activity Date", when an event occurs and is recorded in the system. This usually must be within the cycle of the test.  This is different from the "Effective Date" when there is a delay in processing, when there is a data fix or when something is back dated.
    • "Effective Date", when an operation actually takes effect.  This may be different from the "Activity Date" when something is processed out of order.  A system that runs only on effective date may not need any date/time manipulation.
    Business data and information may use any or all of these date types. Integrated business cycle testing must manipulate all three to maintain the correct relationships.
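    As a hypothetical illustration of the distinction, a payment record might carry both an activity date and an effective date, with "today" supplied by the system:

```java
import java.time.LocalDate;

// Hypothetical payment record carrying both an activity date
// (when the event was recorded) and an effective date (when it applies)
public record Payment(LocalDate activityDate,
                      LocalDate effectiveDate,
                      double amount) {

    // A back-dated data fix is recorded now but effective in the past
    boolean isBackDated() {
        return effectiveDate.isBefore(activityDate);
    }
}
```

    A test harness that manipulates time has to keep the relationship between these two dates coherent as it moves data through the cycle.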

    Manipulating Time

    We want to put our data in the "right place" in our timeline for each portion of the test cycle. The data needs to be coherent and must interact correctly with users, business rules and other systems.

    Let's talk about four different ways of manipulating time while testing.  There are probably plenty of others. These are four that I've seen in person or heard about.

    Explicit Effective Date

    It is possible to run a full cycle test using just effective dates if all of the APIs support passing in effective dates. This lets the test back-date or forward-date the operations. This approach will probably not work with most user interfaces since they will not allow user input of effective dates for all operations.
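    A minimal sketch of what such an effective-date-driven API might look like; all class and method names here are hypothetical:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical loan API where every operation takes an explicit
// effective date, so a test can back-date or forward-date at will
class LoanAccount {
    private final List<LocalDate> payments = new ArrayList<>();
    private final LocalDate dueDate;

    LoanAccount(LocalDate dueDate) { this.dueDate = dueDate; }

    void recordPayment(LocalDate effectiveDate) {
        payments.add(effectiveDate);
    }

    // Delinquency is judged as of a caller-supplied date, never "now":
    // delinquent if the due date has passed with no payment on or before it
    boolean isDelinquentAsOf(LocalDate asOf) {
        return asOf.isAfter(dueDate)
                && payments.stream().noneMatch(p -> !p.isAfter(dueDate));
    }
}
```

    Because no method consults the real clock, the test simply supplies whatever dates put the account in the state it needs.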

    Manipulating System Time

    Change the system clock on the operating systems where the business logic runs. This approach works best in systems with few connecting components and few servers.

    Pros:
    • No application changes are required
    • Works easily with single machine applications
    • Records the internal test timestamps in logs and databases

    Cons:
    • Doesn't work for shared machines
    • Doesn't work for SaaS applications
    • Doesn't work in the cloud
    • Logs will show the in-test time and not the external world time

    Manipulating Data Time

    Change the recorded time-stamps for all data at rest between stages.

    Pros:
    • Works if all data is in mutable data stores
    • Requires no code or system changes

    Cons:
    • Hard to test with partner systems
    • File and log times will show different times
    • Partner data must also be modified
    • Hard to find all impacted systems

    Manipulating Program Time

    Alter the program's concept of current time.  This can be done by adding crosscutting code around the Date and Time libraries.  The aspects go to a central time server to obtain the current "system test" time and use that in their calculations.

    Pros:
    • No downstream manipulation
    • Downstream systems see the test time

    Cons:
    • Must reliably shim the whole application
    • Doesn't work with services owned by other teams
    • Will not work with SaaS components
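    When you control the application code, Java's java.time.Clock can serve as a lighter-weight version of this shim: inject a Clock everywhere the code asks for "now", then offset or fix it in tests. The class below is a hypothetical sketch, not the aspect-based approach described above:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.LocalDate;

// Business logic asks an injected Clock for "now" instead of
// calling LocalDate.now() directly
class BillingService {
    private final Clock clock;

    BillingService(Clock clock) { this.clock = clock; }

    boolean isPastDue(LocalDate dueDate) {
        return LocalDate.now(clock).isAfter(dueDate);
    }
}

// Production wiring: new BillingService(Clock.systemUTC())
// Test wiring, shifting the whole service 45 days into the future:
//   new BillingService(Clock.offset(Clock.systemUTC(), Duration.ofDays(45)))
```

    The central "system test" time server described above plays the same role as the offset Clock, just shared across processes.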

    Isolating Data by Time

    Create unique data sets for each test phase.  

    Pros:
    • No system changes
    • No application changes
    • Tests can run overlapping
    • Works for SaaS
    • Works for cloud

    Cons:
    • No true end-to-end test with the same data
    • Data must be synthesized for all non-initial phases


    Full process lifecycle testing can be difficult, especially if there are time based rules that cannot be changed to run a test. Try some of these approaches and see how they work for you.  I've used Program Time manipulation via libraries, Separate Data for each phase, Data Fixes and Effective Date testing when we planned for it early in the project.  The choice depends on the business problem, the operational model and other factors.

    Good luck with your full lifecycle testing!

    Created Feb 2017

    Wednesday, February 8, 2017

    AWS Relationships between EC2, ELB and ASG

    This post describes the basic relationships of ELBs (now ALBs), EC2 instances and ASGs.  I used AWS for over a year before I realized how Auto Scaling Groups actually interacted with ELBs and EC2 instances.


    EC2: An Amazon virtual machine used to host applications and services.  EC2 instances can be pooled for scale or failover.  EC2 instances can be based on any of the Amazon EC2 machine types.

    Elastic Load Balancer (ELB): The basic load balancer provided by Amazon.  They are used as reverse proxy servers for pools of EC2 instances.  ELBs determine instance health via basic health check operations.

    Auto Scaling Group (ASG): A control mechanism that manages how many EC2 instances make up a pool. ASGs will create new EC2 instances based on configured pool sizes. They can also auto-scale up and auto-scale down the pool sizes based on load.  ASGs can register created EC2 instances with associated ELBs.

    Availability Zones (AZ): An Amazon region is made up of various data centers that have isolated power and communication.  Those data centers are referred to as Availability Zones. An ASG can spread its EC2 instances across those data centers. This increases the availability of the EC2 instance pool and reduces the impact of data center failure.

    Standard Deployment 

    Auto Scaling Groups (ASG) and Availability Zones (AZ)

    An ASG controls the number and location of EC2 instances that make up a worker pool.

    The ASG creates new instances when needed based on a Launch Configuration. It creates new instances when a new ASG is first created and when an ASG needs to "size up" based on configuration changes or increased demand.

    It destroys all associated instances when an ASG is destroyed. It destroys individual instances when an ASG's pool size settings are changed or when demand falls below the load required for the current number of instances.

    An ASG can distribute instances across one or more availability zones. This means the ASG can configure the worker pool for basic HA.

    Elastic Load Balancer (ELB) and ASG

    An ELB acts as a reverse proxy server for a set of EC2 instances that have been registered with the load balancer.  The ELB knows nothing about the type or purpose of nodes in the worker pool.  An ELB can forward to nodes in different Availability Zones (AZ).

    ASGs know how to register instances with an ELB. This means an ASG can automatically add and remove worker nodes from the ELB as they are created or destroyed.

    ELBs know about instances.  ASGs know about instances and ELBs.


    • ELB
      • Distributes traffic across target EC2 instances
      • Knows about instances
      • Does not know about the ASG
    • Auto Scaling Group (ASG)
      • Creates instances
      • Distributes instances across Availability Zones
      • Registers instances with the ELB

      ELB Configuration with Cloud Formation

      Application components are best created as part of a Cloud Formation template.  Basic ELB Cloud Formation templates define the load balancer type, the availability zones and the configuration of the listeners.  The ELB configuration does not define the instances that the listener forwards to. Note that the Cloud Formation template defines a HealthCheck used by the ELB to determine if a target instance is "healthy" enough to accept traffic.

      Here is a sample ELB configuration from the AWS tutorial:
       "ElasticLoadBalancer" : {      "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",      "Properties" : {        "AvailabilityZones" : { "Fn::GetAZs" : "" },        "CrossZone" : "true",        "Listeners" : [ {          "LoadBalancerPort" : "80",          "InstancePort" : "80",          "Protocol" : "HTTP"        } ],        "HealthCheck      }    },

      ASG Behavior and ELB Linkage

      ASGs create, destroy and generally manage a pool of worker nodes / instances.  They allocate created instances across a set of subnets or Availability zones to create low cost highly available applications.  The diagram at right describes a three node network. It does not describe any set of Availability zones.

      ASGs are configured with minimum, maximum and desired numbers of instances.  They manage their worker pool to meet those numbers.  Some teams manage costs by programmatically adjusting their min/max pool sizes in the evening or during times of low demand. It is possible to take all of an ASG's nodes off line by setting the pool sizes to "0".

      Dynamic autoscaling is not a mandatory "must use" feature of an ASG.

      This is separate from autoscaling behavior where the pool sizes are adjusted based on CPU utilization or some other metric.  Monitors can listen for resource consumption events and adjust the ASG pool size accordingly. Here is an ASG that will maintain from 1-3 worker nodes.

      "WebServerGroup" : {      "Type" : "AWS::AutoScaling::AutoScalingGroup",      "Properties" : {        "AvailabilityZones" : { "Fn::GetAZs" : ""},        "LaunchConfigurationName" :
                  { "Ref" : "LaunchConfig" },        "MinSize" : "1",        "MaxSize" : "3",        "LoadBalancerNames" :
                   [ { "Ref" : "ElasticLoadBalancer" } ],

      EC2 Names and other stuff

      EC2 Instances have Instance Ids that uniquely identify individual instances within an Account.  You will often also see a "Name" value in various EC2 dashboards. This represents the "component" or "common" name for a pool of machines doing the same function.  Amazon implements the "Name" attribute as an EC2 Tag. It is a well known tag that has pseudo-special meaning.  I highly recommend adding a Name tag in your Cloud Formation template or whenever manually creating machines.
      • EC2 Names
        • Name field on EC2 instances, just a Tag
        • Name is convention so that related instances have a common value
        • Do not define instance relationships in AWS
        • Same EC2 Name could be used for different roles. Don’t do that.
      • ASGs create instances with same Name but different Instance IDs
      • ELB is a proxy for a pool of instances
        • Could target instances with different names.
        • ALB Target Group behaves like an ELB.

      Example Cloud Formation Template

      Amazon has a sample Cloud Formation template that sets up a web app with an ELB, ASG and some scale up and scale down behavior.  You can find this example and the associated Cloud Formation JSON file in the AWS documentation.

      The Cloud Formation section of the AWS Console has a "design" function that creates a picture of how your Cloud Formation template actually hangs together.  The example one is shown below.  The primary area of focus above was the Elastic Load Balancer, the Web Server auto-scaling group and the Launch Configuration used to create new instances to populate the ASG and be configured into the ELB.

      It also contains some Notification monitors and dynamic ScalingPolicies for scale-up and scale-down.  AWS pretty much standardizes on using message based notifications for communicating with either AWS components or appropriately configured custom components.

      Created 2017 Feb 01