Saturday, September 30, 2017

The value curve for new hires in skills positions

What is the relative value of a new hire in their first year?  That value depends on many factors: the hire's motivation, their prior experience, the hiring company's onboarding process, the culture fit and more.  Good processes can dramatically improve the contributions made by new hires.

This posting is really not about the interview and hiring process's impact on the quality and eventual capabilities of new hires.  This posting is about the general rate at which new team members contribute as measured against their eventual capabilities. My main area of experience with this is technical teams: software developers, testers and technical analysts.  I suspect it is also true for other skill positions and integrated teams.

Understanding the Learning and Networking Curve


The graph shows my gut feel for the rate at which team members contribute within their first year relative to their capabilities. It doesn't rate the new team member against others.

I like to use this picture when I talk with a motivated new hire who is a couple of months into their time on our team.

The graph moves up and down based on motivation and company processes. Flat spots represent perceived stalling in improvement even though improvement is continual.

  1. New team members are always a drag on the team for at least several months.  They don't know anything about the company, culture, business problem, technology or processes.  They have no cross-team support network. 
  2. The 2nd quarter is where a new team member starts to contribute at some level.  People can start leveraging previous experience at this point.  They may make suggestions that don't make sense because they don't know the global context of the problem they are discussing. I call this the "copy/paste" phase for software developers.   This is often the point at which the new hire is nervous about their position or unhappy with the way they are contributing. 
  3. The 3rd quarter is the first time new team members feel like they are making a real contribution. This period can also be frustrating because it often doesn't feel like they are ramping fast enough.  Management may expect 100% capabilities at this point.  Newer teammates often ask questions of the 3rd quarter veteran because of their relative experience.
  4. The 4th quarter is the point where other team members start to rely on a person.  They have enough experience here to run their own efforts in conjunction with others.  I don't feel a person is fully enabled until the end of the first year.

Continually Communicate with New Team Members

Don't be the team that assumes hiring extra people converts into immediate productivity. Plot each person's graph in your head. Set expectations at regular intervals and review them with people to make sure you and they are on the right track.  Make onboarding last beyond the first months.

Final Thoughts

I had an advisor in graduate school.  I talked to him immediately after getting several job offers.  He looked at me and said something like the following. My pride said he was wrong, but experience says he was pretty accurate.
"You think that diploma means you know something.  That diploma means you have the ability to know something in the future.  Those companies are offering you this good money based on the future contribution that the diploma implies you are capable of."

Posted 2017 09

Sunday, September 24, 2017

Black Swan IT Projects: The Loan Servicing mainframe replacement


This blog post discusses the "Mainframe Servicing System Migration", a project that should be considered a Black Swan.

A Black Swan Event  
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying which presumed black swans did not exist, but the saying was rewritten after black swans were discovered in the wild. 
The Fannie Mae loan servicing system replacement was:
  1. Initially budgeted for 18 months and $75M. 
  2. Eventually took about 72 months and cost > $800M.
The project turned out to be a black swan that could have bankrupted other less stable companies.

In the Mid 2000s

Fannie Mae closed out either Q3 or Q4 of that year with a recorded profit of $1B. This was a "peak profit" period that stood out. They decided to push some of this excess cash into IT modernization, targeting the Mainframe Mortgage Servicing system which handled all inbound Fannie Mae mortgage payments from servicers and outbound servicing payments to servicers and Collateralized Mortgage Securities.

Fannie Mae announced that they were going to rewrite all mainframe applications and eliminate all mainframe positions in 18 months through a total system replacement. Half of the staff were told they would be allowed to re-interview for new positions at the end of the 18 months. The other half were told they would be laid off at that time and receive termination packages. All of those positions, and the systems they supported, still existed 6 years later.

Software Project Complexity

There is a lot of data showing how complex and difficult software projects are to estimate and build.  Success rates drop dramatically with increases in size and scope until a point at which almost all projects of a certain size are considered failures when measured against cost, time and the needs of the business. The most well known source is probably the Standish Chaos Report.  You can find publicly accessible versions in various places.  Gartner has its own versions of these types of reports that back up the notion of high software project failure rates. You can also find competing claims that there is no software crisis.  My personal experience tells me the opposite: the Standish Group may actually be optimists. 

Mortgage Servicing System

Fannie Mae and Freddie Mac owned approximately 40% of the US Home Mortgage market around this time. The Fannie Mae mainframe systems handled loan acquisition on the sales side, loan servicing on the payment side and other financial operational processes.  The Fannie Mae servicing system was essentially run and managed by a core team of about 6-10 people.  The two senior team members were near or past retirement age.  Fannie Mae was essentially living on borrowed time.

The Project Announcement

Fannie Mae executives announced that they were going to replace the mainframe systems in 18 months at a cost of $75M, starting within a couple of months.  They declared that they were moving folks from other projects and bidding out the core work to a major integrator.

Phases

Project Initiation

A large (3 letter) computer company/integrator won the project.  Their winning bid included a quick staff ramp-up targeting 400 people "on site" to work on this project.  Note that this project was targeted at 18 months including the spin-up phase.

SOA custom transactional application

The first attempt was an individual-transaction-oriented system based on an asynchronous Service Oriented Architecture.  It was a completely "modern" approach for that time period. 

The initial system was started with partial requirements.  It had up to 400 new contractors, many of whom had no enterprise experience in the tool suite.  The team picked an architecture that no one on the team had ever built to that scale before.  A data center was populated based on the assumption that the system would go live within the project window. The system was complicated by the fact that it was replacing the heart of the entire servicing operation while not impacting current operations.  

Phase 1 ran for 2+ years and was a complete failure when measured against cost, features and the needs of the business.  Many of the hardware leases expired before any production software was ever installed. The $100M+ spent was not a complete waste in that the time was used to shake out the requirements and get a better feel for the operational needs. I'd guess Fannie Mae had already spent over $200M at this point on a $75M project.

ETL 

The second attempt was basically an ETL based system with large data import/export operations bookending batch-type processing.  Think of this as a slightly more mainframe-like approach. Staff was reorganized. Management was shuffled.  New technology was brought in.

Phase 2 ran for 2+ years and was pretty much a failure based on the criteria of cost, schedule and features.  The project was > 100% over time and > 400% over budget at this point.

Shrink the Project.  Shove it all in the Database

The teams realized, by the third attempt, that moving massive amounts of data in and out of relational databases was too slow for their operations.  Mainframes were optimized for this type of work.  The project went with an approach where all the data stayed in the database.  They also reduced the scope of the project to certain transaction types,  reducing the amount of data to less than half of the previous system attempts.

Phase 3 ran for several years and "went to production".

Final Analysis

Is a project successful if it makes it to production? Let us standardize on the PMI definition of success:
"Achieving project objectives within schedule and within budget, to satisfy the stakeholder and deliver business value"
Some adjust the definition so that a project is declared successful simply because it made it to production. That is a false definition that makes it impossible to measure progress and compare projects.

I would normally say that managing scope on a project to keep it under 18 months would be a way of increasing the odds of success (cost, time and business value).  This is only true if you adjust one of the axes, cost, time or features, to make the project fit.  Projects fail when they are too large or when they are force-fitted into political realities.

This particular project was like many others.  It would eventually "go live", making it a success at some level.  It was a failure in the sense that the entire team turned over several times, it was over budget and over time, and it delivered a smaller feature set than envisioned.


Disclaimer: This represents my personal observations and does not reflect the opinions of anyone working for Fannie Mae or any of its contracted employees or companies during that time.

Saturday, September 23, 2017

Sales Engineer Guide: Hunter or Farmer

Enterprise-level sales representatives are a whole other breed of person from their pre-sales engineers. Enterprise sales representatives execute and help formulate corporate sales strategies and programs.  They must be extremely self-confident, sometimes carrying entire companies on their backs. Sales representatives' performance directly impacts the job stability of everyone else in the company. Pre Sales Engineers do best when they understand the personalities and styles of their partner representatives.  Two major personality types are hunters and farmers. Most people are a mix of the two but some are hard-core hunters or farmers.

A Note on the Danger of Stereotypes

Hunters and Farmers are descriptive stereotypes.  You rarely run into someone who is completely anything.  Think of this as you would any other personality classifications. It is a useful way of reminding yourself that you may need different approaches with different people in the same jobs.

Hunters

These folks seek out, track down and kill whatever deal they can find.  They tend to be more aggressive in planning meetings and in sales calls. Hunters have a shorter, more aggressive attention span than farmers.  Their sales situations can be very fluid with more aggressive posturing and positioning. 

Sales Engineers may struggle when working with their first Hunter sales reps.  Sales Engineers tend to be more cautious, wanting to provide right answers.  Hunters can truly partner with their sales engineers but often use them as accessories to back their stories.

Startup companies tend to use Hunters when spinning up.  They also tend to appear in tech companies looking for an IPO or acquisition.

Farmers

These folks tend to cultivate accounts, taking the longer view. They build broader relationships and take approaches that can really frustrate their Hunter brethren. Sometimes they end up with huge deals and sometimes they have crop failure and go hungry. I've seen farmers sell nothing for 9 months and still end up on stage at the sales meeting because they closed their long-term projects.

Farmers tend to do better with customer companies that actually want a partner.  Their approach is generally more holistic.  Consultative farming can be wasted on prospects that treat all vendors as adversaries.

Know Your Partner

Sales Representatives are the ones who are held accountable for performance.  They carry the quota, earn the bigger rewards and take the lead on customer interaction.  Sales Engineers act as their adjunct. Both parties are smart and driven.  If you are lucky, this creates a dynamic relationship where the boundary between the two parties' core competencies shifts based on need.

The best sales reps know how to leverage their team resources, including pre-sales engineers.  They know when and where to coach Sales Engineers in their operating style and the role they want the Engineers to play.  The best reps also know when and where to let their pre-sales resources operate independently to increase account coverage and visibility.

Pre Sales Engineers are engineers at heart and tend to focus on hard skills like product knowledge. The best sales engineers study how their various representatives operate and adjust to their styles. They learn where their boundaries are and when to jump in or stay back. 

Wrap Up

Sales Representatives own the deal and Pre Sales Engineers own making sure the customer has no reason to say no.  There is variation in how this works because both the sales team and customers are people.  I've seen Pre Sales Engineers carry big deals across multiple account realignments. I've seen Sales Representatives do a great job handling detailed product discussions.  Sometimes both work and sometimes they end in smoking craters.  Know your representatives, know your limits and continually improve. Working in a technical sales environment can sometimes be the best job in the world.

Created 9/2017

Sunday, September 17, 2017

Playing with Web Apps in Azure? Create a Resource Group and App Service plan first.

I dabbled in Windows web app deployments on Azure for 3 or 4 years before I realized I needed to pay attention to the Resource Groups and App Service plans I was using.  This became especially expensive when deploying CI/CD pipelines while teaching classes or when doing random operations while trying to understand how stuff worked.  I partially blame the great Visual Studio integration / wizards for this.  They made it easy to "start clean" every time I created a new project.

Resource Groups let you bundle all the components that make up an application or composite system.  See the Azure Resource Manager overview for more information.  

Application Service plans are specific to web and task-type deployments.  They describe the compute resources that will be used by one or more Web Application deployments. You can think of a plan as a PaaS or Docker-type container which is filled with deployments.  Multiple deployments can run in a plan.  A plan should generally not be used across application stacks.

Create a resource group for related work


My personal policy is to create a single prototyping/demo Resource Group that is my ad-hoc default.  I tend to name it in a way that lets me figure out its intention from the main Azure dashboard, including my subscription, general usage area and the targeted region.  

This image shows the Resource Group creation screen.  Pay attention to the "Resource Group Location".
This shows three resource groups in two different regions.  My free MSDN Visual Studio VSTS instance is in one resource group.  The other two resource groups are for application work.

Demos and prototypes can share the same resource group. More serious work requires more serious thought. You will have to decide which system components should be in the same resource group. Think about the lifecycle and retention policies of each component of a system.  Long lived components like data stores may be managed by other teams or have other management policies.
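The same ad-hoc resource group can be created from the Azure CLI instead of the portal.  This is a minimal sketch; the group name and region below are made-up placeholders, so substitute your own naming convention and target region.

    # Sketch: create a general-purpose prototyping/demo resource group.
    # "demo-prototypes-eastus" and "eastus" are placeholder values.
    az group create \
        --name demo-prototypes-eastus \
        --location eastus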

Create Application Service plan(s) into which you can deploy applications

Azure billing for PaaS style web applications is based on the App Service plan cost.  You can save money, especially in a prototyping or demonstration environment, by deploying multiple web applications into the same Application Service plan.  This is a good way of stretching MSDN Azure credits or saving your company money.  Enterprises tend to create Service Plans along business unit or billing code boundaries.

I keep a Free plan and a shared plan live for my projects. The free plan lets me do basic stuff without charge as long as SSL isn't required.  The shared plan runs $10/mo/app and adds SSL and 4X the CPU minutes. Free (F1) and Shared (D1) tiers have compute limits that may be good enough for demos and low volume sites.  

Create one or two plans: one Free plan with 60 CPU minutes per day and one Shared plan with 240 CPU minutes per day. 

Azure always defaults to the "Standard" Pricing Tier when creating a new Application Service plan.  It does this in the Azure portal AND in the Visual Studio wizard. The picture at right shows a new plan using "D1 Shared". You must change the Pricing Tier if you want Free or Shared.  This is another good reason to create your App Service plan prior to coding.
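Creating the plans ahead of time with the tier spelled out avoids the Standard default entirely.  A hedged Azure CLI sketch, reusing the placeholder resource group from above and made-up plan names:

    # Sketch: pre-create Free and Shared plans with explicit SKUs so the
    # portal and Visual Studio wizards never fall back to Standard.
    az appservice plan create \
        --name demo-plan-free \
        --resource-group demo-prototypes-eastus \
        --sku F1

    az appservice plan create \
        --name demo-plan-shared \
        --resource-group demo-prototypes-eastus \
        --sku D1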

You can fit one always-on Basic (B) or Standard (S) plan into an MSDN subscription if you want more performance or want to play with autoscale. 

Enterprises probably have standard Application Service plan sizes, starting with Basic (B1/B2/...) in development environments and Standard (S1/S2/...) in semi-production environments.  
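Once the plans exist, each new Web App can be pointed at one of them instead of letting a wizard create another plan.  Another minimal sketch with placeholder app names; both apps land in the same Shared plan and therefore share its single charge.

    # Sketch: deploy two demo web apps into one pre-created plan so they
    # share its compute resources.  Web app names must be globally unique,
    # so these placeholders will need to change.
    az webapp create \
        --name demo-app-one \
        --resource-group demo-prototypes-eastus \
        --plan demo-plan-shared

    az webapp create \
        --name demo-app-two \
        --resource-group demo-prototypes-eastus \
        --plan demo-plan-shared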

Microsoft has great documentation on Application Service plans that can get you started.

Wrap Up

Deploying Applications in Azure is easy.  Cost and resource management is also easy with just a little bit of pre-staging of Resource Groups and Application Service plans. 

These suggestions may not apply in situations where automation completely tears down and builds environments or where they are in conflict with larger enterprise policies.