Saturday, September 23, 2017

Sales Engineer Guide: Hunter or Farmer

Enterprise-level sales representatives are a different breed from their pre-sales engineers. They execute, and help formulate, corporate sales strategies and programs.  They must be extremely self-confident, sometimes carrying entire companies on their backs, and their performance directly impacts the job stability of everyone else in the company. Pre-sales engineers do best when they understand the personalities and styles of their partner representatives.  Two major personality types are hunters and farmers. Most people are a mix of the two, but some are hard-core hunters or farmers.

A Note on the Danger of Stereotypes

Hunters and Farmers are descriptive stereotypes.  You rarely run into someone who is completely one or the other.  Treat this like any other personality classification: a useful reminder that you may need different approaches with different people in the same job.

Hunters

These folks seek out, track down and kill whatever deal they can find.  They tend to be more aggressive in planning meetings and on sales calls. Hunters have a shorter attention span than farmers, and their sales situations can be very fluid, with more aggressive posturing and positioning.

Sales Engineers may struggle when working with their first Hunter sales reps.  Sales Engineers tend to be more cautious, wanting to provide the right answers.  Hunters can truly partner with their sales engineers, but often use them as accessories to back their stories.

Startup companies tend to use Hunters when spinning up.  They also tend to appear in tech companies looking for an IPO or acquisition.

Farmers

These folks cultivate accounts, taking the longer view. They build broader relationships and take approaches that can really frustrate their Hunter brethren. Sometimes they end up with huge deals and sometimes they have crop failure and go hungry. I've seen farmers sell nothing for nine months and still end up on stage at the sales meeting because they closed their long-term projects.

Farmers tend to do better with customer companies that actually want a partner.  Their approach is generally more holistic.  Consultative farming can be wasted on prospects that treat all vendors as adversaries.

Know Your Partner

Sales Representatives are the ones held accountable for performance.  They carry the quota, earn the bigger rewards and take the lead on customer interaction.  Sales Engineers act as their adjunct. Both parties are smart and driven.  If you are lucky, this creates a dynamic relationship where the boundary between the two parties' core competencies shifts based on need.

The best sales reps know how to leverage their team resources, including pre-sales engineers.  They know when and where to coach Sales Engineers on their operating style and the role they want the Engineers to play.  They also know when and where to let their pre-sales resources operate independently to increase account coverage and visibility.

Pre-sales Engineers are engineers at heart and tend to focus on hard skills like product knowledge. The best sales engineers study how their various representatives operate and adjust to their styles. They learn where the boundaries are and when to jump in or stay back.

Wrap Up

Sales Representatives own the deal, and Pre-sales Engineers own making sure the customer has no reason to say no.  There is variation in how this works because both the sales team and customers are people.  I've seen Pre-sales Engineers carry big deals across multiple account realignments. I've seen Sales Representatives handle detailed product discussions well.  Sometimes those arrangements work and sometimes they end in smoking craters.  Know your representatives, know your limits and continually improve. Working in a technical sales environment can sometimes be the best job in the world.

Created 9/2017

Sunday, September 17, 2017

Playing with Web Apps in Azure? Create a Resource Group and App Service plan first.

I dabbled in Windows web app Azure deployments for 3 or 4 years before I realized I needed to pay attention to the Resource Groups and App Service plans I was using.   This became especially expensive when deploying CI/CD pipelines while teaching classes, or when doing random operations while trying to understand how things worked.  I partially blame the great Visual Studio integration and wizards for this: they made it easy to "start clean" every time I created a new project.

Resource Groups let you bundle all the components that make up an application or composite system.   See the Azure Resource Manager overview for more information.

Application Service plans are specific to web and task-type deployments.  They describe the compute resources that will be used by one or more Web Application deployments. You can think of a plan as a PaaS or Docker-style container that is filled with deployments.  Multiple deployments can run in a single plan.  A plan should generally not be shared across application stacks.

Create a resource group for related work


My personal policy is to create a single prototyping/demo Resource Group that is my ad-hoc default.  I tend to name them in a way that I can figure out my intention for them from the main Azure dashboard and include my subscription, general usage area and the targeted region.  

This image shows the Resource Group creation screen.  Pay attention to the "Resource Group Location".
This image shows three resource groups in two different regions.  My MSDN free Visual Studio VSTS instance is in one resource group.  The other two resource groups are for application work.

Demos and prototypes can share the same resource group. More serious work requires more serious thought. You will have to decide which system components should be in the same resource group. Think about the lifecycle and retention policies of each component of a system.  Long lived components like data stores may be managed by other teams or have other management policies.
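If you prefer scripting this setup, the Azure CLI can pre-create the group. This is only a sketch; the group name and region are placeholders:

  # Create an ad-hoc prototyping resource group in a specific region
  az group create --name demo-prototypes-eastus --location eastus

  # List existing groups to see what is already there
  az group list --output table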

Create Application Service plan(s) into which you can deploy applications

Azure billing for PaaS style web applications is based on the App Service plan cost.  You can save money, especially in a prototyping or demonstration environment, by deploying multiple web applications into the same Application Service plan.  This is a good way of stretching MSDN Azure credits or saving your company money.  Enterprises tend to create Service Plans along business unit or billing code boundaries.

I keep a Free plan and a shared plan live for my projects. The free plan lets me do basic stuff without charge as long as SSL isn't required.  The shared plan runs $10/mo/app and adds SSL and 4X the CPU minutes. Free (F1) and Shared (D1) tiers have compute limits that may be good enough for demos and low volume sites.  

Create one or two plans: one Free with 60 CPU minutes per day and one Shared with 240 CPU minutes per day.

Azure always defaults to "Standard" Pricing Tier when creating a new Application Service plan.  It does this in the Azure portal AND in the Visual Studio wizard. The picture at right shows a new plan using "D1 Shared". You must change the Pricing Tier if you want Free are Shared.  This is another good reason to create your App Service plan prior to coding.

You can fit one always-live Basic (B) or Standard (S) plan into an MSDN subscription if you want more performance or want to play with autoscale.

Enterprises probably have standard Application Service plan sizes, starting with Basic (B1/B2/...) in development environments and Standard (S1/S2/...) in semi-production environments.

Microsoft has great documentation on Application Service plans that can get you started.

Wrap Up

Deploying Applications in Azure is easy.  Cost and resource management is also easy with just a little bit of pre-staging of Resource Groups and Application Service plans. 

These suggestions may not apply in situations where automation completely tears down and builds environments or where they are in conflict with larger enterprise policies. 



Sunday, August 13, 2017

Setting Mac ITerm tab titles to the current directory

It is easy to set the iTerm tab title to the final part of the current working directory and the iTerm window title to the full path of the current tab's working directory.


Start a new terminal window or tab after making the following changes.  New tabs and iTerm windows create new login sessions that read these file contents.

Modify ~/.bashrc

  1. Edit  ~/.bashrc.  Create ~/.bashrc if it doesn't exist.
  2. Add the following text to the file.  Note that this text has comments that document where I found this on the internet

# Set iTerm2 tab titles to the last directory in PWD
tabTitle() { echo -ne "\033]0;"$*"\007"; }
# Set iTerm2 win titles to the full directory of PWD
winTitle() { echo -ne "\033]2;"$*"\007"; }

# Override 'cd' to change directory, list its contents and set the titles
cd() { builtin cd "$@"; ls -lFah; tabTitle ${PWD##*/}; winTitle ${PWD/#$HOME/\~}; }

Modify ~/.bash_profile

Mac terminal emulators start a login session for every new tab.  This means we need to update ~/.bash_profile to invoke ~/.bashrc.
  1. Edit ~/.bash_profile.  Create ~/.bash_profile if it doesn't exist.
  2. Add the following text to the file.  Note that this text has comments that document where I found this info on the internet.
if [ -f ~/.bashrc ]; then . ~/.bashrc; fi 

Sunday, April 30, 2017

Rasberry Pi, Z-Wave and Domoticz: Setup Part 2

This article is about using Z-Wave with a Raspberry Pi.  Z-Wave and ZigBee are the two big wireless players in the home automation market.  A single Z-Wave wireless controller can communicate with a large number of devices, including outlet switches, power meters, alarm sensors, remote-controlled light bulbs and other accessories.

The USB stick on the left is a Z-Wave Z-Stick S2 that acts as an interface between a computer and a network of wireless devices. It can be controlled via COTS software or open source libraries like OpenZWave.  The outlet on the right is a Z-Wave wireless controlled outlet that reports back power consumption and state.



I received this controller / switch pair at the Microsoft Build conference a couple years back.  They were one of the "prizes" you could buy when you earned conference credits for running through the labs.  I really had no idea what they were for a couple years until I took the time to do some research.

The Raspberry Pi can be an ideal Home Automation host that lets people use web interfaces or mobile apps to control devices via the USB Z-Wave controller.  I recommend using the Raspberry Pi 3 as a control device because it has the most capacity headroom.  

The picture at the right is a Raspberry Pi V1 running Domoticz Home Automation software with a Z-Wave USB control dongle and a PiFace digital I/O daughter board.


A Z-Wave stick can control dozens of wireless devices.  The PiFace Digital I/O daughter-board supports direct connection of up to 8 digital input and 8 digital output devices.

About Domoticz

Domoticz is an open source home automation dashboard that can be installed on a Raspberry Pi.  It is one of several automation platforms that run on the Raspberry Pi under Linux. Domoticz supports dozens of automation hardware controllers: Z-Wave sticks, hardware boards, LAN-based controllers, etc. Each controller can manage one or more individual devices, depending on the controller type and the device type.  Each device can have multiple switches, monitors, outputs or inputs.



Domoticz provides a web interface that can configure new hardware (controllers), identify associated devices and their features.  Those mappings are then managed and controlled through a set of dashboards.

Main Menu


The Domoticz UI consists of three main parts:
  1. Setup: The main utility menu with some interesting sub menus.
    1. Hardware: Attached controllers like the ZWave USB stick or a PiFace.
    2. Devices: Controlled endpoints like switches and utility monitors.
  2. Configured Devices
    1. Switches: Exactly what it sounds like
    2. Temperature: Thermometers.
    3. Weather: Weather stations
    4. Utility: power and current monitoring
  3. Dashboard: Unified view across types

Installation Preconditions

  1. You have a Raspberry Pi running the standard Raspberry Pi O/S.  I installed the full version, mostly because I'm too lazy to put together the thinnest custom package needed.
  2. You have a Z-Wave controller stick or add-on board for the Raspberry Pi.
  3. Your Z-Wave controller is paired with one or more devices.  There are instructions online that show how to pair devices without using a computer.
  4. Your Raspberry Pi has an internet connection that can be used to install and update software.
  5. Your Raspberry Pi has had all updates installed.
  6. Decide if you want the Raspberry Pi to use a static or dynamic IP address.  The Domoticz wiki recommends static IPs to make the web UI easier to find.  The system will work with either.

Installing Domoticz

I tend to use an HDMI display for this but you could do everything through SSH or command line instead. You need to know the Raspberry Pi's IP address.  Either set a static IP, use some type of dynamic DNS or look up the address on the screen hooked to the Pi's HDMI port. 
  1. Plug the Z-Stick into a USB port on the Raspberry Pi.
    1. The "first" Z-Stick is accessible via device /dev/ttyUSB0 .
  2. Install Domoticz using the Domoticz project guide. Use "the easy way". The installer will do most of the heavy lifting; you don't have to manually compile anything.
    1. The recommended install command looks something like
      sudo curl -L install.domoticz.com | sudo bash
  3. Record the URL provided at the end of the installation process. That is the web address of the Domoticz Admin page.
  4. Connect to Domoticz using the address you got at the end of the installation; a quick command-line sanity check follows this list.  My Raspberry Pi on a dynamic IP can currently be reached at the following address:
    1. http://192.168.1.123:8080/#/Dashboard 
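A quick sanity check, assuming the Pi address shown above, confirms the stick and the web service are visible:

  # Confirm the Z-Stick is visible to the operating system
  ls -l /dev/ttyUSB*

  # Confirm the Domoticz web service answers (substitute your Pi's address)
  curl -I http://192.168.1.123:8080/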

Configuring Z-Wave 

Hardware Controllers

Let's configure a Z-Wave stick as a hardware device in Domoticz and identify device nodes like switches and outlets.
  • Connect to Domoticz with your web browser using the URL found above.  Your screen should look something like the screenshot below.
  • Click on the "setup your Hardware" link.
    • Select "OpenZWave USB" for the "Type".
    • Select "/dev/ttyUSB0 for the Z-Wave control device
    • Enter an optional name
  • You should see the ZWave Devices controller on the next page.  
    • Click on the Setup button.

  • This will take you to the "Nodes" screen where you can see devices controlled by this Z-Wave stick. The picture below shows two digital switches and their controller.  These particular switch "nodes" are multi-function. We will see later that they can measure power usage in addition to acting as switches.

Controlled Devices

We next add the switches and other controlled nodes to their appropriate sections.
  • Select the "Setup" menu and then "Devices" Setup-->Devices. or click on the Dashboard links.  Click on Devices if you see this screen.
  • The next picture shows two switches with Power Meter and Voltage monitoring capability.  Functionality is enabled and disabled by clicking on the light bulb icons on the left-hand side.
  • Add each function to the appropriate Domoticz panel (Switch, Temperature, Weather or Utility) by clicking on the green arrow on the right-hand side of that device's row.  Domoticz will automatically select the correct menu.  The green arrows turn to blue arrows for functions/features that have been enabled in the GUI.

  • Enter the name of the device in the panel that pops up after clicking on the green arrow.  I took the default settings in this picture.




  • Click on the Switches main menu item. You should see the just-added switches in the panel.  Switches are operated by clicking on the light bulb icon.

Accessing a Switch

The previous sections configured a switch socket.  That device can be enabled and disabled in the GUI from any of the places below; it can also be driven from the command line, as shown after the list.

  1. Setup-->Devices
  2. In the Switches menu
  3. In the Dashboard
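Domoticz also exposes a JSON HTTP API, so the same switch can be toggled from a script. A sketch, assuming the Pi address used earlier and a device idx of 3 (check your device's idx on the Devices page):

  # Turn the switch on, then off, via the Domoticz JSON API
  curl "http://192.168.1.123:8080/json.htm?type=command&param=switchlight&idx=3&switchcmd=On"
  curl "http://192.168.1.123:8080/json.htm?type=command&param=switchlight&idx=3&switchcmd=Off"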

Useful links

Created 2017 04 30

Sunday, April 9, 2017

Maven Lifecycle Phases - Fitting in Code Analysis and Other Tools

The build management portion of Maven operates on a type of Template Pattern. Maven moves from lifecycle phase to lifecycle phase until there is a step failure or until all steps are complete. The following diagram lists the build lifecycle phases. The orange squares represent the main targets that people run. Every phase is executed, starting with Validate, until the requested end phase is reached.

For example
  • "mvn validate" runs just the Validate phase.
  • "mvn compile" runs Validate, Initialize, Generate Sources, Process Sources, Generate Resources, Process Resources and Compile.
Each Maven plugin executes within a phase. The Surefire unit test plugin, as an example, typically runs the tests in the Test phase.  This means that unit tests don't run if Validation, Compilation, class processing or any of the other preceding phases fail.

Maven plugins can execute in their default phase or in any phase of your choosing.  Lifecycle phase selection affects if and when the plugin runs.  You have to decide which mvn command should trigger the plugin and which types of earlier failures should prevent it from running at all.



Code Quality

Teams can use Maven to generate code quality metrics with a variety of plugins. These plugins can be configured to fail the build when minimum quality thresholds are not met.  A failed metrics execution aborts the build at whatever lifecycle phase it runs in.  Most code quality Maven plugins have default phases, but teams may wish to move them based on where they want quality gates in the build. Code quality tools can run early, before the major build operations, or late, so that the standard operations complete or fail before code quality analysis starts.

Static Analysis Phase Selection

Static analysis tools include Checkstyle and PMD.  They can run prior to or after compilation. Some static analysis tools, like Findbugs, run against the compiled binaries. Those should be run after the compilation phase.

When should source code static analysis be run? It is really up to the team based on the priority order for failing a build.
  • It can be run prior to compilation, possibly in the Validate phase.  This lets a build skip a long compilation cycle when the code doesn't meet a minimum bar.  
  • It can be run after compilation, possibly in the Process Classes phase. This way the analyzer only runs against code that is known to be syntactically correct because it compiles.  A command-line sketch of the trade-off follows this list.
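A rough sketch of those two options, assuming the Checkstyle and PMD plugins are bound to the chosen phases in the project's pom.xml (the bindings themselves live there, not on the command line):

  # Analysis bound to the validate phase fails the build before any compilation happens
  mvn validate

  # Analysis bound to the process-classes phase only runs after a successful compile
  mvn process-classes

  # The plugin goals can also be invoked directly, outside of any lifecycle binding
  mvn checkstyle:check pmd:check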

Code Coverage Phase Selection

Code coverage tools work by instrumenting the source code or binaries prior to test execution. This means you will want to run the code coverage plugin after the other quality plugins, especially if you have code generated by JAXB, GWT or some other generator.

Selecting a code coverage maven phase can take some thought.
  • Some teams just bind it to the Test phase. This has the advantage of only running the tests once.  It has the disadvantage of subtly changing the code under test, as the coverage instrumentation makes minor changes to the test binaries.
  • Some teams bind it to the Verify phase.  This has the advantage of failing the build on earlier problems before code coverage generation.  It also has the advantage of executing the unit tests without any binary changes. Code coverage in the Verify phase has the downside of requiring the tests to be run twice: once for the Test phase and once for the post-test coverage pass.

Generating Reports in the Site Targets

Metrics are usually generated in the build lifecycle.  They are usually aggregated into web page reports as part of the mvn site web site generation cycle.  Maven metrics reports generally require two executions: metrics are generated in one pass and then formatted for viewing in another pass.
  • mvn clean install
  • mvn site  

Profiles

Maven profiles let you change the plugins executed based on a profile name.  You can use the default profile for normal developer builds and use some custom profile to execute different plugins.

  • Teams that execute quality metrics as part of all builds don't have a lot of work to do. 
  • Teams that execute quality metrics in only some special builds may wish to create a custom CI profile that only executes as part of the CI build, as sketched below.
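A sketch, with a hypothetical profile named "ci" that binds the extra quality plugins:

  # Everyday developer build using the default profile
  mvn clean install

  # CI build activating the hypothetical "ci" profile with the extra quality gates
  mvn clean install -P ci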

Maven and Gradle 

Gradle takes a completely different approach. It eschews build pipelines based on configuration in favor of pipelines based on scripts.  You can find more information about the differences between Gradle and Maven at https://gradle.org/maven-vs-gradle

Sunday, March 26, 2017

Static Analysis from IDE to CI with IntelliJ

Static program analysis is the analysis of computer software that is performed without actually executing programs (analysis performed on executing programs is known as dynamic analysis).[1] In most cases the analysis is performed on some version of the source code, and in the other cases, some form of the object code.[2] 

Static analysis provides a low cost way of automating code review of certain types of source code errors and standards.  Static code analysis, automated tests and code coverage are staples of the Continuous Integration process replacing manual effort with automation.

Full featured IDEs implement their own integrated static analysis and test measurement tools. IntelliJ comes with a comprehensive set of integrated static analysis tools and rules.  It can run the rules in an incremental fashion updating results as code is edited.  Rule violations are immediately reflected in the user interface.

CI servers and IDEs each have their own system for running code analysis, tests and test analysis. There are sometimes differences between the results generated by IDEs and build servers.  This makes it hard for developers to reliably implement coding standards, pass static analysis and generate expected test results and coverage. IDE code quality extensions, or plugins, can run the CI tools inside the IDE so that developers can troubleshoot inconsistencies and predict the results of CI builds.


The above diagram shows a possible workflow where developers start with the integrated IDE tools. They then use IDE extensions to run the batch/CI automated tools. Finally the project code is then analyzed again by the CI build tools.

Example Static Analysis Plugins within IntelliJ

The next three videos walk through the installation and usage of several 3rd party IntelliJ plugins that let developers run standard Java static analysis and code coverage tools.  We are really interested in
  1. Checkstyle:  A code format and style tool.
  2. PMD:  A static analysis tool
  3. Findbugs: A static analysis tool from the University of Maryland
  4. Clover: A test code coverage tool.
  5. Cobertura:  A test code coverage tool.
  6. Emma:  A test code coverage tool.
The following IntelliJ demos highlight Checkstyle, Findbugs and PMD.

QAPlug

This single unified plugin supports running Checkstyle, Findbugs and PMD with custom configuration files.  This means teams can use the same custom rulesets in IntelliJ that they use in a Hudson, Bamboo, TFS or other CI build.
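For comparison, a developer can run the command-line build with the same rule files the CI server uses. This is only a sketch; the config path is a placeholder:

  # Run Checkstyle against the shared ruleset also used by the CI job, then PMD
  mvn checkstyle:check -Dcheckstyle.config.location=config/checkstyle-rules.xml
  mvn pmd:check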

QAPlug is run via the Analyze context menu available on a right mouse click.  It is configured in the IntelliJ preferences screen and can be connected to one or more IntelliJ profiles.

The video on the right walks through installation and usage of the Checkstyle and PMD modules for this plugin.


FindBugs - IDEA

This single purpose plugin supports running Findbugs with custom rule files.  It can share the same rule files used in a CI build.  Findbugs-IDEA can be invoked via the "Findbugs" right mouse context menu. 

The video at the right walks through installation and usage of the Findbugs plugin





Checkstyle - IDEA

This single purpose plugin supports running Checkstyle from the right mouse Analyze context menu.  It can share Checkstyle configurations with those used on the CI server.

The video at the right walks through installation and usage of the Checkstyle-IDEA IntelliJ extension. 







Citations

  1. Wichmann, B. A.; Canning, A. A.; Clutterbuck, D. L.; Winsbarrow, L. A.; Ward, N. J.; Marsh, D. W. R. (March 1995). "Industrial Perspective on Static Analysis" (PDF). Software Engineering Journal: 69–75. Archived from the original (PDF) on 2011-09-27.
  2. Wikipedia "Static Program Analysis"

Sunday, February 26, 2017

Time Warp: Business Cycle Testing

"Let's do the time warp again..."

Video

A video version of this blog

Business Cycle with Time dependencies?

What is a business cycle and why do I need to test it?  I'm really talking about any type of business process that has time-based business rules.  The rules can be periodic, firing on a regular basis, or one-time, firing when some time-based criterion is met.  Most of the ones I've worked with are contract oriented or billing-cycle oriented. Examples include telecom contracts, home mortgage servicing systems and term-based insurance, to name just a few.   They usually have some time-based sequence of operations, date-based rules and possibly some type of state machine.

Testing is complicated by the fact that data may need to be of a certain age before processing begins.  Loan payments may need to be delinquent.  An insurance policy may start the renewal process some time before expiration.  Collateralized debt may have payment, reimbursement and equity distribution segments based on the time of the month.

Compressing the clock

The length of the business process and the length of a test are different.  In this picture we see that we have 30 hours to run simulated processing for a 30 day month. We are compressing a 30 day process into a 30 hour test without changing any of the business rules or system behavior.  



Understanding dates and time

Computer software can involve a lot of dates. They are used to represent actions in the system and the time of events in log output, and they are analyzed as part of business rules. Three relatively common date concepts are 
  • "Current Date", meaning "today".  
  • "Activity Date", when an event occurs and is recorded in the system. This usually must be within the cycle of the test.  This is different from an "Effective Date" when there is a delay in processing, when there is a data fix or when something is back dated.
  • "Effective Date", when an operation actually takes effect.  This may be different from the "Activity Date" when something is processed out of order.  A system that runs only on effective dates may not need any date/time manipulation.
Business data and information may use any or all of these date types. Integrated business cycle testing must manipulate all three to maintain the correct relationships.

Manipulating Time

We want to put our data in the "right place" in our timeline for each portion of the test cycle. The data needs to be coherent and must interact correctly with users, business rules and other systems.. 

Let's talk about four different ways of manipulating time while testing.  There are probably plenty of others. These are four that I've seen in person or heard about.

Explicit Effective Date

It is possible to run a full cycle test using just effective dates if all of the APIs support passing in effective dates. This lets the test back-date or forward-date operations. This approach will probably not work with most user interfaces, since they do not allow the user to supply an effective date for every operation.

Manipulating System Time

Change the system clock on the operating systems where the business logic runs. This approach works best in systems with few connecting components and few servers.

Pros:
No application changes are required
Works easily with single machine applications
Records the internal test timestamps in logs and databases

Cons:
Doesn't work for shared machines
Doesn't work for SaaS applications
Doesn't work in the cloud.
Logs will show the in-test time and not the external world time.



Manipulating Data Time

Change the recorded time-stamps for all data at rest between stages.

Pro:
Works if all data is in mutable data stores.
Requires no code or system changes

Cons: 
Hard to test with partner systems.
File and log time will show different times
Partner data must also be modified
Hard to find all impacted systems.







Manipulating Program Time

Alter the program's concept of the current time.  This can be done by adding crosscutting code around the Date and Time libraries.  The aspects go to a central time server to obtain the current "system test" time and use that in their calculations.  A small sketch of this style of manipulation follows the pros and cons below.

Pro:
No downstream manipulation.
Downstream systems see the test time

Cons:
Must reliably shim the whole application
Doesn't work with services owned by other teams.
Will not work with SaaS components. 
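One low-effort way to experiment with program-time manipulation on a Linux test box is the libfaketime shim, which intercepts time calls for a single process. This is only a sketch; the dates and batch script name are hypothetical:

# Quick check that the shim works: report a fake "current" date
faketime '2017-03-31 23:00:00' date

# Run a batch job as if it were month end without touching the system clock
faketime '2017-03-31 23:00:00' ./run-month-end-batch.sh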






Isolating Data by Time

Create unique data sets for each test phase.  

Pros:
No system changes
No application changes
Tests for different phases can overlap
Works for SaaS
Works for cloud

Cons:
No true end to end test with same data
Data must be synthesized for all non-initial phases






Conclusion

Full lifecycle process testing can be difficult, especially if there are time-based rules that cannot be changed to run a test. Try some of these approaches and see how they work for you.  I've used program time manipulation via libraries, separate data for each phase, data fixes and effective date testing when we planned for it early in the project.  The choice depends on the business problem, the operational model and other factors.

Good luck with your full lifecycle testing!


Created Feb 2017

Wednesday, February 8, 2017

AWS Relationships between EC2, ELB and ASG

This post describes the basic relationships of ELBs (now ALBs), EC2 instances and ASGs.  I used AWS for over a year before I realized how Auto Scaling Groups actually interacted with ELBs and EC2 instances.

Terms

EC2: An Amazon virtual machine used to host applications and services.  EC2 instances can be pooled for scale or failover.  EC2 instances can be based on any of the Amazon EC2 machine types.

Elastic Load Balancer (ELB): The basic load balancer provided by Amazon.  ELBs act as reverse proxy servers for pools of EC2 instances and determine instance health via basic health check operations.

Auto Scaling Group (ASG): A control mechanism that manages how many EC2 instances make up a pool. ASGs will create new EC2 instances based on configured pool sizes. They can also auto-scale up and auto-scale down the pool sizes based on load.  ASGs can register created EC2 instances with associated ELBs.

Availability Zones (AZ): An Amazon region is made up of multiple data centers with isolated power and communication.  Those data centers are referred to as Availability Zones. An ASG can spread its EC2 instances across those data centers, which increases the availability of the EC2 instance pool and reduces the impact of a data center failure.

Standard Deployment 

Auto Scaling Groups (ASG) and Availability Zones (AZ)

An ASG controls the number and location of EC2 instances that make up a worker pool.

The ASG creates new instances when needed based on a Launch Configuration: when a new ASG is first created, and when an ASG needs to "size up" based on configuration changes or increased demand.

It destroys all associated instances when an ASG is destroyed. It destroys individual instances when an ASG's pool size settings are changed or when demand falls below the load required for the current number of instances.

An ASG can distribute instances across one or more Availability Zones. This means the ASG can configure the worker pool for basic HA.

Elastic Load Balancer (ELB) and ASG

An ELB acts as a reverse proxy server for a set of EC2 instances that have been registered with the load balancer.  The ELB knows nothing about the type or purpose of the nodes in the worker pool.  An ELB can forward to nodes in different Availability Zones (AZ).

ASGs know how to register instances with an ELB. This means the ELB's pool automatically gains and loses worker nodes as the ASG creates or destroys them.

ELBs know about instances.  ASGs know about instances and ELBs.
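These relationships are visible from the AWS CLI. A sketch, with placeholder names for the ASG and classic ELB:

# The ASG record lists both its instances and the ELBs it registers them with
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-web-asg

# The ELB reports per-instance health but has no notion of the ASG behind it
aws elb describe-instance-health --load-balancer-name my-web-elb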

Relationships


  • ELB
    • Distributes traffic across target EC2 instances
    • Knows about instances
    • Does not know about the ASG
  • Auto Scaling Group
    • Creates instances
    • Distributes instances across Availability Zones
    • Registers instances with the ELB

    ELB Configuration with Cloud Formation

    Application components are best created as part of a Cloud Formation template.  Basic ELB Cloud Formation templates define the load balancer type, the availability zones and the configuration of the listeners.  The ELB configuration does not define the instances that the listener forwards to. Note that the Cloud Formation template defines a HealthCheck used by the ELB to determine if a target instance is "healthy" enough to accept traffic.




    Here is a sample ELB configuration from the AWS tutorial:

      "ElasticLoadBalancer" : {
        "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
        "Properties" : {
          "AvailabilityZones" : { "Fn::GetAZs" : "" },
          "CrossZone" : "true",
          "Listeners" : [ {
            "LoadBalancerPort" : "80",
            "InstancePort" : "80",
            "Protocol" : "HTTP"
          } ],
          "HealthCheck" : { ... }
        }
      },

    ASG Behavior and ELB Linkage

    ASGs create, destroy and generally manage a pool of worker nodes / instances.  They allocate created instances across a set of subnets or Availability Zones to create low-cost, highly available applications.  The diagram at right describes a three node network. It does not describe any particular set of Availability Zones.

    ASGs are configured with minimum, maximum and desired numbers of instances.  They manage their worker pool to meet those numbers.  Some teams manage costs by programmatically adjusting their min/max pool sizes in the evening or during times of low demand. It is possible to take all of an ASG's nodes offline by setting the pool sizes to "0".

    Dynamic autoscaling is not a mandatory "must use" feature of an ASG.

    This is separate from autoscaling behavior, where the pool sizes are adjusted based on CPU utilization or some other metric.  Monitors can listen for resource consumption events and adjust the ASG pool size accordingly. Here is an ASG that will maintain from 1 to 3 worker nodes.


    "WebServerGroup" : {      "Type" : "AWS::AutoScaling::AutoScalingGroup",      "Properties" : {        "AvailabilityZones" : { "Fn::GetAZs" : ""},        "LaunchConfigurationName" :
                { "Ref" : "LaunchConfig" },        "MinSize" : "1",        "MaxSize" : "3",        "LoadBalancerNames" :
                 [ { "Ref" : "ElasticLoadBalancer" } ],
    },

    EC2 Names and other stuff

    EC2 instances have Instance IDs that uniquely identify individual instances within an Account.  You will often also see a "Name" value in various EC2 dashboards. This represents the "component" or "common" name for a pool of machines performing the same function.  Amazon implements the "Name" attribute as an EC2 Tag; it is a well known tag that has pseudo-special meaning.  I highly recommend adding a Name tag in your Cloud Formation template or whenever manually creating machines.  A small CLI sketch follows the list below.
    • EC2 Names
      • Name field on EC2 instances, just a Tag
      • Name is convention so that related instances have a common value
      • Do not define instance relationships in AWS
      • Same EC2 Name could be used for different roles. Don’t do that.
    • ASGs create instances with same Name but different Instance IDs
    • ELB is a proxy for a pool of instances
      • Could target instances with different names.
      • ALB Target Group behaves like an ELB.
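    A small sketch of using the Name tag as a grouping convention; the tag value is a placeholder:

      # List the instance ids of every instance that shares a Name tag
      aws ec2 describe-instances \
          --filters "Name=tag:Name,Values=web-app" \
          --query "Reservations[].Instances[].InstanceId" \
          --output text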

    Example Cloud Formation Template

    Amazon has a sample Cloud Formation template that sets up a web app with an ELB, ASG and some scale up and scale down behavior.  You can find this example and the associated Cloud Formation json file at http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/example-templates-autoscaling.html

    The Cloud Formation section of the AWS Console has a "design" function that creates a picture of how your Cloud Formation template actually hangs together.  The example template is shown below.  The primary areas of focus here are the Elastic Load Balancer, the web server Auto Scaling Group and the Launch Configuration used to create new instances that populate the ASG and are registered with the ELB.


    It also contains some notification monitors and dynamic ScalingPolicies for scale-up and scale-down.  AWS pretty much standardizes on message-based notifications for communicating with either AWS components or appropriately configured custom components.





    Created 2017 Feb 01

    Friday, January 20, 2017

    A Chrome Plugin: IsItUp serverless service dashboard

    A coworker created a Chrome extension that acts as a zero-infrastructure dashboard. It provides a simple home page that displays service health plus support or documentation links related to each service.  The extension reads JSON text to make service calls and build the dashboard. The following picture shows 5 services across up to 6 environments.  The top service does not have a production environment. The bottom service represents a third-party external service that has one test environment and one production environment.


    The IsItUp Chrome extension executes health checks via HTTP/HTTPS calls, so it requires connectivity to the services being monitored.

    Video Walk-through

    The video explains various cell examples and describes how the extension might be used. The plugin used for the video was downloaded from the Chrome Web Store.

    Video created with version available Jan 21 2017

    Cell Explanation

    Each cell contains one Service Status plus any number of supporting links.  All link icons open up a new tab to the appropriate supporting web page. It could be documentation, log aggregation, real operations consoles, dashboards or something else.  The "reload" icon will refresh the service status by re-calling the service.

    The image to the right represents a service deployed in AWS. It shows a healthy service status along with a chrome link and an AWS console link.

    This example represents a service deployed in a development environment. The supporting icons link to swagger documentation, Jenkins, Sonar, a Confluence wiki and the GitHub repository for this service.

    Cells in the top row represent the fact that the service doesn't exist in that environment.

    Cells in the bottom row represent the same application deployed in two different cloud environments. Each contains a link to some web site plus a link to the appropriate cloud console for that environment.

    Basic Configuration 

    IsItUp has an options panel that accepts JSON based configuration.  The extension comes with the sample file used in the video and these screen shots. The sample configuration can be extracted from the Configuration Text box on the options page. The sample will change over time as new features are added or as users make suggestions.

    The file has two main sections, header information and site specifications. You can find the sample JSON containing these sections on the options screen.

    Header Section

    The header contains a description of this dashboard in "title", row header names and column header names. The extension validates the number of row headers matches the number of rows. It also validates that the number of column headers matches the number of columns.

      {
      "title": "Demonstration Status",
      "headers": {
        "cols": "dev/ci,integration,func test,qa,prod azure,prod aws",
        "rows": "Web App 1,Web App 2,Service 1,Service 2,External Service"
      },

    Site Specification Section

    A single cell contains a "healthUrl" and zero-N "other" links and corresponding icons.  This sample shows a healthUrl plus two additional links. One is a link to this Chrome extension's GIT repository. The other link is to the AWS console. This could have been to a deeper link like some CloudWatch dashboard.

      {
        "healthUrl": "http://headers.jsontest.com/",
        "other": [
          {
            "url": "https://chrome.google.com/webstore/search/isitup%20naveen?hl=en",
            "icon": "https://www.google.com/s2/favicons?domain=chrome.google.com"
          },
          {
            "url": "https://aws.amazon.com/console/",
            "icon": "https://www.google.com/s2/favicons?domain=aws.amazon.com"
          }
        ]
      }
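    Before wiring a healthUrl into the dashboard it can be worth confirming from the command line that the endpoint responds; the URL here is the one from the sample configuration:

      # Print just the HTTP status code returned by the health endpoint
      curl -s -o /dev/null -w "%{http_code}\n" http://headers.jsontest.com/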

    Plugin Source
    Plugin source code is located on GitHub https://github.com/NaveenGurram/IsItUp


    Created 2017 Jan 20

    Sunday, January 15, 2017

    A Chrome Plugin: Highlighting your AWS Account

    I'm working on a set of projects based in AWS. Our projects have somewhere between 7 and 9 different environments representing different levels of software maturity.  Production is the most restricted.  Development is the least restricted.  The rest fall somewhere in between.

    The company partitions the different levels of their SDLC into separate AWS accounts. Each account can hold multiple environments that share similar concerns and access controls.

    AWS account isolation makes it easy to identify and implement security rules and to vary developer, dev-ops and operations access based on the account.

    The diagram at right shows a typical 3-account set-up where some of the accounts contain multiple environments. Our company actually has over 20 accounts used for various pre-prod, prod and partner purposes.


    The AWS Console.

    The AWS console lets a user operate in a single account at one time.  Enterprise users log into the AWS console with federated user ids that can provide access to some portion of that enterprise's accounts. The console displays the AWS Account Alias and some permission and user id information at the top of the screen.  It can be tedious to read that information when switching between AWS accounts in a short period of time.

    A Chrome Extension

    One of the developers on my team wrote a Chrome plugin that highlights the AWS account information. The plugin provides basic account information for non-federated accounts like personal or standalone accounts. It optionally color codes a banner at the top of the screen based on some very simple rules related to Account (alias) names.  Try it out or check out the source code.

    Samples

    Standalone Account

    Non federated accounts are all treated the same.

    Federated Accounts

    With PROD in the AWS account name in any case / capitalization

    With QA or QC in the AWS account name in any case or capitalization


    Any other Account name string.





    Wednesday, January 4, 2017

    Protecting Data in Transit: Trust Chains

    Web traffic is protected in flight when it is transferred over TLS-encrypted links using HTTPS.  HTTPS is built on encryption algorithms that use asymmetric keys.  Asymmetric keys are managed, packaged and distributed via certificates.

    Browsers, applications and servers trust certificates and their associated encryption keys based on their trust of the issuing parties, known as Certificate Authorities (CA). Public web sites are identified by public/private certificate pairs that are purchased from one of the well known CAs. A purchased certificate contains an identity component and the server's public key, signed by the Certificate Authority.

    • Server identity is asserted in the server certificate, which is signed with the Certificate Authority's private key and verified with the CA's public key.
    • The server proves ownership of that identity using the private key that corresponds to the certificate's public key.
    • Clients use the server's public key during the TLS handshake to negotiate the session keys that actually encrypt the traffic.



    Certificates are used to validate server identity and to protect payloads so that clients can trust the source and content of the data they receive. All of this depends on the ability of certificates to securely represent their intended parties.

    Public Trust Chains

    This trust is implemented via trust chains of Root Certificate Authorities and their child Intermediate Certificate Authorities.

    Programs trust certificates signed by CAs in their CA trust list. Public CAs are well known by virtue of the fact that their root and intermediate Certificate Signing Authority certificates come pre-installed in browsers, applications and some operating systems.


    Well known Certificate Authorities charge fees for their certificates. Fees are based on the certificate expiration date, the number of Subject Alternate Names included in the certificate, any wild card status and various other factors. Companies save costs by using these expensive certificates only at the edges of their organizations where 3rd party trust is important.

    Corporate Trust Chains 

    Certificate authorities are just programs that issue certificates signed using the CA's certificate. This means that companies can issue their own certificates using an internal CA. Corporations can then issue as many certificates as they wish for internal traffic without paying fees to the well known CAs.

    Corporate-issued certificates are trusted inside the corporate infrastructure if the internal CA's public certificate is installed in the certificate authority trust stores of corporate browsers and operating systems. This is one of the reasons large corporations automatically install their corporate standard browsers.

    Browsers and programs can base trust on any number of corporate and public CAs as long as their Root Certificates are installed in the trust stores.
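    To see exactly which chain a server presents, and therefore which root certificate must be in the trust store, openssl can dump the certificate details; the host name is just an example:

      # Show the subject and issuer of the certificate a public site presents
      openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer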

    Self Signing

    Certificates can be issued by anyone who can create a Certificate Authority root cert.  Anyone can create a CA root certificate and act as their own signing authority. This tends to be of limited use because no one else will trust those certificates: they don't trust the root certificate that signed them.

    Self-signed certificates can be useful in a development environment because they can be used for on-box testing.  Developers can create a local signing certificate that is used to issue local server certificates. They then install the local signing certificate in their O/S and application trust stores. This lets the developer issue as many certificates as they want for local testing.
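    A minimal openssl sketch of that local development flow; the file names and subject values are arbitrary:

      # 1. Create a private "development CA" key and self-signed root certificate
      openssl genrsa -out dev-ca.key 2048
      openssl req -x509 -new -key dev-ca.key -days 365 -subj "/CN=Local Dev CA" -out dev-ca.crt

      # 2. Create a server key and a certificate signing request for the local server
      openssl genrsa -out localhost.key 2048
      openssl req -new -key localhost.key -subj "/CN=localhost" -out localhost.csr

      # 3. Sign the server certificate with the development CA
      openssl x509 -req -in localhost.csr -CA dev-ca.crt -CAkey dev-ca.key -CAcreateserial -days 365 -out localhost.crt

    Importing dev-ca.crt into the O/S or application trust store makes every certificate signed this way trusted locally.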


    Self-signed certificates can be useful for pure link encryption, where host validation and non-repudiation are not a concern. This is often useful in a cloud environment where all link traffic must be encrypted and where hosts are repeatedly created and destroyed, as in auto-scale environments.

    AWS, for example, uses trusted certificates on ELBs while using self signed certificates between the ASG hosts and the ELB.  The ELB is configured to trust all certificates on the load balanced servers. This means the server HTTPS certificates can be self signed by the individual servers.


    Self-signed certificates can be useful for private communication between servers in a pool.  You see this in cloud environments with clustered products that have some kind of internal communication bus.

    The same self-signing certificate can be installed on each pool server. That same certificate is used to create the back-side bus encrypted link certificates. Each pool server will trust the pseudo-self-signed connections from the other pool servers because they all have the same self-signing root cert.

    Trust chains are not required where certificates are only used for encryption and where host validation and non-repudiation are not needed. Most application platforms perform host and certificate validation automatically, so validation must be explicitly disabled if you do not want trust checks.

    Federated Trust

    <to be written later>


    Related Topics

    1. Protecting Hybrid Environments: Blog not yet written - Video
    2. Encryption Basics: Blog - Video
    3. Trust Chains: Blog Video