Showing posts from 2016

Protecting data in-transit. Encryption Basics

Web traffic is protected in flight when it is transferred over TLS-encrypted links using the HTTPS protocol. HTTPS is a protocol for payload encryption based on asymmetric-key encryption algorithms. Asymmetric keys are managed, packaged and distributed via certificates.

Encryption Basics

Asymmetric encryption relies on a key pair where one key can decrypt any data that is encrypted by the other. Data encrypted with Key-A can be decrypted only with Key-B. Key-A cannot be used to decrypt data encrypted with Key-A. Key-B cannot be derived from knowledge of Key-A. Internet encryption relies on this asymmetry, and on keeping one key secret, to create secure links over a public and untrusted Internet. A server or party can publish a public key that other parties can use to encrypt their data. The server can then decrypt the message using the corresponding private key. Encrypted messages are secure as long as the server keeps the private key secret. Private keys m
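The key-pair behavior described above can be sketched with textbook RSA. This is a toy illustration using the classic small-prime example, not anything resembling real cryptography; real systems use large keys and a vetted library.

```python
# Toy RSA demonstration of asymmetric encryption. Parameters are the
# standard textbook example (p=61, q=53); illustrative only -- never
# hand-roll RSA for real use.

p, q = 61, 53
n = p * q    # modulus 3233, shared by both keys
e = 17       # public exponent  ("Key-A", published)
d = 2753     # private exponent ("Key-B", kept secret); e*d = 1 mod lcm(p-1, q-1)

def encrypt(message, public=(e, n)):
    exponent, modulus = public
    return pow(message, exponent, modulus)

def decrypt(cipher, private=(d, n)):
    exponent, modulus = private
    return pow(cipher, exponent, modulus)

cipher = encrypt(65)          # anyone with the public key can do this
plain = decrypt(cipher)       # only the private-key holder can do this
print(cipher, plain)          # 2790 65
```

Note that `encrypt` and `decrypt` are the same modular-exponentiation operation; only the keys differ, which is exactly the asymmetry the text describes.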

Deploying DotNet Core in Azure with GIT and Kudu

I started this project by building and deploying the ASP.NET Core example application, first on my local box, then in Microsoft Azure via Web Deploy, then in Microsoft Azure via local Azure GIT integration, and finally via Visual Studio Team Services (VSTS) SCM integration.

Deployment Types

Local deployment into a local IIS is pretty straightforward. We won't talk about it here. Remote web deployments are the legacy way of pushing web applications to the (Azure) cloud; they work with an IDE, CI or the command line. Compiled and static application artifacts are packaged and then sent to the remote application servers via FTP. The servers unpack the archive and deploy it. Remote SCM deployments are a relatively new and interesting way to support automated deployments, and to support multiple branches with little work. The IDE or build system pushes source code to a monitored repository. Azure (Kudu) monitors the source code repository, runs a build and deploys the resulting artifacts to

Visual Studio Team Services Git your build face on

This page describes the configuration settings required to enable GIT integration when building code in Visual Studio Team Services. It will show you how to:

- Enable CI builds when a specific GIT repository and branch are updated
- Provide the CI build with the permissions required to make changes to a GIT repository
- Provide the CI build with the credentials required to make changes to a GIT repository

This diagram shows how GIT and Visual Studio Team Services (VSTS) might be used to implement a CI build, triggered on check-in, that merges code into another branch and deploys it. The actual deployment commands are out of scope for this document. The following changes must be made to the Repositories configuration at the Project (Team) level and on the affected individual build definitions. We first show the project-level configuration and then the Build Definition configuration.

Let VSTS Builds Update the GIT Repository

Some builds may need to update a GIT repository upon build success. This

Classifying your return codes

Document the meaning, ownership and handling behavior of your service return codes. Do not assume your partner teams and calling systems have any expectations or understanding beyond success and not-success. Ask the other teams you call for their service return code documentation. Force them to document their expectations.

Proposed Return Code Category Types

Create response categories. Determine the owner and expected behavior (possibilities) for each category for the services you build. The following is a simple proposed set of categories:

HTTP Code Category | Remediation Owner | Remediation
Success            | Everyone          | Application or n/a
Business Error     | Business          | Manual process or Application rule
Technical Error    | IT / Technology   | Manual
Embedded Redirect  | IT / Technology   | Application Library
NACK / Retry       | IT / Technology   | Library and/or delayed retry mechanism

Asynchronous Messaging

You can create the same types of categories for message driven systems. They can post return codes to re
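The category table above can be expressed as a small classifier. The category names and owners come from the table; the mapping of specific HTTP status ranges to categories is my assumption for illustration, and each team should document its own mapping.

```python
# Hypothetical mapping of HTTP status codes to the proposed return-code
# categories. The status ranges chosen here are illustrative assumptions,
# not a standard.

def classify(status):
    """Return (category, remediation_owner) for an HTTP status code."""
    if 200 <= status < 300:
        return ("Success", "Everyone")
    if status in (400, 409, 422):      # validation / business-rule failures
        return ("Business Error", "Business")
    if 300 <= status < 400:
        return ("Embedded Redirect", "IT / Technology")
    if status in (429, 503):           # back-pressure: retry later
        return ("NACK / Retry", "IT / Technology")
    return ("Technical Error", "IT / Technology")

print(classify(200))   # ('Success', 'Everyone')
print(classify(503))   # ('NACK / Retry', 'IT / Technology')
```

A shared function like this, published alongside the documentation, keeps caller behavior consistent across teams.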

Success and Failure in the world of Service Calls and Messages

Victory has 100 fathers. No-one wants to recognize failure. (The Italian Job)

Developers and designers spend a fair amount of time creating API, streaming and service contracts. The primary focus tends to be on happy-path API invocation, with a lot of discussion about the parameters and the data that is returned. Teams seem to spend little time on the multitude of failure paths, often avoiding any type of failure analysis and planning. Plan ahead: learn what failure or partial failure looks like and how you will handle it.

At the API level: Some groups standardize on void methods that throw Exceptions to declare errors. The list of possible exceptions and their causes is not documented. Exceptions contain messages but no structured, machine-readable data.

At the REST web service level: Some groups standardize on service return values other than 200 for errors. They do not document all of the HTTP return codes they can generate or their meanings. API consumers are lef
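One way to avoid the "message but no machine-readable data" problem described above is an exception type that carries structured fields callers can branch on. The field and function names below are hypothetical, for illustration only.

```python
# A hypothetical structured service error: instead of a bare message
# string, the exception carries documented, machine-readable fields.

class ServiceError(Exception):
    def __init__(self, code, category, retryable, message):
        super().__init__(message)
        self.code = code            # stable, documented error code
        self.category = category    # e.g. "business" or "technical"
        self.retryable = retryable  # whether the caller may safely retry

def charge_account(balance, amount):
    """Illustrative operation that raises a structured business error."""
    if amount > balance:
        raise ServiceError("INSUFFICIENT_FUNDS", "business", False,
                           "balance %d is less than charge %d" % (balance, amount))
    return balance - amount

try:
    charge_account(50, 100)
except ServiceError as err:
    # the caller can act on fields, not parse the message text
    print(err.code, err.category, err.retryable)
```

The human-readable message is still there for logs, but automated callers never have to parse it.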

AWS EBS storage balancing size versus throughput

One of my teams is rolling out a new application in AWS on EC2 instances where the operating system runs on an EBS drive. We selected one of the M3 machines that came with SSDs. The application makes moderate use of disk I/O. Our benchmarks were pretty disappointing. It turns out we really didn't understand what kind of I/O we had requested and where we had actually put our data.

The Root Drive

The root drive on an EC2 instance can be SSD or magnetic, based on the type of machine selected. All additional mounted/persistent disk drives on that machine will probably be of the same type. It is an SSD, but it is a network drive. EBS disk IOPS and MB/s are provisioned exactly as described in the EC2 documentation. The most common GP2 SSDs have a burst IOPS limit and a sustained IOPS limit. They also have a maximum MB/s transfer rate. Both the sustained IOPS limit and the maximum transfer rate are affected by the size of the provisioned disk. Larger disks can sustain
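The size-to-IOPS relationship can be sketched numerically. The figures below (3 IOPS per provisioned GiB, a 100 IOPS floor, a 3,000 IOPS burst, and a 10,000 IOPS baseline cap) are the gp2 numbers as published around the time of writing; check the current AWS EBS documentation, since these limits change over time.

```python
# Rough gp2 baseline/burst IOPS model, using figures published for gp2
# volumes circa 2016 (3 IOPS/GiB, min 100, burst 3000, baseline cap 10000).
# These are assumptions from AWS documentation of that era, not guarantees.

def gp2_baseline_iops(size_gib):
    return min(10000, max(100, 3 * size_gib))

def gp2_burst_iops(size_gib):
    # smaller volumes can burst to 3000 IOPS; once the baseline exceeds
    # 3000 (around 1000 GiB), the baseline is the effective limit
    return max(3000, gp2_baseline_iops(size_gib))

for size in (30, 100, 500, 1000, 4000):
    print(size, "GiB:", gp2_baseline_iops(size), "baseline,",
          gp2_burst_iops(size), "burst")
```

The takeaway matches our experience: a small root volume buys you a small sustained IOPS budget, no matter how fast the underlying SSD hardware is.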

An Environment for Every Season

Video Presentation

This is really talking about the different types of software environments and how they are used during the various phases of software development, integration and support. Environments are the touch points between your system and other teams operating on their own cycles and timelines. They are also control points where control is incrementally applied, or where access is restricted based on the sensitivity of the data in that environment and its distance from production or development.

System Complexity

The number of system environments (sandboxes) required depends on your system complexity, your tolerance for defects, your level of Ops and DevOps automation, and the needs of the other systems that yours integrates with. This diagram shows a couple of application components that integrate with 4 different partner teams and 3 data stores. Each partner team has its own release cycle and testing needs. We aim for loose testing coupling but that is often impossible either

The Cloud is an Opportunity

"Excellence is a continuous process and not an accident." A hybrid cloud is just an offsite data center if you migrate your applications and processes as is.

This Topic on Video

Cloud as an Opportunity

Life in the Cloud Should Be Different:

- Opportunity to bake in policies and practices
- Full automation is possible and required to feed continuous processes
- Continual building and destruction of infrastructure is desirable over stale configurations
- Dynamic and on-demand capacity is available and should be leveraged
- It is easy to isolate teams, applications and partner firms using built-in tools
- Resiliency must be part of the design and not an afterthought

A cloud migration is an opportunity to bake in policies and practices that were impossible in your previous environments. It is an opportunity to leverage cloud-vendor-provided security, automation and pre-built services in a way that increases your team's capabilities. The cloud lets you automate your i

Thinking Putty in the Cube Farm: free range or caged

Various companies like Think Geek and Crazy Aaron's sell palm-sized blobs of "Thinking Putty" as a thinking aid, as a form of stress relief and as a nervous energy sink. Once released into an office, you will find people subconsciously picking up their putty to pull, knead, squeeze and fold it while working on projects or while in discussion. Putty has some odd properties. It can be squeezed and shaped. It can also bounce like a rubber ball. It is solid but acts sort of like a very thick liquid. Sculptures made out of the putty start slumping immediately and end up as a pool of putty within hours. Putty can be pressed and worked but can also be torn as if it were a solid. Putty comes in fun colors and effects. It can be plain, UV sensitive, glow-in-the-dark, magnetic or heat sensitive. The can at the right is heat sensitive, changing color from orange to yellow when warmed by friction, body heat or some other heat source.

Caged or Free Range

I pe

Received and sent messages in a single mailbox with MS Outlook for OSX

Microsoft Outlook for the Mac and PC behave differently when showing conversations in the Inbox. The PC shows received and sent messages. The Mac shows only the received messages. There is no default way to show a threaded conversation in Mac Office 2016. Microsoft Outlook for the Mac is integrated with OS/X Spotlight search, so AppleScript and Spotlight can be used to create Outlook Smart Folders. Smart Folders are more like views into mailboxes than actual mailboxes. They are virtual folders created from the results of a search. This blog leverages Outlook's raw search capabilities that come from OS/X integration. You can find more information about this integration on the Microsoft Answers web site. Portions of this blog came from this excellent blog posting.

Identify Mailboxes to be Included in the Smart Folder

Our conversation Smart Folder is made up of the contents of the Inbox and Sent mailboxes. We first need to identify the Microsoft Outlook

Almost PaaS Document Parsing with Tika and AWS Elastic Beanstalk

The Apache Tika project provides a library capable of parsing and extracting data and metadata from over 1000 file types. Tika is available as a single jar file that can be included inside applications, or as a deployable jar file that runs Tika as a standalone service. This blog describes deploying the Tika jar as an auto-scale service in Amazon AWS Elastic Beanstalk. I selected Elastic Beanstalk because it supports jar-based deployments without any real infrastructure configuration. Elastic Beanstalk auto-scale should take care of scaling up and down for the number of requests you get. Tika parses documents and extracts their text completely in memory. Tika was deployed for this blog using EC2 t2.micro instances available in the AWS free tier. t2.micro VMs have 1 GB of memory, which means you are restricted in document complexity and size. You would size your instances appropriately for your largest documents.

Preconditions

An AWS account. AWS access id and secret key.
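Once deployed, the Tika server is called over plain HTTP: its text-extraction endpoint accepts a PUT of the raw document bytes to `/tika` and returns extracted text when the request carries an `Accept: text/plain` header. The sketch below only builds the request; the host name is a placeholder for your Elastic Beanstalk environment URL, and sending is left commented out so the example is self-contained.

```python
# Sketch of calling a deployed Tika server's /tika endpoint.
# "my-tika-env.example.com" is a placeholder host, not a real endpoint.

import urllib.request

def build_tika_request(host, document_bytes, content_type):
    req = urllib.request.Request(
        "http://%s/tika" % host,
        data=document_bytes,      # raw document body, held in memory
        method="PUT",             # Tika server extracts text on PUT /tika
    )
    req.add_header("Accept", "text/plain")
    req.add_header("Content-Type", content_type)
    return req
    # to actually send:
    # text = urllib.request.urlopen(req).read().decode("utf-8")

req = build_tika_request("my-tika-env.example.com",
                         b"%PDF-1.4 ...", "application/pdf")
print(req.get_method(), req.full_url)
```

Because the whole document travels in the request body and Tika works in memory, this is also where the t2.micro memory limit bites: the instance must hold both the upload and the parse.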

Slice Splunk simpler and faster with better metadata

Splunk is a powerful event and log indexing and search tool that lets you analyze large amounts of data. Event and log streams can be fed to the Splunk engine where they are scanned and indexed. Splunk supports full text search plus highly optimized searches against metadata and extracted data fields. Extracted fields are outside the scope of this missive. Each log/event record consists of the log/event data itself and information about the log/event known as metadata. For example, Splunk knows the originating host for each log/event. Queries can efficiently filter by full or partial host names without having to specifically put the host name in every log message.

Message Counts with Metadata Wildcards

One of the power features of metadata is that Splunk will provide a list of all metadata values, and the number of matching messages, as part of the result of any query. A Splunk query returns matching log/event records and the number of records in each bucket like #records
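The host-metadata filtering described above can be sketched conceptually. This is not Splunk's query language; `fnmatch` merely stands in for Splunk's wildcard matching (e.g. `host=web-*`), and the sample events are invented for illustration.

```python
# Conceptual sketch of metadata filtering: select events by a wildcard
# on the host metadata field, without the host name appearing anywhere
# in the message body itself. Sample events are made up.

import fnmatch
from collections import Counter

events = [
    {"host": "web-01", "message": "GET /index 200"},
    {"host": "web-02", "message": "GET /login 500"},
    {"host": "db-01",  "message": "checkpoint complete"},
]

# filter on metadata only, like host=web-* in a Splunk search
matching = [e for e in events if fnmatch.fnmatch(e["host"], "web-*")]

# per-value counts, like the metadata value counts Splunk reports
counts = Counter(e["host"] for e in matching)
print(len(matching), dict(counts))   # 2 {'web-01': 1, 'web-02': 1}
```

The point of the sketch: because the host is metadata, the filter works even though no message text mentions a host, which is what makes metadata queries cheap and reliable.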