BDD with SpecFlow - Features Scenarios Steps and Contexts
SpecFlow is a tool that bridges the gap between business-level behavior
specifications and the technical implementation of automated tests. It is an
acceptance criteria definition and testing tool that makes it easier to
integrate Behavior Driven Specifications into software projects earlier, in a
shift-left fashion.
Business value is defined and modeled as Business Features. Those features are
built from more granular components, often called User Stories. The
Features and their User Stories contain sets of Acceptance Criteria that
represent the target state for the Feature.
The Feature is the top-level construct in SpecFlow/BDD. Gherkin Business
Features are made up of a set of Scenarios. Each Scenario
represents one or more acceptance criteria. Each Scenario is validated
as a single automated unit test or as a parameter-driven unit test with a list
of parameter sets. One or more Features and their associated Scenarios
define the criteria needed to verify that some business functionality
implements the business behavior.
Example Walkthrough
This Feature contains three Scenarios. The Feature is described in a SpecFlow
feature file. The Feature and its Scenarios are materialized into Unit
Tests via SpecFlow's code-behind generated Feature .cs file. Scenario
results are verified on every run of the Unit Test set.
Features and Scenarios
Features represent some business outcome or value stream. Scenarios act
as the acceptance criteria for the implementation of that business outcome.
Features should contain as many Scenarios as required to describe the desired
business behavior. This includes positive and negative outcomes.
Some teams break the Functional Behavior and the Non-Functional Behavior
into separate Features, but the end result is still that each Feature contains
enough Scenarios to define the behavior in enough detail to agree that the
requirements have been met when the Scenarios (BDD tests) pass.
SpecFlow implements Features via Gherkin syntax Feature Files. SpecFlow
generates code-behind .cs files that provide Unit Test scaffolding that sets up
each individual Scenario as its own Unit Test. Features contain
Scenarios. Feature Files result in Unit Test sets, one test for each Scenario
in the Feature.
A Scenario in a Feature is written something like:

Scenario: Example - Search with Bing
    Given I search the internet using site "bing"
    When I use the term "facebook"
    Then There should be at least 1 trademark holder site link "facebook.com"
Scenarios and Steps
Scenarios are made up of multiple Steps. SpecFlow and Cucumber use
Gherkin Given/When/Then syntax for this specification, which is
similar to other Arrange, Act, Assert frameworks. Developers
implement the Steps with the appropriate assertions around the acceptance
criteria. Each Given, When, or Then clause is implemented as an atomic
Step. Steps are essentially global functions, visible to any test that wishes
to use them. Steps are grouped in Step Definition Files whose
organization does not impact how Steps can be used. A Scenario (test) can
mix and match previously created Steps with new Steps created just for that
Scenario.
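As a rough sketch, the three clauses in the Bing search Scenario above could be bound to C# Step methods like the ones below. The InternetSearchSteps class name, the SearchClient helper, and its placeholder result are hypothetical stand-ins; only the Gherkin step text comes from the Scenario itself.

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical stand-in for whatever component actually performs the search.
public static class SearchClient
{
    public static IReadOnlyList<string> Search(string site, string term) =>
        new[] { $"https://www.{term}.com/" }; // placeholder result set
}

[Binding]
public class InternetSearchSteps
{
    private readonly ScenarioContext _scenarioContext;

    // SpecFlow injects the ScenarioContext for the currently running Scenario.
    public InternetSearchSteps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [Given(@"I search the internet using site ""(.*)""")]
    public void GivenISearchTheInternetUsingSite(string site)
    {
        // Arrange: remember which search site this Scenario targets.
        _scenarioContext["site"] = site;
    }

    [When(@"I use the term ""(.*)""")]
    public void WhenIUseTheTerm(string term)
    {
        // Act: run the search and stash the results for later Steps.
        var results = SearchClient.Search((string)_scenarioContext["site"], term);
        _scenarioContext["results"] = results;
    }

    [Then(@"There should be at least (\d+) trademark holder site link ""(.*)""")]
    public void ThenThereShouldBeAtLeastTrademarkHolderSiteLink(int minimum, string domain)
    {
        // Assert: the acceptance criterion captured by the Then clause.
        var results = (IReadOnlyList<string>)_scenarioContext["results"];
        Assert.That(results.Count(link => link.Contains(domain)), Is.GreaterThanOrEqualTo(minimum));
    }
}

Each attribute's regular expression captures the quoted strings and the number from the Gherkin clause and passes them to the Step method as parameters.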
The code-generated Scenario Unit Tests in the Feature file call each Scenario
Step in turn, in the order specified by the Gherkin Scenario. SpecFlow and
Cucumber implement Steps as standalone functions. Steps are stateless.
Steps operate against input parameters and context. They store state in
the context so it can be used by later Steps.
Scenario Steps and Contexts
Steps are grouped in Step Definition Files. Each step is an
independent function that is globally visible across all BDD
tests. Steps can be reused across an unlimited number of scenarios. Step
Definitions are instantiated for each individual test, making the individual
Step functions visible inside that test. If a Scenario contains
Steps from multiple Step Definition Files, then all of the relevant Step
Definition Files are instantiated to bring those Steps into scope.
Scenario Steps operate, at run time, within a Scenario Context or scope.
The scope could contain data like the results of a previous step, intermediate
values, calculated values, security credentials, configuration information, or
other data. The Steps receive or access that context via invocation parameters
or dependency-injected scope objects. SpecFlow creates the context objects
that are referenced in Step Definition File constructors and injects those same
instantiated objects across all the Step Definition Files that contain Steps
referenced in a Scenario. This means the same context is available to
Scenario Steps no matter which Step Definition File they are implemented in, as
long as they have the same data type injected via the Step Definition
constructor.
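As a hedged sketch of that injection behavior, the Bing search Steps could share a plain context class across two different Step Definition Files instead of the ScenarioContext dictionary used earlier. SearchResultsContext, SearchSetupSteps, and SearchVerificationSteps are illustrative names, not SpecFlow types.

using System.Collections.Generic;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Plain class used as shared per-Scenario state; any POCO works with SpecFlow context injection.
public class SearchResultsContext
{
    public string Site { get; set; }
    public List<string> ResultLinks { get; } = new List<string>();
}

[Binding]
public class SearchSetupSteps
{
    private readonly SearchResultsContext _context;

    // SpecFlow's container creates one SearchResultsContext per Scenario and injects it here.
    public SearchSetupSteps(SearchResultsContext context) => _context = context;

    [Given(@"I search the internet using site ""(.*)""")]
    public void GivenISearchTheInternetUsingSite(string site) => _context.Site = site;
}

[Binding]
public class SearchVerificationSteps
{
    private readonly SearchResultsContext _context;

    // A different Step Definition File, but the same Scenario receives the same injected instance.
    public SearchVerificationSteps(SearchResultsContext context) => _context = context;

    [Then(@"There should be at least (\d+) trademark holder site link ""(.*)""")]
    public void ThenThereShouldBeAtLeastTrademarkHolderSiteLink(int minimum, string domain)
    {
        // Earlier Steps (in other files) are assumed to have filled ResultLinks.
        var matches = _context.ResultLinks.FindAll(link => link.Contains(domain));
        Assert.That(matches.Count, Is.GreaterThanOrEqualTo(minimum));
    }
}

Because both constructors ask for the same data type, SpecFlow resolves a single SearchResultsContext instance for the Scenario and hands it to both classes. That shared instance is what lets state flow between Steps that live in different files.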
Context design and standardization is an ongoing concern and a source of
refactoring work as BDD testing projects grow and mature.
Step Visibility
Step Definitions are global by default. All of the Step Definitions in
all of the Step Definition Files in a project can be used in any of the
Scenarios. There are design considerations here, because not all Steps
may accept or expect context state in the same form or location. Step
re-use considerations can drive refactoring efforts across BDD testing
projects.
In practice, it is often better to treat Steps as global but normally
used in pools of related Steps for certain types of Scenario interactions. In
that case, a Step Definition File contains a group of related Steps rather
than just the Steps for one specific Scenario.
Step Visibility and reusability design is an ongoing concern in large BDD
projects.
Execution
The IDE or CI/CD process runs the tests using the standard test runner. The
code-behind .cs file for a Feature is recognized as a Unit Test file by the
runner (NUnit/xUnit) that you are using. Each Scenario is modeled as its
own test. The standard test runner invokes the unit tests via the code-behind.
The Feature code-behind file loads all the Step Definition Files, bringing them
into scope, and then runs each Scenario in turn. Each Scenario is actually
a unit test from the runner's point of view. Each Scenario is just
made up of calls to the appropriate Given/When/Then Steps. Test failures
bubble up in the usual fashion for that test library.
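The file SpecFlow actually generates is longer and varies by SpecFlow version, but a trimmed-down sketch of the shape of a code-behind Scenario test (NUnit flavor, following the Bing search example) looks roughly like this; the class and method names are illustrative and the setup and tear-down plumbing is omitted.

using TechTalk.SpecFlow;

// Trimmed-down sketch of a generated code-behind class; the real generated file
// contains additional feature setup, scenario initialization, and tracing plumbing.
[NUnit.Framework.TestFixture]
public partial class InternetSearchFeature
{
    private ITestRunner testRunner;

    // Feature and Scenario setup (creating testRunner, OnFeatureStart, OnScenarioStart, ...) omitted.

    [NUnit.Framework.Test]
    public void ExampleSearchWithBing()
    {
        // Each Gherkin clause becomes a call that routes to the matching Step Definition.
        testRunner.Given("I search the internet using site \"bing\"", null, null, "Given ");
        testRunner.When("I use the term \"facebook\"", null, null, "When ");
        testRunner.Then("There should be at least 1 trademark holder site link \"facebook.com\"", null, null, "Then ");

        // Step failures are collected here and surfaced as a normal unit test failure.
        testRunner.CollectScenarioErrors();
    }
}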