
DRAFT

Introduction

In addition to acting as a source of documentation close to MOO's application code, our automation is itself a software product. Although we are testers, when writing test automation we should nevertheless also see ourselves as software developers of a product that tests products. As software professionals (both testers and developers), it behooves us to keep in mind the essential principles and practices of quality software development.

What I will cover here is a mixture of guidelines for good BDD practice and principles commonly taught in programming classes and employed in the practice of unit testing. Interestingly, the fundamentals of good unit testing are almost entirely applicable to creating high-value feature scenarios. If we can reach a common understanding of these principles, our automation products will perform more consistently and provide more valuable results.

Know what you're testing - It seems cliché to say so, but knowing what you want to accomplish (and why) with any given test is the essential starting point for writing effective tests. There are a number of ways to apply this principle when writing Gherkin specs:

Clearly define your feature: We need to discuss this more with each of the dev teams (especially the product owners) to discover what our specific features are, but in general, a “feature” should be understood as some finite set of functionality that enables a user to accomplish a finite set of goals related to that functionality. Given this rough definition, it should be clear that an entire product is not a feature; it is a collection of features.

The goal that the user wants to accomplish is best expressed as a user story. The story should help you discover who your user is, what they want, and how they think they can get it from the software. From this, you should be able to model some behaviours.
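
For example, the feature narrative for the text editor used later in this post might capture that story along these lines (a sketch only; the wording is illustrative, not taken from a real feature file):

Feature: Saving files in the text editor
    As a writer using the editor
    I want to save the files I create and edit
    So that my work is not lost when I close the editor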

Collaborate on scenarios first, then steps (i.e. you are writing a product design spec first, and tests only secondarily):

  • Limit yourself to scenario titles in planning meetings: Product managers should be collaborating with you directly on which feature behaviours are the most important. Treat these collaborations as brainstorming sessions, rather than code reviews. Once you have a clear set of situations, you can go back and flesh out the steps in each situation (see the sketch after this list).

  • Limit your scenarios to the minimum necessary to demonstrate that we're delivering on our promises: The scenarios in a feature file are promises to users. We are promising that the user will be able to accomplish some specific goal, and that the software will behave in a certain way when she uses it to accomplish that goal. We should not be writing scenarios for every conceivable way in which the product might behave under any possible condition. That is what exploratory testing is for. I will have more to say on this later.

  • In your scenario title, remember who is acting: To clearly understand the conditions, actions, and outcomes of a given test, it is helpful to keep in mind who the “I” in your test is. What does he or she want? Why do they want it? What do they do to get it? Focusing on context in the scenario title will narrow your focus and make writing your steps much easier, by allowing you to put yourself in the user's shoes.
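
As a sketch of what this looks like in practice, the outcome of a planning session for the text editor feature used later in this post might be nothing more than a list of titles, with the steps fleshed out afterwards (the titles are illustrative):

Feature: Saving files in the text editor

Scenario: User creates a new file
Scenario: User edits an existing file
Scenario: User saves an edited file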

Understand that your scenarios are simple “finite state machines”: (original source) - As testers (or script coders), we feel a powerful impulse to write scenarios as step-by-step instructions, as if we were providing imperative commands to the computer, or documenting reproduction steps for a bug. But to use Gherkin in this way is to misunderstand its purpose, both as a design language and as a testing tool.

  • Scenarios can be seen as state-transition tables that only have one row in each table: The “Given, When, Then” syntax is really meant to express the three arguments in a state transition: Condition, Event, Result (or “state 1”, “transition event”, “state 2”). Under this model, scenarios are not imperative in any way. They are descriptive: “Given initial condition A, When transition event X occurs, Then resulting condition B is produced” (see the sketch after this list). This approach will force you to keep scenarios terse and well defined, and will improve readability and maintainability over the long run.

  • Scenarios can also be understood as “unit tests”: Seeing features as state machines affords us another benefit. You can think of each scenario (or state transition) as though it is testing a “unit of product behaviour”, and all of the essential rules of unit testing will apply: independence - scenarios should not be reliant upon the execution of any other scenarios; isolation - scenarios should be self-sufficient and avoid polluting each other; determinism - scenarios, like lawyers, should already know the answers to the questions they're asking; single-focus - scenarios should zero in on one specific behaviour of the product.
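
To make the state-transition reading concrete, here is a save scenario from the text editor example below, annotated with the three arguments of the transition (the comments are mine, purely for illustration):

Scenario: User saves an edited file
    # state 1: the initial condition
    Given a file with unsaved edits is open in the edit window
    # the transition event
    When I request a save
    # state 2: the resulting condition
    Then the edited file is saved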

Self-sufficiency and discreteness:

Scenarios should be discrete and independent, meaning each scenario should be able to run all by itself, without relying upon any other scenario. It should test and report on a finite, single-focused circumstance and a single path through the application to the user's goal. The best way to demonstrate this is to show it.

Let's imagine a text editor application. We're defining scenarios for a user who wants to save a newly created file they've just edited.

Non-Discrete Scenarios:

Scenario: User creates a new file
    Given I am at the editor window
    When I click on "new"
    Then a new file appears in the edit window

Scenario: User edits the new file
    Given the new file is open in the edit window
    When I type some text
    Then the text appears in the file edit space

Scenario: User saves edited file
    Given the file has been edited
    When I click the "save" button
    Then the edited file is saved

These scenarios lack discreteness and self-sufficiency in a number of ways:

  • The scenarios are sequentially dependent upon each other - they are obviously meant to be run in sequential order, making the later scenarios vulnerable to failures accumulated in the earlier ones.
  • The scenarios share the same fixture data - the edited text file is the same throughout the sequence.
  • The scenarios will mask multiple failures - if a problem occurs in an early scenario, any additional bugs in later scenarios will be invisible until the first is fixed.

Discrete Scenarios:

Background:
    Given two existing files available for editing

Scenario: User creates a new file
    Given I am at the editor window
    When I request a new file
    Then a new file appears in the edit window

Scenario: User edits an existing file
    Given the first existing file is open in the edit window
    When I type some text
    Then the text appears in the file edit space

Scenario: User saves an edited file
    Given the second existing file is open with unsaved edits
    When I request a save
    Then the edited file is saved
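
Each scenario now establishes its own starting state from the shared background fixtures, rather than from the output of a previous scenario, so it can run on its own, in any order, and a failure in one will not hide bugs in the others.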

Parsimoniousness:

  • Simplify your steps: A sort of "Ockham's Razor" of test writing should be applied to scenario steps - what is the simplest and least verbose way you can state the situation, while still accomplishing your goal? The best approach for achieving this is the state-machine analogy. Scenarios written as state transitions help to distill the test down to its essential components, and will train you to avoid thinking about them as "steps to be executed". This will help you avoid the brittleness of overspecification, and take the focus off implementation details and place it on test results (see the sketch after this list).

  • Simplify your step definitions: The same razor can be applied to your step definitions as well. What is the minimum necessary to create a reliable test? Can we execute this step "under the covers"? The principle of staying close to the code applies here. As Matt Wynne put it, just because you're writing Cucumber doesn't mean you must open a browser. But always be mindful of what you're testing. If the goal is to exercise some piece of the UI as part of the user's journey, then the browser becomes a necessary component of the test.
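
To sketch the razor at work on the text editor example, compare an overspecified, imperative version of the save scenario with a distilled, declarative one (both are illustrative):

Scenario: User saves an edited file (imperative, overspecified)
    Given I am at the editor window
    When I click on "new"
    And I click into the edit space
    And I type "hello"
    And I move the mouse to the toolbar
    And I click the "save" button
    Then the edited file is saved

Scenario: User saves an edited file (declarative)
    Given a file with unsaved edits is open in the edit window
    When I request a save
    Then the edited file is saved

The first version bakes UI mechanics into the specification, so it breaks whenever the implementation changes; the second states only the condition, the event, and the result, leaving the mechanics to the step definitions.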