Chapter three of *Lessons Learned in Software Testing*, by Cem Kaner, James Bach, and Bret Pettichord, begins by pulling us down from the cloud of philosophical abstraction we've been revelling in, to talk about actual testing practices. The authors take us from the ideal to the concrete by first providing a classification system of their own making, then walking us through nearly every known test technique, explaining how each fits into the system.

#### Testers Are Experiment Designers

While this chapter may at first appear to be merely a mundane catalogue of testing activities, what Kaner et al. are really offering us (in keeping with the metaphor of science) is an intellectual toolbox from which to begin our own *[Experimental Design](http://www.sciencebuddies.org/science-fair-projects/top_research-project_experimental-design.shtml)* in software testing. Through the descriptions offered of each testing technique, the authors are conditioning us to think very carefully and systematically about what we actually want to *do* to test a software product, and why we would want to do it.

One approach to systematizing our testing, taken by the authors in chapter three, is the "Five-Fold Testing System". Given this framework as a basis, what we are really being tasked with is *designing testing experiments* that answer questions about the product under test, or attempt to falsify assertions about that product. The Five-Fold System is not a direct analog to the techniques and categories employed in the design of actual scientific experiments. Yet, if we look at the system's five aspects specifically, we can see that they share many similarities with a scientific experiment:

> * Testers [People]: Who does the testing. For example, user testing is focused on testing by members of your target market, people who would normally use the product. This might be understood as test subject selection. This is where a social scientist would be considering problems like demographics, sample size, environmental conditions, and so forth.
> * Coverage: What gets tested. For example, in function testing, you test every function. In the design phase of an experiment, a scientist might consider this the "scope" of the experiment. If I'm working on the chemical effects of Cannabis on the body, do I limit my experiment to only neurochemical effects, or do I include other physiological factors as well?
> * Potential problems: Why you're testing (what risk you're testing for). For example, testing for extreme value errors. This third category might be thought of as the hypotheses themselves. What questions are being asked? What assertions are we trying to prove, or disprove?
> * Activities: How you test. For example: exploratory testing. This category is about the "method" of the experiment. To borrow from the social sciences again, this might be about whether I'm going to rely entirely on self-reporting surveys, or entirely on neurological data, or a mix of both, or something else entirely.
> * Evaluation: How to tell whether the test passed or failed. For example, comparison to a known good result. If you've read any published scientific papers, you'll recognize this as the analysis and conclusion of a scientific experiment. What results did we get? What can we reasonably say about those results? What are the implications? What needs further study? One important feature of evaluation, in both science and testing, is the degree of reproducibility of your results. I'll go into this topic more in the future.
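To make those five dimensions a little more concrete, here is a minimal sketch of my own (it is not from the book): a single automated boundary-value check written in Python, with comments mapping each part of it onto the Five-Fold System. The function `parse_quantity` is a hypothetical stand-in for whatever part of the product your test actually covers.

```python
def parse_quantity(text: str) -> int:
    """Hypothetical function under test: parse an order quantity in [1, 999]."""
    value = int(text)
    if not 1 <= value <= 999:
        raise ValueError(f"quantity out of range: {value}")
    return value


def test_extreme_values() -> None:
    # Testers (who):            a tester writing an automated check
    # Coverage (what):          the quantity-parsing function
    # Potential problems (why): extreme-value errors at the boundaries
    # Activities (how):         scripted boundary-value testing
    # Evaluation (pass/fail):   comparison to known good results
    known_good = {"1": 1, "999": 999}  # boundary inputs and expected outputs
    for text, expected in known_good.items():
        assert parse_quantity(text) == expected

    for text in ("0", "1000", "-1"):  # inputs just outside the valid range
        try:
            parse_quantity(text)
            raise AssertionError(f"expected ValueError for {text!r}")
        except ValueError:
            pass  # the extreme value was rejected, as hypothesized


if __name__ == "__main__":
    test_extreme_values()
    print("All extreme-value checks passed.")
```

Change any one dimension (hand the same check to a member of your target market, widen the coverage to every parsing function, swap the known-good comparison for a different oracle) and you have designed a different experiment.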
Ultimately, the idea of this framework is to provide testers with a tool for "making better choices" about the testing techniques applied to various software testing problems:

> Despite the ambiguities (and, to some degree, because of them), we find this classification system useful as an idea generator. By keeping all five dimensions in mind as you test, you might make better choices of combinations. Better choices make for better test plans, better test plans make for better testing, and better testing makes for better software.

Kaner et al. pack this chapter thick with specific details and examples, and focus intently on all the ways one could scrutinize a piece of software or its features. And although it is admittedly not a comprehensive "how-to" guide, it does provide a solid path of further study for any motivated tester who reads the book.

I can't cover each of the test design techniques here in detail; there is so much material that it would fill at least one thick volume on its own (see Cem Kaner for more on this[^1]). Yet each technique deserves discussion on its own merits. So, in future posts, I'll be discussing each one outside the context of this book review, providing examples and context from my own testing experiences and those of my colleagues. Stay tuned for that!

[^1]: http://kaner.com/?p=100