Today, I want to briefly discuss three common industry misconceptions that Bach et al.[^1] either hint at or point out explicitly in the book "Lessons Learned in Software Testing". These misconceptions often shape the way testing is evaluated as a business value, and the effect is mostly negative.

### The Myth Of Ignorance

The first, and perhaps the most pervasive, is the notion that testers are - and must be - by definition, ignorant of the software they are testing. As noted in lessons 22 and 23, this is often associated with traditional "black box" testing: that is, ignorance of the application's source code.

The underlying implication, which Bach et al. do not address, is that it is this view of ignorance that makes the tester seem far less valuable to the team than the developer in the minds of managers and engineers. But knowledge of the product extends far beyond the underlying code, and much of that knowledge can have a significant impact on testing approaches. The authors seem to agree with me that it is a mistake to take the narrow view that ignorance of the code is a defect in the testing role. In lesson 22, they point out that:

> We don’t object to a tester learning about how a product works. The more you learn about a product, and the more ways in which you know it, the better you will be able to test it. But if your primary focus is on the source code and tests you can derive from the source code, you will be covering ground the programmer has probably covered already...

The authors would probably not go as far as I would, however, in arguing that the tester's role on a project can in fact be just as valuable as the engineer's. Distinctions in the domains of knowledge do not necessitate a hierarchy of value. It is certainly possible that on some projects, testing may not be as necessary as other roles. But this is a circumstantial pressure on the value of the role, not a structural one.
As I've discussed in the first installment on this chapter, and as the authors of *Lessons Learned* have also argued implicitly up to this point, the tester demonstrates his value to the team precisely by bringing to the table a different skillset and a different knowledge domain than the developer.

### The Myth Of Certainty

The myth of certainty, loosely stated, is the belief that testing will grant your project the blessing of certitude against failure. It is this false belief that drives impulses like quality "gatekeeping" and release "certification" exercises. As the authors clearly warn in lesson 30:

> Beware of tests that purport to validate or certify a product in a way that goes beyond the specific tests you ran. No amount of testing provides certainty about the quality of the product.

As noted in the second installment on this chapter, this is due to the kinds of questions our tests are answering. As with any good scientific experiment, the best a test can offer is that the hypothesis was not falsified. Cumulatively, then, we can only say that the product could not be demonstrated to be defective by the suite of tests we ran. Bach et al. state it very succinctly in lesson 35:

> In the end, all you have is an impression of the product. Whatever you know about the quality of the product, it’s conjecture. No matter how well supported, you can’t be \[absolutely\] sure you’re right.

One thing the authors do not address directly in chapter two (perhaps they do later) is how uncomfortable many project teams are with having this knowledge made conscious. Uncertainty, I find, is one of the most unwelcome states of mind in most areas of our lives, and software projects are no exception. The software tester should not fall into the trap of thinking that he can somehow provide this certainty. Neither should managers or other team members fall into the trap of thinking that this devalues the role of testing.
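The falsification point can be sketched in a few lines of Python. This is a hypothetical example of my own (an `is_leap_year` function, not anything from the book): a suite of passing tests only fails to falsify the implementation; it cannot certify it.

```python
def is_leap_year(year):
    # Hypothetical implementation with a latent defect: it ignores the
    # Gregorian "divisible by 400" exception, so 2000 is misclassified.
    return year % 4 == 0 and year % 100 != 0

# A small suite of checks, every one of which passes:
assert is_leap_year(2024) is True   # common case: divisible by 4
assert is_leap_year(2023) is False  # common case: not divisible by 4
assert is_leap_year(1900) is False  # century rule: divisible by 100

# The suite has not falsified the implementation -- yet the product is
# still defective: 2000 *was* a leap year, but this function says otherwise.
assert is_leap_year(2000) is False  # wrong answer, undetected by the suite above
```

The three passing checks license only the claim that the function could not be demonstrated to be defective by the suite we ran; a single additional input overturns the impression entirely.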
To believe that uncertainty devalues the testing role betrays a fundamental misunderstanding of the role of testing within a project. Instead of letting anxiety drive brittle pursuits of concrete certainty about the product, teams should strive for an agreed-upon degree of confidence that promises to customers are being kept, taking conscious account of the potential risks. This more honest assessment of the state of the product will facilitate better decision-making and minimize the number of unpleasant surprises that arise after release.

### The Myth Of Precision

This myth is one primarily harbored in the minds of my fellow testers. Bach et al. describe it perfectly in lesson 32:

> If you expect to receive requirements on a sheaf of parchment, stamped with the seal of universal truth, find another line of work... A tester who treats project documentation (explicit specifications of the product) as the sole source of requirements is crippling his test process.

This is one way in which the myth of certainty shows up in the tester's own mindset. A tester who expects his team to be more certain about the desired state of the product than he is about its actual state is deceiving himself and treating his teammates unfairly.

The origin of this myth, it seems to me, harkens back to the days of factory testing, where teams of testers were given fixed lists of requirements and test cases by manufacturing engineers and designers, and were expected, much like the factory's assemblers and packers, to simply execute their piece-work. By contrast, the authors of *Lessons Learned* describe a far more modern ideal: a highly collaborative process of "Conference, Inference, and Reference" for gathering requirements for software testing. My own seven years of experience in the field are very much consonant with this description.
Particularly in Agile environments, where software development teams value "working software over comprehensive documentation"[^2], a good tester must be extremely persistent and flexible when attempting to discover all the implicit and explicit expectations for any given feature.

Bach et al. only hint at this, but what all of this suggests is that good testers will want to become good negotiators. In an environment where requirements are dynamically defined as part of an ongoing set of interactions between team members, the best negotiators will set the standard for how the product's requirements are set, propagated, and refined. Testers clearly have a significant role to play in that effort. And, in the end, negotiation does not give you precision. It gives you tentative conclusions and temporary compromises. A good tester will learn to cope with these conditions and, as stated in lesson 33, will learn to *"use whatever references are required to find important problems fast."*

### Taking Responsibility

All of the ideas raised in chapter 2 have led me to an inescapable conclusion. If the role of testing in software is to be rescued from the dustbin of 19th-century industrial anachronism, it is not just the minds of business leaders and engineers that must be changed. It is our own. Within each of these myths (and a few others unmentioned) runs a single, constant golden thread: testers need to step up and take responsibility for their role on the team and in the organization. Absent the will to do so, it doesn't matter how many managers we try to convince of the role's value.

The beginning of that work starts with taking responsibility for our capacity to think like *real testers*, and not simply to rely upon antique-photograph stereotypes because it's more comfortable, or safer. Taking responsibility means learning to think critically and scientifically.
It means being willing to make judgments and decisions, and being robust enough to bear the burden of critical analysis of those choices, by supporting those judgments and decisions with facts, evidence, and solid reasoning. Taking responsibility also means learning how to negotiate and compromise, and having the empathy to take our negotiating partners seriously and treat them as peers. It means not relying on the comfort and security of subordination and deference to authority.

For a tester to be valuable to any software development team, then, means being a *thinking human being* who tests, and not simply a "mechanical turk" substitute for a Turing machine that has yet to be invented.

[^1]: https://www.amazon.com/Lessons-Learned-Software-Testing-Context-Driven/dp/0471081124
[^2]: http://www.agilemanifesto.org