2021-04-04 13:26:38 +00:00
# Advocacy vs Observation
### Scientist, Storyteller, or Spokesman?
Chapter four of Kaner, Bach, and Pettichord's _Lessons Learned In Software Testing_ ("*Bug Advocacy*") was quite a difficult read for me. Not because it's any more abstruse or intellectually dense than the first three chapters, but because it's so conflicted.
The question I ask in my subtitle is an interesting one, to me. In some ways, a tester is all three at once. Ultimately, there's no single right answer to this question. Some great storytellers are also respectable scientists, and some excellent scientists are amazing storytellers. But the more specific question here is how we should think of ourselves when we are creating and stewarding our bug reports within a team. Bach, Kaner, and Pettichord offer us a very mixed answer to that question.
With the lessons provided in this chapter, the authors paint two significantly different - and deeply contradictory - portraits of the tester. On the one hand, he is a disciplined, objective, and thorough *reporter*, who steels himself against the urge to exaggerate, providing only the cold hard facts necessary in order for the appropriate authorities to make rational decisions about how to respond to his reports:
> You are an information service... Your responsibility is to report bugs accurately, and in a way that allows the reader to understand the full impact of the problem... If you make the bugs you report seem more serious than they really are, you'll lose influence... Your job is to report problems, not identify root causes... keep your tone of voice neutral... Don't insist that every bug be fixed; pick your battles...
On the other hand, he is an *advocate*, emotionally invested in (and politically motivated by) the outcome of all his bugs. He is willing to exploit office power relationships in order to end-run his colleagues in an effort to achieve a preferred objective with regard to those reported bugs:
> Any bug report that you write is an advocacy document that calls for the repair of the bug... your bug report is a sales tool; it's designed to convince people... you can take a relatively minor-looking bug and discover more severe consequences by doing follow-up testing... To get a bug fixed, you have to convince the Change Control Board to approve the fix... if you think it might be difficult to convince the programmers to fix a bug, but you want it fixed, consider who else in the company will benefit if this bug is fixed...
Which portrait is accurate? Which is preferable? We don't really get a good sense of this from the lessons provided in this chapter. Actually, I'm not sure a universal principle can be extracted from these lessons. The reality is that sometimes you have to be a reporter and sometimes you have to be an advocate, and knowing which to be at any given time requires the wisdom of experience. I just wish Kaner, Bach, and Pettichord had offered a bit more of their own, in this regard.
#### Stick To The Truth
In my own experience, I have found that taking the objective approach is far more productive than trying to be an advocate. In keeping with the view I've held in my reviews of previous chapters, I think testers need to see themselves more as research scientists than as science journalists.
Our job is to design and execute experiments that provide us with demonstrable knowledge about the test subject, and then to report that knowledge as thoroughly and accurately as possible. When we vary from this, inevitably, we drift into the realm of confirmation bias, self-fulfilling prophecy, and tunnel vision. No longer are we simply reporting the observed effects of caffeine on the biochemistry of the body, we are *demanding that somebody do something right now* about the dangers of coffee drinking.
The minute you lose your objectivity as a tester, you become someone with an agenda. Someone who needs to be "handled", resisted, avoided, or at best suspected of partiality. Bach et al. were careful to point this out in lessons 65, 66, and 86, warning us not to use bug statistics as performance measurement tools, and to avoid emotionally charged language in reports. But they didn't seem to notice the same problem when suggesting, in lesson 64, that we use stakeholder authority to pressure programmers into doing work they would not otherwise do. This approach, in my view, is just as toxic as the practice, warned against in lessons 72, 98, and 99, of letting fallow or ugly bugs disappear into the system.
Staying dispassionate gives you an authority you would not otherwise have. Even our authors recognized this when, in lesson 84, they stated:
> Your credibility is fundamental to your influence. If you make the bugs you report seem more serious than they really are, you'll lose influence.
#### The Tester's New Clothes
In my view, the most valuable lessons of Chapter 4 are lessons that the authors could not have penned explicitly at the time this book was written, but which, to their credit, they did hint at throughout the chapter. They are lessons the authors teach implicitly (perhaps by accident) to those of us who enjoy the vantage point of a retrospective future.
Software development as an organizational activity, and testing as a discipline within that activity, has undergone substantial upheaval since the authors penned this book in January of 2002. The processes and tools used to bring new technologies and applications to market now are almost unrecognizable, compared to the processes and tools used in the very early days of the internet -- most of which had been borrowed from the legacy years of the '80s and '90s.
In 2002, "Agile Developers" were some fringe splinter sect of renegade XP programmers, who themselves were rare and defiant unicorns in a world full of heirarchy, bureaucratic structure, and physical paperwork.
It is within this context that we get the first implicit lesson, in the form of lessons 91, 92, and 95 (destined to become an industry standard 10 years later):
> Meet the programmers who will read your reports... As soon as they find a bug, some testers walk over to the programmer who covers that area and describe it or show it off... the tester can learn from the programmer, and the programmer has access to the system... let him talk with you when he's ready... if a bug fix fails repeatedly... take it directly to the programmer.
In the modern world of small, nimble, and highly focused development teams (ones dominated at least nominally by informal verbal commitments to Agile principles), testers sit not only on the same project team as, but usually in the same space with, developers, product managers, and designers. Short feedback loops between commits and test reports are not only encouraged, they are essential to the success of the project.
Even where "Agile" is not a formal commitment, this arrangement seems to be true. I have worked in organizations in the US, UK, and Europe where the first principle of "people and interactions over processes and tools" has been accepted implicitly (almost accidentally) as the most effective approach to software development.
#### Our Challenge
The second implicit lesson is one we see by comparing the world described in the book to the one we exist in now. Organizational structures like "Change Control Boards" appear comically whimsical in a world where "move fast, and break things" is the motto of the second largest web service in the world.
Yet Bach, Kaner, and Pettichord seem to have sensed that this transformation was imminent, and to have vaguely recognized its implications, in lessons like 69:
> Test groups can make themselves more capable of evaluating design errors by hiring people into the group who have diverse backgrounds. A tester with domain expertise... can focus tests and explanations... If one tester knows database design, another knows network security, another knows user interfaces, and so forth, the group as a whole is positioned to make knowledgeable and useful evaluations...
In modern software development, it is no longer enough for testers to simply be good critical thinkers, and good skeptics. They must also be technically competent. Technologies and applications have grown exponentially in complexity and sophistication since the days of the 16-bit desktop computer. The pace of change has quickened, and market demands have kept pace with it.
In this new world, testers must be mindful of the agile admonition to value "responding to change over following a plan", and to value "working software over comprehensive documentation". What this means, in practice, is that there can no longer be any distinction between a "tester" and a "technical tester". Every tester must be a "domain expert" in his own right. He must be just as capable of building a server from scratch as any reasonably competent DevOps engineer. He must be just as capable of debugging a faulty Java class as any reasonably competent programmer, and he must be capable of working with the tools those skills require. Things like the command shell, version control systems, and developer tools like debuggers and editors, should be common knowledge to the tester.
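To make this concrete: one practical payoff of that technical fluency is the ability to attach a minimal, runnable reproduction to a bug report instead of a prose description. Here is a hypothetical sketch in Python; the `parse_price` function is invented purely for illustration, standing in for whatever code is under test:

```python
# Hypothetical reproduction script a technically fluent tester might
# attach to a bug report. parse_price() is an invented stand-in for
# the code under test -- not a real library function.

def parse_price(text: str) -> float:
    """Naive parser that assumes ',' is only a thousands separator."""
    return float(text.replace(",", ""))

# Observed behavior: correct for US-formatted input...
assert parse_price("1,234.56") == 1234.56

# ...but it silently misreads European-formatted input, where ','
# is the decimal separator: "1.234,56" should mean 1234.56.
# Stripping the comma yields "1.23456", so the result is 1.23456 --
# a wrong value rather than an exception, which makes the defect
# easy to miss without an executable demonstration like this one.
print(parse_price("1.234,56"))
```

A script like this gives the programmer the tester's exact observation, in a form that can be rerun after the fix, which is precisely the kind of short, factual feedback loop the chapter's lessons 91, 92, and 95 anticipate.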
Without this basic grounding of technical skills, the tester's critical thinking skills are of no more use to him than a high performance auto engine without a transmission. All sound and fury, signifying nothing.