### Confronting Reality

Recently, I attended [Test Automation And Continuous Delivery At Scale](http://www.ministryoftesting.com/training-events/test-automation-and-continuous-integration-at-scale/), a one-day class hosted by [The Ministry Of Testing](http://www.ministryoftesting.com/) and headed by [Noah Sussman](http://infiniteundo.com/) and [Dr. Jess Ingrassellino](http://www.teachcode.org/blog). The summary on the Ministry of Testing site was enticing enough, and the name Noah Sussman certainly caught my attention, but the *"At Scale"* in the title was what really convinced me I needed to attend this class. I went in hoping that, at last, I would find others who've had to grapple with the hairy problems of testing a patchwork quilt of an application that has been divided up and dispersed amongst disparate internal teams like so much beef in a lion's den.

But it didn't turn out that way. Jess and Noah are highly skilled and experienced, and they earnestly worked to provide the room with as much insight and expertise as they could muster. Jess did a terrific job of encouraging group discussion, and Noah was chock-full of fascinating concepts and technical tips. However, I definitely got the sense that neither really had a clear idea of what they were actually trying to teach us, or how they wanted to teach it. Throughout the day, we meandered wildly between high-flown concepts, like the problem of [Intractable Complexity](https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability), and microscopic bits of advice, like how to use the [`lynx`](http://lynx.browser.org/) command-line browser to scrape Google for minute bits of useful data. Noah clawed his way through several broad philosophical concepts of testing alongside these microscopic details, clearly desperate to reveal some single coherent thread of wisdom to us. But struggle as he did, it all seemed a bit arbitrary and confusing.
### The Blind Leading The Blind

Near the end of the day, I suddenly realized: nobody really has any clear idea of what we're supposed to be doing, either as testers or as automators. But more importantly, from a broader perspective, I think it may actually be too soon to expect such clarity. A perfect illustration of this came in the form of a question near the end of the class.

To be fair to Noah, he actually did a fairly good job of emphasising the importance of trying to find ways to bring *useful and meaningful information* to the surface (a key objective of testing) through the use of programmatic tools; he highlighted the problem of communication breakdowns between dev and test, and described how a clever application of technology could improve communications. What's more, the group seemed to eagerly agree that one of the most satisfying things about being a tester was learning the *"how"* of things. Yet, after all of this, an hour before we were scheduled to end, this question still arose:

"**Do testers *really* need to learn how to code?**"
![facepalm](http://i0.kym-cdn.com/entries/icons/original/000/000/554/facepalm.jpg)
To her credit, Jess answered with an enthusiastic yes (at least, insofar as learning to read it). She seemed to understand the larger point when defending her position: "Maybe if you're testing medical devices, you can get by without it; but we're testing *software*, and that domain knowledge is essential...". But I wonder if she understood the implications of what was going on right there, around us. As soon as she answered, the room immediately erupted in a cacophony of debate. Here we all were, in a class intended to provide insight into test *automation* -- in which they all agreed that *knowing how* is fundamental to the task -- and the occupants of the class still couldn't even agree on basics like literacy in development.

### Wakeup Call

I want to take a moment here and emphasize that this is *not meant as a disparagement of the hosts/instructors* of the class. They were very professional, and obviously highly skilled. But I do think the lesson to be taken from this class was not quite the lesson they intended. And I think I can sum it up by borrowing from a [comment I made](http://theadventuresofaspacemonkey.blogspot.co.uk/2016/01/questioning-premise-of-testing.html?showComment=1453727683323#c790477739473665542) a while back, on a [blog post](http://theadventuresofaspacemonkey.blogspot.co.uk/2016/01/questioning-premise-of-testing.html?showComment=1453727683323#c790477739473665542) that raised very similar alarm bells for me:

> Any scientist who would ask whether he really needs to make his experiments reproducible should be immediately suspect (if not disbarred from his field). And, if I were a scientist, I would be incensed if I discovered somehow that my work had *not* been properly peer reviewed (especially if some had claimed it was).
>
> Likewise, software developers should be *demanding* that their code be tested, not looking for ways to avoid it. And if it's not being tested, they should refuse to publish it. Just like any good scientist would do.
> But this puts a burden on many testers that I don't think they are willing to accept, yet. Because, just like scientists, it expects testers to be just as competent as developers: able to read code (and even write it if necessary), able to use debug and trace logs to isolate problems, able to execute (or even create) build jobs, able to manage their own workstation development environments, and so forth. This is all *in addition to* the particular skills a tester needs to design and execute his tests, not in lieu of them.
>
> Think of something like Underwriters Laboratories. Goods manufacturers in the 1970s and 1980s used to fall all over themselves to get the U.L. seal of approval. And the people working in those labs were not just "monkey pushes the button" drones. They were highly skilled and highly paid mechanical, electrical, chemical, and structural engineers *in their own right*.
>
> We need to think of ourselves in the same way. And when I see developers sneering at testers and testing, it tells me that we don't.

Indeed. This is a debate that stretches back more than 30 years, and we're still having it. Until we grapple with our own growth anxiety and settle this question, I think it's way too soon to be reaching for more complex conceptual problems, like how to properly automate testing.