Reply to Human and Machine Checking

Over the last couple of days there have been several posts about the difference between testing and checking, as well as a comparison of machine checking with human checking.

The posts that I have seen so far are from James Bach and Michael Bolton: Testing and Checking Refined

Iain McCowatt’s blog post in response to that post: Human and Machine Checking

This is my response to Iain’s post. You may need to read the above posts before reading this post – or perhaps not.

First, let me say that I have been a big fan of the distinction between testing and checking since Michael Bolton first mentioned it to me at TWST (Toronto Workshop on Software Testing) about 4 years ago (I think). I am very grateful to James, Michael, and Iain for making their stances on this topic clear in such well-written blog posts.

There is a point I would really like to make, one that I felt had been clear all along. But after talking with Michael Bolton last night, it is now clear to me that this point is not currently clear in the minds of all (or even most) testers.

When a “tester” (for this post I will use this term to refer to someone who is assigned to execute one or more manual scripts) starts to execute a script, they will not behave in the same way as a machine executing an automated script would. This should be quite obvious to anyone who stops for a moment to think about it. The machine can execute commands and compare results much faster than a human, BUT the machine can only compare what it has been programmed to compare. In his post, Iain states this quite nicely in his summary:

Computers are wondrous things; they can reliably execute tasks with speed, precision and accuracy that are unthinkable in a human. But when it comes to checking, they can only answer questions that we have thought to program them to ask. When we attempt to substitute a machine check for a human check, we are throwing away the opportunity to discover information that only a human could uncover.
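To make that contrast concrete, here is a minimal sketch (mine, not from either post) of what a machine check looks like; the Order class and place_order() are hypothetical stand-ins for a real system under test:

```python
# A minimal sketch of a machine check: it answers only the one question
# it was programmed to ask. Order and place_order() are hypothetical
# stand-ins for a real system under test.
from dataclasses import dataclass

@dataclass
class Order:
    total: float

def place_order(isbn: str, quantity: int) -> Order:
    # Stub for the system under test; a real check would drive the product.
    return Order(total=quantity * 29.99)

def test_order_total_for_two_copies():
    order = place_order(isbn="9780000000000", quantity=2)  # placeholder ISBN
    # The machine verifies exactly this assertion and nothing more:
    assert order.total == 2 * 29.99
    # A garbled confirmation page, a mangled shipping address, or a
    # ten-second delay would all pass unnoticed here; a human running
    # the same steps would likely spot them.
```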

Iain also very eloquently mentions that humans will always be able to do more than just checking:

What a machine cannot do, and a human will struggle not to do, is to connect observations to value.  When a human is engaged in checking this connection might be mediated through a decision rule: is this output of check a good result or a bad one? In this case we might say that the human’s attempt to check has succeeded but that at the point of evaluation the tester has stepped out from checking and is now testing. Alternatively, a human might connect observations to value in a way such that the checking rule is bypassed. As intuition kicks in and the tester experiences a revelation (“That’s not right!”) the attempt to check has failed in that the rule has not been applied, but never mind: the tester has found something interesting. Again, the tester has stepped out from checking and into testing.

The part that I didn’t see Iain mention (and this is the point I wanted to make) is that not all “testers” will notice much more than a machine would. I suggest that a tester will likely only notice more than what they are asked to check (i.e., more than a machine) IF they possess at least one of these traits:

  • engaged,
  • motivated,
  • curious,
  • experienced,
  • observant,
  • thoughtfully applying domain knowledge (I couldn’t think of a way to shrink this one down to a single word).

Some of these traits may be present and the tester still may not notice anything “outside” the script – but without any of these traits being present during the script execution, I suggest there is little hope that the tester will notice anything “off script”.

I have an acquaintance who is the director of a large system test group at a Telecom company (not my previous employer – Alcatel-Lucent). She wanted to assess the effectiveness of her manual test scripts, so she had over 1000 fault reports raised by manual testers analyzed for the trigger that led the tester to raise the bug. She found that over 70% of the fault reports raised over the past year were triggered by a tester noticing that something was wrong that was NOT specified in the script. Only 30% of the faults were triggered by following the script.

To me this is incredibly important information! If I were to replace all of those tests with automated tests, my fault-finding rate would drop by roughly 70%, since only the faults triggered by the scripted steps themselves would still be caught. If I were to outsource my testing to the cheapest bidder, I predict my fault-finding rate would also drop dramatically, because the traits above would likely not be present (or not as strong) in the off-shore test team.

As I reflect on what I have been saying about testing vs. checking over the past few years, I realize I have been assuming that when I talk about “checking” I am talking about unmotivated, uninterested manual testers with little domain knowledge, OR about machine checking. Once you introduce a good manual tester with domain knowledge, you will find it very difficult to “turn off” the testing. To me, the thought of a good tester just “checking” is absurd.

Good testers will “test” whenever they are interacting with the system – whether they are following a script, a charter, or just galumphing. Bad testers will tend to “check” when they interact with the system – either because they don’t care enough or because they don’t have the required knowledge to “test”. Machines will “check” (at least for now – who knows what the future will bring?).

 

Instructor of Rapid Software Testing courses. Context-driven software testing consultant. 17+ years of experience in software testing

4 comments on “Reply to Human and Machine Checking”
  1. You are making a very valid point there, Paul. And I can tell you that I have had first-hand experience with this. In one test team I was in, there was a tester who was highly scripted – I am not kidding, to the degree that I would call him scripted as a person; that is what he was. Whenever he performed any testing tasks, he scripted them first and then followed them as checks.

    It was actually quite funny when he one day came up to me with a surprised smile and told me he had “found two bugs, just like that, without any test cases, you know…exploratory!”

    There are people out there who “are scripted”, along with all the others that you mention in your post. Thank you.

  2. Iain says:

    Paul,

    Thanks for a thought-provoking follow-up! I’m not yet sure if we disagree, or if we’re looking at the same things through slightly different lenses. I look forward to having a discussion about it soon – you’ve triggered a number of ideas and insights (which beats the hell out of agreement any day of the week).

    -Iain

  3. Joe Boon says:

    Great follow-up to the other discussions. It helps explain what we feel about good testers vs. human checkers vs. machine checkers.

    I think there is something to learn from this about how we should be writing testcases. These discussions reinforce the fact that testers are better than machines when exploring (testing) and machines are better than testers when following a very tightly defined set of steps (checking). Testers can find problems not anticipated by the test scripts, but find long lists of steps demotivating and restrictive.

    It follows that manual effort should focus on testing business requirements, using test cases expressed in user-story / business terms. E.g.: using an ISBN code you have written down, order two copies of the book to be delivered to a friend in Italy for her birthday.

    This ‘story’ style of test case allows testers to take different routes to the goal, and encourages them to make observations based on values, context, user experience and tacit knowledge.

    The functional requirements should be tested using finer-grained test cases which identify specific actions and elements in a sequence of steps. These tests ensure every component or route gets tested, e.g. press the login link, check that the username field appears, enter your username in uppercase, press the Go button.

    This ‘stepwise’ style of test case leans more towards ‘checking’ than ‘testing’. Indeed, testers often get reprimanded for deviating from the steps, even (or especially!?) if this leads to finding defects.

    If all test cases are defined this way they will be slow to write, boring to execute, restricted in their observations, and expensive to maintain. Naturally, these are probably the testcases you would first automate.
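    To make that concrete, here is a minimal sketch (not part of the original comment) of what the stepwise login check above might look like once automated; the URL and element locators are hypothetical assumptions:

    ```python
    # Hypothetical sketch of the stepwise login check above, automated
    # with Selenium; the URL and element IDs are assumptions for
    # illustration, not from the original comment.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com")                # hypothetical site
        driver.find_element(By.LINK_TEXT, "Login").click()    # press the login link
        # Check that the username field appears:
        assert driver.find_element(By.ID, "username").is_displayed()
        driver.find_element(By.ID, "username").send_keys("TESTUSER")  # uppercase username
        driver.find_element(By.ID, "go").click()              # press the Go button
    finally:
        driver.quit()
    ```

    Once expressed this way, the check verifies only the steps and assertions written into it – exactly the trade-off discussed above.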

    You have probably seen both styles of testcase written, maybe as a result of different authors, timescales, or development methodologies. After reading these posts, I feel that both story and stepwise testcases should be written, depending on the type of requirement being tested.

    A plausible conclusion of these discussions is that different styles of testcase suit different types of requirement, and the style should be explicitly chosen to produce different levels of checking and testing.

