
Mapping biases to testing, Part 1: Introduction

13 Jan, 2016

We humans are weird. We think we can produce bug-free software. We think we can plan projects. We think we can say "I've tested everything". But how is that possible when our thinking is governed by biases? We simply cannot think of everything in advance, although we like to convince ourselves that we can (confirmation bias).

In his book "Thinking, Fast and Slow", Daniel Kahneman explains the most common thinking biases and fallacies. I loved the book so much that I've read it twice, and I tell anyone who will listen to read it too. For me it is the best book I have ever read on testing. That's right: a book that in itself has nothing to do with testing taught me the most about it. Before I read it I wasn't aware of all the biases and fallacies that are out there. Sure, I noticed that projects always finished late and wondered why people were so big on planning when it never worked out that way, but I didn't know why people kept believing in their Excel sheets. In that sense, "Thinking, Fast and Slow" was a huge eye-opener for me. I answered plenty of the book's examples incorrectly myself, proving that I'm just as gullible as the next person.


But that scared me: it is my job to 'test all the things', right? If my thinking is flawed, how can I possibly claim to be a good tester? I want to weed out as many of my thinking fallacies as I can. That is a journey that will never end, and I want to take you with me on it. The goal is, as always, to improve as a tester: enjoy the learning process and explore. I feel the need to put a disclaimer here: this is not a scientific blog series. I will provide sources where I think they're necessary, but the point of this series is to take you on a journey that is for the most part personal. I hope it will benefit you as well! My goal is mainly to inspire you to take a look inwards, at your own biases.

Before we continue I need to explain a few basic concepts: fast and slow thinking, heuristics, biases and fallacies. I will conclude this first post with a list of the biases and fallacies that I will cover in this series. That list can grow, of course, based on the feedback I hope to receive.

Fast and slow thinking

This is a concept taken from Kahneman's book. Fast thinking, called "System 1 thinking" in the book, is the thinking you do on autopilot. When you drive your car and see something happening, you react in the blink of an eye. It's also the thinking you do when you meet someone new: in a split second you have judged this person based on stereotypes. It just happens! It's fast, automatic, instinctive, emotional. System 1 thinking is the reason we are thriving today as a species (it helped us escape from dangerous situations, for example).

On the other hand, there's "System 2 thinking". This is the type of thinking that takes effort; it's slow and deliberate. For example, you use System 2 when you have to calculate (in your head) the answer to 234 x 33 (as opposed to 2 x 3, which you do with System 1).
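
(For what it's worth, a System 2 walkthrough of that calculation is just long multiplication: 234 x 33 = 234 x 30 + 234 x 3 = 7,020 + 702 = 7,722. The answer itself doesn't matter here; the point is the deliberate effort it takes to get there.)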

There is one huge problem: we make all kinds of mistakes when it comes to using these systems. Sometimes we use System 1 to analyse a problem when System 2 would be more appropriate. In the context of testing: when someone comes up to you and asks "is testing finished yet?", you might be tempted to answer "no" or "yes", even though this is the kind of question that cannot be answered with a simple yes or no. If you want to be obnoxious you can say that testing is never finished, but in my opinion a more realistic conversation about this topic would revolve around risk.

Often, when people ask a seemingly simple or short question, such as "is testing finished yet?", they mean something different entirely. In my context, if the Product Owner asks me "is testing finished yet?", it translates to: "do you think the quality of our product is good enough to be released? Did we build the thing right? Did we build the right thing? I value your advice in this matter, because I'm uncertain of it myself". But if I happen to be in a foul mood, I could just say "yes", and that would be my System 1 answering.

Putting in the mental effort to understand that a simple question can actually be about something else, asking questions to find out what the other person truly means, and crafting your answer to really help them is hard work. Therefore, you have to spend your energy wisely.

Develop a sense for your System 1 and System 2: when do you use which system? And then there's the matter of choice; it would be silly to think you can always choose which system you use.

That brings us to heuristics.

Heuristics

Definition on Wikipedia: “A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics are powerful, but you need to spend time reevaluating and/or adapting them once in a while, and for that you need system 2. Why do you need to reevaluate your heuristics? Because you are prone to fall for biases and fallacies.



We need to use heuristics, but they are rooted in System 1. When you are an experienced tester, you have a huge toolbox of heuristics that help you during testing. That's a good thing, but it comes with a risk: you might start to trust your heuristic judgement a little too much. You can't use a hammer for everything, right?
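
To make the hammer metaphor a bit more concrete, here is a minimal sketch (in Python, with hypothetical names that are not taken from any particular tool or framework) of one such heuristic: boundary-value analysis. True to the definition above, it is fast and practical, but not guaranteed to be optimal or complete.

# A minimal sketch of the boundary-value heuristic (hypothetical helper,
# not taken from any specific test framework): given a valid integer range,
# propose the inputs most likely to expose off-by-one defects.

def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Return the classic boundary candidates for an integer range."""
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

# Example: an age field that is supposed to accept 18 through 65.
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]

The heuristic is cheap and often effective, but it says nothing about invalid types, empty input, locale issues or concurrency. Treating it as the whole answer is exactly the "hammer for everything" trap described above.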

Bias and fallacy: definition and meaning

A bias “is an inclination or outlook to present or hold a partial perspective, often accompanied by a refusal to consider the possible merits of alternative points of view.”

A fallacy “is the use of invalid or otherwise faulty reasoning, or “wrong moves” in the construction of an argument.”

Most of the thinking errors I will cover in this series are biases, but it is good to know the difference between a fallacy and a bias. A bias involves a mindset: you see something in a preconceived way, and that colours how you experience things. Stereotyping is a common example of a bias. A fallacy, on the other hand, is an error in reasoning: an argument that sounds plausible but doesn't hold up.

There is more than one type of bias, but in this blog series I will talk about cognitive biases, for which the definition is “[…] a repeating or basic misstep in thinking, assessing, recollecting, or other cognitive processes.”

Since testing is a profession that relies heavily on cognition and mental judgement, and we are only human, it's no wonder that we make mistakes. You cannot get rid of all your biases (that would defy human nature), but in the context of testing it's a great idea to challenge yourself: which biases and fallacies are actually hurting my testing activities?

However, you have to realise that biases can work to your advantage as well! Since being biased is part of our human nature, we should use that fact. With regard to testing, that could mean getting more people to do the testing: every person brings his or her unique perspective (biases included) to the table, and that results in more information about the application under test.

What’s next

In this blog series I hope to shed some light on a number of biases and fallacies and what harm or good they can do in testing. I will cover the following biases, fallacies and effects:

If there are other biases or fallacies you would like to see covered, please leave a comment or send me a tweet @Maaikees. In the meantime, you know which book you have to read!

Maaike Brinkhof
Agile Test Consultant @Xebia. Automate sensibly, let humans do the sapient testing.