Have you ever found a problem you weren’t sure was a bug? You probably thought it over, looked up the requirements, or discussed it with a team member. Perhaps you figured it out by yourself, the requirements made things clear, or your team member could help you out. Either way, you needed some source of information to recognise the problem as a bug. You used your mental models and oracles.
Mental models and oracles are both useful tools for testing. They help you determine whether the behaviour you observe is a problem or not. Recently, I asked a group of developers how they knew whether something was a bug. Let’s explore the answer I liked the most:
“I usually just know, but if I really don’t know whether it’s a problem or not, I will ask the product owner”
Mental Models – “I Just Know”
When a developer says “I just know”, he or she uses some mechanism to determine whether a found issue qualifies as a problem. As people, we all make use of such mechanisms. One of those mechanisms is the mental model. A mental model is an explanation of how something works [1]. It helps us reason about the world and the tasks we need to do. We construct mental models in situations that are foreign to us, to give ourselves a strategy for solving problems we haven’t encountered before.
“A mental model is an explanation of someone’s thought process about how something works in the real world.” [2]
A simplified mental model of a button on a webpage might include:
- It is an object on a webpage (I can see it)
- It has some affordance [4] to indicate it is an object that can be interacted with (I can click it)
- It will have some effect (When I click it, something happens)
Now let’s assume the developer in question found a problem with a button on their website: when the login button is clicked, the login dialog is not shown.
With this mental model in mind, it is easy to see the discrepancy between the observed and the expected situation: the effect of the button never occurred. That is a clear violation of how buttons are supposed to work, so it is highly likely that the developer’s observation is an actual problem.
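To make this concrete, the three-part mental model of a button can be written down as an explicit check. The `Button`, `Page`, and `check_login_button` names below are invented for illustration (a tiny stand-in for a real page, not a browser API); the point is only that each assertion mirrors one line of the mental model.

```python
# A minimal sketch: the button mental model expressed as explicit checks.
# Button and Page are hypothetical classes, not a real browser API.

class Button:
    def __init__(self, label, on_click):
        self.label = label          # it is an object on the page (I can see it)
        self.clickable = True       # affordance: it can be interacted with
        self._on_click = on_click   # it will have some effect

    def click(self):
        self._on_click()


class Page:
    def __init__(self):
        self.dialogs = []
        self.login_button = Button(
            "Log in",
            on_click=lambda: self.dialogs.append("login-dialog"),
        )


def check_login_button(page):
    """Each assertion encodes one part of the mental model of a button."""
    button = page.login_button
    assert button.label                    # I can see it
    assert button.clickable                # I can click it
    button.click()
    assert "login-dialog" in page.dialogs  # when I click it, something happens


page = Page()
check_login_button(page)  # passes: behaviour and mental model agree
```

In the developer’s bug, the third assertion is the one that would fail: the click happens, but the login dialog never appears.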
You’ll note that this mental model of a button applies to any website; the developer does not need knowledge of this specific one. It is similar to how my mental model of driving a car allows me to drive my own car, but also a rental. Of course, that model might not be enough for me to repair a car in case of an issue. That is because mental models are simplifications of reality [2]; they are based on belief, not facts [3]. Like all models, they are fallible [5]. I may not be able to repair the car, but a mechanic will use a far more detailed mental model to find and resolve the issue.
Oracles – “Asking the product owner”
When the developer in our example was unable to determine whether the issue was an actual problem, the fallback was to ask the product owner. The product owner will likely have a look at the issue and decide whether it is a problem or not. This is a classic example of the product owner acting as an oracle [8]. In the Rapid Software Testing namespace an oracle is defined as [7]:
“An oracle is a means by which we recognise a problem that we encounter during testing.”
There are many oracles all around us during software development: the specifications in our user stories, the behaviour of comparable products, and standards for software development are all examples. A big part of testing (and of software development) is discovering where, what, and who your oracles are. Without them you can never know you are building the right thing [9].
Using mental models and oracles in practice
Mental models and oracles come together while observing and interacting with the product. They feed into my test design and help determine whether my observations are problems.
I often try to make (part of) my mental models explicit in whiteboard discussions. I find these useful to verify my assumptions about what it is that I’m building with my team and whether it works as I think it works. A visual representation can help in transferring your models to your team members. It also gives you a shared vocabulary to use in future discussions, reducing the communication gap.
Nowadays it can be really hard to find oracles in the form of specifications. Usually you are at least partly responsible for creating these requirements together with your team and business stakeholders. Techniques such as Specification by Example and Example Mapping are becoming more and more necessary to provide high-quality input to the development process.
Implications for automated checks
It is hard to write automated checks that are reliable in every situation, partly because an automated check has no mental model or oracle besides what you program into it. Suppose a button moves to a new, incorrect location due to DOM changes on the page. You did not program your check to verify the exact position of the button, so the check will likely report success even though there is, in fact, a problem. Making all of your mental models and oracles explicit in your automated checks is impossible, as a large part of this knowledge is tacit knowledge [6]. This may be a reason to make room for exploratory testing in your testing strategy. A curious tester who is aware of his or her mental models and oracles can make all the difference [10].
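The blind spot described above can be sketched in a few lines. The `Element` class and the dictionary standing in for a DOM are invented for illustration; assuming the check only encodes presence and click effect, it stays green even when the button ends up somewhere a human would immediately flag.

```python
# A sketch of an automated check's blind spot, using an invented mini-DOM
# (not a real testing library): the check encodes only part of our mental model.

class Element:
    def __init__(self, element_id, x, y):
        self.element_id = element_id
        self.x, self.y = x, y       # position on the page
        self.clicked = False

    def click(self):
        self.clicked = True


def check_login_button(dom):
    """The check verifies presence and click effect -- nothing about position."""
    button = dom["login-button"]
    button.click()
    return button.clicked


# After a DOM change, the button lands far outside the visible area:
dom = {"login-button": Element("login-button", x=0, y=9999)}
print(check_login_button(dom))  # True: the check reports success anyway
```

A tester looking at the page would notice the misplaced button at once, because their mental model includes far more than the two properties this check happens to assert.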
References
[1]: Mental Models, James Clear, James Clear on Mental Models
[2]: Mental Model, Wikipedia, Wikipedia entry on Mental Models
[3]: Mental Models, Jakob Nielsen, Nielsen Norman Group on Mental Models
[4]: Affordances, Interaction-Design.org, Interaction Design on Affordances
[5]: All models are wrong, Wikipedia, Wikipedia entry on "All Models Are Wrong"
[6]: Tacit knowledge, Wikipedia, Wikipedia entry on Tacit Knowledge
[7]: Oracles from the Inside Out, Michael Bolton, Michael Bolton’s Blog on Oracles
[8]: Test oracle, Wikipedia, Wikipedia entry on Test Oracles
[9]: Software delivery becomes on-demand, Viktor Clerc, Xebia Article by Viktor Clerc on Software Delivery
[10]: The Ultimate Tester: Curiosity, Maaike Brinkhof, Xebia Blog on The Ultimate Tester: Curiosity