In 2014 Bill Sempf posted this Tweet:
QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.
— Bill Sempf (@sempf) September 23, 2014
His tweet set off a chain reaction of great responses from people thinking up all the edge cases in this scenario. Among the funniest responses:
@sempf Bartender pours one beer and says "Works on my machine"
— Chris McMahon (@chris_mcmahon) September 23, 2014
You can read the rest of the responses in the replies to the original tweet.
This funny example illustrates perfectly how testers think. We think outside the box and don't assume that some functionality works just because the developer said so and wrote some unit tests. Sure, automation and scripting have their place in testing (as we will discover in the next blog in this series), but they seldom prove that the application as a whole works as intended. Automated scripts usually follow a fixed path without feeding it new data on every run. That can create a false sense of security ("we've covered this path"), when it is precisely varied input that flushes out lingering errors.
This is where testers are needed to provide input and look at the system holistically. We think of all the weird stuff, the stuff that can go wrong. What if we enter corrupt data in this form? Have we tested the boundary values? What if I just ignore the GUI and do all kinds of weird stuff at API level? Is the application performing well enough? How does it behave under load? Are we able to bypass the security mechanisms we built? What if the users don't understand how the application works? Does it look good on mobile? There are so many questions you can ask yourself and your team. When you start thinking this way, you start to realise that testing is never truly finished.
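The QA-bar joke above can be turned into a concrete set of test inputs. Here is a minimal sketch, assuming a hypothetical `order_beers` function with a made-up valid range of 1 to 100; the point is that one happy-path call proves very little, while a handful of boundary and junk inputs exercise the validation:

```python
def order_beers(quantity):
    """Hypothetical order handler: returns a confirmation or raises ValueError."""
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(quantity, int) or isinstance(quantity, bool):
        raise ValueError("quantity must be an integer")
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    if quantity > 100:
        raise ValueError("cannot serve more than 100 beers per order")
    return f"{quantity} beer(s) coming up"

# The QA-bar cases: happy path plus boundaries and junk input
cases = [
    (1, True), (100, True),    # valid boundary values
    (0, False), (-1, False),   # below the minimum
    (999999999, False),        # absurdly large
    ("a lizard", False),       # wrong type entirely
]
for value, should_succeed in cases:
    try:
        order_beers(value)
        assert should_succeed, f"{value!r} was wrongly accepted"
    except ValueError:
        assert not should_succeed, f"{value!r} was wrongly rejected"
print("all cases behaved as expected")
```

A script that only ever orders one beer would pass forever; it is the table of inputs, not the script, that does the testing here.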
So, since there is never enough time to test everything, how do you decide what to test? Assuming you have automated scripts that check the happy paths, what do you do with the rest? The nightmare headline game is great to play with your team: explore how your company could end up in the news in the worst way possible. You try to think of all kinds of horrible scenarios: "Banking website down for two hours!" or "Personal information of clients leaked!". The headlines you come up with are input for tests you can perform.
Context matters… a lot
When deciding what to test, you mainly have to look at the context of your application. If you are working on a banking application, you have to adhere to standards, and your risk is very different from someone working on a mobile app that doesn't even have a login function. You can use tools from the context-driven testing community to help you determine the risk. For example, have you heard of the SFDPOT heuristic? It helps you structure your test ideas around several aspects that every application has. Try it out! And don't stop there, because there are many more (Test Heuristics Cheat Sheet, Functional Testing Heuristics, Heuristic Test Strategy Model). To grow your skills in a context-driven way, you look at the context of the project you work in and adapt the way you test to that context. The wisdom you build up over time (in the form of heuristics and oracles) is what makes you a very valuable tester. As you gain more experience, your personal testing toolkit grows.
Exploratory Testing & Critical thinking
At the beginning of this blog, I said that automated scripts have their place in testing and testers are needed to decide what to automate and what to test as humans. The testing that we do ourselves is very different from what a computer can do. We notice things a script cannot, we are able to look at the total. We use critical thinking skills and exploratory testing techniques. This is usually how the really nasty, lingering defects are found. To learn more about exploratory testing, the book Explore It! is still a great place to start. It explains the idea of using charters to give your test sessions focus and it explains a set of techniques. Exploring can be done on any level: API, database, frontend, it doesn’t matter!
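Explore It! popularises a simple template for writing charters. As an illustration only (the feature named here is hypothetical, not from the book or the original post), a session charter following that "explore… with… to discover…" shape might look like:

```
Explore the checkout API
with malformed and boundary-value payloads
to discover how the service handles invalid input.
```

A charter like this keeps a time-boxed session focused without prescribing every step in advance.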
So, you have tested your application using a context-driven approach, exploratory testing techniques, heuristics, and oracles… Are you done? Can you stop testing? Usually, time is up and a deadline is approaching. Is that a good reason to stop testing? Michael Bolton wrote a blog about this subject that is very eye-opening to read.
I hope I have given you inspiration about why your curiosity as a tester matters. Of course, we all have our days when we don’t feel our best, or are a bit uninspired, but in general: this curiosity is what makes us valuable! So: stay hungry, stay curious!
See you next time!
Books to read:
Course to follow:
Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help! www.qxperts.io