All contending teams, without exception, experience a moment at which all members are standing around the table, looking at the robot; this is not what’s supposed to happen. It shouldn’t be a surprise: we told them in advance that there would be bugs. Unfamiliar with the codebase and the hardware, the participants now have to find those bugs and fix them. They do know what the robot should do: ‘it should follow the black line, once you press the start button’.
We have been running these robot challenge workshops for a while now. They’re great fun for creating a shared understanding between business and IT, or for experiencing an acceptance-test-driven approach to software development. The mBot robots make the software’s behaviour highly visible.
There is some method to the madness, however: we don’t just airdrop these misbehaving robots into the team’s workspace. As the programming part is at most secondary to the workshop’s goal, we need a fast way for the teams to become familiar with the code and hardware. The short introduction we give them is not enough. Hunting for bugs really activates the teams and works as a catalyst for the needed understanding. However, the teams are typically not very efficient in their hunt.
What you see is all there is
Immediately after it’s switched on, the robot starts pivoting in place, without any delay. It is not following the black line, or at least it is doing a very bad job of it. There is an obvious problem here: the robot did not wait for the ‘start’ button to be pressed. It’s right there in the specification; it’s the very first story, the very first automated test.
This first issue stands out to me, since I am the one who introduced it into the code. It is, however, not on the minds of the participants: the weird pivoting is much more interesting, much more visible. The robot’s unexpected, erratic movements trump any focus on the missing behaviour. Their automatic systems take over, and the participants start investigating this obvious bug without a second thought.
It is interesting to see that the highly visible incorrect behaviour grabs all the attention. It takes someone on the team to look through the original specification, to put mental effort into the exploration instead of just going with the flow. Only at that point is attention focused on the missing behaviour.
Bugs live in the code
In an effort to pinpoint the cause of the pivoting, the participants start to mentally step through the program. Their primary source of information is the actual implementation of the feature. They fail to ask the question: “what does it mean to pivot?”
We present the challenge to the participants just as I have presented it to you: a bug hunt to get familiar with the code before we go into the actual body of the workshop. This seems to prime them so that their first response to observing unexpected behaviour is to dive deep into the code. There is no proper investigation, no questioning of the robot, just code.
So, what does it mean to pivot? Trying to answer that question quickly leads to a probable cause. The problem cannot be found by merely looking at the code, because there is nothing really wrong with the code: the robot should be going forward, as the commanded speed of both wheels is identical. To find the problem, we should put away the computer and look at the robot itself, specifically at its underside.
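To see why correct-looking code can still pivot, here is a minimal sketch of differential-drive kinematics (the function name, speeds and wheel base are illustrative assumptions, not the workshop’s actual codebase). If one motor is mounted or wired in reverse, identical commanded speeds yield zero forward velocity and a pure rotation:

```python
def robot_velocity(cmd_left, cmd_right, left_dir=1, right_dir=1, wheel_base=0.11):
    """Return (linear, angular) velocity for commanded wheel speeds.

    left_dir / right_dir model the hardware: +1 for a correctly
    mounted motor, -1 for one that spins backwards. The wheel_base
    value is a made-up example, not a real mBot measurement.
    """
    v_left = cmd_left * left_dir
    v_right = cmd_right * right_dir
    linear = (v_left + v_right) / 2.0          # forward speed
    angular = (v_right - v_left) / wheel_base  # rotation rate
    return linear, angular

# Correct hardware: identical commands drive the robot straight ahead.
print(robot_velocity(100, 100))               # (100.0, 0.0)

# One motor reversed: the same code now produces zero forward
# speed and a non-zero rotation -- the robot pivots in place.
print(robot_velocity(100, 100, right_dir=-1))
```

The point of the sketch: the bug is invisible in the first two lines of the function, which are perfectly symmetric; it only appears once the hardware orientation enters the model.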
No more bugs
The bugs that we introduced are all somehow intertwined with the robot’s hardware. All automated tests were passing, but still the robot was not moving as expected: these specific hardware aspects are not covered, nor coverable, by the automated tests. So you can understand my amazement when teams are eager to put their robots with newly implemented features on the “production environment”, without properly exploring the new functionality.
It never works correctly the first time (we made sure of that), and it’s fun to point out their misjudgement. The first feature they implement is basically a copy of an existing feature, simply using a different hardware component. All the bugs in the software had been removed, but it is a mistake to think that copying bug-free code cannot introduce new bugs.
When we are investigating bugs, or trying to catch them early, we have to be aware of our own heuristics and biases. Overconfidence, priming and availability bias, among many others, can reduce our effectiveness. A more structured approach to investigating helps us avoid common pitfalls; it activates you to put in some mental effort. So the next time you are out exploring, ask yourself: (1) what is the target of my exploration? (2) which information or resources will help me? (3) what am I trying to discover? (source: Explore It! by Elisabeth Hendrickson)
Disclaimer: I have no specific training in or knowledge about robots, bugs or behavioural economics.
I am a specialist at Qxperts. We empower companies to deliver reliable & high-quality software. Any questions? We are here to help! www.qxperts.io