
Do NOT do it right the first time

30 Apr, 2010

I was triggered recently by a status update from someone who mentioned that, in the future, we will have to get ‘this’ right the first time around.
This particular case was about a test, very late in the project cycle, where lots of things needed to come together perfectly to make it work. Any delays would not only delay the current project, but also all other projects that rely on the shared resources being used. That huge cost of getting it wrong is why it seems so imperative to get it right the first time around.
The problem is that this involves tens of people across multiple companies and departments, who have written thousands of lines of code.
Now I do not know what they are going to do to make things right in the future, but if we go by past experience, most people will want to enforce even stricter entrance criteria.
There are a couple of problems with this approach:

  • Having to test every component against these extra criteria is a lot of work
  • The longer the list of criteria, the harder it becomes to guarantee that every component adheres to all of them
  • When integrating multiple components, the complexity lies in their interaction, not in the separate components
  • You will only prevent the errors you foresaw

The last two in particular make it very unlikely that stricter entrance criteria will work. So what can we do?
All of this reminds me of something from ancient IT history. I was not around myself, but I have heard much about the punch card days. Just to refresh everyone’s memory: in that age, programming something meant punching it onto cards. You then took a stack of these cards to an operator, who at some point in time ran your program on a big computer with a fraction of the computing power you currently have in your microwave.
Now these programmers were in pretty much the same boat: they had to get it right the first time around. Any typo or mistake meant that, at best, a couple of hours were wasted, and potentially days.
Nowadays, however, there are very few programmers around who worry about typos before running a program, let alone try to hunt down more complex mistakes by hand. Why is that?
The answer is, of course, the compiler. A compiler can check your code for these mistakes much more quickly than you can. Why would you spend hours trying to find errors in your code manually when the compiler can find most syntactic errors for you in minutes or seconds?
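To make that concrete, here is a minimal sketch in Java; the class and field are invented for this example. The misspelled variable in the comment is exactly the kind of slip that would have cost a punch card programmer a day, and that the compiler now reports in seconds.

```java
public class Invoice {

    private double amount;

    public double total() {
        // return ammount;  // a one-character typo: the compiler rejects this
        //                  // with "cannot find symbol: variable ammount"
        return amount;      // the corrected spelling compiles cleanly
    }
}
```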
The next great invention was the unit test. It was a lot more involved than just running a compiler on your code, but it did allow you to very quickly find not just syntactic errors, but also most of your logical errors.
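As an illustration, here is a minimal JUnit sketch; the DiscountCalculator class and its rule are made up for this example. The compiler is perfectly happy with a calculation that subtracts instead of multiplies, but the test flags that logical error within seconds of running.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    // Hypothetical production class: applies a percentage discount to a price.
    static class DiscountCalculator {
        double apply(double price, double discountPercentage) {
            return price * (1 - discountPercentage / 100.0);
            // A logical slip such as "return price - discountPercentage;" would
            // compile without a single warning, but the test below exposes it in seconds.
        }
    }

    @Test
    public void tenPercentOffFiftyEuroIsFortyFive() {
        assertEquals(45.0, new DiscountCalculator().apply(50.0, 10.0), 0.001);
    }
}
```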
Then came Continuous Integration. No longer content with finding errors in a single person’s code, we can now find another class of logical errors: those introduced by the complexity of having multiple people work on the same application while relying on external components.
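A rough sketch of what that buys you, with invented class names and deliberately simplified wiring: the test below exercises the interaction between two components owned by different teams. Run by the build server on every check-in, a broken assumption between those teams surfaces within minutes instead of at the big integration test at the end of the project.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class OrderIntegrationTest {

    // Hypothetical component owned by team A.
    static class Order {
        final int quantity;
        Order(int quantity) { this.quantity = quantity; }
    }

    static class OrderService {
        Order createOrder(int quantity) { return new Order(quantity); }
    }

    // Hypothetical component owned by team B, consuming what team A produces.
    static class StockChecker {
        boolean canFulfil(Order order, int itemsInStock) {
            return itemsInStock >= order.quantity;
        }
    }

    // Each component can be flawless in isolation; this test checks their interaction,
    // which is where the complexity (and most of the surprises) live.
    @Test
    public void orderForThreeItemsCanBeFulfilledFromAStockOfFive() {
        Order order = new OrderService().createOrder(3);
        assertTrue(new StockChecker().canFulfil(order, 5));
    }
}
```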
So what do these three measures have in common? They are very cheap, certainly compared to the cost of not detecting the error, and they are very quick. These properties allow you to use them whenever you want, which means you might as well use them after every significant change, just in case you introduced an error.
In the complex environments we are currently working in, there is no way we can get it right the first time, all the time.
So the next time you find yourself saying “We just have to get this right the first time in the future”, or hear someone else say it, make a slight adjustment: “We have to get this right the first time around when we try this for real.” And then go find a cheap and quick way of finding out whether it works.
