
The "Performance Series" Part 1. Test Driven Performance.

09 Oct, 2012

A number of my colleagues and I recently decided to share our knowledge regarding "performance" on this medium. You are now reading the first blog in a series in which I present a test-driven approach to ensuring proper performance when we deliver our project.

Test driven

First of all, note that "test-driven" is (or should be 😉) common in the Java coding world. It is, however, applied at the unit-test level only: one writes a unit test that shows a particular feature is not (properly) implemented yet. The test result is "red". Then one writes the code that "fixes" the test, so now the test succeeds and shows "green". Finally, one looks at the code and "refactors" it to ensure aspects like maintainability and readability are met. This software development approach is known as "test-driven development" and is sometimes also referred to as "red-green-refactor".

Test driven performance

Now let us see what happens when we try to apply "test-driven" to a non-functional requirement like "performance". Obviously, we need a test and the test result needs to be "red" or "green". There are many aspects in the "performance" area, so let us take one for the sake of our story here: we assume we are building a web-based application and look at its response times. Now our test can be something like "the mean response time of the system when responding to URL such-and-such must be lower than 0.4 seconds". I personally find such a requirement highly interesting as it is time-related! This kind of non-functional requirement is usually given for the final result of the project. But what about during the project?
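To make this concrete, here is a minimal sketch of such a response-time test in Java. The URL, the number of requests and the 0.4-second criterion are illustrative assumptions, not values prescribed by any particular project or tool.

```java
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Minimal sketch of a "red/green" response-time test.
 * The URL and the 0.4 s criterion are example values only.
 */
public class ResponseTimeTest {

    /** Measures the mean response time (in seconds) of a URL over a number of requests. */
    static double meanResponseTimeSeconds(String url, int requests) throws Exception {
        long totalNanos = 0;
        for (int i = 0; i < requests; i++) {
            long start = System.nanoTime();
            HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
            connection.getResponseCode();          // forces the request to complete
            connection.disconnect();
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (double) requests / 1_000_000_000.0;
    }

    public static void main(String[] args) throws Exception {
        double criterion = 0.4;                    // mean response time must stay below this
        double mean = meanResponseTimeSeconds("http://localhost:8080/some-url", 20);
        System.out.printf("mean = %.3f s -> %s%n", mean, mean < criterion ? "green" : "red");
    }
}
```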

Test criteria during a project

My claim is that during a project the criteria of non-functional requirements should be changed. Response times of the system should be extremely good at project start, as there is hardly any system at all! At the end of the project, when almost all development work is done, the response time only has to be "good enough". Therefore, the criteria should be planned, for example by using a picture like this:

Figure 1. Planning a mean response time criterion during a project

What happens when we “break the build”?

During development, we constantly run our test, for instance using a tool like JMeter. We collect mean response times of critical URLs and see if we adhere to the criterion level of the day. One day we "break the build": we do not meet our criterion and the test is "red". Now what? For me this is even more intriguing than the flexible criteria we saw above. In test-driven software development one usually stops all development when the "build is broken": all tests must show green. In our case my strong advice is: don't act now, plan a performance tuning activity! During such an activity we tune the system until the test is "green" again. So our failing response time test triggers a planning activity rather than immediate action to fix the problem.
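As a rough illustration of such a "criterion of the day", the sketch below interpolates the allowed mean response time linearly between a strict value at project start and the final requirement at project end. All dates and thresholds are made-up example values; a real planning would follow the figure above rather than a straight line.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

/**
 * Sketch of a "criterion of the day": the allowed mean response time grows
 * linearly from a strict value at project start to the "good enough" value
 * at project end. Dates and thresholds are illustrative assumptions.
 */
public class CriterionOfTheDay {

    static final LocalDate PROJECT_START = LocalDate.of(2012, 1, 1);
    static final LocalDate PROJECT_END   = LocalDate.of(2012, 12, 31);
    static final double CRITERION_AT_START = 0.1;   // seconds: almost no system yet, so very strict
    static final double CRITERION_AT_END   = 0.4;   // seconds: the final non-functional requirement

    /** Linearly interpolates today's criterion between the start and end values. */
    static double criterionFor(LocalDate today) {
        long total   = ChronoUnit.DAYS.between(PROJECT_START, PROJECT_END);
        long elapsed = ChronoUnit.DAYS.between(PROJECT_START, today);
        double fraction = Math.min(1.0, Math.max(0.0, elapsed / (double) total));
        return CRITERION_AT_START + fraction * (CRITERION_AT_END - CRITERION_AT_START);
    }

    public static void main(String[] args) {
        double measuredMean = 0.25;                  // e.g. taken from a JMeter report
        double criterion = criterionFor(LocalDate.now());
        System.out.printf("criterion = %.2f s, measured = %.2f s -> %s%n",
                criterion, measuredMean, measuredMean <= criterion ? "green" : "red");
    }
}
```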

Preventing waste

Suppose we have planned a performance tuning activity, as our test is "red". How much work do we have to do? How do we minimize the amount of work? Or, in other words, how do we prevent waste? If we tune the system such that the test just shows "green", there is a good chance it turns "red" next week and we have to introduce a performance tuning activity again. This does not make sense. On the other hand, when we optimize way beyond the "green" criterion, we tend to do too much work.
The solution is simple: use a lower limit! So when we do not meet the "green" criterion of, say, 0.2 seconds at a given time, we optimize until we have reached a 0.15-second response time and then stop optimizing. This leads to a performance planning like this:

Figure 2. Planning a mean response time during a project while preventing waste
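A tiny sketch of this lower-limit rule, using the 0.2-second criterion and 0.15-second lower limit from the example above (the helper names are of course just illustrative):

```java
/**
 * Sketch of the lower-limit rule: a "red" test triggers a tuning activity,
 * and during that activity we keep tuning until we are below the lower limit,
 * not merely below the "green" criterion.
 */
public class TuningTarget {

    static final double GREEN_CRITERION = 0.20;  // seconds: the test is "green" below this value
    static final double LOWER_LIMIT     = 0.15;  // seconds: stop tuning once we are below this value

    /** "Red": the criterion is not met, so a performance tuning activity should be planned. */
    static boolean buildIsBroken(double measuredMeanSeconds) {
        return measuredMeanSeconds >= GREEN_CRITERION;
    }

    /** During a tuning activity, stop only once the mean is safely below the lower limit. */
    static boolean tuningIsDone(double measuredMeanSeconds) {
        return measuredMeanSeconds < LOWER_LIMIT;
    }

    public static void main(String[] args) {
        System.out.println(buildIsBroken(0.25));  // true  -> plan a tuning activity
        System.out.println(tuningIsDone(0.18));   // false -> keep tuning
        System.out.println(tuningIsDone(0.12));   // true  -> stop, more work would be waste
    }
}
```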

Test driven performance in an Agile perspective

Of course, the initial performance-planning figure is a very wild guess. There is nothing wrong with such a guess! It is the best we know at that moment. During the project we of course adapt our performance planning. The key thing here is that we constantly attend to system response time, as we always have a test at hand showing us "red" or "green".

Pros and cons

There are two major advantages to the approach sketched above. Obviously, we catch ill-advised design decisions that lead to bad response times at an early stage. Therefore project management stays in control rather than at the mercy of a major project risk, as we are no longer confronted by a badly performing system in the late stages of the project. Secondly, we prevent waste during optimizations by using a lower limit.
As a possible disadvantage, our approach might very well be more expensive compared to an approach where we only inspect the behavior of the system in production and rely on quick reactions to fix any issues. My colleague Adriaan Thomas will zoom in on this aspect in the next blog of this series.

