
Web performance in seven steps; Step 4: Test continuously

22 Jul, 2009

Last time I blogged about the importance of representative performance testing. Production-like properties for the hardware, OS, JVM, application server, database, external systems, and simulated user load are essential to prevent bad performance surprises when going live. In addition, I described how cloud computing can be used to generate high loads on demand without having to worry about the infrastructure.
Continuous performance testing
With a representative test as one of the last steps before going live, we prevent expensive bad-performance surprises from popping up in production. However, the same surprises will still pop up, only earlier and with less impact. To save costs and avoid large architectural refactorings, it is crucial to test for performance as early as possible. This is just like any other software defect and quality assurance in general: the later in the development process a defect is detected, the more costly it is.
At a popular web shop I faced the following challenge: we wrote the performance tests only at the end of the six-weekly release period, after functional testing had taken place and functional defects had been corrected. When serious performance defects popped up, a crisis team was assembled and we found ourselves in a stressful situation. There was usually not enough time to fix the defect before the release date, so my recommendation at times was to defer the release. However, deferring the release date often just was not possible, because TV or radio time had been bought to promote the new functionality. So, how to solve this dilemma? We found the solution in applying agile principles: test features as early as possible and make the team responsible.
We included meeting the performance requirements in the definition of done of each new or changed feature individually. The development process already included an automated build, quite common these days. Unit tests for a feature were written as usual by the developer. We now added performance tests to the spectrum: the developer writes the performance test script for his feature (a service or web page) in JMeter, side by side with his unit tests on the classes. After the nightly Maven build has run, the application is deployed on WebSphere and the performance tests are executed by the JMeter Ant script. This script generates a report which is e-mailed to the stakeholders.

In this way, the IT department gets early insight into new and changed features: it can adapt its course more quickly, back off early from an unfortunate architecture or approach, minimize surprises, and lower its costs. An additional benefit is that writing test scripts gets done more quickly than before, because the developer still has all the details of the new feature fresh in his memory: for instance, the conditions under which the service may be called, with which parameters, and in which variations and special cases. The usual communication overhead between a performance tester and a developer on these details is drastically reduced, further improving productivity.
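To make the nightly integration concrete, here is a minimal sketch of what such an Ant setup could look like. The paths, addresses, and property names are illustrative assumptions rather than details from the actual project; the sketch relies on the ant-jmeter task bundled in JMeter's extras directory and on Ant's standard xslt and mail tasks.

```xml
<project name="nightly-perf-tests" default="perf-report" basedir=".">

  <!-- Illustrative values; adjust to your own environment -->
  <property name="jmeter.home" value="/opt/jmeter"/>
  <property name="results.jtl" value="build/perf-results.jtl"/>
  <property name="report.html" value="build/perf-report.html"/>

  <!-- The ant-jmeter task ships in JMeter's extras directory
       (the jar version varies per JMeter release) -->
  <taskdef name="jmeter"
           classname="org.programmerplanet.ant.taskdefs.jmeter.JMeterTask"
           classpath="${jmeter.home}/extras/ant-jmeter-1.1.1.jar"/>

  <!-- Run every developer-written test plan against the freshly
       deployed nightly build -->
  <target name="perf-test">
    <jmeter jmeterhome="${jmeter.home}" resultlog="${results.jtl}">
      <testplans dir="src/test/jmeter" includes="*.jmx"/>
    </jmeter>
  </target>

  <!-- Turn the raw results into an HTML report using the stylesheet
       bundled with JMeter, then mail it to the stakeholders -->
  <target name="perf-report" depends="perf-test">
    <xslt in="${results.jtl}" out="${report.html}"
          style="${jmeter.home}/extras/jmeter-results-report.xsl"/>
    <mail mailhost="smtp.example.com"
          subject="Nightly performance test report"
          from="build@example.com" tolist="stakeholders@example.com"
          messagemimetype="text/html" messagefile="${report.html}"/>
  </target>

</project>
```

Hooked in right after the nightly deployment, a degrading report lands in everyone's inbox the next morning, which is exactly the early feedback loop described above.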
In my opinion, continuous performance testing is too often underestimated and goes unrecognized; it has a whole lot of advantages and is not that hard to achieve.
Next time I’ll blog about Step 5: Monitoring and diagnostics.
