
Is automated acceptance testing harmful?

14 Apr, 2010

A lot of automated acceptance testing pioneers have come around and renounced their faith in heavy automated test suites. A recent article on InfoQ sums up the trend quite nicely. I am not going to jump on that bandwagon, but I will try to find the safe middle ground between an overzealously created maintenance burden and anarchy. The main point is that automating acceptance tests is the way to go; you just shouldn’t automate and maintain useless tests. The tricky part is finding out which tests are useful and which are not.

Before I start, let me emphasize the difference between automated acceptance testing and automated integration or unit testing. Unit testing is absolutely essential. Anyone who tells you otherwise is either ignorant because he hasn’t tried it yet, or a moron. We can have a long discussion on how to do unit testing properly, but that’s not the topic of this post. Automated integration testing (within reason) is extremely useful, and essential in some areas. Where you touch external systems or things that are otherwise very expensive to mock, some reservations might be defensible. Automated acceptance testing is testing where the aim is to simulate user interaction with the system. When and how that is useful is what is at stake here.
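
To make concrete what I mean by simulating user interaction, here is a minimal sketch of such a test using Selenium WebDriver and JUnit. The URL, element ids and expected text are assumptions invented for this example, not taken from a real project.

```java
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginAcceptanceTest {

    private WebDriver driver;

    @Before
    public void startBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void userCanLogIn() {
        // drive the application exactly as a user would
        driver.get("http://localhost:8080/login");
        driver.findElement(By.id("username")).sendKeys("iwein");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();

        // verify the observable result of the flow, not implementation details
        assertEquals("Welcome iwein", driver.findElement(By.id("greeting")).getText());
    }

    @After
    public void stopBrowser() {
        driver.quit();
    }
}
```
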
Let’s list the arguments in favor of automated acceptance testing first:

  1. If you don’t test, it will break; the shorter the iteration, the more often you will have to test, and hence the shorter the iteration, the faster the ROI.
  2. If you use an automated test framework that is understandable to the customer, you can delegate test writing back to them, making properly defined functionality their problem.
  3. Human beings are very easily persuaded to overlook certain details; automated tests are impervious to that sort of corruption.

Then there are the cons:

  1. Tests are expensive to maintain. They will never deliver ROI because the more you change, the more you need to change your tests, ultimately increasing the cost of the project.
  2. Customers don’t understand the test tools, so the developers end up writing and maintaining these tests anyway.
  3. Tests fail all the time when you change little things, where a tester would use his head and leave you alone if the change was sensible.

Ad 1. It is true that manual testing is, on the one hand, repetitive work. But if there are many changes, the level of repetition might be lower than assumed beforehand. It only makes sense to automate repetitive work, and it only makes sense to automate it once you have verified that it is in fact repetitive. It follows that writing automated tests up front (when you haven’t verified that they will be repeated) is a bad idea. Not writing them at all is a bad idea too, since then you’re ruling out the automation of repetitive work up front. We need to define exactly when it makes sense to automate a test; I’ll revisit this in Ad 3.
Ad 2. Current automated test frameworks are not understandable to the customer. At least not all by themselves. I’ve seen plenty of counterexamples though, of Product Owners recording Selenium tests to get them through boring flows and customer testers writing FitNesse fixtures to see if they could break a headless application. This only works when a team puts in some serious effort to help the customer use the tools. I blame this primarily on the quality of those tools; we’re not there yet. But once that initial investment is made, there is a special kind of interaction with the stakeholders that would otherwise be impossible.
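As an illustration of what a customer tester could realistically fill in, here is a minimal sketch of a FIT-style column fixture for FitNesse. The DiscountFixture name, the DiscountCalculator class and the numbers in the table are all invented for this example.

```java
import fit.ColumnFixture;

/*
 * A customer tester fills in a wiki table like this on the FitNesse page:
 *
 *   |DiscountFixture            |
 *   |order total   |discount()  |
 *   |100.00        |0.00        |
 *   |1000.00       |50.00       |
 *
 * Each row becomes one check: the "order total" column is written into the
 * orderTotal field, and the value in the "discount()" column is compared
 * against the result of the discount() method.
 */
public class DiscountFixture extends ColumnFixture {

    public double orderTotal;

    public double discount() {
        // DiscountCalculator stands in for whatever production code is under test
        return new DiscountCalculator().discountFor(orderTotal);
    }
}
```
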
Ad 3. There is no way you can have both well-defined specs and agile, evolving software. You need to choose what is variable and what is fixed. A tester should not be the one responsible for this choice; it should be the stakeholder. I think that when the stakeholder is happy with a certain feature and wants to keep it, that is the time to automate the test. You might wait until the first bug breaking said feature arises, but you cannot allow regression after regression, or manual test run after manual test run, to stack up the costs. Once it’s done it’s done, and you can cast it in stone.
There are two types of cost to consider when writing automated acceptance tests: what it is going to cost me initially, and what it is going to cost me to maintain those tests. If these costs outweigh the savings, you shouldn’t invest. In most projects that run for more than a handful of iterations, though, the cost of regressions is quite steep, so I’d say setting things up properly is efficient more often than not.
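To make that trade-off concrete, here is a back-of-the-envelope sketch; every number in it is invented purely for illustration, not taken from a real project.

```java
// Break-even sketch for investing in automated acceptance tests.
public class AcceptanceTestBreakEven {

    public static void main(String[] args) {
        double initialCost = 40;                   // hours to set up the harness and write the first tests
        double maintenancePerIteration = 4;        // hours per iteration to keep the tests green
        double manualRegressionPerIteration = 10;  // hours per iteration of manual regression testing saved

        double savedPerIteration = manualRegressionPerIteration - maintenancePerIteration;
        double breakEvenIterations = initialCost / savedPerIteration;

        // with these made-up numbers, automation pays for itself after roughly 7 iterations
        System.out.printf("Automation pays for itself after %.1f iterations%n", breakEvenIterations);
    }
}
```
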
Once you have a setup that can be used for automated acceptance testing, there is still the question of what to test. I like the idea of focusing on automating away repetitive work. Because of that, a human tester is probably your best expert on what needs to be automated, simply because he is the one doing the repetitive work.
If automated testing becomes mainly a cost-saving measure, it becomes rather moot who implements the tests. If developers can do it more efficiently than the customer because the tools are still too clunky for mere mortals to use, just let the developers do it until they pick or make better tools. Developers are lazy enough to figure out when that becomes efficient.

Iwein Fuld
Iwein is an engineer with Xebia and a member of the Spring Integration team. He is an expert on Spring and Test Driven Development, and specializes in Messaging, OSGi, and Virtualization.