
On many occasions when we start working with a customer, we’re told the development team is doing TDD. Often, though, the team is writing unit tests, but it’s not doing TDD.
This is an important distinction. Unit tests are useful things. Unit testing, though, says nothing about how to create useful tests that can live alongside your code. TDD, on the other hand, is an essential practice for improving the design of your code. These are very different things.

TDD vs unit testing

TDD stands for Test Driven Development. It’s a verb, something you do. TDD has to do with development: the act of designing and writing software. A unit test is a noun. It’s an artefact of software development.
As a matter of coincidence, one of the main tools used in the practice of TDD is a unit test framework. So perhaps it is not surprising that people get confused.
A unit test framework (such as JUnit, ScalaTest or Jasmine) allows you to execute small bits of code quickly and efficiently. I purposely do not mention "test" here; a small example of this follows the list below.
So we have three different things:

  • TDD: a design process
  • Unit test: a fine grained test case
  • Unit test framework: a library and additional tooling for executing small bits of code
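
To illustrate what such a framework does, here is a tiny, hypothetical sketch in Java with JUnit 5 (the Greeting class is made up for this example). Nothing in it is specific to TDD; the framework simply finds and runs a small bit of code for us.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A hypothetical "small bit of code" we want to execute quickly and in isolation.
class Greeting {
    static String forName(String name) {
        return "Hello, " + name + "!";
    }
}

// The unit test framework (JUnit 5 here) discovers this class and runs
// each @Test method for us, reporting the result.
class GreetingTest {
    @Test
    void greetsByName() {
        assertEquals("Hello, Ada!", Greeting.forName("Ada"));
    }
}
```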
When writing code using TDD, you follow a "red-green-refactor" cycle.

The TDD cycle

First you write a test (there is that annoying word again!). Then you run the test to see it fail. This is the "Red" state. When the test fails, most frameworks highlight the failure in red, hence the name. Running the test at this point may seem odd, but what this allows us to do is to "test the test". You run the test to check that it fails in the way that you expected. If it doesn’t, you have made a mistake somewhere.
Next you write just enough code to make the test pass, and you run the test again to prove it – "Green"!
There is some subtlety to this, and there are guidelines to help keep your code clean. Do the minimum to make the test pass, even if that minimum seems naive.
Finally, refactor both the code and the test to make them as clean, simple and readable as possible. Then, just to be on the safe side, run the test again to make sure that you didn’t break anything while tidying up.
So, in the "red" state, you’re writing a test. In the "green" state you’ve implemented just enough code to make the test pass, and in the "refactor" state you tidy up your code, ready for the next iteration.
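
To make the cycle concrete, here is a minimal sketch in Java with JUnit 5. The Basket class and its test are hypothetical, invented purely to show the order in which the red, green and refactor steps happen.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

// RED: this test is written first. At that point Basket.total() does not
// exist yet, so the test fails (it does not even compile), exactly as expected.
class BasketTest {
    @Test
    void totalsThePricesOfItsItems() {
        Basket basket = new Basket();
        basket.add(250);
        basket.add(100);
        assertEquals(350, basket.total());
    }
}

// GREEN: just enough code to make the test pass. The very first "minimum"
// could even be a hard-coded return value; a second test would then force
// the more general version shown here.
class Basket {
    private final List<Integer> pricesInCents = new ArrayList<>();

    void add(int priceInCents) {
        pricesInCents.add(priceInCents);
    }

    int total() {
        return pricesInCents.stream().mapToInt(Integer::intValue).sum();
    }
}

// REFACTOR: with the test green, rename, extract or simplify both the
// production code and the test, re-running the test after every small step.
```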
This red-green-refactor idea is central to TDD. If you don’t follow it, you aren’t doing TDD!
If you write the tests after you have written the code, that’s not TDD!
These distinctions matter because this is where the significant value of TDD, way beyond the value of unit testing, comes from.
So what is that "significant value"?
TDD allows us to create higher-quality code. But then again, what defines "high-quality" in code?
I would argue that high-quality code is modular, loosely-coupled, has high cohesion, good separation of concerns and exhibits information hiding. You may be able to think of other properties of high-quality code, but these attributes are certainly among the defining characteristics.
What drivers are there to help us achieve high quality in code? Before TDD, the only drivers were the experience, skill and commitment of the individual software developer.
Let’s think about the mechanical process of TDD for a moment. We write a test that specifies some desirable behaviour of our system. We do that before we have written the code to fulfil the behavioural goals of the test. This means that the test can’t be tightly-coupled to the implementation, because there isn’t an implementation yet. In addition, this gives us the ability to think about the functionality before we think about the implementation. Further, if we are writing a test to assert some behaviour of the system, we would have to be pretty dumb to write a test that can’t assert that behaviour, i.e. there should be some observable result. This outside-in approach to design drives the code in some well-defined directions.
Code that is "testable" in the TDD sense is modular, loosely-coupled, has high cohesion, good separation of concerns and exhibits information hiding. Sound familiar?
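
As an illustration, again a hypothetical sketch in Java with JUnit 5: to write a test for an OrderService before any real notification system exists, we are pushed to depend on a small interface that the test can stand in for. The loose coupling and information hiding fall out of the need for testability.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

// A small, behaviour-focused collaborator interface. Writing the test first
// forces us to introduce it, because there is no concrete delivery mechanism yet.
interface Notifier {
    void send(String customerId, String message);
}

// The production code depends only on the interface (loose coupling); how
// notifications are really delivered stays hidden behind it (information hiding).
class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void confirm(String customerId) {
        notifier.send(customerId, "Your order is confirmed");
    }
}

class OrderServiceTest {
    @Test
    void confirmingAnOrderNotifiesTheCustomer() {
        List<String> sent = new ArrayList<>();
        OrderService service = new OrderService((id, msg) -> sent.add(id + ": " + msg));

        service.confirm("customer-42");

        assertTrue(sent.contains("customer-42: Your order is confirmed"));
    }
}
```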
So now, in addition to the skills and experience of a software developer, we have a process that applies pressure on us to design higher-quality code.
TDD acts as an amplifier for the skills of any software developer.
This is the magic of TDD.

So what about unit tests, then?

Unit tests have a place. They tend to be somewhat more coarse-grained than those written using TDD. They can be useful, but most organisations that write lots of unit tests see some common problems. Tests written after the code-under-test tend to be much more tightly-coupled to it. As a result, software that is well unit tested is often difficult to change, because to change it you also need to change the tests. TDD leads you to create tests that are naturally more loosely-coupled to the code-under-test and so helps to alleviate this problem.
Writing the tests first gives us the opportunity to think about the problem domain in a non-ambiguous language (the programming language) and to think about the interface that has to be exposed from the client’s standpoint. These are not really tests at all; these are "executable specifications" for the behaviour of our code.
Refactoring both the code-under-test and the test code means that we can maintain this loose coupling. It also means that we can ensure that our "executable specifications" are as clear and understandable as we can make them, to make the intent of our design clear. The value of these "specifications" is enormous. One useful side-benefit is that they exist as unit tests (noun), so while the focus of TDD is not testing, we get great testing as a secondary benefit. We say secondary because the benefit to the quality of the design significantly outweighs the usefulness of even a good suite of regression tests.
Lastly, what does it take to keep the unit tests maintainable? After the tests have been written, they have to be maintained for the lifetime of the product. If tests relate only to the technical implementation of the application (tightly-coupled to how it’s implemented), they are bound to fail when the code changes. Instead, if unit tests created as specifications in the process of TDD describe the behaviour (functionally, the what), the tests only fail when there is a change in function. The sketch below contrasts the two styles.
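
Here is a small, hypothetical contrast in Java with JUnit 5 (the ShoppingCart class and both tests are made up for illustration): the first test is coupled to how the cart is implemented, the second only to what it does.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

class ShoppingCart {
    private final Map<String, Integer> quantities = new HashMap<>();

    void add(String sku) {
        quantities.merge(sku, 1, Integer::sum);
    }

    int itemCount() {
        return quantities.values().stream().mapToInt(Integer::intValue).sum();
    }

    // Exposed only to make the brittle test below possible; a smell in itself.
    Map<String, Integer> internalQuantities() {
        return quantities;
    }
}

class ShoppingCartTest {

    // Tightly-coupled: this test knows the cart is backed by a Map, so it breaks
    // the moment we switch to, say, a list of line items, even though the
    // observable behaviour of the cart is unchanged.
    @Test
    void storesQuantitiesInAMap() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book-1");
        assertEquals(1, cart.internalQuantities().get("book-1"));
    }

    // Behavioural specification: this test only fails when what the cart does
    // changes, so it survives refactoring of how the cart is implemented.
    @Test
    void countsTheItemsThatWereAdded() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book-1");
        cart.add("book-1");
        assertEquals(2, cart.itemCount());
    }
}
```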
TDD is the best way to improve the quality of your code.
Want to know more about TDD? Check out Dave Farley’s TDD training on https://xebia.com/academy/.
