TDD myths: the problems

100% code coverage is enough

Code coverage is a poor indicator of the quality of your tests. Take the following tests as an example:

public void testEmptySum() {
  assertEquals(0, sum());
}

public void testSumOfMultipleNumbers() {
  assertEquals(5, sum(2, 3));
}

Now take a look at the implementation:

public int sum(int... numbers) {
  if (numbers.length == 0) {
    return 0;
  }
  return 5;
}

Baby steps in TDD can lead you to this implementation. It has 100% code coverage and all tests are green, but the implementation isn’t finished at all. Our experiment investigating how much tests communicate the intent of the code showed the flaws of metrics like code coverage.
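To see why the green bar is misleading here, consider one more test (not in the original example, the class name is made up): once two different non-empty sums are asserted, the hard-coded `return 5` can no longer pass, and the general loop becomes the simplest thing that works. A minimal sketch:

```java
public class SumTriangulation {

  // The generalized implementation that a third test case forces:
  // summing over the varargs instead of returning a constant.
  public static int sum(int... numbers) {
    int total = 0;
    for (int n : numbers) {
      total += n;
    }
    return total;
  }

  public static void main(String[] args) {
    // the two original tests still pass
    check(sum() == 0);
    check(sum(2, 3) == 5);
    // the additional test that kills the hard-coded 5
    check(sum(1, 2, 3) == 6);
    System.out.println("all sums correct");
  }

  private static void check(boolean condition) {
    if (!condition) {
      throw new AssertionError("test failed");
    }
  }
}
```

The point is that coverage was already 100% before the third test existed; only triangulating with a second distinct sum exposes the fake implementation.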

Debugging is not needed

One promise of TDD, or of tests in general, is that you can neglect debugging, or even abandon it entirely. In my experience, when a test goes red (especially an integration test) you sometimes need to fire up the debugger. The debugger helps you step through the code and see the actual state of the system at that moment. Tests treat code as a black box: an input results in an output. But what happens in between? How much do you want to couple your tests to your actual implementation steps? Do we need tests to cover this aspect of software development? Maybe something along the lines of Inventing on Principle, where the computer shows you the intermediate steps your code takes, could replace debugging, but tests alone cannot do it.

Design for testability

A noble goal. But are tests your primary client? No, other code is. Design for maintainability would be better: you will need to change your code, fix it, introduce new features, etc. Don’t get me wrong: you need tests and you need testability. But how much code do you write specifically for your tests? How much flexibility do you introduce because of your tests? What patterns do you use just because your tests need them? It’s like YAGNI for code exposed only to tests. Code written specifically for tests couples your code to your tests, and only things that need to be coupled should be. Is the choice of the underlying data structure important? Then couple it, test it. If it isn’t, don’t expose it, don’t write a getter. Don’t break the information hiding principle if you don’t need to. If you couple your tests too tightly to your code, every little change breaks your tests, and that hinders maintenance. The important and difficult design question is: what is important? Test that.
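To illustrate the coupling point, here is a sketch (the class and its API are invented for this example): a small recent-items cache whose backing data structure stays hidden. The checks assert observable behavior instead of demanding a getter for the internal collection, so swapping the ArrayDeque for some other structure later breaks nothing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RecentItems {
  private final Deque<String> items = new ArrayDeque<>(); // hidden detail, no getter
  private final int capacity;

  public RecentItems(int capacity) {
    this.capacity = capacity;
  }

  public void add(String item) {
    items.addFirst(item);
    if (items.size() > capacity) {
      items.removeLast(); // evict the oldest entry
    }
  }

  public String mostRecent() {
    return items.peekFirst();
  }

  public int size() {
    return items.size();
  }

  public static void main(String[] args) {
    RecentItems recent = new RecentItems(2);
    recent.add("a");
    recent.add("b");
    recent.add("c");
    // behavior-level checks: the newest item wins, the oldest is evicted;
    // nothing here depends on the Deque being the implementation
    if (!"c".equals(recent.mostRecent())) throw new AssertionError();
    if (recent.size() != 2) throw new AssertionError();
    System.out.println("behavior verified without exposing internals");
  }
}
```

If the underlying data structure were important to the contract, it would deserve its own test; since it isn’t, it stays private and the tests survive internal refactorings.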

You are faster than without tests

Some TDD practitioners claim that they are faster with TDD than without tests, because the bugs and problems in your code will overwhelm you after a certain point; so beyond a certain level of complexity you go faster with TDD. But where is this level? In my experience, writing code without tests is 3x-4x faster than with TDD, at least for small applications. There are entire communities where many applications are written without tests or with only a few. I wouldn’t write a large application without tests, but my feeling is that in many cases I go much slower. The cases where I feel faster are specification-heavy ones: parsing or writing formats, designing an algorithm, or implementing a scientific formula. So the jury is still out on this one. What are your experiences? Do you feel slowed down by TDD?


20 Responses to TDD myths: the problems

  1. iongion says:

    The same here, man, the same: for parsers, format handling stuff, protocols… and for probably any new algorithm invented in this world, I need to have the tests; they are more like a simulation environment. But for the rest, testing gets in my way, really badly.

    Especially on things like application architecture: either I did not understand testing, or it sucks for everyone the same.

    • jenslukowski says:

      In my experience TDD works for small applications, but if the application gets larger, things like the application architecture must be worked out beforehand and maybe prototyped without tests. I haven’t found a way to successfully TDD something cross-cutting like the architecture of a system.

  2. maremara13 says:

    Great writeup. When I do my coding, I don’t start with the test, I start with the consumption, e.g. in samples; then, when I am finished with the API, I write the tests. This makes me think of the code from a ‘consumption point of view’, while still having very good test coverage.

    It’s almost like TDD, though not quite, I guess.

    You might want to check out my work at http://magixilluminate.wordpress.com

    It has some nifty features. Among other things, the tests are an integral part of the finished app itself and can be verified from within the UX any time the end user wishes.

    This might sound silly, but due to the dynamic nature of my project things change, and the end user might do things which break the tests, even without coding in fact, hence this is necessary.

    • jenslukowski says:

      Thanks. ‘Consumption point of view’ is key here. It would be great if writing the tests were almost as fast as testing by hand with samples etc.

  3. TDD is a practice, and I personally do not think of it as a good practice.

    These automated tests that run before and after refactoring are no substitute at all for real-world testing. Not only that, developing these test cases can often take a very long time.

    TDD does have some advantages and can be applied successfully in certain projects, but, in my opinion, its disadvantages far outweigh the advantages.

    • jenslukowski says:

      I don’t think any TDD proponent would recommend leaving out real-world testing; you cannot simulate real users and real systems. Also, writing acceptance tests is really, really expensive.

  4. ohter says:

    Slowed down by up to a factor of 5. If code parts need redesign, tests need redesign too.

    • jenslukowski says:

      I tend to disagree here. Yes, a change in the interface results in changing the tests, but you can shield your tests from many refactorings, like extract method or changing the internal data structure.

  5. Zen Master says:

    TDD has its place, but as with any “good practice” it can easily be misused (or over-used). I’m biased towards writing tests after doing the architecture of an application.

  6. Hi,

    I totally agree on the first point. It’s even one of the most complex things to understand with TDD: “what is my next test?”

    But I don’t really agree with the other points.
    On design for testability: yes, if you write tests first, your code is more testable. But the test calls your code as the application would. So design for testability is actually design for “API usability”. And I don’t break the rule of encapsulation to make an assertion possible; I find other ways to write my assertions. It’s not always easy, but you find ways with experience. And I don’t know objectively what “maintainability” means. The only meaning for me is “tested”, because if I have tests on my code, I’m confident enough to maintain it.

    Points 2 and 4 are quite the same for me.
    The fact that you debug less when you truly do TDD is right, imho. If you need to debug, it means your test is too large. I am sometimes forced to debug code, and the only reason is that my test involves too many components (and on legacy code it’s quite difficult not to debug).
    The time you spend debugging, now and later, should be part of the total time you spend developing a feature. Thinking the feature is over once you’ve just written it is a short-term vision. Some people may break the code, and they’ll spend time debugging to understand and fix it; they don’t have a test that tells them where they broke the code. You have to think long-term when you measure the time spent on a feature, meaning the application’s lifetime, not just the first creation of the feature.

    • jenslukowski says:

      Re: What’s the next test? Uncle Bob formulated the transformation priority premise for this.

      Well, I found myself often writing code for exposing data structures or internal state for tests to assert. How are you doing this?

  7. I think you wrote about *your* myths…
    • 100% code coverage is not a goal but a consequence of TDD.
    • Who can think that debugging is not needed???
    • “But are tests your primary client? No. Other code is.” Er… what are tests if they are not code???
    • You are *not* faster than without tests, you are just more consistent.

    Anyway, who can really develop without tests today???

    • jenslukowski says:

      Tests should always be written. But IMHO it makes a big difference in application architecture, design and mindset if you write test first, test after or test driven.
      Re 100% coverage: what I meant is that code coverage is just a bad metric for the quality of your tests
      Re: debugging: I don’t know where I read it
      Re: yes, but tests are special code, you don’t write tests for it
      Re: faster, that’s a claim made by Uncle Bob

      This post is part 2, the first post contained pro TDD “myths”

      http://schneide.wordpress.com/2013/02/18/tdd-myths/

      • “it makes a big difference in application architecture, design and mindset if you write test first, test after or test driven.“ Totally agree, and test driven is the best ;-)
        • 100% coverage: it is not a bad metric; it is bad only if it is the only one. And imho it is just useless with TDD.
        • “tests are special code, you don’t write tests for it” What do you mean? I don’t see why you wouldn’t want to write your tests to ease the API definition. The best way to write a good API is to use it. Your tests could be used as tutorials, for example.
        • “faster, that’s a claim made by Uncle Bob” *Maybe* (and even that isn’t sure) the first version of your parser or your specific algorithm will be written faster, but the time you will spend debugging it will be longer than with a good TDD approach.

  8. Rook says:

    I totally agree with this. Test code basically just verifies what you know, or what you assume you know; but unfortunately what you do not know, or have gotten completely wrong, is the defect. And how do you present the unit tests to the BA?

  9. KolA says:

    Unit test obsession in general, and TDD in particular, do slow down development 3x-4x. After over 10 years of dev experience and a number of successful projects delivered (with very few unit tests but with some integration tests, see below) I can say that obsessive unit testing is a good practice for juniors, to make them think harder about SOLID principles, and that’s it. For experienced devs it’s only an annoying distraction.

    Another myth is that unit tests simplify complex refactorings. They do simplify refactoring of the *units*, the kind of refactoring where public interfaces don’t change. But those refactorings are easy, and a more or less competent developer can usually do them confidently without tests.
    As for complex, large-scale refactorings, where lots of interfaces and classes merge, split and mutate (you know, the ones that are really useful), a multitude of unit tests can actually make things worse. It’s a breaking change, and your unit tests are nothing more than small client apps for the broken public interfaces, so you have to fix them or even write a new set of tests, because the old ones may just not make any sense after a big refactoring. At this stage many people give up on TDD for good :-)

    It’s automated end-to-end integration tests that are actually useful. They’re not affected that much by big refactorings, and they test the actual scenarios that end users perform every day.

    • jenslukowski says:

      Thank you for your insights. The main focus of tests in general is to secure the status quo, hence to prevent regressions. In my experience, unit tests only help with small, well-defined functionality like calculations or parsing routines; most other problems require too much infrastructure to be unit tested in a reasonable way. Especially in the areas of refactoring or restructuring code, unit tests add almost no value and slow down the refactoring. I found automated end-to-end tests too costly to create and maintain when bringing new functionality into the system; the cost/value ratio is just plain wrong imho.

      • KolA says:

        You’re welcome. The price/value ratio depends on the project. My terminology might be wrong, but by integration tests I don’t always mean something that requires a dedicated nuclear plant to run (however, sometimes it does :) ). It can be just testing on module/tool boundaries. The trick is to make them simple enough.

        Example 1.

        I used to write lots of document conversion utilities, with gigabytes of poorly structured, human-made input files (pdf, html and the like) as input and some custom XML as output. A perfect example where “official” requirements plainly suck, because nobody could analyze all those millions of input files.

        And a couple of integration (or “acceptance”, if you will) tests:

        1) a test to verify assumptions about the input docs: check for unexpected symbols, tags, encodings etc.
        2) after conversion, extract/normalize the plain text from the input document (pdf) and the output document (xml) and compare them.
        3) make sure the output XML complies with the standard XSD

        These simple tests caught an infinite number of potential bugs and were the best tools to clarify requirements, at very little cost.
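        Check 3 in the list above, validating the output XML against an XSD, can be sketched with the JDK’s built-in javax.xml.validation API (the inline schema and document here are invented stand-ins for the real converter output):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class XsdCheck {
  // Hypothetical stand-ins for the real schema file and converter output
  static final String XSD =
      "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
    + "<xs:element name='doc' type='xs:string'/>"
    + "</xs:schema>";
  static final String XML = "<doc>extracted text</doc>";

  // Returns true if the XML document validates against the schema
  static boolean compliesWithXsd(String xml, String xsd) {
    try {
      SchemaFactory factory =
          SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
      Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
      Validator validator = schema.newValidator();
      validator.validate(new StreamSource(new StringReader(xml)));
      return true;
    } catch (Exception e) {
      return false; // validation error or malformed input
    }
  }

  public static void main(String[] args) {
    System.out.println(compliesWithXsd(XML, XSD));        // conforming document
    System.out.println(compliesWithXsd("<other/>", XSD)); // undeclared root element
  }
}
```

        In practice the strings would be replaced by StreamSources over the real schema file and the converter’s output, which is what keeps such an acceptance test cheap to run over millions of documents.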

        Example 2.

        A classic ERP application, i.e. UI + database with orders, customers etc. In this case database integrity and correctness are much more critical than an occasional glitch in the user interface. Tests check that the underlying database changes as you might expect and that all invariants hold after this or that operation. And yes, you have to set up an integration test infrastructure with an actual database, but it’s a life saver and not as expensive as it seems.
        This project also got many unit tests, but in my judgement most of them were more burden than benefit.

        Example 3.

        A complex stock exchange application, with an integrated test script tool for writing automated UI tests running inside the same process. A team of QA automation specialists is separate from the dev team (though they work closely together). A very expensive setup, but completely justified given the relentless requirements for performance, concurrency and the extreme statefulness/volatility of the system. Almost unbelievable that it works without unit tests and is very stable.
