20 common mistakes when doing Test-Driven Development

These are based on my own experience as a developer and what I've heard and seen from other people working with me.

(All the notes are from my talk TDD - Making sure everything works, at the Agile Transformation Summit 2015. You can download the full deck in PDF here).

  1. Thinking that your code is flawless: Quit thinking that you are above the rest because chances are that you are not: your code is as bad as mine. We all need help, and TDD is a great way to get it.

  2. Thinking that it's all about the tests: It's also about design, confidence, and documentation. In a way, tests are a very nice byproduct of the process.

  3. Asking for permission: Why in the world would you consider asking for permission to do your job well in the first place? And if you are asking, it means that there's a different viable alternative, right?

  4. Not having your entire team on board: Your team should be able to speak your language and give you support whenever needed. During TDD, you want to keep everyone engaged and contributing.

  5. Pursuing a specific code coverage: Code coverage is just about quantity, not quality. It doesn't tell you how good your tests are. Are they testing the right thing? Are they easy to read and maintain?

  6. Failing to be consistent: An abandoned test suite is worse than not having anything at all. It's not-useful code that sits there needing maintenance and accumulating dust every single day.

  7. Not running your tests frequently enough: If you are not, something is really wrong with your TDD process. Your test suite is your thermometer, telling you how you are doing every step of the way.

  8. Failing to define proper conventions: You need solid naming, structural, and architectural conventions to keep the entire team speaking the same language throughout the project.

  9. Testing other people's code: By definition, you can't do this. Splitting this process is like having one person drink a beer and another person puke when the former gets drunk.

  10. Writing code without having a failing test: This is probably the main problem you'll face. It's a habit that will take you a long time to break, but eventually the right way will become second nature.
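
A minimal sketch of the rhythm this implies (the `fizzbuzz` function and its test are hypothetical examples, not from the post): the test is written first and fails, and only then is the production code written to make it pass.

```python
import unittest

def fizzbuzz(n):
    # Production code, written only after the test below had failed once.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Step 1: this test existed (and failed) before fizzbuzz did.
    def test_multiples_of_three_return_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
```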

  11. Testing code that’s already written: Sometimes the code is already there, and if so, why do you need to write tests for it? I'd rather not write tests unless I have to modify the original code.

  12. Writing a test before all other tests are passing: Go one step at a time. You can't concentrate on multiple failing tests at the same time. Being methodical will pay off big.

  13. Refactoring before all tests are passing: Don't change code without having working tests. Tests are the only way you have to know whether your changes were successful.

  14. Writing code unrelated to the failing test: If the code is not covered by the failing test, nothing can tell you whether the code works as intended. Unrelated code is untested code.

  15. Testing more than one thing at a time: A test testing more than one thing is a complicated test. Keep tests as simple as possible. If you pile up assertions, you won't know which one broke without debugging when the test fails.
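
One way to picture this (the `parse_price` helper is a made-up example): instead of a single test asserting parsing, symbol stripping, and error handling all at once, each behavior gets its own small, named test.

```python
import unittest

def parse_price(text):
    # Hypothetical helper: turns "$12.50" into the float 12.5.
    return float(text.lstrip("$"))

class ParsePriceTest(unittest.TestCase):
    # Each test checks exactly one behavior, so a failure points
    # straight at the broken behavior without any debugging.
    def test_parses_plain_number(self):
        self.assertEqual(parse_price("12.50"), 12.50)

    def test_strips_dollar_sign(self):
        self.assertEqual(parse_price("$12.50"), 12.50)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_price("$abc")
```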

  16. Writing slow tests: If your tests are slow, you aren't going to run them frequently enough. I'd rather have 99 fast tests that I can run every minute than those 99 plus one slow test that never get run.

  17. Writing overlapping tests: Avoid multiple tests testing the same thing. The telltale sign is multiple tests failing for every single error. This makes the feedback from your test suite hard to interpret.

  18. Testing trivial code: Only test what could possibly fail and forget about everything else. By testing trivial code, you get distracted from testing the code that really matters.

  19. Creating dependencies between tests: It makes it impossible to run tests individually or in a different order. A problem in one test will cause other tests to fail.
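
A common fix is to give every test a fresh fixture of its own (the `Cart` class here is a hypothetical illustration, not from the post):

```python
import unittest

class Cart:
    # Hypothetical shopping cart, used only to illustrate test isolation.
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class CartTest(unittest.TestCase):
    # setUp builds a brand-new Cart before each test, so no test relies
    # on another having run first; they pass alone, together, in any order.
    def setUp(self):
        self.cart = Cart()

    def test_new_cart_is_empty(self):
        self.assertEqual(self.cart.items, [])

    def test_add_stores_item(self):
        self.cart.add("book")
        self.assertEqual(self.cart.items, ["book"])
```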

  20. Writing tests with external dependencies: This usually produces slow tests. If dependencies break or cease to exist, your tests will stop working. This makes your test suite very fragile and unstable.
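
One common way around this in Python is to hand the dependency in from outside and replace it with a test double; the `get_username` function and the shape of its response are hypothetical:

```python
import unittest
from unittest import mock

def get_username(fetch, user_id):
    # Hypothetical function: `fetch` is injected, so tests never
    # touch a real network or database.
    response = fetch(user_id)
    return response["name"]

class GetUsernameTest(unittest.TestCase):
    def test_returns_name_from_response(self):
        # A stub stands in for the slow, fragile external call.
        fake_fetch = mock.Mock(return_value={"name": "ada"})
        self.assertEqual(get_username(fake_fetch, 42), "ada")
        fake_fetch.assert_called_once_with(42)
```

The test now runs in microseconds and keeps passing even if the real service is down.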

I'm sure you have seen more. I'd love to hear about them.

Have something to say about this post? Get in touch!

Want to read more? Visit the archive.