Saturday, 26 October 2013

Unit Testing / TDD - why you shouldn't bother

Here's the second in my intended series covering unit testing and TDD and anything else that springs to mind. Earlier in the week I started the series intending to get straight into code, but my intended "intro paragraph" ended up being an entire article, which was then retitled "Unit Testing - initial rhetoric".

So I'm not looking at code today, instead I'm gonna look at why you oughtn't bother doing unit tests.

Huh? Yeah, that's an odd thing to suggest in a series about unit tests, innit? Well let me be abundantly clear now that I'm below the fold, and you're reading the article and not just the intro: there are no bloody reasons why you shouldn't unit test. And you're a muppet if you think there are. I should add that I'm not having a go at people who thus far haven't been doing unit testing for whatever reason - those people generally know they should have been doing it, but haven't for [some reason they'd probably concede is invalid]. I'm only having a go at people who try to justify their position as being a valid one.


There's three stages in a person's career as a developer, which map onto the stages of a person's life:
  • a kid. Everyone starts here. As a newbie, one generally accepts that one knows nothing, and the best way to solve that issue is to shut up and listen / read / learn from the community around one (I know exceptions to this, that said). As a newbie one doesn't necessarily understand why, but one's generally got the good grace to see advice from a grown-up and go "yeah, there's probably merit in that".
  • a grown-up. Once one's been out in the field for a while, one knows a bunch of stuff, but also realises one doesn't know everything. And also knows that there are a lot of other people out there who know more things, or different things. There's too much bloody knowledge out there to personally keep on top of everything. And understands that when a common practice is considered "the accepted way of doing something", there's probably a reason for it. "Probably"? Make that "almost certainly".
  • in contrast, there's the adolescent. These are the ones who have been around for a bit, and can kinda take care of themselves, but their chief limitation is that they don't yet know that they don't know everything, and that the thoughts they are having are almost certainly the same ones that people before them have had. And don't understand that the best way to learn is to listen with one's ears, rather than with one's mouth. And they generally figure they know best. And they're generally wrong. And have pimples.
So, anyway - segueing right along - Bruce Kirkpatrick commented on my previous article and trotted out a bunch of the usual tropes as to why he's got valid reasons for not unit testing. And that, basically, he knows better. Hmmm... and so here I am writing this article.

Writing unit tests takes too long

Right, well I'm not going to say that writing a test and the code is going to take less time than writing just the code, but equally I don't really think the total amount of time is actually that much greater. And if you also factor in that writing your test is actually part of designing your code, then you realise you can't simply look at the coding time as a metric here. When one is using TDD, the iterations of the test and the code writing are very short, and it means that instead of having to bury time into planning stuff before one writes the code, one can actually crack on with the coding sooner. Obviously a lot of people don't plan their code before they write it either, but once we've put all those people up against the wall and shot them, we don't have to worry about them any more.

There's also the consideration that your code - as a result of using TDD - will be better, so it's perhaps an acceptable trade-off. Your code will be better in that it'll be organised better, written in a more coherent fashion, only do what it's supposed to do, and you'll be fairly confident that it also actually does what it's supposed to do. There's far less scope for unexpected surprises down the track when you employ TDD, because unexpected surprises usually come from poorly planned code that has waffle in it that creates edge-cases that never got thoroughly tested in the first place.

Also whilst there is a bit of an upfront time penalty, you're protecting yourself from downstream time penalties when it comes to code maintenance. I've been in too many situations in which I'm terrified to change code because I have no idea what the knock-on effects might be, so instead we kinda slap on band-aids and Heath-Robinson-esque strings and pulleys to make things work around the "here be dragons" code. However if your code has test coverage you've got a much better idea of what the intention was and what the moving parts are for. You'll also know with a degree of certainty that if you refactor something, your tests will break if anything else does. Obviously not everything will have 100% test coverage, but you should be able to see where coverage is missing and write a new test covering what you're about to change; then, once that test passes along with all the other tests, you can be confident your refactoring is safe.
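One cheap way out of that "terrified to change it" situation is to pin the scary code's current behaviour with a test before touching it. A sketch (names invented for illustration):

```python
# Hypothetical "here be dragons" code we want to refactor.
def legacy_slug(title):
    return title.strip().lower().replace(" ", "-")

# Before refactoring, pin the current observable behaviour down.
# If the refactor changes any of these outcomes, this fails immediately.
def test_slug_behaviour_is_pinned():
    assert legacy_slug("Hello World") == "hello-world"
    assert legacy_slug("  Padded Title ") == "padded-title"
    assert legacy_slug("already-sluggish") == "already-sluggish"
```

Now the refactoring can proceed with the tests acting as a safety net, rather than crossed fingers.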

Having good test coverage just makes you feel confident.

You also get into a rhythm with writing tests, and very quickly it becomes second nature, and it doesn't take very long to bash one out. As it were.

From a project-planning perspective, TDD is seldom a problem because you factor the timing in beforehand: you're simply saying you need [y] time rather than [x] time to do something. You'll generally find that either [x] or [y] is a completely satisfactory timeframe for completion. The thing with quoting [y] and then doing your tests is that you're more likely to come in on time and budget than if you'd just dived in and worked to [x]. Indeed if you start working to deadline [x], you'll probably end up at [z] anyhow.

There's no point unit testing for code that doesn't change

This one's just f***ing stupid. How the hell do you know ahead of time whether code will change or not? Anyway: yes, the code does change. It changes as you're first writing it. Remember that TDD means the tests come first. You build your code having proven that the preceding step in development actually works, so you're good to go to start writing the test for the next step.

That aside, it's just mind-witheringly naïve to pronounce ahead of time "this code will not change".

But, OK, here's a point that I started to consider during the writing of the previous paragraph... what about the idea of premature optimisation? One should not write code for a future that might not occur. So in writing these tests to protect against the future, is one not prematurely optimising? Well... if that's all unit tests were for... yes. In a way. I can think of two things to counter that point though.

Firstly: we're not writing the test after the code is written (and thenceforward not changing). We're doing TDD, so the test is actually proving the current iteration of the code works.

Secondly: being mindful of not prematurely optimising is not an excuse to deny the future exists. One can - with a reasonable amount of certainty - predict how things might change. You don't write the code to handle that change now (that would be premature optimisation!), but that's not to say you don't write your code in such a way as to later facilitate those changes. And this is where the tests come in. The first thing one wants to be sure about when revisiting code is that any changes one makes don't have a knock-on effect. And for that: one needs tests.
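A sketch of what "facilitate change without building the change now" can look like: leave a seam. Here (Python, all names invented) a function takes its collaborator as a parameter instead of hard-coding it. Today there's one real sender; the test injects a fake; a future change just supplies a different callable - no speculative code written in advance:

```python
def send_email(address, body):
    """Placeholder standing in for a real email gateway call."""
    pass

def notify_user(user_email, message, send=send_email):
    """Format and dispatch a notification; `send` is the seam."""
    body = f"Dear {user_email},\n\n{message}"
    send(user_email, body)
    return body

# In a test, inject a recorder instead of the real sender:
sent = []
notify_user("jo@example.com", "hello", send=lambda addr, body: sent.append((addr, body)))
assert sent == [("jo@example.com", "Dear jo@example.com,\n\nhello")]
```

The production code pays essentially nothing for this, but both the tests and the hypothetical future change get a foothold.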

And, face it: your code probably will change. So get over it, and prepare for that happenstance.

I'm a contractor and I can't get the budget for spending time on unit tests

Yes. I personally am sitting in that ivory tower of "permanent work", and my boss has factored in time and resource for us to write test coverage for our work. I'm lucky. But I also used to freelance, and once I got onto the unit testing / TDD bandwagon, I figured "right, well I sell a certain quality of work, and that quality includes thorough, demonstrable testing". And I went to my client and said "this is how I work. The good news is the result will be more solid, and it will be easier for people to pick up once I am out of the picture". And I insisted on working that way. And, to be honest, the client was reasonably happy that I took stuff seriously.

I think one can turn demonstrable code quality into a selling point. Either that, or: given you're charging contracting rates, you should probably be delivering a high quality of work, and having test coverage is just part of that. I have a fairly keen sense of professional ethics (I like to think), and when I go back to freelancing, I will make sure I am delivering work that I am professionally happy with. And I would not feel right delivering code that had not been tested (and, accordingly, was not written in a way that TDD facilitates).

That and it's simply part of your job as a developer. Like source control. And documentation. It's not a "nice to have" (although unlike source control, you can charge for it).

The code is too complex to unit test

If you employ TDD (test first), remember, then the code intrinsically won't be too complex to test. And if you're retrofitting tests to existing code and the code is too complex to test... it's the code that's the problem, not how you'd need to write the tests. If the code is too complex to test, then the code is too complex. Refactor it. It was written wrong in the first place. And remind yourself that if you did TDD, you'd not be getting into this situation in the first place.
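What "refactor it" looks like in practice: split the lump so the logic can be tested without the side effects. A minimal sketch, in Python, with invented names:

```python
# Before (conceptually): one function that parses, validates AND saves -
# impossible to unit test without hitting the DB. After the split:

def parse_age(raw):
    """Pure logic: trivially unit-testable."""
    age = int(raw)
    if not 0 <= age <= 130:
        raise ValueError("implausible age")
    return age

def save_age(user_id, raw, db_write):
    """Thin wrapper: the side effect is isolated behind db_write."""
    db_write(user_id, parse_age(raw))
```

All the interesting logic now lives in `parse_age`, which needs no database, no mocks, nothing - just inputs and assertions.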

The code is too simple to unit test

Again, this is applying the concept of TDD the wrong way around. Or you might be thinking "this is so simple it doesn't need a test". Do one anyway. It'll be a simple test! And it stands you in good stead for when the code needs to change later on.
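And the "simple test" really is that simple - it costs seconds to write and pins the behaviour down for whoever changes the code later. A hypothetical example:

```python
# Even a one-liner gets a test. Names are invented for illustration.

def is_weekend(day_name):
    return day_name in ("Saturday", "Sunday")

def test_is_weekend():
    assert is_weekend("Saturday")
    assert is_weekend("Sunday")
    assert not is_weekend("Monday")
```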

This is a destructive process, so it cannot be unit tested

Say you've got a 20-line function and the last line writes to the DB, and you don't want your tests doing that. Fair enough. But refactor so you have one testable function with 19 lines in it - for which you test the logic, and the inputs it sends on - and a new function which solely does the DB process. Omit the test for that one. Then you're still covered for 95% of the logic.
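A sketch of that split, scaled down (Python, invented names): the calculation stays in a pure, testable function, and the single destructive step moves to its own function that the unit tests simply don't cover:

```python
def summarise_order(lines):
    """The '19 lines' of logic: pure, so trivially unit-testable.

    `lines` is a list of (quantity, price) tuples.
    """
    total = sum(qty * price for qty, price in lines)
    return {"total": total, "line_count": len(lines)}

def record_order(lines, db_insert):
    """The destructive last step, isolated. db_insert itself goes untested."""
    summary = summarise_order(lines)
    db_insert(summary)  # the one untested, destructive line
    return summary
```

In a test, `db_insert` can be a fake (e.g. `a_list.append`), so you can still assert that the right inputs get sent to the DB layer without ever touching a real database.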

This is actually an area in which I - and our team at work, in which I am not the "expert" on TDD or unit tests - have yet to come up with a foolproof approach. But we are now trying the approach above. We have a lot of tests that test DAO proc calls (the inputs to the function, and that the correct schema of data comes back from the DB). Strictly speaking these are integration tests, not unit tests, and we should not be unit testing for them. However they have also saved our skin sometimes when the DB team inadvertently change the proc without telling us, and suddenly our tests start failing (instead of us not noticing, and having our application fail instead). So having these inappropriate tests is a "net good" for our codebase.

The missing bit here is that we don't have "integration tests", so misusing unit tests is a bandaid. However we also don't have a test DB, so we have to be judicious about how we test processes that write to the DB. Our position is basically that no-one has a 100% answer, but a 90% answer is better than a 0% answer.

Bear in mind that unit tests are supposed to be testing logic, not process. There are other tests for testing process. But one's code should be as unit testable as possible. TDD assists this.

It's all well and good for new code, but we've got millions of lines of existing code!

Yeah, everyone's in that boat. Being in this position can be daunting. But don't feel you need to go from 0% test coverage to 100% coverage in one fell swoop. My approach here is that one draws a line in the sand, and goes "OK, from now on we're doing TDD. Any new code gets the TDD treatment". What this means is that obviously any entirely new function gets the usual TDD treatment, but it also means that any mods to existing functions also get the TDD treatment. I'll demonstrate what to do in this regard once I get onto showing examples, but there's two approaches I take here:
  1. I retrofit 100% coverage onto the existing function before I start. However I need to get clearance from the boss to do this, as it takes time we might not have budgeted for. That said, it's the safest thing to do, and often isn't that bad a prospect. One thing that doing this exercise does reveal is that code not written following the TDD approach is often terribly over-written, does too much, and is a bit incoherent. So writing the tests can be time-consuming. Which leads me to the second approach...
  2. Write 100% test coverage for the code you're changing. If you're adding a new optional argument: test that [passing it in] and [not passing it in] both do the thing that is supposed to be done (whatever that is). If you're adding a conditional statement: test its true and false cases. This leaves a lot of the code untested, but it leaves it better than it was beforehand. And the next person coming along will add tests for what they are doing, and eventually you'll get there.
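A sketch of approach 2 (Python, names invented): suppose we're adding an optional `currency` argument to an existing, untested formatter. Only the change gets tests - the new argument both passed and omitted - and the rest of the legacy function stays untested for now:

```python
def format_price(amount, currency="GBP"):
    # (imagine existing legacy logic here, still untested)
    # The `currency` parameter is the new, and therefore tested, bit.
    return f"{amount:.2f} {currency}"

def test_currency_defaults_to_gbp():
    assert format_price(9.5) == "9.50 GBP"

def test_currency_can_be_overridden():
    assert format_price(9.5, currency="EUR") == "9.50 EUR"
```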
Don't fret about not having 100% coverage either on the entire system or an entire class or an entire function. And don't worry about what a daunting prospect it might be to get from 0% to 100%. Getting from 0% to 1% is an improvement. Getting from 0% to 0.1% is an improvement. Just employ the boyscout rule: "always leave [the code] cleaner than you found it". Just doing that is a step in the right direction.

Unit tests won't tell me if the application actually works

Correct. And no-one ever said it would. Unit testing and TDD are not a panacea, and don't claim to mitigate the need for other sorts of testing. One still needs to do integration testing (as I mentioned above), UI testing, A/B testing, all sorts of stuff. Load testing is absolutely essential for your application too, but beyond the remit of unit testing. Unit testing addresses none of those things; you still need to do all of them as well. TDD is just a practice for making sure your logic units correctly do what you intend them to do. It's part of code quality, and part of application quality. But just part of it.

I can test this stuff myself without writing code for it

You're missing the point of TDD. And, anyway, no you can't. Just eyeballing the code is not adequate to test anything but the most superficial logic. And even then you'll miss really obvious stuff. And eyeballs won't help you with regression-testing refactoring jobs. Nor will they help you with changes that have knock-on effects elsewhere in your code.


Here are some articles I read whilst drafting this. Almost all of the above are my own words & based on my own experiences; and indeed what I meant to write when I sat down. However as prep, I read a bunch of other stuff to verify my thoughts before committing to them, as well as grabbing a coupla ideas these articles reminded me about.


Well there you go. Please do feel free to come up with more excuses. But save me some time... google first. But if you come up with a good genuine reason not covered above, in the linked-to articles or on Google: let's discuss it!

And very soon... I'll get onto some actual code...