Sunday, 3 November 2013

Unit Testing / TDD - why you should bother

OK, I actually showed some code in the last entry in this series, but now we're back to me spouting rhetoric (read: waffling on). This stems from Andrew Scott making a very good observation about a benefit of unit testing (not so much TDD) in a comment in my last article:
Adam, can I add another under estimated use for TDD. Apart from the bit you touched on about refactoring. But there comes a point when one will need to migrate to a newer version of ColdFusion or Railo, and this would help identify migration problems as well.

And Andrew is dead right.

After my earlier article "Unit Testing / TDD - why you shouldn't bother", I meant to write a follow-up along the lines of this current one - why you should bother - to put a more positive spin on why one should engage in these activities. But I found that most of the things I could think of were just the reverse of the stuff I covered in that article, so it seemed pointless writing another article saying the same thing in reverse. Just like how that sentence ended up.

However one of the things I overlooked completely was Andrew's point: having strong unit test coverage is a godsend when one comes to update the version of the CFML engine (or any other language platform, for that matter) the code is running on, or other low-level dependencies. And I couldn't think of a good place to remind people of this, so I'm going to write another short article on the topic. I'm sitting at Shannon Airport - one of three passengers in the place at the moment as far as I can tell - and have a coupla hours to kill before my flight. So this will help pass the time. The Guinness count, btw, is currently "one". And I'm getting up to order my second.

There's a few half-formulated things I'd like to add here too, but I have to concede I have googled "non-obvious reasons to do TDD unit testing", and am using a bunch of the results from that to pad out / formalise my thoughts. And look for new points to consider.

It's a good basis for testing risk when upgrading

This is Andrew's point. And it also relates to something Bruce said before: that he doesn't see the point in testing code that won't change. Obviously Bruce's position is borne of a lack of "getting" TDD, but if one already has code which hasn't got tests: why back-fill them?

Well the code itself might not be intended to change (I'd still like to know how Bruce divines this ahead of time?), but the environment around it certainly might. At some point one is going to have to upgrade a version of ColdFusion (or Railo / OpenBD), or migrate from one to the other, or be asked by a client if an existing app written for one engine will run on another. Now unit tests are not the cure-all for this, but they're a good starting point.

A coupla years ago - the last time I did freelance work, just as I was starting my current permy role - I had developed an application using CF8, knowing that it would need to be deployed on OpenBD (ColdFusion 8 being the closest match to OpenBD at that point). I wrote it on CF8 because I think CF8 is a better product than OpenBD, and is just easier to work with: error reporting and general behaviour is... just... better (subjective, I know). The app in question was admittedly just an API, so a dead sitter for 100% unit test coverage (which it has), so I was happy to develop on CF8 (and have all my tests pass), then test on OpenBD and address the incompatibilities reported when the tests failed. And fail they did... OpenBD does some daft stuff... so does CF8, so it's a match made in hell as it turns out. Still: I was able to rejig the tests and the code, rerun the tests, and easily tell when the thing would work on both. Then I had to port it to Railo too, and that was dead easy with the test coverage I had.

Cross-migration (from one CFML engine to another) will always throw up more challenges than just a version upgrade (eg: CF8 to CF9), but these too always have backwards compat issues and weirdo idiosyncrasies to deal with, and unit tests help pick this up. Not having unit test failures when testing an upgrade does not indicate there's no problem, but if you run your tests and stuff fails all over the show, you know you've got some work ahead of you.

So having the test coverage helps.
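To illustrate the shape of test I mean - an MXUnit-style test case, where the same suite gets pointed at each engine in turn. The component and method names here are entirely hypothetical; it's the pattern that matters, not the specifics:

```cfc
// UserApiTest.cfc - hypothetical MXUnit test case (component, CFC path and
// method names are made up purely for illustration)
component extends="mxunit.framework.TestCase" {

	public void function setUp() {
		// the object under test; com.example.UserApi is a placeholder
		variables.api = new com.example.UserApi();
	}

	// This pins down behaviour the app relies on. Run the same suite on
	// CF and then on OpenBD / Railo: if the engines disagree on anything
	// the code depends on, it shows up as a red test, not a prod surprise
	public void function testGetUserReturnsStructWithEmail() {
		var user = variables.api.getUser(id=1);
		assertTrue(isStruct(user), "getUser() should return a struct");
		assertTrue(structKeyExists(user, "email"), "user struct should have an email key");
	}
}
```

The point being: the suite is a portable, executable statement of "what the app needs the engine to do", which is exactly what one wants when migrating.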

Being realistic

Taking a TDD approach kills a lot of the invalid developer optimism we seem to all be born with (yes, including me... it might not show, but I do experience optimism sometimes. Generally misplaced, granted ;-). The optimism I mean is the kind of "she'll be 'right" thing that is intrinsic to all NZers' attitudes to life ("she'll be 'right, mate" is kind of a Kiwi axiom), and seemingly to a lot of developers too. What I mean is one will look at a problem and go "yeah, easy, that'll take six minutes to do", and during none of that "planning" has one stopped to think about the ramifications of the work, what potential risks there are, what moving parts are involved: in general one has just looked at the superficial surface of the challenge.

If one takes a TDD approach, the mindset is already "There Will Be Bugs", but that's fine because we'll approach things more thoughtfully and thoroughly and not simply throw code at the screen. That, and there's also the comfort that when one changes code x, if it has a knock-on effect in code y, you'll find "There's a test for that", and you'll be more likely to spot these knock-on effects elsewhere in the code too.

TDD is a form of programming cynicism, I guess.

Simplifying the challenge

So often I have sat back when given a task and gone "how the f*** am I going to do all that?", then sat there trying to plan the thing in my head - down to the lines of code - before realising it's far too complicated to hold in my head in its entirety anyhow, so I just get lost as to what to do. I like to think I'm not the most stupid person in the world (I like to think a lot of things: this does not make them true, I know ;-), so perhaps you've experienced this too. TDD is not necessarily a solution to this, but it does dilute a lot of the paralysis. This is because one becomes accustomed to thinking in units of "it just needs to do [this small thing] for now... and then another small thing after that... and another small thing... etc". And it's a lot easier to think in a series of small things than it is one big thing. It just makes it seem less daunting, and more manageable. I guess it's expectations management: TDD changes one's expectations to think smaller.

This should not supplant higher-level architecture and design and the like: for a complex task that still needs to be done. But that stuff shouldn't be worrying itself about code; rather it should be worrying about required outcomes.

Understanding the requirements

This is kinda interlinked with the previous point. But to write a test for something, one needs to understand what the requirements are, so doing TDD forces one to think more closely about what one is doing. When I say "requirements", I don't mean the ones handed across from a BA - those are at a higher level than TDD addresses - but the requirements of the code: you need to work out exactly what you have to do before doing it (because you need to be able to test the thing before it's written). I guess it's like an exam: to set the questions, one needs to demonstrate one understands what the answer needs to be, if not necessarily the minutiae of the actual answer itself.
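To make that concrete: in TDD the test exists before the code does, so the assertions end up being the written-down requirements. A hypothetical sketch - there is no slugify() method yet, and StringUtil is a made-up component; that's the point:

```cfc
// SlugifyTest.cfc - written *before* slugify() exists. The assertions
// below are the requirements, stated executably. All names hypothetical.
component extends="mxunit.framework.TestCase" {

	public void function setUp() {
		variables.util = new StringUtil(); // doesn't exist yet: test goes red
	}

	// Requirement: lowercase the input and hyphenate the spaces
	public void function testLowercasesAndHyphenates() {
		assertEquals("unit-testing-tdd", variables.util.slugify("Unit Testing TDD"));
	}

	// Requirement: punctuation gets stripped, not hyphenated
	public void function testStripsPunctuation() {
		assertEquals("why-you-should-bother", variables.util.slugify("Why you should bother!"));
	}
}
```

One can't write either assertion without having first decided exactly what the function is supposed to do: that's the "setting the exam questions" bit.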

Avoiding code stagnation

I've been in situations wherein we have had mountains of code that's kinda been designed as if it was a digital game of Jenga. We dared not change any of it because we had no idea what a lot of it did, whether it was supposed to do it, or even if it was still in use (the original devs have since shuffled off. Not only from the job, but from this mortal coil, if my prayers were answered). And that code was bad a few years ago when I was looking at it, and is that much worse and more out of date now because the current developers still can't change it for all the same fears.

On the other hand, with code that has good test coverage, it's much easier to check the level of test coverage, add some more tests if there's holes in said coverage, then just dive in and "upgrade" the code. Coding practices change, and developers get better at their jobs, so to think that code won't change - and therefore to not bother planning for it to change - is dumb. I think one should always think that code will change, later on. Because it will. It always does. Make your own job, or the job of those who follow you, easier by planning for that and delivering the test coverage (and make sure the BAs document the work, FFS... they're seldom any good at that side of a tranche of work).

We all like cheap wins

One of the things about web development is that it's pretty easy really (it just is), and we get results quickly when we work. TDD and unit testing help with this. The "green lights" when tests pass quickly become "easy" wins. We're getting positive feedback that we've done something right when we write the code that changes the red light to a green light. And I say this as a hardened cynic, but it still applies to me.

The resultant code is leaner and better

I don't mind writing tests, but I know when I'm wasting my time and it could be put to better use. If one is just left to write code however the hell one wants, and the testing is "yeah, that looks OK" (and "it works OK on my machine"), then the code that one turns out is likely to be an over-written (or under-written) mess. It'll cut corners, it'll implement stuff it doesn't actually need to, it'll cater to eventualities that can't happen, etc. However if one needs to actually test the solution before it's implemented, one is not going to write tests for shit that won't happen, or write tests for "bonus" functionality, or do dumb stuff like making environmental assumptions within the code (relying on specific application or session scope variables, etc). Partly because I think one will see it all as the waste of time that it is when one sits down to write the tests, and partly because one will be appalled at the hoops one needs to jump through to get the external dependencies set up to be able to test 'em. It has the effect of guilt-tripping one into not writing crappy code like that.
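The environmental-assumptions point is worth a quick sketch. These are hypothetical methods (CartService and calculateTotal() are made up, and calculateTotal() is assumed to exist elsewhere in the component), but the contrast is the real point:

```cfc
// CartService.cfc - hypothetical, purely to illustrate the contrast.
// calculateTotal() is assumed to be defined elsewhere in this component.
component displayname="CartService" {

	// Hard to test: the method reaches out into the session scope, so
	// every test first has to fake-up a session with a cart in it
	public numeric function getCartTotalScopeReliant() {
		return calculateTotal(session.cart.items);
	}

	// Easy to test: the dependency is passed in, so a test just hands in
	// an array of items and asserts on the return value
	public numeric function getCartTotal(required array items) {
		return calculateTotal(arguments.items);
	}
}
```

Having to write the test first makes the pain of the first version apparent before it's been written, which is exactly the guilt-trip I mean.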

It'll encourage better approaches

I've revisited code in the past that had mountains of model (and view?!) logic in controllers, or vast tracts of business logic in model .cfm files (as opposed to CFCs, I mean). Now I stress that if there's a way to write bad code, some people will find it, but taking a TDD / unit testing approach will minimise people falling back into bad / lazy / unthinking practices here. The code I was looking at above was old Fusebox code (XML-based Fusebox at that!), so the controllers were not even CFML, let alone in CFCs, so completely untestable. And the models were old-school act-prefixed CFMs. If yer doing TDD, the first thought is gonna be "how do I test this?", and so you'll not be putting your code in a place in which it can't - sensibly - be tested.

Obviously with less obsolete frameworks the controllers and models are all in CFCs now anyhow, but not all code is written for current-era frameworks, unfortunately (I still have not escaped Fusebox, for one!). That's not to say new code can't be written to leverage newer / safer / more robust concepts. At my current job we're still on Fusebox, but we're replacing our act-prefixed business logic files with CFC-method-based code as we go. This means all this stuff is now TDD-able, and so with all work or rework we do, the stability of our environment is increasing. It'll never be perfect, but it'll improve. That's a win.
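For anyone who hasn't suffered the act-prefixed approach: the old files set variables in a surrounding scope as a side effect, so there was nothing to call and nothing to assert on. The rework amounts to this sort of shift (PricingService and its method are hypothetical names, and the logic is a made-up stand-in for whatever the act file did):

```cfc
// An act_getDiscount.cfm would have set something like request.discount
// as a side effect, which a test simply cannot get at.
// Reworked as a CFC method (all names hypothetical): the logic takes its
// inputs as arguments and *returns* its result, so a test can call it
// directly and assert on what comes back.
component displayname="PricingService" {

	public numeric function getDiscountRate(required numeric orderTotal) {
		// illustrative rule: 10% off orders of 1000 or more
		if (arguments.orderTotal >= 1000) {
			return 0.1;
		}
		return 0;
	}
}
```

Same business logic, but now it's a unit: instantiable, callable, and TDD-able.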


That about covers the stuff that was lurking, under-developed in my mind, and a reasonable distillation of the dozen-or-so articles I read from those Google results to support those thoughts. TBH, I'm not sure I've said anything here that isn't reasonably obvious, and it's certainly far from the best piece of writing I've done, but if nothing else it firms up some of my own thoughts, and possibly will do so for you too.

For the sake of completeness, this is article four in an ongoing series on TDD / unit testing. The previous articles have been:

And here's my third pint... and another 40min until we board. I better get proofreading...