Monday 31 July 2017

Yeah, you do want 100% test coverage

G'day:
I've read a number of blog articles recently that have been eschewing 100% test coverage. The rationale being that there's little difference between 90% and 100%, and that that last 10% can be quite tricky to attain as we - being human 'n' all - sometimes leave the worst to last. Obviously we should all be doing TDD so that situation would not arise, but - again - we're human so don't always do TDD for one reason or another ("can't be arsed" is the usual one).

Another rationale is that it's not a race to 100%. IE: the object of the exercise is delivering working software which adds business value; it's not "getting 100% test coverage".

The latter is true. But we still aim for 100% test coverage for basically one good reason: visibility.

First, look at this:

[screenshot: a code coverage report that's a bit short of 100%]

Not bad I guess. Not great.

Now look at this:

[screenshot: the same style of coverage report, now with something new left uncovered]

We've accidentally forgotten to cover something new. Can you see it? (It's one of the provider methods.)

On the other hand look at these two:

[screenshot: two coverage summaries where the shortfall stands out]

Somewhat easier to spot, yeah?

This is bloody handy. In this case I just commented out one of my tests to get the report off 100%, but in the real world I'd see that and go "huh, WTF?", and perhaps discover I had a logic branch untested, or that a test I thought was covering something wasn't actually doing so. Or I just forgot to put the @covers annotation on the test. Whatever the situation: it's handy for the shortfall to be so obvious.
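
For anyone who hasn't used it, here's roughly what that @covers annotation looks like on a PHPUnit test. This is just a sketch - the class and method names are made up for illustration, not lifted from our codebase. The point is that with @covers in place only the listed method gets credited with coverage from this test, so a missing or wrong annotation (depending on how strictly coverage is configured) will show up as a shortfall in the report.

<?php

use PHPUnit\Framework\TestCase;

// Illustrative class under test: names invented for this example
class PriceCalculator
{
    public function applyDiscount(float $price, float $percentage): float
    {
        return $price - ($price * $percentage / 100);
    }
}

class PriceCalculatorTest extends TestCase
{
    /**
     * Only PriceCalculator::applyDiscount gets credited with coverage
     * from this test, because that's all the @covers annotation lists.
     *
     * @covers PriceCalculator::applyDiscount
     */
    public function testApplyDiscountReducesThePrice()
    {
        $calculator = new PriceCalculator();

        $this->assertSame(90.0, $calculator->applyDiscount(100.0, 10.0));
    }
}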

What it also means is that part of our pull-request process is running a Bamboo build which runs all our tests (plus some other code quality tools):

[screenshot: a green Bamboo build]

Everything's green here, so it's OK for code review. If it was failing, I'd get a nudge from my reviewers going "erm... you sure this is ready to go?" And even if I thought it was, we can't merge anything whilst the build is failing anyhow. The issue needs to be addressed.

Now the key thing here is this:


We don't test everything, but we cover everything. Where "cover" might mean "we don't need to test this". A single variable assignment, for example, does not need testing. We trust PHP to follow simple instructions. The thing is: this still shows up as green in the test coverage report.
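
To make that concrete, here's a rough sketch of the sort of thing I mean (the class and method names are made up for this example). PHPUnit's coverage tooling honours a @codeCoverageIgnore annotation, so a trivial method like this gets excluded from the coverage calculation and the file can still report as green:

<?php

// Illustrative only: a class with a method that's just a straight assignment
class TaskSettings
{
    /** @var string */
    private $label;

    /**
     * No branching, no logic: we trust PHP to follow simple instructions,
     * so we exclude this from the coverage calculation rather than test it.
     *
     * @codeCoverageIgnore
     */
    public function setLabel(string $label)
    {
        $this->label = $label;
    }
}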

We do this in a few situations:

  • the method has no branching logic and is otherwise pretty simple. Ignore it.
  • we're backfilling coverage on old code we need to maintain, but it's such a mess we simply don't have time to solve all the testing shortfalls right now, so we test what's urgent (the bit we're changing), and just mark the rest of it as ignored. Now that file is green, and we can see if it stops being green.
  • similar to the above, but we know the code in question is not long for this world. So we ignore it and stick in a TODO cross-referencing the ticket that will deal with it. And the ticket must already be lined up to be worked on "soon" (there's a sketch of what this looks like just below the list).
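
For those last two situations, the sketch below shows the sort of thing we do (the class name and the TODO wording are invented for illustration; obviously the real TODO points at a real ticket). PHPUnit's @codeCoverageIgnoreStart / @codeCoverageIgnoreEnd annotations let us exclude just the legacy block we're not dealing with yet, so the file reports green while leaving a breadcrumb back to the ticket:

<?php

// Illustrative only: legacy code we're not fixing in this piece of work
class LegacyInvoiceExporter
{
    public function export(array $invoiceLines): string
    {
        // TODO: remove this ignore block once the replacement-exporter ticket is done
        // @codeCoverageIgnoreStart
        $output = '';
        foreach ($invoiceLines as $line) {
            $output .= implode(',', $line) . PHP_EOL;
        }
        return $output;
        // @codeCoverageIgnoreEnd
    }
}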

There might be other situations where we need to be pragmatic and skip some stuff, but this needs to be explained to the code reviewers, and they need to agree to it.

But being able to run a report that says "100%", and which shows up anything that isn't, is gold.

Get yer coverage report green, but don't be afraid to be pragmatic and skip stuff, if there's an informed reason to do so.

Now this isn't to say one can just pepper ignore annotations around the place just to get the report green. That's daft. The object of the exercise is not to make some report green / say 100%. It's to get good code quality. This is just a tool to help identify the quality: it's still up to the devs to provide the quality. Don't forget that. It needs to be an informed, honest decision for one to ignore code.

Righto.

--
Adam