
Monday 24 May 2021

Code smells: a look at a switch statement

G'day:

There was a section in last week's Working Code Podcast: Book Club #1 Clean Code by "Uncle Bob" Martin (pt2) where the team were discussing switch statements being a code smell to avoid in OOP (this is at about the 28min mark; I can't find an audio stream of it that I can deep-link to though). I didn't think they quite nailed their understanding of it (sorry team, I don't mean that to sound patronising), so afterwards I asked Namesake if it might be useful if I wrote an article on switch as a code smell. He confirmed that it might've been more a case of mis-articulation than not getting it, but I ought to go ahead anyhow. So I decided to give it some thought.

Coincidentally, I happened to be looking at some of Adam's own code in his Semaphore project, and something I was looking at the test for was… a switch statement. So I decided to think about that.

I stress I said I'd think about it because I'm def on the learning curve with all this stuff, and whilst I've seen some really smelly switch statements, and they're obvious, I can't say I can reason through a good solution to every switch I see. This is an exercise in learning and thinking for me.

Here's the method with the switch in it:

private boolean function ruleMathIsTrue(required any userAttributeValue, required string operator, required any ruleValue){
    switch (arguments.operator){
        case '=':
        case '==':
            return arguments.userAttributeValue == arguments.ruleValue;
        case '!=':
            return arguments.userAttributeValue != arguments.ruleValue;
        case '<':
            return arguments.userAttributeValue < arguments.ruleValue;
        case '<=':
            return arguments.userAttributeValue <= arguments.ruleValue;
        case '>':
            return arguments.userAttributeValue > arguments.ruleValue;
        case '>=':
            return arguments.userAttributeValue >= arguments.ruleValue;
        case 'in':
            return arrayFindNoCase(arguments.ruleValue, arguments.userAttributeValue) != 0;
        default:
            return false;
    }
}

First up: this is not an egregious case at all. It's isolated in a private method rather than being dumped in the middle of some other logic, and that's excellent. The method is close enough to passing a single-responsibility-principle check for me: it does combine "deciding which approach to take" with "actually doing it", but it's a single - simple - expression each time, so that's cool.

What sticks out to me though is the repetition between the cases and their implementations. They're mostly the same, except for three edge cases:

  • = needs to map to ==;
  • in needs a completely different sort of operation;
  • instead of throwing an exception when an unsupported operator is used, it just goes "aah… let's just be false" (and returning false and throwing an exception are both equally edge cases anyhow).

This makes me itchy.

One thing I will say for Adam's code, and that helps me in this refactoring exercise, is that he's got good testing of this method, so I am safe to refactor stuff, and when the tests pass I know I'm all good.
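To give a rough idea of the sort of coverage I mean, it'd be something along these lines (an illustrative TestBox-style sketch I've knocked up, not Adam's actual test suite; I'm also hand-waving over how the private method gets exposed to the test, eg via TestBox's makePublic()):

// hypothetical spec, for illustration only: not Semaphore's real tests
component extends="testbox.system.BaseSpec" {

    function run() {
        describe("Tests for ruleMathIsTrue", function () {
            beforeEach(function () {
                flagService = new FlagService(); // glossing over FlagService's real constructor args
                makePublic(flagService, "ruleMathIsTrue");
            });

            it("treats = as equality", function () {
                expect(flagService.ruleMathIsTrue(5, "=", 5)).toBeTrue();
            });

            it("supports the in operator", function () {
                expect(flagService.ruleMathIsTrue("NZ", "in", ["au", "nz"])).toBeTrue();
            });

            it("returns false for an unsupported operator", function () {
                expect(flagService.ruleMathIsTrue(5, "<>", 5)).toBeFalse();
            });
        });
    }
}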


My first attempt at refactoring this takes the approach that a switch can often be re-implemented as a map: each case is a key, and the payload of the case is just some handler. This kinda makes the method into a factory method:

operationMap = {
    '=' : () => userAttributeValue == ruleValue,
    '==' : () => userAttributeValue == ruleValue,
    '!=' : () => userAttributeValue != ruleValue,
    '<' : () => userAttributeValue < ruleValue,
    '<=' : () => userAttributeValue <= ruleValue,
    '>' : () => userAttributeValue > ruleValue,
    '>=' : () => userAttributeValue >= ruleValue,
    'in' : () => ruleValue.findNoCase(userAttributeValue) != 0
};
return operationMap.keyExists(operator) ? operationMap[operator]() : false

OK so I have a map - lovely - but it's still got the duplication in it, and it might be slightly clever, but it's not really as clear as the switch.


Next I try to get rid of the duplication by dealing with each actual case in a specific way:

operator = operator == "=" ? "==" : operator;
supportedComparisonOperators = ["==","!=","<","<=",">",">="];
if (supportedComparisonOperators.find(operator)) {
    return evaluate("arguments.userAttributeValue #operator# arguments.ruleValue");
}
if (operator == "in") {
    return arrayFindNoCase(arguments.ruleValue, arguments.userAttributeValue) != 0;
}
return false;

This works, and gets rid of the duplication, but it's way less clear than the switch. And I was laughing at myself by the time I wrote this:

operator = operator == "=" ? "==" : operator

I realised I could get rid of most of the duplication even in the switch statement:

switch (arguments.operator){
    case "=":
        operator = "=="
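        // NB: no break here; we deliberately fall through to share the evaluate() case below with the other operators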
    case '==':
    case '!=':
    case '<':
    case '<=':
    case '>':
    case '>=':
        return evaluate("arguments.userAttributeValue #operator# arguments.ruleValue");
    case 'in':
        return arrayFindNoCase(arguments.ruleValue, arguments.userAttributeValue) != 0;
    default:
        return false;
}

Plus I give myself bonus points for using evaluate in a non-rubbish situation. It's still a switch though, innit?


The last option I tried was a more genuinely polymorphic approach, but because I'm being lazy and CBA refactoring Adam's code to inject dependencies and separate out the factory from the implementations, it's not as nicely "single responsibility principle" as I'd like. Adam's method becomes this:

private boolean function ruleMathIsTrue(required any userAttributeValue, required string operator, required any ruleValue){
    return new BinaryOperatorComparisonEvaluator().evaluate(userAttributeValue, operator, ruleValue)
}

I've taken the responsibility for how to deal with the operators out of the FlagService class, and put it into its own class. All Adam's class needs now is to have something injected into it that implements the equivalent of this BinaryOperatorComparisonEvaluator.evaluate interface, and it can stop caring about how the operators are handled. It just asks the evaluator to deal with it.
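To sketch what I mean by that (and to be clear: this is just the idea; it's not what I actually did to Adam's code):

// FlagService.cfc (sketch of the injected version only)
component {

    public any function init(required any comparisonEvaluator) {
        variables.comparisonEvaluator = arguments.comparisonEvaluator;
        return this;
    }

    private boolean function ruleMathIsTrue(required any userAttributeValue, required string operator, required any ruleValue){
        // the service no longer knows anything about operators: it just delegates
        return variables.comparisonEvaluator.evaluate(userAttributeValue, operator, ruleValue);
    }
}

That way the operator-handling can change - or be mocked in tests - without FlagService caring.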

The implementation of BinaryOperatorComparisonEvaluator is a hybrid of what we had earlier:

component {

    handlerMap = {
        '=' : (operand1, operand2) => compareUsingOperator(operand1, operand2, "=="),
        '==' : compareUsingOperator,
        '!=' : compareUsingOperator,
        '<' : compareUsingOperator,
        '<=' : compareUsingOperator,
        '>' : compareUsingOperator,
        '>=' : compareUsingOperator,
        'in' : inArray
    }

    function evaluate(operand1, operator, operand2) {
        return handlerMap.keyExists(operator) ? handlerMap[operator](operand1, operand2, operator) : false
    }

    private function compareUsingOperator(operand1, operand2, operator) {
        return evaluate("operand1 #operator# operand2")
    }

    private function inArray(operand1, operand2) {
        return operand2.findNoCase(operand1) > 0
    }
}

In a true polymorphic handling of this, instead of just mapping methods, the factory method / map would just give FlagService the correct object it needs to deal with the operator. But for the purposes of this exercise (and expedience), I'm hiding that away in the implementation of BinaryOperatorComparisonEvaluator itself. Just imagine compareUsingOperator and inArray are instances of specific classes, and you'll get the polymorphic idea. Even having the switch in here would be fine, because a factory method is one of the places where I think a switch is kinda legit.
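If that's hard to picture, here's roughly the shape of it (all the class and method names here are mine and entirely hypothetical; it's a sketch of the idea, not working Semaphore code):

// ComparisonEvaluator.cfc (sketch): the interface each operator-handler fulfils
interface {
    public boolean function compare(required any operand1, required any operand2);
}

// EqualityEvaluator.cfc (sketch)
component implements="ComparisonEvaluator" {
    public boolean function compare(required any operand1, required any operand2) {
        return arguments.operand1 == arguments.operand2;
    }
}

// InArrayEvaluator.cfc (sketch)
component implements="ComparisonEvaluator" {
    public boolean function compare(required any operand1, required any operand2) {
        return arrayFindNoCase(arguments.operand2, arguments.operand1) != 0;
    }
}

// ComparisonEvaluatorFactory.cfc (sketch): the one place that knows about operator strings.
// As I say: even a switch in here is kinda legit.
component {
    public any function getEvaluator(required string operator) {
        switch (arguments.operator) {
            case "=":
            case "==":
                return new EqualityEvaluator();
            case "in":
                return new InArrayEvaluator();
            // … and so on for !=, <, <=, >, >=
            default:
                throw(type="UnsupportedOperatorException", message="Unsupported operator: #arguments.operator#");
        }
    }
}

ruleMathIsTrue then collapses to return factory.getEvaluator(operator).compare(userAttributeValue, ruleValue), and supporting a new operator becomes "add a class, add a line to the factory" rather than "edit a switch in the middle of the service".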

One thing I do like about my hybrid handling above is the "partial application" approach I'm taking to solve the = edge-case.
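If "partial application" isn't a familiar term, it just means pre-filling some of a function's arguments to get back a new function that takes fewer arguments. A wee contrived example of the same trick:

// a general three-argument comparison…
compareOperands = (operand1, operand2, operator) => evaluate("operand1 #operator# operand2");

// …with the operator partially applied, leaving a two-argument handler like the others in the map
equalsHandler = (operand1, operand2) => compareOperands(operand1, operand2, "==");

writeDump(equalsHandler(7, 7)); // true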

But do you know what? It's still not as clear as Adam's original switch. What I have enjoyed about this exercise is trying various different approaches to removing the smell, and all the things I tried had smells of their own, or - in the case of the last one - perhaps less smell, but the code just isn't as clear.


I'm hoping someone reading this goes "ah now, all you need to do is [this]" and comes up with a slicker solution.


I'm still going to look out for a different example of switch as a code smell: one of those situations where the switch is embedded in the middle of a block of code that then goes on to use the differing data each case prepares, and where the code in each case is non-trivial. Extracting those cases into separate methods in separate classes that all fulfil a relevant interface will make it clearer when to treat a switch as a smell, and how to solve it using polymorphism.

I think what we take from this is the knowledge that one ought not be too dogmatic about stamping out "smells" just cos some book says to. Definitely try the exercise (and definitely use TDD to write the first pass of your code so you can safely experiment with refactoring!), but if the end result ticks the boxes for being "more pure" whilst at the same time being less clear: know when to back out, and just run with the original. At a minimum you'll be a better programmer for having taken yerself through the exercise.

Thanks to the Working Code Podcast crew for inspiring me to look at this, and particularly to Adam for letting me use his code as a discussion point.

Righto.

--
Adam

Sunday 4 April 2021

TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode

G'day:

Yer gonna need to go back and read the comments on Thoughts on Working Code podcast's Testing episode for context here, especially as I quote a couple of them. I kinda let the comments run themselves there a bit and didn't reply to everyone, as I didn't want to dominate the conversation. But one earlier comment that made me itchy, and now one comment that came in in the last week or so, have made me decide to - briefly - follow up one point that I think warrants drawing attention to and building on.

Briefly, Working Code Pod did an episode on testing, and I got all surly about some of the things that were said, and wrote my reactions up in the article I link to above. BTW Ben's reaction to my feedback in their follow-up episode ("Listener Questions #1") was the source of my current strapline quote: "the tone... it sounds very heated and abrasive". That should frame things nicely.

Right so in the comments to that previous article, we have these discussion fragments:

  • Sean Corfield - Heck, there's still a sizable portion that still doesn't use version control or has some whacked-out manual approach to "source code control".
  • Ben replied to that with - Yikes, I have trouble believing that there are developers _anywhere_ that don't use source-control in this day-and-age. That seems like "table stakes" for development. Period.
  • [off-screen, Adam bites his tongue]
  • Then Sean Hogge replied to the article rather than that particular comment thread. I'm gonna include a chunk of what he said here:

    18 months ago, I was 100% Ben-shaped. Testing simply held little ROI. I have a dev server that's a perfect replica of production, with SSL and everything. I can just log in, open the dashboard, delete the cache and check things with a few clicks.

    But as I started developing features that interact with other features, or that use the same API call in different ways, or present the same data with a partial or module with options, I started seeing an increase in production errors. I could feel myself scrambling more and more. When I stepped back and assessed objectively, tests were the only efficient answer.

    After about 3 weeks of annoying, frustrating, angry work learning to write tests, every word of Adam C's blog post resonates with me. I am not good at it (yet), I am not fast at it (yet), but it is paying off exactly as he and those he references promised it would.

    I recommend reading his entire comment, because it's bloody good.

  • Finally last week I listened to a YouTube video "Jim Coplien and Bob Martin Debate TDD", from which I extracted these two quotes from Martin that drew me back to this discussion:
    • (@ 43sec) My thesis is that it has become infeasible […] for a software developer to consider himself professional if [(s)he] does not practice test-driven development.
    • (@ 14min 42sec) Nowadays it is […] irresponsible for a developer to ship a line of code that [(s)he] has not executed any unit test [upon]. It's important to note that "nowadays" here means 2012: that's when the video is from.
    And, yes, OK the two quotes say much the same thing. I just wanted to emphasise the words "professional" and "irresponsible".

This I think is what Ben is missing. He shows incredulity that someone in 2021 would not use source control. People's reaction is going to be the same to his suggestion that he doesn't put much focus on test automation, or practise TDD as a matter of course when he's designing his code. And Sean (Hogge) nails it for me.

(And not just Ben. I'm not ragging on him here, he's just the one providing the quote for me to start from).

TDD is not something to be framed in a context alongside other peripheral things one might pick up like this week's kewl JS framework, or Docker or some other piece of utility one might optionally use when solving a client's problem. It's lower level than that, so it's false equivalence to bracket it like that conceptually. Especially as a rationalisation for not addressing your shortcomings in this area.

Testing yer code and using TDD is as fundamental to your professional responsibilities as using source control. That's how one ought to contextualise this. Now I know plenty of people who I'd consider professional and strong pillars of my dev community who aren't as good as they could be with testing/TDD. So I think Martin's first quote is a bit strong. However I think his second quote nails it. If you're not doing TDD you are eroding your professionalism, and you are being professionally irresponsible by not addressing this.

In closing: thanks to everyone for putting the effort you did into commenting on that previous article. I really appreciate the conversation even if I didn't say thanks etc to everyone participating.

Righto.

--
Adam

Thursday 11 February 2021

Thoughts on Working Code podcast's Testing episode

G'day:

Working Code (@WorkingCodePod on Twitter) is a podcast by some friends and industry colleagues: Tim Cunningham, Carol Hamilton, Ben Nadel and Adam Tuttle.


(apologies for swiping your image without permission there, team)

It's an interesting take on a techo podcast, in their own words from their strapline:

Working Code is a technology podcast unlike all others. Instead of diving deep into specific technologies to learn them better, or focusing on soft-skills, this one is like hanging out together at the water cooler or in the hallway at a technical conference. Working Code celebrates the triumphs and fails of working as a developer, and aims to make your career in coding more enjoyable.

I think they achieve this, and it makes for a good listen.

So that's that, I just wanted to say they've done good work, and go listen.

Oh, just one more thing.

Yesterday they released their episode "Testing". I have to admit my reaction to a lot of what was said was… "poor", so I pinged my namesake and said "I have some feedback". After a brief discussion on Signal, Adam & I concluded that I might try to do a "reaction blog article" on the topic, and they might see if they can respond to the feedback, if warranted, at a later date. They are recording tonight apparently, and I'm gonna try to get this across to them for their morning coffee.

Firstly as a reminder: I'm pretty keen on testing, and I am also keen on TDD as a development practice. I've written a fair bit on unit testing and TDD both. I'm making a distinction between unit testing and TDD very deliberately. I'll come back to this. But anyway this is why I was very very interested to see what the team had to say about testing. Especially as I already knew one of them doesn't do automated testing (Ben), and another (Carol) I believe has only recently got into it (I think that's what she said, a coupla episodes ago). I did not know Adam or Tim's position on it.

And just before I get under way, I'll stick a coupla Twitter messages here I saw recently. At the time I saw them I was thinking about Ben's claim to a lack of testing, and they struck a particular chord with me.

[Embedded tweets from Maaret and Mathias about testing]
I do not know Maaret or Mathias, but I think they're on the money here.

OK, so I'm listening to the podcast now. I'm going to pull quotes from it, and comment on them where I think it's "necessary".

[NB I'm exercising my right to reproduce small parts of the Working Code Podcast transcript here, as I'm doing so for the purposes of commentary / criticism]


Ahem.

Adam @ 11:27:

…up front we should acknowledge you know we're not testing experts. None of us [...] have been to like 'testing college'. […]There's a good chance we're going to get something wrong.

I think, in hindsight, this podcast needed this caveat to be made louder and clearer. To be blunt - and I don't think any of them would disagree with me here - all four are very far from being testing experts. Indeed one is even a testing naysayer. I think there are some dangerously ill-informed opinions being expressed as the podcast progresses, and as these are all people who are looked up to in their community, I think there's a risk people will take on board what they say as "advice", even if there's this caveat at the beginning. This might seem like a very picky thing to draw on, but perhaps I should have put it at the end of the article, after there's more context.

Carol @ 12:07:

Somebody find that monster already.

Hi guys.

Ben @ 12:41

I test nothing. And it's not like a philosophical approach to life, it's more just I'm not good at testing

Adam @ 13:09:

Clearly that's working out pretty well for you, you've got a good career going.

I hear this a bit. "I don't test and I get by just fine". This is pretty woolly thinking, and it's false logic people will jump on to justify why they don't do things. And Adam is just perpetuating the myth here. The problem with this rationalisation can be demonstrated with an analogy: you go on a journey without a map, wander aimlessly, and still (largely accidentally) arrive at your intended destination. In contrast, had you used a map, you'd've been more efficient with your time and effort, and been able to progress even further on the next leg of your journey sooner. Ben's built a good career for himself. Undoubtedly. Who knows how much better it would be had he… used a map.

Also, going back to Ben's comment explaining away why he doesn't test because he's not good at it: everyone starts off not being good at testing, mate. The rest of us do something about it. This is a disappointing attitude from someone as clued-up as Ben. Also, if you don't know about something… don't talk about it, mate. Inform yourself first and then talk about it.

Ben @ 13:21:

[…] some additional context. So - one - I work on a very small team. Two: all the people who work on my team are very very familiar with the software. Three: we will never ever hire a new engineer specifically for my team. Cos I work on the legacy codebase. The legacy codebase is in the process of being phased out. […] I am definitely in a context where I don't have to worry about hiring a new person and training them up on a system and then thinking they'll touch something in the code that they don't understand how it works. That's like the farthest possible thing from my day-to-day operations currently.

Um… so? I'm not being glib. You're still writing new logic, or altering existing logic. If you do that, it intrinsically needs testing. I mean you admitted you do manual testing, but it beggars belief that a person in the computer industry will favour manually performing a pre-defined repetitive task as a (prone-to-error) human, rather than automating this. We're in the business of automating repetitive tasks!

I'd also add that this would be a brilliant, low-risk, environment for you to get yourself up to speed with TDD and unit testing, and work towards the point where it's just (brain) muscle memory to work that way. And then you'll be all ready once you progress to more mission-critical / contemporary codebases.

Ben @ 14:42:

I can wrap my head around testing when it comes to testing a data workflow that is completely pure, meaning you have a function or you have a component that has functions and you give it some inputs and it generates some outputs. I can 100% wrap my head around testing that. And sometimes actually when I'm writing code that deals with something like that, even though I'm not writing tests per se, I might write a scratch file that instantiates that component and sends data to it and checks the output just during the development process that I don't have to load-up the whole application.

Ben. That's a unit test. You have written a unit test there. So why don't you put it in a test class instead of a scratch file, and - hey presto - you have a persistent test that will guard against that code's behaviour somehow changing to break those rules later on. You are doing the work here, you're just not doing it in a sensible fashion.
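To put that in concrete terms (the component and method names here are completely made up; I'm just illustrating the shift from scratch file to test):

// the scratch-file version: instantiate, poke it, eyeball the result, throw the file away
processor = new DataProcessor();
writeDump(processor.transform({value: 21})); // looks right… I guess?

// the same check as a TestBox-style test case: it sticks around and runs on every build
it("doubles the value property of its input", function () {
    processor = new DataProcessor();

    result = processor.transform({value: 21});

    expect(result.value).toBe(42);
});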

Ben @ 15:18:

Where it breaks down immediately for me is when I have to either a) involve a database, or b) involve a user interface. And I know that there's all kinds of stuff that the industry has brought to cater to those problems. I've just never taken the time to learn.

There's a bit of a blur here between the previous train of thought - which was definitely about unit testing a unit of code - and now talking about end-to-end testing. These are two different things. I am not saying Ben doesn't realise this, but they're jammed up next to each other in the podcast so the distinction is not being made. These two kinds of testing are separate ideas, only really coupled by the fact they are both types of testing. Ben's right, the tooling is there, and - in my experience at least with the browser emulation stuff - it's pretty easy and almost fun to use. Ben already tests his stuff manually every time he does a release, so it would seem sensible to me to take the small amount of time it takes to get up to speed with these tools and then, instead of testing something manually, spend that time automating the same testing; then it's taken care of thenceforth. It's just a matter of being a bit wiser with one's time.

Adam @ 16:39

The reason that we don't have a whole lot of automated tests for our CFML code is simply performance. So when we started our product I tried really hard to do TDD. If I was writing a new module or a new section of that module I would work on tests along with that code, and would try to stay ahead of the game there. And what ended up happening was I had for me - let's say - 500 functions that could run, I had 400 tests. And I don't want to point a finger at any particular direction, but when you take the stack as a whole and you say "OK now run my test suite" and it takes ten minutes to run those tests and [my product, the project I was working on] is still in its infancy, and you can see this long road of so much more work that has to be done, and it takes ten minutes to run the tests - you know, early on - there was no way that that was going to be sustainable. So we kind of abandoned hope there. [...] I have, in more recent years, on a more recent stack seen way better performance of tests. [...] So we are starting to get more into automated testing and finding it actually really helpful. [...] I guess what I wanted to say there is that a perfectly valid reason to have fewer or no tests is if it doesn't work well on your platform.

Adam starts off well here, both in what he's saying and his historical efforts with tests, but he then goes on to pretty much blame ColdFusion for not being very good at running tests. This is just untrue, sorry mate. We had thousands upon thousands of tests running on ColdFusion, and they ran in an amount of time best measured in seconds. And when we ported that codebase to PHP, we had a similar number of test cases, and they ran in round about the same amount of time (PHP was faster, that said, but also the tests were better). I think the issue here - and Adam confirms this about 30sec after the quote above, and again about 15min later when he comes back to it - is that the tests weren't written so well: they were not focused on the logic (as TDD-oriented tests ought to be); they were basically full integration tests. Full integration tests are excellent, but you don't want your tight red / green / refactor testing cycle to be slowed down by external services like databases. That's the wrong sort of testing there. My reaction to you saying your test runs are slow is not to say "ColdFusion's fault", it's to say "your tests' fault". And that's not a reason not to test. It's a reason to check what you've been doing, and fix it. I'm applying hindsight here for you obviously, but this ain't the conclusion/message you should be delivering here.

Carol @ 20:03:

I also want to say that if you are starting out and you're starting to add test, don't let slowness stop you from doing it.

Spot on. I hope Ben was listening there. When you start to learn something new, it is going to take more time. I think this is sometimes why people conclude that testing takes a lot of time: the people arriving at that conclusion are basing it on their time spent on the learning curve. Accept that things go slowly when you are learning, but also accept that things will become second nature. And writing automated tests isn't exactly rocket science; the initial learning time is not that long. Just… decide to learn how to test stuff, start automating your testing, and stick at it.

Ben @ 21:24:

I was thinking about debugging incidents and getting a page in the middle of the night and having to jump on a call and you seeing the problem, and now you have to do a hotfix, and push a deployment in the middle of the night […]. And imagine having to sit there for 30 minutes for your tests to run just so you can push out a hotfix. Which I thought to myself: that would drive me crazy.

At this point I think Ben is just trying to invent excuses to justify to himself why he's right to eschew testing. I'm reminded of Maaret's Twitter message I included above. The subtext of Ben's point here is that if one tests manually, then one is more flexible in what one can choose to re-test when hotfixing. Well, obviously if you can make that call re manual tests, then you can make the same call with automated tests! So his position here is just specious. Doubly so because automated tests are intrinsically going to be faster than the equivalent manual tests to start with. Another thing I'll note about this entire analogy: you've already got yourself in a shit situation by needing to hotfix stuff in the middle of the night. Are you really sure you want to be less diligent in how you implement that fix? In my experience that approach can lead to a second / third / fourth hotfix being needed in rapid succession. Hotfix situations are definitely ones of "work smarter, not faster".

Ben @ 21:56:

I'm wondering if there should be a test budget that you can have for your team where you like have "here is the largest amount of time we're willing to let testing block a deployment". And anything above that have to be tests that sit in an optional bucket where it's up to the developer to run them as they see fit, but isn't necessarily tests that would block deployment. I don't know if that's totally crazy.

Adam continues @ 22:36:

You have to figure out which tests are critical path, which ones are "must pass", and these ones are like "low risk areas" […] are the things I would look for to make optional.

Yep, OK, there's some sense here, but I can't help thinking that we are talking about testing in this podcast, and we're spending our time inventing situations in which we're not gonna test. It all seems a bit inverted to me. How about instead you just do your testing, and then if/when a situation arises, you deal with it; rather than deciding in advance there will be situations, and justifying to yourselves why you oughtn't test in the first place.

But I also have to wonder: why the perceived rush here? What's wrong with spending 30 minutes or more testing stuff if it "proves" that your work has maintained stability, and means you'll be less likely to need that midnight hotfix? What percentage of the whole cycle time from feature request to delivery is that 30 minutes? Especially given that making the effort to write the tests in the first place will innately improve the stability of your code, and then help to keep it stable. It's a false economy.

Tim @ 23:15:

When we have contractors do work for us, I require unit tests. I require so much testing just because it's a way for me to validate the truth of what they're saying they've done. So that everything that we have that's done by third parties is very well tested, and it's fantastic because I have a high level of confidence.

Well: precisely. Why do you not want that same level of confidence in your in-house work? Like you say: confidence is fantastic. Be fantastic, Tim. Also: any leader should eat their own dogfood I think. If there's sense in you making the contractors work like this, clearly you ought to be working that way yourself.

Tim @ 23:36:

Any time I start a new project, if I have a greenfields project, I always start with some level of unit tests, and then I get so involved in the actual architecture of the system that I put it off, and like "well I don't really need a test for this", "I'm not really sure where I'm going with this, so I'm not going to write a test first" because I'm kinda experimenting. Then my experiment becomes reality, then my reality becomes the released version. And then it's like "well what's the point of writing a test now?"

I think we've all been there. I think what Tim needs here is just a bit more self-discipline in identifying what is "architectural spike" and what's "now doing the work". If one is doing TDD, then the spike can be used to identify the test cases (eg "it's going to need to capture their phone number") without necessarily writing the test to prove the phone number has been captured. So you write this:

describe("my new thing", function () {
    it ("needs to capture the phone number", function () {
        // @todo need to test this
    });
});

And then when you detect you are not spiking any more, you write the test (ie: fill in that placeholder, as per the sketch below), and then introduce the code to make the test pass. I also think Tim is overlooking that the tests are not simply there for that first iteration: they then stay there, proving the code is stable for the rest of its life. This… builds confidence.
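Sticking with Tim's hypothetical phone number (so: entirely made-up class and method names), the filled-in version might be something like:

describe("my new thing", function () {
    it ("needs to capture the phone number", function () {
        // hypothetical stand-ins: Registration / signUp / getPhoneNumber
        registration = new Registration();

        registration.signUp(name="Zachary", phoneNumber="021-555-0123");

        expect(registration.getPhoneNumber()).toBe("021-555-0123");
    });
});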

Adam @ 24:22:

That's what testing is all about, right? It's increasing confidence that you can deploy this code and nothing is going to be wrong with it. […] When I think about testing, the pinnacle of testing for me is 100% confidence that I can deploy on my way out the door at 4:55pm on Friday afternoon, with [a high degree of ~] confidence that I am not going to get paged on Saturday at 4am because some of that code that I just deployed… it went "wrong".

Exactly.

Carol @ 25:12:

What difference between the team I'm on and the team you guys have is we have I think it's 15-ish people touching the exact same code daily. So a patch I can put out today may have not even been in the codebase they pulled yesterday when they started working on a bug, or a week ago when they had theirs. So me writing that little extra bit of test gives them some accountability for what I've done, and me some.

Again: exactly.

Ben @ 26:36:

Even if you have a huge test suite, I can't help but think you have to do the manual testing, because what if something critical was missed. [...] I think the exhaustive test suite, what that does is it catches unexpected bugs unrelated. Or things that broke because you didn't expect them to break in a certain way. And I think that's very important.

To Ben's first point, you could just as easily (and arguably more validly) switch that around: a human doing ad-hoc manual testing is more likely to miss something, because every manual test run is at their whim and subject to their focus and attention at the time. Whereas the automated tests - which, let's not forget, were written by a diligent human, right at the time they were most focused on the requirements - are run by the computer, and it will do exactly the same job every time. What having the historical corpus of automated tests gives you is increased confidence that all that stuff being tested still works the way it is supposed to, so the manual testing - which is always necessary - can be more a case of dotting the Is and crossing the Ts. With no automated tests, the manual tests need to be exhaustive. And the effort needs to be repeated every release (Adam mentions this a few minutes later as well).

To the second point: yeah, precisely. Automated tests will pick up regressions. And the effort to do this only needs to be made once (writing the test). Without automated tests you rely on the manual testing to pick this stuff up, but - being realistic - if your release is focused on PartX of the code, your manual tests are going to focus there, and possibly not bother to re-test PartZ, which has just inadvertently been broken by PartX's work.

Ben also mentions this quote from Rich Hickey: "Q: What happened to every bug out there? A: it passed the type checker, and it passed all tests." (I found this reference on Google: Simple Made Easy; it's at about 15:45). It's a nifty quote, but what it's also saying is that there wasn't actually a test for the buggy behaviour. Because if there was one: the test would have caught it. The same could be said more readily of manual-only testing. Obviously nothing is going to be 100%, but automated tests are going to be more reliable at maintaining the same confidence level, and be less effort, than manual-only testing.

Ben @ 28:15:

When people say it increases the velocity of development over time. I have trouble embracing that.

(Ben's also alluding back to a comment he made immediately prior to that, relating to always needing to manually test anyhow.) "Over time" is one of the keys here. Once a test is written, it's there. It sticks around. In every subsequent test round there is no extra effort to test that element of the application (Tim draws attention to this a coupla minutes later too). With manual testing the effort needs to be duplicated every time you test. Surely this is not complicated to understand. Ben's point about "you still need to manually test" misses the point that if there's a foundation of automated tests, your manual testing can become far more perfunctory. Without the tests: the manual testing is monolithic. Every. Single. Time. To be honest though, I don't know why I need to point this out. It's a) obvious; and b) very well-trod ground. There's an entire industry that thinks automated tests are the foundation of testing. And then there's Ben who's "just not sure". This is like someone being "just not sure" that the world isn't actually flat. It's no small amount of hubris on his part, if I'm honest. And obviously Ben is not the only person out there in the same situation. But he's the one here on this podcast supposedly discussing testing.

Ben @ 37:08:

One thing that I've never connected with: when I hear people talk about testing, there's this idea of being able to - I think they call them spies? - create these spies where you can see if private methods get called in certain ways. And I always think to myself: "why do you care about your private methods?" That's an implementation detail. That private method may not exist next week. Just care about what your public methods are returning and that should inherently test your private methods. And people have tried to explain to me why actually sometimes you wanna know, but I've just never understood it.

Yes, good point. I can try to explain. I think there's some nuance missing in your understanding of what's going on, and what we're testing. It starts with your position that testing only concerns itself with (my wording, paraphrasing you from earlier) "what values a public method takes, and what it returns". Not quite. You care about whether, given inputs to a unit, the behaviour within the unit correctly provides the expected outputs from the unit. The outputs might not be the return value. Think about a unit that takes a username and password, hashes the password, and saves it to the DB. We then return the new ID of the record. Now… we're less interested in the ID returned by the method; we are concerned that the hashing takes place correctly. There is an output boundary of this unit at the database interface. We don't want our tests to actually hit the database (too slow, as Adam found out), but we mock out the DB connector or the DAO method being called that takes the value that the model layer has hashed. We then spy on the values passed to the DB boundary, and make sure it's worked OK. Something like this:

describe("my new thing", function () {
    it ("hashes the password", function () {
        testPassword = "letmein"
        expectedHash = "whatevs"
        
        myDAO = new Mock(DAO)
        myDAO.insertRecord.should.be.passed(anything(), expectedHash)
        
        myService = new Service(myDAO)
        
        newId = myService.addUser("LOGIN_ID_NOT_TESTED", testPassword)
        
        newId.should.be.integer() // not really that useful
    });
});

class Service {

    private dao

    Service(dao) {
        this.dao = dao
    }
    
    addUser(loginId, password) {
        hashedPassword = excellentHashingFunction(password)
        
        return this.dao.insertRecord(loginId, hashedPassword)
    }
}

class DAO {
    insertRecord(loginId, password) {
        return db.insertQuery("INSERT INTO users (loginId, password) VALUES (:loginId, :password)", [loginId, password])
    }
}

OK so insertRecord isn't a private method here, but the DAO is just an abstraction from the public interface of the unit anyhow, so it amounts to the same thing, and it makes my example clearer. insertRecord could be a private method of Service.

So the thing is that you are checking boundaries, not specifically method inputs/outputs.

Also, yes, the implementation of DAO might change tomorrow. But if we're doing TDD - and we should be - the tests will be updated at the same time. More often than not though, the implementation isn't as temporary as this line of thought often assumes (for the convenience of the argument, I suspect).

Adam @ 48:41:

The more that I learn how to test well, and the more that I write good tests, the more I become a believer in automated testing (Carol: Amen). […] The more I do it the better I get. And the better I get the more I appreciate what I can get from it.

Indeed.

Tim @ 49:32:

In a business I think that short term testing is a sunk cost maybe, but long term I have seen the benefit of it. Particularly whenever you are adding stuff to a mature system, those tests pay dividends later. They don't pay dividends now […] (well they don't pay as many dividends now) […] but they do pay dividends in the long run.

Also a good quote / mindset. Testing is about the subsequent rounds of development as much as the current one.

Ben @ 50:05:

One thing I've never connected with emotionally, when I hear people talk about testing, is when they refer to tests as providing documentation about how a feature is supposed to work. And as someone who has tried to look at tests to understand why something's not working, I have found that they provide no insight into how the feature is supposed to work. Or I guess I should say specifically they don't provide answers to the question that I have.

Different docs. They don't provide developer docs, but if following BDD practices, they can indicate the expected behaviour of the piece of functionality. Here's the test run from some tests I wrote recently:

root@1b011f8852b1:/usr/share/nodeJs# npm test

> nodejs@1.0 test
> mocha test/**/*.js



  Tests for Date methods functions
    Tests Date.getLastDayOfMonth method
      ✓ returns Jan 31, given Jan 1
      ✓ returns Jan 31, given Jan 31
      ✓ returns Feb 28, given Feb 1 in 2021
      ✓ returns Feb 29, given Feb 1 in 2020
      ✓ returns Dec 31, given Dec 1
      ✓ returns Dec 31, given Dec 31
    Tests Date.compare method
      ✓ returns -1 if d1 is before d2
      ✓ returns 1 if d1 is after d2
      ✓ returns 0 if d1 is the same d2
      ✓ returns 0 if d1 is the same d2 except for the time part
    Tests Date.daysBetween method
      ✓ returns -1 if d1 is the day before d2
      ✓ returns 1 if d1 is the day after d2
      ✓ returns 0 if d1 is the same day as d2
      ✓ returns 0 if d1 is the same day as d2 except for the time part
    Tests for addDays method
      ✓ works within a month
      ✓ works across the end of a month
      ✓ works across the end of the year
      ✓ works with zero
      ✓ works with negative numbers

  Tests a method Reading.getEstimatesFromReadingsArray that returns an array of Readings representing month-end estimates for the input range of customer readings
    Tests for validation cases
      ✓ should throw a RangeError if the readings array does not have at least two entries
      ✓ should not throw a RangeError if the readings array has at least two entries
      ✓ should throw a RangeError if the second date is not after the first date
    Tests for returned estimation array cases
      ✓ should not include a final month-end reading in the estimates
      ✓ should return the estimate between two monthly readings
      ✓ should return three estimates between two reading dates with three missing estimates
      ✓ should return the integer part of the estimated reading
      ✓ should return all estimates between each pair of reading dates, for multiple reading dates
      ✓ should not return an estimate if there was an actual reading on that day
      ✓ should return an empty array if all readings are on the last day of the month
      ✓ tests a potential off-by-one scenario when the reading is the day before the end of the month

  Tests for helper functions
    Tests for Reading.getEstimationDatesBetweenDates method
      ✓ returns nothing when there are no estimates dates between the test dates
      ✓ correctly omits the first date if it is an estimation date
      ✓ correctly omits the second date if it is an estimation date
      ✓ correctly returns the last date of the month for all months between the dates

  Test Timer
    ✓ handles a lap (100ms)

  Test TimerViaPrototype
    ✓ handles a lap (100ms)


  36 passing (221ms)

root@1b011f8852b1:/usr/share/nodeJs#

When I showed this to the person I was doing the work for, they immediately said "no, that test case is wrong, you have it around the wrong way", and they were right, and I fixed it. That's the documentation "they" are talking about.

Oh, and Carol goes on to confirm this very thing one minute later.

Also bear in mind that just cos a test could be written in such a way as to impart good clear information doesn't mean that all tests do. In my experience of looking at open-source projects' tests to get any clarity on things (and I include testing frameworks' own tests in this!), I am left knowing less than I did before I looked. It's almost like there's a rule in OSS projects that the code needs to be shite or they won't accept it ;-)


And that's it. It was an interesting podcast, but I really, really strongly disagreed with most of what Ben said, and why he said it. It would be one thing if he was held to account (and the others tried this at times), but as it is, other than the joking that Ben is a naysayer, I think there's some dangerous content in here.

Oh, one last thing… in the outro the team suggests some resources for testing. Most of what they suggested seems to be "what to do", not "why you do it". I think the first thing one should do when considering testing is to read Test Driven Development by Kent Beck. Start with that. Oh, this reminds me… there was not actually much discussion of TDD in this episode. TDD is tangential to testing per se, but it's an important topic. Maybe they can do another episode focusing on that.

Follow-up

The Working Code Podcast team have responded to my observations in a subsequent podcast episode, which you can listen to here: 011: Listener Questions #1. Go have a listen.

Righto.

--
Adam