
Friday, 27 October 2023

I'm a big meany again

G'day

Chortle:

I'm new to cf development and recently joined the cfml slack community. I noticed some curmudgeonly posts from this fella at first and didn't think anything of it... I'm now realizing that he is an a**h*** to people all the time. For example, there was a question posted today that currently has 14 replies. A few of them are from others trying to answer OP's question; 7 are this guy explaining how stupid of a question it is. What gives? I'm sure he's a knowledgeable and respected developer, but the pedantry is off the charts!

DJgrassyknoll on Reddit

Guilty as charged I suppose.

However, the charges are pretty specious. I think the thread in question was the one with a question (paraphrased): "is there a function in CFML that can convert JSON to XML?". This is a largely nonsensical question, and without further elaboration it can only be furnished with nonsensical and unhelpful answers. I don't wanna do that. I wanna get the person's problem solved. I did point this out to the bod in question, and explained why, with both a real-world analogy and a code demonstration! Here's the code version:

// serialise a struct to a JSON string
json = serializeJson({key="value"})

// "convert" it to XML... by simply wrapping the entire JSON string in a single XML node
xml = xmlParse("<nowItIsXml>#encodeForXml(json)#</nowItIsXml>")

// both "work", and the xml variable does indeed contain XML. Job done, right?
writeDump([
    json = json,
    xml = xml
])

Completely answers the question as stated, but it's a completely frickin useless answer. Other than to illustrate the shortcomings of the question.

I've been answering people's CFML questions for years, and the biggest blocker to actually solving their problem is coercing what their actual problem is out of their brain. Really, a lot of people don't seem to get that we can't see the code in front of them, or the Jira ticket (sic) they're trying to deal with, or any other thoughts on the matter they might have had beyond what they type in as their question. So… not to be put off… I will explain this, and try to get a decent question out of them, so we can all understand it and then get their issue sorted. I will also often post links to How To Ask Questions The Smart Way and/or The SSCCE. I'm not going to apologise for that. It's good advice, and if more people took it on board they would improve, and their chances of getting good help when they need it would improve too. How is that bad?

We're all busy people, so the quicker and easier it is for us to get the info we need in front of us: the more likely we are to help. As one of the participants in that thread said, "[when] people ask shitty questions, I mostly will just ignore it". That's your call, pal. But I won't ignore them. I'll still try to help.

Ah - fuck it - people can say what they like about me. I don't care (and hey it's given me an in to blow some of the dust off this thing). But try to have a well-formed train of thought before doing so. And also poss sink some time into helping people to understand what it's like before stirring the pot in yer little sewing circle.

But yeah cheers for the new strapline quote.

Righto.

--
Adam

PS: props for using "curmudgeonly" though. Good word.

Tuesday, 24 August 2021

Test coverage: it's not about lines of code

G'day

A coupla days ago someone shared this Twitter status with me:

I'll just repeat the text here for google & copy-n-paste-ability:

My personal opinion: Having 100% test coverage doesn't make your code more effective. Tests should add value but at some point being overly redundant and testing absolutely every line of code is ineffective. Ex) Testing that a component renders doesn't IN MY OPINION add value.

Emma's position here is spot on, IMO. There were also a lot of "interesting" replies to it. Quelle surprise. Go read some of them, but then give up when they get too repetitive / unhelpful.

In a similar - but slightly contrary - context, I was reminded of my earlier article on this topic: "Yeah, you do want 100% test coverage".

But hang on: on one hand I say Emma is spot on. On the other hand I cross-post an old article that seems to contradict this. What gives?

The point of differentiation is "testing absolutely every line of code" vs "100% test coverage". Emma is talking about lines of code. I'm talking about test cases.

Don't worry about testing lines of code. That's intrinsically testing "implementation". However, do care about your test cases: the variations that define the feature you are working on. You've been asked to develop a feature and its variations, and you don't know you've delivered the feature (or, in the case of automated testing: that it continues to be delivered in a working state) unless you test it.

Now, yeah, sure: features are made of lines of code, but don't worry about those. A trite example I posited to a mate yesterday is this: consider this to be the implementation of your feature:

doMyFeature() {
    // line 1
    // line 2
    // line 3
    // line 4
    // line 5
    // line 6
    // line 7
    // line 8
    // line 9
    // line 10
}

I challenge you to write a test for "My Feature" and not, by implication, happen to test all those lines of code. But it's the feature we care about, not the lines of code.

On the other hand, let's consider a variant:

doMyFeature() {
    // line 1
    // line 2
    // line 3
    // line 4
    // line 5
    if (someVariantOfMyFeature) {
        // line 6
    }
    // line 7
    // line 8
    // line 9
    // line 10
}

If you run your test coverage analysis and see that line 6 ain't tested, this is not a case of wringing one's hands about line 6 of the code not being tested; it's that yer not testing the variation of the feature. It's the feature coverage that's missing. Not the lines-of-code coverage.
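To put that in code - a minimal sketch with hypothetical names, and pretending the variant flag gets passed in as a parameter - the tests map onto the cases of the feature, not onto its lines:

<?php

use PHPUnit\Framework\TestCase;

// stand-in for the ten-line implementation above
function doMyFeature(bool $someVariantOfMyFeature): string
{
    return $someVariantOfMyFeature ? 'variant result' : 'baseline result';
}

class MyFeatureTest extends TestCase
{
    /**
     * @dataProvider provideFeatureVariations
     */
    public function testItImplementsEachVariationOfMyFeature(bool $someVariantOfMyFeature, string $expected): void
    {
        $this->assertSame($expected, doMyFeature($someVariantOfMyFeature));
    }

    public function provideFeatureVariations(): array
    {
        // one entry per case of the feature. Cover the variant case, and
        // "line 6" gets covered as a side effect
        return [
            'baseline behaviour' => [false, 'baseline result'],
            'variant behaviour' => [true, 'variant result'],
        ];
    }
}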

Intrinsically, your code coverage analysis tooling probably marks individual lines of code as green or red or whatever, but only if you look that closely. If you zoom out a bit, you'll see that the method the lines of code are in is either green or orange or red; and out further, the class is likewise green / orange / red, and probably says something like "76% coverage". The tooling necessarily works in lines-of-code because it's a dumb machine, and those are the units it can work on. But you, the programmer, don't need to focus on the lines-of-code. You have a brain, and what the report is saying is "you've not tested part of your feature", and it's saying "the bit of the feature that is represented by these lines of code". That bit. You're not testing it. Go test yer feature properly.

There are parts of the implementation of a feature I maybe won't test. Your DAOs and your other adapters for external services? Maybe not something I'd test during the TDD cycle of things, but it might still be prudent to chuck in an integration test for those. I mean, they do need to work and continue to work. But I see this as possibly outside of feature testing. Maybe? Is it?
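For a flavour of what I mean, here's a hedged sketch - hypothetical names throughout, and using an in-memory SQLite DB so the example is self-contained - of an integration test for a wee DAO:

<?php

use PHPUnit\Framework\TestCase;

class User
{
    public function __construct(public int $id, public string $name)
    {
    }
}

class UserDao
{
    public function __construct(private PDO $pdo)
    {
    }

    public function save(User $user): void
    {
        $statement = $this->pdo->prepare('INSERT INTO user (id, name) VALUES (?, ?)');
        $statement->execute([$user->id, $user->name]);
    }

    public function byId(int $id): User
    {
        $statement = $this->pdo->prepare('SELECT id, name FROM user WHERE id = ?');
        $statement->execute([$id]);
        $row = $statement->fetch(PDO::FETCH_ASSOC);
        return new User((int) $row['id'], $row['name']);
    }
}

class UserDaoIntegrationTest extends TestCase
{
    // not testing feature logic: just that the adapter genuinely round-trips
    // data through a real(ish) database
    public function testItRoundTripsARecordThroughTheDatabase(): void
    {
        $pdo = new PDO('sqlite::memory:');
        $pdo->exec('CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)');
        $dao = new UserDao($pdo);

        $dao->save(new User(1, 'Zachary'));

        $this->assertSame('Zachary', $dao->byId(1)->name);
    }
}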

Also - and I won't repeat too much of that other article - there's a difference between "coverage" and "actually tested". Responsible devs can mark some code as "won't test" (eg in PHPUnit, with a @codeCoverageIgnore annotation), and if the reasons are legit then it's "covered".
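Here's a minimal sketch of that (the class is hypothetical; the @codeCoverageIgnore annotation is real PHPUnit):

<?php

class ServiceConfig
{
    private string $endpoint = '';

    /**
     * A simple assignment with no logic: arguably not worth a unit test, but
     * ignoring it keeps the coverage report green and honest.
     *
     * @codeCoverageIgnore
     */
    public function setEndpoint(string $endpoint): void
    {
        $this->endpoint = $endpoint;
    }

    public function getEndpoint(): string
    {
        return $this->endpoint;
    }
}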

Why am I writing this fairly short article when it's kinda treading on ground I've already covered? Because of some of the comments I saw in reply to Emma. Some devs are a bit shit, and a bit lazy, and they will see what Emma has written, and their takeaway will be "basically I don't need to test x because x is some lines of code, and we should not be focusing on lines of code for testing". That sounds daft, but I know people who have rationalised their way out of testing perfectly testable (and test-necessary) code due to "oh we should not test implementation details", or "you're just focusing on lines of code, that's wrong", and that sort of shit. Sorry pal, but a feature necessarily has an implementation, and that's made out of lines of code, so you will need to address that implementation in yer testing, which will innately report that those lines of code are "covered".

Then there's this notion of "100% coverage is too high, so maybe 80% is fine" (possibly citing the Pareto Principle, possibly just pulling a number out of their arses). Serious question: which 80%? If you set that goal, then a lot of devs will go "aah, it's in that 20% we don't test". Or they'll test the easy 80%, and leave the hard 20% - the bit that needs testing - to be the 20% they don't test (as I said in the other article: cos they basically can't be arsed).

Sorry to be a bit of a downer when it comes to devs' level of responsibility / professionalism / what-have-you. There are stacks of devs out there who rock and just "get" what Emma (et al) is saying, and crack on with writing excellently-designed and well-tested features. But they probably didn't need Emma's messaging in the first place. I will just always think some more about advice like this, and wonder how it can be interpreted, and then be a contrarian and say "well actually…" (cringe, OK I hope I'm not actually doing that here?), and offer a different informed observation.

So my position remains: set your sights on 100% feature coverage. Use TDD, and your code will be better and leaner and you'll also likely end up with 100% LOC coverage too (but that doesn't matter; it's just a test-implementation detail ;-).

Righto.

--
Adam

Sunday, 25 April 2021

On code review

G'day:

I'm pretty big on code review; I see it as a critical part of the process of developing solutions for our clients, and of making our own and our colleagues' lives easier. It's a useful communications exercise, and it's a useful technical exercise.

I've used a variation of these notes with a variety of teams in the past, but I've never - until recently - got around to writing a decent (semi-)generic document about it. I've polished things a bit here, and thought I'd get it down in a public-facing fashion. There are references in here to systems I am familiar with, like Bitbucket and Jira and PHP. But I think the guidance carries across, irrespective of what combination of tooling one uses; if not in the precise mechanics, then in the processes the mechanics facilitate.

The CSS for the <h4> through <h6> is not as good as it could be. It's annoying that Blogspot gets me down as far as an <h3> even before we get to the title of the article (the date gets the <h2> for some reason), but: so be it. Sorry about that (you'll see what I mean as you get further down).

Introduction

Before code can be considered ready for release, the dev team needs to give the work a once-over or a "second set of eyes" to make sure it meets code quality guidelines, is appropriately tested, and that it fulfils the requirements of the user story that the code addresses, using the domain design / architecture / approach originally agreed before coding started.

What code review is

It is part of the collaborative team approach to delivering value to the client, just like the initial planning stage before any code is even written, and any pair-programming that takes place during the initial development cycle. Given a lot of the physical coding work is done separately from the rest of the team, code review is a "re-union" exercise where the devs get back together to see how their solution has ended up. It's important to remember that any given code is our code, not an individual developer's code.

Code review gives the coding dev a stage to show how they've implemented the solution, and to draw attention to anything noteworthy or that needs flagging for any other reason. And it gives the reviewers a chance to check things out, and potentially spot anything in the work that might still need addressing, or tweaking, or doing slightly differently maybe, or whatever.

Another thing code review is is a learning exercise for everyone concerned. Some nuance of the code, or a suggestion to amend the code could be new knowledge to one or all people on the review. Or it might just clarify how some less-often-trod part of the codebase works. And if nothing else, it'll add to everyone's familiarity with the code in question, and it might be you working on it next. Win.

What code review is not

I have experienced some code reviews that have taken an adversarial approach: pitting the coding developer against the reviewing developers, with the reviewers going "you got this wrong", "this doesn't work", "nup", etc, and trying to seem more clever than their colleagues, with arguments going back and forth in the review comments. This is not what code review is about.

It's also not about people checking up on each other. Nor is it about more senior people pronouncing to less senior people.

We are on the same team, working on the same stories, with the same end-goal in mind.

Where the disconnect happens

Still. If I've done some work and someone else adds a comment saying "this could be done differently because [reasons]", it's easy to get a wee bit defensive / protective of our work (for a moment you're claiming the team's work as your own, because it happened to be you who typed it in). This is pretty normal. I mean when I'm doing my own hobby-work and something goes unexpectedly wrong, I'm usually pretty quick to blame the language or the framework or some library, or something other than just myself, forgetting that it's OK to be wrong. I like to think that the blame-casting phase of this is a pretty short cycle these days, and I'm quickly back to going "OK, so what do I need to do to sort this out?". Code review is the same.

I have a good quote here:

An amateur is defensive. The professional finds learning (and even, occasionally, being shown up) to be enjoyable; they like being challenged and humbled, and engage in education as an ongoing and endless process.

Anyway, bottom line: code review is participants on the same team working together.

Code review mindset

A pull request is a way to communicate application changes to the team. The comments that are added to the code in the pull request are made with good intentions: the intent of improving our work, and the application. Everyone should be open to a discussion like that. However, because written communication can lack some of the nuance of a spoken in-person conversation, remember to over-compensate with the text and make sure it reads as polite, constructive, and respectful. I've found that sometimes even being too terse in a comment ("needs a test") can be seen as being abrupt, rather than just being to-the-point. Also remember to address the code, not the person. If something's not clear: perhaps ask about it rather than telling. There's no harm in asking for clarification of something. Code is a communications mechanism - it's designed to articulate concepts between people - so if something needs clarification, this could indeed be something that needs to be addressed in the code. Or you might just've not clocked what the code was doing initially, and just need a quick pointer from the dev before going "duh, of course".

By the same token it's important for the recipient of a comment to frame things the same way. If you read a comment as being adversarial or confrontational, you are probably just using the wrong internal voice to read it. Read it again with a neutral or collaborative voice. It's hardly ever the case that the comment writer was committing a personal attack to writing in a code review comment. It's more likely the person chose the wrong wording, or didn't take time to polish-up the comment before pressing send. Devs are notoriously bad at articulating themselves in writing, after all, right?

Comments

If you're reviewing and making a comment, read it back to yourself before pressing submit. Would you like to be on the receiving end of it?

If you're replying to a comment, act likewise.

Before adding a reply to a reply (… to a reply… to a reply…), remember it's a code review, not a discussion thread. Consider getting on a call and actually talking to the person. This can really quickly solve the misunderstanding, and it is also a really good way of defusing any tensions that could be building.

That said, if some call to action or resolution arose from the voice-conversation, make sure to follow up in the code review with a summary comment. That information is important.

Compromise

Some back-and-forth discussions are really useful and interesting! Some aren't. Either way, they're probably not what you all ought to be doing right at that moment. Give some thought as to whether the discussion a) is actually moving us closer to getting the work approved; b) needs to happen right then and there; c) needs to be a text-based discussion at all. Possibly just go "yeah actually let's just run with it" (whether "it" is how the code already is, or the fix being suggested). Or possibly go "OK, but let's consider revisiting this in another story later… it might be worth it".

Code review logistics

  • All work going into the codebase must be reviewed and approved by at least one other person (not the dev who wrote the code!). Bitbucket can be configured to disable the Merge button until there are n approvals.
  • If a review yields any rework, this should be undertaken before any other new work (ie: a different new story) is commenced. One should focus on completing the existing story before starting the next one. "Stop starting and start finishing".
  • From a reviewer's perspective: reviewing our colleagues' existing work ought to generally take priority over continuing to develop our own new code. At any given moment, work that is closer to being completed should be the focus, over starting or continuing newer work. This is not to say that one must drop everything when a code review comes in. What it does mean is that when re-commencing after a break (arriving at work, after lunch, or some other intermediary break), and before refocusing on new code: check what code review needs to be done, and deal with that first.
  • That said, it's important to minimise task-switching, so be pragmatic about this. But bear in mind that it is also part of our job to work as a team as well as to work as individuals. This is not a situation of "my work" and "their work": it doesn't matter who happened to type in the code, it is all our work.

Handling code review tasks

Tasks are a feature in Bitbucket that allows a code review comment to have a "task" attached to it. This indicates work that needs to be done before the code review can be completed.

(Inadvertently an example of a comment that is probably a bit abrupt-sounding!)

Bitbucket can be configured to disable the Merge button whilst there are outstanding tasks.

As a reviewer, if the nature of an observation you make seems to require action, add a task to the comment. This signifies that you're not just making the observation, you mean it needs action.

As the developer, when addressing a task, mark it "done" in one of two situations:

  • whatever was requested has been done;
  • after further discussion, it's concluded there's no action to take. But add a comment to that effect in reply to the task comment.

Do not mark a task done unless one of those two resolutions has been made. Ie: don't simply decide not to address a task because [reasons]. Get agreement from the reviewer as to the legitimacy of those reasons first.

Code review guidelines

Before the code gets there
  • The branch name, commit comments (and they should all have comments!), and pull request should be prefixed with the ticket number for the story the work is for. This is just to make it easier to tie everything together later. The important one of these is the pull request name: it should have the ticket reference in it so the reviewers can check what they're reviewing. In the past I've used a rule that the pull request title should be "JRA-1234 The Title Of The Ticket" (ie: exactly the title of the ticket). Hopefully Bitbucket has integrations with the ticketing system so it can automatically link through to the ticket just from the ID being there.
  • If there's no ticket, the work should not have been even started, let alone submitted for code review.
  • The ticket needs to have a story in it. The reviewers need to be able to know what the work is setting out to do, otherwise how can they know it's fulfilled all its cases? For that matter, how did the dev who did the work?
  • Consider squashing your commits when doing a pull request (or configure Bitbucket to do it automatically). By the time the code is going into master, it's no longer useful to log the sequence the work was done in. And you still have the commit log in your own repo anyhow.
Before the review starts
  • There ought to be an automated build that runs code quality checks and unit tests. Hopefully there's integration between the CI tool and Bitbucket so the pull request automatically indicates a green status; otherwise the dev should include a link to the green status report.
  • If that's not green: don't review anything. However the dev possibly did not notice the build is broken, so give them a (gentle) nudge.
  • As the developer, go through the pull request and put your own comments in, where necessary. Sometimes the code can legitimately not be obvious, or it just helps to contextualise which part of the story the code is addressing.
What to look for in a review
General
  • Does the work fulfil all of the requirements of the story on the ticket? This might seem obvious, but sometimes a dev can forget something. Or perhaps their reading of the requirement is different from yours, so what's been delivered might not meet actual expectations.
  • Does the work go off-piste anywhere? It should only address the requirement of the ticket. Not "oh, whilst I was in this file I saw something I could fix in this other bit of code". That can be done later on another ticket. Or: not at all. So check that the work has maintained focus.
  • As a reviewer, do you understand the code? This is vital. Remember code is a tool for communicating between developers, and it's not doing its job if you don't understand it. Code can be as fancy as you like, but if no-one can understand it, it should get flagged in code review.
  • All code needs to adhere to any prescribed guidelines:
    • In-house guidelines (whatever they are, but all teams should have them).
    • Probably by implication, external community-recognised guidelines such as PHP's PSR-12. People who are not currently in the team will need to understand this code later on, so let's write code that newcomers have more of a chance of understanding.
Specific
  • Are all the test cases in place and implemented?
  • Some code-sizing boundaries in coding guidelines docs won't literally be checked by the automated quality tests; it's up to the reviewer to think about them. A method will be rejected by the automated tests if it's - for example - over 100 lines long. But if the target is 20, warning bells should be going off at 40. If it's 80 lines long, this probably warrants a closer look. The size restrictions are not there to be gamed (101 bad; 99 good), they're just something we can get code analysis tooling (eg: PHPMD) to flag up. It's down to the humans to agree whether the method is A-OK at 80 lines.
  • Equally, if new work takes a method that is 50 lines long and takes it up to 60 lines… it was already too big, so the dev ought to have extracted the piece of logic they needed to work on into a separate method, and done the work there. Or if the new work definitely belongs in the mainline of the method, then extract a different section of the method (so employing the boy scout rule). There is seldom any good reason for a method that is already oversized to get even bigger.
  • Do class / method / variable names make sense? Are they spelt right? Yeah, it matters.
  • Are classes / methods doing just one thing?
    • A controller should not be getting stuff from the database or implementing business logic.
    • A method called getAccount should be getting the account. Not creating it if it's missing and then returning it (there's a sketch of this after this list).
    • A repository class should not be implementing logging, etc.
    • A class that hits the DB and also sets constants for other classes to use is doing two things. One or other of them does not belong in that class.
    • Any method that takes a Boolean parameter is almost certainly doing two things: one if the flag is true; another if it's false. This is definitely a warning sign.
    • Look at the levels of indentation. Every level suggests that the method is doing too much.
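To make a couple of those bullets concrete - the getAccount one and the Boolean-parameter one - here's a hedged sketch, with all names hypothetical (assume Account and AccountRepository exist elsewhere in the codebase):

<?php

// Smell: a "getAccount" that also creates. Two jobs in one method, and a
// surprise side effect for a caller who only wanted to read.
class AccountService
{
    public function __construct(private AccountRepository $repository)
    {
    }

    public function getAccount(int $id): Account
    {
        $account = $this->repository->find($id);
        if ($account === null) {
            $account = $this->repository->create($id);
        }
        return $account;
    }

    // Smell: the Boolean parameter is a fork in the road - one method, two
    // behaviours, and every call site has a cryptic true / false in it.
    public function renderStatement(Account $account, bool $asPdf): string
    {
        return $asPdf
            ? $this->renderStatementAsPdf($account)
            : $this->renderStatementAsHtml($account);
    }

    private function renderStatementAsHtml(Account $account): string
    {
        return '<p>the statement as HTML</p>';
    }

    private function renderStatementAsPdf(Account $account): string
    {
        return 'the statement as PDF bytes';
    }
}

// Preferable: one method per job, and the caller makes the decisions.
interface LessSmellyAccountService
{
    public function getAccount(int $id): ?Account;
    public function createAccount(int $id): Account;
    public function renderStatementAsHtml(Account $account): string;
    public function renderStatementAsPdf(Account $account): string;
}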

The value of code review

  • The codebase will be better.
  • You might have learned some new coding techniques. You have improved as a developer.
  • You know more about the codebase, which reduces the bus-factor risk for the company.
  • You help your colleagues stay on track with their work.
  • You get to practise your written communication skills.
  • We get a better historical record of why work was done.
  • Team cohesion. It's a team-building exercise.
  • Reading code. It's good practice reading other people's code.
  • Writing code. It's good practice writing code other people can read.

Let me know what you think about code review. Any considerations I've missed?

Righto.

--
Adam

Tuesday, 6 April 2021

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD. Is. Not. A. Testing. Strategy.

Just a passing thought. Apropos of absolutely nothing. 'Onest guv.(*)

Dunno if it occurred to you, but that TDD thing? It's not a testing strategy. It's a design strategy.

Let's look at the name. In the name, "test-driven" is a compound adjective: it just modifies the subject. The subject is development. It's about development. It's not about testing.

It's probably better described by BDD, although to me that's a documentation strategy, rather than a development (or testing) one. BDD is about arriving at the test cases (with the client), TDD is about implementing those cases.

The purpose of TDD is to define a process that leads you - as a developer - to pay more attention to the design of your code. It achieves this by forcing you to address the requirement as a set of needs (or cases), eg "it needs to return the product of the two operands". Then you demonstrate your understanding of the case by demonstrating what it is for the case to "work" (a test that when you pass 2 and 3 to the function it returns 6), and then you implement the code to address that case. Then you refine the case, refactor the implementation so it's "nicer", or move on to the next case, and cycle through that one. Rinse and repeat.
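As a quick sketch of one turn around that cycle (hypothetical function; PHPUnit for the test):

<?php

use PHPUnit\Framework\TestCase;

// Red: first, a test that demonstrates what it is for the case to "work"
class MultiplierTest extends TestCase
{
    public function testItReturnsTheProductOfTheTwoOperands(): void
    {
        $this->assertSame(6, multiply(2, 3));
    }
}

// Green: then just enough implementation to address that case
function multiply(int $a, int $b): int
{
    return $a * $b;
}

// Refactor / next case: tidy the implementation up, or move on, and repeat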

But all along the object of the exercise is to think about what needs to be done, break it into small pieces, and code just what's needed to implement the customer need. It also provides a firm foundation to be able to safely refactor the code once it's working. You know: the bit that you do to make your code actually good; rather than just settling for "doesn't break", which is a very low bar to set yourself.

That you end up with repeatable tests along the way is a by-product of TDD. Not the reason you're doing it. Although obviously it's key to offering that stability and confidence for the refactor phase.

Too many people I interact with when they're explaining why it's OK they don't do TDD [because reasons], fall back to the validity / longevity of the tests. It's… not about the tests. It's about how you design your solutions.

Lines of code are not a measure of productivity

Tangential to this, we all know that LOC are not a measure of productivity. There's not a uniform relationship between one line of code and another adjacent line of code. Ten lines of code in one logic block that represent the implementation of a method are likely to represent less of a productivity burden than a single line of code nested 14 levels deep in some flow-control-logic monstrosity. We all know this. Not all lines of code are created equal. More is definitely not better. But fewer is also not intrinsically better. It's just an invalid metric.

So why is it that so many people are prepared to count the lines of code a test adds to the codebase as a rationalisation (note: not a justification, because it's invalid) as to why they don't have time to write that test? Or claim that the test represents an undue burden in the codebase? Why are they back to measuring productivity with LOC? Why won't they let it occur to them that - especially when employing TDD - the investment in the LOC for the test code reduces the investment in the LOC for the production code? And note I am not meaning this as a benefit that one only realises over time, having amortised it over a long code lifespan. I mean on the first iteration of code<->test<->release, because the bouncing back and forth between each step there will be reduced. Even for code where this might (although probably won't) be the only iteration the production code sees.

It's just "measure twice, cut once" for code. Of course one doesn't have the limitation in code that one can only cut once; the realisation here needs to be that "measuring" takes really a lot less time than "cutting" when it comes to code.

In closing, if you are rationalising to me (or, in reality: to yourself) why you don't do TDD, and that rationalisation involves lines of code or how often code will be revisited, then you are not talking about TDD. You are pretty much just setting up a strawman to tilt at to make yourself feel better. Hopefully that tactic will seem a little more transparent to you now.

Design your code. Measure your code.

Righto.

--
Adam

(*) that's a lie. It's obviously a retaliation to a coupla comments I've been getting on recent articles I've written about TDD.

Sunday, 4 April 2021

TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode

G'day:

Yer gonna need to go back and read the comments on Thoughts on Working Code podcast's Testing episode for context here, especially as I quote a couple of them. I kinda let the comments run themselves there a bit, and didn't reply to everyone as I didn't want to dominate the conversation. But one earlier comment that made me itchy, and now one comment that came in in the last week or so, have made me decide to - briefly - follow up one point that I think warrants drawing attention to and building on.

Briefly, Working Code Pod did an episode on testing, and I got all surly about some of the things that were said, and wrote them up in the article I link to above. BTW, Ben's reaction to my feedback in their follow-up episode ("Listener Questions #1") was the source of my current strapline quote: "the tone... it sounds very heated and abrasive". That should frame things nicely.

Right so in the comments to that previous article, we have these discussion fragments:

  • Sean Corfield - Heck, there's still a sizable portion that still doesn't use version control or has some whacked-out manual approach to "source code control".
  • Ben replied to that with - Yikes, I have trouble believing that there are developers _anywhere_ that don't use source-control in this day-and-age. That seems like "table stakes" for development. Period.
  • [off-screen, Adam bites his tongue]
  • Then Sean Hogge replied to the article rather than that particular comment thread. I'm gonna include a chunk of what he said here:

    18 months ago, I was 100% Ben-shaped. Testing simply held little ROI. I have a dev server that's a perfect replica of production, with SSL and everything. I can just log in, open the dashboard, delete the cache and check things with a few clicks.

    But as I started developing features that interact with other features, or that use the same API call in different ways, or present the same data with a partial or module with options, I started seeing an increase in production errors. I could feel myself scrambling more and more. When I stepped back and assessed objectively, tests were the only efficient answer.

    After about 3 weeks of annoying, frustrating, angry work learning to write tests, every word of Adam C's blog post resonates with me. I am not good at it (yet), I am not fast at it (yet), but it is paying off exactly as he and those he references promised it would.

    I recommend reading his entire comment, because it's bloody good.

  • Finally last week I listened to a YouTube video "Jim Coplien and Bob Martin Debate TDD", from which I extracted these two quotes from Martin that drew me back to this discussion:
    • (@ 43sec) My thesis is that it has become infeasible […] for a software developer to consider himself professional if [(s)he] does not practice test-driven development.
    • (@ 14min 42sec) Nowadays it is […] irresponsible for a developer to ship a line of code that [(s)he] has not executed any unit test [upon]. It's important to note that "nowadays" was 2012 in this case: that's when the video was from.
    And, yes, OK the two quotes say much the same thing. I just wanted to emphasise the words "professional" and "irresponsible".

This, I think, is what Ben is missing. He shows incredulity that someone in 2021 would not use source control. People's reaction is going to be much the same to his suggestion that he doesn't put much focus on test automation, or practise TDD as a matter of course when he's designing his code. And Sean (Hogge) nails it for me.

(And not just Ben. I'm not ragging on him here, he's just the one providing the quote for me to start from).

TDD is not something to be framed in a context alongside other peripheral things one might pick up like this week's kewl JS framework, or Docker or some other piece of utility one might optionally use when solving a client's problem. It's lower level than that, so it's false equivalence to bracket it like that conceptually. Especially as a rationalisation for not addressing your shortcomings in this area.

Testing yer code and using TDD is as fundamental to your professional responsibilities as using source control. That's how one ought to contextualise this. Now, I know plenty of people who I'd consider professional and strong pillars of my dev community who aren't as good as they could be with testing/TDD. So I think Martin's first quote is a bit strong. However I think his second quote nails it. If you're not doing TDD you are eroding your professionalism, and you are being professionally irresponsible by not addressing this.

In closing: thanks to everyone for putting the effort you did into commenting on that previous article. I really appreciate the conversation even if I didn't say thanks etc to everyone participating.

Righto.

--
Adam

Monday, 31 July 2017

Yeah, you do want 100% test coverage

G'day:
I've read a number of blog articles recently that have been eschewing 100% test coverage. The rationale being that there's little difference between 90% and 100%, and that that last 10% can be quite tricky to attain as - we being human 'n' all - we sometimes leave the worst to last. Obviously we should all be doing TDD, so that situation would not arise, but - again - we're human, so we don't always do TDD for one reason or another ("can't be arsed" is the usual one).

Another rationale is that it's not a race to 100%. IE: the object of the exercise is delivering working software which adds business value; it's not "getting 100% test coverage".

The latter is true. But we still aim for 100% test coverage for basically one good reason: visibility.

First, look at this:

[screenshot: a test coverage report, mostly covered but not entirely]

Not bad I guess. Not great.

Now look at this:

[screenshot: the same coverage report, now with something new uncovered in it]

We've accidentally forgotten to cover something new. Can you see it? (It's one of the provider methods.)

On the other hand look at these two:

[screenshots: the same two reports zoomed out, where the coverage shortfall is immediately obvious]

Somewhat easier to spot, yeah?

This is bloody handy. In this case I just commented-out one of my tests to get the report off 100%, but in the real world I'd see that and go "huh, WTF?", and perhaps discover I had a logic branch untested, or a test I thought was covering something wasn't actually doing so. Or I just forgot to put the @covers annotation on the test. Whatever the situation: it's handy for the shortfall to be so obvious.

What it also means is that part of our pull-request process is running a Bamboo build which runs all our tests (plus some other code quality tools):

[screenshot: a passing Bamboo build, all green]

Everything's green here, so it's OK for code review. If it was failing, I'd get a nudge from my reviewers going "erm... you sure this is ready to go?" And even if I thought it was, we can't merge anything whilst the build is failing anyhow. The issue needs to be addressed.

Now the key thing here is this:

[screenshot: code marked as coverage-ignored, still showing as green / covered in the report]

We don't test everything, but we cover everything. Where "cover" might be "we don't need to test this". In this case a single variable assignment does not need testing. We trust PHP to follow simple instructions. The thing is: this still shows up as green in the test coverage report.

We do this in a few situations:

  • the method has no branching logic and is otherwise pretty simple. Ignore it.
  • we're backfilling coverage on old code we need to maintain, but it's such a mess we simply don't have time to solve all the testing shortfalls right now, so we test what is urgent (the bit we're changing), and just mark the rest of it as ignored. Now that file is green, and we can see if it stops being green.
  • similar to the above, but we know the code in question is not long for this world. So we ignore it and stick in a TODO cross-referencing the ticket that will be dealing with it. And the ticket must already be lined-up to be worked on "soon".

There might be other situations where we need to be pragmatic and skip some stuff, but this needs to be explained to the code reviewers, and get their agreement.

But being able to run a report that says "100%", and shows up anything that isn't 100% is gold.

Get yer coverage report green, but don't be afraid to be pragmatic and skip stuff, if there's an informed reason to do so.

Now this isn't to say one can just pepper ignore annotations around the place just to get the report green. That's daft. The object of the exercise is not to make some report green / say 100%. It's to get good code quality. This is just a tool to help identify the quality: it's still up to the devs to provide the quality. Don't forget that. It needs to be an informed, honest decision for one to ignore code.

Righto.

--
Adam

Wednesday, 14 December 2016

Code Twats and Code Bullies

G'day:
This is something I should perhaps have written about a while back when it first got on my radar, but I was never sure how to make a decent-sized article of it, so didn't bother. However the situation has arisen again, and I figure it's probably worth raising, even if it's only short.

(ed: having finished it now, I don't think it's my finest-ever article, but... oh well).

A year or so ago we had a new dev who was reasonably good at PHP (certainly better than me, to damn them with faint praise), and not a bad programmer. However they had:

  • very little experience in working within a team,
  • very little experience with adhering to development standards other than their own,
  • and very little experience with anyone making observations about their work.
To be honest, I rate those three points as more important than technically how skilled a person might be at the general vocab and grammar of a language.

We do rigorous code reviews at our joint, and on the whole the team members enjoy having eyeballs on their code, as we all "get" that code is not "mine" or "yours", it's "ours", so we all need to be happy with it. We all have to have input into it, and to employ a tired cliche or two: two heads are better than one, and the same applies to "more eyes". Code review will reveal trivial stuff like unused or unVARed variables, typos and coding standard breaks; more significant issues like sub-par or expensive logic, and missing test coverage; through to more theoretical considerations such as misuse (or lack of use) of design patterns.

A code review will also throw up more subjective stuff like "WTF is going on here?" This can be for a number of reasons:

  • It could be because the reviewer is unaware of the context or the requirement of the code,
  • they might not be aware of a technique,
  • or the code might simply be:
    • obfuscated,
    • obtuse,
    • less than ideal in design,
    • or simply wrong.

We all write code that falls into one or more of these latter categories: shit happens. The thing is that code is collectively owned, so it kinda needs to regress to the mean slightly. "Smart" code ain't smart if the team can't maintain it, or it doesn't follow precedents already set in the code, or doesn't follow the coding standard or other various "code collective" considerations. Having code that the team can maintain after it's written is the most important thing about collectively-owned code.

So... code needs reviewing. That's a given. If yer not doing formal code review at your gig: you are, IMO, not conducting yerself professionally. If you work by yourself it's even more critical you find a way to get more eyes on yer code, cos working by yerself you're just in a locked-in cycle of kinda code inbreeding (if you'll permit that).

Another thing that less-experienced devs will do is write code to feed their ego, because they think they are clever. Now I use "less-experienced" in a measured way here. I don't mean "less experienced in coding", I mean "less experienced in working as a team, and less experienced at realising code is not all about me me me". And less experienced in understanding "no-one gives a shit how clever you think you are". There was a well-realised dev stereotype going around a decade or so ago - it seems to have died off in these more enlightened days - of the "Code Ninja". There was no such thing: there were just "Code Twats". You're not a "ninja" (FFS!) if the people you're coding with are going "um... WTF?" You're just a Code Twat.

If we combine people who aren't good team players, who code almost as an act of hubris, and who generally aren't quite so clever as they've led themselves to believe, then code reviews can get... awkward.

Too many times with non-team-players (Code Twats) I've seen code reviews in which a team member has flagged up some "strategically challenging" (ahem) code and detailed why they thought there was a preferable approach. And the code's author just wouldn't have a bar of it. They liked their code, and had already decided they were "right", and anyone else's consideration was just mistaken, basically cos they weren't clever enough to understand it. So they refused to heed the review. More of the code reviewers pitched in, and said "well actually the review's kinda right: the code's not clear, and not the way we do things, even if it's syntactically correct and works". Nope. The author just got more defensive, started googling justifications of why they were right (because, after all, being the "victim" of a code review is an exercise in being right, right? Right?!). And the responses were getting longer and longer. And I suspect typed with heavier and heavier and stabbier and stabbier fingers.

The code review went around the houses for over 100 comments. There was about ten times more text in the comments than there was in the code under review. This is an egregious example, but it sticks in my mind. Also, btw, this was not the only review that ended up that way with this individual. Who has thankfully left the company now. There've been similar reviews with other bods too, to a slightly lesser degree. It's not uncommon.

Ultimately the team lead stepped up and gave everyone a bollocking. Although I think perhaps the person under review shoulda been the only person on the receiving-end of said bollocking, because it was clear very quickly that the team wasn't happy with the code, therefore the code needed rework. End-of. It's not their fault that the author wasn't having a bar of it. The one point the lead did have is that a couple of us (possibly mostly me) are supposed to be senior, so shouldn't've been so self-indulgent as to perpetuate the review cycle. We did, after all, have our own work to do. That said, we were "senior" without any actual degree of seniority over the dev concerned, so we couldn't exactly go "Just. Do. What. Yer. Told" (which is all that needed to be said in this case, realistically). That was for the lead to do. And they shoulda done it earlier. (Said lead has also since left the company, but I look forward to the pint I am having with them this evening ;-)

What didn't occur to me until yesterday, as I was reading more about code review failures / challenges, is that these devs who simply won't join the team and who perpetuate these seemingly endless cycles of code review back-and-forth are actually "Code Bullies". It's just bullying. They're badgering their reviewers into submission, hoping they will just go "oh FFS, I don't care enough about this, OK have it your way". I have seen this happen. Hell, I have to admit I've engaged in it! (Not my finest hour, and I've tried to learn from it, and I don't do it any more.) The problem is I see the artefacts of these bullying tactics in our code every day. Bullying isn't just a temporally-limited thing; its impact has long-reaching consequences. Its victims continue to suffer (hey, if you were looking at some of this code, you'd not be thinking "suffering" was an overstatement ;-).

This situation needs knocking on the head. There are perhaps some guidelines which can help mitigate this:

  • Firstly, the dev needs to actually consider the bigger picture: it's not a competition, and no-one cares if they're "right" according to only a subset of considerations. This is difficult to control, but if it becomes symptomatic, it needs proactive attention.
  • If the exchange between the dev and the reviewer goes into a third round, then they need to stop being keyboard warriors, and talk to each other. Odds-on they're sitting next to each other, or on the same campus. If they're not on the same campus, then there's this thing called a telephone. Use it. Or, hey, Skype or something more modern than that. But switch to "interactive mode".
  • If more than one reviewer is in accord: they're right and the dev is wrong. Perhaps not completely wrong, but the work needs rework anyhow. And there's a very good chance the dev is outright "wrong" according to more criteria than they might be "right" (which is generally just down to code twattery anyhow). Reject the pull request (or some equivalent thereof; end the code review anyhow) as a first step here.
  • Make sure never to employ a passive-aggressive, sarcastic, supercilious or dismissive tone in your code review comments. You're just being a prick. Being a prick and then covering yerself by putting a winky at the end of the comment doesn't mean you're not a prick. It just demonstrates you know yer a prick, and act that way anyhow. Thus making you even more of a prick. This applies to both dev and reviewer, btw.
  • If there's more than say ten comments on one code review point: the lead needs to step up. This ought to be either cos they're already watching the code review, or one of the participants raises it with them (which they should be completely prepared to do. That's what the lead is for!). Then the lead acts as sole arbiter, and whatever they say gets done. The first step is rejecting the pull request, if the decision is that there's rework to do.
  • Most importantly: code bullies need to be admonished for their behaviour. Like I mean officially admonished, like any other sort of office bullying. It's just not bloody on. They are deleterious to morale, and they infect the codebase with their blight.

We can all do without this sort of anti-social (not to mention unprofessional) behaviour in our work place. Don't be part of the feedback loop, don't be a Code Twat, and seriously... call out Code Bullies when you see them.

Righto.

--
Adam

Sunday, 25 September 2016

CFML: "It's alive!". Really now?

G'day:
Yesterday I was "helping" someone on the CFML Slack channel with some challenges they had with CFC serialisation. The conversation turned to ColdFusion's custom serialisers (see an earlier article: "ColdFusion 11: custom serialisers. More questions than answers"), and how pants I think the implementation & entire undertaking of that part of CF11 was. I'll just quote myself (full transcript of entire discussion here: Gist / adamcameron / slack.txt:

I did try to position the whole undertaking as "here be dragons"

On the face of it, it's a cool idea

But - as per most less-than-obvious things the Adobe CF Team tries to do - they completely cocked it up, having no real understanding of how ppl use CFML

[...]

What concerns me about Adobe's implementation is it seems to be very "not OO" which worries me when it's actually implemented by ppl who are supposedly Java devs.

[...]

I floated it "at the appropriate time" (nod to not speaking about the pre-release programme, but... well... f*** them), but the relevant parties just either didn't care or didn't get it or refused to get it.

It's the perpetual symptom that Adobe never consult the community before they blindly go and implement ill-conceived and poorly realised shite.

They ought to feel free to only consult with their major-client "IT stakeholders" for features like the API Manager and to a degree `<cfclient>` (hohoho), but when it comes to nuts 'n' bolts language implementation... they need to ask the ppl on the ground.

But based on that recent survey Adobe did... there won't be any more focus on the CFML language from them. They're only interested in buzzword-completion/box-ticking IT-Manager-pleasing platform features

In response to that, someone observed they admired my sticking power with CFML (thanks for saying that), but my reaction was that Adobe finally "broke" me with CF2016 being such a waste of time for the CFML dev: they did hardly anything in the language, instead rolling something that's really nothing to do with ColdFusion - the API Manager - into CF2016 and making it the marquee "feature".

At the same time, Lucee 5 lost objective direction, in that they indulged Micha's whim to re-write the thing to be OSGi-capable (or whatever the correct term is), and messed around with the .lucee lang thing, which ended up being both stillborn and evidence that LAS is out of their depth when they're not just copying what Adobe does. Harsh, sorry LAS, but that's my opinion.

I have given up on both the vendors when it comes to CFML, and personally I've moved from CFML to PHP now, but I'm still there in the CFML community helping people where I can. So - as far as the community goes - I'm suffering from "just when I thought I was out... they pull me back in", as it were. Apologies to Al Pacino there. But theatrics aside, that's my choice, so no worries.

Patrick came back with a reasonable riposte to my comments about being let down by the vendors:

Yeah, those sorts of public "divorces" (Railo-->Lucee) are never pretty, but it was really necessary. Kudos to the core of the Railo peeps who held it together. LAS is quite strong now, and I believe we're the future of CFML as a platform. I'm especially excited to see what the community can do with the extensible framework in place now. What say we all co-create a really vibrant ecosystem of products? With Adam as our cheerleader (I don't see it as cynical, mate!), I have no doubt we'll get it done.

Good on him, but I disagreed with one thing he said, and this led to a bit of a diatribe from me:

You can't just say "What say we all co-create a really vibrant ecosystem of products?" and have it happen. The CFML community have demonstrated that they're - for the most part - takers rather than givers. Even with ppl asking for help from the community I see a sense of (misplaced) entitlement from them.

That said there's a minority of good worker bees, and a bunch of people who are good people but they're still just in CFML 9-5 (which is fine), but will not grow or strengthen the community.

Most of the stalwarts of the community are companies who have in the past invested a lot of effort into CFML (not the community necessarily, but for themselves), and it'd be a large $$$ cost for them to move. That's going to be why some of them continue to persist with CFML, and also encourage Lucee[:] as LAS is more likely to work with those small to mid-level companies to provide what they want out of CFML. Same as with Adobe and larger corporates.

The likes of Daemon and MSO.Net and Ortus and Pixl8 aren't going away when it comes to CFML due to how they've stacked their egg baskets, and I reckon they're more likely to stick with CFML than [it is that] LAS will [continue to] exist. Although one of them might consider "buying" Micha to perpetuate CFML for them if LAS founders. At which point Lucee will become kinda like OpenBD is for Alan Williamson's outfit

Sean was reading this and "emoting" with thumbs-ups in agreement and the like, then we got to chatting about OSS & CFML and that sort of thing. I'll spare you that.

But I stand by what I say above. I've been in the CFML community since 2001, and been active in the community all that time, and have been watching the ebb and flow of the CFML tide, be it ColdFusion or BlueDragon or Railo or Lucee. So I think my opinion is possibly an informed one. And I always offer it for the sake of informing, rather than posturing. People are quite welcome to disagree with me and to put a case forward as to why I'm mistaken. I like finding out I'm wrong, and I like debating (or arguing ;-) a topic with people. Provided they actually make a case.

Geoff Bowers (of afore-mentioned Daemon fame: he's the boss; and I think he's El Presidente of LAS) joined the conversation and suggested we should not be having such conversations, as they hurt the CFML community. And Geoff's position is that "The CFML Community" is a meaningful entity.

I disagree.

I disagree with a few points or inferences here:

  • Any conversation by experienced members of a given community will possibly have merit, even if they suggest that the best course of action for the community members is to prepare themselves for the time when "the community" doesn't really mean anything any more.
  • Even if the points being discussed have no merit, I disagree with the notion that the discussion should not be had because one member of the community doesn't like the sound of it, or it doesn't match their agenda.
  • I don't actually think "The CFML Community" is a thing in and of itself which automatically deserves preservation. This might seem odd, but I'm interested in the welfare of the members of the community, not some completely intangible label "The CFML Community". That's meaningless. I will help the individual people in the CFML community whenever I can (or, realistically, when I feel like it, but that's quite often ;-), and I think helpful advice to people who rely on CFML for their crust these days is: plan an exit strategy. It's convenient for Geoff to deny this (I'll get to that), but CFML as a language has had its day. It had a unique selling point a decade ago: it made making dynamic websites quite easy compared to where Perl or PHP or JSPs were at the time. It does not have that USP any more. CFML does a bunch of things really well, but not that much better than any number of more popular (and easy!) languages do these days. The thing is the other languages bring their own participatory communities with them, and with those communities come rich OSS efforts which ensure whatever one wants to do that might be off-piste for a language: there'll be a third-party module for that.


But I digress: no, I don't think "The CFML Community" is worth protecting. The community members are, and my best (-quality and -intentioned) advice is to start planning an exit strategy. You'd have to be an arsehole to place the preservation of "The CFML Community" ahead of the welfare of its members.

CFML has precisely one remaining relevant purpose: to keep CFML devs in work if they can't do anything else. CFML is not going places. It's staying where it is. But the ice floe where it's positioned itself is melting away, growing smaller and smaller, and nothing's gonna reverse this.

Geoff disagrees. That's cool, but other than "I disagree" and a few half-hearted ad hominems ("none taken", btw), he didn't actually refute a single thing I said. I'd be dead f***in' keen to hear what he has to say, though, to demonstrate how CFML is still alive in any meaningful way. And how it's a good prospect for uptake, and how CFMLers shouldn't be planning an exit strategy.

My cynicism suggests to me that Geoff is mostly concerned about "The CFML Community" because it's commercially prudent for him to do so: he's built a company and a livelihood on CFML. So it's beneficial to him for there to be "The CFML Community" out there, as they're potential revenue opportunities. That's completely fine and I see where he's coming from. But it seems disingenuous to me for him to be suggesting myself or others don't have the community's best interests at heart when we offer advice that is not convenient for his business model.

Another issue I'll touch on in this increasingly rambling diatribe is Geoff's assertion that my observations have an actual deleterious impact on the community. This is seemingly based on the notion that any negative comment is intrinsically deleterious, and that for "names" such as myself to be making these negative comments is more deleterious still. I guess cos ppl might be inclined to listen to my advice. Heaven f***in' forbid.

He's probably right. But let's get things in perspective here. Things that are deleterious to "The CFML Community":

  1. ColdFusion being a paid-for product in 2016;
  2. Adobe's lack of interest or lack of ability to keep the language contemporary;
  3. every other bloody language being more contemporary and compelling;
  4. and free.
  5. The apathy of most CFML devs, resulting in a community too under-represented to engender any momentum;
  6. Adobe's lack of ability to market ColdFusion;
  7. Railo v Lucee schism;
  8. Lucee 5 taking an inordinate amount of time to see light of day, for what seems like developer-self-indulgence;
  9. LAS dithering about the place and seeming very incoherent when it comes to what they're doing;
  10. [daylight]
  11. [more frickin' daylight]
  12. Me saying stuff Geoff doesn't like.

One could quibble about some of those items, but it's the scale of things that is significant. Almost all CFML users don't know who the f*** I am. A decent subset of those that do can think for themselves, and will take what I say with a grain of salt, or will already be (dis-)agreeing with me. And the others could do worse than consider what I have to say, and weigh it up for themselves.

Conversely, any CFMLer who cannot form their own opinions will have no idea who I am, and will never see any of my written opinions anyhow.

I hasten to add that the list above is "things that are deleterious to the CFML community". I see Lucee as a net force for good in the community. But if we're assessing what is unhelpful in the community, then they're a more significant player in all this than I am. And even Lucee's influence one way or the other is completely inconsequential compared to Adobe's.

But go on then, Geoff and others who think CFML is alive and well and is something that should be suggested for new adopters. Tell me why it is. Tell us... "The CFML Community"... why that's the case. Don't just call me an uppity dick or a big meany or whatever the f*** personal attack is easier for you to make. Do something.

Why is CFML still relevant?
Why should new people adopt it?
Why should people stick with it instead of perhaps protecting their future by examining other options?
Why should people not at least suggest that might be something to do?

Go on then...

--
Adam

Monday, 29 June 2015

Useless ColdFusion community members

This:
"I liked all of them, so I decided not to comment."

No.

If you're any sort of CFML developer, you will go and raise feature requests, point out bugs, and generally get into the swing of furthering your language.

Thursday, 26 February 2015

CFML: Dan Kraus - "more CF devs need to leave The Shire for a bit"

G'day:
I'm going through my inbox today, catching up with all the comments ppl have made that I've not replied to.

Dan Kraus made a comment the other week on that article "Lucee: does its future include CFML?". It and its follow-up warrant repeating and more eyeballs, so I said I'd promote them to an article so people would def notice them. I should have done this back when the article was fresh, but got sidetracked.

Wednesday, 25 February 2015

Dom posts interesting feedback

G'day:
I was gonna just bury this as a response to a comment in the original article, but I've decided Dom's comment is really excellent, so I'm promoting his comment & my feedback to be an article.

This was in reaction to my "Regarding codes of conduct and the nature of forum participation" article from a few days ago.

Monday, 23 February 2015

Regarding codes of conduct and the nature of forum participation

G'day:

There's been discussion about a Code of Conduct for the Lucee Google Group. Well, without anyone having the bottle to actually describe it as such... indeed, going so far as to suggest that's not what the thread was floating: "Tone and community guidelines".

Despite knowing full well that some of the points are directed at least partially (or possibly entirely) at me, I think it's an appropriate idea. Sorta.

That said, I'd like to have a look at a coupla issues that came up in that list of bullet points, from comments on the thread, and in general.

Saturday, 7 February 2015

Lucee: does its future include CFML?

G'day:
There's a lively thread on the Lucee Google Group at the moment: "Outsider Perspective". There's a fair bit of cruft in there... OK, it's mostly cruft... but there was one very interesting comment from Micha:

ACF compatibility
[...]
So what could be the medium for Lucee, that is very simple, the medium are templates (.cfc,.cfm).
What if Lucee acts different depending on the file extension.
So all templates with .cfm and .cfc extensions are still handled the old way, but files with the extension .lucee are handled in a modern way. this gives you the opportunity to still use your old code but you can extend it with new one and you have no confuguration nightmare anymore!

My emphasis, obviously.

Hmmmm. I tried to get Micha to elaborate for a public audience, but he hasn't been forthcoming yet, leaving it up to community speculation. Most of the speculation thus far has been around what the file extension should be (yes, really), with no real grist. I've been holding off on that thread until Micha antes up a bit.

However, on this blog... I'll allow myself to speculate away like nobody's business...

Friday, 7 November 2014

ColdFusion: Adobe kinda trumped me to this

G'day:
An article I've been drafting in my head is a suggestion for Adobe as to how they can make the ColdFusion release cycle less painful for their clients. They've actually done part of what I was thinking of advising today, by releasing the latest updater for ColdFusion 11 to beta for us to test ("ColdFusion 11: a decent bugfix update has been released to beta"). But I'll write down what I was thinking anyhow.

Saturday, 11 October 2014

The received wisdom of TDD [etc]: Sean's feedback

G'day:
During the week I solicited feedback on my assessment of "The received wisdom of TDD and private methods". I got a small amount of good, useful feedback, but not as much as I was hoping for.

However Sean came to the fore with a very long response, and I figure it's worth posting here so other people spot it and read it.


'ere 'tis, unabridged:

There are broadly two schools of thought on the issue of "private" methods and TDD:

1. private methods are purely an implementation detail that arise as part of the "refactor" portion of the cycle - they're completely irrelevant to the tests and they never need testing (because they only happen as part of refactoring other methods when your tests are already passing).

2. encapsulation is not particularly helpful and it's fine to just make things public if you want to add tests for new behavior within previously private methods.

The former is the classical position: classes are a black box except for their public API, and it's that public API that you test-drive.

The latter is an increasingly popular position that has gained traction as people rethink OOP, start to use languages that don't have "private", or start working in a more FP style. Python doesn't really have private methods (sure, you can use a double underscore prefix to "hide" a function but it's still accessible via the munged name which is '_TheClass__theFunction' for '__theFunction' inside 'TheClass'). Groovy has a 'private' keyword (for compatibility with Java) but completely ignores it. Both languages operate on trust and assume developers aren't idiots and aren't malicious. In FP, there's a tendency toward making everything public because there are fewer side-effects and some helper function you've created to help implement an API function might be useful to users of your code - and it's safe when it has no side-effects!

When I started writing Clojure, coming from a background of C++, Java, and CFML, I was quite meticulous about private vs public... and in Clojure you can still easily access a "private" function by using its fully-qualified name, e.g., `#'some.namespace/private-function` rather than just `private-function` or `some.namespace/private-function`. Using `#'` bypasses the access check. And the idiom in Clojure is generally to just make everything public anyway, possibly dividing code into a "public API" namespace and one or more "implementation" namespaces. The latter contain public functions that are only intended to be used by the former - but, again, the culture of trust means that users _can_ call the implementation functions if they want, on the understanding that an implementation namespace might change (and is likely to be undocumented).

My current position tends to be that if I'm TDD-ing code and want to refactor a function, I'll usually create a new test for the specifics of the helper I want to introduce, and then refactor into the helper to make all the tests pass (the original tests for the existing public function and the new test(s) for the helper function). Only if there's a specific reason for a helper to be private would I go that route (for example, it isn't a "complete" function on its own, or it manages side-effects that I don't want messed with outside of the calling function, etc). And, to be honest, in those cases, I'd probably just make it a local function inside the original calling function if it was that critical to hide it.

Google for `encapsulation harmful` and you'll see there's quite a body of opinion that "private by default" - long held to be a worthy goal in OOP - is an impediment to good software design these days (getters and setters considered harmful is another opinion you'll find out there).

That was longer than I intended!

Yeah Sean, but it was bloody good. It all makes sense, and also goes some way towards making me think I'm not a lunatic (at least not in this specific context).

And I have a bunch of reading to do on this whole "encapsulation harmful" idea. I'd heard it mentioned, screwed my nose up a bit, but didn't follow up. Now I will. Well: when I have a moment.

Anyway, there you go. Cheers for the effort put in to writing this, Sean. I owe you a beer or two.

Righto.

--
Adam

Tuesday, 7 October 2014

The received wisdom of TDD and private methods

G'day:
As you might know, I've recently taken on a different role as a PHP developer. My employer are shifting our code base from CFML to PHP for various reasons ("So long, and thanks for all the CF"). One facet of this is we're moving off a venerable, well-established CFML code base to a new code base in the infancy of its existence (some work on it had been done before we picked up the project). In a different language.

And we've gone from having about 4000 unit tests to zero.



A coupla us are having a quick exploratory look at PHPUnit to see what it can do, and whether there are any "gotchas" we need to be aware of when testing in PHP. So far the findings seem to be that unit testing in CFML via MXUnit and TestBox - especially in conjunction with MockBox - is a fair bit easier than in PHP, where we have more hoops to jump through to achieve the same ends. This is mostly down to CFML's objects being far more dynamic than PHP's, so injecting testing helper methods and exposing non-public methods for testing is much easier.
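
Here's the sort of trick I mean in CFML. This is just a sketch - the component and method names are invented - but it leans on the long-standing mixin behaviour whereby a function injected into an object runs with that object's variables scope:

// a helper declared in the test CFC; once injected into the object under
// test, it runs with *that* object's variables scope, which is where
// private methods live
function callPrivate(required string methodName, struct args = {}) {
    var privateMethod = variables[arguments.methodName];
    return privateMethod(argumentCollection = arguments.args);
}

// and then in a test:
sut = new SomeComponent();          // hypothetical component with private methods
sut.callPrivate = this.callPrivate; // inject the mixin
result = sut.callPrivate("somePrivateHelper", {input: 1});

The PHP equivalent means going via reflection (ReflectionMethod and setAccessible(true)): rather more ceremony for the same outcome.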

The challenges that PHP has thrown at us have caused us to revisit our TDD and unit testing strategy. Not necessarily to revise it, but to revisit it and see if it does need revision.

Our existing policy is that all code is written via a TDD & "Clean Code" approach:
  1. the need for a piece of functionality is identified;
  2. a failing test is written to test a facet of the functionality;
  3. code is written to pass the test;
  4. repeat from 2 until the functionality is complete;
  5. refactor if necessary.

This is applied both to new functionality as well as maintenance of existing functionality. The TDD side of things drives the code design, and also demonstrates that downstream changes don't have adverse effects on earlier requirements. The usual sort of thing.

On new work, this will mean creating a new public method (and the class it needs to go in, as required), and implementing the requirement. So all the tests are on that public method. When we refactor the code, those tests all still work, which is fine.

As a result of the refactoring we might end up with some of the code moving out into public methods of other classes, or - just as often - private methods of the same class.

The newly refactored public methods go through the same TDD approach as the initial one, although this is quite often just a re-homing of the tests which were previously testing the unfactored functionality from the original public method. And the original public method's tests are selectively updated to remove any tests which are now the domain of the new methods, and mocks are used in lieu of calling these factored-out methods in the remaining tests of the original function.

And traditionally we have done exactly the same thing with code that was simply refactored into private methods.

Perhaps I can demonstrate this with some pseudocode. At the end of our first TDD round, and before refactoring, we have this:
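
(Names and numbers invented for the example, obviously.)

// ProductService.cfc - everything still inline in the one public method
component {

    public numeric function getDiscountedPrice(required numeric price, required numeric rate) {
        if (arguments.rate < 0 || arguments.rate > 1) {
            throw(type="InvalidRateException");
        }
        return arguments.price * (1 - arguments.rate);
    }

}

// ProductServiceTest.cfc - all the tests target that one public method
component extends="testbox.system.BaseSpec" {

    function run() {
        describe("getDiscountedPrice", function() {
            it("applies the rate to the price", function() {
                var service = new ProductService();
                expect(service.getDiscountedPrice(100, 0.25)).toBe(75);
            });

            it("rejects an out-of-range rate", function() {
                var service = new ProductService();
                expect(function() {
                    service.getDiscountedPrice(100, 1.5);
                }).toThrow(type="InvalidRateException");
            });
        });
    }

}

Then when we refactor - say the validation moves out into a private method - those tests all still pass, and the question becomes whether the new private method warrants tests of its own.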

Friday, 3 October 2014

Comments in code: avoid if possible

G'day:
This came up in a code review yesterday... I thought I had already written about it, but cannae find the article anywhere so perhaps I imagined it. On the other hand I apologise if I have already written about this and it's a repeat. Point it out to me if so.

So, anyway, my contention today is that comments in code are grossly over-utilised, and in general one should avoid them. The code I was looking at yesterday had a case of stating the obvious. This is not the actual code, but an analogous example:
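
// get the orders for the user
orders = orderService.getOrdersForUser(userId);

// loop over the orders
for (order in orders) {
    // process the order
    processOrder(order);
}

(Again: made-up code, but the same shape as the real thing.) Every one of those comments simply restates what the code already says. They add nothing, they're extra text to wade through, and the moment someone renames one of those methods without touching the comment, they become lies.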

Tuesday, 9 September 2014

Please indicate your irritation @ the ColdFusion Team

G'day:
They've gone and done it again, as appropriately described by Adam Tuttle:


Wednesday, 13 August 2014

Even newbies can help solve community problems

G'day:
I was quite pleased yesterday when I got my first "accepted answer" for a Ruby-tagged question on Stack Overflow. I hasten to add that the question was one of converting CFML code to Ruby code, so I had a helping hand, plus it was simply a matter of seeing where the logic of the Ruby code didn't match that of the CFML. I think even someone not versed in Ruby at all could answer it. So I'm not claiming it as much of a personal achievement.

What is good about it is that it demonstrates that answering questions on Stack Overflow is not the sole purview of experts in a given field; even newbies and intermediate-level people can provide good input into other people's questions. Also, it's a great learning exercise. Stepping out of one's own comfort zone and into someone else's problem domain exposes us to technology and concepts we might not use every day.

The code we use at work is very five-tagger (perhaps "ten-tagger" might be a better description; reading "Five-tagger? what n-tagger am I?"), and I never use CFML's UI tags, PDF tags, image-processing tags, spreadsheet tags, .net integration, etc. So when I see someone asking a question on those topics, it's a good excuse to roll my sleeves up and find out what the answer is. This helps both the petitioner as well as myself. Win.

And if you're a less experienced CFML developer (purely chronologically, I've been doing CFML for 14 years, so I have picked up a reasonable amount of knowledge along the way, so I don't think it's hubris to imply I'm "experienced"), there is even more scope to learn new stuff. Don't use CFCs? Check how people are using them in the real world. Don't use CFScript? There's a lot of CFScript code on show on Stack Overflow. Have never used a "closure" in your life? Here's your chance to see them in action.

Saturday, 19 July 2014

Oh dear: tags vs script in CFML

G'day:
I'm surprised this one didn't come up earlier (like back when I did this one: "/>"). But Marcus Fernstrom and I started talking about this last night on Twitter, and - despite "best" efforts - I've concluded 140-char limits are just stupid for such things.

My personal opinion regarding CFML code is that tags are very clunky, and seldom have an appropriate place in modern code, compared to the clarity and comparative elegance of CFScript-based code.
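
To illustrate with a trivial (and made-up) example, here's the same logic both ways:

<!--- tags --->
<cfset total = 0>
<cfloop from="1" to="#arrayLen(prices)#" index="i">
    <cfif prices[i] GT 0>
        <cfset total = total + prices[i]>
    </cfif>
</cfloop>

// script
total = 0;
for (price in prices) {
    if (price > 0) {
        total += price;
    }
}

Neither is doing anything clever, but one of them reads like code, and the other reads like an angle-bracket obstacle course.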

Some people will already feel their undies bunching up at this point... get over yourselves. This is just my opinion. You're entitled to yours, I'm entitled to mine. This is my blog, so we're getting mine.