Showing posts with label TDD. Show all posts

Thursday 21 January 2021

Listening to the console log of a page loaded with Puppeteer

G'day:

This is a follow-up from something I touched on yesterday ("Polishing my Vue / Puppeteer / Mocha / Chai testing some more"). In that exercise I was using Puppeteer to load a web page I was testing, and then pulling some DOM element values out and checking they matched expectations. The relevant bits of code are thus:

describe.only("Tests of githubProfiles page using github data", function () {
    let browser;
    let page;
    let expectedUserData;

    before("Load the page and test data", async function () {
        await loadTestPage();
        expectedUserData = await loadTestUserFromGithub();
    });

    let loadTestPage = async function () {
        browser = await puppeteer.launch( {args: ["--no-sandbox"]});
        page = await browser.newPage();

        await Promise.all([
            page.goto("http://webserver.backend/githubProfiles.html"),
            page.waitForNavigation()
        ]);
    }

    it("should have the expected person's name", async function () {
        let name = await page.$eval("#app>.card>.content>a.header", headerElement => headerElement.innerText);
        name.should.equal(expectedUserData.name);
    });

  • Load the page with Puppeteer
  • Example test checking the page's DOM

This code seemed to be running fine, and the tests were passing. As I was adding more code to my Vue component on the client end, I suddenly found the tests started to fail. Sometimes. If I ran them ten times, they'd fail maybe three times. At the same time, if I was just hitting the page in the browser, it was working 100% of the time. Odd. Clearly I was doing something wrong, and I'm new to all this async code I'm using, so I figured I was using values before they were available or something. But it seemed odd that this was only manifesting sometimes. The way the tests were failing was telling though:

1) Tests of githubProfiles page using github data
       should have the expected person's name:

      AssertionError: expected '' to equal 'Alex Kyriakidis'
      + expected - actual

      +Alex Kyriakidis

The values coming from the DOM were blank. And note that it's not a case of the DOM being wrong, because if that was the case, the tests would barf all the time, with something like this:

Error: Error: failed to find element matching selector "#app>.card>.content>a.header"

The relevant mark-up here is:

<a class="header" :href="pageUrl">{{name}}</a>

So {{name}} isn't getting its value sometimes.

I faffed around for a bit reading up on Vue components, and their lifecycle handlers in case created was not the right place to load the data or something like that, but the code seemed legit.

My JS debugging is not very sophisticated: it's basically a matter of console.logging stuff and seeing what happens. So I chucked a bunch of log calls in:

created () {
    console.debug(`before get call [${this.username}]`);
    axios.get(
        `https://api.github.com/users/${this.username}`,
        {
            auth: {
                username: this.$route.query.GITHUB_PERSONAL_ACCESS_TOKEN
            }
        }
    )
    .then(response => {
        console.debug(`beginning of then [${response.data.name}]`);
        this.name = response.data.name;
		// [etc...]
        console.debug("end of then");
    });
    console.debug("after get call");
}

Along with some other ones around the place, these all did what I expected when I hit the page in the browser:

beginning of js
githubProfiles.js:46 before VueRouter
githubProfiles.js:52 before Vue
githubProfiles.js:23 before get call [hootlex]
githubProfiles.js:43 after get call
githubProfiles.js:63 end of js
githubProfiles.js:33 beginning of then [Alex Kyriakidis]
githubProfiles.js:41 end of then

I noted that the then call was being fulfilled after the mainline code had finished, but in my test I was waiting for the page to fully load, so I'd catered for this. Repeated from above:

await Promise.all([
    page.goto("http://webserver.backend/githubProfiles.html"),
    page.waitForNavigation()
]);

I ran my tests, and was not seeing anything in the console which momentarily bemused me. But then I was just "errr… duh, Cameron. That stuff is logging in the web page's console. Not Node's console from the test run". I'm really thick sometimes.

This flummoxed me for a bit as I wondered how the hell I was going to get telemetry out of the page I was calling in the Puppeteer headless browser. Then it occurred to me that I would not be the first person to wonder this, so I just RTFMed.

It's really easy! The Puppeteer Page object exposes events one can listen for, and one of those events is console. Perfect. All I needed to do was put this into my test code:

page = await browser.newPage();

page.on("console", (log) => console.debug(`Log from client: [${log.text()}] `));

await Promise.all([
    page.goto("http://webserver.backend/githubProfiles.html"),
    page.waitForNavigation()
]);

Then when I ran my tests, I was console-logging the log entries made in the headless browser as they occurred. What I saw was:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
    1) should have the expected person's name
    2) should have the expected person's github page URL
    3) should have the expected person's avatar
    4) should have the expected person's joining year
    5) should have the expected person's description
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
    6) should have the expected person's number of friends
     should have the expected person's friends URL


  1 passing (4s)
  6 failing

Note how the tests get under way before the then call takes place. And shortly after that, the tests start passing, because by then the dynamic values have actually been loaded into the DOM. This was my problem: that page.waitForNavigation() was not waiting long enough! My first reaction was to blame Puppeteer, but I quickly realised that was daft and defensive of me: given this is the first time I've messed with this stuff, almost certainly I was doing something wrong. Then it occurred to me that a page is navigable once the various asset files are loaded, but not necessarily once any code in them has run. Duh. I figured Puppeteer would have thought of this, so there'd be something else I could make it wait for. I googled around and found the docs for page.waitForNavigation, and indeed I needed to be doing this:

page.waitForNavigation({waitUntil: "networkidle0"})

After I did that, I found the tests still failing sometimes, but now due to a timeout:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
    1) "before all" hook: Load the page for "should have the expected person's name"


  0 passing (4s)
  1 failing

  1) Tests of githubProfiles page using github data
       "before all" hook: Load the page for "should have the expected person's name":
     Error: Timeout of 5000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/share/fullstackExercise/tests/functional/public/GithubProfilesTest.js)

I had the timeout set at five seconds, but now that the tests were also waiting for the client to finish its async call, I was just edging over that five-second mark sometimes. So I bumped it to ten seconds, and thenceforth the tests all passed all the time. I've left the telemetry in for one last successful run here:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
     should have the expected person's name
     should have the expected person's github page URL
     should have the expected person's avatar
     should have the expected person's joining year
     should have the expected person's description
     should have the expected person's number of friends
     should have the expected person's friends URL


  7 passing (5s)

OK so that was a bit of a newbie exercise, but I'm a noob so yer gonna get that. It was actually pretty fun working through it though. I'm really liking all this tooling I'm checking out ATM, so yer likely get a few more of these basic articles from me.

Righto.

--
Adam

Saturday 9 November 2019

I need to backtrack on some TDD guidance. I think

G'day:
Yes, this thing still exists.

Possibly foolishly my employer has made me tech lead of one of our software applications, and part of that is to lead from the front when it comes to maintaining code quality, including our TDD efforts. So I tend to pester the other devs about TDD techniques and the like. O how they love that. Some of them. Occasionally. Actually seldom. Poor bastards.

Anyway, recently I started to think about the difference between "fixing tests that would break if we made this new change to the code" and "writing a TDD-oriented test case before making the code change". Just to back up a bit... this is in the context of "write a failing test for the case being tested... then write the code to pass the test". The red and green from the "red green refactor" trope. This started from doing code review and seeing existing tests being updated to accommodate new code changes, but not seeing any actual new test case explicitly testing the new requirement. This did not sit well with me, so I've been asking the devs "where's yer actual test case for the change we're making here?"

To help with this notion I wrote a quick article on WTF I was on about. I was initially pleased with it and thought I'd nailed it, but when I asked my more-knowledgeable colleague Brian Sadler to sanity check it, he wasn't convinced I had nailed it, and moreover wasn't sure I was even right. Doh. I've been mulling over this for the rest of the week and I've come to the conclusion... yeah, he's right and I'm wrong. I think. I'm writing this article to put my thinking down in public, and see what you lot think. If anyone still reads this I mean. I will be chasing some ppl to read it, that said...

I can't just reproduce the original article cos it was an internal document and work owns it. So I'm reproducing a similar thing here.

Here's some sample code. I stress the code used here bears no relation to any code or situation we have in our codebase. It's just a simplified case to contextualise the discussion:

class ThingRepository {

    public function getThing($id) {
        $dao = DaoFactory::getThingDao();
        $dbData = $dao->getThing($id);

        return Thing::createFromArray($dbData);
    }
}

It gets raw data for the requested thing from a DAO and returns the data as a Thing object.
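
(For the sake of following along, assume Thing is a simple value object along these lines. This is just a sketch for illustration - the exact shape doesn't matter, and the property names and array keys here are assumptions:)

class Thing {

    public $id;
    public $name;

    public function __construct($id, $name) {
        $this->id = $id;
        $this->name = $name;
    }

    // assumed factory: builds a Thing from the raw DAO row
    public static function createFromArray($data) {
        return new Thing($data['id'], $data['name']);
    }
}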

It also has an existing test:


class ThingRepositoryTest extends TestCase {

    private $repository;

    protected function setUp() : void {
        $this->repository = new ThingRepository();
    }

    public function testGetThingReturnsCorrectThingForId(){
        $testId = 17;
        $expectedName = "Thing 1";
        $thing = $this->repository->getThing(17);

        $this->assertInstanceOf(Thing::class, $thing);
        $this->assertSame($testId, $thing->id);
        $this->assertSame($expectedName, $thing->name);
    }
}

And this all works fine:

C:\src\php\general>vendor\bin\phpunit test\tdd\fixingExistingTests\ThingRepositoryTest.php
PHPUnit 8.4.3 by Sebastian Bergmann and contributors.

.                                                                   1 / 1 (100%)

Time: 54 ms, Memory: 4.00 MB

OK (1 test, 3 assertions)

C:\src\php\general>

If you were paying attention, you'll've spotted that the repository is tightly coupled to the DAO implementation here. At least we're using a factory, but the DAO is still baked into the repository implementation. We want to use dependency injection to provide the DAO to the repository.

Here's where my stance gets contentious.

Someone has addressed this requirement, and what they've done is updated that existing test to expect the passed-in DAO:

class ThingRepositoryExistingTestUpdatedTest extends TestCase {

    private $repository;
    private $dao;

    protected function setUp() : void {
        $this->dao = $this->createMock(ThingDao::class);
        $this->repository = new ThingRepositoryWithDaoDependency($this->dao);
    }

    public function testGetThingReturnsCorrectThingForId(){
        // implementation that now uses mocked DAO
    }
}

Note that this is exactly how I presented this information in the original work article. With the test implementation changes elided. This becomes relevant later.

Having prepped that test - and it fails nicely, cool - the dev then goes and sorts out the repo class.

I went on to reject that approach as not being valid TDD. My position is that "fixing other tests so they won't fail after we make the change" is not TDD, it's just technical debt recovery. If we're doing TDD, then we need to start with an explicit test case for the change we're going to make. I'll put this in a highlight box to draw attention to it:

Fixing other tests so they won't fail after we make the change is not TDD, it's just technical debt recovery. If we're doing TDD, then we need to start with an explicit test case for the change we're going to make.
This is what I'd expect to see:

class ThingRepositoryTest extends TestCase {

    private $repository;
    private $dao;

    protected function setUp() : void {
        $this->dao = $this->createMock(ThingDao::class);
        $this->repository = new ThingRepositoryWithDaoDependency($this->dao);
    }

    public function testGetThingUsesDaoPassedIntoConstructor(){
        $testId = 17;
        $mockedThingData = ['id'=>$testId, 'thing' => 'stuff'];

        $this->dao
            ->expects($this->once())
                        ->method('getThing')
                        ->with($testId)
                        ->willReturn($mockedThingData);

        $result = $this->repository->getThing($testId);

        $this->assertEquals(
            new Thing($mockedThingData['id'], $mockedThingData['thing']),
            $result
        );
    }
    
    // ... rest of test cases that were already there, including this one...

    public function testGetThingReturnsCorrectThingForId(){
        // updated implementation that doesn't break because of the DAO change
    }
}

Here we have an explicit test case for the change we're making. This is TDD. We're explicitly injecting a DAO into the test repo, and we're checking that DAO is being used to get the data.

This test is red:

C:\src\php\general>vendor\bin\phpunit test\tdd\fixingExistingTests\ThingRepositoryTest.php
PHPUnit 8.4.3 by Sebastian Bergmann and contributors.

F                                                                   1 / 1 (100%)

Time: 61 ms, Memory: 4.00 MB

There was 1 failure:

1) me\adamcameron\general\test\tdd\fixingExistingTests\ThingRepositoryTest::testGetThingUsesDaoPassedIntoConstructor
Expectation failed for method name is "getThing" when invoked 1 time(s).
Method was expected to be called 1 times, actually called 0 times.

FAILURES!
Tests: 1, Assertions: 4, Failures: 1.

C:\src\php\general>

But that means we're now good to make the changes to the repository:

class ThingRepository {

    private $dao;

    public function __construct($dao) {
        $this->dao = $dao;
    }

    public function getThing($id){
        $dbData = $this->dao->getThing($id);

        return Thing::createFromArray($dbData);
    }
}

Now our test is green, and we're done.

I rounded out the article with this:

As a rule of thumb... if you don't start your TDD work by typing in a case name - eg testGetThingUsesDaoPassedIntoConstructor - then you're not doing TDD, you are just doing maintenance code. These are... not the same thing.

Brian read the article and - my words not his - didn't get the distinction I was trying to make. Why was the first test update not sufficient? I could not articulate myself any differently other than repeating what I'd said - no, not helpful - so we decided to get together later and look at how the implementation of the test I had elided would pan out, and whether that wouldn't itself be a decent test. Due to work being busy this week we never had that confab, but I continued to think about it, and this morning concluded that Brian was right, and I was mistaken.

It comes down to how I omitted the implementation of that first test we updated. I did not do this on purpose, I just didn't want to show an implementation that was not helpful to the article. But in doing that, I never thought about what the implementation would be, and set up a straw man to support my argument. I went back this morning and implemented it...

public function testGetThingReturnsCorrectThingForId(){
    $testId = 17;
    $expectedName = "Thing 1";

    $this->dao->expects($this->once())
        ->method('getThing')
        ->with($testId)
        ->willReturn(['id' => $testId, 'name' => $expectedName]);

    $thing = $this->repository->getThing(17);

    $this->assertInstanceOf(Thing::class, $thing);
    $this->assertSame($testId, $thing->id);
    $this->assertSame($expectedName, $thing->name);
}


This is what I'd need to do to that first test to get it to not break when we change the DAO to being injected. And... erm... it's the same frickin' logic as in the test I said we needed to explicitly write. I mean I wrote the two versions at different times so I solved the same problem slightly differently, but it's the same thing for all intents and purposes.

So... let's revisit my original position then:

Fixing other tests so they won't fail after we make the change is not TDD, it's just technical debt recovery. If we're doing TDD, then we need to start with an explicit test case for the change we're going to make.

It's not intrinsically true. Actually fixing existing tests can indeed be precisely the TDD-oriented cover we need.

I love being wrong. No sarcasm there: I really do. Now I'm thinking better about this topic. Win.

The only thing left after the distillation is the difference between these two:

Original test name (detailing its original test case requirement):

testGetThingReturnsCorrectThingForId

And my suggested case name for the new injected-DAO requirement:

testGetThingUsesDaoPassedIntoConstructor

Now I do still think there's merit in being that explicit about what we're testing. But it's well into the territory of "perfect is the enemy of good". We're fine just updating that original test.

I thought of how to deal with this, and came up with this lunacy:

public function testGetThingUsesDaoPassedIntoConstructor()
{
    $this->testGetThingReturnsCorrectThingForId();
}

But I think that's too pedantic even for me. I think we can generally just take it as given that the existing test covers us.



I'll close by saying I do think it's good advice, when starting the testing for a code change, to first come up with the test case name. It gets one into the correct mindset for approaching things in a TDD manner. But one oughtn't be as dogmatic about this as I clearly tend to be... existing tests quite possibly could provide the TDD-oriented cover we need for code changes. We do still need to start with a breaking test, and the test does actually need to cover the case for the change... but it might be an existing test.

Oh and making the code change then fixing the tests that break as a result is not TDD. That's still wrong ;-)

I'm dead keen to hear what anyone might have to say about all this...

Righto.

--
Adam

Saturday 19 August 2017

Mishmash: code review as a learning exercise, loops vs higher-order-functions and testing

G'day:
There's a few things going on in my head here, and none individually are worthy of an article, but perhaps if I dwell on a bunch of stuff something might shake out. I have no idea as I have not drafted this one in my head like I usually do, I'm just "let's start typing and see if I get anywhere". Eek.

OK so yesterday I was code reviewing one of my mates' work, and breezed past this code (this is not the exact code, but it's a replication of it):

public function getSomeThings($rowsToGet)
{
    $thingsValues = $this->dao->getThingsFromStorage($rowsToGet);
    
    $things = [];
    foreach ($thingsValues as $thingValues) {
        $things[] = $this->thingFactory->createFromRawData($thingValues);
    }

    return $things;
}

I just put a note in the pull request "array_map?".

Now I do a few things in a code review. Firstly I check for obvious flaws, unclean code, bad test coverage, typos, etc, and I will raise those as "tasks" in BitBucket: stuff that needs to be addressed before I'll press the "Approved" button. This is our standard approach to code review, everyone expects it, and it's fine. It works.

But secondly I'll just make observations about the code, ask questions, suggest alternative approaches we could have taken, etc. Just as food for thought, a discussion point, or a chance for some learning. This was the latter. I was not suggesting to my mate they needed to change their code to match my whim... the code is fine as it is. It was just "did you think of this?".

An interesting conversation ensued, which I will paraphrase here. Note that Fred and I do not work in the same physical office.

Fred (his name's not Fred): I'm not sure the benefit of swapping simple code for a more complex solution, unless there's some performance gain I'm unaware of.

Adam (my name is Adam): Hmmm... well array_map is one expression, and it reduces the complexity here to a single variable assignment statement(*). The loop version requires building the array by hand, and requires two assignments and the whole loop thing too. So I think the array_map approach is less complex innit?

return array_map(function ($thingValues) {
    return $this->thingFactory->createFromRawData($thingValues);
}, $thingsValues);

(*) OK it's two statements. Never let it be said I don't hyperbolise to better make my point.

One thing I did not get around to saying is that performance is not a consideration here. One of the calls is hitting an external DB, so any processing time differences are going to be eclipsed - by orders of magnitude - by the external call. And, yes, I know that if we'd be inclined to worry about micro-optimisations, then calling a higher-order function is slower than a loop. One should simply not care.

Adam (cont'ed): furthermore the array_map version only requires a single test, whereas the loop version needs three. That in itself demonstrates the increased complexity.

Fred: err... why's that then?

I'll break away from the conversation here.

The dev process here is that we identify the need to have a function to get stuff from the DB, and then model that and return it to the calling code. So doing TDD we write a test which uses a mocked-out DAO to return known data, and we test that the values in the resultant models are what we expected them to be based on the mocked DAO data.

Here's a very simplified, self-contained example:

First our test:

describe('comparing testing for foreach vs array_map', function() {
    given('sut', function () {
        return new ForeachVsArrayMap();
    });
    given('allObjects', function () {
        return [
            (object) ['id' => 1, 'mi' => 'tahi'],
            (object) ['id' => 2, 'mi' => 'rua'],
            (object) ['id' => 3, 'mi' => 'toru'],
        ];
    });
    describe('tests of foreach', function() {
        it('returns the numbers as objects', function () {
            $result = $this->sut->getObjects();
            expect($result)->toEqual($this->allObjects);
        });
    });
});

And now our implementation:

class ForeachVsArrayMap
{
    private $allData;

    public function __construct()
    {
        $this->allData = [
            ['id' => 1, 'mi' => 'tahi'],
            ['id' => 2, 'mi' => 'rua'],
            ['id' => 3, 'mi' => 'toru']
        ];
    }

    private function getNumbers($rows)
    {
        return array_slice($this->allData, 0, $rows);
    }

    public function getObjects($rows = 3)
    {
        $data = $this->getNumbers($rows);
        $numbers = [];
        foreach ($data as $row) {
            $numbers[] = (object) $row;
        }

        return $numbers;
    }
}

And that passes. That's our TDD side of things done.

However we now have to do edge testing as well. We've got a loop in there, and all loops need tests for three things: zero iterations, one iteration, and more than one iteration. It's easy to introduce logic errors around the plurality of loop iterations.

Here our TDD test has covered the "more than one iteration" case, so we only need the other two:

it('handles one result', function () {
    $result = $this->sut->getObjects(1);
    $expected = [$this->allObjects[0]];

    expect($result)->toEqual($expected);
});

it('handles zero results', function () {
    $result = $this->sut->getObjects(0);
    $expected = [];

    expect($result)->toEqual($expected);
});

Pretty simple tests, but still: two additional tests.

Why don't we need these two tests when using array_map? Because array_map's iteration is all handled within its implementation. It's PHP's job to test that, not ours. So all we need to test is that the inline callback works. And we only need to test that once: when it's called, it does what we expect it to do. We can safely stipulate that array_map will always return an array with the same number of elements as the input value. PHP deals with that; we don't need to.

One good thing about TDD (and test coverage in general) is now we can safely swap in the array_map version, and the test coverage still passes, so we know - with a high degree of certainty - that our code is A-OK.

public function getObjects($rows = 3)
{
    return array_map(function ($row) {
        return (object) $row;
    }, $this->getNumbers($rows));
}

Coincidentally I had another conversation, in another code review, on Friday about the necessity of always testing loops for 0, 1, >1 iterations. "One can just look at the code and know it's fine". This statement didn't come up in the conversation, but I've heard it before.

Three weeks ago I was show-stopped by a bug in one of our libraries. Taking the analogy above, this was the code:

public function getObjectsUsingForeach($rows = 3)
{
    $data = $this->getNumbers($rows);

    foreach ($data as $row) {
        $numbers[] = (object) $row;
    }

    // do something else with $numbers here
    // ...
}

The code had no tests. Can you see what's wrong? The general expectation when calling a function to get data is that there's actually data to get, given the provided criteria. In the case I fell foul of, the value of $rows was zero. And this was legit. So if $rows is 0, what value does $numbers have after the loop? Well: $numbers was undefined.

Had the dev done their testing properly, then they'd've noticed their bug. They never initialised $numbers outside of the code in the loop, so there's a chance it might never get defined. In this case not only did the dev not do the edge-case testing, they didn't even do the baseline TDD testing either, but that's a different story.
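
The fix itself is trivial once a failing zero-iterations test points at it: initialise the array before the loop. Sketched here against the analogy code from above:

public function getObjectsUsingForeach($rows = 3)
{
    $data = $this->getNumbers($rows);

    $numbers = []; // initialised up front, so zero iterations still leaves a defined, empty array
    foreach ($data as $row) {
        $numbers[] = (object) $row;
    }

    // do something else with $numbers here
    // ...
}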

I needed to down-tools and fix the library before I could continue my own work. And it was made very tricky by the fact the code in the library had no tests at all, and wasn't written in a clean-code fashion (so was basically untestable). But we are not allowed to work that way any more, so I needed to do a bunch of work the original developer ought to have done.

I have also fallen foul of this brainfart too:

public function getObjectsUsingForeach($rows = 3)
{
    $data = $this->getNumbers($rows);

    foreach ($data as $row) {
        $numbers = [];
        $numbers[] = (object) $row;
    }

    // do something else with $numbers here
    // ...
}

This usually crops up when code that was previously used once suddenly needs to be used for multiple iterations. Note that the initialisation of $numbers is inside the loop, so it will re-initialise on every iteration. This would be caught by the "more than one iterations" test. This is rarer, and usually only crops up in completely untested code: it'd be a shit tester who tested a loop with only one iteration as the test, but I've seen it done. Saves time not having to mock so much data, you see? Berks.

So, anyhow... always test yer loops. And try to avoid loops if there are built-in functions that handle the iteration for you.

Also always do TDD, but also understand that there is more testing to do after the TDD cycle. TDD is not the end of the story there.



In the title of this article I mention "code review as a learning exercise". I've greatly summarised the back and forth between Fred and myself here... there was an order of magnitude more discussion than there was code under discussion. This sort of interaction is gold. Some people get the idea that code review is kinda like a teacher marking homework. It's partly that, but that's only part. As programmers and coders our job is one of communication: communication of instructions to the computer itself, but also to our future colleagues and selves about the nature of those instructions. Programming languages are a means to communicate instructions to other humans; it's tangential that the computer can also be made to understand them. Similarly, reviewing each other's code should be a communication and collaboration process, and we absolutely should be having these discussions. Both Fred and I benefited from this conversation: a lot of what I typed here wasn't formalised in my brain before explaining it, but now I have a much clearer picture of what I do and why. And Fred knows too. This was not a waste of time, as we're both now better developers for it. And it might have seemed like a lot of back and forth in the code review, but it really only represents about 10min of discussion. And all this came from just a conversation point, not me saying "change the code".

If we were in the same office, we might have had the conversation via voice instead of typing in the code review, and some people might think that's better. In a way it is, but it also means that the details of the conversation don't get recorded, so they benefit fewer people. That said, when programmers are not agreeing on things, often it is better to switch to voice instead of continuing the discussion via text-based back-and-forth. It's just easier and more collaborative sometimes.

Right, that's that. A bit random, but didn't turn out so bad I think. And I seem to have found a point to make.

Righto.

--
Adam

Monday 31 July 2017

Yeah, you do want 100% test coverage

G'day:
I've read a number of blog articles recently that have been eschewing 100% test coverage. The rationale being that there's little difference between 90% and 100%, and that that last 10% can be quite tricky to attain as we - being human 'n' all - sometimes leave the worst to last. Obviously we should all be doing TDD so that situation would not arise, but - again - we're human so don't always do TDD for one reason or another ("can't be arsed" is the usual one).

Another rationale is that it's not a race to 100%. IE: the object of the exercise is delivering working software which adds business value; it's not "getting 100% test coverage".

The latter is true. But we still aim for 100% test coverage for basically one good reason: visibility.

First, look at this:



Not bad I guess. Not great.

Now look at this:



We've accidentally forgotten to cover something new. Can you see it? (It's one of the provider methods.)

On the other hand look at these two:



Somewhat easier to spot, yeah?

This is bloody handy. In this case I just commented-out one of my tests to get the report off 100%, but in the real world I'd see that and go "huh, WTF?", and perhaps discover I had a logic branch untested, or a test I thought was covering something wasn't actually doing so. Or I just forgot to put the @covers annotation on the test. Whatever the situation: it's handy for the shortfall to be so obvious.

What it also means is part of our pull-request process is running a Bamboo build which runs all our tests (plus some other code quality tools):

Everything's green here, so it's OK for code review. If it was failing, I'd get a nudge from my reviewers going "erm... you sure this is ready to go?" And even if I thought it was, we can't merge anything whilst the build is failing anyhow. The issue needs to be addressed.

Now the key thing here is this:


We don't test everything, but we cover everything. Where "cover" might be "we don't need to test this". In this case a single variable assignment does not need testing. We trust PHP to follow simple instructions. The thing is: this still shows up as green in the test coverage report.

We do this in a few situations:

  • the method has no branching logic and is otherwise pretty simple. Ignore it.
  • we're backfilling coverage on old code we need to maintain, but it's such a mess we simply don't have time to solve all the testing shortfalls right now, so we test what is urgent (the bit we're changing), and just mark the rest of it as ignored. Now that file is green, and we can see if it stops being green.
  • similar to the above, but we know the code in question is not long for this world. So we ignore it and stick in a TODO cross-referencing the ticket that will be dealing with it. And the ticket must already be lined up to be worked on "soon".

There might be other situations where we need to be pragmatic and skip some stuff, but this needs to be explained to the code reviewers, and get their agreement.
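
By way of illustration, in PHPUnit that "we don't need to test this" marking is just an annotation. A minimal sketch (the class and method here are made up for the example):

class Widget
{
    private $name;

    /**
     * A single variable assignment: nothing worth testing, but we don't
     * want it dragging the coverage report off 100% either.
     *
     * @codeCoverageIgnore
     */
    public function setName($name)
    {
        $this->name = $name;
    }
}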

But being able to run a report that says "100%", and shows up anything that isn't 100% is gold.

Get yer coverage report green, but don't be afraid to be pragmatic and skip stuff, if there's an informed reason to do so.

Now this isn't to say one can just pepper ignore annotations around the place just to get the report green. That's daft. The object of the exercise is not to make some report green / say 100%. It's to get good code quality. This is just a tool to help identify the quality: it's still up to the devs to provide the quality. Don't forget that. It needs to be an informed, honest decision for one to ignore code.

Righto.

--
Adam

Tuesday 7 February 2017

TDD: still testing too much

G'day:
A while back I wrote this article - Maintaining untested code, and mocking too much when backfilling test (with JavaScript & Jasmine) - wherein my colleague pulled me up for testing too much when doing my TDD. I'm working with PHP this time, and I've just been pulled up on it again. For much the same reason. It created an interesting thought exercise... well, interesting-ish... so I'll write it up here.

Firstly, I cannot use the actual code I am working with due to the usual over-baked "commercial sensitivity" considerations, so this is simply an analogy, but it captures the gist of what I was doing.

We have an adapter that handles HTTP calls. It's basically a generic wrapper that masks the API differences between Guzzle 5 and Guzzle 6, both of which we are still using, but we want a unified adapter to work with both. It's very generic, and basically has methods for GET, POST, PUT etc, and doesn't encompass any of our business logic.

Business reality is that we use this mostly to transmit JSONified objects, so we're adding a layer in front of it to take the raw text responses and rehydrate them. I'm writing the GET method.

When I start designing my solution, I know that the happy path will return me a Response object that has JSON as its content. I just need to deserialise the JSON and return it. This is easy enough:

public function testGetAsObject() {
    $expectedResult = ['result' => 'valid JSON'];
    $expectedContent = json_encode($expectedResult);

    $adapter = $this->mockAdapterWithResponse(200, $expectedContent);

    $client = new HttpClient($adapter);
    $result = $client->getAsObject($this->uri);

    $this->assertEquals($expectedResult, $result);
}

mockAdapterWithResponse just does this:

private function mockAdapterWithResponse($statusCode, $content){
    $response = $this->getMockBuilder(Response::class)
        ->setMethods(['getStatusCode', 'getContent'])
        ->getMock();
        
    $response->expects($this->once())
        ->method('getStatusCode')
        ->willReturn($statusCode);

    $response->expects($this->once())
        ->method('getContent')
        ->willReturn($content);

    $adapter = $this->getMockBuilder(Adapter::class)
        ->setMethods(['get'])
        ->getMock();
        
    $adapter->expects($this->once())
        ->method('get')
        ->willReturn($response);
        
    return $adapter;
}

(there are other tests that already needed this). All it's doing is enabling us to set what content the call to the HttpClient will return, because we don't care how the client goes about doing this; we just care what we do with the content afterwards.

So back to my test code:

public function testGetAsObject() {
    $expectedResult = ['result' => 'valid JSON'];
    $expectedContent = json_encode($expectedResult);

    $adapter = $this->mockAdapterWithResponse(200, $expectedContent);

    $client = new HttpClient($adapter);
    $result = $client->getAsObject($this->uri);

    $this->assertEquals($expectedResult, $result);
}

All I'm doing here is mocking the adapter to return a predetermined result, and I then test that what comes back from my function is the deserialisation of that value. That's it. I then implement some code to make that test pass:

public function getAsObject($uri, $params = [], $headers = []) {
    $response = $this->get($uri, $params, $headers);

    $object = json_decode($response->getContent(), true);

    return $object;
}

And that's green: all good.

I've already done too much in my test, but have not noticed (and not put it in for code review, so no-one's pointed it out).

The next requirement for this method is that if the content coming back from the adapter isn't JSON, then just throw an exception instead of trying to deserialise it. Again, pretty easy to TDD that:

/** @expectedException \Exception */
public function testGetAsObjectThrowsExceptionOnNonJsonContent() {
    $content = 'NOT_VALID_JSON';

    $adapter = $this->mockAdapterWithResponse(200, $content);

    $client = new HttpClient($adapter);
    $client->getAsObject($this->uri);
}

Note there aren't even any explicit assertions in there: the @expectedException annotation takes care of it.

This test fails at present: no exception is thrown thanks to PHP's stupid-arse behaviour if json_decode is given non-JSON: it just returns null. Bravo, PHP. But that's what we want... we've designed our code and have tests to prove the design, now we need to implement the code:

public function getAsObject($uri, $params = [], $headers = []) {
    $response = $this->get($uri, $params, $headers);

    $object = json_decode($response->getContent(), true);
    
    if (is_null($object)) {
        throw new \Exception('Unable to decode JSON result');
    }
    return $object;
}

The tests are green, so all good.

But what have I done wrong? Nothing major, but once again I'm testing too much. Actually my tests are fine. It's my helper method that's "wrong":

private function mockAdapterWithResponse($statusCode, $content){
    $response = $this->getMockBuilder(Response::class)
        ->setMethods(['getStatusCode', 'getContent'])
        ->getMock();
        
    $response->expects($this->once())
        ->method('getStatusCode')
        ->willReturn($statusCode);

    $response->expects($this->once())
        ->method('getContent')
        ->willReturn($content);

    $adapter = $this->getMockBuilder(Adapter::class)
        ->setMethods(['get'])
        ->getMock();
        
    $adapter->expects($this->once())
        ->method('get')
        ->willReturn($response);
        
    return $adapter;
}

Why am I testing that those methods are called once? How many times they're called is completely irrelevant to what I'm testing.

What got into my head is that I was being too dogmatic when I was writing my code, and I also cheated a bit with the TDD. Instead of designing the test variation first, and then implementing the code to pass the test, I was doing both at the same time. And it shows from my test implementation. Basically I didn't design my solution first, I designed it as I went, so - whilst tricking myself into thinking I was doing TDD - I was asking myself "right, what does my code need to do? It needs to call get on the adapter. OK, so I'll test I call it. What next... I need to get the content from the response. Right: I'll make sure that getContent gets called". All very stolid and admirable, but: wrong. All this is doing is testing my code (almost line by line!). TDD is not about testing my code; it's about testing my design. And it's the design and behaviour of the API of the method I'm testing. As I've said before, it'd be excellent if TDD was retconned to mean "Test Driven Design", not "Test Driven Development". The former is closer to the intent of the exercise.

I don't need to test those methods are called once: there's no logic in my function that will vary how many times they might be called, so they will be called once. We can trust PHP to do what we tell it to do by giving it source code.
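
So the helper only needs to stub the return values. Something like this, perhaps - the same helper as above, just with the expects($this->once()) calls dropped (PHPUnit's method() shorthand defaults to any()):

private function mockAdapterWithResponse($statusCode, $content){
    $response = $this->getMockBuilder(Response::class)
        ->setMethods(['getStatusCode', 'getContent'])
        ->getMock();

    // just stub the values; no assertions about how often they're called
    $response->method('getStatusCode')
        ->willReturn($statusCode);

    $response->method('getContent')
        ->willReturn($content);

    $adapter = $this->getMockBuilder(Adapter::class)
        ->setMethods(['get'])
        ->getMock();

    $adapter->method('get')
        ->willReturn($response);

    return $adapter;
}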

The logic flaw here was demonstrated by what my code reviewer observed: if you think you need to check how many times that functionality is called, why are you not also concerned with how many times json_decode is called? It's an important part of your functionality which definitely needs to be called. Ha... yeah, "duh", I thought. This drove home that I was testing too much.

Another thing is that I was getting too distracted by what my implementation was whilst writing my test. The requirements of that function are:

  • make a request, the response of which might be JSON:
  • if it is: return the deserialised content;
  • if it's not: raise an exception.

That first step is implementation detail from the POV of testing. What we're testing with TDD is the outcome of the function call. Which is: an object, or an exception. We're also only supposed to be testing the logic in the function the test is actually calling. I emphasise "logic" there because we're not testing the code. TDD is not about that. We're testing the logic. This is why we have two test variations: one wherein the logic to throw the exception is not true, so we get our object back; another when that if condition is true, so we get the exception.

We only need to test logic branching code (conditionals and loops, basically).

We do not need to prove every single statement is executed.

I was reminded of a rule of thumb to apply when considering whether to use expects or not. Basically it boils down to this: if you have test variations which might result in a dependency being called a different number of times, and that variation is germane to the result, then verify the call count.

Another good point made by my other colleague Brian was that when mocking stuff... only mock the bare minimum to get the test passing. Then stop. And that's probably the right place to leave it be.

An example of this might be if we had this logic in our method:

public function getAsObject($uri, $params = [], $headers = []) {
    $response = $this->get($uri, $params, $headers);

    $object = json_decode($response->getContent(), true);
    
    if (is_null($object)) {
        $this->logger("Bung content returned: $object");
        throw new \Exception('Unable to decode JSON result');
    }
    return $object;
}

Logging is an optional outcome of that method call, so we might want to test that the logger definitely gets called once in that second test. We could also be belt-and-braces about it and test it's not called at all in the happy-path test.
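
For instance, the exception-path test might then also assert that the logging happened. Just a sketch: I'm assuming here that the logger is a constructor-injected dependency with a log() method, and that HttpClient takes it as a second argument - both of which are inventions for the example:

/** @expectedException \Exception */
public function testGetAsObjectLogsBadContent() {
    $content = 'NOT_VALID_JSON';

    $adapter = $this->mockAdapterWithResponse(200, $content);

    // hypothetical injected logger; the real code might shape this differently
    $logger = $this->getMockBuilder(Logger::class)
        ->setMethods(['log'])
        ->getMock();

    $logger->expects($this->once())
        ->method('log');

    $client = new HttpClient($adapter, $logger);
    $client->getAsObject($this->uri);
}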

Or let's say the code was a caching proxy and one is testing the uncached and cached variations:

function get($id){
    if ($this->cache->exists($id)){
        return $this->cache->get($id);
    }
    return $this->dao->get($id);
}

In the uncached variation, the data is not found in the cache, so we want to make sure that the DAO is called to get the data from the DB. In the cached variation, we want to make sure the DAO isn't called. Even here, in the first example we're not really testing that the DAO call was made specifically; we're testing that the logic around the cache check didn't trigger the early return. We're testing the if statement, not the DAO call.
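
The tests for those two variations might look something like this (again just a sketch, with invented fixture names; the point is that the once() / never() expectations are earning their keep here, because the call count is the thing the logic varies):

public function testGetUsesDaoWhenIdNotCached() {
    $this->cache->method('exists')->willReturn(false);

    $this->dao->expects($this->once())
        ->method('get')
        ->with($this->testId)
        ->willReturn($this->testData);

    $result = $this->proxy->get($this->testId);

    $this->assertSame($this->testData, $result);
}

public function testGetDoesNotUseDaoWhenIdCached() {
    $this->cache->method('exists')->willReturn(true);
    $this->cache->method('get')->willReturn($this->testData);

    $this->dao->expects($this->never())
        ->method('get');

    $result = $this->proxy->get($this->testId);

    $this->assertSame($this->testData, $result);
}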

Bottom line: I think that if I find myself starting to do a call count on a mocked method, I need to stop and think about whether I will have cases where the expected call count will be different, and even then I ought only bother if the call-count variation is important to the outcome of the function call. If I'm only ever checking for one call count value... I'm doing it wrong.

And now rather than talking about my day job... I should actually be doing it...

Righto.

--
Adam

Tuesday 1 November 2016

PHP: what exactly are we testing?

G'day:
"the rumours of this blog's demise... [etc]". Yeah. I'm still here. More about that in a separate article.

Here's a completely fictitious scenario which bears absolutely no relation to a code review discussion I had at work today. I don't even know why I mention work. This, after all, has nothing to do with my day job. Or my colleagues. I'm just making it all up.

Here's some code I just made up for the sake of discussion:

<?php 

namespace me\adamcameron\someApp\service;

use \me\adamcameron\someApp\exception;

class SomeService {

    public function getTheThingFromTheWebService($id){
        // bunch of stuff elided
        
        try {
            $result = $this->connector->getThing($id);
            
            // bunch of stuff elided
            
            return $finalResult;
        } catch (exception\NotFoundException $e) {
            // bunch of stuff elided
            
            throw new exception\ServiceException("The Thing with ID $id could not be retrieved");
         }
    }
}


Basically we're calling a web service, and the connector we use wraps up all the HTTP bumpf, and with stuff like 404 responses it throws a NotFoundException up to the service tier so that the service doesn't need to concern itself with the fact the connector is dealing with a REST web service. It could just as easily be connecting straight to a DB and the SELECT statement returned zero rows: that's still a NotFoundException situation. That's cool. We're all-good with this approach to things. For reasons that are outwith this discussion (TBH, I'm slightly contriving the situation in the code above, but maintaining the correct context) we don't want to let the NotFoundException bubble any further, we want to throw a different exception (one that other upstream code is waiting to catch), and provide a human-friendly message with said exception.

But for testing, I need to actually make sure that the Service handles this exception properly, and the two elements of this are that it chucks its own exception, and it includes that human-friendly message when it does.

So I've tested both of those.

Because I'm a well-behaved, team-playing dev, I TDD my code as I go, and having just added that exception-handling, I need to test it... er, I mean: being about to add that exception-handling code, I need to write my tests first. So, anyway, whether I wrote the test before or after I wrote the code is neither here nor there. I came up with this test case:

<?php

namespace me\adamcameron\someApp\test\service;

use \me\adamcameron\someApp\service\SomeService;

/** @coversDefaultClass \me\adamcameron\someApp\service\SomeService */
class SomeServiceTest extends PHPUnit_Framework_TestCase {

    public function setup() {
        $this->setMockedConnector();
        $this->testService = new SomeService($this->mockedConnector);
    }

    /**
        @covers ::getTheThingFromTheWebService
        @expectedException \me\adamcameron\someApp\exception\ServiceException
        @expectedExceptionMessage The Thing with ID 1 could not be retrieved
    */
    public function testGetTheThingsFromTheWebServiceWillThrowServiceExceptionWhenIdNotFound() {
        $testId = 1;

        $this->mockedConnector
            ->method('getThing')
            ->with($testId)
            ->will($this->throwException(new \me\adamcameron\someApp\exception\NotFoundException()));
            
        $this->testService->getTheThingFromTheWebService($testId);
    }
    
    private function setMockedConnector(){
        $this->mockedConnector = $this->getMockBuilder('\me\adamcameron\connector\MyConnector')
            ->disableOriginalConstructor()
            ->setMethods(['getThing'])
            ->getMock();
    }

}

I've left a bunch of contextual and mocking code in there so you can better see what's going on. But the key parts of this test are the annotations regarding the exception.

(we're using an older version of PHPUnit, so we need to do this via annotations, not expectations, unfortunately. But anyway).

This went into code review, and I got some feedback (this is a paraphrase. Of, obviously, a completely fictitious conversation. Remember this is not really work code I'm discussing here):

  • Not too sure about the merits of testing the whole message string here. For the purposes of testing, we don't care about text, we just care about the $id being correct. Perhaps use @expectedExceptionMessageRegExp instead, with like "/.*1.*/" as the pattern.
  • not sure 1 is a great ID to mock, TBH

Well I can't fault the latter bit. If one is gonna be actually checking for values, then use values that are really unlikely to accidentally occur as side-effects of something else going on. Basically 0 and 1 are dumb test values. There's always gonna be a risk they'll end up bubbling up via an accident rather than on purpose. I usually use random prime numbers (often taken from this list of the first 1000 primes). They're just not likely to coincidentally show up somewhere else in one's results at random.

But the first bit is interesting. My first reaction was "well, what's the important bit here? The number or the error message?" We wanna check the error message cos, given that suggested regex pattern, the exception message could simply be "1", which is no help to anyone. And yet that'd pass the test. Surely the important thing is the guidance: "The Thing with ID [whatever] could not be retrieved".

But that's woolly thinking on my part. I'm not doing tests for the benefit of humans here. And no code is ever gonna rely on the specifics of that message. TDD tests (and unit tests in general) are about testing logic, not "results". And they don't give a shit about the Human Interface. We've got QA for that.

There's no need to test PHP here. If we have this code:

throw new exception\ServiceException("The Thing with ID $id could not be retrieved");

We don't need to test:
  • that PHP knows what a string is
  • that PHP knows how to interpolate a variable
  • that one can pass a string to an Exception in PHP
  • etc
That's testing an external system (and showing no small amount of hubris whilst we do so!), and is outwith the remit of TDD or unit testing in general. If we can't trust PHP to know what a string is: we're in trouble.

What we do need to test is this:

public function getTheThingFromTheWebService($id){
    // bunch of stuff elided
    
    try {
        $result = $this->connector->getThing($id);
        
        // bunch of stuff elided
        
        return $finalResult;
    } catch (exception\NotFoundException $e) {
        // bunch of stuff elided
        
        throw new exception\ServiceException("The Thing with ID $id could not be retrieved"); 
    }
}


  • The ID comes in...
  • ... it's used...
  • ... and when the call that uses it fails...
  • ... we use it in the exceptional scenario.

The logic here is that the same ID that's passed to the function is the one reported in the exception message. Not what the exception message is: that's PHP's job.

So in reality here... we just need to test the ID is referenced in the exception. TBH, I'd not even usually bother doing that. It's just we're pushing to have more useful exception messages at the moment, so we're trying out "enforcing" this behaviour. I think it's a worthwhile exercise at least whilst the devs are getting used to it. Once it's part of our standard practice, I reckon we can stop worrying about what's in the error message.

But in the interim I'm gonna fix me test and see if I can pass this code review...
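
Something like this, maybe - a sketch of the revised test, using the @expectedExceptionMessageRegExp approach the reviewer suggested, and a prime number for the ID:

/**
    @covers ::getTheThingFromTheWebService
    @expectedException \me\adamcameron\someApp\exception\ServiceException
    @expectedExceptionMessageRegExp /\b661\b/
*/
public function testGetTheThingsFromTheWebServiceWillThrowServiceExceptionWhenIdNotFound() {
    $testId = 661; // a prime, so it's unlikely to show up in the message by coincidence

    $this->mockedConnector
        ->method('getThing')
        ->with($testId)
        ->will($this->throwException(new \me\adamcameron\someApp\exception\NotFoundException()));

    $this->testService->getTheThingFromTheWebService($testId);
}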

Righto.

--
Adam

Saturday 2 April 2016

Endorsement: Ciaran McNulty's "Why Your Test Suite Sucks" presentation is really good

G'day:
I was just gonna put a Twitter message out about this, but anyone with their head screwed on ignores me on Twitter, but might read this nonsense.

Anyway, Ciaran really knows his stuff about TDD, and he gives a good presentation. I fancy myself as reasonably OK with TDD, but this presentation had me go "holy f*** I'm doing that wrong", and "ah, yeah... that's how we work around that challenge we have", and as well ratifies a bunch of stuff I seem to be getting right. Where "right" for the purposes of this is "the way he recommends".

The prezzo is PHP-centric, but really don't let that worry you. The code is easy to follow, and this is more a presentation on technique and approach more than implementation.

I recommend it to anyone who is doing TDD, or who isn't doing TDD and suspects they should. Or isn't doing TDD and doesn't suspect they should (you're wrong, so watch the video!).

Let's see if I can embed this thing...


Thanks Ciaran. I'm gonna be keeping an eye on what you have to say about stuff from now on!

Righto.

--
Adam

Saturday 5 March 2016

CFML: go have a look at a Code Review

G'day:
I've already put more effort into this than a Saturday morning warrants, so I'll keep this short and an exercise in copy and paste.

Perennial CFMLer James Mohler has posted some code on Code Review:  "Filtering the attributes for a custom tag". I'll just reproduce my bit of the review, with his code by way of framing.

Go have a look, and assess our code as an exercise.

James's functions:

Original:

string function passThrough(required struct attr)   output="false"  {

    local.result = "";

    for(local.myKey in arguments.attr)    {
        if (variables.myKey.left(5) == "data-" || variables.myKey.left(2) == "on" || variables.myKey.left(3) == "ng-")  {

            local.result &= ' #local.myKey.lcase()#="#arguments.attr[local.myKey].encodeForHTMLAttribute()#"';
        } // end if 
    }   // end for

    return local.result;
}

His reworking:

string function passThrough(required struct attr)   output="false"  {

    arguments.attr.filter(
        function(key, value) { 
            return (key.left(5) == "data-" || key.left(2) == "on" || key.left(3) == "ng-");
        }
    ).each(
        function(key, value)    {
            local.result &= ' #key.lcase()#="#value.encodeForHTMLAttribute()#"';
        }
    );  

    return local.result;
}

Now for my code review (this is a straight copy and paste from my answer):

OK, now that I've had coffee, here's my refactoring:

function extractCodeCentricAttributesToMarkupSafeAttributes(attributes){
    var relevantAttributePattern = "^(?:data-|ng-|on)(?=\S)";

    return attributes.filter(function(attribute){
        return attribute.reFindNoCase(relevantAttributePattern);
    }).reduce(function(attributeString, attributeName, attributeValue){
        return attributeString& ' #attributeName#="#attributeValue#"';
    }, "");
}


Notes on my implementation:
  • I could not get TestBox working on my ColdFusion 2016 install (since fixed), so I needed to use CF11 for this, hence using the function version of encodeForHTMLAttribute(). The method version is new to 2016.
  • we could probably argue over the best pattern to use for the regex all day. I specifically wanted to take a different approach to Dom's one, for the sake of comparison. I'm not suggesting mine is better. Just different. The key point we both demonstrate is don't use a raw regex pattern, always give it a meaningful name.
  • looking at the "single-expression-solution" I have here, the code is quite dense, and I can't help but think Dom's approach to separating them out has merit.

Code review notes:
  • your original function doesn't work. It specifies variables.myKey when it should be local.myKey. It's clear you're not testing your original code, let alone using TDD during the refactoring process. You must have test coverage before refactoring.
  • the function name is unhelpfully non-descriptive, as demonstrated by Dom not getting what it was doing. I don't think my function name is ideal, but it's an improvement. I guess if we knew *why* you were doing this, the function name could be improved to reflect that.
  • lose the scoping. It's clutter.
  • lose the comments. They're clutter.
  • don't abbrev. variable names. It makes the code slightly hard to follow.
  • don't have compound if conditions like that. It makes the code hard to read. Even if the condition couldn't be simplified back to one function call and you still needed multiple subconditions: extract them out into meaningful variable names, eg: isDataAttribute || isOnAttribute || isNgAttribute (there's a sketch of this after the list)
  • key and value are unhelpful argument names
  • slightly controversial: but unless it's an API intended to be used by third-parties: lose the type checking. It's clutter in one's own code.
  • there's no need for the output modifier for the function in CFScript.
  • don't quote boolean values. It's just false not "false".
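
To be clear on that compound-conditions point, this is roughly the shape I mean. It's purely illustrative (the variable names are mine, and it's not code from James's submission):

for (var attributeName in arguments.attr) {
    var isDataAttribute = attributeName.left(5) == "data-";
    var isOnAttribute = attributeName.left(2) == "on";
    var isNgAttribute = attributeName.left(3) == "ng-";

    if (isDataAttribute || isOnAttribute || isNgAttribute) {
        // build the attribute string up as before
    }
}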

Unit tests for this:

component extends="testbox.system.BaseSpec" {

    function beforeAll(){
        include "original.cfm";
        include "refactored.cfm";
        return this;
    }

    function run(testResults, testBox){
        describe("Testing for regressions", function(){
            it("works with an empty struct", function(){
                var testStruct = {};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with an irrelevant attribute", function(){
                var testStruct = {notRelevant=3};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with each of the relevant attributes", function(){
                var relevantAttributes = ["data-relevant", "onRelevant", "ng-relevant"];
                for (relevantAttribute in relevantAttributes) {
                    var testStruct = {"#relevantAttribute#"=5};
                    var resultFromOriginal = passThrough(testStruct);
                    var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                    expect(resultFromRefactored).toBe(resultFromOriginal);
                }
            });
            it("works with a mix of attribute relevance", function(){
                var testStruct = {notRelevant=7, onRelevant=11};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with multiple relevant attributes", function(){
                var testStruct = {"data-relevant"=13, onRelevant=17, "ng-relevant"=19};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
        });

    }

}


Use them. NB: the includes in beforeAll() simply contain each version of the function.

One thing I didn't put in the review, as I don't think it's relevant to it, is that one should be careful with inline function expressions. They're not themselves testable, so if they get more than half a dozen statements or have any branching or conditional logic: extract them into their own functions, and test them separately.
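
By way of illustration, extracting the filter logic out would look something like this. It's just a sketch: isPassThroughAttribute() is a name of my own making, not something from the review above.

function isPassThroughAttribute(required string attributeName){
    var relevantAttributePattern = "^(?:data-|ng-|on)(?=\S)";
    return attributeName.reFindNoCase(relevantAttributePattern) > 0;
}

function extractCodeCentricAttributesToMarkupSafeAttributes(attributes){
    return attributes.filter(function(attribute){
        return isPassThroughAttribute(attribute);
    }).reduce(function(attributeString, attributeName, attributeValue){
        return attributeString & ' #attributeName#="#encodeForHTMLAttribute(attributeValue)#"';
    }, "");
}

The inline expressions are now just thin pass-throughs, and isPassThroughAttribute() can be tested on its own.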

Oh, and in case yer wondering... yes I did write the tests before I wrote any of my own code. And I found a coupla bugs with my planned solution (and assumptions about the requirement) in doing so. Always TDD. Always.

Anyway, all of this is getting in the way of my Saturday.

Righto.

--
Adam

Thursday 6 August 2015

CFML: code challenge from the CFML Slack channel

G'day:
I wasn't gonna write anything today, but then Jessica on Slack (to use her full name) posted a code puzzle on the CFML Slack channel, which I caught whilst I was sardined in a train to Ilford en route home. I nutted out a coupla solutions in my head instead of reading my book, and I saw this as a good opportunity (read: "excuse") to pop down to the local once I got home and squared away a coupla things, and key the code in and see if it worked. I was moderately pleased with the results, and I think I've solved it in an interesting way, so am gonna reproduce here.

Jessica's problem was thus:
so, i am attempting to write a function that lets you set variables specifically in a complex structure [...]
cached:{
    foo: {
        bar: "hi"
    }
}
setProperty("foo.bar", "chicken");
writeDump(cached); // should == cached.foo.bar = chicken

On the Slack channel there was talk of loops and recursion and that sort of thing, which all sounded fine (other people came up with answers, but I purposely did not look at them lest they influenced my own efforts). The more I work with CFML and its iteration methods (map(), reduce(), etc), the more I think actually having to loop over something seems a bit primitive, and non-descriptive. I looked at this for a few minutes... [furrowed my brow]... and thought "you could reduce that dotted path to a reference to the substruct I reckon". There were a few challenges there - if CFML had proper references it'd be easier - but I got an idea of the code in my head, and it seemed nice and easy.
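
To give an idea of what I mean, here's a sketch of that "reduce the path to a reference" approach. It's not the solution I eventually settled on, and I'm passing the target struct in explicitly rather than matching Jessica's exact setProperty() signature:

function setProperty(required struct target, required string path, required any value){
    var keys = path.listToArray(".");
    var lastKey = keys[keys.len()];
    keys.deleteAt(keys.len());

    // reduce the leading keys down to a reference to the struct that owns the final key
    var containingStruct = keys.reduce(function(subStruct, key){
        return subStruct[key];
    }, target);

    containingStruct[lastKey] = value;
    return target;
}

cached = {foo = {bar = "hi"}};
setProperty(cached, "foo.bar", "chicken");
writeDump(cached); // cached.foo.bar is now "chicken"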

Whilst waiting to Skype with my boy I wrote my tests:

Tuesday 23 December 2014

JavaScript: Jasmine for unit testing

G'day:
At work, I've been tasked with getting the team up to speed with TDD whilst we redevelop our website in PHP. I knocked together a presentation on the subject a coupla months ago, but before having a chance to present it, got shifted about in the internal dept structure for a month or so, and it kinda got temporarily shelved. I posted it online: "TDD presentation". I'm back on the PHP Team now, and need to update said presentation to be more work-requirement-specific, as well as cover unit testing our JavaScript. This has been on our agenda for a coupla years, but was never allowed to get any traction by the decision makers. Decision-making has improved now, so we're all go.

I have heard about Jasmine, and like the look of it, but have never actually downloaded / installed / ran it. I'm gonna do that today.

(Oh, blogging my work is not something I do... I'm actually off on sick leave at the moment - which I feel slightly guilty about - but I need to get this stuff done, so gonna do it today whilst I am unlikely to get interruptions. I figured as I'm doing it on my own time, I get to blog about it too ;-)

I am writing about this as I do it.

Jasmine


First up: Jasmine. This is what Wikipedia has to say about Jasmine:

Jasmine is an open source testing framework for JavaScript. It aims to run on any JavaScript-enabled platform, to not intrude on the application nor the IDE, and to have easy-to-read syntax. It is heavily influenced by other unit testing frameworks, such as ScrewUnit, JSSpec, JSpec, and RSpec.
And from its own website:

Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks.

And a code sample from the same page:

describe("A suite", function() {
  it("contains spec with an expectation", function() {
    expect(true).toBe(true);
  });
});

If you're familiar with TestBox (and if you're a CFML dev, you bloody should be!), then this will look comfortingly familiar. Indeed that code would run on TestBox. I know a bit about TestBox, so this is pleasing: I have a head start!
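
Just to show what I mean, here's that same suite as a TestBox spec (a quick sketch from memory, so forgive any slight drift):

component extends="testbox.system.BaseSpec" {

    function run(){
        describe("A suite", function(){
            it("contains spec with an expectation", function(){
                expect(true).toBe(true);
            });
        });
    }

}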

Download & Install

I'm gonna use 2.1.3, which is - at time of writing - the latest version of Jasmine. The download page is here: jasmine 2.1.3. I've D/Led that and unzipped it into a public directory.

Running

This is too easy... it ships with a file SpecRunner.html, and browsing to that runs the tests. Here are the samples:



Nice!

Example Code

Looking at the code within SpecRunner.html, we see this:

Friday 31 October 2014

TDD presentation

G'day:
A coupla weeks back I was tasked with giving a presentation to the new PHP troops: an introduction to TDD. I'm not on the PHP team now, but they might need to be doing some TDD shortly, so I figured I'd at least expose them to the presentation slides if not actually take them through it. And I figured I might as well put it up for everyone to have a look at, in case it's of any use to anyone.

It's very bare bones, and I dunno how useful it'll be if I'm not there discussing each slide, but... oh well. It's here: "TDD".

If nothing else, it might solicit some questions from people, which will be a good thing.

Cheers.

--
Adam

Saturday 11 October 2014

The received wisdom of TDD [etc]: Sean's feedback

G'day:
During the week I solicited feedback on my assessment of "The received wisdom of TDD and private methods". I got a small amount of good, useful feedback, but not as much as I was hoping for.

However Sean came to the fore with a very long response, and I figure it's worth posting here so other people spot it and read it.


'ere 'tis, unabridged:

There are broadly two schools of thought on the issue of "private" methods and TDD:

1. private methods are purely an implementation detail that arise as part of the "refactor" portion of the cycle - they're completely irrelevant to the tests and they never need testing (because they only happen as part of refactoring other methods when your tests are already passing).

2. encapsulation is not particularly helpful and it's fine to just make things public if you want to add tests for new behavior within previously private methods.

The former is the classical position: classes are a black box except for their public API, and it's that public API that you test-drive.

The latter is an increasingly popular position that has gained traction as people rethink OOP, start to use languages that don't have "private", or start working in a more FP style. Python doesn't really have private methods (sure, you can use a double underscore prefix to "hide" a function but it's still accessible via the munged name which is '_TheClass__theFunction' for '__theFunction' inside 'TheClass'). Groovy has a 'private' keyword (for compatibility with Java) but completely ignores it. Both languages operate on trust and assume developers aren't idiots and aren't malicious. In FP, there's a tendency toward making everything public because there are fewer side-effects and some helper function you've created to help implement an API function might be useful to users of your code - and it's safe when it has no side-effects!

When I started writing Clojure, coming from a background of C++, Java, and CFML, I was quite meticulous about private vs public... and in Clojure you can still easily access a "private" function by using its fully-qualified name, e.g., `#'some.namespace/private-function` rather than just `private-function` or `some.namespace/private-function`. Using `#'` bypasses the access check. And the idiom in Clojure is generally to just make everything public anyway, possibly dividing code into a "public API" namespace and one or more "implementation" namespaces. The latter contain public functions that are only intended to be used by the former - but, again, the culture of trust means that users _can_ call the implementation functions if they want, on the understanding that an implementation namespace might change (and is likely to be undocumented).

My current position tends to be that if I'm TDD-ing code and want to refactor a function, I'll usually create a new test for the specifics of the helper I want to introduce, and then refactor into the helper to make all the tests pass (the original tests for the existing public function and the new test(s) for the helper function). Only if there's a specific reason for a helper to be private would I go that route (for example, it isn't a "complete" function on its own, or it manages side-effects that I don't want messed with outside of the calling function, etc). And, to be honest, in those cases, I'd probably just make it a local function inside the original calling function if it was that critical to hide it.

Google for `encapsulation harmful` and you'll see there's quite a body of opinion that "private by default" - long held to be a worthy goal in OOP - is an impediment to good software design these days (getters and setters considered harmful is another opinion you'll find out there).

That was longer than I intended!

Yeah Sean but it was bloody good. It all makes sense, and also goes a way to make me think I'm not a lunatic (at least not in this specific context).

And I have a bunch of reading to do on this whole "encapsulation harmful" idea. I'd heard it mentioned, screwed my nose up a bit, but didn't follow up. Now I will. Well: when I have a moment.

Anyway, there you go. Cheers for the effort put in to writing this, Sean. I owe you a beer or two.

Righto.

--
Adam

Thursday 9 October 2014

Proposed TDD logic flow

G'day:
I've been really busy this week, and haven't been able to discover much of interest about PHP or CFML or anything, hence being quite quiet. I'll admit this is very much a filler article, and a bit of a cheeky nod to something Andy said the other day:



Earlier I asked for people's opinions regarding TDD vs private methods: "The received wisdom of TDD and private methods".

I didn't get much feedback [scowl], but thanks to Dom and Gerry (there's a cat 'n' mouse joke in there somewhere) for offering up some thoughts.

I needed to provide a workflow for the team, and I thought I'd stick it up here as well, as a bit of closure on the previous article. And to give Andy a picture to look at.

What do you think of this approach (other than "unreadable at that size": click here):


Forget the first two steps ("Assign Ticket", etc) as that's just our Jira workflow and a bit of context for our peeps; it's the rest of it after that I'm interested in. Also the associated process of how to maintain private methods whilst still adhering to TDD.

I think there's a reasonable mix of pragmatism and dogmatism in that?

Thoughts?

--
Adam

Tuesday 7 October 2014

The received wisdom of TDD and private methods

G'day:
As you might know, I've recently taken on a different role as a PHP developer. My employer are shifting our code base from CFML to PHP for various reasons ("So long, and thanks for all the CF"). One facet of this is we're moving off a venerable, well-established CFML code base to a new code base in the infancy of its existence (some work on it had been done before we picked up the project). In a different language.

And we've gone from having about 4000 unit tests to zero.



A coupla us are having a quick exploratory look at PHPUnit to see what it can do, and whether there's any "gotchas" we need to be aware of when testing in PHP. So far the findings seem to be that unit testing in CFML via MXUnit and TestBox - especially in conjunction with MockBox - is a bit easier than the hoops we need to jump through in PHP to achieve the same ends. This is mostly down to CFML's objects being far more dynamic than PHP's, so injecting testing helper methods and exposing non-public methods for testing is much easier.

The challenges that PHP has thrown at us have caused us to revisit our TDD and unit testing strategy. Not necessarily to revise it, but to revisit it and see if it does need revision.

Our existing policy is that all code is written via a TDD & "Clean Code" approach:
  1. the need for a piece of functionality is identified;
  2. a failing test is written to test a facet of the functionality;
  3. code is written to pass the test;
  4. repeat from 2 until the functionality is complete;
  5. refactor if necessary.

This is applied both to new functionality as well as maintenance of existing functionality. The TDD side of things drives the code design, and also demonstrates that downstream changes don't have adverse effects on earlier requirements. The usual sort of thing.

On new work, this will mean creating a new public method (and the class it needs to go in, as required), and implementing the requirement. So all the tests are on that public method. When we refactor the code, those tests all still work, which is fine.

As a result of the refactoring we might end up with some of the code moving out into public methods of other classes, or - as often - private methods of the same class.

The newly refactored public methods go through the same TDD approach as the initial one, although this is quite often just a re-homing of the tests which were previously testing the unfactored functionality from the original public method. And the original public method's tests are selectively updated to remove any tests which are now the domain of the new methods, and mocks are used in lieu of calling these factored-out methods in the remaining tests of the original function.

And traditionally we have done exactly the same thing with the refactoring that was simply moved into private methods.

Perhaps I can demonstrate this with some pseudocode. At the end of our first TDD round, and before refactoring, we have this:

Thursday 10 July 2014

Regex help please

G'day:
I'm hoping Peter Boughton or Ben Nadel might see this. Or someone else who is good @ regular expression patterns that I'm unaware of.

Here's the challenge...



Given this string:

Lorem ipsum dolor sit

I want to extract the leading sub-string which is:
  • no more than n characters long;
  • breaks at the previous whole word, rather than in the middle of a word;
  • if no complete single word fits, then matches at least the first word, even if the length of the sub-string is greater than n.

I've come up with this:

// trimToWord.cfm
string function trimToWord(required string string, required numeric index){
    return reReplace(string, "^((?:.{1,#index#}(?=\s|$)\b)|(?:.+?\b)).*", "\1", "ONE");
}
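
By way of example, this is the behaviour I'm expecting from it (assuming I've read my own regex right):

writeOutput(trimToWord("Lorem ipsum dolor sit", 11)); // Lorem ipsum
writeOutput(trimToWord("Lorem ipsum dolor sit", 8));  // Lorem
writeOutput(trimToWord("Lorem ipsum dolor sit", 3));  // Lorem (first word, even though it's longer than 3)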

It works, but that regex is a bit hoary.

Here's a visual representation of it (courtesy of regexper.com), by way of explanation:



Anyone fancy improving it for me?

Here's some unit tests to run your suggestions through: