Showing posts with label TDD.

Tuesday 1 November 2016

PHP: what exactly are we testing?

G'day:
"the rumours of this blog's demise... [etc]". Yeah. I'm still here. More about that in a separate article.

Here's a completely fictitious scenario which bears absolutely no relation to a code review discussion I had at work today. I don't even know why I mention work. This, after all, has nothing to do with my day job. Or my colleagues. I'm just making it all up.

Here's some code I just made up for the sake of discussion:

<?php 

namespace me\adamcameron\someApp\service;

use \me\adamcameron\someApp\exception;

class SomeService {

    public function getTheThingFromTheWebService($id){
        // bunch of stuff elided
        
        try {
            $result = $this->connector->getThing($id);
            
            // bunch of stuff elided
            
            return $finalResult;
        } catch (exception\NotFoundException $e) {
            // bunch of stuff elided
            
            throw new exception\ServiceException("The Thing with ID $id could not be retrieved");
         }
    }
}


Basically we're calling a web service, and the connector we use wraps up all the HTTP bumpf, and with stuff like 404 responses it throws a NotFoundException up to the service tier so that the service doesn't need to concern itself with the fact the connector is dealing with a REST web service. It could just as easily be connecting straight to a DB and the SELECT statement returned zero rows: that's still a NotFoundException situation. That's cool. We're all-good with this approach to things. For reasons that are outwith this discussion (TBH, I'm slightly contriving the situation in the code above, but maintaining the correct context) we don't want to let the NotFoundException bubble any further, we want to throw a different exception (one that other upstream code is waiting to catch), and provide a human-friendly message with said exception.
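To illustrate the shape of that, the connector might do something like this internally. This is purely a hypothetical sketch, not the real connector code: the injected HTTP client and its methods are made up, and the point is just that the 404-to-exception translation lives in the connector, so the service tier never sees any HTTP at all.

<?php

// hypothetical sketch only: not the real MyConnector
namespace me\adamcameron\connector;

use \me\adamcameron\someApp\exception;

class MyConnector {

    private $httpClient;

    public function __construct($httpClient){
        $this->httpClient = $httpClient; // some HTTP client, made up for this sketch
    }

    public function getThing($id){
        $response = $this->httpClient->get("/things/" . $id);

        if ($response->getStatusCode() == 404) {
            // the service tier only ever sees this, never the 404 itself
            throw new exception\NotFoundException("No Thing found with ID $id");
        }

        return json_decode((string) $response->getBody(), true);
    }
}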

But for testing, I need to actually make sure that the Service handles this exception properly, and the two elements of this are that it chucks its own exception, and it includes that human-friendly message when it does.

So I've tested both of those.

Because I'm a well-behaved, team-playing dev, I TDD my code as I go, and having just added that exception-handling, I need to test it... er, I mean: being about to add that exception-handling code, I need to write my tests first. So, anyway, whether I wrote the test before or after I wrote the code is neither here nor there. I came up with this test case:

<?php

namespace me\adamcameron\someApp\test\service;

use me\adamcameron\someApp\service\SomeService;

/** @coversDefaultClass \me\adamcameron\someApp\service\SomeService */
class SomeServiceTest extends \PHPUnit_Framework_TestCase {

    public function setup() {
        $this->setMockedConnector();
        $this->testService = new SomeService($this->mockedConnector);
    }

    /**
        @covers ::getTheThingFromTheWebService
        @expectedException \me\adamcameron\someApp\exception\ServiceException
        @expectedExceptionMessage The Thing with ID 1 could not be retrieved
    */
    public function testGetTheThingsFromTheWebServiceWillThrowServiceExceptionWhenIdNotFound() {
        $testId = 1;

        $this->mockedConnector
            ->method('getThing')
            ->with($testId)
            ->will($this->throwException(new \me\adamcameron\someApp\exception\NotFoundException()));
            
        $this->testService->getTheThingFromTheWebService($testId);
    }
    
    private function setMockedConnector(){
        $this->mockedConnector = $this->getMockBuilder('\me\adamcameron\connector\MyConnector')
            ->disableOriginalConstructor()
            ->setMethods(['getThing'])
            ->getMock();
    }

}

I've left a bunch of contextual and mocking code in there so you can better see what's going on. But the key parts of this test are the annotations regarding the exception.

(we're using an older version of PHPUnit, so we need to do this via annotations, not expectations, unfortunately. But anyway).
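(For comparison: on a newer PHPUnit the same intent is expressed with the expectation methods rather than annotations. Something like this - a sketch from memory rather than anything from our codebase, and assuming NotFoundException has a bog-standard exception constructor:)

    /** @covers ::getTheThingFromTheWebService */
    public function testGetTheThingFromTheWebServiceWillThrowServiceExceptionWhenIdNotFound() {
        $testId = 1;

        $this->mockedConnector
            ->method('getThing')
            ->with($testId)
            ->will($this->throwException(
                new \me\adamcameron\someApp\exception\NotFoundException()
            ));

        // these replace the @expectedException / @expectedExceptionMessage annotations
        $this->expectException(\me\adamcameron\someApp\exception\ServiceException::class);
        $this->expectExceptionMessage("The Thing with ID $testId could not be retrieved");

        $this->testService->getTheThingFromTheWebService($testId);
    }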

This went into code review, and I got some feedback (this is a paraphrase. Of, obviously, a completely fictitious conversation. Remember this is not really work code I'm discussing here):

  • Not too sure about the merits of testing the whole message string here. For the purposes of testing, we don't care about text, we just care about the $id being correct. Perhaps use @expectedExceptionMessageRegExp instead, with like "/.*1.*/" as the pattern.
  • not sure 1 is a great ID to mock, TBH

Well I can't fault the latter bit. If one is gonna be actually checking for values, then use values that are really unlikely to accidentally occur as side-effects of something else going on. Basically 0 and 1 are dumb test values. There's always gonna be a risk they'll end up bubbling up via an accident rather than on purpose. I usually use random prime numbers (often taken from this list of the first 1000 primes). They're just not likely to coincidentally show up somewhere else in one's results at random.

But the first bit is interesting. My first reaction was "well what's the important bit here? The number or the error message?" We wanna check the error message cos, given that suggested regex pattern, the exception message could simply be "1", which is no help to anyone. And yet that'd pass the test. Surely the important thing is the guidance: "The Thing with ID [whatever] could not be retrieved".

But that's woolly thinking on my part. I'm not doing tests for the benefit of humans here. And no code is ever gonna rely on the specifics of that message. TDD tests (and unit tests in general) are about testing logic, not "results". And they don't give a shit about the Human Interface. We've got QA for that.

There's no need to test PHP here. If we have this code:

throw new exception\ServiceException("The Thing with ID $id could not be retrieved");

We don't need to test:
  • that PHP knows what a string is
  • that PHP knows how to interpolate a variable
  • that one can pass a string to an Exception in PHP
  • etc
That's testing an external system (and showing no small amount of hubris whilst we do so!), and is outwith the remit of TDD or unit testing in general. If we can't trust PHP to know what a string is: we're in trouble.

What we do need to test is this:

public function getTheThingFromTheWebService($id){
    // bunch of stuff elided
    
    try {
        $result = $this->connector->getThing($id);
        
        // bunch of stuff elided
        
        return $finalResult;
    } catch (exception\NotFoundException $e) {
        // bunch of stuff elided
        
        throw new exception\ServiceException("The Thing with ID $id could not be retrieved"); 
    }
}


  • The ID comes in...
  • ... it's used...
  • ... and when the call that uses it fails...
  • ... we use it in the exceptional scenario.

The logic here is that the same ID that's passed to the function is the one reported in the exception message. Not what the exception message says: that's PHP's job.

So in reality here... we just need to test the ID is referenced in the exception. TBH, I'd not even usually bother doing that. It's just we're pushing to have more useful exception messages at the moment, so we're trying out "enforcing" this behaviour. I think it's a worthwhile exercise at least whilst the devs are getting used to it. Once it's part of our standard practice, I reckon we can stop worrying about what's in the error message.

But in the interim I'm gonna fix me test and see if I can pass this code review...
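Something like this, perhaps (just a sketch of where I expect it to land: a prime-number ID, and a regex pattern that only insists on the ID appearing in the message, not on the wording around it):

    /**
        @covers ::getTheThingFromTheWebService
        @expectedException \me\adamcameron\someApp\exception\ServiceException
        @expectedExceptionMessageRegExp /\b17\b/
    */
    public function testGetTheThingFromTheWebServiceWillThrowServiceExceptionWhenIdNotFound() {
        $testId = 17;

        $this->mockedConnector
            ->method('getThing')
            ->with($testId)
            ->will($this->throwException(
                new \me\adamcameron\someApp\exception\NotFoundException()
            ));

        $this->testService->getTheThingFromTheWebService($testId);
    }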

Righto.

--
Adam

Saturday 2 April 2016

Endorsement: Ciaran McNulty's "Why Your Test Suite Sucks" presentation is really good

G'day:
I was just gonna put a Twitter message out about this, but anyone with their head screwed on ignores me on Twitter, whereas they might read this nonsense.

Anyway, Ciaran really knows his stuff about TDD, and he gives a good presentation. I fancy myself as reasonably OK with TDD, but this presentation had me go "holy f*** I'm doing that wrong", and "ah, yeah... that's how we work around that challenge we have", as well as ratifying a bunch of stuff I seem to be getting right. Where "right" for the purposes of this is "the way he recommends".

The prezzo is PHP-centric, but really don't let that worry you. The code is easy to follow, and this is more a presentation on technique and approach than on implementation.

I recommend it to anyone who is doing TDD, or who isn't doing TDD and suspects they should. Or isn't doing TDD and doesn't suspect they should (you're wrong, so watch the video!).

Let's see if I can embed this thing...


Thanks Ciaran. I'm gonna be keeping an eye on what you have to say about stuff from now on!

Righto.

--
Adam

Saturday 5 March 2016

CFML: go have a look at a Code Review

G'day:
I've already put more effort into this than a Saturday morning warrants, so I'll keep this short and an exercise in copy and paste.

Perennial CFMLer James Mohler has posted some code on Code Review:  "Filtering the attributes for a custom tag". I'll just reproduce my bit of the review, with his code by way of framing.

Go have a look, and assess our code as an exercise.

James's functions:

Original:

string function passThrough(required struct attr)   output="false"  {

    local.result = "";

    for(local.myKey in arguments.attr)    {
        if (variables.myKey.left(5) == "data-" || variables.myKey.left(2) == "on" || variables.myKey.left(3) == "ng-")  {

            local.result &= ' #local.myKey.lcase()#="#arguments.attr[local.myKey].encodeForHTMLAttribute()#"';
        } // end if 
    }   // end for

    return local.result;
}

His reworking:

string function passThrough(required struct attr)   output="false"  {

    arguments.attr.filter(
        function(key, value) { 
            return (key.left(5) == "data-" || key.left(2) == "on" || key.left(3) == "ng-");
        }
    ).each(
        function(key, value)    {
            local.result &= ' #key.lcase()#="#value.encodeForHTMLAttribute()#"';
        }
    );  

    return local.result;
}

Now for my code review (this is a straight copy and paste from my answer):

OK, now that I've had coffee, here's my refactoring:

function extractCodeCentricAttributesToMarkupSafeAttributes(attributes){
    var relevantAttributePattern = "^(?:data-|ng-|on)(?=\S)";

    return attributes.filter(function(attribute){
        return attribute.reFindNoCase(relevantAttributePattern);
    }).reduce(function(attributeString, attributeName, attributeValue){
        return attributeString & ' #attributeName#="#encodeForHTMLAttribute(attributeValue)#"';
    }, "");
}


Notes on my implementation:
  • I could not get TestBox working on my ColdFusion 2016 install (since fixed), so I needed to use CF11 for this, hence using the function version of encodeForHTMLAttribute(). The method version is new to 2016.
  • we could probably argue over the best pattern to use for the regex all day. I specifically wanted to take a different approach to Dom's one, for the sake of comparison. I'm not suggesting mine is better. Just different. The key point we both demonstrate is don't use a raw regex pattern, always give it a meaningful name.
  • looking at the "single-expression-solution" I have here, the code is quite dense, and I can't help but think Dom's approach to separating them out has merit.

Code review notes:
  • your original function doesn't work. It specifies variables.myKey when it should be local.myKey. It's clear you're not testing your original code, let alone using TDD during the refactoring process. You must have test coverage before refactoring.
  • the function name is unhelpfully non-descriptive, as demonstrated by Dom not getting what it was doing. I don't think my function name is ideal, but it's an improvement. I guess if we knew *why* you were doing this, the function name could be improved to reflect that.
  • lose the scoping. It's clutter.
  • lose the comments. They're clutter.
  • don't abbrev. variable names. It makes the code slightly hard to follow.
  • don't have compound if conditions like that. It makes the code hard to read. Even if the condition couldn't be simplified back to one function call and you still needed multiple subconditions: extract them out into meaningful variable names, eg: isDataAttribute || isOnAttribute || isNgAttribute
  • key and value are unhelpful argument names
  • slightly controversial: but unless it's an API intended to be used by third-parties: lose the type checking. It's clutter in one's own code.
  • there's no need for the output modifier for the function in CFScript.
  • don't quote boolean values. It's just false not "false".

Unit tests for this:

component extends="testbox.system.BaseSpec" {

    function beforeAll(){
        include "original.cfm";
        include "refactored.cfm";
        return this;
    }

    function run(testResults, testBox){
        describe("Testing for regressions", function(){
            it("works with an empty struct", function(){
                var testStruct = {};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with an irrelevant attribute", function(){
                var testStruct = {notRelevant=3};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with each of the relevant attributes", function(){
                var relevantAttributes = ["data-relevant", "onRelevant", "ng-relevant"];
                for (relevantAttribute in relevantAttributes) {
                    var testStruct = {"#relevantAttribute#"=5};
                    var resultFromOriginal = passThrough(testStruct);
                    var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                    expect(resultFromRefactored).toBe(resultFromOriginal);
                }
            });
            it("works with a mix of attribute relevance", function(){
                var testStruct = {notRelevant=7, onRelevant=11};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
            it("works with multiple relevant attributes", function(){
                var testStruct = {"data-relevant"=13, onRelevant=17, "ng-relevant"=19};
                var resultFromOriginal = passThrough(testStruct);
                var resultFromRefactored = extractCodeCentricAttributesToMarkupSafeAttributes(testStruct);
                expect(resultFromRefactored).toBe(resultFromOriginal);
            });
        });

    }

}


Use them. NB: the includes in beforeAll() simply contain each version of the function.

One thing I didn't put as I don't think it's relevant to this code review is that one should be careful with inline function expressions. They're not themselves testable, so if they get more than half a dozen statements or have any branching or conditional logic: extract them into their own functions, and test them separately.
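To labour that point with a quick sketch - in PHP rather than CFML, purely because it's the other language doing the rounds on this blog, and the class here is entirely hypothetical - the predicate gets its own named method, so it can have its own tests, and the inline callback shrinks to nothing:

<?php

// hypothetical illustration: the filtering rule is a named, testable method,
// not logic buried in an inline closure passed to array_filter()
class AttributeFilter {

    public function isPassThroughAttribute($name){
        return (bool) preg_match('/^(?:data-|ng-|on)(?=\S)/i', $name);
    }

    public function filterAttributes(array $attributes){
        return array_filter(
            $attributes,
            [$this, 'isPassThroughAttribute'],
            ARRAY_FILTER_USE_KEY
        );
    }
}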

Oh, and in case yer wondering... yes I did write the tests before I wrote any of my own code. And I found a coupla bugs with my planned solution (and assumptions about the requirement) in doing so. Always TDD. Always.

Anyway, all of this is getting in the way of my Saturday.

Righto.

--
Adam

Thursday 6 August 2015

CFML: code challenge from the CFML Slack channel

G'day:
I wasn't gonna write anything today, but then Jessica on Slack (to use her full name) posted a code puzzle on the CFML Slack channel, which I caught whilst I was sardined in a train to Ilford en route home. I nutted out a coupla solutions in my head instead of reading my book, and I saw this as a good opportunity (read: "excuse") to pop down to the local once I got home and squared away a coupla things, and key the code in and see if it worked. I was moderately pleased with the results, and I think I've solved it in an interesting way, so am gonna reproduce here.

Jessica's problem was thus:
so, i am attempting to write a function that lets you set variables specifically in a complex structure [...]
cached:{
    foo: {
        bar: "hi"
    }
}
setProperty("foo.bar", "chicken");
writeDump(cached); // should == cached.foo.bar = chicken

On the Slack channel there was talk of loops and recursion and that sort of thing, which all sounded fine (other people came up with answers, but I purposely did not look at them lest they influenced my own efforts). The more I work with CFML and its iteration methods (map(), reduce(), etc), the more I think actually having to loop over something seems a bit primitive, and non-descriptive. I looked at this for a few minutes... [furrowed my brow]... and thought "you could reduce that dotted path to a reference to the substruct I reckon". There were a few challenges there - if CFML had proper references it'd be easier - but I got an idea of the code in my head, and it seemed nice and easy.

Whilst waiting to Skype with my boy I wrote my tests:

Tuesday 23 December 2014

JavaScript: Jasmine for unit testing

G'day:
At work, I've been tasked with getting the team up to speed with TDD whilst we redevelop our website in PHP. I knocked together a presentation on the subject a coupla months ago, but before I had a chance to present it, I got shifted about in the internal dept structure for a month or so, and it kinda got temporarily shelved. I posted it online: "TDD presentation". I'm back on the PHP Team now, and need to update said presentation to be more work-requirement-specific, as well as cover unit testing our JavaScript. This has been on our agenda for a coupla years, but was never allowed to get any traction by the decision makers. Decision-making has improved now, so we're all go.

I have heard about Jasmine, and like the look of it, but have never actually downloaded / installed / ran it. I'm gonna do that today.

(Oh, blogging my work is not something I do... I'm actually off on sick leave at the moment - which I feel slightly guilty about - but I need to get this stuff done, so gonna do it today whilst I am unlikely to get interruptions. I figured as I'm doing it on my own time, I get to blog about it too ;-)

I am writing about this as I do it.

Jasmine


First up: Jasmine. This is what Wikipedia has to say about Jasmine:

Jasmine is an open source testing framework for JavaScript. It aims to run on any JavaScript-enabled platform, to not intrude on the application nor the IDE, and to have easy-to-read syntax. It is heavily influenced by other unit testing frameworks, such as ScrewUnit, JSSpec, JSpec, and RSpec.
And from its own website:

Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks.

And a code sample from the same page:

describe("A suite", function() {
  it("contains spec with an expectation", function() {
    expect(true).toBe(true);
  });
});

If you're familiar with TestBox (and if you're a CFML dev, you bloody should be!), then this will look comfortingly familiar. Indeed that code would run on TestBox. I know a bit about TestBox, so this is pleasing: I have a head start!

Download & Install

I'm gonna use 2.1.3, which is - at time of writing - the latest version of Jasmine. The download page is here: jasmine 2.1.3. I've D/Led that and unzipped it into a public directory.

Running

This is too easy... it ships with a file SpecRunner.html, and browsing to that runs the tests. Here are the samples:



Nice!

Example Code

Looking at the code within SpecRunner.html, we see this:

Friday 31 October 2014

TDD presentation

G'day:
A coupla weeks back I was tasked with giving a presentation to the new PHP troops: an introduction to TDD. I'm not on the PHP team now, but they might need to be doing some TDD shortly, so I figured I'd at least expose them to the presentation slides if not actually take them through it. And I figured I might as well put it up for everyone to have a look at, in case it's of any use to anyone.

It's very bare bones, and I dunno how useful it'll be if I'm not there discussing each slide, but... oh well. It's here: "TDD".

If nothing else, it might solicit some questions from people, which will be a good thing.

Cheers.

--
Adam

Saturday 11 October 2014

The received wisdom of TDD [etc]: Sean's feedback

G'day:
During the week I solicited feedback on my assessment of "The received wisdom of TDD and private methods". I got a small amount of good, useful feedback, but not as much as I was hoping for.

However Sean came to the fore with a very long response, and I figure it's worth posting here so other people spot it and read it.


'ere 'tis, unabridged:

There are broadly two schools of thought on the issue of "private" methods and TDD:

1. private methods are purely an implementation detail that arise as part of the "refactor" portion of the cycle - they're completely irrelevant to the tests and they never need testing (because they only happen as part of refactoring other methods when your tests are already passing).

2. encapsulation is not particularly helpful and it's fine to just make things public if you want to add tests for new behavior within previously private methods.

The former is the classical position: classes are a black box except for their public API, and it's that public API that you test-drive.

The latter is an increasingly popular position that has gained traction as people rethink OOP, start to use languages that don't have "private", or start working in a more FP style. Python doesn't really have private methods (sure, you can use a double underscore prefix to "hide" a function but it's still accessible via the munged name which is '_TheClass__theFunction' for '__theFunction' inside 'TheClass'). Groovy has a 'private' keyword (for compatibility with Java) but completely ignores it. Both languages operate on trust and assume developers aren't idiots and aren't malicious. In FP, there's a tendency toward making everything public because there are fewer side-effects and some helper function you've created to help implement an API function might be useful to users of your code - and it's safe when it has no side-effects!

When I started writing Clojure, coming from a background of C++, Java, and CFML, I was quite meticulous about private vs public... and in Clojure you can still easily access a "private" function by using its fully-qualified name, e.g., `#'some.namespace/private-function` rather than just `private-function` or `some.namespace/private-function`. Using `#'` bypasses the access check. And the idiom in Clojure is generally to just make everything public anyway, possibly dividing code into a "public API" namespace and one or more "implementation" namespaces. The latter contain public functions that are only intended to be used by the former - but, again, the culture of trust means that users _can_ call the implementation functions if they want, on the understanding that an implementation namespace might change (and is likely to be undocumented).

My current position tends to be that if I'm TDD-ing code and want to refactor a function, I'll usually create a new test for the specifics of the helper I want to introduce, and then refactor into the helper to make all the tests pass (the original tests for the existing public function and the new test(s) for the helper function). Only if there's a specific reason for a helper to be private would I go that route (for example, it isn't a "complete" function on its own, or it manages side-effects that I don't want messed with outside of the calling function, etc). And, to be honest, in those cases, I'd probably just make it a local function inside the original calling function if it was that critical to hide it.

Google for `encapsulation harmful` and you'll see there's quite a body of opinion that "private by default" - long held to be a worthy goal in OOP - is an impediment to good software design these days (getters and setters considered harmful is another opinion you'll find out there).

That was longer than I intended!

Yeah Sean but it was bloody good. It all makes sense, and also goes a way to make me think I'm not a lunatic (at least not in this specific context).

And I have a bunch of reading to do on this whole "encapsulation harmful" idea. I'd heard it mentioned, screwed my nose up a bit, but didn't follow up. Now I will. Well: when I have a moment.

Anyway, there you go. Cheers for the effort put in to writing this, Sean. I owe you a beer or two.

Righto.

--
Adam

Thursday 9 October 2014

Proposed TDD logic flow

G'day:
I've been really busy this week, and haven't been able to discover much interesting about either PHP or CFML or anything, hence being quite quiet. I'll admit this is very much a filler article, and a bit of a cheeky nod to something Andy said the other day:



Earlier I asked for people's opinions regarding TDD vs private methods: "The received wisdom of TDD and private methods".

I didn't get much feedback [scowl], but thanks to Dom and Gerry (there's a cat 'n' mouse joke in there somewhere) for offering up some thoughts.

I needed to provide a workflow for the team, and I thought I'd stick it up here as well, as a bit of closure on the previous article. And to give Andy a picture to look at.

What do you think of this approach (other than "unreadable at that size". Click here):


Forget the first two steps ("Assign Ticket", etc), as that's just our Jira workflow and a bit of context for our peeps; it's the rest of it after that I'm interested in. Also the associated process of how to maintain private methods whilst still adhering to TDD.

I think there's a reasonable mix of pragmatism and dogmatism in that?

Thoughts?

--
Adam

Tuesday 7 October 2014

The received wisdom of TDD and private methods

G'day:
As you might know, I've recently taken on a different role as a PHP developer. My employer are shifting our code base from CFML to PHP for various reasons ("So long, and thanks for all the CF"). One facet of this is we're moving off a venerable, well-established CFML code base to a new code base in the infancy of its existence (some work on it had been done before we picked up the project). In a different language.

And we've gone from having about 4000 unit tests to zero.



A coupla us are having a quick exploratory look at PHPUnit to see what it can do, and whether there are any "gotchas" we need to be aware of when testing in PHP. So far the findings seem to be that unit testing in CFML via MXUnit and TestBox - especially in conjunction with MockBox - is a bit easier than the hoops we need to jump through in PHP to achieve the same ends. This is mostly down to CFML's objects being far more dynamic than PHP's, so injecting testing helper methods and exposing non-public methods for testing is much easier.
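By way of example, here's the sort of hoop in question (a minimal sketch, not our actual test code: the class and method names are made up). To call a private method from a PHPUnit test one generally has to go via reflection, whereas MockBox will just expose the method for you:

    // minimal sketch only: SomeClass and somePrivateMethod are made-up names
    public function testSomePrivateMethod() {
        $object = new SomeClass();

        $method = new \ReflectionMethod('SomeClass', 'somePrivateMethod');
        $method->setAccessible(true); // bypass the visibility check

        $result = $method->invoke($object, 'someArgument');

        $this->assertSame('expectedResult', $result);
    }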

The challenges that PHP has thrown at us have caused us to revisit our TDD and unit testing strategy. Not necessarily to revise it, but to revisit it and see if it does need revision.

Our existing policy is that all code is written via a TDD & "Clean Code" approach:
  1. the need for a piece of functionality is identified;
  2. a failing test is written to test a facet of the functionality;
  3. code is written to pass the test;
  4. repeat from 2 until the functionality is complete;
  5. refactor if necessary.

This is applied to new functionality as well as to maintenance of existing functionality. The TDD side of things drives the code design, and also demonstrates that downstream changes don't have adverse effects on earlier requirements. The usual sort of thing.

On new work, this will mean creating a new public method (and the class it needs to go in, as required), and implementing the requirement. So all the tests are on that public method. When we refactor the code, those tests all still work, which is fine.

As a result of the refactoring we might end up with some of the code moving out into public methods of other classes, or - just as often - into private methods of the same class.

The newly refactored public methods go through the same TDD approach as the initial one, although this is quite often just a re-homing of the tests which were previously testing the unfactored functionality from the original public method. And the original public method's tests are selectively updated to remove any tests which are now the domain of the new methods, and mocks are used in lieu of calling these factored-out methods in the remaining tests of the original function.

And traditionally we have done exactly the same thing with the refactoring that was simply moved into private methods.

Perhaps I can demonstrate this with some pseudocode. At the end of our first TDD round, and before refactoring, we have this:

Thursday 10 July 2014

Regex help please

G'day:
I'm hoping Peter Boughton or Ben Nadel might see this. Or someone else I'm unaware of who is good @ regular expression patterns.

Here's the challenge...



Given this string:

Lorem ipsum dolor sit

I want to extract the leading sub-string which is:
  • no more than n characters long;
  • breaks at the previous whole word, rather than in the middle of a word;
  • if no complete single word matches, then matches at least the first word, even if the length of the sub-string is greater than n.

I've come up with this:

// trimToWord.cfm
string function trimToWord(required string string, required numeric index){
    return reReplace(string, "^((?:.{1,#index#}(?=\s|$)\b)|(?:.+?\b)).*", "\1", "ONE");
}

It works, but that regex is a bit hoary.

Here's a visual representation of it (courtesy of regexper.com), by way of explanation:



Anyone fancy improving it for me?

Here's some unit tests to run your suggestions through:

Wednesday 9 July 2014

Some more TestBox testing, case/when for CFML and some dodgy code

G'day:
This is an odd one. I was looking at some Ruby code the other day... well, it was CoffeeScript, but one of the bits influenced by Ruby, and I was reminded that languages like Ruby and various SQL flavours have - in addition to switch/case constructs - a case/when construct too. And in Ruby's case it's in the form of an expression. This is pretty cool as one can do this:

myVar = case
    when colour == "blue" then
        "it's blue"
    when number == 1 then
        "it's one"
    else
        "shrug"
end

And depending on the values of colour or number, myVar will be assigned accordingly. I like this. And think it would be good for CFML. So was gonna raise an E/R for it.

But then I wondered... "Cameron, you could probably implement this using 'clever' (for me) use of function expressions, and somehow recursive calls to themselves to... um... well I dunno, but there's a challenge. Do it".

So I set out to write a case/when/then/else/end implementation... as a single UDF. The syntax would be thus:

// example.cfm
param name="URL.number" default="";
param name="URL.colour" default="";

include "case.cfm"

result =
    case()
        .when(URL.number=="tahi")
            .then(function(){return "one"})
        .when(function(){return URL.colour=="whero"})
            .then(function(){return "red"})
        .else(function(){return "I dunno what to say"})
    .end()

echo(result)

This is obviously not as elegant as the Ruby code, but I can only play the hand I am dealt, so it needs to be in familiar CFML syntax.

Basically the construct is this:

case().when(cond).then(value).when(cond).then(value).else(value).end()

Where the condition can be either a boolean value or a function which returns one, and the value is represented as a function (so it's only actually called if it needs to be). And then when()/then() calls can be chained as much as one likes, with only the then() value for the first preceding when() condition that is true being processed. Clear? You probably already understood how the construct worked before I tried to explain it. Sorry.

Anyway, doing the design for this was greatly helped by using the BDD-flavoured unit tests that TestBox provides. I could just write out my rules, and then infill them with tests after that.

So I started with this lot (below). Just a note: this code is specifically aimed at Railo, because a few things I needed to do simply weren't possible with ColdFusion.

// TestCase.cfc
component extends="testbox.system.BaseSpec" {

    function run(){
        describe("Tests for case()", function(){
            describe("Tests for case() function", function(){
                it("compiles when called", function(){})
                it("returns when() function", function(){})
            });
            describe("Tests for when() function", function(){
                it("is a function", function(){})
                it("requires a condition argument", function(){})
                it("accepts a condition argument which is a function", function(){})
                it("accepts a condition argument which is a boolean", function(){})
                it("rejects a condition argument is neither a function nor a boolean", function(){})
                it("returns a struct containing a then() function", function(){})
                it("can be chained", function(){
                })
                it("correctly handles a function returning true as a condition", function(){})
                it("correctly handles a function returning false as a condition", function(){})
                it("correctly handles a boolean true as a condition", function(){})
                it("correctly handles a boolean false as a condition", function(){})
            })
            describe("Tests for then() function", function(){
                it("is a function", function(){})
                it("requires a value argument", function(){})
                it("requires a value argument which is a function", function(){})
                it("returns a struct containing when(), else() and end() functions", function(){})
                it("can be chained", function(){})
                it("executes the value", function(){})
                it("doesn't execute a subsequent value when the condition is already true", function(){})
                it("doesn't execute a false condition", function(){})
            })
            describe("Tests for else() function", function(){
                it("is a function", function(){})
                it("requires a value argument", function(){})
                it("requires a value argument which is a function", function(){})
                it("returns a struct containing an end() function", function(){})
                it("cannot be chained", function(){})
                it("executes when the condition is not already true", function(){})
                it("doesn't execute when the condition is already true", function(){})
            })
            describe("Tests for end() function", function(){
                it("is a function", function(){})
                it("returns the result", function(){})
                it("returns the result of an earlier true condition followed by false conditions", function(){})
                it("returns the result of the first true condition", function(){})
            })
        })
    }
}

TestBox is cool in that I can group the sets of tests with nested describe() calls. This doesn't impact how the tests are run - well, not in any way that matters to my intent here, anyhow - it just makes for clearer visual output, and also helps me scan down to make sure I've covered all the necessary bases for the intended functionality.

I then chipped away at the functionality of each individual sub function, making sure they all worked as I went. I ended up with this test code:

Sunday 6 July 2014

Simplifying another CFLib function, and some more unit test examples

G'day:
Last week (I think) whilst I was messing around with some code, and TDDing it as I went, I posted on Twitter about my pleasure at testing with TestBox which yielded a request from Dan Fredericks:
The code I was working on will make it onto the blog at some point, but in the interim I was simplifying a CFLib UDF yesterday, and needed some regression tests for my fix, so took the TDD approach. And here's the business.

Monday 30 December 2013

Unit Testing / TDD: not testing stuff

G'day:
It's about bloody time I got back to this series on TDD and unit testing. I've already got the next few articles semi-planned in my head: topic, if not content. I have to say I am not in the most inspired writing mood today, and the words aren't exactly flowing from fingers through keyboard to screen - this is the third attempt at this paragraph - but we'll see how I go.

To find inspiration and to free up the fingers a bit, I'm at my local pub in Merlin Park in Galway, which - today - has shown its true colours as a football pub (to you Americans, that's "soccer". To me, it's "shite"). I've tried to like football, but it all just seems too effeminate to me. The place is chock-full of colours-wearing lads yelling at the screen. Either for or against one of Chelsea or Liverpool (the former lead 2-1 at half time). It's not conducive to writing, but the Guinness next to me will be.

I'll not list the previous articles in the series as it will take up too much room. They're all tagged with "TDD" and "unit testing" though.

On with the show...

Saturday 21 December 2013

So what did I do today? (Nothing interesting. Here are the details)


G'day:
I faffed around on the computer - with ColdFusion and Railo and TestBox - all day today, and have concluded I have nothing interesting to show for it. Still... this is a log of what I've been up to with CFML, so you're gonna hear all about it anyhow. And what's "better"... I'm gonna spin it out to be two articles. I am the Peter Jackson of CFML blogs!

OK, so I got wind of something to investigate the other day, and to do the metrics, I needed to time some stuff. Normally I'd just use a getTickCount() before and after the code, but I thought I might need something more comprehensive this time, so I figured a wee stopwatch UDF was in order. I've also just installed TestBox and am in the process of seeing if I can migrate away from MXUnit, so decided to do another TDD exercise with it. Note: there is no great exposition in this article, or really much new about TDD etc. It's simply what I did today.

Note: I abandoned TestBox's MXUnit compatibility mode today, because of a show-stopper bug: "ADDASSERTDECORATOR not implemented". This is the function that allows the importing of custom assertions, which I am using here. On the basis of that, I decided to just go with TestBox syntax instead.

Let's have a quick look at the function, and the tests.

//makeStopwatch.cfm
struct function makeStopwatch(){
    var timeline        = [];

    var lap = function(string message=""){
        var ticksNow    = getTickCount();
        var lapCount    = arrayLen(timeline);
        var lap            = {
            currentClock    = ticksNow,
            lapDuration        = lapCount > 0 ? ticksNow - timeLine[lapCount].currentClock : 0,
            totalDuration    = lapCount > 0 ? ticksNow - timeLine[1].currentClock : 0,
            message            = message
        };
        arrayAppend(timeline, lap);
        return lap;
    };

    return {
        start        = function(string message="start"){
            return lap(message);
        },
        lap            = function(string message="lap"){
            return lap(message);
        },
        stop        = function(string message="stop"){
            return lap(message);
        },
        getTimeline    = function(){
            return timeLine;
        }
    };
};

Not much interesting here. I enjoyed finding another reason / excuse to use function expressions and a wee bit of closure around the timeline which "sticks" with the start() / lap() / stop() / getTimeline() functions.

What this function does is to log a struct at start, each lap, and another on stop. Here's it in action:

// useStopwatch.cfm
include "makeStopwatch.cfm";
stopwatch = makeStopwatch();

stopwatch.start("Begin timing");
sleep(500);
stopwatch.lap();
sleep(1500);
secondLap = stopwatch.lap("after another 1500ms");
writeDump(var=secondLap, label="secondLap");
sleep(2000);
stopwatch.start("Stop timing");
writeDump(var=stopWatch.getTimeline());

Output:

secondLap - struct
    CURRENTCLOCK:  1387658080067
    LAPDURATION:   1500
    MESSAGE:       after another 1500ms
    TOTALDURATION: 2000

array
    1 - struct
        CURRENTCLOCK:  1387658078067
        LAPDURATION:   0
        MESSAGE:       Begin timing
        TOTALDURATION: 0
    2 - struct
        CURRENTCLOCK:  1387658078567
        LAPDURATION:   500
        MESSAGE:       lap
        TOTALDURATION: 500
    3 - struct
        CURRENTCLOCK:  1387658080067
        LAPDURATION:   1500
        MESSAGE:       after another 1500ms
        TOTALDURATION: 2000
    4 - struct
        CURRENTCLOCK:  1387658082117
        LAPDURATION:   2050
        MESSAGE:       Stop timing
        TOTALDURATION: 4050

This is the bumpf I usually need when I'm timing stuff. Well: some or all of it. It'll be quite handy, I reckon.

One interesting facet here is that initially I thought I'd need three separate functions for start() / lap() / stop(). I was doing TDD (seriously, I knew you'd wonder, so everything I write on this blog uses full TDD now), and having knocked out the first few tests to verify the method signatures of the returned functions, it occurred to me that stop() was actually redundant. It doesn't do anything that lap() wouldn't already do. I mean all this "stopwatch" does is take time marks... there's no clock starting or stopping really (getTickCount() keeps on ticking, we simply start paying attention and then stop paying attention at some point).

So, anyway, I decided that before I started messing around with a redesign, I'd get the test coverage done, get it working, and then refactor. This is something one of my colleagues (Brian, I mean you) has been drumming into me recently: don't get halfway through something, decide to do it differently, start again, refactor, waste time, and not have anything to show for it if I get interrupted (or, in a work situation, we get to the end of the sprint and need to release). So I banged out the rest of the tests, got everything working, and looked at my code some more.

Here are the tests:

Friday 20 December 2013

Unit Testing / TDD - switching off MXUnit, switching on TestBox

G'day:
This article is more an infrastructure discussion than an examination of more actual testing stuff. The ever-growing *Box empire has recently borged into yet another part of the CFML community: testing. They've released another box... TestBox. TestBox is interesting to me as it has a different approach to testing than MXUnit has... rather than xUnit-style assertion-based testing, it favours a BDD approach. I've not done a lick of BDD, but people keep banging on about it, so I shall be looking at it soon. -ish. First I need to switch to TestBox.

One appealing thing I had heard about TestBox is that it's backwards compatible with MXUnit, so this should mean that I can just do the switch and continue with my current approach to testing, and ease my way into BDD as I learn more about it. So the first thing I decided to examine is how well this stands up, and how many changes I need to make to my existing tests to get them to run. Realistically, nothing is ever completely backwards compatible... not even, say, between different versions of the same software (ColdFusion 9 to ColdFusion 10), let alone a second system emulating another system (eg: Railo and ColdFusion). This is fine. I don't expect this migration to be seamless.

Here's what I worked through this morning to get up and running (spoilers: kinda running) on TestBox.

I preface this with the fact that I have always found Ortus's documentation to be a bit impenetrable (there's too much of it, it waffles too much), so I was hesitant about how long this would all take.

Locating, downloading and installing

Finding it

I googled "testbox", and the first link was the ColdBox Platform Wiki - TestBox. This is promising. Within a paragraph (and a to-the-point paragraph which just intros the product, so maybe the docs have got some improved focus: cool) there were download links. TestBox requires ColdFusion 10 / Railo 4.1, btw. I presume it uses closure or something? I'm not sure. But that's cool, I use CF10 and Railo [latest] for my work for this blog. It does preclude me from really giving it a test our with our 3000 unit tests at work though (which is a shame), because we're still on CF9 and will be for the foreseeable future.

Installing it

The installation instructions threw me a bit. The default suggestion is to put the testbox dir into the web root, but that's poor advice: only files specifically intended to be web browseable should ever be in your web root. Fortunately they also mention one can stick 'em anywhere, and map them in with a /testbox mapping. I wish this was their default suggestion. In fact I wish it was their only suggestion. They should not encourage poor practice.

There's a caveat with this though (and this is where I had problems): TestBox does have some web assets which need to be web browseable, so it does actually need a web mapping, not just a CF mapping. They do caveat this further down the page.

The first pitfall I had was which directory they're actually talking about. The zipfile has this baseline structure:

/testbox_1.0.0/
    testbox-1.0.0.00062-201312171237
    apidocs/
    browser/
    runner/
    runner-template/
    samples/
    testbox/
    license.txt
    mockbox.txt
    testbox.txt

So I homed this lot in my CF root (not web root, CF root) as /frameworks/testbox/1.0.0/, and added a /testbox CF mapping to that location.

WARNING (if you're following along and doing this at the same time): this is not the correct thing to do. Keep reading...

I then had a look around for which directory I needed to add a web server virtual directory for, and found web-servable assets in the following locations:

/apidocs/
/browser/
/runner/
/samples/
/testbox/system/testing/reports/assets/

(I searched for images, JS, CSS, HTML and index.cfm files; not perfect, but will give me an idea).

OK, so I figured the apidocs and samples are separate from the TestBox app, but that still leaves three disconnected (and laterally displaced) directories which need to be web browseable. This ain't great. So basically it looks like I need to make the entire /testbox dir web browseable. That's a bit shit, and a bit how we might have set up our CFML-driven websites... ten years ago. Oh well.

Configuring Tomcat

Here's a challenge (cue: Sean to get grumpy). I have no idea how to set up a virtual directory on Tomcat's built-in web server. Fortunately that's what Google is for, so I googled "tomcat web server virtual directories", and the very first link is a ColdFusion-10-specific document: "Getting Started with Tomcat in ColdFusion 10". I shuddered slightly that this is just in the ColdFusion Blog, rather than in the CF docs where it belongs, but it'll do. Fortunately the info in there is accurate, which is good.

Basically there's a file server.xml located at <ColdFusion_Home>/runtime/conf/server.xml, where <ColdFusion_Home> is the cfusion dir in your ColdFusion install directory. For me the conf dir is at: C:\apps\adobe\ColdFusion\10\cfusion\runtime\conf.

In there, there's an XML node like this:

<Context
    path    = "/"
    docBase    = "<cf_home>\wwwroot"
    WorkDir    = "<cf_home>\runtime\conf\Catalina\localhost\tmp"
>
</Context>

It's commented out by default. All the instructions one needs are in the file itself, but basically it's uncomment it, put actual paths in, and add an aliases attribute:

<Context
    path    = "/"
    docBase    = "C:\apps\adobe\ColdFusion\10\cfusion\wwwroot"
    WorkDir    = "C:\apps\adobe\ColdFusion\10\cfusion\runtime\conf\Catalina\localhost\tmp"
    aliases    = "/testbox=C:\webroots\frameworks\testbox\1.0.0"
>
</Context>

I restarted CF and browsed to http://localhost:8500/testbox, and I got a directory listing of the files in C:\webroots\frameworks\testbox\1.0.0, so that worked. Good to know. I'll now forget about server.xml and aliases and stuff as I won't need to do it again for another six months. Shrug.

ColdFusion config

I put a mapping to the same place in my test app's Application.cfc:

// Application.cfc
component {

    this.mappings            = {
        "/cflib"    = getDirectoryFromPath(getCurrentTemplatePath()),
        "/testbox"    = expandPath("/testbox") // CF will use the virtual directory to resolve that. This is slightly cheating, but hey
    };

}

Friday 6 December 2013

Unit Testing / TDD - refactoring existing code

G'day:
I've just had a new submission in the CFLib queue, and as it's an easy one to test and release, I'm gonna jump on it and get it out there today. However I want to test it first, plus I want to refactor it slightly, so I'm gonna use TDD to do so. And document what I'm doing as I go.

This continues a series on unit testing and TDD that I'm doing, the rest of which are tagged with either "Unit Testing" or "TDD" or both, so you can look them up via those links (there's too many to list now, so I won't bother).

The code for this article will be here: https://github.com/daccfml/scratch/tree/master/cflib/dayOfWeekAsInt (only the baseline files are there at the moment, as I haven't written the code yet ;-).


Sunday 1 December 2013

Unit Testing / TDD - passing data to on() and trigger()

G'day:
Yesterday I plugged through more test/code and got the code to the point that it would bind and trigger event handlers A-OK. However we still need to update the code so that we can pass data at both bind-time and trigger-time to the handler's execution.

Before I continue, here's the index to the articles in this series I've already written:

Friday 29 November 2013

Unit Testing / TDD - continuing the tests for on() and trigger()

G'day:
I was getting into a rhythm with my TDD cycle this afternoon... test... refine... test... refine... here's what I was doing. Well: after the obligatory recap links (/SEO bait):

Do you know what? I am actually enjoying writing this code. And I'm now past the bits I already knew I had to write (and had the code pretty much already written in my head), so I'm doing TDD for real now. On with the show...

Unit Testing / TDD - getting stuck on how / what to test (part 2/2)

G'day:
Today I'm resuming where I left off yesterday, but first the obligatory links to the rest of the series so far:
Yesterday I questioned how thoroughly I should test return values from functions. Today I am completely flummoxed as to how I test something at all. Spoiler warning: I never worked it out.