Sunday 15 May 2022

CFML: fixing a coupla bugs in my recent work on TinyTestFramework

G'day:

Last week I did some more work on my TinyTestFramework.

On Saturday, I found a bug in each of those pieces of work. Same bug, basically, just surfacing in two different ways. Here's an example:

describe("Demonstrating afterEach bug", () => {
    afterEach(() => {
        writeOutput("<br><br>This should be displayed<br><br>")
    })

    describe("Control", () => {
        it("is a passing test, to demonstrate expected behaviour", () => {
            expect(true).toBeTrue()
        })        
    })

    describe("Demonstrating bug", () => {
        it("should run even if the test fails", () => {
            expect(true).toBeFalse()
        })
    })
})

Output:

Demonstrating afterEach bug
Control
It is a passing test, to demonstrate expected behaviour:

This should be displayed

OK
Demonstrating bug
It should run even if the test fails: Failed
Results: [Pass: 1] [Fail: 1] [Error: 0] [Total: 2]

Note how the second This should be displayed is not being displayed. Why's this? It's because, internally, a failing test throws an exception:

toBeTrue = () => tinyTest.matchers.toBe(true, actual),

// ...

toBe = (expected, actual) => {
    if (actual.equals(expected)) {
        return true
    }
    throw(type="TinyTest.TestFailedException")
},
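For context, the matchers get at the actual value because expect captures it in a closure and returns the matchers bound to it. Something along the lines of this sketch (this is my guess at the wiring, inferred from that toBeTrue line, rather than the framework's exact code):

// hypothetical sketch: expect closes over actual, and each matcher uses it
expect = (any actual) => {
    return {
        toBeTrue = () => tinyTest.matchers.toBe(true, actual),
        toBeFalse = () => tinyTest.matchers.toBe(false, actual)
    }
}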

And in the implementation of the it function, the exception is caught before the code handling afterEach has a chance to run:

it = (string label, function implementation) => {
    tinyTest.inDiv(() => {
        try {
            writeOutput("It #label#: ")

            tinyTest.contexts
                .filter((context) => context.keyExists("beforeEachHandler"))
                .each((context) => context.beforeEachHandler())

            decoratedImplementation = tinyTest.contexts
                .filter((context) => context.keyExists("aroundEachHandler"))
                .reduce((reversed, context) => reversed.prepend(context), [])
                .reduce((decorated, context) => () => context.aroundEachHandler(decorated), implementation)
            decoratedImplementation()

            tinyTest.contexts
                .filter((context) => context.keyExists("afterEachHandler"))
                .reduce((reversedContexts, context) => reversedContexts.prepend(context), [])
                .each((context) => context.afterEachHandler())

            tinyTest.handlePass()
        } catch (TinyTest e) {
            tinyTest.handleFail()
        } catch (any e) {
            tinyTest.handleError(e)
        }
    })
},

To explain:

  • implementation is the callback from the it in the test suite. The actual test.
  • All that filter / reduce stuff is just handling aroundEach: don't worry about that.
  • After the decoration we run the test.
  • If the test fails, the exception is caught by the catch block at the bottom.
  • Meaning the afterEach handling never gets run.
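For what it's worth, tinyTest.contexts is a stack of the currently-open describe blocks, which is how the handlers for every enclosing level are available to it. My mental model of how describe maintains that stack is something like this sketch (an assumption on my part, not the framework's actual code):

describe = (string label, function testGroup) => {
    tinyTest.inDiv(() => {
        writeOutput(label)
        tinyTest.contexts.append({})    // open a context to hold this level's handlers
        try {
            testGroup()
        } finally {
            tinyTest.contexts.deleteAt(tinyTest.contexts.len())    // and pop it on the way out
        }
    })
}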

This seems fairly easy to sort out:

try {
    decoratedImplementation()
} finally {
    tinyTest.contexts
        .filter((context) => context.keyExists("afterEachHandler"))
        .reduce((reversedContexts, context) => reversedContexts.prepend(context), [])
        .each((context) => context.afterEachHandler())
}
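If it's not obvious why this is safe: a finally block always runs, and an exception that isn't caught just carries on propagating once it has. A quick standalone demo of the semantics being relied on:

try {
    try {
        throw(type="TinyTest.TestFailedException")
    } finally {
        writeOutput("the afterEach handling would happen here, pass or fail<br>")
    }
} catch (TinyTest e) {
    writeOutput("and the failure still gets caught and handled afterwards")
}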

Now even if the test fails, the afterEach handlers will still be run. And as the finally block doesn't swallow the exception, the failure still gets caught and reported exactly as before. Rerunning the tests demonstrates the bug is fixed:

Demonstrating afterEach bug
Control
It is a passing test, to demonstrate expected behaviour:

This should be displayed

OK
Demonstrating bug
It should run even if the test fails:

This should be displayed

Failed
Results: [Pass: 1] [Fail: 1] [Error: 0] [Total: 2]

I also ran the rest of the test suite, and it all still passes, so I'm pretty confident my fix has had no repercussions.


I've got the same problem with aroundEach: the part of the handler after the call to the test was not being run. It's the same underlying cause as with afterEach: a failing or erroring test throws an exception, and that exception is caught before the rest of the aroundEach handler gets a chance to run. This seems slightly trickier to handle though, as the code that calls the test is within the callback the tester provides:

aroundEach((test) => {
    // top bit before calling the test. No problem with this

    test()

    // bottom bit. This is not getting run after a failed / erroring test
})

I can't expect the dev writing the tests to put test-failure handling in there themselves: I need to deal with it within the framework.

How to do this flummoxed me for a bit, so in the meantime I wrote some tests to give my brain some time to think about things:

describe("Demonstrating aroundEach bug", () => {
    aroundEach((test) => {
        writeOutput("<br><br>Before the call to the test: this should be displayed<br>")
        test()
        writeOutput("<br>After the call to the test:This should be displayed<br><br>")
    })

    describe("Control", () => {
        it("is a passing test, to demonstrate expected behaviour", () => {
            expect(true).toBeTrue()
        })        
    })

    describe("Demonstrating bug", () => {
        it("should display the 'bottom' message even if the test fails", () => {
            expect(true).toBeFalse()
        })
    })
})

Results:

Demonstrating aroundEach bug
Control
It is a passing test, to demonstrate expected behaviour:

Before the call to the test: this should be displayed

After the call to the test: This should be displayed

OK
Demonstrating bug
It should display the 'bottom' message even if the test fails:

Before the call to the test: this should be displayed
Failed
Results: [Pass: 1] [Fail: 1] [Error: 0] [Total: 2]

See how the second test isn't outputting After the call to the test: This should be displayed.

As a reminder, here's where the it function is at now, with the afterEach fix in place:

it = (string label, function implementation) => {
    tinyTest.inDiv(() => {
        try {
            writeOutput("It #label#: ")

            tinyTest.contexts
                .filter((context) => context.keyExists("beforeEachHandler"))
                .each((context) => context.beforeEachHandler())

            decoratedImplementation = tinyTest.contexts
                .filter((context) => context.keyExists("aroundEachHandler"))
                .reduce((reversed, context) => reversed.prepend(context), [])
                .reduce((decorated, context) => () => context.aroundEachHandler(decorated), implementation)

            try {
                decoratedImplementation()
            } finally {
                tinyTest.contexts
                    .filter((context) => context.keyExists("afterEachHandler"))
                    .reduce((reversedContexts, context) => reversedContexts.prepend(context), [])
                    .each((context) => context.afterEachHandler())
            }

            tinyTest.handlePass()
        } catch (TinyTest e) {
            tinyTest.handleFail()
        } catch (any e) {
            tinyTest.handleError(e)
        }
    })
},

The culmination of that decoration code is how any aroundEach handlers are called around the test implementation. Somehow I need to do the equivalent of that try / finally in there. But it's not so straightforward, as I basically need to prevent that call to implementation from erroring-out until after we have bubbled back out of all the aroundEach handlers. Bear in mind there can be any number of aroundEach handlers to run: one for each level of describe in the tests:

describe("Test of a CFC", () => {

    aroundEach((test) => {
        // something before
        test()
        // something afterwards
    })

    describe("Test of a method", () => {

        aroundEach((test) => {
            // something before
            test()
            // something afterwards
        })

        describe("Test of a specific part of the method's behaviour", () => {

            aroundEach((test) => {
                // something before
                test()
                // something afterwards
            })
            
            it("will have all three of those `aroundEach` handlers run around it",  () => {
                // test stuff
            })
        })
    })
})
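As an aside, it might help to see what that double reduce actually builds. Here's a standalone sketch (the handlers and labels are made-up, just for illustration) showing how each handler ends up wrapped in the one from the describe above it:

handlers = [
    (test) => {writeOutput("outer before / "); test(); writeOutput(" / outer after")},
    (test) => {writeOutput("middle before / "); test(); writeOutput(" / middle after")},
    (test) => {writeOutput("inner before / "); test(); writeOutput(" / inner after")}
]
implementation = () => writeOutput("TEST")

decoratedImplementation = handlers
    .reduce((reversed, handler) => reversed.prepend(handler), [])    // reverse, so the outermost handler is wrapped last
    .reduce((decorated, handler) => () => handler(decorated), implementation)

decoratedImplementation()
// outer before / middle before / inner before / TEST / inner after / middle after / outer after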

OK, so I need to put a try / catch around the call to implementation so I can stop it erroring-out too soon. That's easy enough:

decoratedImplementation = tinyTest.contexts
    .filter((context) => context.keyExists("aroundEachHandler"))
    .reduce((reversed, context) => reversed.prepend(context), [])
    .reduce((decorated, context) => () => context.aroundEachHandler(decorated), () => {
        try {
            implementation()
        } catch (any e) {
            // ???
        }                            
    })
    
try {    
    decoratedImplementation()
    // ???
} finally {
    tinyTest.contexts
        .filter((context) => context.keyExists("afterEachHandler"))
        .reduce((reversedContexts, context) => reversedContexts.prepend(context), [])
        .each((context) => context.afterEachHandler())
}

But I still need to know about that exception after we finish calling the aroundEach handlers and the test implementation.

I'm not sure I like this implementation, but this is what I have done:

decoratedImplementation = tinyTest.contexts
    .filter((context) => context.keyExists("aroundEachHandler"))
    .reduce((reversed, context) => reversed.prepend(context), [])
    .reduce((decorated, context) => () => context.aroundEachHandler(decorated), () => {
        try {
            implementation()
            tinyTest.testResult = true
        } catch (any e) {
            tinyTest.testResult = e
        }                            
    })
    
try {    
    decoratedImplementation()
    if (!tinyTest.testResult.equals(true)) {
        throw(object=tinyTest.testResult)
    }
} finally {
    // the afterEach handling, same as before
}

I set a variable in the calling code, either flagging that the test worked or, if not, capturing how it failed (or errored). Then if it didn't pass, I rethrow the exception I originally caught.
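The throw(object=...) bit is what lets the failure carry on as if it had never been intercepted: it rethrows the caught exception object with its type intact, so the catch (TinyTest e) at the bottom of it still matches it. A quick demo (the interceptedException variable is just mine, for illustration):

try {
    try {
        throw(type="TinyTest.TestFailedException")
    } catch (any e) {
        interceptedException = e    // squirrel the exception away, as the framework does with tinyTest.testResult
    }
    throw(object=interceptedException)    // rethrow it later
} catch (TinyTest e) {
    writeOutput("original type preserved: #e.type#")
}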

This works, and both those tests I wrote above, and all the rest of the test suite still passes too. Bug fixed.
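For reference, here's the complete it function with both fixes in place. I've assembled this from the fragments above, so treat it as my reconstruction rather than a verbatim lift from the codebase:

it = (string label, function implementation) => {
    tinyTest.inDiv(() => {
        try {
            writeOutput("It #label#: ")

            tinyTest.contexts
                .filter((context) => context.keyExists("beforeEachHandler"))
                .each((context) => context.beforeEachHandler())

            // wrap the test in each aroundEach handler, capturing the outcome
            // so a failure can't bubble out mid-handler
            decoratedImplementation = tinyTest.contexts
                .filter((context) => context.keyExists("aroundEachHandler"))
                .reduce((reversed, context) => reversed.prepend(context), [])
                .reduce((decorated, context) => () => context.aroundEachHandler(decorated), () => {
                    try {
                        implementation()
                        tinyTest.testResult = true
                    } catch (any e) {
                        tinyTest.testResult = e
                    }
                })

            try {
                decoratedImplementation()
                // the aroundEach handlers have all completed now: surface any failure
                if (!tinyTest.testResult.equals(true)) {
                    throw(object=tinyTest.testResult)
                }
            } finally {
                // afterEach handlers run whether the test passed or not
                tinyTest.contexts
                    .filter((context) => context.keyExists("afterEachHandler"))
                    .reduce((reversedContexts, context) => reversedContexts.prepend(context), [])
                    .each((context) => context.afterEachHandler())
            }

            tinyTest.handlePass()
        } catch (TinyTest e) {
            tinyTest.handleFail()
        } catch (any e) {
            tinyTest.handleError(e)
        }
    })
},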

I'm still thinking about this though. I feel I've nailed the red/green part of the process, but I possibly still have some refactoring to do. Obvs I'm now safe to do that, because everything is tested. Well: except for any other bugs I haven't noticed yet :-)

Righto.

--
Adam