Having done the beforeEach implementation for my TinyTestFramework, I reckoned afterEach would be super easy: barely an inconvenience. And indeed it was. Took me about 15min, given most of the logic is the same as for beforeEach.
Here are the tests:
describe("Tests of afterEach", () => {
it("will not break if an afterEach is not specified for a given describe block", () => {
expect(true).toBeTrue()
})
describe("Baseline", () => {
result = []
afterEach(() => {
result.append("set in afterEach")
})
it("runs after a test (setup)", () => {
expect(result).toBe([])
})
it("runs after a test (test)", () => {
expect(result).toBe(["set in afterEach"])
})
})
describe("Works in a hierarchy (top)", () => {
result = []
afterEach(() => {
result.append("set in afterEach in outer describe")
})
describe("Works in a hierarchy (middle)", () => {
afterEach(() => {
result.append("set in afterEach in middle describe")
})
describe("Works in a hierarchy (inner)", () => {
afterEach(() => {
result.append("set in afterEach in inner describe")
})
it("runs all afterEach handlers, from innermost to outermost (setup)", () => {
expect(result).toBe([])
})
it("runs all afterEach handlers, from innermost to outermost (test)", () => {
expect(result).toBe([
"set in afterEach in inner describe",
"set in afterEach in middle describe",
"set in afterEach in outer describe"
])
})
})
})
})
describe("Tests with beforeEach as well", () => {
result = []
afterEach(() => {
result.append("set by afterEach")
})
beforeEach(() => {
result.append("set by beforeEach")
})
it("is the setup test", () => {
expect(true).toBeTrue()
})
it("tests that both beforeEach and afterEach got run", () => {
result.append("testing that the preceding setup test had its afterEach called")
expect(result).toBe([
"set by beforeEach", // setup test
"set by afterEach", // setup test
"set by beforeEach", // this test
"testing that the preceding setup test had its afterEach called"
])
})
})
})
These are more superficial than the beforeEach ones because most of the behaviour is already covered by those tests. I just test that it's called, that it works in a hierarchy (no reason why it won't, given the implementation requirements, but it's a belt-n-braces sort of test), and that it works with a beforeEach in play as well. One thing to note is that I need to run a stub/control/setup test before my test of afterEach, because obviously it runs after the test's code, so we can't test what it does with a single test. Hopefully you see what I mean there. That's the chief difference.
I have to admit I'm not sure where I'm going with this one yet. I dunno how to implement what I'm needing to do, but I'm gonna start with a test and see where I go from there.
Context: I've been messing around with this TinyTestFramework thing for a bit… it's intended to be a test framework one can run in trycf.com, so I need to squeeze it all into one include file, and at the same time make it not seem too rubbish in the coding dept. The current state of affairs is here: tinyTestFramework.cfm, and its tests: testTinyTestFramework.cfm. Runnable here: on trycf.com
The next thing that has piqued my interest for this is to add beforeEach and afterEach handlers in there too. This will be more of a challenge than the recent "add another matcher" carry on I've done.
First test:
describe("Tests of beforeEach", () => {
result = ""
beforeEach(() => {
result = "set in beforeEach handler"
})
it("was called before the first test in the set", () => {
expect(result).toBe("set in beforeEach handler")
})
})
Right and the first implementation doesn't need to be clever. Just make it pass:
That's fine but it's a bit daft. My next test needs to check that beforeEach is called before subsequent tests too. To test this, simply setting a string and checking it's set won't be any use: it'll still be set in the second test too. Well: either set or reset… no way to tell. So I'll make things more intelligent (just a bit):
describe("Tests of beforeEach", () => {
result = []
beforeEach(() => {
result.append("beforeEach")
})
it("was called before the first test in the set", () => {
result.append("first test")
expect(result).toBe([
"beforeEach",
"first test"
])
})
it("was called before the second test in the set", () => {
result.append("second test")
expect(result).toBe([
"beforeEach",
"first test",
"beforeEach",
"second test"
])
})
})
Now each time beforeEach is called it will cumulatively affect the result, so we can test that it's being called for each test. Which of course it is not, currently, so the second test fails.
Note: it's important to consider that in the real world having beforeEach cumulatively change data, and having the sequence the tests are being run be significant - eg: we need the first test to be run before the second test for either test to pass - is really bad form. beforeEach should be idempotent. But given it's what we're actually testing here, this is a reasonable way of testing its behaviour, I think.
Right so currently we are running the beforeEach callback straight away:
beforeEach = (callback) => {
callback()
}
It needs to be cleverer than that: the callback should only be called when the test is run, which happens inside the it function:
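Something along these lines does the trick. This is a sketch rather than the verbatim implementation - I'm assuming the framework keeps its state in that tinyTest struct, as per the rest of the article:

```cfml
// Sketch: stash the callback rather than calling it immediately; `it` calls
// it just before running each test's own code
tinyTest.beforeEach = (callback) => {
    tinyTest.beforeEachHandler = callback
}

tinyTest.it = (label, implementation) => {
    tinyTest.beforeEachHandler() // run the handler before the test's own code
    implementation()
    // ... pass/fail reporting elided ...
}
```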
That works. Although it's dangerously fragile, as that's gonna collapse in a heap if I don't have a beforeEach handler set. I've put this test before those other ones:
describe("Tests without beforeEach", () => {
it("was called before the first test in the set", () => {
expect(true).toBe(true)
})
})
And I get:
Tests of TinyTestFramework
Tests without beforeEach
It was called before the first test in the set: Error: [The function [beforeEachHandler] does not exist in the Struct, only the following functions are available: [append, clear, copy, count, delete, duplicate, each, every, filter, find, findKey, findValue, insert, isEmpty, keyArray, keyExists, keyList, keyTranslate, len, map, reduce, some, sort, toJson, update, valueArray].][]
Tests of beforeEach
It was called before the first test in the set: OK
It was called before the second test in the set: OK
I need a guard statement around the call to the beforeEach handler:
if (isCustomFunction(tinyTest.beforeEachHandler)) {
tinyTest.beforeEachHandler()
}
That fixed it.
Next I need to check that the beforeEach handler cascades into nested describe blocks. I've a strong feeling this will "just work":
describe("Tests of beforeEach", () => {
describe("Testing first level implementation", () => {
// (tests that were already in place now in here)
})
describe("Testing cascade from ancestor", () => {
result = []
beforeEach(() => {
result.append("beforeEach in ancestor")
})
describe("Child of parent", () => {
it("was called even though it is in an ancestor describe block", () => {
result.append("test in descendant")
expect(result).toBe([
"beforeEach in ancestor",
"test in descendant"
])
})
})
})
})
Note that I have shunted the first lot of tests into their own block now. Also: yeah, this already passes, but I think it's a case of coincidence rather than good design. I'll add another test to demonstrate this:
describe("Tests without beforeEach (bottom)", () => {
result = []
it("was called after all other tests", () => {
result.append("test after any beforeEach implementation")
expect(result).toBe([
"test after any beforeEach implementation"
])
})
})
This code is right at the bottom of the test suite. If I put a writeDump(result) in there, we'll see why:
implentation (sic) error
After I pressed send on this, I noticed the typo in the test and in the dump above. I fixed the test, but can't be arsed fixing the screen cap. Oops.
You might not have noticed, but I had not VARed that result variable: it's being used by all the tests. This was by design, so I could test for leakage, and here we have some: tinyTest.beforeEachHandler has been set in the previous describe block, and it's still set in the following one. We can't be having that: we need to contextualise the handlers so they're only in-context within their original describe blocks and their descendants.
I think all I need to do is to get rid of the handler at the end of the describe implementation:
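Something like this - sketched here rather than the exact code, with the reporting bits elided:

```cfml
// Sketch: describe clears the handler once its block has run, so it can't
// leak into subsequent sibling describe blocks
tinyTest.describe = (label, testGroup) => {
    // ... heading output elided ...
    testGroup()
    tinyTest.delete("beforeEachHandler") // tidy up after this block
}
```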
That really seemed easier than I expected it to be. I have a feeling this next step is gonna be trickier though: I need to be able to support multiple sequential handlers, like this:
describe("Multiple sequential handlers", () => {
beforeEach(() => {
result = []
result.append("beforeEach in outer")
})
describe("first descendant of ancestor", () => {
beforeEach(() => {
result.append("beforeEach in middle")
})
describe("inner descendant of ancestor", () => {
beforeEach(() => {
result.append("beforeEach in inner")
})
it("calls each beforeEach handler in the hierarchy, from outermost to innermost", () => {
result.append("test in innermost descendant")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"beforeEach in inner",
"test in innermost descendant"
])
})
})
})
})
Here we have three nested beforeEach handlers. This fails because we're only storing one, which we can see if we do a dump in the test:
I guess we need to chuck these things into an array instead:
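Along these lines (a sketch of the shape of it, not the verbatim code):

```cfml
// Sketch: store an array of handlers so nested describes can each contribute one
tinyTest.beforeEachHandlers = []

tinyTest.beforeEach = (callback) => {
    tinyTest.beforeEachHandlers.append(callback)
}

tinyTest.it = (label, implementation) => {
    tinyTest.beforeEachHandlers.each((handler) => handler()) // outermost first
    implementation()
    // ... pass/fail reporting elided ...
}

tinyTest.describe = (label, testGroup) => {
    // ... heading output elided ...
    testGroup()
    tinyTest.beforeEachHandlers = [] // NB: this is the dodgy bit
}
```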
This makes the tests pass, but I know this bit is wrong:
tinyTest.beforeEachHandlers = []
If I have a second test anywhere in that hierarchy, the handlers will have been blown away, and won't run:
describe("Multiple sequential handlers", () => {
beforeEach(() => {
result = []
result.append("beforeEach in outer")
})
describe("first descendant of ancestor", () => {
beforeEach(() => {
result.append("beforeEach in middle")
})
describe("inner descendant of ancestor", () => {
beforeEach(() => {
result.append("beforeEach in inner")
})
it("calls each beforeEach handler in the hierarchy, from outermost to innermost", () => {
result.append("test in innermost descendant")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"beforeEach in inner",
"test in innermost descendant"
])
})
})
it("is a test in the middle of the hierarchy, after the inner describe", () => {
result.append("test after the inner describe")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"after the inner describe"
])
})
})
})
This fails, and a dump shows why:
So I've got no handlers at all (which is correct given my current implementation), but it should still have the "beforeEach in outer" and "beforeEach in middle" handlers for this test. I've deleted too much. Initially I was puzzled as to why I still had all that stuff in the result, but then it occurred to me that it was the stuff created for the previous test, just with my last "after the inner describe" appended. So that's predictable/"correct" for there being no beforeEach handlers running at all.
I had to think about this a bit. Initially I thought I'd need to concoct some sort of hierarchical data structure to contain the "array" of handlers, but after some thought I think an array is right, it's just that I only need to pop off the last handler, and only if it's the one set in that describe block. Not sure how I'm gonna work that out, but give me a bit…
- At the beginning of each describe handler I create a context for it - which is just a struct - and push it onto the contexts array.
- A beforeEach call sticks its handler into the last context struct, which will be the one for the describe that the beforeEach call was made in.
- When a test runs, it iterates over the contexts.
- If there's a beforeEach handler in a context, then it's run.
- The last thing describe does is to remove its context from the contexts array.
This means that as each describe block in a hierarchy is run, it "knows" about all the beforeEach handlers created in its ancestors, and during its own run, it adds its own context to that stack. All tests immediately within it, and within any descendant describe blocks, will have all the beforeEach handlers down to and including its own. Once it's done, it tidies up after itself, so any subsequently adjacent describe blocks start with only their ancestors' contexts.
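As a sketch of that (the names are illustrative; the real implementation may differ in detail):

```cfml
// Sketch of the context-stack approach described above
tinyTest.contexts = []

tinyTest.describe = (label, testGroup) => {
    tinyTest.contexts.append({}) // push a context for this describe block
    // ... heading output elided ...
    testGroup()
    tinyTest.contexts.deleteAt(tinyTest.contexts.len()) // pop it on the way out
}

tinyTest.beforeEach = (callback) => {
    // attach the handler to the current (innermost) context
    tinyTest.contexts[tinyTest.contexts.len()].beforeEachHandler = callback
}

tinyTest.it = (label, implementation) => {
    // run every ancestor's handler, outermost first
    tinyTest.contexts.each((context) => {
        if (context.keyExists("beforeEachHandler")) {
            context.beforeEachHandler()
        }
    })
    implementation()
    // ... pass/fail reporting elided ...
}
```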
Hopefully one of the code itself, the bulleted list or the narrative paragraph explained what I mean.
As well as the tests I had before this implementation, I added tests for another few scenarios too. Basically any combination / ordering / nesting of describe / it calls I could think of, testing the correct hierarchical sequence of beforeEach handlers was called in the correct order, for the correct test, without interfering with any other test.
describe("Multiple sequential handlers", () => {
beforeEach(() => {
result = []
result.append("beforeEach in outer")
})
it("is at the top of the hierarchy before any describe", () => {
result.append("at the top of the hierarchy before any describe")
expect(result).toBe([
"beforeEach in outer",
"at the top of the hierarchy before any describe"
])
})
describe("first descendant of ancestor", () => {
beforeEach(() => {
result.append("beforeEach in middle")
})
it("is a test in the middle of the hierarchy, before the inner describe", () => {
result.append("test before the inner describe")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"test before the inner describe"
])
})
describe("inner descendant of ancestor", () => {
it("is a test in the bottom of the hierarchy, before the inner beforeEach", () => {
result.append("in the bottom of the hierarchy, before the inner beforeEach")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"in the bottom of the hierarchy, before the inner beforeEach"
])
})
beforeEach(() => {
result.append("beforeEach in inner")
})
it("calls each beforeEach handler in the hierarchy, from outermost to innermost", () => {
result.append("test in innermost descendant")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"beforeEach in inner",
"test in innermost descendant"
])
})
it("is another innermost test", () => {
result.append("is another innermost test")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"beforeEach in inner",
"is another innermost test"
])
})
})
it("is a test in the middle of the hierarchy, after the inner describe", () => {
result.append("test after the inner describe")
expect(result).toBe([
"beforeEach in outer",
"beforeEach in middle",
"test after the inner describe"
])
})
})
describe("A second describe in the middle tier of the hierarchy", () => {
beforeEach(() => {
result.append("beforeEach second middle describe")
})
it("is a test in the second describe in the middle tier of the hierarchy", () => {
result.append("in the second describe in the middle tier of the hierarchy")
expect(result).toBe([
"beforeEach in outer",
"beforeEach second middle describe",
"in the second describe in the middle tier of the hierarchy"
])
})
})
it("is at the top of the hierarchy after any describe", () => {
result.append("at the top of the hierarchy after any describe")
expect(result).toBe([
"beforeEach in outer",
"at the top of the hierarchy after any describe"
])
})
})
All are green, and all the other tests are still green as well. Yay for the testing safety-net that TDD provides for one. I think I have implemented beforeEach now. Implementing afterEach is next, but this should be easy, and just really the same as I have done here, with similar tests.
However I will do that separately from this, and I am gonna press "send" on this and have a beer first.
This ended up being more of a rabbit hole than I expected it to be. But in the process I've learned a bit more about curl, PHP, Python, JS (client). And actually CFML too I guess.
I can't even remember why I needed to do this, but it was something to do with testing that TinyTestFramework I've been blathering about recently.
Anyhow, I decided I needed to run some code locally on my PC which would send some code off to trycf.com, run it, and send me back the response. I figured it'd be doable if I worked out what Abram was doing when I click the "Run Code" button on the trycf.com UI. As it turns out it's just an HTTP POST, and I could re-run the curl captured from my browser easily enough:
Yeah I don't care
Thanks, but before you take time to mention it: I know the code blows out to the right. It doesn't matter, no-one's expecting you to read it really, and the blow-out is just cosmetic shite.
I can run that in bash and it works fine. BTW, the actual code I'm running is buried in the middle there. There's quite a chunk of overhead to execute that code, and I reckoned the browser was probably being a bit belt-n-braces about all the headers it was sending, and I'd not need most of them. I whittled it down to this:
There's a handy site that converts curls to language-specific implementations, and CFML is one of the options: https://curlconverter.com/#cfml. This was handy in theory, but the HTTP service call it created didn't work. Not its fault: it should have, but it seems CFHTTP can't handle that boundary syntax in the curl. Note that PHP's version also struggled, but the JS (fetch), Java and Python versions all worked fine.
This threw me for a while cos I'm not really that au fait with building HTTP requests by hand, but eventually I cracked it. Don't hand-crank the multipart boundary stuff: let CFML do it for you. So I came up with this proof of concept:
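The proof of concept was along these lines. NB: this is a sketch - the endpoint is the Lucee one I mention below (as it's the only one I've quoted here), and the form-field name is an illustrative guess from the captured request rather than a verified one:

```cfml
// Sketch: let cfhttp assemble the multipart body itself instead of
// hand-cranking the boundary. Field name "code" is an assumption.
cfmlToRun = fileRead(expandPath("./testTinyTestFramework.cfm"))

cfhttp(method="post", url="https://lucee5-sbx.trycf.com/lucee5/getremote.cfm", result="runResult") {
    cfhttpparam(type="formfield", name="code", value=cfmlToRun);
}

writeOutput(runResult.fileContent)
```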
That example just runs the test for my test framework up on trycf instead of here on my own server. Because I can. At least now I remember why I wanted to do this in the first place, but that will be in a later article.
All these tests so far only run on ColdFusion 2021, because that was what I was wanting to target. The other hosts are easy enough to glean just by watching the request when clicking "run code". The Lucee (latest) one is https://lucee5-sbx.trycf.com/lucee5/getremote.cfm
Anyway, not a terribly exciting one this (not like how terribly exciting the shit I write here usually is, eh? EH??), but problem solved and hopefully this will help Scott.
ColdFusion 2021 added the spread and rest operators. These are implemented as two different usages of .... In this article I am going to be making an observation about how the implementation of the rest operator is incomplete and faulty. I first raised this in the CFML Slack channel, but I've now wittered on enough about it to copy it to here.
What does the rest operator do? The docs say:
[The] Rest Operator operator is similar to [the] Spread Operator but behaves in [the] opposite way, while Spread syntax expands the iterables into individual element[s], the Rest syntax collects and condenses them into a single element.
ibid.
That's not so useful out of the context of a discussion on the spread operator, that said. So a test should clarify:
function testRest(first, ...rest) {
return arguments
}
function run() {
describe("Testing CF's rest operator, eg: function testRest(first, ...rest)", () => {
it("combines all latter argument values into one parameter value", () => {
actual = testRest("first", "second", "third", "fourth")
expect(actual).toBe([
first = "first",
rest = ["second", "third", "fourth"]
])
writeDump(var=actual, label="actual/expected")
})
})
}
tinyTest.runTests()
- The rest operator is ... as a prefix to the last parameter in a function signature.
- Any arguments passed from that position on are combined into that one parameter's value.
So far so good. So what's the problem? I had a use case where I needed this to work for named arguments, not positional ones:
it("combines all latter argument values into one parameter value when using named arguments", () => {
actual = testRest(first="first", two="second", three="third", four="fourth")
expect(actual.first).toBe("first")
expect(actual).toHaveKey("rest")
expect(actual.rest).toBe({two="second", three="third", four="fourth"})
})
Same as before, just the arguments have names now. This test fails. Why? Because CF completely ignores the rest operator when one uses named arguments. This "passing" test shows the actual (wrong) behaviour:
it("doesn't work at all with named arguments", () => {
actual = testRest(first="first", two="second", three="third", four="fourth")
expect(actual.first).toBe("first")
expect(actual?.rest).toBeNull()
expect(actual.keyList().listSort("TEXTNOCASE")).toBe("FIRST,four,REST,three,two")
writeDump(var=actual, label="actual")
writeDump(var={first="first", rest={two="second", three="third", four="fourth"}}, label="expected")
})
As I said, I raised this on the CFML Slack channel. I got one useful response:
I think it's unusual to use rest with named parameters, but, CF supports named parameters as well as positional so I would expect it to work. I'd settle for it being fully documented though as only working with positional.
John Whish
Fair point, and as I said in reply:
My reasoning was remarkably similar.
I started thinking "it's a bit unorthodox to use named arguments here", but then I stopped to think… why? And my conclusion was "because in other languages I use this operation there's no such thing as named arguments, so I've not been used to thinking about it", and that was the only reason I could come up with for me to think that. So I binned that thought (other than deciding to ask about it here).
The thing is there's no good reason I can think of that named arguments should not work. One cannot mix named and positional arguments which would be one wrinkle, so it's 100% reliable to take a set of named arguments, and match the argument names to the parameter names in the method signature. There is no ambiguity: any args that have the same name as a param are assigned as that parameter value. All the rest - instead of being passed in as ad-hoc arguments - are handled by the ... operation.
I cannot see a way that there's any ambiguity either. It's 100% "match the named ones, pass all others in the rest param".
What happens if the method call actually specifies a named argument that matches the name of the "rest" param? Same as if one specifies a positional argument in the position of the "rest" param: it doesn't matter. All arguments that don't match other named params are passed in the rest argument value.
I also think that if for some reason named arguments are not supported for use on function using the rest operator, then an exception should be thrown; not simply the code being ignored.
And whatever the behaviour is needs to be documented.
However one spins it, there are at least two bugs here:
- Either it should work as I'd expect (or some variation thereof, if I have not thought of something), or it should throw an exception.
- The behaviour should be documented.
I have not raised tickets for these as I'm not really a user of CF any more, so I don't care so much. Enough to raise it with them; not enough to care what they do about it. But CFers probably should care.
NB: I did not realise Lucee did not support the spread and rest operators at all, so I had to take a different approach to my requirement anyhow. I've not decided on the best way as yet.
There is a ticket for them to be implemented in Lucee: LDEV-2201.
Just to pass the time / avoid other things I really ought to be doing instead, over the last few evenings I've been messing around with my TinyTestFramework. I first created this as an exercise in doing some "real world" TDD for a blog article: "TDD: writing a micro testing framework, using the framework to test itself as I build it". The other intent of this work is so I can run actual tests in my code on trycf.com. This is useful when I'm both asking and answering CFML questions I encounter on the CFML Slack and other places.
The first iteration of the framework was pretty minimal. It was just this:
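It was something like this - sketching it from memory rather than reproducing the exact original, so treat the details as indicative:

```cfml
// Sketch of the original minimal framework: describe / it / expect / toBe only
tinyTest = {
    describe = (label, testGroup) => {
        writeOutput("#label#<br>")
        testGroup()
    },
    it = (label, implementation) => {
        try {
            implementation()
            writeOutput("#label#: OK<br>")
        } catch (any e) {
            writeOutput("#label#: Failed: #e.message#<br>")
        }
    },
    expect = (actual) => {
        return {toBe = (expected) => {
            if (actual != expected) {
                throw(message="Expected [#expected#], got [#actual#]")
            }
        }}
    }
}
// expose as top-level functions for the tests to use
describe = tinyTest.describe
it = tinyTest.it
expect = tinyTest.expect
```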
But it let me write tests in a Jasmine/TestBox sort of way, right there in trycf.com:
describe("describe", () => {
it("it is a test", () => {
expect(true).toBe(true)
})
})
And this would output:
describe
it is a test: OK
That's cool. That was a good MVP. And I actually use it on trycf.com.
However I quickly felt that only having the one toBe matcher was limiting, and made my tests less clear than they could be. Especially when I wanted to expect null or an exception. So… I messed around some more.
I'm not going to take you through the full TDD exercise of writing all this, but I assure you I TDDed almost all of it (I forgot for a coupla small tweaks, I have to admit: I'm not perfect).
But here's the code (also as a gist), for those that are interested:
And it's tested using itself, obviously. Interestingly / predictably, there are >300 lines of test code there. The ratio is 1:2.5 code:tests.
What am I gonna do next? I want to improve that toInclude matcher to work on more than just strings. I also want to have a toBeInstanceOf matcher. Also at some point I better do something with checking structs and arrays and that sorta jazz. I've not needed to actually do that stuff yet, so have not bothered to implement them. But I intend to.
This follows on from CFML: implementing dependency injection in a CFWheels web site. In that article I got the DI working, but only with a test scenario. For the sake of completeness, I'm gonna continue with the whole point of the exercise, which is getting a logger service into my model objects, via DI.
0.6 adding LogBox
I ran into some "issues" yesterday whilst trying to write this article. Documented here: "A day in the life of trying to write a blog article in the CFML ecosystem", and there's some file changes committed as 0.5.1. So that killed my motivation to continue this work once I'd waded through that. I appear to be back on track now though.
Right so I'll add a quick test to check LogBox is installed. It's a bit of a contrived no-brainer, but this is playing to my strengths. So be it:
If LogBox is where it's supposed to be: it'll pass. Initially I had a new LogBox() in there, but it needs some config to work, and that requires a bit of horsing around, so I'll deal with that next. For now: is it installed? Test sez "no":
Test sez… yes.
OK. That was unexpected. Why did that pass? I have checked that LogBox is not installed, so WTH??
After a coupla hours of horsing about looking at TestBox code, I worked out there's a logic… um… shortfall (shall I say) in its implementation of that regex param, which is a bit wayward. The code in question is this (from /system/Assertion.cfc):
if (
len( arguments.regex ) AND
(
!arrayLen( reMatchNoCase( arguments.regex, e.message ) )
OR
!arrayLen( reMatchNoCase( arguments.regex, e.detail ) )
)
) {
return this;
}
Basically this requires both message and detail to not match the regex for it to be considered "the same" exception. This is a bit rigorous, as it's really unlikely for this to be the case in the real world. I've raised it with Ortus (TESTBOX-349), but for now I'll just work around it. Oh yeah, there's a Lucee bug interfering with this too: when an exception does have the same message and detail, Lucee ignores the detail. I've not raised a bug for this yet: I'm waiting for them to feed back as to whether I'm missing something. When there's a ticket, I'll cross-post it here.
Anyway, moving on, I'll just check for any exception, and that'll do:
0.7 wiring LogBox into the DependencyInjectionService
One of the reasons the previous step really didn't push the boat out with testing if LogBox was working, is that to actually create a working LogBox logger takes some messing about; and I wanted to separate that from the installation. And also to give me some time to come up with the next test case. I want to avoid this sort of thing:
I don't want to skip to a test that is "it can log stuff that happens in the Test model object". I guess it is part of the requirement that the logger is handled via dependency injection into the model, so we can first get it set up and ready to go in the DependencyInjectionService. I mean the whole thing here is about DI: the logger is just an example usage. I think the next step is legit.
I've never used LogBox before, so I am RTFMing all this as I type (docs: "Configuring LogBox"). It seems I need to pass a Config object to my LogBox object, then get the root logger from said object… and that's my logger. All of that can go in a factory method in configureDependencies, and I'll just put the resultant logger into the IoC container.
it("loads a logger", () => {
diService = new DependencyInjectionService()
logger = diService.getBean("Logger")
expect(logger).toBeInstanceOf("logbox.system.logging.Logger")
expect(() => logger.info("TEST")).notToThrow()
})
I'm grabbing a logger and logging a message with it. The expectation is simply that the act of logging doesn't error. For now.
First here's the most minimal config I seem to be able to get away with:
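It was a Config CFC along these lines - a sketch of what I ended up with, with the DummyAppender class path taken from the LogBox docs:

```cfml
// Config.cfc: minimal LogBox configuration DSL
component {
    function configure() {
        logBox = {
            // it errored with no appenders at all, so: one dummy appender
            appenders = {
                dummy = {class = "logbox.system.logging.appenders.DummyAppender"}
            },
            // root logger sends everything to that appender
            root = {appenders = "*"}
        };
    }
}
```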
The docs ("LogBox DSL") seemed to indicate I only needed the logBox struct, but it errored when I used it unless I had at least one appender. I'm just using a dummy one for now because I'm testing config, not operations. And there's nothing to test there: it's all implementation, so I think it's fine to create that in the "green" phase of "red-green-refactor" from that test above (currently red). With TDD the green phase is just about doing the minimum code to make the test pass. That doesn't mean it needs to be one line of code, or one function or whatever. If my code needed to call a method on this Config object: then I'd test that separately. But I'm happy that this is - well - config. It's just data.
Once we have that I can write my factory method on DependencyInjectionService:
private function configureDependencies() {
variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
variables.container.declareBean("TestDependency", "services.TestDependency")
variables.container.factoryBean("Logger", () => {
config = new Config()
logboxConfig = new LogBoxConfig(config)
logbox = new LogBox(logboxConfig)
logger = logbox.getRootLogger()
return logger
})
}
I got all that from the docs, and I have nothing to add: it's pretty straightforward. Let's see if the test passes:
Cool.
Now I need to get my Test model to inject the logger into itself, and verify I can use it:
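The test was something along these lines - a sketch using the same TestBox/MockBox calls as the functional test further down, so treat the exact shape as indicative:

```cfml
// Sketch of the unit test: spy on the logger's debug method
it("logs that getMessage was called", () => {
    test = model("Test").new()
    prepareMock(test)
    logger = test.$getProperty("logger")
    prepareMock(logger)
    logger.$("debug") // mock debug so we can inspect calls to it
    test.getMessage()
    debugCalls = logger.$callLog().debug
    expect(debugCalls).toHaveLength(1)
    expect(debugCalls[1][1]).toBe("getMessage was called")
})
```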
Here I am mocking the logger's debug method, just so I can check it's being called, and with what. Having done this, I am now wondering about "don't mock what you don't own", but I suspect in this case I'm OK because whilst the nomenclature is all "mock", I'm actually just spying on the method that "I don't own". IE: it's LogBox's method, not my application's method. I'll have to think about that a bit.
And the implementation for this is way easier than the test:
// models/Test.cfc
private function setDependencies() {
variables.dependency = variables.diService.getBean("TestDependency")
variables.logger = variables.diService.getBean("Logger")
}
public function getMessage() {
variables.logger.debug("getMessage was called")
return variables.dependency.getMessage()
}
Just for the hell of it, I also wrote a functional test to check the appender was getting the expected info:
it("logs via the correct appender", () => {
test = model("Test").new()
prepareMock(test)
logger = test.$getProperty("logger")
appenders = logger.getAppenders()
expect(appenders).toHaveKey("DummyAppender", "Logger is not configured with the correct appender. Test aborted.")
appender = logger.getAppenders().DummyAppender
prepareMock(appender)
appender.$("logMessage").$results(appender)
test.getMessage()
appenderCallLog = appender.$callLog()
expect(appenderCallLog).toHaveKey("logMessage")
expect(appenderCallLog.logMessage).toHaveLength(1)
expect(appenderCallLog.logMessage[1]).toSatisfy((actual) => {
expect(actual[1].getMessage()).toBe("getMessage was called")
expect(actual[1].getSeverity()).toBe(logger.logLevels.DEBUG)
expect(actual[1].getTimestamp()).toBeCloseTo(now(), 2, "s")
return true
}, "Log entry is not correct")
})
It's largely the same as the unit test, except it spies on the appender instead of the logger. There's no good reason for doing this, I was just messing around.
This is not the article I intended to write today. That article was gonna be titled "CFML: Adding a LogBox logger to a CFWheels app via dependency injection", but I'll need to get to that another day now.
Here's how far that article got before the wheels fell off:
And that was it.
Why? Well I started by writing an integration test just to check that box install logbox did what I expected:
Simple enough. It'll throw an exception if LogBox ain't there, and I'm expecting that. It's a dumb test but it's a reasonable first step to build on.
I run the test:
Err… come again? I ain't installed it yet. I lifted the code out of the expect callback and ran it "raw" in the body of the test case: predictable exception. I put it back in the callback. The test passed. I changed the matcher to toThrow. The test still passed. So this code both throws an exception and doesn't throw an exception. This is pleasingly Schrödingeresque, but not helpful.
The weird thing is I know this is not a bug in TestBox, cos we use notToThrow in our tests at work. I port the test over to my work codebase: test fails (remember: this is what I want ATM, we're still at the "red" of "red-green-refactor").
I noticed that we were running a slightly different version of TestBox in the work codebase: 4.4.0-snapshot compared to my 4.5.0+5. Maybe there's been a regression. I changed my TestBox version in box.json and - without thinking things through - went box install again (not just box install testbox, which is all I really needed to do), and was greeted with this:
That's reasonably bemusing as I had just used box install fw1 to install it in the first place, and that went fine. And I have not touched it since. I checked what version I already had installed (in framework/box.json), and it claims 4.3.0. So… ForgeBox… I beg to differ pal. You found this version y/day, why can't you find it today? I check on ForgeBox, and for 4.x I see versions 4.0.0, 4.1.0, 4.2.0, 4.5.0-SNAPSHOT. OK, so granted: no 4.3.0. Except that's what it installed for me yesterday. Maybe 4.3.0 has issues and got taken down in the last 24h (doubtful, but hey), so I blow away my /framework directory, and remove the entry from box.json, and box install fw1 again. This is interesting:
4.2.0. But its entry in its own box.json is 4.3.0, and the constraint it put in my box.json is ^4.3.0.
I do not have time or inclination for any of this, so I just stick a constraint of ~4.2.0 in my box.json, and that seems to have solved it. I mean the error went away: it's still installing 4.3.0. Even with a hard-coded 4.2.0 it's still installing 4.3.0.
Brad Wood from Ortus/CommandBox had a look at this, nutted-out that there was something wrong with the way the FW/1 package on ForgeBox was configured, and he in turn pinged Steve Neiland who looks after FW/1 these days, and he got this sorted. I'm now on 4.3.0, and it says it's 4.2.0. And box install no longer complains at me. Cheers fellas.
Then I noticed that because of the stupid way CFWheels "organises" itself in the file system, I had inadvertently overwritten a bunch of my own CFWheels files. Sigh. CFWheels doesn't bother to package itself up as "app" (its stuff) and "implementation" (my code that uses their app), it just has "here's some files: some you should change (anything outside the wheels subdirectory), some you probably shouldn't (the stuff in the wheels subdirectory)", but there's no differentiation when it comes to installation: all the files are deployed. Overwriting all the user-space files with their original defaults. Sorry but this is just dumbarsey. Hurrah for source control and small commit iterations is all I can say, as I could just revert some files and I was all good.
Right so now I have the same version of TestBox installed here as in our app at work (remember how this was all I was trying to do? Update TestBox. Nothing to do with FW/1, and nothing to do with CFWheels. But there's an hour gone cocking around with that lot).
And the test still doesn't work. Bollocks.
I notice the Lucee version is also different. We're locked into an older version of Lucee at work due to bugs and incompats in newer versions that we're still waiting on to be fixed, so the work app is running 5.3.7.47, and I am on 5.3.8.206. Surely it's not that? I rolled my app's Lucee version back to 5.3.7.47 and the test started failing (correctly). OK, so it's a Lucee issue.
I spent about an hour messing around doing a binary search of Lucee container versions until I identified the last version that wasn't broken (5.3.8.3) and the next version - a big jump here - 5.3.8.42 that was broken. I looked at a diff of the code but nothing leapt out at me. This was slightly daft as I had no idea what I was looking for, so that was probably half an hour of time looking at Lucee's source code in an aimless fashion. I actually saw the change that was the problem, but didn't clock that that is what caused it at the time.
Having drawn a blank, I slapped my forehead, called myself a dick, and went back to the code in TestBox that was behaving differently. That would obviously tell me where to look for the issue.
There are some Java method calls there to act as controls, but on Lucee's current version, we get this:
And on earlier versions it's this:
(Full disclosure: I'm using Lucee 4.5 on trycf.com for that second dump, but it's the same results in earlier versions of Lucee 5, up to the point where it starts going wrong)
Note how previously a regex match of .* matches an empty string? This is correct. It does. In all regex engines I know of. Yet in Lucee's current versions, it returns a completely empty array. This indicates no match, and it's wrong. Simple as that. So there's the bug.
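To be clear about what correct behaviour looks like here, Python's re engine (shown purely as a comparison point, not Lucee's implementation) agrees that .* matches the empty string:

```python
import re

# In every mainstream regex engine, ".*" matches the empty string:
# zero repetitions of "any character" is a legitimate match.
matches = re.findall(".*", "")
print(matches)  # ['']  - one match: the empty string

# An engine that reports no match at all here is misbehaving.
assert matches == [""]
```

The same holds in Java, JavaScript, PCRE, POSIX: an empty match is still a match.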
I was pointed in the direction of an existing issue for this: LDEV-3703. Despite being a regression they know they caused, Lucee have decided to only fix it in 6.x. Not the version they actually broke. Less than ideal, but so be it.
There were a coupla regex issues dealt with between those Lucee versions I mentioned before. Here's a Jira search for "fixversion >= 5.3.8.4 and fixversion < 5.3.8.42 and text ~ regex". I couldn't be arsed tracking back through the code, but I did find something in LDEV-3009 mentioning a new Application.cfc setting:
this.useJavaAsRegexEngine = true
This is documented for ColdFusion in Application variables, and… absolutely frickin' nowhere in the Lucee docs, as far as I can see.
On a whim I stuck that setting in my Application.cfc and re-ran the test. If the setting was false: the test doesn't work. If it was true: the test does work. That's something, but Lucee is not off the hook here. The behaviour of that regex match does not change between the old and new regex engines! .* always matches an empty string! So there's still a bug.
However, being pragmatic, I figured "problem solved" (for now), and moved on. For some reason I restarted my container, and re-hit my tests:
I switched the setting to this.useJavaAsRegexEngine = false and the tests ran again (failed incorrectly, but ran). So… let me get this straight. For TestBox to work, I need to set that setting to true. To get CFWheels to work, I need to set it to false.
For pete's sake.
As I said on the Lucee subchannel on the CFML Slack:
Do ppl recall how I've always said these stupid flags to change behaviour of the language were a fuckin dumb idea, and are basically unusable in a day and age where we all rely on third-party libs to do our jobs?
Exhibit. Fucking. A.
Every single one of these stupid, toxic settings doubles the overhead for library providers to make their code work. I do not fault TestBox or CFWheels one bit here. They can't be expected to support the exponential number of variations each one of those settings accumulates. I can firmly say that no CFML code should ever be written that depends on any of these settings. And no library or third-party code should ever be tested with the setting variation on. Just ignore them. The settings should not exist. Anyway: this is an editorial digression. "We are where we are" as the over-used saying goes.
Screw all this. Seriously. All I wanted to do is to do a blog article about perhaps 50-odd new lines of code in my example app. Instead I spent four hours untangling this shite. And my blog article has not progressed.
Here's what I needed to do to my app to work around these various issues:
Recently I wanted to abstract some logic out of one of our CFWheels model classes, into its own representation. Code had grown organically over time, with logic being inlined in functions, making a bunch of methods a bit weighty, and had drifted well away from the notion of following the "Single Responsibility Principle". Looking at the code in question, even if I separated it out into a bunch of private methods (it was a chunk of code, and refactoring into a single private method would not have worked), it was clear that this was just shifting the SRP violation out of the method, and into the class. This code did not belong in this class at all. Not least of all because we also needed to use some of it in another class. This is a pretty common refactoring exercise in OOP land.
I'm going to be a bit woolly in my descriptions of the functionality I'm talking about here, because the code in question is pretty business-domain-specific, and would not make a lot of sense to people outside our team. Let's just say it was around logging requirements. It wasn't, but that's a general notion that everyone gets. We need to log stuff in one of our classes, and currently it's all baked directly into that class. It's not. But let's pretend.
I could see what I needed to do: I'll just rip this lot out into another service class, and then use DI to… bugger. CFWheels does not have any notion of dependency injection. I mean... I'm fairly certain it doesn't even use the notion of constructors when creating objects. If one wants to use some code in a CFWheels model, one writes the code in the model. This is pretty much why we got to where we are. If one wants to use code from another class in one's model... one uses the model factory method to get a different model (eg: myOtherModel = model("Other")). Inline in one's code. There's a few drawbacks here:
In CFWheels parlance, "model" means "an ORM representation of a DB row". The hard-coupling between models and a DB table is innate to CFWheels. It didn't seem to occur to the architects of CFWheels that not all domain model objects map to underlying DB storage. One can have a "tableless model", but it still betrays the inappropriate coupling between domain model and storage representation. A logging service is a prime example of this. It is part of the domain model. It does not represent persisted objects. In complex applications, the storage considerations around the logic are just a tier underneath the domain model. It's not baked right into the model. I've just found a telling quote on the CFWheels website:
That's not correct. That is not what the model is. But explains a lot about CFWheels.
Secondly, it's still a bit of a fail of separation of concerns / IoC if the "calling code" hard-codes which implementation of a concern it is going to use.
If one is writing testable code, one doesn't want that second-point tight-coupling going on. If I'm testing features of my model, I want to stub out the logger it's using, for example. This is awkward to do if the decision as to which logger implementation we're using is baked-into the code I'm testing.
Anyway, you probably get it. Dependency injection exists for a reason. And this is something CFWheels appears to have overlooked.
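The testability point can be sketched in a few lines of Python (all names here are hypothetical, purely for illustration): because the logger arrives via the constructor, a test can hand in a stub and observe it, with no hard-coded implementation choice baked into the model.

```python
# Hypothetical classes to illustrate the testability point (not CFWheels code)
class Model:
    def __init__(self, logger):
        self.logger = logger  # injected: the model doesn't choose its own logger

    def get_message(self):
        self.logger.debug("get_message was called")
        return "hello"

# Production wires in a real logger; a test can pass a stub instead:
class StubLogger:
    def __init__(self):
        self.calls = []

    def debug(self, message):
        self.calls.append(message)

stub = StubLogger()
model = Model(stub)

assert model.get_message() == "hello"
assert stub.calls == ["get_message was called"]
```

With the `model("Other")` factory-call-inline approach, there's no seam like that constructor argument to swap the implementation at.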
0.1 - Baseline container
I have worked through the various bits and pieces I'm going to discuss already, to make sure it all works. But as I write this I am starting out with a bare-bones Lucee container, and a bare-bones MariaDB container (from my lucee_and_mariadb repo). I've also gone ahead and installed TestBox, and my baseline tests are passing:
Yes. I have a test that TestBox is installed and running correctly.
We have a green light, so that's a baseline to start with. I've tagged that in GitHub as 0.1.
0.2 - CFWheels operational
OK, I'll install CFWheels. But first my requirement here is "CFWheels is working". I will take this to mean that it displays its homepage after install, so I can test for that pretty easily:
I'm using TDD for even mundane stuff like this so I don't get ahead of myself, and miss bits I need to do to get things working.
This test fails as one would expect. Installing CFWheels is easy: just box install cfwheels. This installs everything in the public web root, which is not great, but it's the way CFWheels works. I've written another series about how to get a CFWheels-driven web app working whilst also putting the code in a sensible place, summarised here. Short version: it's possible to get CFWheels working outside the context of a web-browsable directory, but life's too short to do all that horsing around today, so we'll just use the default install pattern. Note: I do not consider this approach to be appropriate for production, but it'll do for this demonstration.
After the CFWheels installation I do still have to fix-up a few things:
It steamrolled my existing Application.cfc, so I had to merge the CFWheels bit with my bit again.
Anything in CFWheels will only work properly if it's called in the context of a CFWheels app, so I need to tweak my test config slightly to accommodate that.
And that CFWheels "context" only works if it's called from index.cfm. So I need to rename my test/runTests.cfm to be index.cfm.
Having done that:
A passing test is all good, but I also made sure the thing did work. By actually looking at it:
I want to mess around with models, so I need to create one. I have a stub DB configured with this app, and it has a table test with a coupla test rows in it. I'll create a CFWheels model that maps to that. CFWheels expects plural table names, but mine's singular so I need a config tweak there. I will test that I can retrieve test records from it.
it("can find test records from the DB", () => {
tests = model("Test").findAll(returnAs="object")
expect(tests).notToBeEmpty()
tests.each((test) => {
expect(test).toBeInstanceOf("models.Test")
expect(test.properties().keyArray().sort("text")).toBe(["id", "value"])
})
})
Good. Next I wanted to check when CFWheels calls my Test class's constructor. Given one needs to use that factory method (eg: model("Test").etc) to do anything relating to model objects / collections / etc, I was not sure whether the constructor comes into play. Why do I care? Because when using dependency injection, one generally passes the dependencies in as constructor arguments. This is not the only way of doing it, but it's the most obvious ("KISS" / "Principle of Least Astonishment") approach. So let's at least check.
it("has its constructor called when it is instantiated by CFWheels", () => {
test = model("Test").new()
expect(test.getFindMe()).toBe("FOUND")
})
Implementation:
public function init() {
variables.findMe = "FOUND"
}
public string function getFindMe() {
return variables.findMe
}
Result:
OK so scratch that idea. CFWheels does not call the model class's constructor. Initially I was annoyed about this as it seems bloody stupid. But then I recalled that when one is using a factory method to create objects, it's not unusual to not use the public constructor to do so. OK fair enough.
I asked around, and (sorry I forget who told me, or where they told me) found out that CFWheels does provide an event hook I can leverage for when a model object is created: model.afterInitialization. I already have my test set up to manage my expectations, so I can just change my implementation:
function config() {
table(name="test")
afterInitialization("setFindMe")
}
public function setFindMe() {
variables.findMe = "FOUND"
}
And that passed this time. Oh I changed the test label from "has its constructor called…" to be "has its afterInitialization handler called…". But the rest of the test stays the same. This is an example of how with TDD we are testing the desired outcome rather than the implementation. It doesn't matter whether the value is set by a constructor or by an event handler: it's the end result of being able to use the value that matters.
At the moment I have found my "way in" to each object as they are created. I reckon from here I can have a DependencyInjectionService that I can call upon from the afterInitialization handler so the model can get the dependencies it needs. This is not exactly "dependency injection", it's more "dependency self-medication", but it should work.
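That "dependency self-medication" idea can be sketched generically in Python (hypothetical names throughout; the real hook is CFWheels' afterInitialization): the framework fires a post-initialisation hook, and the object pulls its own dependencies out of the DI service.

```python
# Sketch of the "dependency self-medication" arrangement (hypothetical names)
class TestDependency:
    def get_message(self):
        return "SET_BY_DEPENDENCY"

class DependencyInjectionService:
    def __init__(self):
        # bean name -> factory; a real container would do much more
        self._beans = {"TestDependency": TestDependency}

    def get_bean(self, name):
        return self._beans[name]()

class Model:
    def after_initialization(self, di_service):
        # stands in for CFWheels' afterInitialization event handler:
        # the object fetches its own dependencies, rather than having
        # them pushed in via a constructor
        self.dependency = di_service.get_bean("TestDependency")

    def get_message(self):
        return self.dependency.get_message()

model = Model()
model.after_initialization(DependencyInjectionService())  # the framework would call this
assert model.get_message() == "SET_BY_DEPENDENCY"
```

Not "proper" injection, as the object asks rather than receives, but it gets the implementation choice out of the model's own code.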
My DI requirements ATM are fairly minimal, but I am not going to reinvent the wheel. I'm gonna use DI/1 to handle the dependencies. I've had a look at it before, and it's straightforward enough, and is solid.
My tests are pretty basic to start with: I just want to know it's installed properly and operational:
it("can be instantiated", () => {
container = new framework.ioc("/services")
expect(container).toBeInstanceOf("framework.ioc")
})
And now to install it: box install fw1
And we have a passing test (NB: I'm not necessarily showing you the failures, but I never actually proceed with anything until I've seen the test failing):
It's not much use unless it loads up some stuff, so I'll test that it can:
it("loads services with dependencies", () => {
container = new framework.ioc("/services")
testService = container.getBean("TestService")
expect(testService.getDependency()).toBeInstanceOf("services.TestDependency")
})
I'm gonna show the failures this time. First up:
This is reasonable because the TestService class isn't there yet, so we'd expect DI/1 to complain. The good news is it's complaining in the way we'd want it to. TestService is simple:
component {
public function init(required TestDependency testDependency) {
variables.dependency = arguments.testDependency
}
public TestDependency function getDependency() {
return variables.dependency
}
}
Now the failure changes:
This is still a good sign: DI/1 is doing what it's supposed to. Well: trying to. And reporting back with exactly what's wrong. Let's put it (and, I imagine: you) out of its misery and give it the code it wants. TestDependency:
component {
}
And now DI/1 has wired everything together properly:
As well as creating a DI/1 instance and pointing it at a directory (well: actually I won't be doing that), I need to hand-crank some dependency creation as they are not just a matter of something DI/1 can autowire. So I'm gonna wrap-up all that in a service too, so the app can just use a DependencyInjectionService, and not need to know what its internal workings are.
To start with, I'll just make sure the wrapper can do the same thing we just did with the raw IoC object from the previous tests:
describe("Tests for DependencyInjectionService", () => {
it("loads the DI/1 IoC container and its configuration", () => {
diService = new DependencyInjectionService()
testService = diService.getBean("DependencyInjectionService")
expect(testService).toBeInstanceOf("services.DependencyInjectionService")
})
})
Instead of testing the TestService here, I decided to use DependencyInjectionService to test it can… load itself
There's a bit more code this time for the implementation, but not much.
import framework.ioc
component {
public function init() {
variables.container = new ioc("")
configureDependencies()
}
private function configureDependencies() {
variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
}
public function onMissingMethod(required string missingMethodName, required struct missingMethodArguments) {
return variables.container[missingMethodName](argumentCollection=missingMethodArguments)
}
}
It creates an IOC container object, but doesn't scan any directories for autowiring opportunities this time.
It hand-cranks the loading of the DependencyInjectionService object.
It also acts as a decorator for the underlying IOC instance, so calling code just calls getBean (for example) on a DependencyInjectionService instance, and this is passed straight on to the IOC object to do the work.
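That delegation trick has a direct analogue in Python's __getattr__, if it helps clarify what onMissingMethod is doing here (the Container class below is a hypothetical stand-in for the DI/1 ioc object):

```python
# Hypothetical stand-in for the underlying DI/1 IoC container
class Container:
    def get_bean(self, name):
        return f"bean:{name}"

class DependencyInjectionService:
    def __init__(self):
        self._container = Container()

    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails, so the wrapper's
        # own methods take precedence; everything else is forwarded to the
        # wrapped container - same idea as CFML's onMissingMethod
        return getattr(self._container, name)

di = DependencyInjectionService()
assert di.get_bean("TestDependency") == "bean:TestDependency"  # forwarded call
```

Calling code only ever sees the DependencyInjectionService; the container stays an internal detail.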
And we have a passing test:
Now we can call our DI service in our model, and the model can use it to configure its dependencies. First we need to configure the DependencyInjectionService with another bean:
private function configureDependencies() {
variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
variables.container.declareBean("TestDependency", "services.TestDependency")
}
describe("Tests for TestDependency", () => {
describe("Tests for getMessage method", () => {
it("returns SET_BY_DEPENDENCY", () => {
testDependency = new TestDependency()
expect(testDependency.getMessage()).toBe("SET_BY_DEPENDENCY")
})
})
})
// TestDependency.cfc
component {
public string function getMessage() {
return "SET_BY_DEPENDENCY"
}
}
That's not quite the progression of the code there. I had to create TestDependency first, so I did its test and implementation first; then wired it into DependencyInjectionService.
Now we need to wire that into the model class. But first a test to show it's worked:
describe("Tests for Test model", () => {
describe("Tests of getMessage method", () => {
it("uses an injected dependency to provide a message", () => {
test = model("Test").new()
expect(test.getMessage()).toBe("SET_BY_DEPENDENCY")
})
})
})
Hopefully that speaks for itself: we're gonna get that getMessage method in Test to call the equivalent method from TestDependency. And to do that, we need to wire an instance of TestDependency into our instance of the Test model. I should have thought of better names for these classes, eh?
// /models/Test.cfc
import services.DependencyInjectionService
import wheels.Model
component extends=Model {
function config() {
table(name="test")
afterInitialization("setFindMe,loadIocContainer")
}
public function setFindMe() {
variables.findMe = "FOUND"
}
public string function getFindMe() {
return variables.findMe
}
private function loadIocContainer() {
variables.diService = new DependencyInjectionService()
setDependencies()
}
private function setDependencies() {
variables.dependency = variables.diService.getBean("TestDependency")
}
public function getMessage() {
return variables.dependency.getMessage()
}
}
That works…
…but it needs some adjustment.
Firstly I want the dependency injection stuff to be done for all models, not just this one. So I'm going to shove some of that code up into the Model base class:
// /models/Model.cfc
/**
* This is the parent model file that all your models should extend.
* You can add functions to this file to make them available in all your models.
* Do not delete this file.
*/
import services.DependencyInjectionService
component extends=wheels.Model {
function config() {
afterInitialization("loadIocContainer")
}
private function loadIocContainer() {
variables.diService = new DependencyInjectionService()
setDependencies()
}
private function setDependencies() {
// OVERRIDE IN SUBCLASS
}
}
Now the base model handles the loading of the DependencyInjectionService, and calls a setDependencies method. Its own method does nothing, but if a subclass has an override of it, then that will run instead.
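For what it's worth, this base-class arrangement is the classic Template Method pattern: the parent owns the workflow, the child fills in one step. A minimal Python sketch (hypothetical names; FakeDiService is invented for the illustration):

```python
# Template Method sketch of the Model / setDependencies arrangement
class Model:
    def load_ioc_container(self, di_service):
        self.di_service = di_service
        self.set_dependencies()  # the subclass's override runs, if there is one

    def set_dependencies(self):
        pass  # override in subclass

class Test(Model):
    def set_dependencies(self):
        self.dependency = self.di_service.get_bean("TestDependency")

class FakeDiService:
    def get_bean(self, name):
        return f"bean:{name}"

test = Test()
test.load_ioc_container(FakeDiService())
assert test.dependency == "bean:TestDependency"

plain = Model()
plain.load_ioc_container(FakeDiService())  # base no-op: nothing breaks
```

A model with no dependencies just doesn't override setDependencies, and nothing breaks.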
I will quickly tag that lot before I continue. 0.4.
But…
0.5 - Dealing with the hard-coded DependencyInjectionService initialisation
The second problem is way more significant. Model is creating and initialising that DependencyInjectionService object every time a model object is created. That's not great. All that stuff only needs to be done once for the life of the application. I need to do that bit onApplicationStart (or whatever approximation of that CFWheels supports), and then I need to somehow expose the resultant object in Model.cfc. A crap way of doing it would be to just stick it in application.dependencyInjectionService and have Model look for that. But that's a bit "global variable" for my liking. I wonder if CFWheels has an object cache that it intrinsically passes around the place, and exposes to its inner workings. I sound vague because I had pre-baked all the code up to where I am now a week or two ago, and it was not until I was writing this article I went "oh well that is shit, I can't be having that stuff in there". And I don't currently know the answer.
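What I'm after can be sketched generically (Python, hypothetical names): do the expensive setup once at application start, cache the result, and have every subsequent model creation read the cached instance rather than building a new one.

```python
# Init-once-then-cache sketch (hypothetical names; the real thing uses
# CFWheels' cache functions and application lifecycle events)
_cache = {}

class DependencyInjectionService:
    instances = 0

    def __init__(self):
        # stands in for the expensive container setup
        DependencyInjectionService.instances += 1

def on_application_start():
    _cache["diService"] = DependencyInjectionService()

def get_from_cache(key):
    return _cache[key]

on_application_start()  # runs once, at application start

# every model initialisation now reuses the same service
a = get_from_cache("diService")
b = get_from_cache("diService")
assert a is b
assert DependencyInjectionService.instances == 1
```

The open question is only where CFWheels wants that cache to live, which is what the rest of this section digs into.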
Let's take the red-green-refactor route, and at least get the initialisation out of Model, and into the application lifecycle.
…
…
…
Ugh. Looking through the CFWheels codebase is not for the faint-hearted. Unfortunately the "architecture" of CFWheels is such that it's about one million (give or take) individual functions, with no real sense of cohesion other than that a set of functions might be in the same .cfm file (yes: .cfm file :-| ), which then gets arbitrarily included all over the place. If I dump out the variables scope of my Test model class, it has 291 functions. Sigh.
There's a bunch of functions possibly relating to caching, but there's no Cache class or CacheService or anything like that... there's just some functions that act upon a bunch of application-scoped variables that are not connected in any way other than having the word "cache" in them. I feel like I have fallen back through time to the days of CF4.5. Ah well.
I'll chance my arm creating my DependencyInjectionService object in my onApplicationStart handler, use the $addToCache function to maybe put it into a cache… and then pull it back out in Model. Please hold.
[about an hour passes. It was mostly swearing]
Okey doke, so first things first: obviously there's a new test:
describe("Tests for onApplicationStart", () => {
it("puts an instance of DependencyInjectionService into cache", () => {
diService = $getFromCache("diService")
expect(diService).toBeInstanceOf("services.DependencyInjectionService")
})
})
The implementation for this was annoying. I could not use the onApplicationStart handler in my own Application.cfc because CFWheels steamrolls it with its own one. Rather than using the CFML lifecycle event handlers the way they were intended, and also using inheritance when an application and an application framework might each have their own work to do, CFWheels just makes you write its handler methods into your Application.cfc. This sounds ridiculous, but it's what CFWheels does. I'm going to follow up on this stupidity in a separate article, perhaps. But suffice it to say that instead of using my own onApplicationStart method, I had to do it the CFWheels way, which is… wait for it… to put the code in events/onapplicationstart.cfm. Yuh. Another .cfm file. Oh well. Anyway, here it is:
<cfscript>
// Place code here that should be executed on the "onApplicationStart" event.
import services.DependencyInjectionService
setDependencyInjectionService()
private void function setDependencyInjectionService() {
diService = new DependencyInjectionService()
$addToCache("diService", diService)
}
</cfscript>
And then in models/Model.cfc I make this adjustment:
private function loadIocContainer() {
variables.diService = $getFromCache("diService")
setDependencies()
}
And then…
I consider that a qualified success as an exercise in "implementing dependency injection in a CFWheels web site". I mean I shouldn't have to hand-crank stuff like this. This article should not need to be written. This is something that any framework still in use in 2022 should do out of the box. But… well… here we are. It's all a wee bit Heath Robinson, but I don't think it's so awful that it's something one oughtn't do.
And now I'm gonna push the code up to GitHub (0.5), press "send" on this, and go pretend none of it ever happened.