This follows on from "CFML: implementing dependency injection in a CFWheels web site". In that article I got the DI working, but only with a test scenario. For the sake of completeness, I'm gonna continue with the whole point of the exercise, which is getting a logger service into my model objects, via DI.
0.6 Adding LogBox
I ran into some "issues" yesterday whilst trying to write this article. Documented here: "A day in the life of trying to write a blog article in the CFML ecosystem", and there's some file changes committed as 0.5.1. So that killed my motivation to continue this work once I'd waded through that. I appear to be back on track now though.
Right so I'll add a quick test to check LogBox is installed. It's a bit of a contrived no-brainer, but this is playing to my strengths. So be it:
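It was along these lines (a sketch rather than the verbatim code; in particular the regex is an approximation of Lucee's "can't find component" message):

describe("Tests for LogBox installation", () => {
    it("is installed where it's supposed to be", () => {
        expect(() => {
            createObject("logbox.system.logging.LogBox") // no init call, so no config needed
        }).notToThrow(regex="invalid component definition")
    })
})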
If LogBox is where it's supposed to be: it'll pass. Initially I had a new LogBox() in there, but it needs some config to work, and that requires a bit of horsing around, so I'll deal with that next. For now: is it installed? Test sez "no":
Test sez… yes.
OK. That was unexpected. Why did that pass? I have checked that LogBox is not installed, so WTH??
After a coupla hours of horsing about looking at TestBox code, I worked out there's a logic… um… shortfall (shall I say) in its implementation of that regex param, which is a bit wayward. The code in question is this (from /system/Assertion.cfc):
if (
    len( arguments.regex ) AND
    (
        !arrayLen( reMatchNoCase( arguments.regex, e.message ) )
        OR
        !arrayLen( reMatchNoCase( arguments.regex, e.detail ) )
    )
) {
    return this;
}
Basically this requires both the message and the detail to match the regex for the exception to be considered "the same" one. This is a bit rigorous, as it's really unlikely for that to be the case in the real world. I've raised it with Ortus (TESTBOX-349), but for now I'll just work around it. Oh yeah, there's a Lucee bug interfering with this too. When an exception does have the same message and detail, Lucee ignores the detail. I've not raised a bug for this yet: I'm waiting for them to feed back as to whether I'm missing something. When there's a ticket, I'll cross-post it here.
Anyway, moving on, I'll just check for any exception, and that'll do:
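Ie, the same test minus the regex (again: a sketch, not the verbatim code):

it("is installed where it's supposed to be", () => {
    expect(() => createObject("logbox.system.logging.LogBox")).notToThrow()
})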
0.7 Wiring LogBox into the DependencyInjectionService
One of the reasons the previous step really didn't push the boat out with testing whether LogBox was working is that to actually create a working LogBox logger takes some messing about; and I wanted to separate that from the installation. And also to give me some time to come up with the next test case. I want to avoid this sort of thing:
I don't want to skip to a test that is "it can log stuff that happens in the Test model object". I guess it is part of the requirement that the logger is handled via dependency injection into the model, so we can first get it set up and ready to go in the DependencyInjectionService. I mean the whole thing here is about DI: the logger is just an example usage. I think the next step is legit.
I've never used LogBox before, so I am RTFMing all this as I type (docs: "Configuring LogBox"). It seems I need to pass a Config object to my LogBox object, then get the root logger from said object… and that's my logger. All of that can go in a factory method in configureDependencies, and I'll just put the resultant logger into the IoC container.
it("loads a logger", () => {
diService = new DependencyInjectionService()
logger = diService.getBean("Logger")
expect(logger).toBeInstanceOf("logbox.system.logging.Logger")
expect(() => logger.info("TEST")).notToThrow()
})
I'm grabbing a logger and logging a message with it. The expectation is simply that the act of logging doesn't error. For now.
First here's the most minimal config I seem to be able to get away with:
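It was a sketch along these lines (the DummyAppender class is LogBox's own; the struct layout follows the LogBox DSL docs, and the file location is inferred from how it gets imported later):

// services/logging/Config.cfc
component {

    function configure() {
        logBox = {
            // one no-op appender: I'm testing config here, not logging operations
            appenders = {
                DummyAppender = {
                    class = "logbox.system.logging.appenders.DummyAppender"
                }
            },
            // the root logger logs everything, to all (ie: the one) appenders
            root = {levelMax = "DEBUG", appenders = "*"}
        }
    }
}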
The docs ("LogBox DSL
") seemed to indicate I only needed the logBox struct, but it errored when I used it unless I had at least one appender. I'm just using a dummy one for now because I'm testing config, not operations. And there's nothing to test there: it's all implementation, so I think it's fine to create that in the "green" phase of "red-green-refactor" from that test above (currently red). With TDD the red phase is just to do the minimum code to make the test pass. That doesn't mean it needs to be one line of code, or one function or whatever. If my code needed to call a method on this Config object: then I'd test that separately. But I'm happy that this is - well - config. It's just data.
Once we have that I can write my factory method on DependencyInjectionService:
private function configureDependencies() {
    variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
    variables.container.declareBean("TestDependency", "services.TestDependency")
    variables.container.factoryBean("Logger", () => {
        config = new Config()
        logboxConfig = new LogBoxConfig(config)
        logbox = new LogBox(logboxConfig)

        logger = logbox.getRootLogger()
        return logger
    })
}
I got all that from the docs, and I have nothing to add: it's pretty straightforward. Let's see if the test passes:
Cool.
Now I need to get my Test model to inject the logger into itself, and verify I can use it:
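The test was words to this effect (a sketch; the mocking mechanics mirror the appender-based test further down, which is verbatim):

it("logs getMessage calls", () => {
    test = model("Test").new()
    prepareMock(test)
    logger = test.$getProperty("logger") // pull the logger out of the model's variables scope

    prepareMock(logger)
    logger.$("debug") // stub debug, and record calls to it

    test.getMessage()

    debugCalls = logger.$callLog().debug
    expect(debugCalls).toHaveLength(1)
    expect(debugCalls[1][1]).toBe("getMessage was called")
})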
Here I am mocking the logger's debug method, just so I can check it's being called, and with what. Having done this, I am now wondering about "don't mock what you don't own", but I suspect in this case I'm OK because whilst the nomenclature is all "mock", I'm actually just spying on the method that "I don't own". IE: it's LogBox's method, not my application's method. I'll have to think about that a bit.
And the implementation for this is way easier than the test:
// models/Test.cfc

private function setDependencies() {
    variables.dependency = variables.diService.getBean("TestDependency")
    variables.logger = variables.diService.getBean("Logger")
}

public function getMessage() {
    variables.logger.debug("getMessage was called")
    return variables.dependency.getMessage()
}
Just for the hell of it, I also wrote a functional test to check the appender was getting the expected info:
it("logs via the correct appender", () => {
test = model("Test").new()
prepareMock(test)
logger = test.$getProperty("logger")
appenders = logger.getAppenders()
expect(appenders).toHaveKey("DummyAppender", "Logger is not configured with the correct appender. Test aborted.")
appender = logger.getAppenders().DummyAppender
prepareMock(appender)
appender.$("logMessage").$results(appender)
test.getMessage()
appenderCallLog = appender.$callLog()
expect(appenderCallLog).toHaveKey("logMessage")
expect(appenderCallLog.logMessage).toHaveLength(1)
expect(appenderCallLog.logMessage[1]).toSatisfy((actual) => {
expect(actual[1].getMessage()).toBe("getMessage was called")
expect(actual[1].getSeverity()).toBe(logger.logLevels.DEBUG)
expect(actual[1].getTimestamp()).toBeCloseTo(now(), 2, "s")
return true
}, "Log entry is not correct")
})
It's largely the same as the unit test, except it spies on the appender instead of the logger. There's no good reason for doing this, I was just messing around.
This is not the article I intended to write today. That article was gonna be titled "CFML: Adding a LogBox logger to a CFWheels app via dependency injection", but I'll need to get to that another day now.
Here's how far that article got before the wheels fell off:
And that was it.
Why? Well I started by writing an integration test just to check that box install logbox did what I expected:
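It was something like this (a sketch; the specific regex is an assumption, but there was one, and that detail turns out to matter):

it("is installed where it's supposed to be", () => {
    expect(() => {
        new logbox.system.logging.LogBox()
    }).notToThrow(regex="invalid component definition")
})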
Simple enough. It'll throw an exception if LogBox ain't there, and I'm expecting that. It's a dumb test but it's a reasonable first step to build on.
I run the test:
Err… come again? I ain't installed it yet. I lifted the code from the expect callback out and ran it "raw" in the body of the test case: predictable exception. I put it back in the callback. Test passes. I changed the matcher to be toThrow. Test still passed. So this code both throws an exception and doesn't throw an exception. This is pleasingly Schrödingeresque, but not helpful.
The weird thing is I know this is not a bug in TestBox, cos we use notToThrow in our tests at work. I port the test over to my work codebase: test fails (remember: this is what I want ATM, we're still at the "red" of "red-green-refactor").
I noticed that we were running a slightly different version of TestBox in the work codebase: 4.4.0-snapshot compared to my 4.5.0+5. Maybe there's been a regression. I changed my TestBox version in box.json and - without thinking things through - went box install again (not just box install testbox, which is all I really needed to do), and was greeted with this:
That's reasonably bemusing as I had just used box install fw1 to install it in the first place, and that went fine. And I have not touched it since. I checked what version I already had installed (in framework/box.json), and it claims 4.3.0. So… ForgeBox… I beg to differ pal. You found this version y/day, why can't you find it today? I check on ForgeBox, and for 4.x I see versions 4.0.0, 4.1.0, 4.2.0, 4.5.0-SNAPSHOT. OK, so granted: no 4.3.0. Except that's what it installed for me yesterday. Maybe 4.3.0 has issues and got taken down in the last 24h (doubtful, but hey), so I blow away my /framework directory, and remove the entry from box.json, and box install fw1 again. This is interesting:
4.2.0. But its entry in its own box.json is 4.3.0, and the constraint it put in my box.json is ^4.3.0.
I do not have time or inclination for any of this, so I just stick a constraint of ~4.2.0 in my box.json, and that seems to have solved it. I mean the error went away: it's still installing 4.3.0. Even with a hard-coded 4.2.0 it's still installing 4.3.0.
Brad Wood from Ortus/CommandBox had a look at this, nutted-out that there was something wrong with the way the FW/1 package on ForgeBox was configured, and he in turn pinged Steve Neiland who looks after FW/1 these days, and he got this sorted. I'm now on 4.3.0, and it says it's 4.2.0. And box install no longer complains at me. Cheers fellas.
Then I noticed that, because of the stupid way CFWheels "organises" itself in the file system, I had inadvertently overwritten a bunch of my own CFWheels files. Sigh. CFWheels doesn't bother to package itself up as "app" (its stuff) and "implementation" (my code that uses their app); it just has "here's some files: some you should change (anything outside the wheels subdirectory), some you probably shouldn't (the stuff in the wheels subdirectory)", but there's no differentiation when it comes to installation: all the files are deployed. Overwriting all the user-space files with their original defaults. Sorry but this is just dumbarsey. Hurrah for source control and small commit iterations is all I can say, as I could just revert some files and I was all good.
Right, so now I have the same version of TestBox installed here as in our app at work (remember how this was all I was trying to do? Update TestBox. Nothing to do with FW/1, and nothing to do with CFWheels. But there's an hour gone cocking around with that lot).
And the test still doesn't work. Ballocks.
I notice the Lucee version is also different. We're locked into an older version of Lucee at work due to bugs and incompats in newer versions that we're still waiting on to be fixed, so the work app is running 5.3.7.47, and I am on 5.3.8.206. Surely it's not that? I rolled my app's Lucee version back to 5.3.7.47 and the test started failing (correctly). OK, so it's a Lucee issue.
I spent about an hour messing around doing a binary search of Lucee container versions until I identified the last version that wasn't broken (5.3.8.3) and the next available version - a big jump here - 5.3.8.42, which was broken. I looked at a diff of the code but nothing leapt out at me. This was slightly daft as I had no idea what I was looking for, so that was probably half an hour spent looking at Lucee's source code in an aimless fashion. I actually saw the change that was the problem, but didn't clock at the time that it was the culprit.
Having drawn a blank, I slapped my forehead, called myself a dick, and went back to the code in TestBox that was behaving differently. That would obviously tell me where to look for the issue.
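I distilled it down to a stand-alone repro, something like this (a sketch of my scratch code, not the exact thing; the member-function call on the string falls through to java.lang.String on Lucee):

s = ""

writeDump(reMatch(".*", s)) // current Lucee: [] ; earlier versions: [""]
writeDump(reMatchNoCase(".*", s)) // same again

// Java method calls as controls:
writeDump(s.matches(".*")) // true
writeDump(createObject("java", "java.util.regex.Pattern").matches(".*", s)) // true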
There are some Java method calls there to act as controls, but on Lucee's current version, we get this:
And on earlier versions it's this:
(Full disclosure: I'm using Lucee 4.5 on trycf.com for that second dump, but it's the same results in earlier versions of Lucee 5, up to the point where it starts going wrong)
Note how previously a regex match of .* against an empty string matches? This is correct. It does. In all regex engines I know of. Yet in Lucee's current versions, it returns a completely empty array. This indicates no match, and it's wrong. Simple as that. So there's the bug.
I was pointed in the direction of an existing issue for this: LDEV-3703. Despite it being a regression they know they caused, Lucee have decided to only fix it in 6.x. Not the version they actually broke. Less than ideal, but so be it.
There were a coupla regex issues dealt with between those Lucee versions I mentioned before. Here's a Jira search for "fixversion >= 5.3.8.4 and fixversion < 5.3.8.42 and text ~ regex". I couldn't be arsed tracking back through the code, but I did find something in LDEV-3009 mentioning a new Application.cfc setting:
this.useJavaAsRegexEngine = true
This is documented for ColdFusion in Application variables, and… absolutely frickin' nowhere in the Lucee docs, as far as I can see.
On a whim I stuck that setting in my Application.cfc and re-ran the test. With the setting false: the test doesn't work. With it true: the test does work. That's something, but Lucee is not off the hook here. The behaviour of that regex match should not change between the old and new regex engines! .* always matches an empty string! So there's still a bug.
However, being pragmatic, I figured "problem solved" (for now), and moved on. For some reason I restarted my container, and re-hit my tests:
I switched the setting to this.useJavaAsRegexEngine = false and the tests ran again (failed incorrectly, but ran). So… let me get this straight. For TestBox to work, I need to set that setting to true. To get CFWheels to work, I need to set it to false.
For pete's sake.
As I said on the Lucee subchannel on the CFML Slack:
Do ppl recall how I've always said these stupid flags to change behaviour of the language were a fuckin dumb idea, and are basically unusable in a day and age where we all rely on third-party libs to do our jobs?
Exhibit. Fucking. A.
Every single one of these stupid, toxic settings doubles the overhead for library providers to make their code work. I do not fault TestBox or CFWheels one bit here. They can't be expected to support the exponential number of variations those settings accumulate. I can firmly say that no CFML code should ever be written that depends on any of these settings. And no library or third-party code should ever be tested with the setting variation on. Just ignore them. The settings should not exist. Anyway: this is an editorial digression. "We are where we are", as the over-used saying goes.
Screw all this. Seriously. All I wanted to do is to do a blog article about perhaps 50-odd new lines of code in my example app. Instead I spent four hours untangling this shite. And my blog article has not progressed.
Here's what I needed to do to my app to work around these various issues:
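(What follows is a hypothetical sketch rather than my actual diff; the gist is pinning FW/1, plus the fact the two regex-engine requirements can't coexist:)

// box.json: pin FW/1 to a version ForgeBox can actually resolve
// "fw1": "~4.2.0"

// Application.cfc: TestBox's notToThrow needs the Java regex engine, but
// CFWheels falls over with it, so only switch engines for test requests
this.useJavaAsRegexEngine = (cgi.script_name contains "/test")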
Recently I wanted to abstract some logic out of one of our CFWheels model classes, into its own representation. Code had grown organically over time, with logic being inlined in functions, making a bunch of methods a bit weighty, and had drifted well away from the notion of following the "Single Responsibility Principle". Looking at the code in question, even if I separated it out into a bunch of private methods (it was a chunk of code, and refactoring into a single private method would not have worked), it was clear that this was just shifting the SRP violation out of the method, and into the class. This code did not belong in this class at all. Not least of all because we also needed to use some of it in another class. This is a pretty common refactoring exercise in OOP land.
I'm going to be a bit woolly in my descriptions of the functionality I'm talking about here, because the code in question is pretty business-domain-specific, and would not make a lot of sense to people outside our team. Let's just say it was around logging requirements. It wasn't, but that's a general notion that everyone gets. We need to log stuff in one of our classes, and currently it's all baked directly into that class. It's not. But let's pretend.
I could see what I needed to do: I'll just rip this lot out into another service class, and then use DI to… bugger. CFWheels does not have any notion of dependency injection. I mean... I'm fairly certain it doesn't even use the notion of constructors when creating objects. If one wants to use some code in a CFWheels model, one writes the code in the model. This is pretty much why we got to where we are. If one wants to use code from another class in one's model... one uses the model factory method to get a different model (eg: myOtherModel = model("Other")). Inline in one's code. There's a few drawbacks here:
Firstly, in CFWheels parlance, "model" means "an ORM representation of a DB row". The hard-coupling between models and a DB table is innate to CFWheels. It didn't seem to occur to the architects of CFWheels that not all domain model objects map to underlying DB storage. One can have a "tableless model", but it still betrays the inappropriate coupling between domain model and storage representation. A logging service is a prime example of this. It is part of the domain model. It does not represent persisted objects. In complex applications, the storage considerations around the logic are just a tier underneath the domain model. They're not baked right into the model. I've just found a telling quote on the CFWheels website:
Model: Just another name for the representation of data, usually a database table.
Secondly, it's still a bit of a fail of separation of concerns / IoC if the "calling code" hard-codes which implementation of a concern it is going to use.
If one is writing testable code, one doesn't want the tight coupling of that second point going on. If I'm testing features of my model, I want to stub out the logger it's using, for example. This is awkward to do if the decision as to which logger implementation we're using is baked into the code I'm testing.
Anyway, you probably get it. Dependency injection exists for a reason. And this is something CFWheels appears to have overlooked.
0.1 - Baseline container
I have worked through the various bits and pieces I'm going to discuss already, to make sure it all works. But as I write this I am starting out with a bare-bones Lucee container, and a bare-bones MariaDB container (from my lucee_and_mariadb repo). I've also gone ahead and installed TestBox, and my baseline tests are passing:
Yes. I have a test that TestBox is installed and running correctly.
We have a green light, so that's a baseline to start with. I've tagged that in GitHub as 0.1.
0.2 - CFWheels operational
OK, I'll install CFWheels. But first my requirement here is "CFWheels is working". I will take this to mean that it displays its homepage after install, so I can test for that pretty easily:
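Something like this (a sketch: the URL is an assumption, and "Congratulations!" is what I'd expect from CFWheels' default welcome page):

it("displays the CFWheels congratulations page", () => {
    cfhttp(method="get", url="http://localhost:8888/", result="response")

    expect(response.status_code).toBe(200)
    expect(response.fileContent).toInclude("Congratulations!")
})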
I'm using TDD for even mundane stuff like this so I don't get ahead of myself, and miss bits I need to do to get things working.
This test fails as one would expect. Installing CFWheels is easy: just box install cfwheels. This installs everything in the public web root, which is not great, but it's the way CFWheels works. I've written another series about how to get a CFWheels-driven web app working whilst also putting the code in a sensible place, summarised in "Short version: getting CFWheels working outside the context of a web-browsable directory", but life's too short to do all that horsing around today, so we'll just use the default install pattern. Note: I do not consider this approach to be appropriate for production, but it'll do for this demonstration.
After the CFWheels installation I do still have to fix-up a few things:
It steamrolled my existing Application.cfc, so I had to merge the CFWheels bit with my bit again.
Anything in CFWheels will only work properly if it's called in the context of a CFWheels app, so I need to tweak my test config slightly to accommodate that.
And that CFWheels "context" only works if it's called from index.cfm. So I need to rename my test/runTests.cfm to be index.cfm.
Having done that:
A passing test is all good, but I also made sure the thing did work. By actually looking at it:
I want to mess around with models, so I need to create one. I have a stub DB configured with this app, and it has a table test with a coupla test rows in it. I'll create a CFWheels model that maps to that. CFWheels expects plural table names, but mine's singular so I need a config tweak there. I will test that I can retrieve test records from it.
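The tweak being just this, in the model's config() (it shows up again in full context further down):

function config() {
    table(name="test") // my table is the singular "test", not the "tests" CFWheels would otherwise look for
}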
it("can find test records from the DB", () => {
tests = model("Test").findAll(returnAs="object")
expect(tests).notToBeEmpty()
tests.each((test) => {
expect(test).toBeInstanceOf("models.Test")
expect(test.properties().keyArray().sort("text")).toBe(["id", "value"])
})
})
Good. Next I wanted to check when CFWheels calls my Test class's constructor. Given one needs to use that factory method (eg: model("Test").etc) to do anything relating to model objects / collections / etc, I was not sure whether the constructor comes into play. Why do I care? Because when using dependency injection, one generally passes the dependencies in as constructor arguments. This is not the only way of doing it, but it's the most obvious ("KISS" / "Principle of Least Astonishment") approach. So let's at least check.
it("has its constructor called when it is instantiated by CFWheels", () => {
test = model("Test").new()
expect(test.getFindMe()).toBe("FOUND")
})
Implementation:
public function init() {
    variables.findMe = "FOUND"
}

public string function getFindMe() {
    return variables.findMe
}
Result:
OK so scratch that idea. CFWheels does not call the model class's constructor. Initially I was annoyed about this as it seems bloody stupid. But then I recalled that when one is using a factory method to create objects, it's not unusual to not use the public constructor to do so. OK fair enough.
I asked around, and (sorry I forget who told me, or where they told me) found out that CFWheels does provide an event hook I can leverage for when a model object is created: model.afterInitialization. I already have my test set up to manage my expectations, so I can just change my implementation:
function config() {
    table(name="test")
    afterInitialization("setFindMe")
}

public function setFindMe() {
    variables.findMe = "FOUND"
}
And that passed this time. Oh I changed the test label from "has its constructor called…" to be "has its afterInitialization handler called…". But the rest of the test stays the same. This is an example of how with TDD we are testing the desired outcome rather than the implementation. It doesn't matter whether the value is set by a constructor or by an event handler: it's the end result of being able to use the value that matters.
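For the record, the test now reads like this (same body, new label):

it("has its afterInitialization handler called when it is instantiated by CFWheels", () => {
    test = model("Test").new()
    expect(test.getFindMe()).toBe("FOUND")
})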
At the moment I have found my "way in" to each object as they are created. I reckon from here I can have a DependencyInjectionService that I can call upon from the afterInitialization handler so the model can get the dependencies it needs. This is not exactly "dependency injection", it's more "dependency self-medication", but it should work.
My DI requirements ATM are fairly minimal, but I am not going to reinvent the wheel. I'm gonna use DI/1 to handle the dependencies. I've had a look at it before, and it's straightforward enough, and is solid.
My tests are pretty basic to start with: I just want to know it's installed properly and operational:
it("can be instantiated", () => {
container = new framework.ioc("/services")
expect(container).toBeInstanceOf("framework.ioc")
})
And now to install it: box install fw1
And we have a passing test (NB: I'm not showing you the failures necessarily, but I do always actually not proceed with anything until I see the test failing):
It's not much use unless it loads up some stuff, so I'll test that it can:
it("loads services with dependencies", () => {
container = new framework.ioc("/services")
testService = container.getBean("TestService")
expect(testService.getDependency()).toBeInstanceOf("services.TestDependency")
})
I'm gonna show the failures this time. First up:
This is reasonable because the TestService class isn't there yet, so we'd expect DI/1 to complain. The good news is it's complaining in the way we'd want it to. TestService is simple:
component {

    public function init(required TestDependency testDependency) {
        variables.dependency = arguments.testDependency
    }

    public TestDependency function getDependency() {
        return variables.dependency
    }
}
Now the failure changes:
This is still a good sign: DI/1 is doing what it's supposed to. Well: trying to. And reporting back with exactly what's wrong. Let's put it (and, I imagine: you) out of its misery and give it the code it wants. TestDependency:
component {
}
And now DI/1 has wired everything together properly:
As well as creating a DI/1 instance and pointing it at a directory (well: actually I won't be doing that), I need to hand-crank some dependency creation, as some dependencies are not something DI/1 can just autowire. So I'm gonna wrap all that up in a service too, so the app can just use a DependencyInjectionService, and not need to know what its internal workings are.
To start with, I'll just make sure the wrapper can do the same thing we just did with the raw IoC object from the previous tests:
describe("Tests for DependencyInjectionService", () => {
it("loads the DI/1 IoC container and its configuration", () => {
diService = new DependencyInjectionService()
testService = diService.getBean("DependencyInjectionService")
expect(testService).toBeInstanceOf("services.DependencyInjectionService")
})
})
Instead of testing the TestService here, I decided to use DependencyInjectionService to test it can… load itself.
There's a bit more code this time for the implementation, but not much.
import framework.ioc

component {

    public function init() {
        variables.container = new ioc("")
        configureDependencies()
    }

    private function configureDependencies() {
        variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
    }

    public function onMissingMethod(required string missingMethodName, required struct missingMethodArguments) {
        return variables.container[missingMethodName](argumentCollection=missingMethodArguments)
    }
}
It creates an IOC container object, but doesn't scan any directories for autowiring opportunities this time.
It hand-cranks the loading of the DependencyInjectionService object.
It also acts as a decorator for the underlying IOC instance, so calling code just calls getBean (for example) on a DependencyInjectionService instance, and this is passed straight on to the IOC object to do the work.
And we have a passing test:
Now we can call our DI service in our model, and the model can use it to configure its dependencies. First we need to configure the DependencyInjectionService with another bean:
private function configureDependencies() {
    variables.container.declareBean("DependencyInjectionService", "services.DependencyInjectionService")
    variables.container.declareBean("TestDependency", "services.TestDependency")
}
describe("Tests for TestDependency", () => {
describe("Tests for getMessage method")
it("returns SET_BY_DEPENDENCY", () => {
testDependency = new TestDependency()
expect(testDependency.getMessage()).toBe("SET_BY_DEPENDENCY")
})
})
})
// TestDependency.cfc
component {

    public string function getMessage() {
        return "SET_BY_DEPENDENCY"
    }
}
That's not quite the progression of the code there: I had to create TestDependency first, so I did its test and implementation first; then wired it into DependencyInjectionService.
Now we need to wire that into the model class. But first a test to show it's worked:
describe("Tests for Test model", () => {
describe("Tests of getMessage method", () => {
it("uses an injected dependency to provide a message", () => {
test = model("Test").new()
expect(test.getMessage()).toBe("SET_BY_DEPENDENCY")
})
})
})
Hopefully that speaks for itself: we're gonna get that getMessage method in Test to call the equivalent method from TestDependency. And to do that, we need to wire an instance of TestDependency into our instance of the Test model. I should have thought of better names for these classes, eh?
// /models/Test.cfc
import services.DependencyInjectionService
import wheels.Model

component extends=Model {

    function config() {
        table(name="test")
        afterInitialization("setFindMe,loadIocContainer")
    }

    public function setFindMe() {
        variables.findMe = "FOUND"
    }

    public string function getFindMe() {
        return variables.findMe
    }

    private function loadIocContainer() {
        variables.diService = new DependencyInjectionService()
        setDependencies()
    }

    private function setDependencies() {
        variables.dependency = variables.diService.getBean("TestDependency")
    }

    public function getMessage() {
        return variables.dependency.getMessage()
    }
}
That works…
…but it needs some adjustment.
Firstly, I want the dependency injection stuff to be done for all models, not just this one. So I'm going to shove some of that code up into the Model base class:
// /models/Model.cfc
/**
 * This is the parent model file that all your models should extend.
 * You can add functions to this file to make them available in all your models.
 * Do not delete this file.
 */
import services.DependencyInjectionService

component extends=wheels.Model {

    function config() {
        afterInitialization("loadIocContainer")
    }

    private function loadIocContainer() {
        variables.diService = new DependencyInjectionService()
        setDependencies()
    }

    private function setDependencies() {
        // OVERRIDE IN SUBCLASS
    }
}
Now the base model handles the loading of the DependencyInjectionService, and calls a setDependencies method. Its own method does nothing, but if a subclass has an override of it, then that will run instead.
I will quickly tag that lot before I continue. 0.4.
But…
0.5 Dealing with the hard-coded DependencyInjectionService initialisation
The second problem is way more significant. Model is creating and initialising that DependencyInjectionService object every time a model object is created. That's not great. All that stuff only needs to be done once for the life of the application. I need to do that bit onApplicationStart (or whatever approximation of that CFWheels supports), and then I need to somehow expose the resultant object in Model.cfc. A crap way of doing it would be to just stick it in application.dependencyInjectionService and have Model look for that. But that's a bit "global variable" for my liking. I wonder if CFWheels has an object cache that it intrinsically passes around the place, and exposes to its inner workings. I sound vague because I had pre-baked all the code up to where I am now a week or two ago, and it was not until I was writing this article I went "oh well that is shit, I can't be having that stuff in there". And I don't currently know the answer.
Let's take the red-green-refactor route, and at least get the initialisation out of Model, and into the application lifecycle.
…
…
…
Ugh. Looking through the CFWheels codebase is not for the faint-hearted. Unfortunately the "architecture" of CFWheels is such that it's about one million (give or take) individual functions, with no real sense of cohesion beyond the fact that a set of functions might be in the same .cfm (yes: .cfm file :-| ), which then gets arbitrarily included all over the place. If I dump out the variables scope of my Test model class, it has 291 functions. Sigh.
There's a bunch of functions possibly relating to caching, but there's no Cache class or CacheService or anything like that... there's just some functions that act upon a bunch of application-scoped variables that are not connected in any way other than having the word "cache" in them. I feel like I have fallen back through time to the days of CF4.5. Ah well.
I'll chance my arm creating my DependencyInjectionService object in my onApplicationStart handler, use the $addToCache function to maybe put it into a cache… and then pull it back out in Model. Please hold.
[about an hour passes. It was mostly swearing]
Okey doke, so first things first: obviously there's a new test:
describe("Tests for onApplicationStart", () => {
it("puts an instance of DependencyInjectionService into cache", () => {
diService = $getFromCache("diService")
expect(diService).toBeInstanceOf("services.DependencyInjectionService")
})
})
The implementation for this was annoying. I could not use the onApplicationStart handler in my own Application.cfc because CFWheels steamrolls it with its own one. Rather than using the CFML lifecycle event handlers the way they were intended - and using inheritance when an application and an application framework might each have their own work to do - CFWheels just makes you write its handler methods into your Application.cfc. This sounds ridiculous, but it is what CFWheels does in the application's own Application.cfc. I'm going to follow up on this stupidity in a separate article, perhaps. But suffice it to say that instead of using my onApplicationStart method, I had to do it the CFWheels way. Which is… wait for it… to put the code in events/onapplicationstart.cfm. Yuh. Another .cfm file. Oh well. Anyway, here it is:
<cfscript>
// Place code here that should be executed on the "onApplicationStart" event.
import services.DependencyInjectionService
setDependencyInjectionService()
private void function setDependencyInjectionService() {
diService = new DependencyInjectionService()
$addToCache("diService", diService)
}
</cfscript>
And then in models/Model.cfc I make this adjustment:
private function loadIocContainer() {
    // was: variables.diService = new DependencyInjectionService()
    variables.diService = $getFromCache("diService")
    setDependencies()
}
And then…
I consider that a qualified success as an exercise in "implementing dependency injection in a CFWheels web site". I mean I shouldn't have to hand-crank stuff like this. This article should not need to be written. This is something that any framework still in use in 2022 should do out of the box. But… well… here we are. It's all a wee bit Heath Robinson, but I don't think it's so awful that it's something one oughtn't do.
And now I'm gonna push the code up to GitHub (0.5), press "send" on this, and go pretend none of it ever happened.
I'm writing this here cos it's getting a bit long for a comment on the CFML Slack channel, and perhaps it might get a different set of eyes on it here anyhow.
I wanna revisit the discussion about import aliasing in CFML. ie this:
import com.vendor.app.package.Date as VendorDate
import org.project.lib.Date as LibDate

vendorDate = new VendorDate(now())
libDate = new LibDate(now())
This has not been implemented in CFML because - I suspect - the various devs working on the CFML engines are primarily Java devs, and Java does not support this for (IMO) pretty bogus reasons (see the answer to "Change Name of Import in Java, or import two classes with the same name", its comments and the link within for the "reasoning").
However it's pretty common around the place in other languages, especially ones that occupy overlapping space with CFML, eg: both Groovy and Kotlin support import java.util.Date as UDate.
(It's important to consider that both Groovy and Kotlin are JVM languages with the remit of making Java easier, similar - in a way? - to CFML).
I've got by without this mostly, but just had a real world situation where the absence of it is a pain in the butt:
import logbox.system.logging.config.LogBoxConfig
import logbox.system.logging.LogBox
import services.logging.Config
// ...
config = new Config()
logboxConfig = new LogBoxConfig(config)
logbox = new LogBox(logboxConfig)
This is from a CFC; I've just elided the not-relevant bits. Now whilst there are only 10 lines of code between where Config is imported and where I use it, I still just went "what config is this? Oh right, my one". Bear in mind the CFC itself is nothing to do with logging, it's an IoC factory method. In context, "Config" suggests it's something to do with the IoC factory (this is the basis of my next article… the one I was working on when this situation presented itself). The code is just not as clear as it could be.
I know I can do this:
import services.logging.*
//...
config = new logging.Config()
But there's cross-over in the "logging" concept here between LogBox's logging and my own logging. Also it's a bit of a shit workaround, because to me a * import indicates there's a bunch of stuff from that package being used, whereas there isn't; I'm just using that one class here. So def a workaround, not how one would naturally solve this.
Then I figured actually there was legit call to use import logbox.system.logging.* as I'm using multiple things from there, but that then conflicts with import services.logging.*
What would be good to do here would be:
import services.logging.Config as LoggingConfig
// ...
config = new LoggingConfig()
That is the clearest representation of what's going on, I think.
Anyway, just wondering what other CFMLers think. Maybe Java's right? Maybe all the other languages including the ones trying to improve on Java are... ;-)
A few weeks back, right in the thick of the crap about all these Log4J vulnerabilities, I was talking to a few people about the necessity of - and the effort involved in - Lucee getting their situation sorted out, vis-a-vis dealing with the outdated library dependencies they had. They were lucky to be safe from the Log4J thing… but only serendipitously so, because they'd not been able to prioritise moving off a really old version of Log4J (which didn't have the problematic code in it yet). They just didn't have the resources to do anything about it, when considering all the rest of the work that kept coming in. The crux of it was that they can only afford so much paid-for dev time, which means tough decisions need to be made when it comes to deciding what to work on.
To their credit, they've now removed the old version of Log4J from the current version of Lucee 5.x, as well as in the upcoming 6.x, replacing it with the fully-patched current version.
I had a private chat with one of the bods involved in the behind-the-curtain parts of Lucee's goings-on. Initially they were berating me for being unhelpful in my tone (we agreed to disagree on that one. Well: we didn't agree on anything, on that note. We just moved on), but then got to talking about what to do to sort the situation out. They explained the lack of help they were getting on the project, both in the context of volunteer devs, as well as a lack of €€€ to be able to pay the devs that dedicate their time to the project. I said "you need to get something like Patreon!", and they quickly pointed out that they'd told me about this already, and indeed had included the link to it in that very conversation.
I had only glanced at the page, and had not clocked it wasn't just some page of their own website going on about donations, and I was also completely oblivious to the fact that "Open Collective" is a thing: it is indeed a Patreon-a-like thing.
Cool. Good to know.
This also got me thinking. It sux that people are so happy to use things like Lucee for free, whilst lining their own pockets. Even worse when things don't go their own way, or they need something done, and expect it to just magically appear for them.
It also occurred to me that whilst I personally don't use Lucee to benefit me (although I indirectly do, I know), I sure work for a company that has built its software on Lucee, and is doing pretty well for itself. And I'm the one who's supposedly stewarding our development effort on Lucee, so I was being a bit of a hypocrite. I was not happy with myself about that. I needed to wait for some dust to settle at the end of the year, and then I forgot for a week, but today I bounced the idea of becoming a Lucee sponsor off my boss (the one with the cheque book), and he took zero convincing that it was the right thing to do. He was basically saying yes before I'd finished my wee speech explaining why we really ought to.
And this is the thing. Fair dos if you're just a dev working in a Lucee shop. Like me, you might think it's not on you to put money their way. Or you just can't afford it (also like me). But what you could do is mention to yer boss that it's maybe something the company could do. The bottom rung of the corporate sponsorship is only US$100/month, and whilst that's not trivial for an individual: it's nothing to a company. Even a small one. It's also a sound investment. The more contributions they get, the more time they will be able to spend making sure Lucee is stable, improving, and moving forward. It's more likely a bug that is getting in your way gets fixed (I am not suggesting anyone starts lording "I sponsor you so fix my bug" over them; I just mean there'll be more dev work done, which means more bugs will get fixed). It's actually a good and sensible investment for your company as well. And if it's a sound investment for your employers: it's a sound investment for you too, if you'd like to keep getting a salary, or to move on to another CFML shop after yer current gig. And all you need to do is ask a question.
So: call to action. Here's what I'd like you to do. If you work in a Lucee shop and yer not already sponsoring Lucee: grab that link I posted above, and drop yer boss a line and go "hey, we get a lot of benefit from these guys and it's probably the right thing to do to chuck a bit of money their way. We won't notice it, but it'll really help them". It's easy to sign up, and it's just a zero effort question to ask.
Slightly lazy article, this one. This is basically some thoughts I jotted down in the Working Code Podcast Discord channel. Then I decided there was almost enough there to make an article, so why not drop it here too.
Right. So the article in question - "It's probably time to stop recommending Clean Code" - flies in the face of what my usual position is on Clean Code: every dev ought to have read it, and it should strongly influence one's code.
But, OK, I'm open to an opposite opinion so I read it. Here's what I said on the Discord chat:
I read the "It's probably time to stop recommending Clean Code" article and a lot of the comments until they all got a bit samey.
I wanted to disagree with the article, but I found a bunch of it fair enough. However I think there was a bit of false equivalence going on with the author's analysis in places.
It seems to me that their issue was with the code samples (which, TBH, I paid little attn to when I read the book), which were pretty opaque at times, and not exactly good examples of what the narrative was espousing. It was a few days ago I read it, but I don't recall them having specific issues with the concepts themselves?
I s'pose the writer did raise a vocal eyebrow (if one can do that) regarding the notion that the cleanest method has zero parameters, and each additional param increases the smell. They recoiled in horror at the idea of every method having zero parameters, as if that's just ridiculous (which it is, but…). But I also think they then used that as a bit of a strawman: I don't think Martin was saying "all methods must have zero params or they smell, therefore don't have parameters or else", he was just defining a scale wherein the more params there are, the more the code is likely to be smelly. A scale has to start somewhere, and zero is the logical place to start. What else was Martin gonna say: "methods should aim to have one parameter [etc]"? No. That's more daft. Zero is the right place to start that scale. I don't think that's bad guidance.
I also think they didn't seem to "get" the guidance about flag parameters. Whether that's Martin's fault for not explaining it, or their fault for not getting it: dunno. It's not really an "apportioning blame" thing anyhow. I just didn't think their disagreement with it had much merit.
Oh and there was also some stuff about only using Java for code examples, and it was all OOP-centric and nothing about FP. To me that's kinda like condemning Romeo + Juliet as a work because it doesn't mention Hamlet even once.
I also kinda wondered - given I don't think the article really made its case - whether there was some sort of "it's trendy to put the hate on RC Martin these days" going on. But... nothing demonstrable in the content of the article to suggest that's accurate. However I was not the only person left wondering this, based on the comments.
(FWIW I think Martin's a creep, always have; but it's simply ad hominem to judge his work on that basis)
So. Should we still be recommending Clean Code? Is its guidance "right"?
I think - as with any expert guidance - it's great to take verbatim when you are new to said topic, and can't make an informed opinion on it. And as one becomes familiar with the situations the guidance addresses, one might begin to see where things are perhaps grey rather than black or white. But one needs to have the experience and expertise first, before deciding to mix one's own shades of grey.
For my part: my advice stands. If one is unfamiliar with Clean Code as a concept, then one really ought to read it. Once one is familiar with it, then - fine - consider thinking about when its advice might not be most appropriate. Perfect. That's what you ought to be doing.
Simply seeing guidance on "black" and going "I don't even know what 'black' is so I think 'white' is better. I know about 'white'" is just a dumb-arse attitude to have. Learn about black, understand how it's not white, and then once that's nailed, start deciding when things might better be grey.
Anyway, I know there's not much context there: I don't quote from the article I'm talking about at all. But I think you should go read it and see what you think. Don't take my word for anything, Jesus.
And I will reiterate: if you have not read Clean Code, do yerself a favour and go read it. Don't worry that Martin's not flavour-of-the-month social-awareness-speaking these days. I really do think most of the guidance in Clean Code is worth knowing about.
There were a coupla other books recommended in the comments. I'll not recommend (or otherwise) them myself until I've read them, but I have a bit of a reading queue ATM.
Anyway, this article was lazy as fuck, wasn't it? Oh well.
We're expanding our dev team, and I'm looking for a new dev to join us.
This could be a really good opportunity for someone doing CFML development who would like to move away from CFML and pick up a new language, whilst being paid to do so. You know how I shifted from CFML to PHP several years ago? The opportunity to shift to a new language whilst in my same role was the best thing that ever happened to me in my dev career (even if it was just to PHP, it was still excellent to pick up another language professionally). Well: second best, after the initial opportunity to shift from systems engineering to development in similar circumstances. Seriously: even if yer in a solid / comfortable CFML dev role now, think about this.
We currently have a CFML app running on Lucee and the CFWheels framework, and over the next coupla years we are going to be porting this to Kotlin on the back-end; and after that my plan is to re-implement the front-end part of it as its own front-end app using vue or react or angular or whatever the flavour of the month is for client-side app development by then.
The CFML app will be running in parallel during this time; we will be shifting pieces of its functionality to the new back-end in a piecemeal fashion, so we do need both solid CFML skills for that side of things, and either pre-existing Kotlin knowledge, or just a desire to learn Kotlin on the job. I myself and the other devs on the project will be picking Kotlin up as we go.
What we need:
Strong experience with test automation (eg: unit testing).
Strong experience maintaining and building on existing legacy applications.
Strong experience designing and developing new web applications / web services.
Thorough knowledge of design principles such as MVC, SOLID and a good understanding of common design patterns.
Other stuff that is going to be important to us:
Experience with CFWheels.
Experience with TDD practices.
Familiarity with Dockerised development environments.
Experience with or exposure to Kotlin for server-side development.
Experience with another language over and above CFML for application development.
Preparedness to learn Kotlin on the job if no previous experience.
Familiarity with Agile principles, and experience delivering value in an Agile fashion.
If yer a reader of this blog, you know what I'm like with these things. And they are important to me in this role.
Why Kotlin?
We wanted to go to a statically-typed language, to help us tighten up our code. I didn't want to do native Java, but there's something to be said for the Java API, so there was a lot of appeal in sticking with a JVM language. I've dabbled inconsequentially with Groovy and love it: I think it's where CFML could be if Macromedia had done a better job with CFMX. But whilst it's a lot more popular than CFML, Groovy's popularity still ain't great. Another consideration is we've been burnt being platformed on a very niche language, and don't want to repeat that (another reason for CFML devs to think about taking an opportunity to jump!). We thought about Scala, but I talked to an industry pal (won't drop their name here), and they convinced me it's a bit heavy for web development, and suggested I look at Kotlin for another language in the same space. I had thought it was only for Android dev, but it's made good headway into the server-app space too. It's got the backing of Google and is stewarded by JetBrains, so it seems solid. It rates well in various language popularity lists. The code looks bloody nice too. It's got a coupla decent-looking MVC frameworks, and good testing libraries too. And it was these last things that swung me, I have to say: language, framework, and testing: I had a look at them and I want to program with them. But I also have a responsibility back to my employer to make a decision which we'll be able to reliably work with for a number of years. I think Kotlin ticks all these boxes. Oh, and being a JetBrains project, its integration with IntelliJ is first class, and IntelliJ is an excellent IDE.
Back to the role…
Logistics-wise this is a remote-first position. The rest of the dev team is remote, around the UK. We have an office but I've never set foot in it. But if you want to work in Bournemouth, there's a desk for you there if that's your thing. The non-dev ppl in that office are all nice :-).
Secondly, we are only able to hire employees who are able to live and work in the UK without visa sponsorship (don't get me started about the UK leaving the EU, FFS). However if you are on the East Coast of the States or elsewhere in Europe or similar sort of timezones, we could consider a strong candidate provided they have the ability to invoice us for services, on a contract basis. We will not consider timezones further afield than that, I'm afraid: I want the whole team to be on deck at the same time for at least half the day (and during their day's normal working hours). It is a fulltime role.
We are likely to have another new starter joining in March, and in the mean time I am going to be aiming to kick the Kotlin project off in our next sprint. First task: get a Kotlin dev environment container created. I think. I think that's a first step. This is how early in the project you will be joining us :-). I intend the application to be 100% TDD, continuous delivery, etc. And delivering something to prod every sprint.
With all this talk about the opportunity to pick up Kotlin, it's important to be mindful that all this time the CFML app will still be the app making the company money and paying our salaries, so there will be a requirement to work on this as well: new features, enhancing and (cough) fixing existing features. Initially the job will be 100% CFML and 0% Kotlin, but I intend for those percentages to start swapping around as soon as we can, so that by some point in 2023 it will be 0% CFML and 100% Kotlin.
If you want to have a chat about this, you can ping me in the CFML Slack channel (if for some reason you're a CFML dev reading this and not already in here, you can join via cfml-slack.herokuapp.com). Or you can send yer CV through to the email address on the job spec page I linked to above. I'm not interested in talking to recruiters for now, just in case you are one (and reading this?). For now I'm only wanting to talk to people in the dev community directly.