Monday, 17 January 2022

If your company (or yourself) makes money using Lucee… you should throw them a bone

G'day:

A few weeks back, right in the thick of the crap about all these Log4J vulnerabilities, I was talking to a few people about the necessity of - and the effort involved in - Lucee getting their situation sorted out, vis-a-vis dealing with the outdated library dependencies they had. They were lucky to be safe from the Log4J thing… but only serendipitously-so, because they'd not been able to prioritise moving off a really old version of Log4J (which didn't have the problematic code in it yet). They just didn't have the resources to do anything about it, when considering all the rest of the work that kept coming in. The crux of it was that they can only afford so much paid-for dev time, which means tough decisions need to be made when deciding what to work on.

To their credit, they've now removed the old version of Log4J from the current version of Lucee 5.x, as well as in the upcoming 6.x, replacing it with the fully-patched current version.

I had a private chat with one of the bods involved in the behind-the-curtain parts of Lucee's goings-on. Initially they were berating me for being unhelpful in my tone (we agreed to disagree on that one. Well: we didn't agree on anything, on that note. We just moved on), but then we got to talking about what to do to sort the situation out. They explained the lack of help they were getting on the project, both in the context of volunteer devs, and the lack of €€€ to be able to pay the devs that dedicate their time to the project. I said "you need to get something like Patreon!", and they quickly pointed out that they'd told me about this already, and indeed included the link to it in that very conversation.

https://opencollective.com/lucee.

I had only glanced at the page, and had not clocked that it wasn't just some page of their own website going on about donations; I was also completely oblivious to the fact that "Open Collective" is a thing: it is indeed a Patreon-a-like.

Cool. Good to know.

This also got me thinking. It sux that people are so happy to use things like Lucee for free, whilst lining their own pockets. Even worse when things don't go their way, or they need something done, and they expect it to just magically appear for them.

It also occurred to me that whilst I personally don't use Lucee to benefit me (although I indirectly do, I know), I sure work for a company that has built its software on Lucee, and is doing pretty well for itself. And I'm the one who's supposedly stewarding our development effort on Lucee, so I was being a bit of a hypocrite. I was not happy with myself about that. I needed to wait for some dust to settle at the end of the year, and then I forgot for a week, but today I bounced the idea of becoming a Lucee sponsor off my boss (the one with the cheque book), and he took zero convincing that it was the right thing to do. He was basically saying yes before I'd finished my wee speech explaining why we really ought to.

And this is the thing. Fair dos if you're just a dev working in a Lucee shop. Like me, you might think it's not on you to put money their way. Or you just can't afford it (also like me). But what you could do is mention to yer boss that it's maybe something the company could do. The bottom rung of the corporate sponsorship is only US$100/month, and whilst that's not trivial for an individual: it's nothing to a company. Even a small one. It's also a sound investment. The more contributions they get, the more time they will be able to spend making sure Lucee is stable, improving, and moving forward. It's more likely a bug that is getting in your way gets fixed (I am not suggesting anyone starts lording "I sponsor you so fix my bug" over them; I just mean there'll be more dev work done, which means more bugs will get fixed). And if it's a sound investment for your employers: it's a sound investment for you too, if you'd like to continue getting a salary, or move on to another CFML shop after yer current gig. And all you need to do is ask a question.

So: call to action. Here's what I'd like you to do. If you work in a Lucee shop and yer not already sponsoring Lucee: grab that link I posted above, and drop yer boss a line and go "hey, we get a lot of benefit from these guys and it's probably the right thing to do to chuck a bit of money their way. We won't notice it, but it'll really help them". It's easy to sign up, and it's just a zero effort question to ask.

You'll feel better about yerself if you do.

Righto.

--
Adam

Friday, 14 January 2022

Reading "It's probably time to stop recommending Clean Code"

G'day:

Slightly lazy article, this one. This is basically some thoughts I jotted down in the Working Code Podcast Discord channel. Then I decided there was almost enough there to make an article, so why not drop it here too.

In the most recent episode ("057: Goals for 2022"), the team mentioned they'd seen/read an article about Clean Code: "It's probably time to stop recommending Clean Code".

Right. So that flies in the face of my usual position on Clean Code: every dev ought to have read it, and it should strongly influence one's code.

But, OK, I'm open to an opposite opinion so I read it. Here's what I said on the Discord chat:

I read the "It's probably time to stop recommending Clean Code" article and a lot of the comments until they all got a bit samey.

I wanted to disagree with the article, but I found a bunch of it fair enough. However I think there was a bit of false equivalence going on with the author's analysis in places.

It seems to me that their issue was with the code samples (which, TBH, I paid little attn to when I read the book), which were pretty opaque at times, and not exactly good examples of what the narrative was espousing. It was a few days ago I read it, but I don't recall them having specific issues with the concepts themselves?

I s'pose the writer did raise a vocal eyebrow (if one can do that) regarding the notion that the cleanest method has zero parameters, and each additional param increases the smell. They recoiled in horror at the idea of every method having zero parameters, as if that's just ridiculous (which it is, but…). But I also think they then used that as a bit of a strawman: I don't think Martin was saying "all methods must have zero params or they smell therefore don't have parameters or else", he was just defining a scale wherein the more params there are, the more the code is likely to be smelly. A scale has to start somewhere, and zero is the logical place to start. What else was Martin gonna say: "methods should aim to have one parameter [etc]"? No. That's more daft. Zero is the right place to start that scale. I don't think that's bad guidance.

I also think they didn't seem to "get" the guidance about flag parameters. Whether that's Martin's fault for not explaining it, or their fault for not getting it: dunno. It's not really an "apportioning blame" thing anyhow. I just didn't think their disagreement with it had much merit.

Oh and there was also some stuff about only using Java for code examples, and it was all OOP-centric and nothing about FP. To me that's kinda like condemning Romeo + Juliet as a work because it doesn't mention Hamlet even once.

I also kinda wondered - given I don't think the article really made its case - whether there was some sort of "it's trendy to put the hate on RC Martin these days" going on. But... nothing demonstrable in the content of the article to suggest that's accurate. However I was not the only person left wondering this, based on the comments.

(FWIW I think Martin's a creep, always have; but it's simply ad hominem to judge his work on that basis)
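
An addition for the blog version of this, in case it's not obvious what the flag-parameters guidance is on about. The gist of it - with a hand-rolled example of my own here; it's not from the book or from the article - is along these lines:

// smelly: a boolean flag param means the function is really doing two jobs
function renderReport(required boolean asDraft) {
    if (asDraft) {
        // render with a DRAFT watermark
    } else {
        // render normally
    }
}

// cleaner: one function per job, and the conditional evaporates
function renderDraftReport() {
    // render with a DRAFT watermark
}

function renderFinalReport() {
    // render normally
}

And the call-sites get clearer too: renderDraftReport() says a lot more than renderReport(true) does.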


So. Should we still be recommending Clean Code? Is its guidance "right"?

I think - as with any expert guidance - it's great to take verbatim when you are new to said topic, and can't yet form an informed opinion on it. And as one becomes familiar with the situations the guidance addresses, one might begin to see where things are perhaps grey rather than black or white. But one needs to have the experience and expertise first, before deciding to mix one's own shades of grey.

For my part: my advice stands. If one is unfamiliar with Clean Code as a concept, then one really ought to read it. Once one is familiar with it, then - fine - consider thinking about when its advice might not be most appropriate. Perfect. That's what you ought to be doing.

Simply seeing guidance on "black" and going "I don't even know what 'black' is so I think 'white' is better. I know about 'white'" is just a dumb-arse attitude to have. Learn about black, understand how it's not white, and then once that's nailed, start deciding when things might better be grey.


Anyway, I know there's not much context there: I don't quote from the article I'm talking about at all. But I think you should go read it and see what you think. Don't take my word for anything, Jesus.

And I will reiterate: if you have not read Clean Code, do yerself a favour and go read it. Don't worry that Martin's not flavour-of-the-month, social-awareness-wise, these days. I really do think most of the guidance in Clean Code is worth knowing about.

There were a coupla other books recommended in the comments. I'll not recommend (or otherwise) them myself until I've read them, but I have a bit of a reading queue ATM.

Anyway, this article was lazy as fuck, wasn't it? Oh well.

Righto.

--
Adam

Tuesday, 11 January 2022

Work with me here

G'day

I mean literally: come work with me. Here.

We're expanding our dev team, and I'm looking for a new dev to join us.

This could be a really good opportunity for someone doing CFML development who would like to move away from CFML and pick up a new language, whilst being paid to do so. You know how I shifted from CFML to PHP several years ago? The opportunity to shift to a new language whilst in my same role was the best thing that ever happened to me in my dev career (even if it was just to PHP, it was still excellent to pick up another language professionally). Well: second best, after the initial opportunity to shift from systems engineering to development in similar circumstances. Seriously: even if yer in a solid / comfortable CFML dev role now, think about this.

We currently have a CFML app running on Lucee and the CFWheels framework, and over the next coupla years we are going to be porting this to Kotlin on the back-end; and after that my plan is to re-implement the front-end part of it as its own front-end app using vue or react or angular or whatever the flavour of the month is for client-side app development by then.

The CFML app will be running in parallel during this time; we will be shifting pieces of its functionality to the new back-end in a piecemeal fashion, so we do need both solid CFML skills for that side of things, and either pre-existing Kotlin knowledge, or just a desire to learn Kotlin on the job. I myself and the other devs on the project will be picking Kotlin up as we go.

The official job description is here: Senior Application Developer UK, but the important bits are as follows:

  • Strong object-oriented CFML experience.
  • Strong experience with test automation (eg: unit testing).
  • Strong experience maintaining and building on existing legacy applications.
  • Strong experience designing and developing new web applications / web services.
  • Thorough knowledge of design principles such as MVC, SOLID and a good understanding of common design patterns.

Other stuff that is going to be important to us:

  • Experience with CFWheels.
  • Experience with TDD practices.
  • Familiarity with Dockerised development environments.
  • Experience with or exposure to Kotlin for server-side development.
  • Experience with another language over and above CFML for application development.
  • Preparedness to learn Kotlin on the job if no previous experience.
  • Familiarity with Agile principles, and experience delivering value in an Agile fashion.

If yer a reader of this blog, you know what I'm like with these things. And they are important to me in this role.

Why Kotlin?

We wanted to go to a statically-typed language, to help us tighten-up our code. I didn't want to do native Java, but there's something to be said for the Java API, so there was a lot of appeal in sticking with a JVM language. I've dabbled inconsequentially with Groovy and love it; I think it's where CFML could be if Macromedia had done a better job with CFMX. But whilst a lot more popular than CFML, Groovy's popularity still ain't great. Another consideration is we've been burnt being platformed on a very niche language, and don't want to repeat that (another reason for CFML devs to think about taking an opportunity to jump!). We thought about Scala, but I talked to an industry pal (won't drop their name here), and they convinced me it's a bit heavy for web development, and suggested I look at Kotlin as another language in the same space. I had thought it was only for Android dev, but it's made good headway into the server-app space too. It's got the backing of Google and is stewarded by JetBrains, so it seems solid. It rates well in various language popularity lists. The code looks bloody nice too. It's got a coupla decent-looking MVC frameworks, and good testing libraries too. And it was these last things that swung me, I have to say: language, framework, and testing: I had a look at them and I want to program with them. But I also have a responsibility back to my employer to make a decision which we'll be able to reliably work with for a number of years. I think Kotlin ticks all these boxes. Oh, and being a JetBrains project, its integration with IntelliJ is first class, and IntelliJ is an excellent IDE.

Back to the role…

Logistics-wise this is a remote-first position. The rest of the dev team is remote, around the UK. We have an office but I've never set foot in it. But if you want to work in Bournemouth, there's a desk for you there if that's your thing. The non-dev ppl in that office are all nice :-).

Secondly, we are only able to hire employees who are able to live and work in the UK without visa sponsorship (don't get me started about the UK leaving the EU, FFS). However if you are on the East Coast of the States or elsewhere in Europe or similar sort of timezones, we could consider a strong candidate provided they have the ability to invoice us for services, on a contract basis. We will not consider timezones further afield than that, I'm afraid: I want the whole team to be on deck at the same time for at least half the day (and during their day's normal working hours). It is a fulltime role.

We are likely to have another new starter joining in March, and in the mean time I am going to be aiming to kick the Kotlin project off in our next sprint. First task: get a Kotlin dev environment container created. I think. I think that's a first step. This is how early in the project you will be joining us :-). I intend the application to be 100% TDD, continuous delivery, etc. And delivering something to prod every sprint.

With all this talk about the opportunity to pick up Kotlin, it's important to be mindful that all this time the CFML app will still be the app making the company money and paying our salaries, so there will be a requirement to work on this as well: new features, enhancing and (cough) fixing existing features. Initially the job will be 100% CFML and 0% Kotlin, but I intend for those percentages to start swapping around as soon as we can, so that by some point in 2023 it will be 0% CFML and 100% Kotlin.

If you want to have a chat about this, you can ping me in the CFML Slack channel (if for some reason you're a CFML dev reading this and not already in here, you can join via cfml-slack.herokuapp.com). Or you can send yer CV through to the email address on the job spec page I linked to above. I'm not interested in talking to recruiters for now, just in case you are one (and reading this?). For now I'm only wanting to talk to people in the dev community directly.

Righto.

--
Adam

Thursday, 2 December 2021

A question about DAO testing, abstraction, and mocking

G'day:

This is another "low-hanging fruit" kinda exercise (see also previous article: "A question about the overhead of OOP in CFML"). I'm extracting this one from the conversation in the CFML Slack channel so that it doesn't get lost. I have a growing concern about how Slack is a bit of a content-black-hole, and am cooling to the notion of helping people there. Especially when my answer is a fairly generic one. The question is a CFML-based one; the answer is just a general testing-strategy one. I say this so people are less likely to stop reading because it's about CFML ;-) Anyhoo: time for some copy and paste.

The question is a simple one:

Just dropping it in here. Unit testing DAO object using qb. How would you guys go about it or why not? Go!
Kevin L. @ CFML Slack Channel

I say "simple". I mean short and on-point.

Here's the rest of the thread. I've tidied up the English in some places, but have not changed any detail of what was said. I will preface this by saying I don't know "qb" from a bar of soap; it's some sort of DBAL. For the purposes of this, how it works doesn't really matter.


The participants here are myself, Kevin and Samuel Knowlton.

It depends on how you design this part of your application.

For me the interface between the application and the storage tier is via a Repository class, which talks to the higher part of the domain in objects and collections thereof: no trace of any persistence DSL at all (no SQLish language, for example).

For the purposes of testing, I abstract the actual storage-tier communications: in CFML this would be queryExecute calls, or calls to an external library like qb. I fashion these as DAOs if they contain actual SQL statements, or Adapters if it's something like qb.

So the repo has all the testable logic in it, the DAO or Adapter simply takes values, calls the persistence implementation, and returns whatever the persistence call returns. The repo converts that lot to/from domain-ready stuff for the application to use.

I'll just add an example of this for the blog version of this conversation:

function selectFooById(id) {
    return queryExecute("
        SELECT col1, col2, col3, etc
        FROM some_table
        WHERE id = :id
    ", {id=id})
}

It's important to note that this method does nothing other than abstract the queryExecute call away from any application logic in the method that calls this. No manipulation of inputs; no remapping of outputs. Both of those are the job of the repository method that calls this DAO method. This DAO method only exists to remove the external call from the repo code. That's it.

To that end, there is no unit testable logic in the DAO. But I will do an integration test on it. For a CFML example: I'd test that if I call a method on the DAO, then I do get the kind of query I'd expect (correct columns).

I'd unit test the repo logic along the lines of the mapping to/from the domain/storage is correct, and any transformation logic is correct (converting datetime objects to just date-only strings and the like). I would test that if a DAO call returned a three-row query, then the repo returns a three-object collection, with the expected values.

NB: Repos don't have business logic; they only have mapping / transformation logic.

An initial task here might be to look at the DAO code, and make sure that even if it rolls-in transformation/mapping logic and persistence-tier calls, at the very least it doesn't do both in the same method.

One thing to be careful to not do is to re-test qb. Assume it's doing its job correctly, and just test what your code does with the data qb gives you.

In that case my current tests are not correct at all… They felt wrong from the start, but I was struggling with how to test the DAO layer (which is abstracted through an interface). My DAO layer at this point does have a hard reference to / dependency on qb, which in that case violates some OO principles… This is a brainbreaker

Ah yes everything I mentioned above is predicated on using DI and an IoC container which should really be the baseline for any application designed with testing in mind (which should be every application).

(NB: DI is the baseline; using an IoC container to manage the DI is a very-nice-to-have, but is not essential).
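
For the blog version of this, here's roughly what I mean by having the DI in place (a sketch only; the names are invented):

// MyRepository
component {

    // the dependency is passed in, not created in here
    function init(required any queryBuilder) {
        variables.qb = arguments.queryBuilder
        return this
    }

    function getObjectCollection() {
        raw = variables.qb.getStuff()
        // ... map to domain model objects, as per the examples below ...
    }
}

Production code (or yer IoC container's config) does new MyRepository(new QB()); a test just does new MyRepository(mockedQb) instead. No method-extraction trickery needed.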

You can "fake it til you make it" with the DI, by extracting hard references to qb object creation into methods that solely do that. You can then mock-out those methods in your tests so you can return a testable object from it.

So instead of this:

// MyRepository
component {

    function getObjectCollection() {
        qb = new QB()
        
        raw = qb.getStuff()
        
        collection = raw.map((row) => {
            // convert to domain model objects. this is what you want to test
        })
        
        return collection
    }
}

One has this:

// MyRepository
component {

    function getObjectCollection() {
        qb = getQb()
        
        raw = qb.getStuff()
        
        collection = raw.map((row) => {
            // convert to domain model objects. this is what you want to test
        })
        
        return collection
    }
    
    private function getQb() {
        return new QB()
    }
}

And can test like this:

it("maps the data correctly", () => {
    sut = new MyRepository()
    mockbox.prepareMock(sut) // sut is still a MyRepository, but can selectively mock methods
    
    rawData = [/* known values that exercise the logic in the mapper */]
    sut.$("getQb").$results(rawData)
    
    collection = sut.getObjectCollection()
    
    expectedCollection = [/* expected values based on known values */]
    
    expect(collection).toBe(expectedCollection)
})

You could also extract the mapping logic from the fetching logic, and test the mapping logic directly. That's getting a bit bitty for my liking though. However do what you need to do to be able to separate the logic into something testable, away from code that isn't testable. It's always doable with a bit of refactoring, even if the refactoring isn't perfect.

[…] I already have a getQueryBuilder method in the DAO object. So that's a good starting point. How would you test update / create / deletion?

Two sets of tests:

Integration tests

Test that your app integrates with QB and the DB. Call the actual live method, hit the DB, and test the results.

So…

  • create a record using your app. Then use queryExecute to get the values back out, and test it's what you expect to have been saved (there's a sketch of this below the list).
  • Insert a row into the DB (either via your create mechanism, or via queryExecute). Update it via your app. Use queryExecute to get the values back out, and test it's what you expect to have been updated.
  • Insert a row into the DB (either via your create mechanism, or via queryExecute). Delete it via your app. Use queryExecute to try to get the values back out, and make sure they ain't there.
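
For the blog version, here's roughly how that first one might look (just a sketch: the table, the columns, and the repo's create method are all invented for the example):

it("saves the record", () => {
    sut = new MyRepository()

    sut.create({id=17, name="TEST_NAME"})

    result = queryExecute(
        "SELECT name FROM some_table WHERE id = :id",
        {id=17}
    )

    expect(result.recordCount).toBe(1)
    expect(result.name).toBe("TEST_NAME")
})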

Unit tests

  • When you call your app's create method with [given inputs] then those inputs (or equivalents, if you need to transform them) are the ones passed to the expected method of the (mocked) object returned from your mocked getQueryBuilder call.
  • Same with update.
  • Same with delete.
it("deletes the data from storage", () => {
    testId = 17
    
    sut = new MyRepository()
    mockbox.prepareMock(sut)
    
    mockedQb = createMock("QB")
    expectedReturnFromDelete = "I dunno what QB returns; but use some predictable value"
    mockedQb.$("delete").$args(testId).$results(expectedReturnFromDelete)
    
    sut.$("getQueryBuilder").$results(mockedQb)
    
    result = sut.delete(testId)
    
    expect(result).toBe(expectedReturnFromDelete) // it won't have been returned unless testId was passed to QB.delete
})

That's a very simplistic test, obvs.

Oh: run all the integration tests in a transaction that you rollback once done.
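
Something along these lines (again: just a sketch of the pattern):

transaction {
    try {
        // create / update / delete via the app; verify via queryExecute
    } finally {
        transactionRollback() // nothing the tests did sticks around afterwards
    }
}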

Here the unit tests don't hit the actual storage, so they are easier to write and maintain, and quick to run in your red/green/refactor cycle. You run these all the time when doing development work. The integration tests are fiddly and slow, so you only run those when you make changes to that tier of the app (DB schema changes that impact the app, generally; and then when putting in a pull request, merging it, and doing a build). It is vital that the code being integration tested does not have any logic in it, because if it did, you'd need to unit test it too, which defeats the purpose of separating-out yer concerns.

[…]To add our two cents: we use QB and Quick all the time in integration tests to make sure that objects are created, updated, or deleted the way we want. Sometimes we will use cfmigrations to pre-populate things so we are not doing a whole create/update/destroy cycle on each test run (which can be awesome, but also slow)

[Agreed]

A lot of information to take in. I'll take my time to read this through. Thank you @Adam Cameron and @sknowlton for taking your time to answer my questions!

NP. And ping away with any questions / need for clarification.


[a few days pass]

[…]Thanks to your input I was able to refactor the "dao" layer through test-driven design!

I do have a general testing question though. Let's assume I have method foo in a service with parameters x, y, z with the following pseudocode:

function foo( x, y, z ) {
  /* does something */
  var args = ...
  
  /* validate, throw, etc... */
 
  return bar( 
    argumentCollection = args 
  );
}

Should the unit test mock out the bar function and assert the args parameter(s) and / or throw scenarios, or actually assert the return value of the foo function? Or is the latter case more of an integration test rather than a unit test?

Sorry if this is a noob question, been trying to wrap my head around this…

A handy thing to do here is to frame the tests as testing the requirement, not testing the code.

Based on your sample pseudo code, then you have a requirement which is "it does the foothing so that barthing can happen". I have purposely not used the method names there directly; those are implementation detail, but should describe the action taking place. It could be "it converts the scores into roman numerals": in your example "it converts the scores" is the call to foo, and "into roman numerals" is the call to bar.

If you frame it like that, your tests will be testing all the variations of foothing and barthing - one variation per test, eg:

  • x, y, z are valid and you get the expected result (happy path)
  • x is bung, expected exception is thrown
  • y is bung [etc]
  • z is bung [etc]
  • say there's a conditional within bar… one test for each true / false part of that. The true variant might be part of that initial happy path test though.

This is how I read the pseudo code, given that bar looks like another method in the same class - probably a private method that is part of some refactoring to keep foo streamlined.

If your pseudocode was return dao.bar(argumentCollection = args), then I'd say that the DAO is a boundary of your application (eg, it is an adapter to a queryExecute call), and in this case you would mock-out dao.bar, and use spying to check that it received the correct arguments, based on the logic of foo. For example foo might multiply x by two… make sure bar receives 2x, etc.
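
Another example added for the blog version of the conversation. That spying might look something like this with MockBox (a sketch: MyService, its constructor, and the doubling of x are all invented for the example):

it("passes the transformed arguments to the DAO", () => {
    mockedDao = createMock("Dao")
    mockedDao.$("bar").$results(queryNew(""))

    sut = new MyService(dao=mockedDao)
    sut.foo(x=2, y="Y", z="Z")

    // the mock's $callLog gives us the arguments each mocked call received
    callsToBar = mockedDao.$callLog().bar
    expect(callsToBar[1].x).toBe(4) // ie: foo doubled x before passing it on
})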

Does that cover it / make sense? Am off to a(~nother) meeting now, so had to be quick there. I'll check back in later on.

It covers my question, Adam, thanks!

One thing I didn't have time to say... I used a DAO in that example specifically to keep my answer short. A DAO is by definition a boundary.

If it was something else like a RomanNumeralWriterService and it was part of your own application, just not internal to the class the method you're testing is in... it's not so clear cut as to whether to mock or use a live object.

If RomanNumeralWriterService is easy to create (say its constructor takes no or very simple arguments), and doesn't itself reach outside your application… then no reason to mock it: let it do its job. If however it's got a bunch of its own dependencies and config and will take a chunk of your test just to create it, or if it internally does its own logging or DB writing or reaching out to another service, or it's sloooow, then you mock it out. You don't want your tests messing around too much with dependencies that are not part of what yer testing, but it's not black or white whether to mock; it's a judgement call.

Yeah I was struggling with the thing you mention in your last paragraph. Service A has Service B as a dependency which itself has a lot of other dependencies (logging, dao, …). The setup for the test takes ages while the test method is 1 line of code :/.

Yeah, mock it out.

Make sure whatever you mock-out is well-test-covered as well in its own right, but it's not the job of these tests to faff around with that stuff.


That was it. I think that was a reasonably handy discussion, so was worth preserving here. YMMV of course: lemme know.

Righto.

--
Adam

Thursday, 4 November 2021

A question about the overhead of OOP in CFML

G'day:

A question cropped up on the CFML Slack channel the other day. My answer was fairly long-winded so I decided to post it here as well. I asked the original questioner, and they are OK with me reproducing their question.

Again, I have a question to experienced OOP cfml coders. From the clean code concept I know I should break code into smaller (er even its smallest ) pieces. Is there any possible reason to stop doing that at a certain level in CFML? Eg. for performance reasons? Eg. lets assume I have a component named Car.cfc. Should I always break a Car.cfc component into Wheel.cfc, Engine.cfc, CarBody.cfc accordingly? Does the createObject behave like include files that would come with a certain overhead because of physical file request? What is when I also break Engine.cfc into many little pieces (and Wheel.cfc also)?
Andreas @ CFML Slack Channel

Here's my answer. I've tidied up the English in some places, but have not changed any detail of what I said.


Eventually there will be a meaningful overhead cost of creating a lot of objects.

Note that your comment about "behave like include files that would come with a certain overhead because of physical file request" isn't really accurate because the files are only read once, and after that they're in memory. Same applies with includes, for that matter. The process is (sort of):

  • code calls for an object to be created
  • if implementation for object is not found in memory, its code is loaded from disk
  • object is created
  • object is used
  • object is at some point cleaned up once it's not being referenced any more (at the discretion of the garbage collector)

That second step is only performed if needed, and all things being equal, will only be needed once in the lifetime of yer app in the JVM.

So don't worry about file system overhead; it's not going to be significant here.

Creating objects does come at a cost, and neither CFML engine has traditionally been particularly efficient at doing so (Lucee is better I believe; and CF is not as slow as it used to be). This could be a consideration at some point.

However performance considerations like this shouldn't be worried about until they start becoming an issue.

Design your application in a way that best represents the data and behaviour of your business domain. Make it modular, employing a sense of reusability and following the Single Responsibility Principle.

Keep an eye on your JVM. Use FusionReactor or something else similar. I suspect FR is the only game in town for CFML code; but there are other general JVM profiling tools out there as well which will do as good a job, albeit in a Java-centric way. If you see performance spikes: sort them out.

Load test your application with real-world load. This doesn't mean looping over object-creation one million times and going "tada! It took x seconds": that tells you close to nothing and is not really a meaningful test. Use a load testing tool to load test your application, not your code. Back when I used to do such things, there was tooling that could re-run a web server log, so one could easily test with real-world traffic. This is important because it's real-world concurrency that exposes locking bottlenecks and application slow-downs.

[I forgot to say this bit in my original answer]. Irrespective of the overhead of creating objects, it will be trivial (by orders of magnitude) compared to the overhead of a poorly-written DB query, or bad indexing, or bad locking of code, or heavy (and possibly badly-designed) string processing, etc. There's stacks of things I'd be worrying about before I wondered if I was calling new Thing() too often.

[cont'd…]

That said, don't go crazy with decomposing your domain models. A Car doesn't intrinsically need to have a collection of Wheel objects. It might just need to know the number "4" (number of wheels). Wait until there is behaviour or data needed for the wheels, and make sure to keep those elements separate in your Car class. At some point if there's a non-trivial proportion of data and/or behaviour around the Wheel implementation, or you need another sort of Car that has Wheels with different (but similar) properties: then extract the Wheels into a separate class.
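
Eg - and this is just a sketch of the idea, not a design recommendation:

// Car.cfc: the wheels start life as just a number
component {
    function init(numeric wheelCount=4) {
        variables.wheelCount = arguments.wheelCount
        return this
    }
}

// later, if / when wheels accrue their own data and behaviour: extract Wheel.cfc
component {
    function init(required numeric diameter, required string compound) {
        variables.diameter = arguments.diameter
        variables.compound = arguments.compound
        return this
    }
}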

Making sure you have all your domain requirements tested makes this sort of refactoring much safer, so that you can continually engineer and improve your domain design without worrying too much about breaking your clients' requirements.


Andreas followed up with an observation of falling into the trap of using bad practices when writing ones code, due to not knowing what the good practices are (my wording, not his). To this, I've just responded:

I think doing stuff that is "bad practice" when you don't know any better is fine; it's when you do know better and still follow bad practice for invented reasons (fictitious deadlines, mythical complexity, general "CBA") that's an issue.

That said one can mitigate some of this by actively improving one's knowledge of what is good and bad practice. This is why I advocate all devs should - at a minimum - have read these books:

  • Clean Code - Martin
  • Head First Design Patterns (second ed) - various
  • Test-Driven Development by Example - Beck
  • Refactoring - Fowler

There's other ones like Code Complete (I couldn't wade through it I'm afraid) and The Pragmatic Programmer (am about 20% of the way through it and it's only slightly engaging me so far) which other people will recommend.

One should make sure one is comfortable with the testing framework of choice for one's environment, and with testing one's code. Whether done before, via TDD, or even just afterwards: it's essential to writing good, stable, scalable, maintainable code.


I'm pretty sure there's not an original thought here, but hey: most of my writing is like that, eh? Anyway, there you go.

Righto.

--
Adam

Saturday, 30 October 2021

TDD: writing a micro testing framework, using the framework to test itself as I build it

G'day:

Being back in CFML land, I spend a lot of time using trycf.com to write sample code. Sometimes I just want to see how ColdFusion and Lucee behave differently in a given situation. Sometimes I'm "helping" someone on the CFML Slack channel, and want to give them a runnable example of what I think they might want to be doing. trycf.com is cool, but I usually want to write my example code as a series of tests demonstrating variants of its behaviour. I also generally want to TDD even sample code that I write, so I know I'm staying on-point and the code works without any surprises. trycf.com is a bit basic though: one can only really have one CFML script and run that. One cannot import other libraries like TestBox, so I can't write my code the way I want.

Quite often I write a little stub tester to help me, and I find myself doing this over and over again, and I never think to save these stubs across examples. Plus having to include the stubs along with the sample code I'm writing clutters things a bit, and also means my example code doesn't really stick to the Single Responsibility Principle.

I found out from Abram - owner of trycf.com - that one can specify a setupCodeGistId in the trycf.com URL, and it will load a CFML script from that and pre-include that before the scratch-pad code. This is bloody handy, and armed with this knowledge I decided I was gonna knock together a minimal single-file testing framework which I could include transparently in with a scratch-pad file, so that scratch-pad file could focus on the sample code I'm writing, and the testing of the same.

A slightly hare-brained idea I have had when doing this is to TDD the whole exercise… and to use the test framework to test itself as it evolves. Obviously to start with the tests will need to be separate pieces of code, before the framework is usable, but beyond a point it should work enough for me to be able to use it to test the latter stages of its development. Well: we'll see, anyhow.

Another challenge I am setting for myself is that I'm gonna mimic the "syntax" of TestBox, so that the tests I write in a scratch-pad should be able to be lifted-out as-is and dumped into a TestBox test spec CFC. Obviously I'm not going to reimplement all of TestBox, just a subset of stuff to be able to do simple tests. I think I will need to implement the following functions:

  • void run() - well, like in TestBox all my tests will be implemented in a function called run, and then one runs that to execute the tests. This is just convention rather than needing any development.
  • void describe(required string label, required function testGroup) - this will output its label and call its testGroup.
  • void it(required string label, required function implementation) - this will output its label and call its implementation. I'm pretty sure the implementation of describe and it will be identical. I mean like I will use two references to the same function to implement this.
  • struct expect(required any actual) - this will take the actual value being tested, and return a struct containing a matcher to that actual value.
  • boolean toBe(required any expected) - this will take the value that the actual value passed to expect should be. This will be a key in the struct returned by expect (this will emulate the function-chaining TestBox uses with expect(x).toBe(y)).

If I create that minimalist implementation, then I will be able to write this much of a test suite in a trycf.com scratch pad:

function myFunctionToTest(x) {
    // etc
}

function run() {
    describe("Tests for myFunctionToTest" ,() => {
        it("tests some variant", () => {
            expect(myFunctionToTest("variant a")).toBe("something")
        })

        it("tests some other variant", () => {
            expect(myFunctionToTest("variant b")).toBe("something else")
        })
    })
}

run()

And everything in that run function will be compatible with TestBox.

I am going to show every single iteration of the process here, to demonstrate TDD in action. This means this article will be long, and it will be code-heavy. And it will have a lot of repetition. I'll try to keep my verbiage minimal, so if I think the iteration of the code speaks for itself, I possibly won't have anything to add.

Let's get on with it.


It runs the tests via a function "run"

<cfscript>
// tests    
try {
    run()
    writeOutput("OK")
} catch (any e) {
    writeOutput("run function not found")
}
</cfscript>

<cfscript>
// implementation    
    
</cfscript>

I am going to do this entire implementation in a trycf.com scratch-pad file. As per above, I have two code blocks: tests and implementation. For each step I will show you the test (or updates to existing tests), and then I will show you the implementation. We can take it as a given that the test will fail in the way I expect (which will be obvious from the code), unless I state otherwise. As a convention a test will output "OK" if it passed, or "Failure: " and some error explanation if not. Later these will be baked into the framework code, but for now, it's hand-cranked. This shows that you don't need any framework to design your code via TDD: it's a practice, it's not a piece of software.

When I run the code above, I get "Failure: run function not found". This is obviously because the run function doesn't even exist yet. Let's address that.

// implementation    
void function run(){
}

Results: OK

We have completed one iteration of red/green. There is nothing to refactor yet. Onto the next iteration.


It has a function describe

We already have some of our framework operational. We can put tests into that run function, and they'll be run: that's about all that function needs to do. Hey I didn't say this framework was gonna be complicated. In fact the aim is for it to be the exact opposite of complicated.

// tests
// ...

void function run(){
    try {
        describe()
        writeOutput("OK")
    } catch (any e) {
        writeOutput("Failure: describe function not found")
    }
}
// implementation
void function describe(){
}

The tests already helped me here actually. In my initial effort at implementing describe, I misspelt it as "desribe". Took me a few seconds to spot why the test failed. I presume, like me, you have found a function with a spelling mistake in it that has escaped into production. I was saved that here, by having to manually type the name of the function into the test before I did the first implementation.


describe takes a string parameter label which is displayed on a line by itself as a label for the test grouping

// tests
// ...
savecontent variable="testOutput" {
    describe("TEST_DESCRIPTION")
};
if (testOutput == "TEST_DESCRIPTION<br>") {
    writeOutput("OK<br>")
    return
}
writeOutput("Failure: label not output<br>")
// implementation
void function describe(required string label){
    writeOutput("#label#<br>")
}

This implementation passes its own test (good), but it makes the previous test break:

try {
    describe()
    writeOutput("OK")
} catch (any e) {
    writeOutput("Failure: describe function not found")
}

We require describe to take an argument now, so we need to update that test slightly:

try {
    describe("NOT_TESTED")
    writeOutput("OK")
} catch (any e) {
    writeOutput("Failure: describe function not found")
}

I make a point of being very clear when arguments etc I need to use are not part of the test.

It's also worth noting that the test output is a bit of a mess at the moment:

OKNOT_TESTED
OKOK

For the sake of cosmetics, I've gone through and put <br> tags on all the test messages I'm outputting, and I've also slapped a cfsilent around that first describe test, as its output is just clutter. The full implementation is currently:

<cfscript>
// tests
try {
    cfsilent() {
        run()
    }
    writeOutput("OK<br>")
} catch (any e) {
    writeOutput("Failure: run function not found<br>")
}

void function run(){
    try {
        cfsilent(){describe("NOT_TESTED")}
        writeOutput("OK<br>")
    } catch (any e) {
        writeOutput("Failure: describe function not found<br>")
    }

    savecontent variable="testOutput" {
        describe("TEST_DESCRIPTION")
    };
    if (testOutput == "TEST_DESCRIPTION<br>") {
        writeOutput("OK<br>")
    }else{
    	writeOutput("Failure: label not output<br>")
    }
}

run()
</cfscript>

<cfscript>
// implementation
void function describe(required string label){
    writeOutput("#label#<br>")
}
</cfscript>

And now the output is tidier:

OK
OK
OK

I actually did a double-take here, wondering why the TEST_DESCRIPTION message was not displaying for that second test: the one where I'm actually testing that that message displays. Of course it's because I've got the savecontent around it, so I'm capturing the output, not letting it output. Duh.


describe takes a callback parameter testGroup which is executed after the description is displayed

savecontent variable="testOutput" {
    describe("TEST_DESCRIPTION", () => {
        writeOutput("DESCRIBE_GROUP_OUTPUT")
    })
};
if (testOutput == "TEST_DESCRIPTION<br>DESCRIBE_GROUP_OUTPUT") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: testGroup not executed<br>")
}
void function describe(required string label, required function testGroup){
    writeOutput("#label#<br>")
    testGroup()
}

This implementation change broke earlier tests that called describe, because they were not passing a testGroup argument. I've updated those to just sling an empty callback into those calls, eg:

describe("TEST_DESCRIPTION", () => {})

That's all I really need from describe. But I'll just do one last test to confirm that it can be nested OK (there's no reason why it couldn't be, but I'm gonna check anyways).


describe calls can be nested

savecontent variable="testOutput" {
    describe("OUTER_TEST_DESCRIPTION", () => {
        describe("INNER_TEST_DESCRIPTION", () => {
            writeOutput("DESCRIBE_GROUP_OUTPUT")
        })
    })
};
if (testOutput == "OUTER_TEST_DESCRIPTION<br>INNER_TEST_DESCRIPTION<br>DESCRIBE_GROUP_OUTPUT") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: describe nested did not work<br>")
}

And this passes without any adjustment to the implementation, as I expected. Note that it's OK to write a test that just passes, if there's no implementation change needed to make it pass: one only needs a failing test before one makes an implementation change. Remember the tests are there to test that an implementation change does what it's supposed to. This test here is just a belt-and-braces thing, and to be declarative about some functionality the system has.


The it function behaves the same way as the describe function

it will end up having more required functionality than describe needs, but there's no reason for them to not just be aliases of each other for the purposes of this micro-test-framework. As long as the describe alias still passes all its tests, it doesn't matter what extra functionality I put into the shared implementation to accommodate the requirements of it.

savecontent variable="testOutput" {
    describe("TEST_DESCRIPTION", () => {
        it("TEST_CASE_DESCRIPTION", () => {
            writeOutput("TEST_CASE_RESULT")
        })
    })
};
if (testOutput == "TEST_DESCRIPTION<br>TEST_CASE_DESCRIPTION<br>TEST_CASE_RESULT") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: the it function did not work<br>")
}
it = describe

it will not error-out if an exception occurs in its callback, instead reporting an error result, with the exception's message

Tests are intrinsically the sort of thing that might break, so we can't have the test run stopping just cos an exception occurs.

savecontent variable="testOutput" {
    describe("NOT_TESTED", () => {
        it("tests an exception", () => {
            throw "EXCEPTION_MESSAGE";
        })
    })
};
if (testOutput CONTAINS "tests an exception<br>Error: EXCEPTION_MESSAGE<br>") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: the it function did not correctly report the test error<br>")
}
void function describe(required string label, required function testGroup) {
    try {
        writeOutput("#label#<br>")
        testGroup()
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

At this point I decided I didn't like how I was just using describe to implement it, so I refactored:

void function runLabelledCallback(required string label, required function callback) {
    try {
        writeOutput("#label#<br>")
        callback()
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

void function describe(required string label, required function testGroup) {
    runLabelledCallback(label, testGroup)
}

void function it(required string label, required function implementation) {
    runLabelledCallback(label, implementation)
}

Because everything is test-covered, I am completely safe to just make that change. Now I have the correct function signature on describe and it, and an appropriately "general" function to do the actual execution. And the tests all still pass, so that's cool. Reminder: refactoring doesn't need to start with a failing test. Intrinsically it's an activity that mustn't impact the behaviour of the code being refactored, otherwise it's not a refactor. Also note that one does not ever alter implementation and refactor at the same time. Follow red / green / refactor as separate steps.


it outputs "OK" if the test ran correctly

// it outputs OK if the test ran correctly
savecontent variable="testOutput" {
    describe("NOT_TESTED", () => {
        it("outputs OK if the test ran correctly", () => {
            // some test here... this actually starts to demonstrate an issue with the implementation, but we'll get to that
        })
    })
};
if (testOutput CONTAINS "outputs OK if the test ran correctly<br>OK<br>") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: the it function did not correctly report the test error<br>")
}

The implementation of this demonstrated that describe and it can't share an implementation. I don't want describe calls outputting "OK" when their callback runs OK, and this was what started happening when I did my first pass of the implementation for this:

void function runLabelledCallback(required string label, required function callback) {
    try {
        writeOutput("#label#<br>")
        callback()
        writeOutput("OK<br>")
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

Only it is a test, and only it is supposed to say "OK". This is an example of premature refactoring, and probably going overboard with DRY. The describe function wasn't complex, so we were gaining nothing by de-duping describe and it, especially when the implementation of it will have more complexity still to come.

I backed out my implementation and my failing test for a moment, and did another refactor to separate-out the two functions completely. I know I'm good when all my tests still pass.

void function describe(required string label, required function testGroup) {
    writeOutput("#label#<br>")
    testGroup()
}

void function it(required string label, required function implementation) {
    try {
        writeOutput("#label#<br>")
        implementation()
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

I also needed to update the baseline it test to expect the "OK<br>". After that everything is green again, and I can bring my new test back (failure as expected), and implementation (all green again/still):

void function it(required string label, required function implementation) {
    try {
        writeOutput("#label#<br>")
        implementation()
        writeOutput("OK<br>")
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

describe will not error-out if an exception occurs in its callback, instead reporting an error result, with the exception's message

Back to describe again. It too needs to not error-out if there's an issue with its callback, so we're going to put in some handling for that again too. This test is largely copy and paste from the equivalent it test:

savecontent variable="testOutput" {
    describe("TEST_DESCRIPTION", () => {
        throw "EXCEPTION_MESSAGE";
    })
};
if (testOutput CONTAINS "TEST_DESCRIPTION<br>Error: EXCEPTION_MESSAGE<br>") {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: the it function did not correctly report the test error<br>")
}
void function describe(required string label, required function testGroup) {
    try {
        writeOutput("#label#<br>")
        testGroup()
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

it outputs "Failed" if the test failed

This took some thought. How do I know a test "failed"? In TestBox when something like expect(true).toBeFalse() runs, it exits from the test immediately, and does not run any further expectations the test might have. Clearly it's throwing an exception from toBeFalse if the actual value (the one passed to expect) isn't false. So this is what I need to do. I have not written any assertions or expectations yet, so for now I'll just test with an actual exception. It can't be any old exception, because the test could break for any other reason too, so I need to differentiate between a test fail exception (the test has failed), and any other sort of exception (the test errored). I'll use a TestFailedException!

savecontent variable="testOutput" {
    describe("NOT_TESTED", () => {
        it("outputs failed if the test failed", () => {
            throw(type="TestFailedException");
        })
    })
};
if (testOutput.reFind("Failed<br>$")) {
    writeOutput("OK<br>")
}else{
    writeOutput("Failure: the it function did not correctly report the test failure<br>")
}
void function it(required string label, required function implementation) {
    try {
        writeOutput("#label#<br>")
        implementation()
        writeOutput("OK<br>")
    } catch (TestFailedException e) {
        writeOutput("Failed<br>")
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

This is probably enough now to a) start using the framework for its own tests; b) start working on expect and toBe.


has a function expect

it("has a function expect", () => {
    expect()
})

Check. It. Out. I'm actually able to use the framework now. I run this and I get:

has a function expect
Error: Variable EXPECT is undefined.

And when I do the implementation:

function expect() {
}
has a function expect
OK

I think once I'm tidying up, I might look to move the test result to be on the same line as the label. We'll see. For now though: functionality.


expect returns a struct with a key toBe which is a function

it("expect returns a struct with a key toBe which is a function", () => {
    var result = expect()
    if (isNull(result) || isNull(result.toBe) || !isCustomFunction(local.result.toBe)) {
        throw(type="TestFailedException")
    }
})

I'd like to be able to just use !isCustomFunction(local?.result?.toBe) in that if statement there, but Lucee has a bug that prevents ?. from being used in a function expression (I am not making this up, go look at LDEV-3020). Anyway, the implementation for this for now is easy:

function expect() {
    return {toBe = () => {}}
}

toBe returns true if the actual and expected values are equal

it("toBe returns true if the actual and expected values are equal", () => {
    var actual = "TEST_VALUE"
    var expected = "TEST_VALUE"

    var result = expect(actual).toBe(expected)
    if (isNull(result) || !result) {
        throw(type="TestFailedException")
    }
})

The implementation for this bit is deceptively easy:

function expect(required any actual) {
    return {toBe = (expected) => {
        return actual.equals(expected)
    }}
}

toBe throws a TestFailedException if the actual and expected values are not equal

it("toBe throws a TestFailedException if the actual and expected values are not equal", () => {
    var actual = "ACTUAL_VALUE"
    var expected = "EXPECTED_VALUE"

    try {
        expect(actual).toBe(expected)
    } catch (TestFailedException e) {
        return
    }
    throw(type="TestFailedException")
})
function expect(required any actual) {
    return {toBe = (expected) => {
        if (actual.equals(expected)) {
            return true
        }
        throw(type="TestFailedException")
    }}
}

And with that… I have a very basic testing framework. Obviously it could be improved to have more expectations (toBeTrue, toBeFalse, toBe{Type}), and could have some nicer messages on the toBe function so a failure is clearer about what went on, but for a minimum viable project, this is fine. I'm going to do a couple more tests / tweaks though.
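
For example, a toBeTrue and some failure messaging could look like this (a sketch only: this is not part of the final implementation below):

struct function expect(required any actual) {
    return {
        toBe = (expected) => {
            if (actual.equals(expected)) {
                return true
            }
            // NB: the message interpolation assumes simple values
            throw(type="TestFailedException", message="Expected [#expected#] but actual was [#actual#]")
        },
        toBeTrue = () => {
            if (isBoolean(actual) && actual) {
                return true
            }
            throw(type="TestFailedException")
        }
    }
}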


toBe works with a variety of data types

var types = ["string", 0, 0.0, true, ["array"], {struct="struct"}, queryNew(""), xmlNew()]
types.each((type) => {
    it("works with #type.getClass().getName()#", (type) => {
        expect(type).toBe(type)
    })
})

The results here differ between Lucee:

works with java.lang.String
OK
works with java.lang.Double
OK
works with java.lang.Double
OK
works with java.lang.Boolean
OK
works with lucee.runtime.type.ArrayImpl
OK
works with lucee.runtime.type.StructImpl
OK
works with lucee.runtime.type.QueryImpl
OK
works with lucee.runtime.text.xml.struct.XMLDocumentStruct
Failed

And ColdFusion:

works with java.lang.String
OK
works with java.lang.Integer
OK
works with coldfusion.runtime.CFDouble
Failed
works with coldfusion.runtime.CFBoolean
OK
works with coldfusion.runtime.Array
OK
works with coldfusion.runtime.Struct
OK
works with coldfusion.sql.QueryTable
OK
works with org.apache.xerces.dom.DocumentImpl
Failed

But the thing is, the tests behave the same on both platforms. If you make the same comparisons outside the context of the test framework, you get the same results, so the failures are down to the platform, not the framework. Apparently in ColdFusion 0.0 does not equal itself. I will be taking this up with Adobe, I think.
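
To double-check that, here's the same comparison made outside the framework (my own quick check, mirroring the actual.equals(expected) call that toBe makes):

value = 0.0
writeOutput(value.equals(value)) // true on Lucee; false on ColdFusion, hence the "Failed" above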

It's good to know that it works OK for structs, arrays and queries though.


The it function puts the test result on the same line as the test label, separated by a colon

As I mentioned above, I'm gonna take the <br> out from between the test message and the result, and just colon-separate them instead:

// The it function puts the test result on the same line as the test label, separated by a colon
savecontent variable="testOutput" {
    describe("TEST_DESCRIPTION", () => {
        it("TEST_CASE_DESCRIPTION", () => {
            writeOutput("TEST_CASE_RESULT")
        })
    })
};
if (testOutput == "TEST_DESCRIPTION<br>TEST_CASE_DESCRIPTION: TEST_CASE_RESULTOK<br>") {
    writeOutput("OK<br>")
} else {
    writeOutput("Failure: the it function did not work<br>")
}
void function it(required string label, required function implementation) {
    try {
    	writeOutput("#label#<br>")
        writeOutput("#label#: ")
        implementation()
        writeOutput("OK<br>")
    } catch (TestFailedException e) {
        writeOutput("Failed<br>")
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

That's just a change to the one line immediately before calling implementation().


Putting it to use

I can now save the implementation part of this to a gist, save another gist with my actual tests in it, and load that into trycf.com with the setupCodeGistId param pointing to my test framework Gist: https://trycf.com/gist/b8e6322c291ba6308bd82c3ee499dd0e?setupCodeGistId=816ce84fd991c2682df612dbaf1cad11. And… (sigh of relief, cos I'd not tried this until now) it all works.


Outro

This was a long one, but a lot of it was really simple code snippets, so hopefully it doesn't overflow the brain to read it. If you'd like me to clarify anything, or you find any bugs, or I've messed something up or whatever: let me know. Also note that whilst it'll take you a while to read (and it took me bloody ages to write), the actual TDD exercise itself was only about an hour's effort. The red/green/refactor cycle is very short.

Oh! Speaking of implementations. I never showed the final product. It's just this:

void function describe(required string label, required function testGroup) {
    try {
        writeOutput("#label#<br>")
        testGroup()
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

void function it(required string label, required function implementation) {
    try {
        writeOutput("#label#: ")
        implementation()
        writeOutput("OK<br>")
    } catch (TestFailedException e) {
        writeOutput("Failed<br>")
    } catch (any e) {
        writeOutput("Error: #e.message#<br>")
    }
}

struct function expect(required any actual) {
    return {toBe = (expected) => {
        if (actual.equals(expected)) {
            return true
        }
        throw(type="TestFailedException")
    }}
}

That's a working testing framework. I quite like this :-)
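
By way of a final demo, using it looks like this (the example labels are my own):

describe("Demo of the framework", () => {
    it("passes when the values match", () => {
        expect("G'day").toBe("G'day")
    })
    it("fails when they don't", () => {
        expect("G'day").toBe("Hello")
    })
})

Which outputs:

Demo of the framework
passes when the values match: OK
fails when they don't: Failed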

All the code for the entire exercise - including all the test code - can be checked out on trycf.com.

Righto.

--
Adam

Thursday, 21 October 2021

Unit testing back-fill question: how thorough to be?

G'day:

I'm extracting this from a thread I started on the CFML Slack channel (it's not a CFML-specific question, before you browse away from here ;-), because it's a chunk of text/content that possibly warrants preservation beyond Slack's content lifetime. Plus I think the audience here (such as it is, these days) is probably more experienced than the bods on the Slack channel.

I've found myself questioning my approach to a test case I'm having to backfill in our application. I'm keen to hear what people think.

I am testing a method like this:

function testMe() {
    // irrelevant lines of code here
    
    if (someService.isItSafe()) {
        someOtherService.doThingWeDontWantToHappen()

        things = yetAnotherService.getThings()
        for (stuff in things) {
            stuff.doWhenSafe()
            if (someCondition) {
                return false
            }
        }
        moarService.doThisOnlyWhenSafe()
        
        return true
    }
    
    return false
}

That implementation is fixed for now, so I can't refactor it to make it more testable just yet. I need to play the specific hand I've been dealt.

The case I am currently implementing is for when it's not safe (someService.isItSafe() returns false to indicate this). When it's not safe, the subprocess (all the stuff in the if block) doesn't run, and the method returns false.

I've thrown together this sort of test:

it("doesn't do [a description of the overall subprocess] if it's not safe to do so", () => {
    someService = mockbox.createMock("SomeService")
    someService.$("isItSafe").$results(false)

    someOtherService = mockbox.createMock("SomeOtherService")
    someOtherService.$(method="doThingWeDontWantToHappen", callback=() => fail("this should not be called if it's not safe"))
    
    yetAnotherService = mockbox.createMock("YetAnotherService")
    moarService = mockbox.createMock("MoarService")
    
    serviceToTest = new ServiceToTest(someService, someOtherService, yetAnotherService, moarService)
    
    result = serviceToTest.testMe()
    
    expect(result).toBeFalse()
})

In summary, I'm forcing isItSafe to return false, and I'm mocking the first part of the subprocess to fail if it's called. And obviously I'm also testing the return value.

That works. Exactly as written, the test passes. If I tweak it so that isItSafe returns true, then the test fails with "this should not be called if it's not safe".

However I'm left wondering if I should also mock yetAnotherService.getThings, Stuff.doWhenSafe and moarService.doThisOnlyWhenSafe to fail if they are called: something like the sketch below. They are part of the subprocess that must not run as well (well, I guess it's safe for getThings to run, but not the other two).
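
For clarity, that belt-and-braces version would look something like this (a sketch only, reusing the same MockBox calls as the test above):

yetAnotherService = mockbox.createMock("YetAnotherService")
yetAnotherService.$(method="getThings", callback=() => fail("getThings should not be called if it's not safe"))

moarService = mockbox.createMock("MoarService")
moarService.$(method="doThisOnlyWhenSafe", callback=() => fail("doThisOnlyWhenSafe should not be called if it's not safe"))

// Stuff.doWhenSafe is trickier: getThings would need to return a mocked Stuff rigged to fail, if one wanted to go that far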

In a better world the subprocess code would all actually be in a method called doSubprocess, rather than inline in testMe, and the answer would be to simply make sure that method wasn't called. We will get there, but for now we need the tests in place before we can do that refactor.
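
Something like this, say (just a sketch of the shape of it):

function testMe() {
    // irrelevant lines of code here

    if (someService.isItSafe()) {
        return doSubprocess()
    }
    return false
}

Then the test for this case would simply verify doSubprocess doesn't get called.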

So the things I'm wondering about are:

  • Is it "OK" to only fail on the first part of the subprocess, or should I belt and braces the rest of it too?
  • Am I being too completist in wanting to fail on each step, especially given the later ones will be unreachable after the first fail?
  • I do not know the timeframe for the refactor. Ideally it's "soon", but it's not scheduled. We have just identified that this code is too critical to not have tests, so I'm backfilling the test cases. If it was "soon", I'd keep the test brief (as it is now); but as it might not be "soon" I wonder if I should go the extra mile.
  • I guess once this test is in place, if we do any more maintenance on testMe, we will look at the tests first (TDD...), and adjust them accordingly if merely checking that doThingWeDontWantToHappen doesn't get called is no longer sufficient to test this case. But that's an ideal world we do not reside in yet, so I can't even be certain of that.

What do you think?

Righto.

--
Adam