
Thursday, 2 December 2021

A question about DAO testing, abstraction, and mocking

G'day:

This is another "low-hanging fruit" kinda exercise (see also previous article: "A question about the overhead of OOP in CFML"). I'm extracting this one from the conversation in the CFML Slack channel so that it doesn't get lost. I have a growing concern about how Slack is a bit of a content-black-hole, and am cooling to the notion of helping people there. Especially when my answer is a fairly generic one. The question is a CFML-based one; the answer is just a general testing-strategy one. I say this so people are less likely to stop reading because it's about CFML ;-) Anyhoo: time for some copy and paste.

The question is a simple one:

Just dropping it in here. Unit testing DAO object using qb. How would you guys go about it or why not? Go!
Kevin L. @ CFML Slack Channel

I say "simple". I mean short and on-point.

Here's the rest of the thread. I've tidied up the English in some places, but have not changed any detail of what was said. I will preface this by saying I don't know "qb" from a bar of soap; it's some sort of DBAL. For the purposes of this, how it works doesn't really matter.


The participants here are myself, Kevin and Samuel Knowlton.

It depends on how you design this part of your application.

For me the interface between the application and the storage tier is via a Repository class, which talks to the higher part of the domain in objects and collections thereof: no trace of any persistence DSL at all (no SQLish language, for example).

For the purposes of testing, I abstract the actual storage-tier communications: in CFML this would be queryExecute calls, or calls to an external library like qb. I fashion these as DAOs if they contain actual SQL statements, or as Adapters if they wrap something like qb.

So the repo has all the testable logic in it; the DAO or Adapter simply takes values, calls the persistence implementation, and returns whatever the persistence call returns. The repo converts that lot to/from domain-ready stuff for the application to use.

I'll just add an example of this for the blog version of this conversation:

function selectFooById(id) {
    return queryExecute("
        SELECT col1, col2, col3, etc
        FROM some_table
        WHERE id = :id
    ", {id=id})
}

It's important to note that this method does nothing other than abstract the queryExecute call away from any application logic in the method that calls this. No manipulation of inputs; no remapping of outputs. Both of those are the job of the repository method that calls this DAO method. This DAO method only exists to remove the external call from the repo code. That's it.

To that end, there is no unit-testable logic in the DAO. But I will do an integration test on it. For a CFML example: if I call a method on the DAO, then I do get the kind of query I'd expect (correct columns).
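By way of illustration for the blog version, a minimal TestBox sketch of such an integration test (the DAO class name is invented; the method and columns come from the example above):

it("returns the expected kind of query for a given id", () => {
    dao = new FooDao() // hypothetical DAO class; this test hits the real (test) database
    result = dao.selectFooById(1)

    expect(result).toBeTypeOf("query")
    expect(result.columnList.listSort("textnocase")).toBe("col1,col2,col3") // column order / case can vary by engine; adjust to taste
})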

I'd unit test the repo logic along the lines of: the mapping to/from the domain/storage is correct, and any transformation logic is correct (converting datetime objects to just date-only strings and the like). I would test that if a DAO call returned a three-row query, then the repo returns a three-object collection, with the expected values.

NB: Repos don't have business logic; they only have mapping / transformation logic.

An initial task here might be to look at the DAO code and, even if it currently rolls transformation/mapping logic and persistence-tier calls together, refactor it so that at the very least it doesn't do both in the same method.

One thing to be careful to not do is to re-test qb. Assume it's doing its job correctly, and just test what your code does with the data qb gives you.

In that case my current tests are not correct at all… They felt wrong from the start, but I was struggling with how to test the DAO layer (which is abstracted through an interface). My DAO layer at this point does have a hard reference to / dependency on qb, which in that case violates some OO principles… This is a brainbreaker

Ah yes: everything I mentioned above is predicated on using DI and an IoC container, which should really be the baseline for any application designed with testing in mind (which should be every application).

(NB: DI is the baseline; using an IoC container to manage the DI is a very-nice-to-have, but is not essential).

You can "fake it til you make it" with the DI, by extracting hard references to qb object creation into methods that solely do that. You can then mock-out those methods in your tests so you can return a testable object from it.

So instead of this:

// MyRepository
component {

    function getObjectCollection() {
        qb = new QB()

        raw = qb.getStuff()

        collection = raw.map((record) => {
            // convert to domain model objects. this is what you want to test
        })

        return collection
    }
}

One has this:

// MyRepository
component {

    function getObjectCollection() {
        qb = getQb()

        raw = qb.getStuff()

        collection = raw.map((record) => {
            // convert to domain model objects. this is what you want to test
        })

        return collection
    }

    private function getQb() {
        return new QB()
    }
}

And can test like this:

it("maps the data correctly", () => {
    sut = new MyRepository()
    mockbox.prepareMock(sut) // sut is still a MyRepository, but can selectively mock methods
    
    rawData = [/* known values that exercise the logic in the mapper */]
    sut.$("getQb").$results(rawData)
    
    collection = sut.getObjectCollection()
    
    expectedCollection = [/* expected values based on known values */]
    
    expect(collection).toBe(expectedCollection)
})

You could also extract the mapping logic from the fetching logic, and test the mapping logic directly. That's getting a bit bitty for my liking, though. However, do what you need to do to be able to separate the logic into something testable, away from code that isn't testable. It's always doable with a bit of refactoring, even if the refactoring isn't perfect.
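For what it's worth, a sketch of that extraction (the class and method names here are mine, invented for the illustration):

// ObjectMapper.cfc: the mapping logic on its own, with no fetching involved
component {
    function mapToDomain(required array raw) {
        return raw.map((record) => {
            // convert to domain model objects
        })
    }
}

// and a direct test of it, with no mocking needed at all
it("maps raw records to domain objects", () => {
    mapper = new ObjectMapper()

    collection = mapper.mapToDomain([/* known values */])

    expect(collection).toBe([/* expected values */])
})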

[…] I already have a getQueryBuilder method in the DAO object. So that's a good starting point. How would you test update / create / deletion?

Two sets of tests:

Integration tests

Test that your app integrates with QB and the DB. Call the actual live method, hit the DB, and test the results.

So…

  • create a record using your app. Then use queryExecute to get the values back out, and test it's what you expect to have been saved.
  • Insert a row into the DB (either via your create mechanism, or via queryExecute). Update it via your app. Use queryExecute to get the values back out, and test it's what you expect to have been updated.
  • Insert a row into the DB (either via your create mechanism, or via queryExecute). Delete it via your app. Use queryExecute to try to get the values back out, and make sure they ain't there.
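For example, the first of those might look something like this (the repository method, table, and column names are invented for the sketch):

it("saves a new record with the expected values", () => {
    repository = new MyRepository()

    repository.create(name="Test widget", price=42)

    saved = queryExecute(
        "SELECT name, price FROM widgets WHERE name = :name",
        {name="Test widget"}
    )
    expect(saved.recordCount).toBe(1)
    expect(saved.price[1]).toBe(42)
})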

Unit tests

  • When you call your app's create method with [given inputs] then those inputs (or equivalents, if you need to transform them) are the ones passed to the expected method of the (mocked) object returned from your mocked getQueryBuilder call.
  • Same with update.
  • Same with delete.
it("deletes the data from storage", () => {
    testId = 17
    
    sut = new MyRepository()
    mockbox.prepareMock(sut)
    
    mockedQb = createMock("QB")
    expectedReturnFromDelete = "I dunno what QB returns; but use some predictable value"
    mockedQb.$("delete").$args(testId).$results(expectedReturnFromDelete)
    
    sut.$("getQueryBuilder").$results(mockedQb)
    
    result = sut.delete(testId)
    
    expect(result).toBe(expectedReturnFromDelete) // it won't have been returned unless testId was passed to QB.delete
})

That's a very simplistic test, obvs.

Oh: run all the integration tests in a transaction that you roll back once done.
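In TestBox that can be as simple as an aroundEach handler; a sketch:

aroundEach((spec) => {
    transaction {
        spec.body() // run the actual test…
        transactionRollback() // …then undo whatever it did to the DB
    }
})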

Here the unit tests don't hit the actual storage, so they are easier to write and maintain, and quick to run in your red/green/refactor cycle. You run these all the time when doing development work. The integration tests are fiddly and slow, so you only run those when you make changes to that tier of the app (DB schema changes that impact the app, generally; and then when putting in a pull request, merging it, and doing a build). It is vital that the code being integration-tested does not have any logic in it, because then you'd need to unit test it, which defeats the purpose of separating-out yer concerns.

[…]To add our two cents: we use QB and Quick all the time in integration tests to make sure that objects are created, updated, or deleted the way we want. Sometimes we will use cfmigrations to pre-populate things so we are not doing a whole create/update/destroy cycle on each test run (which can be awesome, but also slow)

[Agreed]

A lot of information to take in. I'll take my time to read this through. Thank you @Adam Cameron and @sknowlton for taking your time to answer my questions!

NP. And ping away with any questions / need for clarification.


[a few days pass]

[…]Thanks to your input I was able to refactor the "dao" layer through test-driven design!

I do have a general testing question though. Let's assume I have method foo in a service with parameters x, y, z with the following pseudocode:

function foo( x, y, z ) {
  /* does something */
  var args = ...
  
  /* validate, throw, etc... */
 
  return bar( 
    argumentCollection = args 
  );
}

Should the unit test mock out the bar function and assert the args parameter(s) and / or throw scenarios, or actually assert the return value of the foo function? Or is the latter case more of an integration test rather than a unit test?

Sorry if this is a noob question, been trying to wrap my head around this…

A handy thing to do here is to frame the tests as testing the requirement, not testing the code.

Based on your sample pseudo code, then you have a requirement which is "it does the foothing so that barthing can happen". I have purposely not used the method names there directly; those are implementation detail, but should describe the action taking place. It could be "it converts the scores into roman numerals": in your example "it converts the scores" is the call to foo, and "into roman numerals" is the call to bar.

If you frame it like that, your tests will be testing all the variations of foothing and barthing - one variation per test, eg:

  • x, y, z are valid and you get the expected result (happy path)
  • x is bung, expected exception is thrown
  • y is bung [etc]
  • z is bung [etc]
  • say there's a conditional within bar… one test for each true / false part of that. The true variant might be part of that initial happy path test though.
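Framed that way, the case labels might pan out like this (using the roman numerals example; one variation per test):

describe("it converts the scores into roman numerals", () => {
    it("converts valid scores to roman numerals", () => {/* happy path: x, y, z all valid */})
    it("throws the expected exception when x is invalid", () => {})
    it("throws the expected exception when y is invalid", () => {})
    it("throws the expected exception when z is invalid", () => {})
    it("handles [whatever the other branch of barthing's conditional represents]", () => {})
})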

This is how I read the pseudo code, given that bar looks like another method in the same class - probably a private method that is part of some refactoring to keep foo streamlined.

If your pseudocode was return dao.bar(argumentCollection = args), then I'd say that the DAO is a boundary of your application (eg, it is an adapter to a queryExecute call), and in this case you would mock-out dao.bar, and use spying to check that it received the correct arguments, based on the logic of foo. For example foo might multiply x by two... make sure bar receives 2x, etc.

Does that cover it / make sense? Am off to a(~nother) meeting now, so had to be quick there. I'll check back in later on.

It covers my question, Adam, thanks!

One thing I didn't have time to say... I used a DAO in that example specifically to keep my answer short. A DAO is by definition a boundary.

If it was something else like a RomanNumeralWriterService and it was part of your own application, just not internal to the class the method you're testing is in... it's not so clear cut as to whether to mock or use a live object.

If RomanNumeralWriterService is easy to create (say its constructor takes no or very simple arguments), and doesn't itself reach outside your application... then no reason to mock it: let it do its job. If however it's got a bunch of its own dependencies and config and will take a chunk of your test just to create it, or if it internally does its own logging or DB writing or reaching out to another service, or it's sloooow, then you mock it out. You don't want your tests messing around too much with dependencies that are not part of what yer testing, but it's not black or white whether to mock; it's a judgement call.
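A sketch of how that judgement call might land (all the class names here are invented):

it("writes the score as roman numerals", () => {
    writer = new RomanNumeralWriterService() // cheap and side-effect-free: use the real one
    auditService = createEmptyMock("AuditService") // config-heavy / does DB writes: mock it out

    sut = new ScoreService(writer, auditService)

    expect(sut.formatScore(14)).toBe("XIV")
})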

Yeah I was struggling with the thing you mention in your last paragraph. Service A has Service B as a dependency which itself has a lot of other dependencies (logging, dao, …). The setup for the test takes ages while the test method is 1 line of code :/.

Yeah, mock it out.

Make sure whatever you mock-out is well-test-covered as well in its own right, but it's not the job of these tests to faff around with that stuff.


That was it. I think that was a reasonably handy discussion, so was worth preserving here. YMMV of course: lemme know.

Righto.

--
Adam


Thursday, 21 October 2021

Unit testing back-fill question: how thorough to be?

G'day:

I'm extracting this from a thread I started on the CFML Slack channel (it's not a CFML-specific question, before you browse-away from here ;-), because it's a chunk of text/content that possibly warrants preservation beyond Slack's content lifetime, plus I think the audience here (such as it is, these days), are probably more experienced than the bods on the Slack channel.

I've found myself questioning my approach to a test case I'm having to backfill in our application. I'm keen to hear what people think.

I am testing a method like this:

function testMe() {
    // irrelevant lines of code here
    
    if (someService.isItSafe()) {
        someOtherService.doThingWeDontWantToHappen()

        things = yetAnotherService.getThings()
        for (stuff in things) {
            stuff.doWhenSafe()
            if (someCondition) {
                return false
            }
        }
        moarService.doThisOnlyWhenSafe()
        
        return true
    }
    
    return false
}

That implementation is fixed for now, so I can't refactor it to make it more testable just yet. I need to play the specific hand I've been dealt.

The case I am currently implementing is for when it's not safe (someService.isItSafe() returns false to indicate this). When it's not safe, then a subprocess doesn't run (all the stuff in the if block), and the method returns false.

I've thrown-together this sort of test:

it("doesn't do [a description of the overall subprocess] if it's not safe to do so", () => {
    someService = mockbox.createMock("someService")
    someService.$("isItSafe").$results(false)

    someOtherService = mockbox.createMock("SomeOtherService")
    someOtherService.$(method="doThingWeDontWantToHappen", callback=() => fail("this should not be called if it's not safe"))
    
    yetAnotherService = mockbox.createMock("YetAnotherService")
    moarService = mockbox.createMock("MoarService")
    
    serviceToTest = new ServiceToTest(someService, someOtherService, yetAnotherService, moarService)
    
    result = serviceToTest.testMe()
    
    expect(result).toBeFalse()
})

In summary, I'm forcing isItSafe to return false, and I'm mocking the first part of the subprocess to fail if it's called. And obviously I'm also testing the return value.

That works. Exactly as written, the test passes. If I tweak it so that isItSafe returns true, then the test fails with "this should not be called if it's not safe".

However I'm left wondering if I should also mock yetAnotherService.getThings, Stuff.doWhenSafe and moarService.doThisOnlyWhenSafe to fail if they are called. They are part of the subprocess that must not run as well (well I guess it's safe for getThings to run, but not the other two).
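The belt-and-braces version would just add more canaries in the same vein; note that checking doWhenSafe needs getThings to return a mocked Stuff:

mockedStuff = createMock("Stuff")
mockedStuff.$(method="doWhenSafe", callback=() => fail("doWhenSafe should not be called if it's not safe"))
yetAnotherService.$("getThings").$results([mockedStuff])

moarService.$(method="doThisOnlyWhenSafe", callback=() => fail("doThisOnlyWhenSafe should not be called if it's not safe"))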

In a better world the subprocess code would all actually be in a method called doSubprocess, rather than inline in testMe, and the answer would be to simply make sure that wasn't called. We will get there, but for now we need the testing in place before we do that refactor.

So the things I'm wondering about are:

  • Is it "OK" to only fail on the first part of the subprocess, or should I belt and braces the rest of it too?
  • Am I being too completist to think to fail each step, especially given they will be unreachable after the first fail?
  • I do not know the timeframe for the refactor. Ideally it's "soon", but it's not scheduled. We have just identified that this code is too critical to not have tests, so I'm backfilling the test cases. If it was "soon", I'd keep the test brief (as it is now); but as it might not be "soon" I wonder if I should go the extra mile.
  • I guess once this test is in place, if we do any more maintenance on testMe, we will look at the tests first (TDD...), and adjust them accordingly if merely checking that doThingWeDontWantToHappen doesn't get called is no longer sufficient to test this case. But that's in an ideal world that we do not reside in yet, so I can't even be certain of that.

What do you think?

Righto.

--
Adam

Sunday, 5 September 2021

Testing: reasoning over "testing implementation detail" vs "testing features"

G'day:

I can't see how this article will be a very long one, as it only really dwells on one concept I had to reason through before I decided I wasn't talking horseshit.

I'm helping some community members with improving their understanding of testing, and I was reviewing some code that was in need of some test coverage. The code below is not the actual code - for obvious "plausible deniability" reasons - but it's pretty much the same situation and logic flow we were looking at, just with different business entities and the basis for the rule has been "uncomplicated" for the sake of this article:

function validateEmail() {
    if ((isInstanceOf(this.orderItems, "ServicesCollection") || isInstanceOf(this.orderItems, "ProductsCollection")) && !isValid("email", this?.email) ) {
        var hasItemWithCost = this.orderItems.some((item) => item.cost > 0)
        if (hasItemWithCost) { // (*)
            this.addError("Valid email is required")
        }
    }
}

// (*) see AdamT's comment below. I was erroneously calling .len() on hasItemWithCost due to me not having an operational brain. Fixed.

This function is called as part of an object's validation before it's processed. It's also slightly contrived, and the only real bit is that it's a validation function. For this example the business rule is that if any items in the order have a cost associated with them; then the order also needs an email address. Say the receipt needs to be emailed or something. The real world code was different, but this is a fairly common-knowledge sort of parallel to the situation.

All the rest of the code contributing to the data being validated already existed; we've just got this new validation rule around the email not always being optional now.

We're not doing TDD here (yet!), so it's a matter of backfilling tests. The exercise was to identify what tests need to be written.

We picked two obvious ones:

  • make an orderItems collection with no costs in it, and the email shouldn't be checked;
  • make an orderItems collection with at least one item having a cost in it, and the email should be present and valid;

But then I figured that we've skipped a big chunk of the conditional logic: we're ignoring the check of whether the collection is one of a ServicesCollection or a ProductsCollection. NB: unfortunately the order can comprise collections of two disparate types, and there's no common interface we can check against instead. There's possibly some tech debt there.

I'm thinking we have three more cases here:

  • the collection is a ServicesCollection;
  • the collection is a ProductsCollection;
  • the collection is neither of those, somehow. This seems unlikely, but it could conceivably be true.

Then I stopped and thought about whether by dissecting that if statement so closely, I am just chasing 100% line-of-code coverage. And worse: aren't the specifics of that part of the statement just implementation detail, rather than part of the actual requirement? The feature was "update the order system so that orders requiring a financial transaction require a valid email address". It doesn't say anything about what sort of collections we use to store the order.

Initially I concluded it was just implementation detail and there was no (test) case to answer to here. But it stuck in my craw that we were not testing that part of the conditional logic. I chalked that up to me being pedantic about making sure every line of code has coverage. Remember I generally use TDD when writing my code, so it's just weird to me to have conditional code with no specific tests, because the condition would only ever be written because a feature called for it.

But it kept gnawing at me.

Then the penny dropped (this was a few hours later, I have to admit).

It's not just testing implementation detail. The issue / confusion has arisen because we're not doing TDD.

If I rewind to when we were writing the function, and take a TDD approach, I'd be going "right, what's the first test case here?". The first test case - the reason why we want to type in isInstanceOf(this.orderItems, "ServicesCollection") - is because part of the stated requirement is implicit. When the client said "if any items in the order have a cost associated with them…" they were not explicit about "this applies to both orders for services as well as orders for products", because in the business's vernacular the distinction between types of orders isn't one that's generally considered. For a lot of requirements there's just the concept of "orders" (this is why there should be an interface for orders!). However the fully-stated requirement could more accurately be "if any items in an order for services have a cost associated with them…", followed by "likewise, if any items in an order for products have a cost associated with them…". It's this business information that informs our test cases, which would be better enumerated as:

  • it requires an email address if any items in a services order have an associated cost;
  • it requires an email address if any items in a products order have an associated cost;
  • it does not require an email address if the order is empty;
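Sketched as specs (orderWithItems and getErrors are invented helpers for the illustration; no email is set on the order in any of these cases):

it("requires an email address if any items in a services order have an associated cost", () => {
    order = orderWithItems(new ServicesCollection([{cost = 10}]))
    order.validateEmail()
    expect(order.getErrors()).toInclude("Valid email is required")
})

it("requires an email address if any items in a products order have an associated cost", () => {
    order = orderWithItems(new ProductsCollection([{cost = 10}]))
    order.validateEmail()
    expect(order.getErrors()).toInclude("Valid email is required")
})

it("does not require an email address if the order is empty", () => {
    order = orderWithItems(new ProductsCollection([]))
    order.validateEmail()
    expect(order.getErrors()).toBeEmpty()
})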

I had to chase up what the story was if the collection was neither of those types, and it turns out there wasn't a third type that was immune to the email address requirement, it's just - as per the nice clear test case! - the order might be empty, but the object as a whole still needs validation.

There is still "implementation detail" here - the code checking the collection type is obviously the implementation of that part of the business requirement: all code is implementation detail after all. The muddiness here arose because we were backfilling tests, rather than using TDD. Had we used TDD I'd not have been questioning myself when I was deciding on my test cases. The test cases are driven by business requirements; they implementation is driven by the test cases. And, indeed, the test implementation has to busy itself with implementation detail too, because to implement those tests we need to pass-in collections of the type the function is checking for.


As an aside, I was not happy with how busy that if condition was. It seemed to be doing too much to me, and mixing a coupla different things: type checks and a validity check. I re-reasoned the situation and concluded that the validation function has absolutely nothing to do if the email is there and is valid: this passes all cases right from the outset. So I will be recommending we separate that out into its own initial guard clause:

function validateEmail() {
    if (isValid("email", this?.email)) {
        return
    }
    if (isInstanceOf(this.orderItems, "ServicesCollection") || isInstanceOf(this.orderItems, "ProductsCollection")) {
        var hasItemsWithCost = this.orderItems.some((item) => item.cost > 0)
        if (hasItemsWithCost) {
            this.addError("Valid email is required")
        }
    }
}

I think this refactor makes things a bit easier to reason through what's going on. And note that this is a refactor, so we'll do it after we've got the tests in place and green.

I now wonder if this is something I should be looking out for more often? Is a compound if condition that is evaluating "asymmetrical" sub-conditions indicative of a code smell? Hrm… will need to keep an eye on this notion.

Now if only I could get that OrderCollection interface created, the code would be tidier-still, and closer to the business requirements.

Righto.

--
Adam


PS: I actually doubted myself again whilst writing this, but by the time I had written it all out, I'm happy that the line of thinking and the test cases are legit.

Tuesday, 24 August 2021

Test coverage: it's not about lines of code

G'day

A coupla days ago someone shared this Twitter status with me:

I'll just repeat the text here for google & copy-n-paste-ability:

My personal opinion: Having 100% test coverage doesn't make your code more effective. Tests should add value but at some point being overly redundant and testing absolutely every line of code is ineffective. Ex) Testing that a component renders doesn't IN MY OPINION add value.

Emma's position here is spot on, IMO. There were also a lot of "interesting" replies to it. Quelle surprise. Go read some of them, but then give up when they get too repetitive / unhelpful.

In a similar - but slightly contrary - context, I was reminded of my earlier article on this topic: "Yeah, you do want 100% test coverage".

But hang on: on one hand I say Emma is spot on. On the other hand I cross-post an old article that seems to contradict this. What gives?

The point of differentiation is "testing absolutely every line of code" vs "100% test coverage". Emma is talking about lines of code. I'm talking about test cases.

Don't worry about testing lines of code. That's intrinsically testing "implementation". However do care about your test cases: the variations that define the feature you are working on. You've been asked to develop a feature and its variations, and you don't know you've delivered the feature (or, in the case of automated testing: that it continues to be delivered in a working state) unless you test it.

Now, yeah, sure: features are made of lines of code, but don't worry about those. A trite example I posited to a mate yesterday is this: consider this to be the implementation of your feature:

doMyFeature() {
    // line 1
    // line 2
    // line 3
    // line 4
    // line 5
    // line 6
    // line 7
    // line 8
    // line 9
    // line 10
}

I challenge you to write a test for "My Feature", and not by implication happen to test all those lines of code. But it's the feature we care about, not the lines of code.

On the other hand, let's consider a variant:

doMyFeature() {
    // line 1
    // line 2
    // line 3
    // line 4
    // line 5
    if (someVariantOfMyFeature) {
        // line 6
    }
    // line 7
    // line 8
    // line 9
    // line 10
}

If you run your test coverage analysis and see that line 6 ain't tested, this is not a case of wringing one's hands about line 6 of the code not being tested; it's that yer not testing the variation of the feature. It's the feature coverage that's missing. Not the lines-of-code coverage.

Intrinsically your code coverage analysis tooling probably marks individual lines of code as green or red or whatever, but only if you look that closely. If you zoom out a bit, you'll see that the method the lines of code are in is either green or orange or red; and out further the class is likewise green / orange / red, and probably says something like "76% coverage". The tooling necessarily needs to work in lines-of-code because it's a dumb machine, and those are the units it can work on. You, the programmer, don't need to focus on the lines of code. You have a brain, and what the report is saying is "you've not tested part of your feature", and it's saying "the bit of the feature that is represented by these lines of code". That bit. You're not testing it. Go test yer feature properly.

There are maybe parts of the implementation of a feature I won't test. Your DAOs and your other adapters for external services? Maybe not something I'd test during the TDD cycle of things, but it might still be prudent to chuck in an integration test for those. I mean they do need to work and continue to work. But I see this as possibly outside of feature testing. Maybe? Is it?

Also - now I won't repeat too much of that other article - there's a difference between "coverage" and "actually tested". Responsible devs can mark some code as "won't test" (eg in PHPUnit with a @codeCoverageIgnore), and if the reasons are legit then they're "covered" there.

Why I'm writing this fairly short article when it's kinda treading on ground I've already covered is because of some of the comments I saw in reply to Emma. Some devs are a bit shit, and a bit lazy, and they will see what Emma has written, and their takeaway will be "basically I don't need to test x because x is some lines of code, and we should not be focusing on lines of code for testing". That sounds daft, but I know people that have rationalised their way out of testing perfectly testable (and test-necessary) code due to "oh we should not test implementation details", or "you're just focusing on lines of code, that's wrong", and that sort of shit. Sorry pal, but a feature necessarily has an implementation, and that's made out of lines of code, so you will need to address that implementation in yer testing, which will innately report that those lines of code are "covered".

Also there's this notion of "100% coverage is too high, so maybe 80% is fine" (possibly using the Pareto Principle, possibly just pulling a number out of their arses). Serious question: which 80%? If you set that goal, then a lot of devs will go "aah, it's in that 20% we don't test". Or they'll test the easy 80%, and leave the hard 20% - the bit that needs testing - to be the 20% they don't test (as I said in the other article: cos they basically can't be arsed).

Sorry to be a bit of a downer when it comes to devs' level of responsibility / professionalism / what-have-you. There are stacks of devs out there who rock and just "get" what Emma (et al) is saying, and crack on with writing excellently-designed and well-tested features. But they probably didn't need Emma's messaging in the first place. I will just always think some more about advice like this, and wonder how it can be interpreted, and then be a contrarian and say "well actually…" (cringe; OK, I hope I'm not actually doing that here?), and offer a different informed observation.

So my position remains: set your sights on 100% feature coverage. Use TDD, and your code will be better and leaner and you'll also likely end up with 100% LOC coverage too (but that doesn't matter; it's just a test-implementation detail ;-).

Righto.

--
Adam

Thursday, 8 April 2021

TDD and external services

G'day

You might have noticed I spend a bit of my time encouraging people to use TDD, or at the very least making sure yer code is tested somehow. But use TDD ;-)

As an interesting aside, I recently failed a technical interview because the interviewer didn't feel I was strong enough at the testing side of things. Given what I see around the industry… that seems to be a moderately high bar yer setting for yerselves there, peeps. Or perhaps I'm just shit at articulating myself. Hrm. But anyway.

OK, so I rattled out a quick article a few days ago - "TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode" - which revisits some existing ground and by-and-large is not relevant to what I'm going to say here, other than the "TDD & professionalism" being why I bang on about it so much. And you might think I bang on about it here, but I also bang on about it at work (when I have work I mean), and in my background conversations too. I try to limit it to only my technical associates, that said.

Right so Mingo hit me up in a comment on that article, asking this question:

Something I ran into was needing to access the external API for the tests and I understand that one usually uses mocking for that, right? But, my question is then: how do you then **know** that you're actually calling the API correctly? Should I build the error handling they have in their API into my mocked-up API as well (so I can test my handling of invalid inputs)? This feels like way too much work. I chose to just call the API and use a test account on there, which has its own issues, because that test account could be set up differently than the multiple different live ones we have. I guess I should just verify my side of things, it's just that it's nice when it's testing everything together.

Yep, good question. With new code, my approach to TDD is based on the public interface doing what's been asked of it. One can see me working through this process in my earlier article "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements". Here I'm building a web service end point - by definition the public interface to some code - and I am always hitting the controller (via the routing). And whatever I start testing, I just "fake it until I make it". My first test case here is "It needs to return a 200-OK status for GET requests on the /workshops endpoint", and the test is this:

/**
 * @testdox it needs to return a 200-OK status for successful GET requests
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturns200()
{
    $this->client->request('GET', '/workshops/');

    $this->assertEquals(Response::HTTP_OK, $this->client->getResponse()->getStatusCode());
}

To get this to pass, the first iteration of the implementation code is just this:

public function doGet() : JsonResponse
{
    return new JsonResponse(null);
}

The next case is "It returns a collection of workshop objects, as JSON", implemented thus:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturnsJson()
{
    $workshops = [
        new Workshop(1, 'Workshop 1'),
        new Workshop(2, 'Workshop 2')
    ];

    $this->client->request('GET', '/workshops/');

    $resultJson = $this->client->getResponse()->getContent();
    $result = json_decode($resultJson, false);

    $this->assertCount(count($workshops), $result);
    array_walk($result, function ($workshopValues, $i) use ($workshops) {
        $workshop = new Workshop($workshopValues->id, $workshopValues->name);
        $this->assertEquals($workshops[$i], $workshop);
    });
}

And the code to make it work shows I've pushed the mocking one level back into the application:

class WorkshopsController extends AbstractController
{

    private WorkshopCollection $workshops;

    public function __construct(WorkshopCollection $workshops)
    {
        $this->workshops = $workshops;
    }

    public function doGet() : JsonResponse
    {
        $this->workshops->loadAll();

        return new JsonResponse($this->workshops);
    }
}

class WorkshopCollection implements \JsonSerializable
{
    /** @var Workshop[] */
    private $workshops;

    public function loadAll()
    {
        $this->workshops = [
            new Workshop(1, 'Workshop 1'),
            new Workshop(2, 'Workshop 2')
        ];
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

(I've skipped a step here… the first iteration could/should be to mock the data right there in the controller, and then refactor it into the model, but this isn't about refactoring, it's about mocking).

From here I refactor further, so that instead of having the data itself in loadAll, the WorkshopCollection calls a repository, and the repository calls a DAO, which for now ends up being:

class WorkshopsDAO
{
    public function selectAll() : array
    {
        return [
            ['id' => 1, 'name' => 'Workshop 1'],
            ['id' => 2, 'name' => 'Workshop 2']
        ];
    }
}

The next step is where Mingo's question comes in. The next refactor is to swap out the mocked data for a DB call. We'll end up with this:

class WorkshopsDAO
{
    private Connection $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function selectAll() : array
    {
        $sql = "
            SELECT
                id, name
            FROM
                workshops
            ORDER BY
                id ASC
        ";
        $statement = $this->connection->executeQuery($sql);

        return $statement->fetchAllAssociative();
    }
}

But wait: if we do that, our unit tests will be hitting the DB. Which we are not gonna do. We've run out of things to directly mock as we're at the lower boundary of our application, and the connection object is someone else's code (Doctrine/DBAL in this case). We can't mock that, but fortunately this is why I have the DAO tier. It acts as the frontier between our app and the external service provider, and we still mock that:

public function testDoGetReturnsJson()
{
    $workshopDbValues = [
        ['id' => 1, 'name' => 'Workshop 1'],
        ['id' => 2, 'name' => 'Workshop 2']
    ];

    $this->mockWorkshopDaoInServiceContainer($workshopDbValues);

    // ... unchanged ...

    array_walk($result, function ($workshopValues, $i) use ($workshopDbValues) {
        $this->assertEquals($workshopDbValues[$i], $workshopValues);
    });
}

private function mockWorkshopDaoInServiceContainer($returnValue = []): void
{
    $mockedDao = $this->createMock(WorkshopsDAO::class);
    $mockedDao->method('selectAll')->willReturn($returnValue);

    $container = $this->client->getContainer();
    $workshopRepository = $container->get('test.WorkshopsRepository');

    $reflection = new \ReflectionClass($workshopRepository);
    $property = $reflection->getProperty('dao');
    $property->setAccessible(true);
    $property->setValue($workshopRepository, $mockedDao);
}

We just use a mocking library (baked into PHPUnit in this case) to create a runtime mock, and we put that into our repository.

The tests pass, the DB is left alone, and the code is "complete" so we can push it to production perhaps. But we are not - as Mingo observed - actually testing that what we are asking the DB to do is being done. Because all our tests mock the DB part of things out.

The solution is easy, but it's not done via a unit test. It's done via an integration test (or end-to-end test, or acceptance test, or whatever you wanna call it), which hits the real endpoint, which queries the real database, and gets the real data. Adjacent to that in the test we hit the DB directly to fetch the records we're expecting, and then we check that the JSON the end point returns represents the same data we manually fetched from the DB. This tests the SQL statement in the DAO, that the data fetched models OK in the repo, and that the model (WorkshopCollection here) applies whatever business logic is necessary to the data from the repo before passing it back to the controller to return with the response, which was requested via the external URL. IE: it tests end-to-end.

public function testDoGetExternally()
{
    $client = new Client([
        'base_uri' => 'http://fullstackexercise.backend/'
    ]);

    $response = $client->get('workshops/');
    $this->assertEquals(Response::HTTP_OK, $response->getStatusCode());
    $workshops = json_decode($response->getBody(), false);

    /** @var Connection */
    $connection = static::$container->get('database_connection');
    $expectedRecords = $connection->query("SELECT id, name FROM workshops ORDER BY id ASC")->fetchAll();

    $this->assertCount(count($expectedRecords), $workshops);
    array_walk($expectedRecords, function ($record, $i) use ($workshops) {
        $this->assertEquals($record['id'], $workshops[$i]->id);
        $this->assertSame($record['name'], $workshops[$i]->name);
    });
}

Note that despite me saying "it's not a unit test, it's an integration test", I'm still implementing it via PHPUnit. The testing framework should just provide testing functionality: it should not dictate what kind of testing you implement with it. And similarly not all tests written with PHPUnit are unit tests. They are xUnit-style tests, eg: in a class called SomethingTest, where the methods are prefixed with test and use assertion methods to implement the test constraints.

Also: why don't I just use end-to-end tests then? They seem more reliable. Yep, they are. However they are also more fiddly to write as they have more set-up / tear-down overhead, so they take longer to write. Also they generally take longer to run, and given TDD is supposed to be a very quick cadence of test / run / code / run / refactor / run, the less overhead the better. The slower your tests are, the more likely you are to switch to writing code and testing later once you need to clear your head. In the meantime your code design has gone out the window. Also unit tests are more focused - addressing only a small part of the codebase overall - and that has merit in itself. Also, I used a really, really trivial example here, but some end-to-end tests are really very tricky to write, given the overall complexity of the functionality being tested. I've been in the lucky place that at my last gig we had a dedicated QA development team, and they wrote the end-to-end tests for us, but this also meant that those tests were executed after the dev considered the tasks "code complete", and QA ran the tests to verify this. There is no definitive way of doing this stuff, that said.

To round this out, I'm gonna digress into another chat I had with Mingo yesterday:

Normally I'd say this:

Unit tests
Test logic of one small part of the code (maybe a public method in one class). If you were doing TDD and about to add a condition into your logic, you'd write a unit test to cover the new expectations that the condition brings to the mix.
Functional tests
These are a subset of unit tests which might test a broader section of the application, eg from the public frontier of the application (so like an endpoint) down to where the code leaves the system (to a logger, or a DB, or whatever). The difference between unit tests and functional tests - to me - is just how distributed the logic being tested is throughout the system.
Integration tests
Test that the external connections all work fine. So if you use the app's DB configuration, the correct database is usable. I'd personally consider a test an integration test if it only focused on a single integration.
Acceptance tests (or end-to-end tests)
Are to integration tests what functional tests are to unit tests: a broader subset. That test above is an end-to-end test, it tests the web server, the application and the DB.

And yes I know the usages of these terms vary a bit.

Furthermore, considering the distinction between BDD and TDD:

  • The BDD part is the nicely-worded case labels, which in theory (but seldom in practice, I find) are written in direct collaboration with the client user.
  • The TDD part is when in the design-phase they are created: with TDD it's before the implementation is written; I am not sure whether in BDD it matters or is stipulated.
  • But both of them are design / development strategies, not testing strategies.
  • The tests can be implemented as any sort of test, not specifically unit tests or functional tests or end-to-end tests. The point is the test defines the design of the piece of code being written: it codifies the expectations of the behaviour of the code.
  • BDD and TDD tests are generally implemented via some unit testing framework, be it xUnit (testMyMethodDoesSomethingRight), or Jasmine-esque (it("does something right", function (){})).

One can also do testing that is not TDD or BDD, but it's a less-than-ideal way of going about things, and I would imagine it results in subpar tests, fragmented test coverage, and tests that don't really help one understand the application, so are harder to maintain in a meaningful way. But they are still better than no tests at all.

When I am designing my code, I use TDD, and I consider my test cases in a BDD-ish fashion (except I do it on the client's behalf generally, and sadly), and I use PHPUnit (xUnit) to do so on PHP, and Mocha (Jasmine-esque) to do so on JavaScript.

Hopefully that clarifies some things for people. Or people will leap at me and tell me where I'm wrong, and I can learn the error in my ways.

Righto.

--
Adam

Sunday, 4 April 2021

Unit testing: tests are not much bloody use if they always pass

G'day:

I started back on the next article of my VueJS / Symfony / etc series this morning. And now it's 18:24 and I've made zero progress. Well I've written a different blog article in the middle of that ("TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode"), but that was basically just a procrastinary exercise, avoiding getting down to the other work.

I'm currently laughing (more "nervous giggling") at my mental juxtaposition of "TDD & professionalism" from that earlier article and the title of this one. I'm not feeling very professional round about now.

OK so I sat down to get cracking on this new article, and the first thing I did was re-run my tests to make sure they were all still working. This is largely due to some issues I had with the Vue Test Utils library about a week ago, which I will discuss in that next article. Anyhow, everything was green. All good.

Next I opened my Vue component file, and remembered a slight tweak I wanted to make to my code. I have this (in WorkshopRegistrationForm.vue):

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM ? "Register" : "Processing…";
},

I'm not in love with those hard-coded strings there; I want to extract them and use named constants instead (same as with the form-state constants I already have there).

The first thing I did was to locate the test for when the button switches to "Processing…", and update it to be broken so I can expect the change. Basically I figured I'd change the label to be something different, see the test fail, update the code to use a constant with the "different" value, see the tests pass, and then change the test back to expect the "Processing…" value, and then the const value in the code. Sometimes that's all the test change that's needed. And in hindsight I'm glad I did it.

The test method is thus (from test/unit/workshopRegistration.spec):

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let lastLabel;
        component.vm.$watch("submitButtonLabel", (newValue) => {
            lastLabel = newValue;
        });

        let lastFormState;
        component.vm.$watch("isFormDisabled", (newValue) => {
            lastFormState = newValue;
        });

        await submitPopulatedForm();

        expect(lastLabel).to.equal("Processing…");
        expect(lastFormState).to.be.true;
    });
});

The line in question is that second to last expectation, and I just changed it to be:

expect(lastLabel).to.equal("Processing…TEST_WILL_FAIL");

And I ran the test:

 DONE  Compiled successfully in 3230ms

  [=========================] 100% (completed)

 WEBPACK  Compiled successfully in 3230ms

 MOCHA  Testing...



  Testing WorkshopRegistrationForm component
    Testing form submission
       should disable the form and indicate data is processing when the form is submitted


  1 passing (52ms)

 MOCHA  Tests completed successfully

root@9b1e15054be3:/usr/share/fullstackExercise#

Umm… hello?

Note: in the real situation I ran all the tests. It's not just a case of me running the wrong test or something completely daft. Although bear with me, there's def some daftness to come.

I did some fossicking around and putting some console.log entries about the place, and narrowed it down to how I had "fixed" these tests the last time I had issues with them. Previously the tests were running too quickly, and the Workshop listing had not been returned from the remote call in time for the test to try to submit the form, and any tests that relied on filling-out the form went splat cos there were not (yet) any workshops to select. OK OK, hang on: this is what I'm talking about:

Those come from a remote call, so the data arrives asynchronously.

My fix was this bit in that test:

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        //...
    });
});

I was being "clever" and watching for when the workshops data finally arrived, waited for the options to populate, then we're good to run the test code. A whole bunch of the tests needed this. Now I hasten to add that I did thoroughly test this strategy when I updated all the tests. I made them all fail one of their expectations, watched the tests fail, then fixed the assertions and watched them pass. It's not like I made this change and just went "yeah that (will) work OK on my machine".

So what was the problem? Can you guess? Looking now, the tests do make a certain assumption.

Well. So my original issue was the code I was testing was running slow, so I changed the tests to wait for a change, and then run. And last week I tweaked my Docker settings to speed up all my containers. Now the code isn't slow. So now the workshops data is already loaded before the test code gets to that watch. So… there's nothing to watch. I started watching too late. I proved this to myself by slowing the remote call down again, and suddenly the tests started working again (ie: that test started to fail like I wanted it to).

It occurred to me then I had solved the original issue the wrong way. I was thinking synchronously about an asynchronous problem. I can't know if the data will arrive before or after my test runs. Just that at some time it is promised to arrive. Aha!

The data was already coming back in a promise (from WorkshopDAO.js):

selectAll() {
    return this.client.get(this.config.workshopsUrl)
        .then((response) => {
            return response.data;
        });
}

The problem is that by the time it bubbles back through DAO › Repository › Service › Component, I'd ditched the promise and just waited for the value (WorkshopRegistrationForm.vue):

async mounted() {
    this.workshops = await this.workshopService.getWorkshops();
},

And I needed that this.workshops to just be the eventual array of objects, because I have a v-for looping over it. And v-for ain't clever enough to take the promise of an array; it needs the actual array (this is from the same file, just further up at line 82):

<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I knew what I needed to do in the test. Instead of the watch, I just needed to append another then handler to the promise. Then whether or not the data has arrived back yet, the handler would run either straight away or once the data got there. But how do I get hold of that promise?

In the end I cheated: (again, same file, but a new version of it):

data() {
    return {
        //...
        promisedWorkshops: null,
        workshops: [],
        //...
    };
},
//...
async mounted() {
    this.promisedWorkshops = this.workshopService.getWorkshops();
    this.workshops = await this.promisedWorkshops;
},

I put the promise into the component's data as well as the values :-)

And the test becomes (from test/unit/workshopRegistration.spec again):

it("should disable the form and indicate data is processing when the form is submitted", async () => {
    component.vm.$watch("workshops", async () => {
    await component.vm.promisedWorkshops.then(async () => {
        await component.vm.$nextTick();

As I said above, I just slap all the code in a then handler instead of a watch callback. The rest of the code is the same. I need to wait that tick because the options don't render until the next Vue tick after the data arrives.

That's a much more semantically-appropriate (and less hacky) way of addressing this issue. I'm reasonably pleased with that as a solution. For now.

Having learned my lesson I went back and retested everything in both a broken and working state, with an instant response time, and a very delayed response time on the remote call. The tests seem stable now.

Until I find the next thing wrong with them, anyhow.

OK that's enough staring at code on the screen for the day. I'm gonna stare at a game on the screen instead now.

Righto.

--
Adam

Wednesday, 10 March 2021

TDDing the reading of data from a web service to populate elements of a Vue JS component

G'day:

This article leads on from "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements", which itself continues a saga I started back with "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 1: Intro & Nginx" (that's a 12-parter), and a coupla other intermediary articles also related to this body of work. What I'm doing here specifically leads on from "Vue.js: using TDD to develop a data-entry form". In that article I built a front-end Vue.js component for a "workshop registration" form, submission, and summary:


Currently the front-end transitions work, but it's not connected to the back-end at all. It needs to read that list of "Workshops to attend" from the DB, and it needs to save all the data between form and summary.

In the immediately previous article I set up the endpoint the front end needs to call to fetch that list of workshops, and today I'm gonna remove the mocked data the form is using, and get it to use really-real data from the DB, via that end point.

To recap, this is what we currently have:

In the <template/> section of WorkshopRegistrationForm.vue:

<label for="workshopsToAttend" class="required">Workshops to attend:</label>

<select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
    <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
</select>

And in the code part of it:

mounted() {
    this.workshops = this.workshopService.getWorkshops();
},

And WorkshopService just has some mocked data:

getWorkshops() {
    return [
        {value: 2, text:"Workshop 1"},
        {value: 3, text:"Workshop 2"},
        {value: 5, text:"Workshop 3"},
        {value: 7, text:"Workshop 4"}
    ];
}

In this exercise I will be taking very small increments in my test / code / test / code cycle here. It's slightly arbitrary, but it's just because I like to keep separate the "refactoring" parts from the "new development parts", and I'll be doing both. The small increments help surface problems quickly, and also assist my focus so that I'm less likely to stray off-plan and start implementing stuff that is not part of the increment's requirements. This might, however, make for tedious reading. Shrug.

We're gonna refactor things a bit to start with, to push the mocked data closer to the boundary between the application and the DB. We're gonna follow a similar tiered application approach as the back-end work, in that we're going to have these components:

WorkshopRegistrationForm.vue
The Vue component providing the view part of the workshop registration form.
WorkshopService
This is the boundary between WorkshopRegistrationForm and the application code. The component calls it to get / process data. In this exercise, all it needs to do is to return a WorkshopCollection.
WorkshopCollection
This is the model for the collection of Workshops we display. It loads its data via WorkshopsRepository.
Workshop
This is the model for a single workshop. Just an ID and name.
WorkshopsRepository
This deals with fetching data from storage, and modelling it for the application to use. It knows about the application domain, and it knows about the DB schema, and it translates between the two.
WorkshopsDAO
Because the repository has logic in it that we will be testing, the code that actually makes the calls to the DB connector has been extracted into this DAO class, so that when testing the repository it can be mocked-out.
Client
This is not our code. We're using Axios to make calls to the API, and the DAO is a thin layer around this.
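
To orient ourselves before we start moving code around: the call chain will end up flowing like this (just a sketch of the layering for the blog version, not real code):

// WorkshopRegistrationForm.vue
//   -> WorkshopService.getWorkshops()
//        -> WorkshopCollection.loadAll()
//             -> WorkshopsRepository.selectAll()   // domain <-> storage mapping
//                  -> WorkshopsDAO.selectAll()     // thin wrapper, no logic
//                       -> Axios client            // the actual API call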

As a first step we are going to push that mocked data back to the repository. We can do this without having to change our tests or writing any data manipulation logic.

Here's the current test run:

  Tests of WorkshopRegistrationForm component
     should have a required text input for fullName, maxLength 100, and label 'Full name'
     should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     should have a required password input for password, maxLength 255, and label 'Password'
     should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     should list the workshop options fetched from the back-end
     should have a button to submit the registration
     should leave the submit button disabled until the form is filled
     should disable the form and indicate data is processing when the form is submitted
     should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
     should display the registration summary 'template' after the registration has been submitted
     should display the summary values in the registration summary

We have to keep that green, and all we can do is move code around. No new code for now (other than a wee bit of scaffolding to let the app know about the new classes). Here's the first round of changes.

The deepest we currently go in the new code is the repository, which just returns the mocked data:

class WorkshopsRepository {

    selectAll() {
        return [
            {value: 2, text:"Workshop 1"},
            {value: 3, text:"Workshop 2"},
            {value: 5, text:"Workshop 3"},
            {value: 7, text:"Workshop 4"}
        ];
    }
}

export default WorkshopsRepository;

The WorkshopCollection grabs that data and uses it to populate itself. It extends the native Array class, so it can be used as an array by the Vue template code:

class WorkshopCollection extends Array {

    constructor(repository) {
        super();
        this.repository = repository;
    }

    loadAll() {
        let workshops = this.repository.selectAll();

        this.length = 0;
        this.push(...workshops);
    }
}

module.exports = WorkshopCollection;

And the WorkshopService has had the mocked values removed; in their place it tells its WorkshopCollection to load itself, and it returns the populated collection:

class WorkshopService {

    constructor(workshopCollection) {
        this.workshopCollection = workshopCollection;
    }

    getWorkshops() {
        this.workshopCollection.loadAll();
        return this.workshopCollection;
    }

    // ... rest of it unchanged...
}

module.exports = WorkshopService;

WorkshopRegistrationForm.vue has not changed, as it still receives an array of data to its this.workshops property, which is still used in the same way by the template code:

<template>
    <form method="post" action="" class="workshopRegistration" v-if="registrationState !== REGISTRATION_STATE_SUMMARY">
        <fieldset :disabled="isFormDisabled">

            <!-- ... -->

            <select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
                <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
            </select>

            <!-- ... -->

        </fieldset>
    </form>

    <!-- ... -->
</template>

<script>
// ...

export default {
    // ...
    mounted() {
        this.workshops = this.workshopService.getWorkshops();
    },
    // ...
}
</script>

Before we wire this up, we need to change our test to reflect where the data is coming from now. It used to stub the WorkshopService's getWorkshops method; now we need to stub the WorkshopsRepository's selectAll method instead:

workshopService = new WorkshopService();
- sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
+ sinon.stub(repository, "selectAll").returns(expectedOptions);

The tests now break as we have not yet implemented this change. We need to update App.vue, which has a bit more work to do to initialise that WorkshopService now, with its dependencies:

<template>
    <workshop-registration-form></workshop-registration-form>
</template>

<script>
import WorkshopRegistrationForm from "./WorkshopRegistrationForm";
import WorkshopService from "./WorkshopService";
import WorkshopCollection from "./WorkshopCollection";
import WorkshopsRepository from "./WorkshopsRepository";

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository()
    )
);

export default {
    name: 'App',
    components: {
        WorkshopRegistrationForm
    },
    provide: {
        workshopService: workshopService
    }
}
</script>

Having done this, the tests are passing again.

Next we need to deal with the fact that so far the mocked data we're passing around is keyed on how it is being used in the <option> tags it's populating:

return [
    {value: 2, text:"Workshop 1"},
    {value: 3, text:"Workshop 2"},
    {value: 5, text:"Workshop 3"},
    {value: 7, text:"Workshop 4"}
]

value and text are derived from the values' use in the mark-up, whereas what we ought to be mocking is an array of Workshop objects, which have keys id and name:

class Workshop {

    constructor(id, name) {
        this.id = id;
        this.name = name;
    }
}

module.exports = Workshop;

We will need to update our tests to expect this change:

- sinon.stub(repository, "selectAll").returns(expectedOptions);
+ sinon.stub(repository, "selectAll").returns(
+     expectedOptions.map((option) => new Workshop(option.value, option.text))
+ );

The tests fail again, so we're ready to change the code to make the tests pass. The repo now returns objects:

class WorkshopsRepository {

    selectAll() {
        return [
            new Workshop(2, "Workshop 1"),
            new Workshop(3, "Workshop 2"),
            new Workshop(5, "Workshop 3"),
            new Workshop(7, "Workshop 4")
        ];
    }
}

And we need to make a quick change to the stubbed saveWorkshopRegistration method in WorkshopService too, so that the selectedWorkshops are now filtered on id, not value:

saveWorkshopRegistration(details) {
    let allWorkshops = this.getWorkshops();
    let selectedWorkshops = allWorkshops.filter((workshop) => {
-       return details.workshopsToAttend.indexOf(workshop.value) >= 0;
+       return details.workshopsToAttend.indexOf(workshop.id) >= 0;
    });
    // ...
    // ...

And the template now expects them:

- <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
+ <option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I almost forgot about this, but the summary section also uses the WorkshopCollection data, and currently it's also coded to expect mocked workshops with value/text properties, rather than id/name ones. So the test needs updating:

it("should display the summary values in the registration summary", async () => {
    const summaryValues = {
        registrationCode : "TEST_registrationCode",
        fullName : "TEST_fullName",
        phoneNumber : "TEST_phoneNumber",
        emailAddress : "TEST_emailAddress",
-        workshopsToAttend : [{value: "TEST_workshopToAttend_VALUE", text:"TEST_workshopToAttend_TEXT"}]
+        workshopsToAttend : [{id: "TEST_workshopToAttend_ID", name:"TEST_workshopToAttend_NAME"}]
    };

    // ...

    let ddValue = actualWorkshopValue.find("ul>li");
    expect(ddValue.exists()).to.be.true;
-    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].value);
+    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].name);

    // ...
});

And the code change in the template:

- <li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.value">{{workshop.text}}</li>
+ <li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.id">{{workshop.name}}</li>

And the tests pass again.

Now we will introduce the WorkshopsDAO, with the data it will get back from the API call mocked. It'll return this to the WorkshopsRepository, which will now do the modelling:

class WorkshopsDAO {

    selectAll() {
        return [
            {id: 2, name: "Workshop 1"},
            {id: 3, name: "Workshop 2"},
            {id: 5, name: "Workshop 3"},
            {id: 7, name: "Workshop 4"}
        ];
    }
}

export default WorkshopsDAO;

We also now need to update the test initialisation to pass a DAO instance into the repository, and for now we can remove the Sinon stubbing because the DAO is appropriately stubbed already:

- let repository = new WorkshopsRepository();
- sinon.stub(repository, "selectAll").returns(
-     expectedOptions.map((option) => new Workshop(option.value, option.text))
- );

workshopService = new WorkshopService(
    new WorkshopCollection(
-       repository
+       new WorkshopsRepository(
+           new WorkshopsDAO()
+       )
    )
);

That's the only test change here (and the tests now break). We'll update the code to pass a DAO in when we initialise WorkshopService's dependencies, and also implement the modelling code in the WorkshopsRepository class's selectAll method:

let workshopService = new WorkshopService(
    new WorkshopCollection(
-       new WorkshopsRepository()
+       new WorkshopsRepository(
+           new WorkshopsDAO()
+       )
    )
);

And the repository:

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        return this.dao.selectAll().map((unmodelledWorkshop) => {
            return new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name);
        });
    }
}

We're stable again. The next bit of development is the important bit: adding the code in the DAO that actually makes the call to the back-end API. We will be changing the DAO to be this:

class WorkshopsDAO {

    constructor(client, config) {
        this.client = client;
        this.config = config;
    }

    selectAll() {
        return this.client.get(this.config.workshopsUrl)
            .then((response) => {
                return response.data;
            });
    }
}

export default WorkshopsDAO;

And that Config class will be this:

class Config {
    static workshopsUrl = "http://fullstackexercise.backend/workshops/";
}

module.exports = Config;
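
For the blog version, it's worth showing how this presumably ends up wired together in App.vue: the DAO now needs the Axios client and that Config passing in. This is just a sketch; the definitive version is in the Github repo:

// App.vue <script> section (sketch): same tiered wiring as before, except
// the DAO now gets its client and config injected, rather than having
// mocked data hard-coded in it. Assumes axios is the client.
import axios from "axios";
import WorkshopService from "./WorkshopService";
import WorkshopCollection from "./WorkshopCollection";
import WorkshopsRepository from "./WorkshopsRepository";
import WorkshopsDAO from "./WorkshopsDAO";
import Config from "./Config";

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository(
            new WorkshopsDAO(axios, Config)
        )
    )
);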

Now. Because Axios's get method returns a promise, we're going to need to cater for this in the upstream methods that need to manipulate the data being promised. Plus, ultimately, WorkshopRegistrationForm.vue's code is going to receive a promise, not just the raw data. Ie: this code:

mounted() {
-    this.workshops = this.workshopService.getWorkshops();
+    this.workshops = this.workshopService.getWorkshops()
+        .then((workshops) => {
+            this.workshops = workshops;
+        });
},

A bunch of the tests rely on the state of the component after the data has been received:

  • should have a required text input for fullName, maxLength 100, and label 'Full name'
  • should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
  • should have a required text input for emailAddress, maxLength 320, and label 'Email address'
  • should have a required password input for password, maxLength 255, and label 'Password'
  • should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
  • should list the workshop options fetched from the back-end
  • should have a button to submit the registration
  • should leave the submit button disabled until the form is filled
  • should disable the form and indicate data is processing when the form is submitted
  • should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
  • should display the registration summary 'template' after the registration has been submitted
  • should display the summary values in the registration summary

In these tests we are gonna have to put the test code within a watch handler, so it waits until the promise returns the data (and accordingly workshops gets set, and the watch-handler fires) before trying to test with it, eg:

it("should list the workshop options fetched from the back-end", () => {
    component.vm.$watch("workshops", async () => {
    	await flushPromises();
        let options = component.findAll("form.workshopRegistration select[name='workshopsToAttend[]']>option");

        expect(options).to.have.length(expectedOptions.length);
        options.forEach((option, i) => {
            expect(option.attributes("value"), `option[${i}] value incorrect`).to.equal(expectedOptions[i].value.toString());
            expect(option.text(), `option[${i}] text incorrect`).to.equal(expectedOptions[i].text);
        });
    });
});

Other than that, the tests shouldn't need changing. Also: now that the DAO is no longer itself a mock, as it was in the last iteration, we need to go back to mocking it again; this time to return a promise of things to come:

sinon.stub(dao, "selectAll").returns(Promise.resolve(
    expectedOptions.map((option) => ({id: option.value, name: option.text}))
));
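
(An aside for the blog version: sinon also has a resolves helper for promise-returning stubs, which expresses the same thing slightly more tersely; either form should do the job:)

// equivalent to the returns(Promise.resolve(...)) version above
sinon.stub(dao, "selectAll").resolves(
    expectedOptions.map((option) => ({id: option.value, name: option.text}))
);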

It might seem odd that we are making changes to the DAO class but we're not unit testing those changes: the test changes here are only to accommodate the other objects' changes from being synchronous to being asynchronous. Bear in mind that these are unit tests, and the DAO only exists as a thin wrapper around the Axios client object; its purpose is to give us some of our own code that we can mock, to prevent Axios from making an actual API call when we test the objects above it. We'll do an integration test of the DAO separately, after we deal with the unit testing and implementation.

After updating those tests to deal with the promises, the tests collapse in a heap, but this is to be expected. We'll make them pass again by bubbling the promise returned from the DAO back up through the other classes.

For the code revisions to the other classes I'm just going to show an image of the diff between the files from my IDE. I hate using pictures of code, but this is the clearest way of showing the diffs. All the actual code is on Github @ frontend/src/workshopRegistration

Here in the repository we need to use the values from the promise, so we need to wait for them to be resolved.
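
Since I know pictures of code are annoying, here's roughly what the repository's selectAll ends up as; this is a sketch reconstructed from the diff, so check Github for the definitive code:

// WorkshopsRepository.selectAll: the DAO now returns a promise, so the
// mapping to Workshop objects moves into a then handler, and we pass
// the resulting promise back up to the collection
selectAll() {
    return this.dao.selectAll()
        .then((unmodelledWorkshops) => {
            return unmodelledWorkshops.map((unmodelledWorkshop) => {
                return new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name);
            });
        });
}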

Similarly with the collection, service, and component (in two places), as per below:

[Screenshot: IDE diffs showing the promise-handling changes to WorkshopCollection, WorkshopService, and WorkshopRegistrationForm.vue]

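In the same hedged spirit, sketches of the collection and service changes; each one does its own bit of work in a then handler, and passes the promise on up:

// WorkshopCollection.loadAll: repopulate the collection once the data
// arrives, returning the promise so callers can chain off it
loadAll() {
    return this.repository.selectAll()
        .then((workshops) => {
            this.length = 0;
            this.push(...workshops);
        });
}

// WorkshopService.getWorkshops: resolve to the populated collection
getWorkshops() {
    return this.workshopCollection.loadAll()
        .then(() => this.workshopCollection);
}
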
It's important to note that the template code in the component did not need changing at all: it was happy to wait until the promise resolved before rendering the workshops in the select box.

Having done all this, the tests pass wonderfully, but the UI breaks. Doh! But in an "OK" way:

[Screenshot: browser console showing the workshops fetch being blocked by CORS: the response has no Access-Control-Allow-Origin header]

Entirely fair enough. I need to send that header back with the response. Test first, of course:

class WorkshopsControllerTest extends TestCase
{

    private WorkshopsController $controller;
    private WorkshopCollection $collection;

    protected function setUp(): void
    {
        $this->collection = $this->createMock(WorkshopCollection::class);
        $this->controller = new WorkshopsController($this->collection);
    }

    /**
     * @testdox It makes sure the CORS header returns with the origin's address
     * @covers ::doGet
     */
    public function testDoGetSetsCorsHeader()
    {
        $testOrigin = 'TEST_ORIGIN';
        $request = new Request();
        $request->headers->set('origin', $testOrigin);

        $response = $this->controller->doGet($request);

        $this->assertSame($testOrigin, $response->headers->get('Access-Control-Allow-Origin'));
    }
}

And the controller change to make it pass:

class WorkshopsController extends AbstractController
{

    // ...

    public function doGet(Request $request) : JsonResponse
    {
        $this->workshops->loadAll();

        $origin = $request->headers->get('origin');
        return new JsonResponse(
            $this->workshops,
            Response::HTTP_OK,
            [
                'Access-Control-Allow-Origin' => $origin
            ]
        );
    }
}

That sorts it out. And… we're code complete.

But before I finish, I'm gonna do an integration test of that DAO method, just to automate proof that it does what it needs to do. I don't count "looking at the UI and going 'seems legit'" as testing.

import {expect} from "chai";
import Config from "../../src/workshopRegistration/Config";
import WorkshopsDAO from "../../src/workshopRegistration/WorkshopsDAO";
const client = require('axios').default;

describe("Integration tests of WorkshopsDAO", () => {

    it("returns the correct records from the API", async () => {
        let directCallObjects = await client.get(Config.workshopsUrl);

        let daoCallObjects = await new WorkshopsDAO(client, Config).selectAll();

        expect(daoCallObjects).to.eql(directCallObjects.data);
    });
});

And that passes.

OK, we're done here. I learned a lot about JS promises in this exercise. I hid a lot of it from you here, but working out how to get those tests working with the async stuff coming from the DAO took me an embarrassing amount of time and frustration. Once I nailed it, it all made sense though, and I was kinda like "duh, Cameron". So that's… erm… good…?

The next thing we need to do is to sort out how to write the data to the DB. But before we do that, I'm feeling a bit naked without having any data validation on either the form or the web service. Well: the web service end of things doesn't exist yet, but I'm going to start that work by putting the expected validation in. I'll put some on the client side first, though. As with most of the stuff in this current wodge of work, I do not have the slightest idea how to do form validation with Vue.js. But tomorrow I will start to find out (see "Vue.js and TDD: adding client-side form field validation" for the results of that). For now, once again, I have a beer appointment to get myself to.

Righto.

--
Adam

Thursday, 4 March 2021

Kahlan: getting it working with Symfony 5 and generating a code coverage report

G'day:

This is not the article I intended to write today. Today (well: I hoped to have it done by yesterday, actually) I had hoped to be writing about my happy times using TDD to implement a coupla endpoints I need in my Symfony-driven web service. I got 404 (chuckle) words into that, and then was blocked for about five hours trying to get Kahlan to play nice (I did have dinner in that time too, but it was at my desk, with a scowl on my face, googling stuff). And that put me at 1am, so I decided to go to bed. I resumed today an hour or so ago, and am just now in a position to get going again. But I've decided to document that lost six hours first.

I sat down to create a simple endpoint to fetch some data, and started by deciding on my first test case, which was "It needs to return a 200-OK status for GET requests on the /workshops endpoint". I knew Symfony did some odd shenanigans to be able to functionally test right from a route slug, rather than having tests directly instantiate controller classes and such. I checked the docs, and all this is done via an extension of PHPUnit, using WebTestCase. But I don't wanna use PHPUnit for this. Can you imagine my test case name? Something like testItNeedsToReturnA200OKStatusForGetRequestsOnTheWorkshopsEndpoint. Or I could break PSR-12/PSR-1 and make it (worse) with test_it_needs_to_return_a_200_OK_status_for_get_requests_on_the_workshops_endpoint (this is why I will never use phpspec). Screw that. I'm gonna work out how to do these Symfony WebTestCase-style tests in Kahlan.

Not so fast mocking PHPUnit there, Cameron

2021-03-06

Due to some - probably show-stopping - issues I'm seeing with Kahlan, I have been looking at PHPUnit some more. I just discovered the testdox reporting functionality it has, which makes test case names much clearer.

/** @testdox Tests the /workshops endpoint methods */
class WorkshopsEndPointTest {
    /** @testdox it needs to return a 200-OK status for GET requests */
    function testReturnStatus() {}
}

This will output in the test runs as:

[Screenshot: PHPUnit testdox output listing the human-readable test case names]

Perfect. I still prefer the style of code Kahlan uses, but… this is really good to know.

First things first: I rely heavily on PHPUnit's code coverage analysis, so I wanted to check out Kahlan's offering. The docs seem pretty clear ("Code Coverage"), and it seems I just want to be using the lcov integration Kahlan offers, like this:

vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info"
genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info

I need to install lcov first via my Dockerfile:

RUN apt-get install lcov --yes

OK so I did all that, and had a look at the report:

[Screenshot: lcov coverage report for the project]

Pretty terrible coverage, but it's working. Excellent. But drilling down into the report I see this:

[Screenshot: coverage report drill-down, showing src/Kernel.php as uncovered]

This is legit reporting, because I have no tests for the Kernel class; but equally, that class is generated by Symfony, and I don't want to cover it. So how do I exclude it from code coverage analysis? I'm looking for Kahlan's equivalent of PHPUnit's @codeCoverageIgnore. There's nothing in the docs, and all I found was a passing comment against an issue in Github asking the same question I was: "Exclude a folder in code coverage #321". The answer is to do this sort of thing in my kahlan-config.php file:

use Kahlan\Filter\Filters;
use Kahlan\Reporter\Coverage;
use Kahlan\Reporter\Coverage\Driver\Xdebug;

$commandLine = $this->commandLine();
$commandLine->option('no-header', 'default', 1);

Filters::apply($this, 'coverage', function($next) {
    if (!extension_loaded('xdebug')) {
        return;
    }
    $reporters = $this->reporters();
    $coverage = new Coverage([
        'verbosity' => $this->commandLine()->get('coverage'),
        'driver'    => new Xdebug(),
        'path'      => $this->commandLine()->get('src'),
        'exclude'   => [
            'src/Kernel.php'
        ],
        'colors'    => !$this->commandLine()->get('no-colors')
    ]);
    $reporters->add('coverage', $coverage);
});

That seems like a lot of messing around to do something that ought, to me, to be very simple. I will also note that Kahlan - currently - has no ability to suppress code coverage at a method or code-block level either (see "Skip individual functions in code coverage? #333"). This is not a deal-breaker for me in this work, but it would be a show-stopper on any of the codebases I have worked on in the last decade, as they've all been of dubious quality, and all needed some stuff to be actively "overlooked" as they're not testable as they currently stand; and we (/ I) like my baseline code coverage report to have 100% coverage reported, and be entirely green. This is so that if any omissions creep in, they're easy to spot (see "Yeah, you do want 100% test coverage"). Anyway, I'll make that change and omit Kernel from the analysis:

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info"

.................                                                 17 / 17 (100%)



Expectations   : 51 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 17 of 17 PASS in 0.527 seconds (using 7MB)

Coverage Summary
----------------

Total: 33.33% (1/3)

Coverage collected in 0.001 seconds


root@13038aa90234:/usr/share/fullstackExercise# genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info
Reading data file var/tmp/lcov/coverage.info
Found 2 entries.
Found common filename prefix "/usr/share/fullstackExercise"
Writing .css and .png files.
Generating output.
Processing file src/MyClass.php
Processing file src/Controller/GreetingsController.php
Writing directory view page.
Overall coverage rate:
  lines......: 33.3% (1 of 3 lines)
  functions..: 50.0% (1 of 2 functions)
root@13038aa90234:/usr/share/fullstackExercise#

And the report now doesn't mention Kernel:

[Screenshot: coverage report, no longer listing Kernel]

Cool.

Now to implement that test case. I need to work out how to run a Symfony request without using WebTestCase. Well, I say "I need to…": I mean I need to google someone else who's already done it, and copy them. I have NFI how to do it, and I'm not prepared to dive into Symfony code to find out. Fortunately someone has already cracked this one: "Functional Test Symfony 4 with Kahlan 4". It says "Symfony 4", but I'll check if it works on Symfony 5 too. I also happened back to the Kahlan docs, and they mention the same guy's solution ("Integration with popular frameworks › Symfony"). This one points to a library that encapsulates it (elephantly/kahlan-bundle), but that is actively version-constrained to only Symfony 4. Plus it's not seen any work since 2017, so I suspect it's abandoned.

Anyway, back to samsonasik's blog article. It looks like this is the key bit:

$this->request = Request::createFromGlobals();
$this->kernel  = new Kernel('test', false);
$request = $this->request->create('/lucky/number', 'GET');
$response = $this->kernel->handle($request);

That's how to create a Request and get Symfony's Kernel to run it. Easy. Hopefully. Let's try it.

namespace adamCameron\fullStackExercise\spec\functional\Controller;

use adamCameron\fullStackExercise\Kernel;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

describe('Tests of GreetingsController', function () {

    beforeAll(function () {
        $this->request = Request::createFromGlobals();
        $this->kernel  = new Kernel('test', false);
    });

    describe('Tests of doGet', function () {
        it('returns a JSON greeting object', function () {
            $testName = 'Zachary';

            $request = $this->request->create("/greetings/$testName", 'GET');
            $response = $this->kernel->handle($request);

            expect($response->getStatusCode())->toBe(Response::HTTP_OK);
        });
    });
});

I'm not getting too ambitious here, and it's not addressing the entire test case yet. I'm just making the request and checking its response status code.

And this just goes splat:

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --ff

E                                                                 18 / 18 (100%)


Tests of GreetingsController
  Tests of doGet
    ✖ it returns a JSON greeting object
      an uncaught exception has been thrown in `vendor/symfony/framework-bundle/Kernel/MicroKernelTrait.php` line 91

      message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` require(/tmp/kahlan/usr/share/fullstackExercise/src/config/bundles.php): Failed to open stream: No such file or directory"

        [NA] - vendor/symfony/framework-bundle/Kernel/MicroKernelTrait.php, line  to 91

Eek. I had a look into this, and the code in question is trying to do this:

$contents = require $this->getProjectDir().'/config/bundles.php';

And the code in getProjectDir is thus:

public function getProjectDir()
{
    if (null === $this->projectDir) {
        $r = new \ReflectionObject($this);

        if (!is_file($dir = $r->getFileName())) {
            throw new \LogicException(sprintf('Cannot auto-detect project dir for kernel of class "%s".', $r->name));
        }

        $dir = $rootDir = \dirname($dir);
        while (!is_file($dir.'/composer.json')) {
            if ($dir === \dirname($dir)) {
                return $this->projectDir = $rootDir;
            }
            $dir = \dirname($dir);
        }
        $this->projectDir = $dir;
    }

    return $this->projectDir;
}

The code starts in the directory of the current file, and traverses up the directory structure until it finds the directory with composer.json. If it doesn't find that, then - somewhat enigmatically, IMO - it just says "ah now, we'll just use the directory we're in now. It'll be grand". To me, if it expects to find what it's looking for by looking up the ancestor directory path and that doesn't work, it should throw an exception. Still. In the normal scheme of things this would work, cos the app's Kernel class - by default - seems to live in the src directory, which is one down from where composer.json is.

So why didn't it work? Look at the directory that it's trying to load the bundles from: /tmp/kahlan/usr/share/fullstackExercise/src/config/bundles.php. Where? /tmp/kahlan/usr/share/fullstackExercise/. Ain't no app code in there, pal. It's in /usr/share/fullstackExercise/. Why's it looking in there? Because the Kernel object that is initiating all this is at /tmp/kahlan/usr/share/fullstackExercise/src/Kernel.php. It's not the app's own one (at /usr/share/fullstackExercise/src/Kernel.php), it's all down to how Kahlan monkey-patches everything that's going to be called by the test code, on the off chance you want to spy on anything. It achieves this by loading the source code of the classes, patching the hell out of it, and saving it in that /tmp/kahlan. The only problem with this is that when Symfony traverses up from where the patched Kernel class is… it never finds composer.json, so it just takes a guess at where the project directory is. And it's not a well-informed guess.

I'm not sure who I blame more here, to be honest. Kahlan, for patching everything and running code from a different directory from where it's supposed to be; or Symfony, for its "interesting" way of working out where the project directory is. I have an idea here, Symfony: you could just ask me. Or even force me to tell it. Ah well. Just trying to be "helpful", I s'pose.

Anyway, I can exclude files from being patched, according to the docs:

  --exclude=<string>                  Paths to exclude from patching. (default: `[]`).

I tried sticking the path to Kernel in there: src/Kernel.php, and that didn't work. I hacked about in the code and it doesn't actually want a path, it wants the fully-qualified class name, eg: adamCameron\fullStackExercise\Kernel. I've raised a ticket for this with Kahlan, just to clarify the wording there: Bug: bad nomenclature in help: "path" != "namespace".

This does work…

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --ff --exclude=adamCameron\\fullStackExercise\\Kernel

E                                                                 18 / 18 (100%)


Tests of GreetingsController
  Tests of doGet
    ✖ it returns a JSON greeting object
      an uncaught exception has been thrown in `vendor/symfony/deprecation-contracts/function.php` line 25

      message:`Kahlan\PhpErrorException` Code(0) with message "`E_USER_DEPRECATED` Please install the \"intl\" PHP extension for best performance."

        [NA] - vendor/symfony/deprecation-contracts/function.php, line  to 25
        trigger_deprecation - vendor/symfony/framework-bundle/DependencyInjection/FrameworkExtension.php, line  to 253

This is not exactly what I want, but it's a different error, so Symfony is finding itself this time, and then just faceplanting again. However when I look into the code, it's this:

if (!\extension_loaded('intl') && !\defined('PHPUNIT_COMPOSER_INSTALL')) {
    trigger_deprecation('', '', 'Please install the "intl" PHP extension for best performance.');
}

// which in turn...

function trigger_deprecation(string $package, string $version, string $message, ...$args): void
{
    @trigger_error(($package || $version ? "Since $package $version: " : '').($args ? vsprintf($message, $args) : $message), \E_USER_DEPRECATED);
}

So Symfony is very quietly raising a flag suggesting I install that extension. But only as a deprecation notice, and even then it's @-ed out. Somehow Kahlan is getting hold of that and going "nonono, this is worth stopping for". No it ain't. Ticket raised: "Q: should trigger_error type E_USER_DEPRECATED cause testing to halt?".

Anyway, the point is a legit one, so I'll install the intl extension. I initially thought it was just a matter of slinging this in the Dockerfile:

RUN apt-get install --yes zlib1g-dev libicu-dev g++
RUN docker-php-ext-install intl

But that didn't work, I needed a bunch of Other Stuff too:

RUN apt-get install --yes zlib1g-dev libicu-dev g++
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl

(Thanks to the note in docker-php-ext-install intl fails #57 for solving that for me).

After rebuilding the container, let's see what goes wrong next:

root@58e3325d1a16:/usr/share/fullstackExercise# composer coverage
> vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --exclude=adamCameron\\fullStackExercise\\Kernel

..................                                                18 / 18 (100%)



Expectations   : 52 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 18 of 18 PASS in 0.605 seconds (using 12MB)

Coverage Summary
----------------

Total: 100.00% (3/3)

Coverage collected in 0.001 seconds


> genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info
Reading data file var/tmp/lcov/coverage.info
Found 2 entries.
Found common filename prefix "/usr/share/fullstackExercise"
Writing .css and .png files.
Generating output.
Processing file src/MyClass.php
Processing file src/Controller/GreetingsController.php
Writing directory view page.
Overall coverage rate:
  lines......: 100.0% (3 of 3 lines)
  functions..: 100.0% (2 of 2 functions)
root@58e3325d1a16:/usr/share/fullstackExercise#

(I've stuck a Composer script in for this, btw):

"coverage": [
    "vendor/bin/kahlan --lcov=\"var/tmp/lcov/coverage.info\" --exclude=adamCameron\\\\fullStackExercise\\\\Kernel",
    "genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info"
]

And most promising of all is this:

[Screenshot: coverage report, everything green at 100%]

All green! I like that.

And now I'm back to where I wanted to be, yesterday, as I typed that 404th word of the article I was meant to be working on. 24h later now.

Righto.

--
Adam