
Sunday 2 May 2021

How TDD and automated testing helped me solve an Nginx config problem I had created for myself

G'day:

I have a "website" I'm building on as part of a series of articles I'm writing about Lucee / CFWheels / Docker. I have a Docker container running Nginx which proxies requests for CFML code to a Docker container running Lucee.

Due to the nature of web applications, I have my web-accessible assets - JS, CSS, image and "entry point" index.cfm and Application.cfc files - in a public directory off my app root; and adjacent to that I have a src directory for my code, and vendor directory for third-party code (like stuff I install from ForgeBox via CommandBox).
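
So the directory layout is along these lines (a simplified sketch; the exact asset sub-directories don't matter):

(app root)
├── public/          <- web root: the only directory exposed to the web server
│   ├── index.cfm
│   ├── Application.cfc
│   └── css/, js/, images/
├── src/             <- my application code
└── vendor/          <- third-party code (eg: stuff installed from ForgeBox via CommandBox)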

This /public abstraction needs to be hidden from the end user: they're going to want to be browsing to - for example - http://example.com/, not http://example.com/public. So this means the proxy from Nginx to Lucee also needs to deal with that. It is imperative that no CFML files are publicly exposed other than the aforementioned index.cfm and Application.cfc

I'm terrible at configuring Nginx, and make a lot of mistakes. Some mistakes are obvious because Nginx flat-out refuses to start. Others are less obvious, and bleed out as "unexpected behaviour" later in the piece. Knowing this, I approached the exercise in a TDD fashion: identifying what cases need addressing and writing tests for them, and then doing the config work to make each pass. Note: my wording is ambiguous there: I did not write more than one test at a time. I wrote a test, then got the config to make that test pass. Then I wrote the next test and reconfigured to make that pass (whilst also keeping the earlier ones passing too). I've already detailed some of this in my earlier article "Adding TestBox, some tests and CFConfig into my Lucee container".
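
To give a feel for the sort of thing these tests check, here's a representative sketch rather than one of the actual specs (the URL and expected content are illustrative):

it("proxies requests for public .cfm files through to Lucee", function () {
    // illustrative URL and expected output; the real specs cover a range of
    // CFWheels and non-CFWheels URLs, path_info and query-string combinations
    var testUrl = "http://cfml-in-docker.frontend/test.cfm?testParam=expectedValue";

    cfhttp(method="get", url=testUrl, result="response");

    expect(response.fileContent).toInclude("Expected query param value: [expectedValue]");
});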

As a result of these efforts to get Nginx working how I expected it to, I ended up with these green tests:

(Ignore how I'm skipping some of these tests. This is down to a bug in CFWheels that doesn't report 404 situations with an actual 404 status code when in dev mode. That said, it does demonstrate how I identified a shortcoming in behaviour whilst TDDing the work.)

And that's great. At every step as I was configuring more and more of Nginx's handling of requests, I could see that any change I made had a) addressed the new requirement; b) not broken any previous requirement. And it was indeed the reality at times that something I tried fixed one issue, but broke something else.

Back to those tests I'm skipping. This showed to me that I was short some cases: CFWheels handles things differently from how I expect it to in some situations, so I figured I had better have two flavours of test: one set for proxying to non-CFWheels-handled URLs; another set for when the URLs are within CFWheels' domain. And I'm glad I did make this call, because it showed up a bug in my Nginx config:

And when I checked those URLs, I saw the problem:

expectedParamValue = "expectedValue"
// ... rest of test not relevant here
expect(response.fileContent).toInclude(
    "Expected query param value: [#expectedParamValue#]",
    "Query parameter value was incorrect (URL: #testUrl#)"
)

But what I was seeing at that URL was this:

Expected query param value: [expectedValue?testParam=expectedValue]

The query string part (including the ?) was being appended to the URL sent to Lucee twice (same problem for both those failing tests). I was pretty puzzled how my previous non-Wheels test was passing, and it still seemed legit. Bemusing. However I was actively appending the query string in my proxy_pass URL, so that was "clearly wrong":

proxy_pass  http://cfml-in-docker.lucee:8888/public$fastcgi_script_name$is_args$args;

I got rid of that, and figured I had better go and check why those other tests were passing. But first I re-ran the tests here, to check that change:


Dammit. Now the CFWheels-specific tests are passing, but the non-Wheels ones are failing. Time for me to RTFM, cos I'm clearly doing something wrong here.

The first thing I'm getting wrong is here:

proxy_pass  http://cfml-in-docker.lucee:8888/public$fastcgi_script_name;

$fastcgi_script_name is a PHP thing, and whilst it coincidentally holds the URI I want, it's the wrong thing to have here. So I put $request_uri back in there.

Right, and that broke all the path_info CFWheels tests, so that wasn't right either. I decided to read more closely. It turns out that $request_uri is the whole original URI (including the path_info and query string), so it ignores my rewrites, and it was the reason the query params were getting doubled up. In my rewrite I had this:

location @rewrite {
    rewrite ^/(.*)? /index.cfm$request_uri last;
    rewrite ^ /index.cfm last;
    return 404;
}
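
For a concrete request, the difference between the variables in play here works out roughly like this (my reading of the Nginx docs; worth double-checking against them):

# for a request to /some/path?testParam=expectedValue:
#
#   $request_uri   ->  /some/path?testParam=expectedValue   (the original request, query string and all; ignores rewrites)
#   $uri           ->  the current URI, normalised, *without* the query string, and it tracks rewrites
#   $is_args$args  ->  ?testParam=expectedValue
#
# rewriting to /index.cfm$request_uri bakes "?testParam=expectedValue" into the rewritten path,
# and the proxy_pass then appends $is_args$args on top of that: hence the doubled-up query string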

I just wanted $uri, which is just the document URI part of the requested URI, and it also reflects any changes made to that URI during rewrites and what-have-you. So once I used that in my rewrite and for my proxy_pass URL, the tests now look better:

I've abbreviated how long it took me to work this out, and how many cycles of trial and error it took me. Having those automated tests in place was gold, because after each iteration I knew how wrong I was - for all cases - in half a second. I didn't need to manually go "OK, did it work for this?", "did it break this other one?", etc, for 20-odd tests.

It was also a big help to me to take the TDD path here, and stop and think & reason about exactly what my expectations ought to be for each of the cases I had. It also led me to add more cases, such as the combinations of "it has both path_info and query parameters", as well as realising the path through the Nginx config was different for URLs aimed at Wheels (which are completely rewritten), and the ones directly to files in the public directory. I could easily cover both cases by duplicating the tests and changing the URLs slightly.

Things seem to be working now, but if I find something else wrong, I will first work out what my expectations of correct behaviour are, and write a quick test for them. Then I'll fix it (without breaking anything else).

For now though: I'm fed up with Nginx & CFML & CFWheels and I'm gonna do something else for a while. But I'll be back to it later this afternoon: I'm well behind where I want to be with this stuff, and I'm using the bank-holiday weekend to catch up a bit.

The "final" state of my Nginx site config is (docker/nginx/sites/default.conf):

server {
    listen 80;
    listen [::]:80;

    #rewrite_log on;

    server_name cfml-in-docker.frontend;
    root /usr/share/nginx/html;
    index index.html index.cfm;

    resolver 127.0.0.11;

    location / {
        try_files $uri $uri/ @rewrite;
    }

    location @rewrite {
        rewrite ^/(.*)? /index.cfm$uri last;
        rewrite ^ /index.cfm last;
        return 404;
    }

    location ~ \.(?:cfm|cfc)\b {
        proxy_http_version  1.1;
        proxy_set_header    Connection "";
        proxy_set_header    Host                $host;
        proxy_set_header    X-Forwarded-Host    $host;
        proxy_set_header    X-Forwarded-Server  $host;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;     ## CGI.REMOTE_ADDR
        proxy_set_header    X-Forwarded-Proto   $scheme;                        ## CGI.SERVER_PORT_SECURE
        proxy_set_header    X-Real-IP           $remote_addr;
        expires             epoch;

        proxy_pass  http://cfml-in-docker.lucee:8888/public$uri$is_args$args;
    }

    location ~ /\.ht {
        deny all;
    }
}

Righto, where's me shooting game?

--
Adam

Sunday 11 April 2021

TDD: eating the elephant one bite at a time

G'day:

I've got another interesting reader comment to address today. My namesake Adam Tuttle has sent through this wodge of questions, attached to my earlier article "TDD is not a testing strategy":

You weren't offering to teach anyone about TDD in this post, but hey... I'm here, you're here, I have questions... Shall we?

One of the things I struggle with w/r/t TDD is the temptation to test every. single. action. For example, a large, complex form-save. Dozens, possibly 100 fields. Whether that ends up saved via ORM entities or queries, chances are good that since the form is so large the logic is a bit beyond a dirt-simple single-CRUD query. Multiple relationships, order of operations, permissions to modify different fields, etc.

My gut reaction is to skip the unit-testing layer and jump up to an integration or E2E test: submit the form, then view the detail-view (or re-open the record for editing) and assert that the values changed have persisted and they are what you're expecting, where you're expecting it, on the latter view.

BUT doesn't that almost entirely eliminate the possibility of using mocks to make the tests fast(er), the base-state predictable, and to not leave a mess in a designated testing db/env? My (and I mean this literally!) feeble, bad-at-TDD brain doesn't comprehend what a good solution is to this problem.

Unless the solution is to not test that aspect of the code? I fully subscribe to the "100% test coverage is a fools errand" ethos, so perhaps this is something that should just not be tested; and save the testing for things that are doing "interesting" algorithmic work? (not-crud)

Since I specifically mentioned permission to edit a certain field in my example, I guess I should say that stands out to me as something I would likely want to test. Thinking about it now, my brain wants to architect a system that accepts the user object and the field name as inputs and returns a boolean for editable or not. Easy enough to implement and you're basically changing the conditional in the save method from "if user has X permission, entity.setProperty(newval)" to "if evaluatePermissions(user, property) = true, entity.setProperty(newval)", so there's no big mental leap to the next developer to read the code... but it also seems hairy to separate the permission logic from the form-save logic, not because of the separation, but because it leads towards combining the permission logic of lots of disparate and unrelated forms. I'm not seeing how that could be cleanly implemented.

So yeah. There's a can of worms for you. What do you make of that?

Nice one. There's a lot of work there, so I am going to approach it how I'd approach addressing any other requirements: a bit at a time. Like I'm doing TDD. Except I've NFI how I can write tests for a blog article, so just imagine that part. Also remember that TDD is not a testing strategy, it's a design strategy, so my TDD-ish approach here is focusing on identifying cases, and addressing them one at a time. OKOK, this is torturing my fixation with TDD a bit. Sorry.

You weren't offering to teach anyone about TDD in this post

You weren't offering to teach anyone about TDD in this post. OK, so first point. I'm always open to excuses to think about TDD practices, and how we can use them to address our work. So don't worry about that.

It needs to save a large form

For example, a large, complex form-save. Dozens, possibly 100 fields. For my convenience I am going to interpret this as two separate things: the form, and the code that processes a post request. I suspect you were only meaning the latter. But, really, the same approach applies to both.

In seeing a large HTML form, you are not using a TDD mindset: how do I test that huge thing?! Using the TDD mindset, it's not a huge thing. It starts off being nothing. It starts off perhaps with "requests to /myForm.html return a 200-OK". From there it might move on to "It will be submitted as a POST to /processMyForm", and then to "after a successful form submission the user is redirected to /formSubmissionResults.html, and that request's status is 201-CREATED". Small steps. No form fields at all yet. But already requirements of your work have been tested (and implemented, and the tests pass).
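
To illustrate just how small that first step is, the first case might amount to no more than this (a sketch using axios and chai; the URLs are the hypothetical ones from above):

const axios = require("axios");
const { expect } = require("chai");

describe("myForm.html", () => {
    it("returns a 200-OK for GET requests", async () => {
        // hypothetical host; whatever the app under development is served from
        const response = await axios.get("http://example.com/myForm.html");

        expect(response.status).to.equal(200);
    });
});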

Next you might start addressing a form field requirement: "it has a text field with maximum length 100 for firstName". Quickly after that you have the same case for lastName. And then there might be 20 other fields that are all text and all have a sole constraint of maxLength, so you can test all of those really quickly, but still with the same amount of care, using a data provider that passes the case variations to what is otherwise the same test. This is still super quick, and your cases still show that you have addressed the requirements. And you can demonstrate that with your test output:

  Tests of WorkshopRegistrationForm component
     should have a required text input for fullName, maxLength 100, and label 'Full name'
     should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     should have a required password input for password, maxLength 255, and label 'Password'
     should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     should list the workshop options fetched from the back-end
     should have a button to submit the registration
    - should leave the submit button disabled until the form is filled


  7 passing (48ms)
  1 pending

 MOCHA  Tests completed successfully

(I've lifted that from the blog article I mention lower down).

Not all form fields are so simple. Some need to be select boxes that source their data from [somewhere]. "It has a select for favouriteColour, which offers values returned from a call to /colours/?type=favourite". This needs better testing than just name and length. "It has a password field that only accepts [rules]". Definitely needs testing discrete from the other tests. Etc.

Your form is a collection of form fields all of which will have stated requirements. If the requirements have been stated, it stands to reason you should demonstrate you've met the requirements. Both now in the first iteration of development, and that this continues to hold true during subsequent iterations (direct or indirect: basically new work doesn't break existing tests).

I cover an approach to this in article "Vue.js: using TDD to develop a data-entry form". It's a small form, but the technique scales.

Bottom line: when using TDD you don't start with a massive form.

It's a similar story on the form submission handler. The TDD process doesn't start with "holy f**k 100 form fields!"; it starts off with a POST request. Or it might start with a controller method receiving a request object that represents that request. Each value in that request must have validation, and you must test that, because validation is a) critical, b) fiddly and error-prone. But you start with one field: firstName must exist, and must be between 1-100 characters. You'd have these cases:

  • It's not passed with the request at all (fail);
  • It's passed with the request but its value is empty (fail);
  • It's passed with the request and its length is 1 (pass);
  • It's passed with the request and its length is 100 (pass);
  • It's passed with the request and its length is 101 (fail);

These are requirements your client has given you: You need to test them!

The validation tests are perhaps a good example of where one might use a focused unit test, rather than a functional test that actually makes a request to /processMyForm and analyses the response. Maybe you just pass a request object, or the request body values to a validate method, and check the results.
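
As a sketch of what that might look like as one data-driven test (validateFirstName and its module are hypothetical names, standing in for whatever your validation entry point is):

const { expect } = require("chai");
const { validateFirstName } = require("./validation"); // hypothetical module

describe("firstName validation", () => {
    // the five cases listed above, expressed as data
    const cases = [
        {label: "not passed with the request at all", value: undefined, valid: false},
        {label: "passed but empty", value: "", valid: false},
        {label: "passed with length 1", value: "a", valid: true},
        {label: "passed with length 100", value: "a".repeat(100), valid: true},
        {label: "passed with length 101", value: "a".repeat(101), valid: false}
    ];

    cases.forEach(({label, value, valid}) => {
        it(`returns ${valid} when the value is ${label}`, () => {
            expect(validateFirstName(value)).to.equal(valid);
        });
    });
});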

Once validation is in place, you'd need to vary the response based on those results: "when validation fails it returns a 400-BAD-REQUEST"; "when validation fails it returns a non-empty-array errors with validation failure details", etc. All actual requirements you've been given; all need to be tested.

Then you'd move on to whatever other business logic is needed, step by step, until you get to a point where yer firing some values into storage or whatever, and you check the expected values for each field are passed to the right place in storage. Although I'd still use a mock (or spy, or whatever the precise term is), and just check what values it receives, rather than actually letting the test write to storage.
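
A rough sketch of that last point, using Sinon (the handler class and repository here are hypothetical; the point is only that we check what the storage tier receives, rather than letting the test write anything):

const sinon = require("sinon");
const { expect } = require("chai");
const FormSubmissionHandler = require("./FormSubmissionHandler"); // hypothetical class under test

it("passes the validated values on to the repository", () => {
    const repository = {save: sinon.spy()}; // stands in for the real storage-facing object

    const handler = new FormSubmissionHandler(repository);
    handler.process({firstName: "Joe", lastName: "Bloggs"});

    // the spy tells us what the storage tier was given, without writing anywhere
    expect(repository.save.calledOnce).to.be.true;
    expect(repository.save.firstCall.args[0]).to.include({firstName: "Joe", lastName: "Bloggs"});
});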

It also has end-to-end acceptance tests

At this point you can demonstrate the requirements have been tested, and you know they work. I'd then put an end-to-end happy path test on that (maybe all the way from automating the form submission with a virtual web client, maybe just by sending a POST request; either is valid). And then I'd do an end-to-end unhappy path test, eg: when validation fails are the correct messages put in the correct place on the form, or whatever. Maybe there are other valid variations of end-to-end tests here, but I would not think to have an end-to-end test for each form field, and each validation rule. That'd be fiddly to write, and slow to run.

It does need to cover all the behaviour

I fully subscribe to the "100% test coverage is a fools errand" ethos. Steady on there. There's 100% and there's 100%. This notion is applied to lines-of-code, or 100% of methods, or basically implementation detail stuff. And it's also usually trotted out by someone who's looking at the code after it's been done, and is faced with a whole pile of testing to write and trying to work out ways of wriggling out of it. This is no slight on you, Adam (Tuttle), it's just how I have experienced devs rationalise this with me. If one does TDD / BDD, then one is not thinking about lines of code when one is testing. One is thinking about behaviour. And the behaviour has been requested by a client, and the behaviour needs to work. So we test the behaviour. Whether that's 1 line of code or 100 is irrelevant. However the test will exercise the code, because the code only ever came into existence to address the case / behaviour being delivered. Using TDD generally results in ~100% of the code being covered because you don't write code you don't need, which is the only time code might not be covered. How did that code get in there? Why did you write it? Obviously it's not needed so get rid of it ;-).

The key here is that 100% of behaviour gets covered.

Nothing is absolute though. There will be situations where some code - for whatever reason - is just not testable. This is rare, but it happens. In that case: don't get hung up by it. Isolate it away by itself, and mark it as not covered (eg in PHPUnit we have @codeCoverageIgnore), and move on. But be circumspect when making this decision; the situations in which one genuinely can't test some code are very rare. I find devs quite often seem to confuse "can't" with "don't feel like ~". Two different things ;-)

I'll also draw you back to an article I wrote ages ago about the benefits of 100% test coverage: "Yeah, you do want 100% test coverage". TL;DR: where in these two displays can you spot the new code that is accidentally missing test coverage:


Accidents are easy to spot when a previously all-green board starts being not all-green.

It uses emergent design to solve large problems

[My] brain wants to architect a system that [long and complicated description follows]. One of the premises of TDD is that you let the solution architect itself. I'm not 100% behind this as I can't quite see it yet, but I know I do find it really daunting if my requirement seems to be "it all does everything I need it to do", and I don't know where to start with that. This was my real life experience doing that Vue.js stuff I linked to above. I really did start with "yikes this whole form thing is gonna be a monster!? I don't even know where to start!". I pushed the end result I thought I might have to the back of my mind, especially the architectural side of things (which will probably define itself more in the refactor stage, not the red / green part).

And I started by adding a route for the form, and then I responded to request to that route with a 200-OK. And then moved on to the next bite of the elephant.

HTH.

--
Adam (Cameron)

Thursday 8 April 2021

TDD and external services

G'day

You might have noticed I spend a bit of my time encouraging people to use TDD, or at the very least making sure yer code is tested somehow. But use TDD ;-)

As an interesting aside, I recently failed a technical interview because the interviewer didn't feel I was strong enough at the testing side of things. Given what I see around the industry… that seems to be a moderately high bar yer setting for yerselves there, peeps. Or perhaps I'm just shit at articulating myself. Hrm. But anyway.

OK, so I rattled out a quick article a few days ago - "TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode" - which revisits some existing ground and by-and-large is not relevant to what I'm going to say here, other than the "TDD & professionalism" being why I bang on about it so much. And you might think I bang on about it here, but I also bang on about it at work (when I have work I mean), and in my background conversations too. I try to limit it to only my technical associates, that said.

Right so Mingo hit me up in a comment on that article, asking this question:

Something I ran into was needing to access the external API for the tests and I understand that one usually uses mocking for that, right? But, my question is then: how do you then **know** that you're actually calling the API correctly? Should I build the error handling they have in their API into my mocked up API as well (so I can test my handling of invalid inputs)? This feels like way too much work. I chose to just call the API and use a test account on there, which has it's own issues, because that test account could be setup differently than the multiple different live ones we have. I guess I should just verify my side of things, it's just that it's nice when it's testing everything together.

Yep, good question. With new code, my approach to the TDD is based on the public interface doing what's been asked of it. One can see me working through this process in my earlier article "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements". Here I'm building a web service end point - by definition the public interface to some code - and I am always hitting the controller (via the routing). And whatever I start testing, I just "fake it until I make it". My first test case here is "It needs to return a 200-OK status for GET requests on the /workshops endpoint", and the test is this:

/**
 * @testdox it needs to return a 200-OK status for successful GET requests
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturns200()
{
    $this->client->request('GET', '/workshops/');

    $this->assertEquals(Response::HTTP_OK, $this->client->getResponse()->getStatusCode());
}

To get this to pass, the first iteration of the implementation code is just this:

public function doGet() : JsonResponse
{
    return new JsonResponse(null);
}

The next case is "It returns a collection of workshop objects, as JSON", implemented thus:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturnsJson()
{
    $workshops = [
        new Workshop(1, 'Workshop 1'),
        new Workshop(2, 'Workshop 2')
    ];

    $this->client->request('GET', '/workshops/');

    $resultJson = $this->client->getResponse()->getContent();
    $result = json_decode($resultJson, false);

    $this->assertCount(count($workshops), $result);
    array_walk($result, function ($workshopValues, $i) use ($workshops) {
        $workshop = new Workshop($workshopValues->id, $workshopValues->name);
        $this->assertEquals($workshops[$i], $workshop);
    });
}

And the code to make it work shows I've pushed the mocking one level back into the application:

class WorkshopsController extends AbstractController
{

    private WorkshopCollection $workshops;

    public function __construct(WorkshopCollection $workshops)
    {
        $this->workshops = $workshops;
    }

    public function doGet() : JsonResponse
    {
        $this->workshops->loadAll();

        return new JsonResponse($this->workshops);
    }
}

class WorkshopCollection implements \JsonSerializable
{
    /** @var Workshop[] */
    private $workshops;

    public function loadAll()
    {
        $this->workshops = [
            new Workshop(1, 'Workshop 1'),
            new Workshop(2, 'Workshop 2')
        ];
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

(I've skipped a step here… the first iteration could/should be to mock the data right there in the controller, and then refactor it into the model, but this isn't about refactoring, it's about mocking).

From here I refactor further, so that instead of having the data itself in loadAll, the WorkshopCollection calls a repository, and the repository calls a DAO, which for now ends up being:

class WorkshopsDAO
{
    public function selectAll() : array
    {
        return [
            ['id' => 1, 'name' => 'Workshop 1'],
            ['id' => 2, 'name' => 'Workshop 2']
        ];
    }
}

The next step is where Mingo's question comes in. The next refactor is to swap out the mocked data for a DB call. We'll end up with this:

class WorkshopsDAO
{
    private Connection $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function selectAll() : array
    {
        $sql = "
            SELECT
                id, name
            FROM
                workshops
            ORDER BY
                id ASC
        ";
        $statement = $this->connection->executeQuery($sql);

        return $statement->fetchAllAssociative();
    }
}

But wait: if we do that, our unit tests will be hitting the DB. Which we are not gonna do. We've run out of things to directly mock as we're at the lower boundary of our application, and the connection object is "someone else's code" (Doctrine/DBAL in this case). We can't mock that, but fortunately this is why I have the DAO tier. It acts as the frontier between our app and the external service provider, and we still mock that:

public function testDoGetReturnsJson()
{
    $workshopDbValues = [
        ['id' => 1, 'name' => 'Workshop 1'],
        ['id' => 2, 'name' => 'Workshop 2']
    ];

    $this->mockWorkshopDaoInServiceContainer($workshopDbValues);

    // ... unchanged ...

    array_walk($result, function ($workshopValues, $i) use ($workshopDbValues) {
        $this->assertEquals($workshopDbValues[$i], $workshopValues);
    });
}

private function mockWorkshopDaoInServiceContainer($returnValue = []): void
{
    $mockedDao = $this->createMock(WorkshopsDAO::class);
    $mockedDao->method('selectAll')->willReturn($returnValue);

    $container = $this->client->getContainer();
    $workshopRepository = $container->get('test.WorkshopsRepository');

    $reflection = new \ReflectionClass($workshopRepository);
    $property = $reflection->getProperty('dao');
    $property->setAccessible(true);
    $property->setValue($workshopRepository, $mockedDao);
}

We just use a mocking library (baked into PHPUnit in this case) to create a runtime mock, and we put that into our repository.

The tests pass, the DB is left alone, and the code is "complete" so we can push it to production perhaps. But we are not - as Mingo observed - actually testing that what we are asking the DB to do is being done. Because all our tests mock the DB part of things out.

The solution is easy, but it's not done via a unit test. It's done via an integration test (or end-to-end test, or acceptance test, or whatever you wanna call it), which hits the real endpoint which queries the real database, and gets the real data. Adjacent to that in the test we hit the DB directly to fetch the records we're expecting, and then we check that the JSON the endpoint returns represents the same data we manually fetched from the DB. This tests the SQL statement in the DAO, that the data fetched models OK in the repo, and that the model (WorkshopCollection here) applies whatever business logic is necessary to the data from the repo before passing it back to the controller to return with the response, which was requested via the external URL. IE: it tests end-to-end.

public function testDoGetExternally()
{
    $client = new Client([
        'base_uri' => 'http://fullstackexercise.backend/'
    ]);

    $response = $client->get('workshops/');
    $this->assertEquals(Response::HTTP_OK, $response->getStatusCode());
    $workshops = json_decode($response->getBody(), false);

    /** @var Connection */
    $connection = static::$container->get('database_connection');
    $expectedRecords = $connection->query("SELECT id, name FROM workshops ORDER BY id ASC")->fetchAll();

    $this->assertCount(count($expectedRecords), $workshops);
    array_walk($expectedRecords, function ($record, $i) use ($workshops) {
        $this->assertEquals($record['id'], $workshops[$i]->id);
        $this->assertSame($record['name'], $workshops[$i]->name);
    });
}

Note that despite my saying "it's not a unit test, it's an integration test", I'm still implementing it via PHPUnit. The testing framework should just provide testing functionality: it should not dictate what kind of testing you implement with it. And similarly not all tests written with PHPUnit are unit tests. They are xUnit-style tests, eg: in a class called SomethingTest, where the methods are prefixed with test and use assertion methods to implement the test constraints.

Also: why don't I just use end-to-end tests then? They seem more reliable? Yep they are. However they are also more fiddly to write as they have more set-up / tear-down overhead, so they take longer to write. Also they generally take longer to run, and given TDD is supposed to be a very quick cadence of test / run / code / run / refactor / run, the less overhead the better. The slower your tests are, the more likely you are to switch to writing code and testing later once you need to clear your head. In the meantime your code design has gone out the window. Also unit tests are more focused - addressing only a small part of the codebase overall - and that has merit in itself. Also I used a really, really trivial example here, but some end-to-end tests are really very tricky to write, given the overall complexity of the functionality being tested. I've been in the lucky place that at my last gig we had a dedicated QA development team, and they wrote the end-to-end tests for us, but this also meant that those tests were executed after the dev considered the tasks "code complete", and QA ran the tests to verify this. There is no definitive way of doing this stuff, that said.

To round this out, I'm gonna digress into another chat I had with Mingo yesterday:

Normally I'd say this:

Unit tests
Test logic of one small part of the code (maybe a public method in one class). If you were doing TDD and about to add a condition into your logic, you'd write a unit test to cover the new expectations that the condition brings to the mix.
Functional tests
These are a subset of unit tests which might test a broader section of the application, eg from the public frontier of the application (so like an endpoint) down to where the code leaves the system (to a logger, or a DB, or whatever). The difference between unit tests and functional tests - to me - is just how distributed the logic being tested is throughout the system.
Integration tests
Test that the external connections all work fine. So if you use the app's DB configuration, the correct database is usable. I'd personally consider a test an integration test if it only focused on a single integration.
Acceptance tests(or end-to-end tests)
Are to integration tests what functional tests are to unit tests: a broader subset. That test above is an end-to-end test, it tests the web server, the application and the DB.

And yes I know the usages of these terms vary a bit.

Furthermore, considering the distinction between BDD and TDD:

  • The BDD part is the nicely-worded case labels, which in theory (but seldom in practice, I find) are written in direct collaboration with the client user.
  • The TDD part is when in the design-phase they are created: with TDD it's before the implementation is written; I am not sure whether in BDD it matters or is stipulated.
  • But both of them are design / development strategies, not testing strategies.
  • The tests can be implemented as any sort of test, not specifically unit tests or functional tests or end-to-end tests. The point is the test defines the design of the piece of code being written: it codifies the expectations of the behaviour of the code.
  • BDD and TDD tests are generally implemented via some unit testing framework, be it xUnit (testMyMethodDoesSomethingRight), or Jasmine-esque (it("does something right", function (){})).

One can also do testing that is not TDD or BDD, but it's a less than ideal way of going about things, and I would imagine it results in subpar tests, fragmented test coverage, and tests that don't really help one understand the application, so are harder to maintain in a meaningful way. But they are still better than no tests at all.

When I am designing my code, I use TDD, and I consider my test cases in a BDD-ish fashion (except I do it on the client's behalf generally, and sadly), and I use PHPUnit (xUnit) to do so in PHP, and Mocha (Jasmine-esque) to do so in JavaScript.

Hopefully that clarifies some things for people. Or people will leap at me and tell me where I'm wrong, and I can learn the error in my ways.

Righto.

--
Adam

Tuesday 6 April 2021

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD. Is. Not. A. Testing. Strategy.

Just a passing thought. Apropos of absolutely nothing. 'Onest guv.(*)

Dunno if it occurred to you, but that TDD thing? It's not a testing strategy. It's a design strategy.

Let's look at the name. In the name test-driven is a compound adjective: it just modifies the subject. The subject is development. It's about development. It's not about testing.

It's probably better described by BDD, although to me that's a documentation strategy, rather than a development (or testing) one. BDD is about arriving at the test cases (with the client), TDD is about implementing those cases.

The purpose of TDD is to define a process that leads you - as a developer - to pay more attention to the design of your code. It achieves this by forcing you to address the requirement as a set of needs (or cases), eg "it needs to return the product of the two operands". Then you demonstrate your understanding of the case by demonstrating what it is for the case to "work" (a test that when you pass 2 and 3 to the function it returns 6), and then you implement the code to address that case. Then you refine the case, refactor the implementation so it's "nicer", or move on to the next case, and cycle through that one. Rinse and repeat.
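
To make that concrete with the example from that sentence (a deliberately trivial sketch):

const { expect } = require("chai");

// the case: "it needs to return the product of the two operands"
it("returns the product of the two operands", () => {
    expect(multiply(2, 3)).to.equal(6);
});

// the implementation: just enough to make that case pass
function multiply(x, y) {
    return x * y;
}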

But all along the object of the exercise is to think about what needs to be done, break it into small pieces, and code just what's needed to implement the customer need. It also provides a firm foundation to be able to safely refactor the code once it's working. You know: the bit that you do to make your code actually good; rather than just settling for "doesn't break", which is a very low bar to set yourself.

That you end up with repeatable tests along the way is a by-product of TDD. Not the reason you're doing it. Although obviously it's key to offering that stability and confidence for the refactor phase.

Too many people I interact with when they're explaining why it's OK they don't do TDD [because reasons], fall back to the validity / longevity of the tests. It's… not about the tests. It's about how you design your solutions.

Lines of code are not a measure of productivity

Tangential to this, we all know that LOC are not a measure of productivity. There's not a uniform relationship between one line of code and another adjacent line of code. Ten lines of code in one logic block that represent the implementation of a method are likely to represent less of a productivity burden than a single line of code nested 14 levels deep in some flow-control-logic monstrosity. We all know this. Not all lines of code are created equal. More is definitely not better. But fewer is also not intrinsically better. It's just an invalid metric.

So why is it that so many people are prepared to count the lines of code a test adds to the codebase as a rationalisation (note: not a justification, because it's invalid) as to why they don't have time to write that test? Or that the test represents an undue burden in the codebase. Why are they back to measuring productivity with LOC? Why won't they let it occur to them that - especially when employing TDD - the investment in the LOC for the test code reduces the investment in the LOC for the production code? And note I am not meaning this as a benefit that one only realises over time having amortised it over a long code lifespan. I mean on the first iteration of code<->test<->release, because the bouncing back and forth between each step there will be reduced. Even for code which this might (although probably won't) be the only iteration the production code sees.

It's just "measure twice, cut once" for code. Of course one doesn't have the limitation in code that one can only cut once; the realisation here needs to be that "measuring" takes really a lot less time than "cutting" when it comes to code.

In closing, if you are rationalising to me (or, in reality: to yourself) why you don't do TDD, and that rationalisation involves lines of code or how often code will be revisited, then you are not talking about TDD. You are pretty much just setting up a strawman to tilt at to make yourself feel better. Hopefully that tactic will seem a little more transparent to you now.

Design your code. Measure your code.

Righto.

--
Adam

(*) that's a lie. It's obviously a retaliation to a coupla comments I've been getting on recent articles I've written about TDD.

Sunday 4 April 2021

Unit testing: tests are not much bloody use if they always pass

G'day:

I started back on the next article of my VueJS / Symfony / etc series this morning. And now it's 18:24 and I've made zero progress. Well, I've written a different blog article in the middle of that ("TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode"), but that was basically just a procrastinatory exercise, avoiding getting down to the other work.

I'm currently laughing (more "nervous giggling") at my mental juxtaposition of "TDD & professionalism" from that earlier article and the title of this one. I'm not feeling very professional round about now.

OK so I sat down to get cracking on this new article, and the first thing I did was re-run my tests to make sure they were all still working. This is largely due to some issues I had with the Vue Test Utils library about a week ago, which I will discuss in that next article. Anyhow, everything was green. All good.

Next I opened my Vue component file, and remembered a slight tweak I wanted to make to my code. I have this (in WorkshopRegistrationForm.vue):

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM ? "Register" : "Processing&hellip;";
},

I'm not in love with those hard-coded strings there; I want to extract them and use named constants instead (same as with the form-state constants I already have there).
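
Something along these lines, in other words (the constant names here are just my placeholder guesses):

// hypothetical names; they'd sit alongside the existing REGISTRATION_STATE_* constants
const SUBMIT_BUTTON_LABEL_REGISTER = "Register";
const SUBMIT_BUTTON_LABEL_PROCESSING = "Processing&hellip;";

// ...

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM
        ? SUBMIT_BUTTON_LABEL_REGISTER
        : SUBMIT_BUTTON_LABEL_PROCESSING;
},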

The first thing I did was to locate the test for when the button switches to "Processing…", and update it to be broken so I could expect the change. Basically I figured I'd change the label to be something different, see the test fail, update the code to use a constant with the "different" value, see the tests pass, and then change the test back to expect the "Processing…" value, and then the constant's value in the code. Sometimes that's all the test change that's needed. And in hindsight I'm glad I did it.

The test method is thus (from test/unit/workshopRegistration.spec):

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let lastLabel;
        component.vm.$watch("submitButtonLabel", (newValue) => {
            lastLabel = newValue;
        });

        let lastFormState;
        component.vm.$watch("isFormDisabled", (newValue) => {
            lastFormState = newValue;
        });

        await submitPopulatedForm();

        expect(lastLabel).to.equal("Processing&hellip;");
        expect(lastFormState).to.be.true;
    });
});

The line in question is that second to last expectation, and I just changed it to be:

expect(lastLabel).to.equal("Processing&hellip;TEST_WILL_FAIL");

And I ran the test:

 DONE  Compiled successfully in 3230ms

  [=========================] 100% (completed)

 WEBPACK  Compiled successfully in 3230ms

 MOCHA  Testing...



  Testing WorkshopRegistrationForm component
    Testing form submission
       should disable the form and indicate data is processing when the form is submitted


  1 passing (52ms)

 MOCHA  Tests completed successfully

root@9b1e15054be3:/usr/share/fullstackExercise#

Umm… hello?

Note: in the real situation I ran all the tests. It's not just a case of me running the wrong test or something completely daft. Although bear with me, there's def some daftness to come.

I did some fossicking around and putting some console.log entries about the place, and narrowed it down to how I had "fixed" these tests the last time I had issues with them. Previously the tests were running too quickly, and the Workshop listing had not been returned from the remote call in time for the test to try to submit the form, so any tests that relied on filling out the form went splat cos there were not (yet) any workshops to select. OK OK, hang on, this is what I'm talking about:

Those come from a remote call, so the data arrives asynchronously.

My fix was this bit in that test:

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        //...
    });
});

I was being "clever" and watching for when the workshops data finally arrived, waited for the options to populate, then we're good to run the test code. A whole bunch of the tests needed this. Now I hasten to add that I did thoroughly test this strategy when I updated all the tests. I made them all fail one of their expectations, watched the tests fail, then fixed the assertions and watched them pass. It's not like I made this change and just went "yeah that (will) work OK on my machine".

So what was the problem? Can you guess? Looking now, the tests do make a certain assumption.

Well. So my original issue was the code I was testing was running slow, so I changed the tests to wait for a change, and then run. And last week I tweaked my Docker settings to speed up all my containers. Now the code isn't slow. So now the workshops data is already loaded before the test code gets to that watch. So… there's nothing to watch. I started watching too late. I proved this to myself by slowing the remote call down again, and suddenly the tests started working again (ie: that test started to fail like I wanted it to).

It occurred to me then I had solved the original issue the wrong way. I was thinking synchronously about an asynchronous problem. I can't know if the data will arrive before or after my test runs. Just that at some time it is promised to arrive. Aha!

The data was already coming back in a promise (from WorkshopDAO.js):

selectAll() {
    return this.client.get(this.config.workshopsUrl)
        .then((response) => {
            return response.data;
        });
}

The problem is that by the time it bubbles back through DAO › Repository › Service › Component, I'd ditched the promise and just waited for the value (WorkshopRegistrationForm.vue):

async mounted() {
    this.workshops = await this.workshopService.getWorkshops();
},

And I needed that this.workshops to just be the eventual array of objects, because I have a v-for looping over it. And v-for ain't clever enough to take the promise of an array, it needs the actual array (this is from the same file, just further up at line 82):

<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I knew what I needed to do in the test. Instead of the watch, I just needed to append another then handler to the promise. Then whether or not the data has arrived back yet, the handler would run either straight away or once the data got there. But how do I get hold of that promise?

In the end I cheated (again, same file, but a new version of it):

data() {
    return {
        //...
        promisedWorkshops: null,
        workshops: [],
        //...
    };
},
//...
async mounted() {
    this.promisedWorkshops = this.workshopService.getWorkshops();
    this.workshops = await this.promisedWorkshops;
},

I put the promise into the component's data as well as the values :-)

And the test becomes(from test/unit/workshopRegistration.spec again):

it("should disable the form and indicate data is processing when the form is submitted", async () => {
    component.vm.$watch("workshops", async () => {
    await component.vm.promisedWorkshops.then(async () => {
        await component.vm.$nextTick();

As I said above, I just slap all the code in a then handler instead of a watch callback. The rest of the code is the same. I need to await that tick because the options don't render until the next Vue tick after the data arrives.

That's a much more semantically-appropriate (and less hacky) way of addressing this issue. I'm reasonably pleased with that as a solution. For now.

Having learned my lesson I went back and retested everything in both a broken and working state, with an instant response time, and a very delayed response time on the remote call. The tests seem stable now.

Until I find the next thing wrong with them, anyhow.

OK that's enough staring at code on the screen for the day. I'm gonna stare at a game on the screen instead now.

Righto.

--
Adam

TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode

G'day:

Yer gonna need to go back and read the comments on Thoughts on Working Code podcast's Testing episode for context here. Especially as I quote a couple of them. I kinda let the comments run themselves there a bit and didn't reply to everyone as I didn't want to dominate the conversation. But one earlier comment that made me itchy, and now one comment that came in in the last week or so, have made me decide to - briefly - follow up one point that I think warrants drawing attention to and building on.

Briefly, Working Code Pod did an episode on testing, and I got all surly about some of the things that were said, and wrote them up in the article I link to above. BTW Ben's reaction to my feedback in their follow-up episode ("Listener Questions #1") was the source of my current strapline quote: "the tone... it sounds very heated and abrasive". That should frame things nicely.

Right so in the comments to that previous article, we have these discussion fragments:

  • Sean Corfield - Heck, there's still a sizable portion that still doesn't use version control or has some whacked-out manual approach to "source code control".
  • Ben replied to that with - Yikes, I have trouble believing that there are developers _anywhere_ that don't use source-control in this day-and-age. That seems like "table stakes" for development. Period.
  • [off-screen, Adam bites his tongue]
  • Then Sean Hogge replied to the article rather than that particular comment thread. I'm gonna include a chunk of what he said here:

    18 months ago, I was 100% Ben-shaped. Testing simply held little ROI. I have a dev server that's a perfect replica of production, with SSL and everything. I can just log in, open the dashboard, delete the cache and check things with a few clicks.

    But as I started developing features that interact with other features, or that use the same API call in different ways, or present the same data with a partial or module with options, I started seeing an increase in production errors. I could feel myself scrambling more and more. When I stepped back and assessed objectively, tests were the only efficient answer.

    After about 3 weeks of annoying, frustrating, angry work learning to write tests, every word of Adam C's blog post resonates with me. I am not good at it (yet), I am not fast at it (yet), but it is paying off exactly as he and those he references promised it would.

    I recommend reading his entire comment, because it's bloody good.

  • Finally last week I listened to a YouTube video "Jim Coplien and Bob Martin Debate TDD", from which I extracted these two quotes from Martin that drew me back to this discussion:
    • (@ 43sec) My thesis is that it has become infeasible […] for a software developer to consider himself professional if [(s)he] does not practice test-driven development.
    • (@ 14min 42sec) Nowadays it is […] irresponsible for a developer to ship a line of code that [(s)he] has not executed any unit test [upon]. It's important to note that "nowadays" was 2012 in this case: that's when the video was from.
    And, yes, OK the two quotes say much the same thing. I just wanted to emphasise the words "professional" and "irresponsible".

This I think is what Ben is missing. He shows incredulity that someone in 2021 would not use source control. People's reaction is going to be the same to his suggestion that he doesn't put much focus on test automation, or practise TDD as a matter of course when he's designing his code. And Sean (Hogge) nails it for me.

(And not just Ben. I'm not ragging on him here, he's just the one providing the quote for me to start from).

TDD is not something to be framed in a context alongside other peripheral things one might pick up like this week's kewl JS framework, or Docker or some other piece of utility one might optionally use when solving a client's problem. It's lower level than that, so it's false equivalence to bracket it like that conceptually. Especially as a rationalisation for not addressing your shortcomings in this area.

Testing yer code and using TDD is as fundamental to your professional responsibilities as using source control. That's how one ought to contextualise this. Now I know plenty of people who I'd consider professional and strong pillars of my dev community who aren't as good as they could be with testing/TDD. So I think Martin's first quote is a bit strong. However I think his second quote nails it. If you're not doing TDD you are eroding your professionalism, and you are being professionally irresponsible by not addressing this.

In closing: thanks to everyone for putting the effort you did into commenting on that previous article. I really appreciate the conversation even if I didn't say thanks etc to everyone participating.

Righto.

--
Adam

Wednesday 10 March 2021

TDDing the reading of data from a web service to populate elements of a Vue JS component

G'day:

This article leads on from "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements", which itself continues a saga I started back with "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 1: Intro & Nginx" (that's a 12-parter), and a coupla other intermediary articles also related to this body of work. What I'm doing here specifically leads on from "Vue.js: using TDD to develop a data-entry form". In that article I built a front-end Vue.js component for a "workshop registration" form, submission, and summary:


Currently the front-end transitions work, but it's not connected to the back-end at all. It needs to read that list of "Workshops to attend" from the DB, and it needs to save all the data between form and summary.

In the immediately previous article I set up the endpoint the front end needs to call to fetch that list of workshops, and today I'm gonna remove the mocked data the form is using, and get it to use really real data from the DB, via that end point.

To recap, this is what we currently have:

In the <template/> section of WorkshopRegistrationForm.vue:

<label for="workshopsToAttend" class="required">Workshops to attend:</label>

<select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
    <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
</select>

And in the code part of it:

mounted() {
      this.workshops = this.workshopService.getWorkshops();
},

And WorkshopService just has some mocked data:

getWorkshops() {
    return [
        {value: 2, text:"Workshop 1"},
        {value: 3, text:"Workshop 2"},
        {value: 5, text:"Workshop 3"},
        {value: 7, text:"Workshop 4"}
    ];
}

In this exercise I will be taking very small increments in my test / code / test / code cycle here. It's slightly arbitrary, but it's just because I like to keep separate the "refactoring" parts from the "new development parts", and I'll be doing both. The small increments help surface problems quickly, and also assist my focus so that I'm less likely to stray off-plan and start implementing stuff that is not part of the increment's requirements. This might, however, make for tedious reading. Shrug.

We're gonna refactor things a bit to start with, to push the mocked data closer to the boundary between the application and the DB. We're gonna follow a similar tiered application approach as the back-end work, in that we're going to have these components:

WorkshopRegistrationForm.vue
The Vue component providing the view part of the workshop registration form.
WorkshopService
This is the boundary between WorkshopRegistrationForm and the application code. The component calls it to get / process data. In this exercise, all it needs to do is to return a WorkshopCollection.
WorkshopCollection
This is the model for the collection of Workshops we display. It loads its data via WorkshopRepository
Workshop
This is the model for a single workshop. Just an ID and name.
WorkshopsRepository
This deals with fetching data from storage, and modelling it for the application to use. It knows about the application domain, and it knows about the DB schema. And translates between the two.
WorkshopsDAO
Because the repository has logic in it that we will be testing, our code that actually makes the calls to the DB connector has been extracted into this DAO class, so that when testing the repository it can be mocked-out.
Client
This is not our code. We're using Axios to make calls to the API, and the DAO is a thin layer around this.

As a first step we are going to push that mocked data back to the repository. We can do this without having to change our tests or writing any data manipulation logic.

Here's the current test run:

  Tests of WorkshopRegistrationForm component
     should have a required text input for fullName, maxLength 100, and label 'Full name'
     should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     should have a required password input for password, maxLength 255, and label 'Password'
     should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     should list the workshop options fetched from the back-end
     should have a button to submit the registration
     should leave the submit button disabled until the form is filled
     should disable the form and indicate data is processing when the form is submitted
     should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
     should display the registration summary 'template' after the registration has been submitted
     should display the summary values in the registration summary

We have to keep that green, and all we can do is move code around. No new code for now (other than a wee bit of scaffolding to let the app know about the new classes). Here's the first round of changes.

Currently the deepest we go in the new code is the repository, which just returns the mocked data at the moment:

class WorkshopsRepository {

    selectAll() {
        return [
            {value: 2, text:"Workshop 1"},
            {value: 3, text:"Workshop 2"},
            {value: 5, text:"Workshop 3"},
            {value: 7, text:"Workshop 4"}
        ];
    }
}

export default WorkshopsRepository;

The WorkshopCollection grabs that data and uses it to populate itself. It extends the native Array class so it can be used as an array by the Vue template code:

class WorkshopCollection extends Array {

    constructor(repository) {
        super();
        this.repository = repository;
    }

    loadAll() {
        let workshops = this.repository.selectAll();

        this.length = 0;
        this.push(...workshops);
    }
}

module.exports = WorkshopCollection;
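
Just to labour the "it's still an array" point, here's some hypothetical usage (not part of the app code) showing the collection behaving like a normal array once it's loaded:

let collection = new WorkshopCollection(new WorkshopsRepository());
collection.loadAll();

console.log(Array.isArray(collection));    // true
console.log(collection.length);            // 4
console.log(collection[0].text);           // "Workshop 1"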

And the WorkshopService has had the mocked values removed, and in their place it tells its WorkshopCollection to load itself, and it returns the populated collection:

class WorkshopService {

    constructor(workshopCollection) {
        this.workshopCollection = workshopCollection;
    }

    getWorkshops() {
        this.workshopCollection.loadAll();
        return this.workshopCollection;
    }

    // ... rest of it unchanged...
}

module.exports = WorkshopService;

WorkshopRegistrationForm.vue has not changed, as it still receives an array of data to its this.workshops property, which is still used in the same way by the template code:

<template>
    <form method="post" action="" class="workshopRegistration" v-if="registrationState !== REGISTRATION_STATE_SUMMARY">
        <fieldset :disabled="isFormDisabled">

            <!-- ... -->

            <select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
                <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
            </select>

            <!-- ... -->

        </fieldset>
    </form>

    <!-- ... -->
</template>

<script>
// ...

export default {
    // ...
    mounted() {
        this.workshops = this.workshopService.getWorkshops();
    },
    // ...
}
</script>

Before we wire this up, we need to change our test to reflect where the data is coming from now. Previously we were just stubbing the WorkshopService's getWorkshops method; now we need to stub the WorkshopsRepository's selectAll method instead:

workshopService = new WorkshopService();
// was: sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
sinon.stub(repository, "selectAll").returns(expectedOptions);

The tests now break as we have not yet implemented this change. We need to update App.vue, which has a bit more work to do to initialise that WorkshopService now, with its dependencies:

<template>
    <workshop-registration-form></workshop-registration-form>
</template>

<script>
import WorkshopRegistrationForm from "./WorkshopRegistrationForm";
import WorkshopService from "./WorkshopService";
import WorkshopCollection from "./WorkshopCollection";
import WorkshopsRepository from "./WorkshopsRepository";

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository()
    )
);

export default {
    name: 'App',
    components: {
        WorkshopRegistrationForm
    },
    provide: {
        workshopService: workshopService
    }
}
</script>

Having done this, the tests are passing again.

Next we need to deal with the fact that so far the mocked data we're passing around is keyed on how it is being used in the <option> tags it's populating:

return [
    {value: 2, text:"Workshop 1"},
    {value: 3, text:"Workshop 2"},
    {value: 5, text:"Workshop 3"},
    {value: 7, text:"Workshop 4"}
]

value and text are derived from the values' use in the mark-up, whereas what we ought to be mocking is an array of Workshop objects, which have keys id and name:

class Workshop {

    constructor(id, name) {
        this.id = id;
        this.name = name;
    }
}

module.exports = Workshop;

We will need to update our tests to expect this change:

// was: sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
sinon.stub(repository, "selectAll").returns(
    expectedOptions.map((option) => new Workshop(option.value, option.text))
);

The tests fail again, so we're ready to change the code to make the tests pass. The repo now returns objects:

class WorkshopsRepository {

    selectAll() {
        return [
            new Workshop(2, "Workshop 1"),
            new Workshop(3, "Workshop 2"),
            new Workshop(5, "Workshop 3"),
            new Workshop(7, "Workshop 4")
        ];
    }
}

And we need to make a quick change to the stubbed saveWorkshopRegistration method in WorkshopService too, so the selectedWorkshops are now filtered on id, not value:

saveWorkshopRegistration(details) {
    let allWorkshops = this.getWorkshops();
    let selectedWorkshops = allWorkshops.filter((workshop) => {
        // was: return details.workshopsToAttend.indexOf(workshop.name) >= 0;
        return details.workshopsToAttend.indexOf(workshop.id) >= 0;
    });
    // ...

And the template now expects them:

<!-- was: <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option> -->
<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I almost forgot about this, but the summary section also uses the WorkshopCollection data, and currently it's also coded to expect mocked workshops with value/text properties, rather than id/name ones. So the test needs updating:

it("should display the summary values in the registration summary", async () => {
    const summaryValues = {
        registrationCode : "TEST_registrationCode",
        fullName : "TEST_fullName",
        phoneNumber : "TEST_phoneNumber",
        emailAddress : "TEST_emailAddress",
        // was: workshopsToAttend : [{value: "TEST_workshopToAttend_VALUE", text:"TEST_workshopToAttend_TEXT"}]
        workshopsToAttend : [{id: "TEST_workshopToAttend_ID", name:"TEST_workshopToAttend_NAME"}]
    };
    
	// ...

    let ddValue = actualWorkshopValue.find("ul>li");
    expect(ddValue.exists()).to.be.true;
    // was: expect(ddValue.text()).to.equal(expectedWorkshopValue[0].value);
    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].name);

	// ...
});

And the code change in the template:

<!-- was: <li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.value">{{workshop.text}}</li> -->
<li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.id">{{workshop.name}}</li>

And the tests pass again.

Now we will introduce the WorkshopsDAO, with the data it will get back from the API call mocked for now. It'll return this to the WorkshopsRepository, which will now implement the modelling:

class WorkshopDAO {

    selectAll() {
        return [
            {id: 2, name: "Workshop 1"},
            {id: 3, name: "Workshop 2"},
            {id: 5, name: "Workshop 3"},
            {id: 7, name: "Workshop 4"}
        ];
    }
}

export default WorkshopDAO;

We also now need to update the test initialisation to pass a DAO instance into the repository, and for now we can remove the Sinon stubbing because the DAO is appropriately stubbed already:

// removed: the local repository and its Sinon stub are no longer needed
// let repository = new WorkshopsRepository();
// sinon.stub(repository, "selectAll").returns(
//     expectedOptions.map((option) => new Workshop(option.value, option.text))
// );

workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository(    // was just: repository
            new WorkshopsDAO()
        )
    )
);

That's the only test change here (and the tests now break). We'll update the code to also pass in a DAO when we initialise the WorkshopService's dependencies, and also implement the modelling code in the WorkshopsRepository class's selectAll method:

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository(    // was: new WorkshopsRepository()
            new WorkshopsDAO()
        )
    )
);

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        return this.dao.selectAll().map((unmodelledWorkshop) => {
            return new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name);
        });
    }
}

We're stable again. The next bit of development is the important bit: adding the code in the DAO that actually makes the call to the back end. We will be changing the DAO to be this:

class WorkshopDAO {

    constructor(client, config) {
        this.client = client;
        this.config = config;
    }

    selectAll() {
        return this.client.get(this.config.workshopsUrl)
            .then((response) => {
                return response.data;
            });
    }
}

export default WorkshopDAO;

And that Config class will be this:

class Config {
    static workshopsUrl = "http://fullstackexercise.backend/workshops/";
}

module.exports = Config;

Now. Because Axios's get method returns a promise, we're going to need to cater to this in the upstream methods that need to manipulate the data being promised. Plus, ultimately, WorkshopRegistrationForm.vue's code is going to receive a promise, not just the raw data. IE, this code:

mounted() {
    // was: this.workshops = this.workshopService.getWorkshops();
    this.workshops = this.workshopService.getWorkshops()
        .then((workshops) => {
            this.workshops = workshops;
        });
},

A bunch of the tests rely on the state of the component after the data has been received:

  • should have a required text input for fullName, maxLength 100, and label 'Full name'
  • should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
  • should have a required text input for emailAddress, maxLength 320, and label 'Email address'
  • should have a required password input for password, maxLength 255, and label 'Password'
  • should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
  • should list the workshop options fetched from the back-end
  • should have a button to submit the registration
  • should leave the submit button disabled until the form is filled
  • should disable the form and indicate data is processing when the form is submitted
  • should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
  • should display the registration summary 'template' after the registration has been submitted
  • should display the summary values in the registration summary

In these tests we are gonna have to put the test code within a watch handler, so it waits until the promise returns the data (and accordingly workshops gets set, and the watch-handler fires) before trying to test with it, eg:

it("should list the workshop options fetched from the back-end", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let options = component.findAll("form.workshopRegistration select[name='workshopsToAttend[]']>option");

        expect(options).to.have.length(expectedOptions.length);
        options.forEach((option, i) => {
            expect(option.attributes("value"), `option[${i}] value incorrect`).to.equal(expectedOptions[i].value.toString());
            expect(option.text(), `option[${i}] text incorrect`).to.equal(expectedOptions[i].text);
        });
    });
});

Other than that the tests shouldn't need changing. Also now that the DAO is not itself a mock as it was in the last iteration, we need to go back to mocking it again, this time to return a promise of things to come:

sinon.stub(dao, "selectAll").returns(Promise.resolve(
    expectedOptions.map((option) => ({id: option.value, name: option.text}))
));

It might seem odd that we are making changes to the DAO class but not unit testing those changes: the test changes here are only to accommodate the other objects changing from being synchronous to being asynchronous. Bear in mind that these are unit tests, and the DAO only exists as a thin wrapper around the Axios client object; its purpose is to give us some of our own code that we can mock, to prevent Axios from making an actual API call when we test the objects above it. We'll do an integration test of the DAO separately, after we deal with the unit testing and implementation.

After updating those tests to deal with the promises, the tests collapse in a heap, but this is to be expected. We'll make them pass again by bubbling the promise returned from the DAO back up through the other objects.

For the code revisions to the other classes I'm just going to show an image of the diff between the files from my IDE. I hate using pictures of code, but this is the clearest way of showing the diffs. All the actual code is on Github @ frontend/src/workshopRegistration

Here in the repository we need to use the values from the promise, so we need to wait for them to be resolved.
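
The diff screenshot isn't reproduced here, but as a rough sketch of the shape of that change (assuming the same structure as the synchronous version above; the authoritative code is in the repo), the repository's selectAll ends up along these lines:

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        // the DAO now hands back a promise, so do the Workshop modelling
        // once the data arrives, and pass the resulting promise on up
        return this.dao.selectAll()
            .then((unmodelledWorkshops) => unmodelledWorkshops.map(
                (unmodelledWorkshop) => new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name)
            ));
    }
}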

Similarly with the collection, service, and component (in two places), as per below:

(IDE diff screenshots of the WorkshopCollection, WorkshopService and WorkshopRegistrationForm.vue changes went here; the actual code is in the repo.)
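
Again, only as an indicative sketch (the diffs / repo have the real versions): the collection and service just keep handing the promise up the chain, doing their bit in a then handler:

class WorkshopCollection extends Array {

    // ... constructor unchanged ...

    loadAll() {
        return this.repository.selectAll()
            .then((workshops) => {
                this.length = 0;
                this.push(...workshops);
            });
    }
}

class WorkshopService {

    // ... constructor unchanged ...

    getWorkshops() {
        return this.workshopCollection.loadAll()
            .then(() => this.workshopCollection);
    }
}
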
It's important to note that the template code in the component did not need changing at all: it was happy to wait until the promise resolved before rendering the workshops in the select box.

Having done all this, the tests pass wonderfully, but the UI breaks. Doh! But in an "OK" way:

(Screenshot: the browser console shows the request being blocked because the response from the back end has no Access-Control-Allow-Origin header.)
Entirely fair enough. I need to send that header back with the response. So, back to the PHP end of things: first, a test asserting that the controller returns the CORS header with the origin's address; then the controller change to make it pass.

class WorkshopControllerTest extends TestCase
{

    private WorkshopsController $controller;
    private WorkshopCollection $collection;

    protected function setUp(): void
    {
        $this->collection = $this->createMock(WorkshopCollection::class);
        $this->controller = new WorkshopsController($this->collection);
    }

    /**
     * @testdox It makes sure the CORS header returns with the origin's address
     * @covers ::doGet
     */
    public function testDoGetSetsCorsHeader()
    {
        $testOrigin = 'TEST_ORIGIN';
        $request = new Request();
        $request->headers->set('origin', $testOrigin);

        $response = $this->controller->doGet($request);

        $this->assertSame($testOrigin, $response->headers->get('Access-Control-Allow-Origin'));
    }
}

And the controller change that makes it pass:

class WorkshopsController extends AbstractController
{

    // ...

    public function doGet(Request $request) : JsonResponse
    {
        $this->workshops->loadAll();

        $origin = $request->headers->get('origin');
        return new JsonResponse(
            $this->workshops,
            Response::HTTP_OK,
            [
                'Access-Control-Allow-Origin' => $origin
            ]
        );
    }
}

That sorts it out. And… we're code complete.

But before I finish, I'm gonna do an integration test of that DAO method, just to automate proof that it does what it needs to do. I don't count "looking at the UI and going 'seems legit'" as testing.

import {expect} from "chai";
import Config from "../../src/workshopRegistration/Config";
import WorkshopsDAO from "../../src/workshopRegistration/WorkshopsDAO";
const client = require('axios').default;

describe("Integration tests of WorkshopsDAO", () => {

    it("returns the correct records from the API", async () => {
        let directCallObjects = await client.get(Config.workshopsUrl);

        let daoCallObjects = await new WorkshopsDAO(client, Config).selectAll();

        expect(daoCallObjects).to.eql(directCallObjects.data);
    });
});

And that passes.

OK we're done here. I learned a lot about JS promises in this exercise: I hid a lot of it from you here, but working out how to get those tests working for async stuff coming from the DAO took me an embarrassing amount of time and frustration. Once I nailed it, it all made sense though, and I was kinda like "duh, Cameron". So that's… erm… good…?

The next thing we need to do is to sort out how to write the data to the DB. But before we do that, I'm feeling a bit naked without any data validation on either the form or the web service. Well: the web service end of things doesn't exist yet, but I'm going to start that work by putting the expected validation in. But I'll put some on the client side first. As with most of the stuff in this current wodge of work: I do not have the slightest idea of how to do form validation with Vue.js. But tomorrow I will start to find out (see "Vue.js and TDD: adding client-side form field validation" for the results of that). Once again, I have a beer appointment to get myself to for now.

Righto.

--
Adam