Saturday, 17 April 2021

Using Docker to strum up an Nginx website serving CFML via Lucee

G'day

OK so this is not the blog article I expected to be writing, had you asked me two weeks ago. But here we are. I'll go into the reason why I'm doing this a bit later.

This will be a CFML-oriented version of the "VueJs/Symfony/Docker/TDD series":

  • Nginx website.
  • Proxying for Lucee as the CFML-processing application layer.
  • Running inside Docker containers.
  • TDD the whole enterprise.

If I have time (and any will-to-live remaining), I will add this lot into the mix:

  • Work out how Forgebox works, which seems to be CFML's equivalent of Composer / NPM
  • Use that to install Testbox (CFML-based Jasmine-ish testing framework)
  • And also install CFWheels, a CFML-based framework akin to Ruby on Rails.

I'll also be returning to SublimeText for the first time in seven-or-so years. Apparently it's still a reasonable text editor to use for CFML code.

For those few of you that have started paying attention to me more recently: CFML is not new to me. I spent over a decade as a CFML developer (2001-2013). I shifted to PHP because my erstwhile employer (HostelBookers, a CFML shop) was bought by Hostelworld (a PHP shop) back then. I've been doing PHP since. That said, I am very rusty with CFML, and - well, hopefully - the CFML landscape has moved on since then too. So whilst I'm not a newbie with CFML stuff, getting Lucee running in a container, Forgebox and CFWheels is entirely new to me.

I'm still gonna be using PHP to do the initial testing of things, because I won't have Testbox running for the first while. So I'll need a PHP container too. I'll refactor this out once I get Testbox in.

It needs a PHP container for running tests

There's nothing new here, and what I've done is largely irrelevant to this exercise, so I'll just list the files and link through to the current state of the files in source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── php-cli
│   │   ├── root_home
│   │   │   ├── .bash_history
│   │   │   ├── .bashrc
│   │   │   ├── .gitignore
│   │   │   └── .vimrc
│   │   └── Dockerfile
│   ├── .env
│   └── docker-compose.yml
├── test
│   └── php
│       └── SelfTest.php
├── .gitignore
├── LICENSE
├── README.md
├── composer.json
├── composer.lock
└── phpunit.xml.dist

5 directories, 14 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The test is just this:

use PHPUnit\Framework\TestCase;

/** @testdox Tests PHPUnit install */
class SelfTest extends TestCase
{
    /** @testdox it self-tests PHPUnit */
    public function testSelf()
    {
        $this->assertTrue(true);
    }
}

And it passes:

root@18c5eabeb9f2:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests PHPUnit install
 ✔ it self-tests PHPUnit

Time: 00:00.002, Memory: 6.00 MB

OK (1 test, 1 assertion)
root@18c5eabeb9f2:/usr/share/cfml-in-docker#

In this instance I could not actually run the test before I implemented the work, for what should be obvious reasons. However I followed the TDD mindset of just doing the least amount of work possible to make the test pass. I also monkeyed around with the test itself to see it fail if I had an assertion that was no good (I changed the argument to that assertion to false, basically).

The TDD lesson here is: I've set myself a case - "It needs a PHP container for running tests" - and only resolved that case before pausing and assessing the situation. I also didn't move any further forward than I needed to in order to address that case.

It returns a 200-OK from requests to /gdayWorld.html

Next I need an Nginx container running, and serving a test file. Well: I need the test for that.

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

/** @testdox Tests Nginx is serving html */
class NginxTest extends TestCase
{
    /** @testdox It serves gdayWorld.html as 200-OK */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.backend/']);

        $response = $client->get('gdayWorld.html');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

Once again, I'll largely just list the added files here, and link through to source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── nginx
│   │   ├── root_home
│   │   │   ├── .gitignore
│   │   │   ├── .profile
│   │   │   └── .vimrc
│   │   ├── sites
│   │   │   └── default.conf
│   │   ├── Dockerfile
│   │   └── nginx.conf
│   └── [...]
├── public
│   └── gdayWorld.html
├── test
│   └── php
│       ├── NginxTest.php
│       └── [...]
├── var
│   └── log
│       └── nginx
│           ├── .gitkeep
│           ├── access.log
│           └── error.log
└── [...]

12 directories, 25 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The contents of gdayWorld.html should be obvious from the test, but it's just:

G'day world!

OK so that was all stuff I've done a few times before now. Next… Lucee

It has a Lucee container which serves CFML code via its internal web server

I'm kinda guessing at this next case. I'm gonna need to have a Lucee container, this is a cert. And I recollect Adobe's ColdFusion CFML engine ships with a wee stubbed web server for dev use. I can't recall if Lucee does too. I'm assuming it does. You can see how prepared I am for all this: I've not even RTFMed about the Lucee Docker image on DockerHub yet (I did at least make sure there was one though ;-). The idea is that there's a two-step here: getting the Lucee container up and doing "something", and after that, wire it through from Nginx. But that's a separate case.

Right so this is all new to me, so I'll actually list the files I've created. First the test:

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

/** @testdox Tests Lucee is serving cfml */
class LuceeTest extends TestCase
{
    /** @testdox It serves gdayWorld.cfm as 200-OK on Lucee's internal web server */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.lucee:8888/']);

        $response = $client->get('gdayWorld.cfm');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

It's the same as the HTML one except I'm hitting a different host, and on port 8888 (I have now done that RTFM I mentioned, and found the port Lucee serves on by default).

The Dockerfile is simple:

FROM lucee/lucee:5.3

RUN apt-get update
RUN apt-get install vim --yes

COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc

WORKDIR /var/www

EXPOSE 8888

It's more complex than it needs to be as I always like vi installed in my containers because I inevitably need it (this is prescient as it turns out: I definitely did need it).

And the relevant bit from docker-compose.yml:

lucee:
    build:
        context: ./lucee
    volumes:
        - ../public:/var/www
        - ../var/log/tomcat:/usr/local/tomcat/log
        - ../var/log/lucee:/opt/lucee/web/logs
        - ./lucee/root_home:/root
    ports:
        - "8888:8888"
    stdin_open: true
    tty: true
    networks:
        backend:
            aliases:
                - cfml-in-docker.lucee

That's mostly just me mapping logging directories back to my host for convenience's sake.
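(The stdin_open and tty settings are just the Compose equivalents of docker run -it: Tomcat doesn't need them to run; they're only there so I can attach an interactive shell to the container when I need to.)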

Currently my test file - gdayWorld.cfm - is just plonked in the web root, which is not where one would normally put CFML files (except the application entry point file I mean), but it'll do for now:

<cfset message="G'day world!">
<cfoutput>#message#</cfoutput>

And that's it. After rebuilding my containers and running the tests, everything passes now:

root@a034afe670d4:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Lucee is serving cfml
 ✔ It serves gdayWorld.cfm as 200-OK on Lucee's internal web server

Tests Nginx is serving html
 ✔ It serves gdayWorld.html as 200-OK

Tests PHPUnit install
 ✔ it self-tests PHPUnit

Time: 00:00.028, Memory: 6.00 MB

OK (3 tests, 5 assertions)
root@a034afe670d4:/usr/share/cfml-in-docker#

It proxies .cfm requests from Nginx to Lucee

OK so Lucee is working. Painless. Now I need to tell Nginx about it. I have NFI how to do that… I hope Google and/or Stack Overflow does.

After some googling, my recollection that some sort of connector was needed to run between the web server and the application server seems outdated, and all I need to do is use proxy_pass from Nginx to the address Lucee has configured Tomcat to listen on (Lucee runs atop Tomcat: it's basically a Java servlet). I can never remember the syntax for this, but fortunately Nando Breiter has documented it in his article "Using Nginx With ColdFusion or Lucee". It's also reminded me of a few other cases I need to test for, but first the baseline. Well, actually, first the test:

/** @testdox It proxies a CFM request to Lucee */
public function testCfmReturns200OK()
{
    $client = new Client(['base_uri' => 'http://cfml-in-docker.frontend/']);

    $response = $client->get('gdayWorld.cfm');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
}

This is the same as the previous one except I'm using the Nginx website's host, and on port 80. Also note I've changed the name of the host to be cfml-in-docker.frontend not cfml-in-docker.backend. This is cosmetic, and just to distinguish between references to stuff happening on the network within the containers (called backend), and addresses browsed from the public-facing websites.
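To be clear about how that resolves: both hostnames point at the same Nginx container; they're just two network aliases on its entry in docker-compose.yml. A sketch of what I mean (the build / ports details here are abridged guesses at the rest of the service definition; the aliases are the point):

nginx:
    build:
        context: ./nginx
    ports:
        - "80:80"
    networks:
        backend:
            aliases:
                # addresses browsed from the public-facing side (and the E2E tests)
                - cfml-in-docker.frontend
                # container-to-container traffic on the internal network
                - cfml-in-docker.backend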

The implementation for this case is simply this, in the website config default.conf:

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name;
}

Adding this and restarting Nginx has that test passing, as well as not interfering with any non-CFML requests (ie: the other Nginx tests still pass).

This config has some shortfalls though. Well I say "shortfalls". Basically I mean it doesn't work properly for a real-world situation. More test cases…

It passes query values to Lucee

The test demonstrates this:

/** @testdox It passes query values to Lucee */
public function testCfmReceivesQueryParameters()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('queryTest.cfm?testParam=expectedValue');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("expectedValue", trim($content));
}

and queryTest.cfm is just this:

<cfoutput>#URL.testParam#</cfoutput>

If I run this test I get a failure because the 500 INTERNAL SERVER ERROR response from Lucee doesn't match the expected 200. This happens because Lucee can't see that param value, because Nginx is not passing it on. Easily fixed:

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}
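($is_args evaluates to a literal ? when the request has a query string, and to an empty string otherwise; $args is the query string itself. So this just re-appends whatever query was on the original request, without adding a stray ? when there wasn't one.)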

It passes the upstream remote address to Lucee

As it currently stands, Lucee will be receiving all requests as if they came from Nginx, rather than from whoever requested them. This is the nature of proxying, but we can work around this. First the test to set expectations:

/** @testdox It passes the upstream remote address to Lucee */
public function testLuceeReceivesCorrectRemoteAddr()
{
    $directClient = new Client([
        'base_uri' => 'http://cfml-in-docker.lucee:8888/',
        'http_errors' => false
    ]);
    $response = $directClient->get('remoteAddrTest.cfm');
    $expectedRemoteAddr = $response->getBody()->getContents();

    $proxiedClient = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $testResponse = $proxiedClient->get('remoteAddrTest.cfm');

    $this->assertEquals(200, $testResponse->getStatusCode());
    $actualRemoteAddr = $testResponse->getBody()->getContents();
    $this->assertSame($expectedRemoteAddr, $actualRemoteAddr);
}

And remoteAddrTest.cfm is just this:

<cfoutput>#CGI.remote_addr#</cfoutput>

This is slightly more complicated than the previous tests, but only in that I can't know what the remote address is of the service running the test, because it could be "anything" (in reality, inside these Docker containers, if they're brought up in the same order with the default bridging network, then it'll always be the same, but we don't want these tests to break if unrelated config changes). The best way is to just check what the remote address is if we make the call directly to Lucee, and then expect that value if we make the same call via the Nginx proxy. As of now it fails because Lucee correctly sees the request as coming from the PHP container when we hit Lucee directly; but it sees the request as coming from the Nginx container when using Nginx's proxy. No surprise there. Fortunately Nando had the solution to this baked into his blog article already, so I can just copy and paste his work:

location ~ \.(?:cfm|cfc) {
    proxy_http_version  1.1;
    proxy_set_header    Connection "";
    proxy_set_header    Host                $host;
    proxy_set_header    X-Forwarded-Host    $host;
    proxy_set_header    X-Forwarded-Server  $host;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;     ## CGI.REMOTE_ADDR
    proxy_set_header    X-Forwarded-Proto   $scheme;                        ## CGI.SERVER_PORT_SECURE
    proxy_set_header    X-Real-IP           $remote_addr;
    expires             epoch;

    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}

And if I restart Nginx: all good. One more issue to deal with…

It passes URL path_info to Lucee correctly

Something too few people know about is that there's an optional part of a URL between the script name and the query: path info. An example is: http://example.com/script/name/path/document.html/additional/path/info?queryParam=paramValue. That path has nothing to do with the script to be executed or where it's located; it's just… some extra pathing information for the script to do something with. It's seldom used, but it's part of the spec (RFC-3875, section 4.1.5). The spec says this:

The PATH_INFO variable specifies a path to be interpreted by the CGI script. It identifies the resource or sub-resource to be returned by the CGI script, and is derived from the portion of the URI path hierarchy following the part that identifies the script itself.
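To make that concrete, the example URL from above decomposes like this (using the CGI variable names):

http://example.com/script/name/path/document.html/additional/path/info?queryParam=paramValue

SCRIPT_NAME:  /script/name/path/document.html
PATH_INFO:    /additional/path/info
QUERY_STRING: queryParam=paramValue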

Anyway, from what I could see of what I have in the Nginx config, I suspected that we were not passing that on to Lucee, so its CGI.path_info value would be blank. A test for this is easy, and much the same as the earlier ones:

/** @testdox It passes URL path_info to Lucee correctly */
public function testLuceeReceivesPathInfo()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('pathInfoTest.cfm/additional/path/info/');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("/additional/path/info/", trim($content));
}

And pathInfoTest.cfm is similarly familiar:

<cfoutput>#CGI.path_info#</cfoutput>

And as I predicted (although as we'll see below, not for the reasons I thought!) the test fails:

> vendor/bin/phpunit --testdox '--filter=testLuceeReceivesPathInfo'
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Nginx proxies CFML requests to Lucee
 ✘ It passes URL path_info to Lucee correctly
   │
   │ Failed asserting that 404 matches expected 200.
   │
   │ /usr/share/cfml-in-docker/test/php/NginxProxyToLuceeTest.php:71
   │

Time: 00:00.090, Memory: 8.00 MB


FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
Script vendor/bin/phpunit --testdox handling the test event returned with error code 1
root@29840662fdf9:/usr/share/cfml-in-docker#

At this point I disappeared down a rabbit hole of irritation, as detailed in article "Repro for Lucee weirdness". There are two bottom lines to this:

  1. For reasons best known to [someone other than me], Lucee only handles path_info on requests to index.cfm, but not to any other .cfm file! This can be shown by changing that test by renaming pathInfoTest.cfm to index.cfm, and calling that instead.
  2. Actually Nginx already handles it correctly anyhow, in that the value is passed on already, and I don't need to do anything extra to make it work (as far as Nginx is concerned, anyhow).

I can fix the situation for pathInfoTest.cfm if I hack Lucee's web.xml file (this is down at line 4643):

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <url-pattern>*.cfm</url-pattern>
    <url-pattern>*.cfml</url-pattern>
    <url-pattern>*.cfc</url-pattern>
    <url-pattern>/index.cfm/*</url-pattern>
    <url-pattern>/index.cfc/*</url-pattern>
    <url-pattern>/index.cfml/*</url-pattern>
</servlet-mapping>

I could slap a special mapping for it in there. But that's a daft way to deal with this. I'm going to just mark that test as "incomplete", and move on.
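For the record, marking it incomplete is just a one-liner at the top of the test method, using PHPUnit's markTestIncomplete (the message wording here is mine):

/** @testdox It passes URL path_info to Lucee correctly */
public function testLuceeReceivesPathInfo()
{
    $this->markTestIncomplete(
        "Lucee only handles path_info on index.cfm requests; see 'Repro for Lucee weirdness'"
    );

    // ... rest of the test, as above ...
}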

Thanks to Pete Freitag, Adam Tuttle, Zac Spitzer and Sean Corfield for putting me on the right direction for working out this particular "WTF, Lucee?" episode.


Speaking of "moving on", I said I'd get the code this far, but only progress onto the more CFML-oriented stuff if I still had will to live. Well Lucee has eroded that for now, so I'll get back to that part later, when I've stopped shaking my fist at the screen.

Righto.

--
Adam

Friday, 16 April 2021

Repro for Lucee weirdness

G'day:

I'm just having to install Lucee on my machine, and have got its Docker version up and running, but I'm seeing some weirdness with it. I was just wondering if someone else could take the time to try a quick experiment for me, and report back.

  1. In a browser-accessible directory, save this code in index.cfm:
    <cfdump var="#{
        script_name = CGI.script_name,
        path_info = CGI.path_info,
        query = url
    }#">
    
  2. Browse that file as http://[your test domain, etc]/path/to/that/index.cfm. You should see a dump with the script_name populated, and an empty path_info and query.
  3. Browse that file as http://[your test domain, etc]/path/to/that/index.cfm/extra/path/info?param=value. This time the dump should show /extra/path/info in path_info, and the param in the query struct.

    Note how it's correctly extracting the path_info value.

Now repeat the exercise, except instead of index.cfm, call the file testPathInfo.cfm, and browse to that instead.

For me, this works as expected if I'm using index.cfm. But if I use anything else and there's additional path info in the URL, I just get an error complaining the page can't be found: it's seeing the path_info as part of the script_name, rather than separating it out.

My Lucee install is a fresh one from the Lucee Docker image on DockerHub. I am only using the built-in web server ATM. However this is stopping me from sorting out the proxy_pass from Nginx… I want to get this ironed out before I move onwards with that.

Also, if anyone fancied running the experiment on CF instead of Lucee, that would be good too. But I'm mostly interested in seeing if this is just me doing something daft (if so: I'm buggered if I know what!), or if there is an issue.

Further investigations after feedback

FWIW I bit the bullet and downloaded and installed ColdFusion. It handles this situation fine: the dump correctly separates the path_info out from the script_name.

Also, thanks to Sean, Pete and Adam for their guidance below; they've identified the issue as being in Lucee's web.xml file:

    <servlet-mapping>
        <servlet-name>CFMLServlet</servlet-name>
        <url-pattern>*.cfm</url-pattern>
        <url-pattern>*.cfml</url-pattern>
        <url-pattern>*.cfc</url-pattern>
        <url-pattern>/index.cfm/*</url-pattern>
        <url-pattern>/index.cfc/*</url-pattern>
        <url-pattern>/index.cfml/*</url-pattern>
    </servlet-mapping>

So the way Lucee works is that only index.cfm (or variant) can have path_info. That's pretty weird.

I've also looked at the servlet spec, and one can only have a single wildcard in the url-pattern, so it's not possible to solve this as *.cfm/* etc. I find it odd that the servlet spec includes the path_info in the "URL" it checks for the pattern. It should only be the script_name as far as I can tell, but they do specifically use everything after the context (the first part of the URL, omitted for Lucee), up to but not including the query part of the URL. If I was a betting person, I'd say the intent here is that the pattern should be an entire subdirectory (so widgets/*), or a file type, based on extension (so *.myServlet). And the people writing the servlet spec didn't allow for the fact that the file extension is not necessarily the last thing before the ? or the end of the URL.

Still: the spec is clear in how it works, and what Lucee is trying to do with it doesn't work. I suspect they have decided path_info is only for old-skooly human-friendly URLs like this: http://example.com/index.cfm/fake/friendly/url/path/here (as opposed to just http://example.com/actually/friendly/url/path/here/). I've not seen someone use URLs like that since the early 2000s, and they should not be encouraged anyhow.

Am gonna have a quick look at what ColdFusion does with those mappings…

How ColdFusion handles it

Adobe have cheated, and seem to have patched the URL matcher so it accepts two wildcards, so in web.xml it's got this sort of thing (for each file extension variant):

<servlet-mapping id="coldfusion_mapping_6">
    <servlet-name>CfmServlet</servlet-name>
    <url-pattern>*.cfm/*</url-pattern>
</servlet-mapping>

Gets the job done.

Cheers and I appreciate the help.

Righto.

--
Adam

Sunday, 11 April 2021

TDD: eating the elephant one bite at a time

G'day:

I've got another interesting reader comment to address today. My namesake Adam Tuttle has sent through this wodge of questions, attached to my earlier article "TDD is not a testing strategy":

You weren't offering to teach anyone about TDD in this post, but hey... I'm here, you're here, I have questions... Shall we?

One of the things I struggle with w/r/t TDD is the temptation to test every. single. action. For example, a large, complex form-save. Dozens, possibly 100 fields. Whether that ends up saved via ORM entities or queries, chances are good that since the form is so large the logic is a bit beyond a dirt-simple single-CRUD query. Multiple relationships, order of operations, permissions to modify different fields, etc.

My gut reaction is to skip the unit-testing layer and jump up to an integration or E2E test: submit the form, then view the detail-view (or re-open the record for editing) and assert that the values changed have persisted and they are what you're expecting, where you're expecting it, on the latter view.

BUT doesn't that almost entirely eliminate the possibility of using mocks to make the tests fast(er), the base-state predictable, and to not leave a mess in a designated testing db/env? My (and I mean this literally!) feeble, bad-at-TDD brain doesn't comprehend what a good solution is to this problem.

Unless the solution is to not test that aspect of the code? I fully subscribe to the "100% test coverage is a fools errand" ethos, so perhaps this is something that should just not be tested; and save the testing for things that are doing "interesting" algorithmic work? (not-crud)

Since I specifically mentioned permission to edit a certain field in my example, I guess I should say that stands out to me as something I would likely want to test. Thinking about it now, my brain wants to architect a system that accepts the user object and the field name as inputs and returns a boolean for editable or not. Easy enough to implement and you're basically changing the conditional in the save method from "if user has X permission, entity.setProperty(newval)" to "if evaluatePermissions(user, property) = true, entity.setProperty(newval)", so there's no big mental leap to the next developer to read the code... but it also seems hairy to separate the permission logic from the form-save logic, not because of the separation, but because it leads towards combining the permission logic of lots of disparate and unrelated forms. I'm not seeing how that could be cleanly implemented.

So yeah. There's a can of worms for you. What do you make of that?

Nice one. There's a lot of work there, so I am going to approach it how I'd approach addressing any other requirements: a bit at a time. Like I'm doing TDD. Except I've NFI how I can write tests for a blog article, so just imagine that part. Also remember that TDD is not a testing strategy, it's a design strategy, so my TDD-ish approach here is focusing on identifying cases, and addressing them one at a time. OKOK, this is torturing my fixation with TDD a bit. Sorry.

You weren't offering to teach anyone about TDD in this post

You weren't offering to teach anyone about TDD in this post. OK, so first point. I'm always open to excuses to think about TDD practices, and how we can use them to address our work. So don't worry about that.

It needs to save a large form

For example, a large, complex form-save. Dozens, possibly 100 fields. For my convenience I am going to interpret this as two separate things: the form, and the code that processes a POST request. I suspect you were only meaning the latter. But, really, the same approach applies to both.

In seeing a large HTML form, you are not using a TDD mindset: how do I test that huge thing?! Using the TDD mindset, it's not a huge thing. It starts off being nothing. It starts off perhaps with "requests to /myForm.html return a 200-OK". From there it might move on to "It will be submitted as a POST to /processMyForm", and then to "after a successful form submission the user is redirected to /formSubmissionResults.html, and that request's status is 201-CREATED". Small steps. No form fields at all yet. But we have demonstrated that the requirements of the work so far have been tested (and implemented, and pass the tests).

Next you might start addressing a form field requirement: "it has a text field with maximum length 100 for firstName". Quickly after that you have the same case for lastName. And then there might be 20 other fields that are all text and all have a sole constraint of maxLength, so you can test all of those really quickly (but still with the same amount of care) with a data provider that passes the case variations to what is otherwise the same test. This is still super quick, and your cases still show that you have addressed the requirement. And you can demonstrate that with your test output:

  Tests of WorkshopRegistrationForm component
     ✓ should have a required text input for fullName, maxLength 100, and label 'Full name'
     ✓ should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     ✓ should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     ✓ should have a required password input for password, maxLength 255, and label 'Password'
     ✓ should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     ✓ should list the workshop options fetched from the back-end
     ✓ should have a button to submit the registration
    - should leave the submit button disabled until the form is filled


  7 passing (48ms)
  1 pending

 MOCHA  Tests completed successfully

(I've lifted that from the blog article I mention lower down).

Not all form fields are so simple. Some need to be select boxes that source their data from [somewhere]. "It has a select for favouriteColour, which offers values returned from a call to /colours/?type=favourite". This needs better testing than just name and length. "It has a password field that only accepts [rules]". Definitely needs testing discrete from the other tests. Etc.

Your form is a collection of form fields all of which will have stated requirements. If the requirements have been stated, it stands to reason you should demonstrate you've met the requirements. Both now in the first iteration of development, and that this continues to hold true during subsequent iterations (direct or indirect: basically new work doesn't break existing tests).

I cover an approach to this in article "Vue.js: using TDD to develop a data-entry form". It's a small form, but the technique scales.

Bottom line: when using TDD you don't start with a massive form.

It's a similar story on the form submission handler. The TDD process doesn't start with "holy f**k 100 form fields!", it starts off with a POST request. Or it might start with a controller method receiving a request object that represents that request. Each value in that request must have validation, and you must test that, because validation is a) critical, b) fiddly and error-prone. But you start with one field: firstName must exist, and must be between 1-100 characters. You'd have these cases:

  • It's not passed with the request at all (fail);
  • It's passed with the request but its value is empty (fail);
  • It's passed with the request and its length is 1 (pass);
  • It's passed with the request and its length is 100 (pass);
  • It's passed with the request and its length is 101 (fail);

These are requirements your client has given you: You need to test them!

The validation tests are perhaps a good example of where one might use a focused unit test, rather than a functional test that actually makes a request to /processMyForm and analyses the response. Maybe you just pass a request object, or the request body values to a validate method, and check the results.
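As a sketch of the shape of that (the FormValidator class and its API here are hypothetical, just for illustration), those five firstName cases collapse into one test method plus a data provider:

/** @dataProvider provideFirstNameCases */
public function testFirstNameValidation(array $requestBody, bool $expectedToBeValid)
{
    $validator = new FormValidator(); // hypothetical validator class

    $result = $validator->validate($requestBody);

    $this->assertSame($expectedToBeValid, $result->isValid());
}

public function provideFirstNameCases(): array
{
    return [
        "not passed with the request at all" => [[], false],
        "passed but empty" => [["firstName" => ""], false],
        "length 1" => [["firstName" => "X"], true],
        "length 100" => [["firstName" => str_repeat("X", 100)], true],
        "length 101" => [["firstName" => str_repeat("X", 101)], false],
    ];
}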

Once validation is in place, you'd need to vary the response based on those results: "when validation fails it returns a 400-BAD-REQUEST"; "when validation fails it returns a non-empty-array errors with validation failure details", etc. All actual requirements you've been given; all need to be tested.

Then you'd move on to whatever other business logic is needed, step by step, until you get to a point where yer firing some values into storage or whatever, and you check the expected values for each field are passed to the right place in storage. Although I'd still use a mock (or spy, or whatever the precise term is), and just check what values it receives, rather than actually letting the test write to storage.

It also has end-to-end acceptance tests

At this point you can demonstrate the requirements have been tested, and you know they work. I'd then put an end-to-end happy path test on that (maybe all the way from automating the form submission with a virtual web client, maybe just by sending a POST request; either is valid). And then I'd do an end-to-end unhappy path test, eg: when validation fails are the correct messages put in the correct place on the form, or whatever. Maybe there are other valid variations of end-to-end tests here, but I would not think to have an end-to-end test for each form field, and each validation rule. That'd be fiddly to write, and slow to run.

It does need to cover all the behaviour

I fully subscribe to the "100% test coverage is a fools errand" ethos. Steady on there. There's 100% and there's 100%. This notion is applied to lines-of-code, or 100% of methods, or basically implementation detail stuff. And it's also usually trotted out by someone who's looking at the code after it's been done, and is faced with a whole pile of testing to write and trying to work out ways of wriggling out of it. This is no slight on you, Adam (Tuttle), it's just how I have experienced devs rationalise this with me. If one does TDD / BDD, then one is not thinking about lines of code when one is testing. One is thinking about behaviour. And the behaviour has been requested by a client, and the behaviour needs to work. So we test the behaviour. Whether that's 1 line of code or 100 is irrelevant. However the test will exercise the code, because the code only ever came into existence to address the case / behaviour being delivered. Using TDD generally results in ~100% of the code being covered because you don't write code you don't need, which is the only time code might not be covered. How did that code get in there? Why did you write it? Obviously it's not needed so get rid of it ;-).

The key here is that 100% of behaviour gets covered.

Nothing is absolute though. There will be situations where some code - for whatever reason - is just not testable. This is rare, but it happens. In that case: don't get hung up by it. Isolate it away by itself, and mark it as not covered (eg in PHPUnit we have @codeCoverageIgnore), and move on. But be circumspect when making this decision: the situations where one genuinely can't test some code are very rare. I find devs quite often seem to confuse "can't" with "don't feel like ~". Two different things ;-)
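For reference, that annotation just goes in the docblock of the offending method (there are also @codeCoverageIgnoreStart / @codeCoverageIgnoreEnd for marking just part of a method, but isolating the untestable bit into its own method is tidier anyhow). The method here is obviously just a placeholder:

/** @codeCoverageIgnore */
private function genuinelyUntestableThing(): void
{
    // whatever can't be exercised by a test lives here, by itself
}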

I'll also draw you back to an article I wrote ages ago about the benefits of 100% test coverage: "Yeah, you do want 100% test coverage". TL;DR: it shows two coverage displays, and asks: can you spot the new code that is accidentally missing test coverage?


Accidents are easy to spot when a previously all-green board starts being not all-green.

It uses emergent design to solve large problems

[My] brain wants to architect a system that [long and complicated description follows]. One of the premises of TDD is that you let the solution architect itself. I'm not 100% behind this as I can't quite see it yet, but I know I do find it really daunting if my requirement seems to be "it all does everything I need it to do", and I don't know where to start with that. This was my real life experience doing that Vue.js stuff I linked to above. I really did start with "yikes this whole form thing is gonna be a monster!? I don't even know where to start!". I pushed the end result I thought I might have to the back of my mind, especially the architectural side of things (which will probably more define itself in the refactor stage of things, not the red / green part).

And I started by adding a route for the form, and then I responded to request to that route with a 200-OK. And then moved on to the next bite of the elephant.

HTH.

--
Adam (Cameron)

Thursday, 8 April 2021

TDD and external services

G'day

You might have noticed I spend a bit of my time encouraging people to use TDD, or at the very least making sure yer code is tested somehow. But use TDD ;-)

As an interesting aside, I recently failed a technical interview because the interviewer didn't feel I was strong enough at the testing side of things. Given what I see around the industry… that seems to be a moderately high bar yer setting for yerselves there, peeps. Or perhaps I'm just shit at articulating myself. Hrm. But anyway.

OK, so I rattled out a quick article a few days ago - "TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode" - which revisits some existing ground and by-and-large is not relevant to what I'm going to say here, other than the "TDD & professionalism" being why I bang on about it so much. And you might think I bang on about it here, but I also bang on about it at work (when I have work I mean), and in my background conversations too. I try to limit it to only my technical associates, that said.

Right so Mingo hit me up in a comment on that article, asking this question:

Something I ran into was needing to access the external API for the tests and I understand that one usually uses mocking for that, right? But, my question is then: how do you then **know** that you're actually calling the API correctly? Should I build the error handling they have in their API into my mocked up API as well (so I can test my handling of invalid inputs)? This feels like way too much work. I chose to just call the API and use a test account on there, which has its own issues, because that test account could be set up differently than the multiple different live ones we have. I guess I should just verify my side of things, it's just that it's nice when it's testing everything together.

Yep, good question. With new code, my approach to TDD is based on the public interface doing what's been asked of it. One can see me working through this process in my earlier article "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements". Here I'm building a web service end point - by definition the public interface to some code - and I am always hitting the controller (via the routing). And whatever I start testing, I just "fake it until I make it". My first test case here is "It needs to return a 200-OK status for GET requests on the /workshops endpoint", and the test is this:

/**
 * @testdox it needs to return a 200-OK status for successful GET requests
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturns200()
{
    $this->client->request('GET', '/workshops/');

    $this->assertEquals(Response::HTTP_OK, $this->client->getResponse()->getStatusCode());
}

To get this to pass, the first iteration of the implementation code is just this:

public function doGet() : JsonResponse
{
    return new JsonResponse(null);
}

The next case is "It returns a collection of workshop objects, as JSON", implemented thus:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturnsJson()
{
    $workshops = [
        new Workshop(1, 'Workshop 1'),
        new Workshop(2, 'Workshop 2')
    ];

    $this->client->request('GET', '/workshops/');

    $resultJson = $this->client->getResponse()->getContent();
    $result = json_decode($resultJson, false);

    $this->assertCount(count($workshops), $result);
    array_walk($result, function ($workshopValues, $i) use ($workshops) {
        $workshop = new Workshop($workshopValues->id, $workshopValues->name);
        $this->assertEquals($workshops[$i], $workshop);
    });
}

And the code to make it work shows I've pushed the mocking one level back into the application:

class WorkshopsController extends AbstractController
{

    private WorkshopCollection $workshops;

    public function __construct(WorkshopCollection $workshops)
    {
        $this->workshops = $workshops;
    }

    public function doGet() : JsonResponse
    {
        $this->workshops->loadAll();

        return new JsonResponse($this->workshops);
    }
}

class WorkshopCollection implements \JsonSerializable
{
    /** @var Workshop[] */
    private $workshops;

    public function loadAll()
    {
        $this->workshops = [
            new Workshop(1, 'Workshop 1'),
            new Workshop(2, 'Workshop 2')
        ];
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

(I've skipped a step here… the first iteration could/should be to mock the data right there in the controller, and then refactor it into the model, but this isn't about refactoring, it's about mocking).

From here I refactor further, so that instead of having the data itself in loadAll, the WorkshopCollection calls a repository, and the repository calls a DAO, which for now ends up being:

class WorkshopsDAO
{
    public function selectAll() : array
    {
        return [
            ['id' => 1, 'name' => 'Workshop 1'],
            ['id' => 2, 'name' => 'Workshop 2']
        ];
    }
}

The next step is where Mingo's question comes in. The next refactor is to swap out the mocked data for a DB call. We'll end up with this:

class WorkshopsDAO
{
    private Connection $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function selectAll() : array
    {
        $sql = "
            SELECT
                id, name
            FROM
                workshops
            ORDER BY
                id ASC
        ";
        $statement = $this->connection->executeQuery($sql);

        return $statement->fetchAllAssociative();
    }
}

But wait: if we do that, our unit tests will be hitting the DB. Which we are not gonna do. We've run out of things to directly mock as we're at the lower boundary of our application, and the connection object is "someone else's code" (Doctrine/DBAL in this case). We can't mock that, but fortunately this is why I have the DAO tier. It acts as the frontier between our app and the external service provider, and we still mock that:

public function testDoGetReturnsJson()
{
    $workshopDbValues = [
        ['id' => 1, 'name' => 'Workshop 1'],
        ['id' => 2, 'name' => 'Workshop 2']
    ];

    $this->mockWorkshopDaoInServiceContainer($workshopDbValues);

    // ... unchanged ...

    array_walk($result, function ($workshopValues, $i) use ($workshopDbValues) {
        $this->assertEquals($workshopDbValues[$i], $workshopValues);
    });
}

private function mockWorkshopDaoInServiceContainer($returnValue = []): void
{
    $mockedDao = $this->createMock(WorkshopsDAO::class);
    $mockedDao->method('selectAll')->willReturn($returnValue);

    $container = $this->client->getContainer();
    $workshopRepository = $container->get('test.WorkshopsRepository');

    $reflection = new \ReflectionClass($workshopRepository);
    $property = $reflection->getProperty('dao');
    $property->setAccessible(true);
    $property->setValue($workshopRepository, $mockedDao);
}

We just use a mocking library (baked into PHPUnit in this case) to create a runtime mock, and we put that into our repository.
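(An aside: the reflection approach works regardless of framework. I believe Symfony's test container also lets you swap a service for a mock via $container->set(), with some caveats about services that have already been initialised, which is arguably tidier; the reflection version just doesn't depend on any container features.)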

The tests pass, the DB is left alone, and the code is "complete" so we can push it to production perhaps. But we are not - as Mingo observed - actually testing that what we are asking the DB to do is being done. Because all our tests mock the DB part of things out.

The solution is easy, but it's not done via a unit test. It's done via an integration test (or end-to-end test, or acceptance test, or whatever you wanna call it), which hits the real endpoint, which queries the real database, and gets the real data. Adjacent to that in the test, we hit the DB directly to fetch the records we're expecting, and then we check that the JSON the endpoint returns represents the same data we manually fetched from the DB. This tests the SQL statement in the DAO, that the data fetched models OK in the repo, and that the model (WorkshopCollection here) applies whatever business logic is necessary to the data from the repo before passing it back to the controller to return with the response, which was requested via the external URL. IE: it tests end-to-end.

public function testDoGetExternally()
{
    $client = new Client([
        'base_uri' => 'http://fullstackexercise.backend/'
    ]);

    $response = $client->get('workshops/');
    $this->assertEquals(Response::HTTP_OK, $response->getStatusCode());
    $workshops = json_decode($response->getBody(), false);

    /** @var Connection */
    $connection = static::$container->get('database_connection');
    $expectedRecords = $connection->query("SELECT id, name FROM workshops ORDER BY id ASC")->fetchAll();

    $this->assertCount(count($expectedRecords), $workshops);
    array_walk($expectedRecords, function ($record, $i) use ($workshops) {
        $this->assertEquals($record['id'], $workshops[$i]->id);
        $this->assertSame($record['name'], $workshops[$i]->name);
    });
}

Note that even though I'm saying "it's not a unit test, it's an integration test", I'm still implementing it via PHPUnit. The testing framework should just provide testing functionality: it should not dictate what kind of testing you implement with it. And similarly not all tests written with PHPUnit are unit tests. They are xUnit style tests, eg: in a class called SomethingTest, and the methods are prefixed with test and use assertion methods to implement the test constraints.

Also: why don't I just use end-to-end tests then? They seem more reliable? Yep they are. However they are also more fiddly to write as they have more set-up / tear-down overhead, so they take longer to write. Also they generally take longer to run, and given TDD is supposed to be a very quick cadence of test / run / code / run / refactor / run, the less overhead the better. The slower your tests are, the more likely you are to switch to writing code and testing later once you need to clear your head. In the meantime your code design has gone out the window. Also unit tests are more focused - addressing only a small part of the codebase overall - and that has merit in itself. Also I used a really really trivial example here, but some end-to-end tests are really very tricky to write, given the overall complexity of the functionality being tested. I've been in the lucky place that at my last gig we had a dedicated QA development team, and they wrote the end-to-end tests for us, but this also meant that those tests were executed after the dev considered the tasks "code complete", and QA ran the tests to verify this. There is no definitive way of doing this stuff, that said.

To round this out, I'm gonna digress into another chat I had with Mingo yesterday:

Normally I'd say this:

Unit tests
Test logic of one small part of the code (maybe a public method in one class). If you were doing TDD and about to add a condition into your logic, you'd write a unit test to cover the new expectations that the condition brings to the mix.
Functional tests
These are a subset of unit tests which might test a broader section of the application, eg from the public frontier of the application (so like an endpoint) down to where the code leaves the system (to a logger, or a DB, or whatever). The difference between unit tests and functional tests - to me - is just how distributed the logic being tested is throughout the system.
Integration tests
Test that the external connections all work fine. So if you use the app's DB configuration, the correct database is usable. I'd personally consider a test an integration test if it only focused on a single integration.
Acceptance tests(or end-to-end tests)
Are to integration tests what functional tests are to unit tests: a broader subset. That test above is an end-to-end test, it tests the web server, the application and the DB.

And yes I know the usages of these terms vary a bit.

Furthermore, considering the distinction between BDD and TDD:

  • The BDD part is the nicely-worded case labels, which in theory (but seldom in practice, I find) are written in direct collaboration with the client user.
  • The TDD part is when in the design-phase they are created: with TDD it's before the implementation is written; I am not sure whether in BDD it matters or is stipulated.
  • But both of them are design / development strategies, not testing strategies.
  • The tests can be implemented as any sort of test, not specifically unit tests or functional tests or end-to-end tests. The point is the test defines the design of the piece of code being written: it codifies the expectations of the behaviour of the code.
  • BDD and TDD tests are generally implemented via some unit testing framework, be it xUnit (testMyMethodDoesSomethingRight), or Jasmine-esque (it("does something right", function (){})).

One can also do testing that is not TDD or BDD, but it's a less than ideal way of going about things, and I would imagine it results in subpar tests, fragmented test coverage, and tests that don't really help understand the application, so are harder to maintain in a meaningful way. But they are still better than no tests at all.

When I am designing my code, I use TDD, and I consider my test cases in a BDD-ish fashion (except I do it on the client's behalf generally, and sadly), and I use PHPUnit (xUnit) to do so in PHP, and Mocha (Jasmine-esque) to do so in JavaScript.

Hopefully that clarifies some things for people. Or people will leap at me and tell me where I'm wrong, and I can learn the error in my ways.

Righto.

--
Adam

Tuesday, 6 April 2021

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD. Is. Not. A. Testing. Strategy.

Just a passing thought. Apropos of absolutely nothing. 'Onest guv.(*)

Dunno if it occurred to you, but that TDD thing? It's not a testing strategy. It's a design strategy.

Let's look at the name. In the name, test-driven is a compound adjective: it just modifies the subject. The subject is development. It's about development. It's not about testing.

It's probably better described by BDD, although to me that's a documentation strategy, rather than a development (or testing) one. BDD is about arriving at the test cases (with the client), TDD is about implementing those cases.

The purpose of TDD is to define a process that leads you - as a developer - to pay more attention to the design of your code. It achieves this by forcing you to address the requirement as a set of needs (or cases), eg "it needs to return the product of the two operands". Then you demonstrate your understanding of the case by demonstrating what it is for the case to "work" (a test that when you pass 2 and 3 to the function it returns 6), and then you implement the code to address that case. Then you refine the case, refactor the implementation so it's "nicer", or move on to the next case, and cycle through that one. Rinse and repeat.
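Taking that multiplication case as a worked example (PHPUnit here, but the framework is beside the point, and the function is obviously just for illustration), the first pass through the cycle is as small as this:

/** @testdox it returns the product of the two operands */
public function testReturnsProductOfOperands()
{
    $this->assertSame(6, multiply(2, 3));
}

// and the least implementation needed to address that case:
function multiply(int $x, int $y): int
{
    return $x * $y;
}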

But all along the object of the exercise is to think about what needs to be done, break it into small pieces, and code just what's needed to implement the customer need. It also provides a firm foundation to be able to safely refactor the code once it's working. You know: the bit that you do to make your code actually good; rather than just settling for "doesn't break", which is a very low bar to set yourself.

That you end up with repeatable tests along the way is a by-product of TDD. Not the reason you're doing it. Although obviously it's key to offering that stability and confidence for the refactor phase.

Too many people I interact with, when they're explaining why it's OK they don't do TDD [because reasons], fall back to the validity / longevity of the tests. It's… not about the tests. It's about how you design your solutions.

Lines of code are not a measure of productivity

Tangential to this, we all know that LOC are not a measure of productivity. There's not a uniform relationship between one line of code and another adjacent line of code. Ten lines of code in one logic block that represent the implementation of a method are likely to represent less of a productivity burden than a single line of code nested 14 levels deep in some flow-control-logic monstrosity. We all know this. Not all lines of code are created equal. More is definitely not better. But fewer is also not intrinsically better. It's just an invalid metric.

So why is it that so many people are prepared to count the lines of code a test adds to the codebase as a rationalisation (note: not a justification, because it's invalid) as to why they don't have time to write that test? Or that the test represents an undue burden in the codebase. Why are they back to measuring productivity with LOC? Why won't they let it occur to them that - especially when employing TDD - the investment in the LOC for the test code reduces the investment in the LOC for the production code? And note I am not meaning this as a benefit that one only realises over time having amortised it over a long code lifespan. I mean on the first iteration of code<->test<->release, because the bouncing back and forth between each step there will be reduced. Even for code where this might be (although probably won't be) the only iteration the production code sees.

It's just "measure twice, cut once" for code. Of course one doesn't have the limitation in code that one can only cut once; the realisation here needs to be that "measuring" takes really a lot less time than "cutting" when it comes to code.

In closing, if you are rationalising to me (or, in reality: to yourself) why you don't do TDD, and that rationalisation involves lines of code or how often code will be revisited, then you are not talking about TDD. You are pretty much just setting up a strawman to tilt at to make yourself feel better. Hopefully that tactic will seem a little more transparent to you now.

Design your code. Measure your code.

Righto.

--
Adam

(*) that's a lie. It's obviously a retaliation to a coupla comments I've been getting on recent articles I've written about TDD.

Sunday, 4 April 2021

Unit testing: tests are not much bloody use if they always pass

G'day:

I started back on the next article of my VueJS / Symfony / etc series this morning. And now it's 18:24 and I've made zero progress. Well, I've written a different blog article in the middle of that ("TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode"), but that was basically just a procrastinatory exercise, avoiding getting down to the other work.

I'm currently laughing (more "nervous giggling") at my mental juxtaposition of "TDD & professionalism" from that earlier article and the title of this one. I'm not feeling very professional round about now.

OK so I sat down to get cracking on this new article, and the first thing I did was re-run my tests to make sure they were all still working. This is largely due to some issues I had with the Vue Test Utils library about a week ago, which I will discuss in that next article. Anyhow, everything was green. All good.

Next I opened my Vue component file, and remembered a slight tweak I wanted to make to my code. I have this (in WorkshopRegistrationForm.vue):

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM ? "Register" : "Processing&hellip;";
},

I'm not in love with those hard-coded strings there; I want to extract them and use named constants instead (same as with the form-state constants I already have there).
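Ie: the end state I'm after is something like this (the constant names are just my first guess at this stage):

const SUBMIT_BUTTON_LABEL_REGISTER = "Register";
const SUBMIT_BUTTON_LABEL_PROCESSING = "Processing&hellip;";

// ...

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM
        ? SUBMIT_BUTTON_LABEL_REGISTER
        : SUBMIT_BUTTON_LABEL_PROCESSING;
},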

The first thing I did was to locate the test for when the button switches to "Processing…", and update it to be broken so I can expect the change. Basically I figured I'd change the label to be something different, see the test fail, update the code to use a constant with the "different" value, see the tests pass, and then change the test back to expect the "Processing…" value, and change the const value in the code to match. Sometimes that's all the test change that's needed. And in hindsight I'm glad I did it.

The test method is thus (from test/unit/workshopRegistration.spec):

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let lastLabel;
        component.vm.$watch("submitButtonLabel", (newValue) => {
            lastLabel = newValue;
        });

        let lastFormState;
        component.vm.$watch("isFormDisabled", (newValue) => {
            lastFormState = newValue;
        });

        await submitPopulatedForm();

        expect(lastLabel).to.equal("Processing&hellip;");
        expect(lastFormState).to.be.true;
    });
});

The line in question is the second-to-last expectation, and I just changed it to be:

expect(lastLabel).to.equal("Processing&hellip;TEST_WILL_FAIL");

And I ran the test:

 DONE  Compiled successfully in 3230ms

  [=========================] 100% (completed)

 WEBPACK  Compiled successfully in 3230ms

 MOCHA  Testing...



  Testing WorkshopRegistrationForm component
    Testing form submission
      ✓ should disable the form and indicate data is processing when the form is submitted


  1 passing (52ms)

 MOCHA  Tests completed successfully

root@9b1e15054be3:/usr/share/fullstackExercise#

Umm… hello?

Note: in the real situation I ran all the tests. It's not just a case of me running the wrong test or something completely daft. Although bear with me, there's def some daftness to come.

I did some fossicking around and putting some console.log entries about the place, and narrowed it down to how I had "fixed" these tests the last time I had issues with them. Previously the tests were running too quickly, and the workshop listing had not been returned from the remote call in time for the test to try to submit the form, so any tests that relied on filling-out the form went splat cos there were not (yet) any workshops to select. OKOK, hang on, this is what I'm talking about: the form has a select of workshop options, and those come from a remote call, so the data arrives asynchronously.

My fix was this bit in that test:

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        //...
    });
});

I was being "clever": watching for when the workshops data finally arrived, waiting for the options to populate, and then running the test code. A whole bunch of the tests needed this. Now, I hasten to add that I did thoroughly test this strategy when I updated all the tests: I made them all fail one of their expectations, watched the tests fail, then fixed the assertions and watched them pass. It's not like I made this change and just went "yeah that (will) work OK on my machine".

So what was the problem? Can you guess? Looking now, the tests do make a certain assumption.

Well. My original issue was that the code I was testing was running slowly, so I changed the tests to wait for a change, and then run. And last week I tweaked my Docker settings to speed up all my containers. Now the code isn't slow. So now the workshops data is already loaded before the test code gets to that watch. So… there's nothing to watch: I started watching too late. The watch callback never fires, the assertions inside it never run, and Mocha reports a pass because the test function itself completed without error. I proved this to myself by slowing the remote call down again, and suddenly the tests started working again (ie: that test started to fail like I wanted it to).
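
To make the trap really clear, here's a minimal sketch (mine, for illustration; not code from the project):

// if "workshops" has already been set by the time this watcher is registered,
// the callback - and its failing assertion - never executes, and Mocha
// reports the test as passing
it("looks like a test, passes like a stub", () => {
    component.vm.$watch("workshops", () => {
        expect(true).to.be.false; // would fail… if it ever ran
    });
});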

It occurred to me then that I had solved the original issue the wrong way. I was thinking synchronously about an asynchronous problem. I can't know whether the data will arrive before or after my test runs; just that at some point it is promised to arrive. Aha!

The data was already coming back in a promise (from WorkshopDAO.js):

selectAll() {
    return this.client.get(this.config.workshopsUrl)
        .then((response) => {
            return response.data;
        });
}

The problem is that by the time it bubbles back through DAO › Repository › Service › Component, I'd ditched the promise and just waited for the value (WorkshopRegistrationForm.vue):

async mounted() {
    this.workshops = await this.workshopService.getWorkshops();
},

And I needed that this.workshops to just be the eventual array of objects, because I have a v-for looping over it. And v-for ain't clever enough to take the promise of an array; it needs the actual array (this is from the same file, just further up at line 82):

<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I knew what I needed to do in the test. Instead of the watch, I just needed to append another then handler to the promise. Then, whether or not the data had arrived back yet, the handler would run either straight away or once the data got there. But how do I get hold of that promise?

In the end I cheated (again, same file, but a new version of it):

data() {
    return {
        //...
        promisedWorkshops: null,
        workshops: [],
        //...
    };
},
//...
async mounted() {
    this.promisedWorkshops = this.workshopService.getWorkshops();
    this.workshops = await this.promisedWorkshops;
},

I put the promise into the component's data as well as the values :-)

And the test becomes (from test/unit/workshopRegistration.spec again):

it("should disable the form and indicate data is processing when the form is submitted", async () => {
    await component.vm.promisedWorkshops.then(async () => {
        await component.vm.$nextTick();
        // …
    });
});
As I said above, I just slap all the code in a then handler instead of a watch callback. The rest of the code is the same. I need to wait for that tick because the options don't render until the next Vue-tick after the data arrives. The full reworked test ends up looking like the sketch below.
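
Here's the whole thing sketched out, on the (stated-above) assumption that the handler body is unchanged from the $watch version shown earlier:

it("should disable the form and indicate data is processing when the form is submitted", async () => {
    // whether or not the workshops have already loaded, this handler runs once they have
    await component.vm.promisedWorkshops.then(async () => {
        await component.vm.$nextTick(); // the <option> elements render on the tick after the data arrives

        let lastLabel;
        component.vm.$watch("submitButtonLabel", (newValue) => {
            lastLabel = newValue;
        });

        let lastFormState;
        component.vm.$watch("isFormDisabled", (newValue) => {
            lastFormState = newValue;
        });

        await submitPopulatedForm();

        expect(lastLabel).to.equal("Processing&hellip;");
        expect(lastFormState).to.be.true;
    });
});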

That's a much more semantically-appropriate (and less hacky) way of addressing this issue. I'm reasonably pleased with that as a solution. For now.

Having learned my lesson, I went back and retested everything in both a broken and a working state, with an instant response time and then a very delayed response time on the remote call. The tests seem stable now.
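
Re-creating the slow response didn't need anything clever, btw: an artificial lag in the DAO does the trick. This is a hypothetical sketch of what I mean, not the code from the repo:

// deliberately slowing selectAll down to re-create the original race
selectAll() {
    return this.client.get(this.config.workshopsUrl)
        .then((response) => new Promise((resolve) => {
            setTimeout(() => resolve(response.data), 2000); // 2s of artificial lag
        }));
}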

Until I find the next thing wrong with them, anyhow.

OK that's enough staring at code on the screen for the day. I'm gonna stare at a game on the screen instead now.

Righto.

--
Adam

TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode

G'day:

Yer gonna need to go back and read the comments on Thoughts on Working Code podcast's Testing episode for context here, especially as I quote a couple of them. I kinda let the comments run themselves there a bit and didn't reply to everyone, as I didn't want to dominate the conversation. But one earlier comment that made me itchy, and now one comment that came in in the last week or so, have made me decide to - briefly - follow up on one point that I think warrants drawing attention to and building on.

Briefly: Working Code Pod did an episode on testing, I got all surly about some of the things that were said, and I wrote that surliness up in the article I link to above. BTW, Ben's reaction to my feedback in their follow-up episode ("Listener Questions #1") was the source of my current strapline quote: "the tone... it sounds very heated and abrasive". That should frame things nicely.

Right so in the comments to that previous article, we have these discussion fragments:

  • Sean Corfield - Heck, there's still a sizable portion that still doesn't use version control or has some whacked-out manual approach to "source code control".
  • Ben replied to that with - Yikes, I have trouble believing that there are developers _anywhere_ that don't use source-control in this day-and-age. That seems like "table stakes" for development. Period.
  • [off-screen, Adam bites his tongue]
  • Then Sean Hogge replied to the article rather than that particular comment thread. I'm gonna include a chunk of what he said here:

    18 months ago, I was 100% Ben-shaped. Testing simply held little ROI. I have a dev server that's a perfect replica of production, with SSL and everything. I can just log in, open the dashboard, delete the cache and check things with a few clicks.

    But as I started developing features that interact with other features, or that use the same API call in different ways, or present the same data with a partial or module with options, I started seeing an increase in production errors. I could feel myself scrambling more and more. When I stepped back and assessed objectively, tests were the only efficient answer.

    After about 3 weeks of annoying, frustrating, angry work learning to write tests, every word of Adam C's blog post resonates with me. I am not good at it (yet), I am not fast at it (yet), but it is paying off exactly as he and those he references promised it would.

    I recommend reading his entire comment, because it's bloody good.

  • Finally last week I listened to a YouTube video "Jim Coplien and Bob Martin Debate TDD", from which I extracted these two quotes from Martin that drew me back to this discussion:
    • (@ 43sec) My thesis is that it has become infeasible […] for a software developer to consider himself professional if [(s)he] does not practice test-driven development.
    • (@ 14min 42sec) Nowadays it is […] irresponsible for a developer to ship a line of code that [(s)he] has not executed any unit test [upon]. It's important to note that "nowadays" was 2012 in this case: that's when the video was from.
    And, yes, OK the two quotes say much the same thing. I just wanted to emphasise the words "professional" and "irresponsible".

This, I think, is what Ben is missing. He shows incredulity that someone in 2021 would not use source control. People's reaction is going to be the same to his suggestion that he doesn't put much focus on test automation, or practise TDD as a matter of course when he's designing his code. And Sean (Hogge) nails it for me.

(And not just Ben. I'm not ragging on him here, he's just the one providing the quote for me to start from).

TDD is not something to be framed alongside other peripheral things one might pick up, like this week's kewl JS framework, or Docker, or some other utility one might optionally use when solving a client's problem. It's lower-level than that, so it's a false equivalence to bracket it like that conceptually. Especially as a rationalisation for not addressing one's shortcomings in this area.

Testing yer code and using TDD is as fundamental to your professional responsibilities as using source control. That's how one ought to contextualise this. Now, I know plenty of people who I'd consider professional and strong pillars of my dev community who aren't as good as they could be with testing/TDD, so I think Martin's first quote is a bit strong. However, I think his second quote nails it: if you're not doing TDD you are eroding your professionalism, and you are being professionally irresponsible by not addressing this.

In closing: thanks to everyone for putting the effort you did into commenting on that previous article. I really appreciate the conversation even if I didn't say thanks etc to everyone participating.

Righto.

--
Adam