
Sunday, 4 April 2021

Unit testing: tests are not much bloody use if they always pass

G'day:

I started back on the next article of my VueJS / Symfony / etc series this morning. And now it's 18:24 and I've made zero progress. Well, I've written a different blog article in the middle of that ("TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode"), but that was basically just a procrastinatory exercise, avoiding getting down to the other work.

I'm currently laughing (more "nervous giggling") at my mental juxtaposition of "TDD & professionalism" from that earlier article and the title of this one. I'm not feeling very professional round about now.

OK so I sat down to get cracking on this new article, and the first thing I did was re-run my tests to make sure they were all still working. This is largely due to some issues I had with the Vue Test Utils library about a week ago, which I will discuss in that next article. Anyhow, everything was green. All good.

Next I opened my Vue component file, and remembered a slight tweak I wanted to make to my code. I have this (in WorkshopRegistrationForm.vue):

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM ? "Register" : "Processing…";
},

I'm not in love with those hard-coded strings there; I want to extract them and use named constants instead (same as with the form-state constants I already have there).
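
For reference, the shape I'm aiming for is something like this (the constant names here are illustrative, not final):

const SUBMIT_BUTTON_LABEL_REGISTER = "Register";
const SUBMIT_BUTTON_LABEL_PROCESSING = "Processing…";

// ...

submitButtonLabel: function() {
    return this.registrationState === REGISTRATION_STATE_FORM
        ? SUBMIT_BUTTON_LABEL_REGISTER
        : SUBMIT_BUTTON_LABEL_PROCESSING;
},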

The first thing I did was to locate the test for when the button switches to "Processing…", and update it to be broken so I could expect the change. Basically I figured I'd change the label to be something different, see the test fail, update the code to use a constant with the "different" value, see the tests pass, then change the test back to expect the "Processing…" value, and finally change the const value in the code to match. Sometimes that's all the test change that's needed. And in hindsight I'm glad I did it.

The test method is thus (from test/unit/workshopRegistration.spec):

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let lastLabel;
        component.vm.$watch("submitButtonLabel", (newValue) => {
            lastLabel = newValue;
        });

        let lastFormState;
        component.vm.$watch("isFormDisabled", (newValue) => {
            lastFormState = newValue;
        });

        await submitPopulatedForm();

        expect(lastLabel).to.equal("Processing…");
        expect(lastFormState).to.be.true;
    });
});

The line in question is that second to last expectation, and I just changed it to be:

expect(lastLabel).to.equal("Processing…TEST_WILL_FAIL");

And I ran the test:

 DONE  Compiled successfully in 3230ms

  [=========================] 100% (completed)

 WEBPACK  Compiled successfully in 3230ms

 MOCHA  Testing...



  Testing WorkshopRegistrationForm component
    Testing form submission
       ✓ should disable the form and indicate data is processing when the form is submitted


  1 passing (52ms)

 MOCHA  Tests completed successfully

root@9b1e15054be3:/usr/share/fullstackExercise#

Umm… hello?

Note: in the real situation I ran all the tests. It's not just a case of me running the wrong test or something completely daft. Although bear with me, there's def some daftness to come.

I did some fossicking around and putting some console.log entries about the place, and narrowed it down to how I had "fixed" these tests the last time I had issues with them. Previously the tests were running too quickly, and the Workshop listing had not been returned from the remote call in time for the test to try to submit the form, and any tests that relied on filling-out the form went splat cos there were not (yet) any workshops to select. OKOK, hang on: this is what I'm talking about:

Those come from a remote call, so the data arrives asynchronously.

My fix was this bit in that test:

it("should disable the form and indicate data is processing when the form is submitted", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        //...
    });
});

I was being "clever": watching for when the workshops data finally arrived, waiting for the options to populate, and then we're good to run the test code. A whole bunch of the tests needed this. Now I hasten to add that I did thoroughly test this strategy when I updated all the tests. I made them all fail one of their expectations, watched the tests fail, then fixed the assertions and watched them pass. It's not like I made this change and just went "yeah that (will) work OK on my machine".

So what was the problem? Can you guess? Looking now, the tests do make a certain assumption.

Well. So my original issue was the code I was testing was running slow, so I changed the tests to wait for a change, and then run. And last week I tweaked my Docker settings to speed up all my containers. Now the code isn't slow. So now the workshops data is already loaded before the test code gets to that watch. So… there's nothing to watch. I started watching too late. I proved this to myself by slowing the remote call down again, and suddenly the tests started working again (ie: that test started to fail like I wanted it to).

It occurred to me then I had solved the original issue the wrong way. I was thinking synchronously about an asynchronous problem. I can't know if the data will arrive before or after my test runs. Just that at some time it is promised to arrive. Aha!

The data was already coming back in a promise (from WorkshopDAO.js):

selectAll() {
    return this.client.get(this.config.workshopsUrl)
        .then((response) => {
            return response.data;
        });
}

The problem is that by the time it bubbles back through DAO › Repository › Service › Component, I'd ditched the promise and just waited for the value (WorkshopRegistrationForm.vue):

async mounted() {
    this.workshops = await this.workshopService.getWorkshops();
},

And I needed that this.workshops to just be the eventual array of objects, because I have a v-for looping over it. And v-for ain't clever enough to take the promise of an array; it needs the actual array (this is from the same file, just further up at line 82):

<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I knew what I needed to do in the test. Instead of the watch, I just needed to append another then handler to the promise. Then whether or not the data has arrived back yet, the handler would run either straight away or once the data got there. But how do I get hold of that promise?
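
The property of promises that makes this plan work: a then handler attached to an already-resolved promise still runs (on a later microtask), so it doesn't matter who wins the race. A stand-alone illustration, not project code:

// illustration only
const promisedWorkshops = Promise.resolve([{id: 2, name: "Workshop 1"}]);

setTimeout(() => {
    // attached well after the promise resolved, but still fires
    promisedWorkshops.then((workshops) => console.log(workshops));
}, 1000);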

In the end I cheated (again, same file, but a new version of it):

data() {
    return {
        //...
        promisedWorkshops: null,
        workshops: [],
        //...
    };
},
//...
async mounted() {
    this.promisedWorkshops = this.workshopService.getWorkshops();
    this.workshops = await this.promisedWorkshops;
},

I put the promise into the component's data as well as the values :-)

And the test becomes (from test/unit/workshopRegistration.spec again):

it("should disable the form and indicate data is processing when the form is submitted", async () => {
    component.vm.$watch("workshops", async () => {
    await component.vm.promisedWorkshops.then(async () => {
        await component.vm.$nextTick();

As I said above: I just slap all the code in a then handler instead of a watch callback. The rest of the code is the same. I need to wait that tick because the options don't render until the next Vue-tick after the data arrives.
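
To illustrate the tick-wait in isolation - a hypothetical test, assuming the Vue Test Utils wrapper in component as per the real tests:

it("renders the options once the data lands", async () => {
    component.vm.workshops = [{id: 2, name: "Workshop 1"}]; // reactive data changes...
    // ...but the corresponding <option> elements are not in the DOM yet

    await component.vm.$nextTick();

    // now the DOM has caught up and can be asserted against
    const options = component.findAll("select option");
    expect(options).to.have.length(1);
});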

That's a much more semantically-appropriate (and less hacky) way of addressing this issue. I'm reasonably pleased with that as a solution. For now.

Having learned my lesson I went back and retested everything in both a broken and working state, with an instant response time, and a very delayed response time on the remote call. The tests seem stable now.
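
For the "very delayed" variant, one way to fake the slow remote call is to have the Sinon stub on the DAO resolve on a timer. A sketch, not the exact code I used:

sinon.stub(dao, "selectAll").returns(new Promise((resolve) => {
    setTimeout(
        () => resolve([{id: 2, name: "Workshop 1"}]), // same shape as the real response data
        2000 // artificially slow the "remote call" down by 2s
    );
}));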

Until I find the next thing wrong with them, anyhow.

OK that's enough staring at code on the screen for the day. I'm gonna stare at a game on the screen instead now.

Righto.

--
Adam

Wednesday, 10 March 2021

TDDing the reading of data from a web service to populate elements of a Vue JS component

G'day:

This article leads on from "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements", which itself continues a saga I started back with "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 1: Intro & Nginx" (that's a 12-parter), and a coupla other intermediary articles also related to this body of work. What I'm doing here specifically leads on from "Vue.js: using TDD to develop a data-entry form". In that article I built a front-end Vue.js component for a "workshop registration" form, submission, and summary:

[Screenshot: the workshop registration form, submission, and summary]

Currently the front-end transitions work, but it's not connected to the back-end at all. It needs to read that list of "Workshops to attend" from the DB, and it needs to save all the data between form and summary.

In the immediately previous article I set up the endpoint the front end needs to call to fetch that list of workshops, and today I'm gonna remove the mocked data the form is using, and get it to use really real data from the DB, via that endpoint.

To recap, this is what we currently have:

In the <template/> section of WorkshopRegistrationForm.vue:

<label for="workshopsToAttend" class="required">Workshops to attend:</label>

<select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
    <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
</select>

And in the code part of it:

mounted() {
      this.workshops = this.workshopService.getWorkshops();
},

And WorkshopService just has some mocked data:

getWorkshops() {
    return [
        {value: 2, text:"Workshop 1"},
        {value: 3, text:"Workshop 2"},
        {value: 5, text:"Workshop 3"},
        {value: 7, text:"Workshop 4"}
    ];
}

In this exercise I will be taking very small increments in my test / code / test / code cycle. It's slightly arbitrary, but that's because I like to keep the "refactoring" parts separate from the "new development" parts, and I'll be doing both. The small increments help surface problems quickly, and also assist my focus so that I'm less likely to stray off-plan and start implementing stuff that is not part of the increment's requirements. This might, however, make for tedious reading. Shrug.

We're gonna refactor things a bit to start with, to push the mocked data closer to the boundary between the application and the DB. We're gonna follow a similar tiered application approach to the back-end work, in that we're going to have these components:

WorkshopRegistrationForm.vue
The Vue component providing the view part of the workshop registration form.
WorkshopService
This is the boundary between WorkshopRegistrationForm and the application code. The component calls it to get / process data. In this exercise, all it needs to do is to return a WorkshopCollection.
WorkshopCollection
This is the model for the collection of Workshops we display. It loads its data via WorkshopsRepository.
Workshop
This is the model for a single workshop. Just an ID and name.
WorkshopsRepository
This deals with fetching data from storage, and modelling it for the application to use. It knows about the application domain, and it knows about the DB schema. And translates between the two.
WorkshopsDAO
Because the repository has logic in it that we will be testing, our code that actually makes the calls to the DB connector has been extracted into this DAO class, so that when testing the repository it can be mocked-out.
Client
This is not our code. We're using Axios to make calls to the API, and the DAO is a thin layer around this.

As a first step we are going to push that mocked data back to the repository. We can do this without having to change our tests or write any data manipulation logic.

Here's the current test run:

  Tests of WorkshopRegistrationForm component
     should have a required text input for fullName, maxLength 100, and label 'Full name'
     should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     should have a required password input for password, maxLength 255, and label 'Password'
     should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     should list the workshop options fetched from the back-end
     should have a button to submit the registration
     should leave the submit button disabled until the form is filled
     should disable the form and indicate data is processing when the form is submitted
     should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
     should display the registration summary 'template' after the registration has been submitted
     should display the summary values in the registration summary

We have to keep that green, and all we can do is move code around. No new code for now (other than a wee bit of scaffolding to let the app know about the new classes). Here's the first round of changes.

Currently the deepest we go in the new code is the repository, which just returns the mocked data at the moment:

class WorkshopsRepository {

    selectAll() {
        return [
            {value: 2, text:"Workshop 1"},
            {value: 3, text:"Workshop 2"},
            {value: 5, text:"Workshop 3"},
            {value: 7, text:"Workshop 4"}
        ];
    }
}

export default WorkshopsRepository;

The WorkshopCollection grabs that data and uses it to populate itself. It extends the native Array class, so it can be used as an array by the Vue template code:

class WorkshopCollection extends Array {

    constructor(repository) {
        super();
        this.repository = repository;
    }

    loadAll() {
        let workshops = this.repository.selectAll();

        this.length = 0;
        this.push(...workshops);
    }
}

module.exports = WorkshopCollection;
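
To make the "used as an array" point concrete, a quick usage sketch (illustrative, not project code):

// WorkshopCollection extends Array, so array-consumers need no special handling
let collection = new WorkshopCollection(new WorkshopsRepository());
collection.loadAll();

console.log(Array.isArray(collection)); // true
console.log(collection.length);         // 4
console.log(collection[0].text);        // "Workshop 1"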

And the WorkshopService has had the mocked values removed; in their place it tells its WorkshopCollection to load itself, and it returns the populated collection:

class WorkshopService {

    constructor(workshopCollection) {
        this.workshopCollection = workshopCollection;
    }

    getWorkshops() {
        this.workshopCollection.loadAll();
        return this.workshopCollection;
    }

    // ... rest of it unchanged...
}

module.exports = WorkshopService;

WorkshopRegistrationForm.vue has not changed, as it still receives an array of data to its this.workshops property, which is still used in the same way by the template code:

<template>
    <form method="post" action="" class="workshopRegistration" v-if="registrationState !== REGISTRATION_STATE_SUMMARY">
        <fieldset :disabled="isFormDisabled">

            <!-- ... -->

            <select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
                <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
            </select>

            <!-- ... -->

        </fieldset>
    </form>

    <!-- ... -->
</template>

<script>
// ...

export default {
    // ...
    mounted() {
        this.workshops = this.workshopService.getWorkshops();
    },
    // ...
}
</script>

Before we wire this up, we need to change our test to reflect where the data is coming from now. It used to be just stubbing the WorkshopService's getWorkshops method, but now we will need to be mocking the WorkshopRepository's selectAll method instead:

  workshopService = new WorkshopService();
- sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
+ sinon.stub(repository, "selectAll").returns(expectedOptions);

The tests now break as we have not yet implemented this change. We need to update App.vue, which has a bit more work to do to initialise that WorkshopService now, with its dependencies:

<template>
    <workshop-registration-form></workshop-registration-form>
</template>

<script>
import WorkshopRegistrationForm from "./WorkshopRegistrationForm";
import WorkshopService from "./WorkshopService";
import WorkshopCollection from "./WorkshopCollection";
import WorkshopsRepository from "./WorkshopsRepository";

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository()
    )
);

export default {
    name: 'App',
    components: {
        WorkshopRegistrationForm
    },
    provide: {
        workshopService: workshopService
    }
}
</script>

Having done this, the tests are passing again.

Next we need to deal with the fact that so far the mocked data we're passing around is keyed on how it is being used in the <option> tags it's populating:

return [
    {value: 2, text:"Workshop 1"},
    {value: 3, text:"Workshop 2"},
    {value: 5, text:"Workshop 3"},
    {value: 7, text:"Workshop 4"}
]

value and text are derived from the values' use in the mark-up, whereas what we ought to be mocking is an array of Workshop objects, which have keys id and name:

class Workshop {

    constructor(id, name) {
        this.id = id;
        this.name = name;
    }
}

module.exports = Workshop;

We will need to update our tests to expect this change:

- sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
+ sinon.stub(repository, "selectAll").returns(
+     expectedOptions.map((option) => new Workshop(option.value, option.text))
+ );

The tests fail again, so we're ready to change the code to make the tests pass. The repo now returns objects:

class WorkshopsRepository {

    selectAll() {
        return [
            new Workshop(2, "Workshop 1"),
            new Workshop(3, "Workshop 2"),
            new Workshop(5, "Workshop 3"),
            new Workshop(7, "Workshop 4")
        ];
    }
}

And we need to make a quick change to the stubbed saveWorkshopRegistration method in WorkshopService too, so the selectedWorkshops are now filtered on id, not value:

  saveWorkshopRegistration(details) {
      let allWorkshops = this.getWorkshops();
      let selectedWorkshops = allWorkshops.filter((workshop) => {
-         return details.workshopsToAttend.indexOf(workshop.name) >= 0;
+         return details.workshopsToAttend.indexOf(workshop.id) >= 0;
      });
      // ...
    // ...

And the template now expects them:

<option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I almost forgot about this, but the summary section also uses the WorkshopCollection data, and currently it's also coded to expect mocked workshops with value/text properties, rather than id/name ones. So the test needs updating:

it("should display the summary values in the registration summary", async () => {
    const summaryValues = {
        registrationCode : "TEST_registrationCode",
        fullName : "TEST_fullName",
        phoneNumber : "TEST_phoneNumber",
        emailAddress : "TEST_emailAddress",
        workshopsToAttend : [{value: "TEST_workshopToAttend_VALUE", text:"TEST_workshopToAttend_TEXT"}]
        workshopsToAttend : [{id: "TEST_workshopToAttend_ID", name:"TEST_workshopToAttend_NAME"}]
    };
    
	// ...

    let ddValue = actualWorkshopValue.find("ul>li");
    expect(ddValue.exists()).to.be.true;
    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].name);
    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].value);

	// ...
});

And the code change in the template:

<li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.value">{{workshop.text}}</li>
<li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.id">{{workshop.name}}</li>

And the tests pass again.

Now we will introduce the WorkshopsDAO, with the data it will get back from the API call mocked. It'll return this to the WorkshopsRepository, which will now implement the modelling:

class WorkshopDAO {

    selectAll() {
        return [
            {id: 2, name: "Workshop 1"},
            {id: 3, name: "Workshop 2"},
            {id: 5, name: "Workshop 3"},
            {id: 7, name: "Workshop 4"}
        ];
    }
}

export default WorkshopDAO;

We also now need to update the test initialisation to pass a DAO instance into the repository, and for now we can remove the Sinon stubbing because the DAO is appropriately stubbed already:

- let repository = new WorkshopsRepository();
- sinon.stub(repository, "selectAll").returns(
-     expectedOptions.map((option) => new Workshop(option.value, option.text))
- );

  workshopService = new WorkshopService(
      new WorkshopCollection(
-         repository
+         new WorkshopsRepository(
+             new WorkshopsDAO()
+         )
      )
  );

That's the only test change here (and the tests now break). We'll update the code to also pass in a DAO when we initialise WorkshopService's dependencies, and also implement the modelling code in the WorkshopsRepository class's selectAll method:

  let workshopService = new WorkshopService(
      new WorkshopCollection(
-         new WorkshopsRepository()
+         new WorkshopsRepository(
+             new WorkshopsDAO()
+         )
      )
  );

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        return this.dao.selectAll().map((unmodelledWorkshop) => {
            return new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name);
        });
    }
}

We're stable again. The next bit of development is the important bit: adding the code in the DAO that actually makes the call to the back-end. We will be changing the DAO to be this:

class WorkshopDAO {

    constructor(client, config) {
        this.client = client;
        this.config = config;
    }

    selectAll() {
        return this.client.get(this.config.workshopsUrl)
            .then((response) => {
                return response.data;
            });
    }
}

export default WorkshopDAO;

And that Config class will be this:

class Config {
    static workshopsUrl = "http://fullstackexercise.backend/workshops/";
}

module.exports = Config;
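
For clarity, here's how the DAO and that Config fit together - a wiring sketch only; the real wiring happens in App.vue, and the integration test at the end of this article does much the same thing:

const axios = require("axios").default;

const dao = new WorkshopDAO(axios, Config);

dao.selectAll().then((workshops) => {
    console.log(workshops); // eventually: [{id: 2, name: "Workshop 1"}, etc]
});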

Now. Because Axios's get method returns a promise, we're going to need to cater to this in the upstream methods that need to manipulate the data being promised. Plus, ultimately, WorkshopRegistrationForm.vue's code is going to receive a promise, not just the raw data. Ie: this code:

  mounted() {
-     this.workshops = this.workshopService.getWorkshops();
+     this.workshops = this.workshopService.getWorkshops()
+         .then((workshops) => {
+             this.workshops = workshops;
+         });
  },

A bunch of the tests rely on the state of the component after the data has been received:

  • should have a required text input for fullName, maxLength 100, and label 'Full name'
  • should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
  • should have a required text input for emailAddress, maxLength 320, and label 'Email address'
  • should have a required password input for password, maxLength 255, and label 'Password'
  • should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
  • should list the workshop options fetched from the back-end
  • should have a button to submit the registration
  • should leave the submit button disabled until the form is filled
  • should disable the form and indicate data is processing when the form is submitted
  • should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
  • should display the registration summary 'template' after the registration has been submitted
  • should display the summary values in the registration summary

In these tests we are gonna have to put the test code within a watch handler, so it waits until the promise returns the data (and accordingly workshops gets set, and the watch-handler fires) before trying to test with it, eg:

it("should list the workshop options fetched from the back-end", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let options = component.findAll("form.workshopRegistration select[name='workshopsToAttend[]']>option");

        expect(options).to.have.length(expectedOptions.length);
        options.forEach((option, i) => {
            expect(option.attributes("value"), `option[${i}] value incorrect`).to.equal(expectedOptions[i].value.toString());
            expect(option.text(), `option[${i}] text incorrect`).to.equal(expectedOptions[i].text);
        });
    });
});

Other than that the tests shouldn't need changing. Also now that the DAO is not itself a mock as it was in the last iteration, we need to go back to mocking it again, this time to return a promise of things to come:

sinon.stub(dao, "selectAll").returns(Promise.resolve(
    expectedOptions.map((option) => ({id: option.value, name: option.text}))
));

It might seem odd that we are making changes to the DAO class but we're not unit testing those changes: the test changes here are only to accommodate the other objects' changes from being synchronous to being asynchronous. Bear in mind that these are unit tests, and the DAO only exists as a thin wrapper around the Axios Client object; its purpose is to give us some of our own code that we can mock, to prevent Axios from making an actual API call when we test the objects above it. We'll do an integration test of the DAO separately, after we deal with the unit testing and implementation.

After updating those tests to deal with the promises, the tests collapse in a heap, but this is to be expected. We'll make them pass again by bubbling-back the promise returned from the DAO.

For the code revisions to the other classes I'm just going to show an image of the diff between the files from my IDE. I hate using pictures of code, but this is the clearest way of showing the diffs. All the actual code is on Github @ frontend/src/workshopRegistration

Here in the repository we need to use the values from the promise, so we need to wait for them to be resolved.
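
In code form, the repository ends up along these lines (my reconstruction from the earlier synchronous version; the definitive code is on Github):

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        // dao.selectAll() now returns a promise, so the modelling
        // happens in a then handler, and we pass the promise on up
        return this.dao.selectAll().then((unmodelledWorkshops) => {
            return unmodelledWorkshops.map(
                (unmodelledWorkshop) => new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name)
            );
        });
    }
}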

Similarly with the collection, service, and component (in two places), as per below:

[Screenshots: IDE diffs of the corresponding changes to the collection, service, and component]

It's important to note that the template code in the component did not need changing at all: workshops starts out as an empty array and is only re-assigned once the promise resolves, so the v-for simply renders nothing until the real data lands, at which point the workshops appear in the select box.

Having done all this, the tests pass wonderfully, but the UI breaks. Doh! But in an "OK" way:

[Screenshot: the browser console showing the request blocked by CORS - no Access-Control-Allow-Origin header on the response]

Entirely fair enough. I need to send that header back with the response.

class WorkshopsControllerTest extends TestCase
{

    private WorkshopsController $controller;
    private WorkshopCollection $collection;

    protected function setUp(): void
    {
        $this->collection = $this->createMock(WorkshopCollection::class);
        $this->controller = new WorkshopsController($this->collection);
    }

    /**
     * @testdox It makes sure the CORS header returns with the origin's address
     * @covers ::doGet
     */
    public function testDoGetSetsCorsHeader()
    {
        $testOrigin = 'TEST_ORIGIN';
        $request = new Request();
        $request->headers->set('origin', $testOrigin);

        $response = $this->controller->doGet($request);

        $this->assertSame($testOrigin, $response->headers->get('Access-Control-Allow-Origin'));
    }
}

class WorkshopsController extends AbstractController
{

    // ...

    public function doGet(Request $request) : JsonResponse
    {
        $this->workshops->loadAll();

        $origin = $request->headers->get('origin');
        return new JsonResponse(
            $this->workshops,
            Response::HTTP_OK,
            [
                'Access-Control-Allow-Origin' => $origin
            ]
        );
    }
}

That sorts it out. And… we're code complete.

But before I finish, I'm gonna do an integration test of that DAO method, just to automate proof that it does what it needs do. I don't count "looking at the UI and going 'seems legit'" as testing.

import {expect} from "chai";
import Config from "../../src/workshopRegistration/Config";
import WorkshopsDAO from "../../src/workshopRegistration/WorkshopsDAO";
const client = require('axios').default;

describe("Integration tests of WorkshopsDAO", () => {

    it("returns the correct records from the API", async () => {
        let directCallObjects = await client.get(Config.workshopsUrl);

        let daoCallObjects = await new WorkshopsDAO(client, Config).selectAll();

        expect(daoCallObjects).to.eql(directCallObjects.data);
    });
});

And that passes.

OK we're done here. I learned a lot about JS promises in this exercise: I hid a lot of it from you here, but working out how to get those tests working for async stuff coming from the DAO took me an embarrassing amount of time and frustration. Once I nailed it, it all made sense though, and I was kinda like "duh, Cameron". So that's… erm… good…?

The next thing we need to do is to sort out how to write the data to the DB. But before we do that, I'm feeling a bit naked without having any data validation on either the form or the web service. Well: the web service end of things doesn't exist yet, but I'm going to start that work by putting the expected validation in. But I'll put some on the client side first. As with most of the stuff in this current wodge of work: I do not have the slightest idea of how to do form validation with Vue.js. But tomorrow I will start to find out (see "Vue.js and TDD: adding client-side form field validation" for the results of that). Once again, I have a beer appointment to get myself to for now.

Righto.

--
Adam

Thursday, 4 March 2021

Kahlan: getting it working with Symfony 5 and generating a code coverage report

G'day:

This is not the article I intended to write today. Today (well: I hoped to have it done by yesterday, actually) I had hoped to be writing about my happy times using TDD to implement a coupla endpoints I need in my Symfony-driven web service. I got 404 (chuckle) words into that and then was blocked by trying to get Kahlan to play nice for about five hours (I did have dinner in that time too, but it was at my desk, and with a scowl on my face googling stuff). And that put me at 1am, so I decided to go to bed. I resumed today an hour or so ago, and am just now in a position to get going again. But I've decided to document that lost six hours first.

I sat down to create a simple endpoint to fetch some data, and started by deciding on my first test case, which was "It needs to return a 200-OK status for GET requests on the /workshops endpoint". I knew Symfony did some odd shenanigans to be able to functionally test right from a route slug rather than having tests directly instantiating controller classes and such. I checked the docs and all this is done via an extension of PHPUnit, using WebTestCase. But I don't wanna use PHPUnit for this. Can you imagine my test case name? Something like: testItNeedsToReturnA200OKStatusForGetRequestsOnTheWorkshopsEndpoint. Or I could break PSR-12/PSR-1 and make it (worse) with test_it_needs_to_Return_a_200_OK_status_for_get_requests_on_the_workshops_endpoint (this is why I will never use phpspec). Screw that. I'm gonna work out how to do these Symfony WebTestCase tests in Kahlan.

Not so fast mocking PHPUnit there, Cameron

2021-03-06

Due to some - probably show-stopping - issues I'm seeing with Kahlan, I have been looking at PHPUnit some more. I just discovered the testdox reporting functionality it has, which makes giving test case names much clearer.

/** @testdox Tests the /workshops endpoint methods */
class workshopsEndPointTest {
    /** @testdox it needs to return a 200-OK status for GET requests */
    function testReturnStatus() {}
}

This will output in the test runs as:

[Screenshot: the testdox run output, listing the human-readable test case names]

Perfect. I still prefer the style of code Kahlan uses, but… this is really good to know.

First things first: I rely heavily on PHPUnit's code coverage analysis, so I wanted to check out Kahlan's offering. The docs seem pretty clear ("Code Coverage"), and it seems I just want to be using the lcov integration Kahlan offers, like this:

vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info"
genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info

I need to install lcov first via my Dockerfile:

RUN apt-get install lcov --yes

OK so I did all that, and had a look at the report:

Pretty terrible coverage, but it's working. Excellent. But drilling down into the report I see this:

>

This is legit reporting because I have no tests for the Kernel class, but equally that class is generated by Symfony and I don't want to cover it. How do I exclude it from code coverage analysis? I'm looking for Kahlan's equivalent of PHPUnit's @codeCoverageIgnore. There's nothing in the docs, and all I found was a passing comment against an issue in Github asking the same question I was asking: "Exclude a folder in code coverage #321". The answer is to do this sort of thing in my kahlan-config.php file:

use Kahlan\Filter\Filters;
use Kahlan\Reporter\Coverage;
use Kahlan\Reporter\Coverage\Driver\Xdebug;

$commandLine = $this->commandLine();
$commandLine->option('no-header', 'default', 1);

Filters::apply($this, 'coverage', function($next) {
    if (!extension_loaded('xdebug')) {
        return;
    }
    $reporters = $this->reporters();
    $coverage = new Coverage([
        'verbosity' => $this->commandLine()->get('coverage'),
        'driver'    => new Xdebug(),
        'path'      => $this->commandLine()->get('src'),
        'exclude'   => [
            'src/Kernel.php'
        ],
        'colors'    => !$this->commandLine()->get('no-colors')
    ]);
    $reporters->add('coverage', $coverage);
});

That seems a lot of messing around to do something that seems like it should be very simple to me. I will also note that Kahlan - currently - has no ability to suppress code coverage at a method or code-block level either (see "Skip individual functions in code coverage? #333"). This is not a deal breaker for me in this work, but it would be a show-stopper on any of the codebases I have worked on in the last decade, as they've all been of dubious quality, and all needed some stuff to be actively "overlooked" as they're not testable as they currently stand, and we (well: I) like my baseline code coverage report to have 100% coverage reported, and be entirely green. This is so if any omissions creep in, they're easy to spot (see "Yeah, you do want 100% test coverage"). Anyway, I'll make that change and omit Kernel from analysis:

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info"

.................                                                 17 / 17 (100%)



Expectations   : 51 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 17 of 17 PASS in 0.527 seconds (using 7MB)

Coverage Summary
----------------

Total: 33.33% (1/3)

Coverage collected in 0.001 seconds


root@13038aa90234:/usr/share/fullstackExercise# genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info
Reading data file var/tmp/lcov/coverage.info
Found 2 entries.
Found common filename prefix "/usr/share/fullstackExercise"
Writing .css and .png files.
Generating output.
Processing file src/MyClass.php
Processing file src/Controller/GreetingsController.php
Writing directory view page.
Overall coverage rate:
  lines......: 33.3% (1 of 3 lines)
  functions..: 50.0% (1 of 2 functions)
root@13038aa90234:/usr/share/fullstackExercise#

And the report now doesn't mention Kernel:

[Screenshot: the lcov report, now without Kernel listed]

Cool.

Now to implement that test case. I need to work out how to run a Symfony request without using WebTestCase. Well, I say "I need to…"; I mean I need to google someone else who's already done it, and copy them. I have NFI how to do it, and I'm not prepared to dive into Symfony code to find out how. Fortunately someone has already cracked this one: "Functional Test Symfony 4 with Kahlan 4". It says "Symfony 4", but I'll check if it works on Symfony 5 too. I also happened back to the Kahlan docs, and they mention the same guy's solution ("Integration with popular frameworks › Symfony"). This one points to a library that encapsulates it (elephantly/kahlan-bundle), but that is actively version-constrained to only Symfony 4. Plus it's not seen any work since 2017, so I suspect it's abandoned.

Anyway, back to samsonasik's blog article. It looks like this is the key bit:

$this->request = Request::createFromGlobals();
$this->kernel  = new Kernel('test', false);
$request = $this->request->create('/lucky/number', 'GET');
$response = $this->kernel->handle($request);

That's how to create a Request and get Symfony's Kernel to run it. Easy. Hopefully. Let's try it.

namespace adamCameron\fullStackExercise\spec\functional\Controller;

use adamCameron\fullStackExercise\Kernel;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

describe('Tests of GreetingsController', function () {

    beforeAll(function () {
        $this->request = Request::createFromGlobals();
        $this->kernel  = new Kernel('test', false);
    });

    describe('Tests of doGet', function () {
        it('returns a JSON greeting object', function () {
            $testName = 'Zachary';

            $request = $this->request->create("/greetings/$testName", 'GET');
            $response = $this->kernel->handle($request);

            expect($response->getStatusCode())->toBe(Response::HTTP_OK);
        });
    });
});

I'm not getting too ambitious here, and it's not addressing the entire test case yet. I'm just making the request and checking its response status code.

And this just goes splat:

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --ff

E                                                                 18 / 18 (100%)


Tests of GreetingsController
  Tests of doGet
    ✖ it returns a JSON greeting object
      an uncaught exception has been thrown in `vendor/symfony/framework-bundle/Kernel/MicroKernelTrait.php` line 91

      message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` require(/tmp/kahlan/usr/share/fullstackExercise/src/config/bundles.php): Failed to open stream: No such file or directory"

        [NA] - vendor/symfony/framework-bundle/Kernel/MicroKernelTrait.php, line  to 91

Eek. I had a look into this, and the code in question is trying to do this:

$contents = require $this->getProjectDir().'/config/bundles.php';

And the code in getProjectDir is thus:

<pre class="source-code"><code>public function getProjectDir()
{
    if (null === $this-&gt;projectDir) {
        $r = new \ReflectionObject($this);

        if (!is_file($dir = $r-&gt;getFileName())) {
            throw new \LogicException(sprintf('Cannot auto-detect project dir for kernel of class &quot;%s&quot;.', $r-&gt;name));
        }

        $dir = $rootDir = \dirname($dir);
        while (!is_file($dir.'/composer.json')) {
            if ($dir === \dirname($dir)) {
                return $this-&gt;projectDir = $rootDir;
            }
            $dir = \dirname($dir);
        }
        $this-&gt;projectDir = $dir;
    }

    return $this-&gt;projectDir;
}
</code></pre>

The code starts in the directory of the current file, and traverses up the directory structure until it finds the directory with composer.json. If it doesn't find that, then - somewhat enigmatically, IMO - it just says "ah now, we'll just use the directory we're in now. It'll be grand". To me, if it expects to find what it's looking for by looking up the ancestor directory path and that doesn't work: throw an exception. Still. In the normal scheme of things this would work, cos the app's Kernel class - by default - seems to live in the src directory, which is one down from where composer.json is.

So why didn't it work? Look at the directory that it's trying to load the bundles from: /tmp/kahlan/usr/share/fullstackExercise/src/config/bundles.php. Where? /tmp/kahlan/usr/share/fullstackExercise/. Ain't no app code in there, pal. It's in /usr/share/fullstackExercise/. Why's it looking in there? Because the Kernel object that is initiating all this is at /tmp/kahlan/usr/share/fullstackExercise/src/Kernel.php. It's not the app's own one (at /usr/share/fullstackExercise/src/Kernel.php); it's all down to how Kahlan monkey-patches everything that's going to be called by the test code, on the off chance you want to spy on anything. It achieves this by loading the source code of the classes, patching the hell out of it, and saving it under that /tmp/kahlan directory. The only problem with this is that when Symfony traverses up from where the patched Kernel class is… it never finds composer.json, so it just takes a guess at where the project directory is. And it's not a well-informed guess.

I'm not sure who I blame more here, to be honest. Kahlan, for patching everything and running code from a different directory from where it's supposed to be; or Symfony, for its "interesting" way of knowing where the project directory is. I have an idea here, Symfony: you could just ask me. Or even force me to tell it. Ah well. Just trying to be "helpful", I s'pose.

Anyway, I can exclude files from being patched, according to the docs:

  --exclude=<string>                  Paths to exclude from patching. (default: `[]`).

I tried sticking the path to Kernel in there: src/Kernel.php, and that didn't work. I hacked about in the code and it doesn't actually want a path, it wants the fully-qualified class name, eg: adamCameron\fullStackExercise\Kernel. I've raised a ticket for this with Kahlan, just to clarify the wording there: Bug: bad nomenclature in help: "path" != "namespace".

This does work…

root@13038aa90234:/usr/share/fullstackExercise# vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --ff --exclude=adamCameron\\fullStackExercise\\Kernel

E                                                                 18 / 18 (100%)


Tests of GreetingsController
  Tests of doGet
    ✖ it returns a JSON greeting object
      an uncaught exception has been thrown in `vendor/symfony/deprecation-contracts/function.php` line 25

      message:`Kahlan\PhpErrorException` Code(0) with message "`E_USER_DEPRECATED` Please install the \"intl\" PHP extension for best performance."

        [NA] - vendor/symfony/deprecation-contracts/function.php, line  to 25
        trigger_deprecation - vendor/symfony/framework-bundle/DependencyInjection/FrameworkExtension.php, line  to 253

This is not exactly what I want, but it's a different error, so Symfony is finding itself this time, and then just faceplanting again. However when I look into the code, it's this:

if (!\extension_loaded('intl') && !\defined('PHPUNIT_COMPOSER_INSTALL')) {
    trigger_deprecation('', '', 'Please install the "intl" PHP extension for best performance.');
}

// which in turn...

function trigger_deprecation(string $package, string $version, string $message, ...$args): void
{
    @trigger_error(($package || $version ? "Since $package $version: " : '').($args ? vsprintf($message, $args) : $message), \E_USER_DEPRECATED);
}

So Symfony is very quietly raising a flag suggesting that I install that extension. But only as a deprecation notice, and even then it's @-ed out. Somehow Kahlan is getting hold of that and going "nonono, this is worth stopping for". No it ain't. Ticket raised: "Q: should trigger_error type E_USER_DEPRECATED cause testing to halt?".

Anyway, the point is a legit one, so I'll install the intl extension. I initially thought it was just a matter of slinging this in the Dockerfile:

RUN apt-get install --yes zlib1g-dev libicu-dev g++
RUN docker-php-ext-install intl

But that didn't work; I needed a bunch of Other Stuff too:

RUN apt-get install --yes zlib1g-dev libicu-dev g++
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl

(Thanks to the note in docker-php-ext-install intl fails #57 for solving that for me).

After rebuilding the container, let's see what goes wrong next:

root@58e3325d1a16:/usr/share/fullstackExercise# composer coverage
> vendor/bin/kahlan --lcov="var/tmp/lcov/coverage.info" --exclude=adamCameron\\fullStackExercise\\Kernel

..................                                                18 / 18 (100%)



Expectations   : 52 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 18 of 18 PASS in 0.605 seconds (using 12MB)

Coverage Summary
----------------

Total: 100.00% (3/3)

Coverage collected in 0.001 seconds


> genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info
Reading data file var/tmp/lcov/coverage.info
Found 2 entries.
Found common filename prefix "/usr/share/fullstackExercise"
Writing .css and .png files.
Generating output.
Processing file src/MyClass.php
Processing file src/Controller/GreetingsController.php
Writing directory view page.
Overall coverage rate:
  lines......: 100.0% (3 of 3 lines)
  functions..: 100.0% (2 of 2 functions)
root@58e3325d1a16:/usr/share/fullstackExercise#

(I've stuck a Composer script in for this, btw):

"coverage": [
    "vendor/bin/kahlan --lcov=\"var/tmp/lcov/coverage.info\" --exclude=adamCameron\\\\fullStackExercise\\\\Kernel",
    "genhtml --output-directory public/lcov/ var/tmp/lcov/coverage.info"
]

And most promising of all is this:

[Screenshot: the coverage report, entirely green]

All green! I like that.

And now I'm back to where I wanted to be, yesterday, as I typed that 404th word of the article I was meant to be working on. 24h later now.

Righto.

--
Adam

Friday, 19 February 2021

Part 12: unit testing Vue.js components

G'day

OKOK, another article in that bloody series I've been doing. Same caveats as usual: go have a breeze through the earlier articles if you feel so inclined. That said, this is reasonably stand-alone.

  1. Intro / Nginx
  2. PHP
  3. PHPUnit
  4. Tweaks I made to my Bash environment in my Docker containers
  5. MariaDB
  6. Installing Symfony
  7. Using Symfony
  8. Testing a simple web page built with Vue.js using Mocha, Chai and Puppeteer
  9. I mess up how I configure my Docker containers
  10. An article about moving files and changing configuration
  11. Setting up a Vue.js project and integrating some existing code into it
  12. Unit testing Vue.js components (this article)

This is really very frustrating. You might recall I ended my previous article with this:

Now I just need to work out how to implement a test of just the GreetingMessage.vue component discretely, as opposed to the way I'm doing it now: curling a page it's within and checking the page's content. […]

Excuse me whilst I do some reading.

[Adam runs npm install of many combinations of NPM libs]

[Adam downgrades his version of Vue.js and does the npm install crap some more]

OK screw that. Seems to me - on initial examination - that getting all the libs together to make stand-alone components testable is going to take me longer to work out than I have patience for. I'll do it later. […] Sigh.

I have returned to this situation, and have got everything working fine. Along the way I had an issue with webpack that I eventually worked around, but once I circled back to replicate the work and write this article, I could no longer replicate the issue. Even rolling back to the previous version of the application code and step-by-step repeating the steps to get to where the problem was didn't surface it again. This is very frustrating. However, other people have had similar issues in the past, so I'm going to include the steps to solve the problem here, even if I have to fake the issue to get error messages, etc.

Right so I'm back with the Vue.js docs regarding testing: "Vue.JS > Testing > Component Testing". The docs are incredibly content-lite, and pretty much just fob you off onto other people to work out what to do. Not that cool, and kinda suggests Vue.js considers the notion of unit testing pretty superficially. I did glean I needed to install a coupla Node modules.

First, @testing-library/vue, for which I ran into a glitch immediately:

root@00e2ea0a3109:/usr/share/fullstackExercise# npm install --save-dev @testing-library/vue
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: full-stack-exercise@2.13.0
npm ERR! Found: vue@3.0.5
npm ERR! node_modules/vue
npm ERR!   vue@"^3.0.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer vue@"^2.6.10" from @testing-library/vue@5.6.1
npm ERR! node_modules/@testing-library/vue
npm ERR!   dev @testing-library/vue@"*" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /var/cache/npm/eresolve-report.txt for a full report.

npm ERR! A complete log of this run can be found in:
npm ERR!     /var/cache/npm/_logs/2021-02-19T11_29_15_901Z-debug.log
root@00e2ea0a3109:/usr/share/fullstackExercise#

The current version of @testing-library/vue doesn't work with the current version of Vue.js. Neato. Nice one Vue team. After some googling of "wtf?", I landed on an issue someone else had raised already: "Support for Vue 3 #176": I need to use the next branch (npm install --save-dev @testing-library/vue@next). This worked OK.

The other module I needed was @vue/cli-plugin-unit-mocha. That installed with no problem. This all gives me the ability to run vue-cli-service test:unit, which will call up Mocha and run some tests. The guidance is to set this up in package.json, thus:

  "scripts": {
    "test": "mocha test/**/*Test.js",
    "serve": "vue-cli-service serve --watch",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint",
    "test:unit": "vue-cli-service test:unit test/unit/**/*.spec.js"
  },

Then one can just run it as npm run test:unit.

I looked at the docs for how to test a component ("Unit Testing Vue Components" - for Vue v2, but there's no equivalent page in the v3 docs), and knocked together an initial first test which would just load the component and do nothing else: just to check everything was running (frontend/test/unit/GreetingMessage.spec.js (final version) on Github):

import GreetingMessage from "../../src/gdayWorld/components/GreetingMessage";

import { shallowMount } from "@vue/test-utils";

import {expect} from "chai";

describe("Tests of GreetingMessage component", () => {
    it("should successfully load the component", () => {
        let greetingMessage = shallowMount(GreetingMessage, {propsData: {message: "G'day world"}});

        expect(true).to.be.true;
    });
});

Here's where I ran into the problem I now can't replicate. When I ran this, I got something like:

root@eba0490b453d:/usr/share/fullstackExercise# npm run test:unit

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

 WEBPACK  Compiling...

  [=========================] 98% (after emitting)

 ERROR  Failed to compile with 1 error

error  in ./node_modules/mocha/lib/cli/cli.js

Module parse failed: Unexpected character '#' (1:0)
File was processed with these loaders:
* ./node_modules/cache-loader/dist/cjs.js
* ./node_modules/babel-loader/lib/index.js
* ./node_modules/eslint-loader/index.js
You may need an additional loader to handle the result of these loaders.
> #!/usr/bin/env node
| 'use strict';
|
[…etc…]

There's a JS file that has a shell-script shebang thing at the start of it, and the Babel transpiler doesn't like that. Fair enough - but I really don't understand why it was trying to transpile stuff in the node_modules directory at all. At this point, though, I just thought "Hey-ho, it knows what it's doing, so I'll take its word for it".

Googling about, I found a bunch of other people having a similar issue with Webpack compilation, and the solution seemed to be to use a shebang-loader in the compilation process (see "How to keep my shebang in place using webpack?", "How to Configure Webpack with Shebang Loader to Ignore Hashbang…", "Webpack report an error about Unexpected character '#'"). All the guidance for this solution was oriented around sticking stuff in the webpack.config.js file, but of course Vue.js hides that away from you, and you need to do things in a special Vue.js way, by adding stuff with a special syntax to the vue.config.js file. The docs for this are at "Vue.js > Working with Webpack". The docs there showed how to do it using chainWebpack, but I never actually got this approach to solve the problem, so I mention it only because it's "something I tried".
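
For completeness, the chainWebpack attempt looked roughly like this - a sketch of the shebang-loader approach those answers suggest; as noted, it didn't solve my problem:

// vue.config.js (sketch only)
module.exports = {
    chainWebpack: (config) => {
        // route .js files through shebang-loader, so the leading
        // #!/usr/bin/env node line gets stripped before Babel sees it
        config.module
            .rule("shebang")
            .test(/\.js$/)
            .use("shebang-loader")
            .loader("shebang-loader");
    }
};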

From there I started thinking, "no, seriously: why is it trying to transpile stuff in the node_modules directory?" This did not seem right. I changed my googling tactic to try to find out what was going on there, and came across "Webpack not excluding node_modules", and that led me to update my vue.config.js file to actively exclude node_modules (copied from that answer):

var nodeExternals = require('webpack-node-externals');
...
module.exports = {
    ...
    target: 'node', // in order to ignore built-in modules like path, fs, etc. 
    externals: [nodeExternals()], // in order to ignore all modules in node_modules folder 
    ...
};

And that worked. Now when I ran the test, I made progress:

root@eba0490b453d:/usr/share/fullstackExercise# npm run test:unit

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

 WEBPACK   Compiling...

  [=========================] 98% (after emitting)

 DONE   Compiled successfully in 1161ms

  [=========================] 100% (completed)

WEBPACK  Compiled successfully in 1161ms

MOCHA  Testing...



  Tests of GreetingMessage component
    ✓ should successfully load the component


  1 passing (17ms)

MOCHA  Tests completed successfully

root@eba0490b453d:/usr/share/fullstackExercise#

From there I rounded out the tests properly (frontend/test/unit/GreetingMessage.spec.js):

import GreetingMessage from "../../src/gdayWorld/components/GreetingMessage";

import { shallowMount } from "@vue/test-utils";

import {expect} from "chai";

describe("Tests of GreetingMessage component", () => {

    let greetingMessage;
    let expectedText = "TEST_MESSAGE";

    before("Load test GreetingMessage component", () => {
        greetingMessage = shallowMount(GreetingMessage, {propsData: {message: expectedText}});
    });

    it("should return the correct heading", () => {
        let heading = greetingMessage.find("h1");
        expect(heading.exists()).to.be.true;

        let headingText = heading.text();
        expect(headingText).to.equal(expectedText);
    });

    it("should return the correct content", () => {
        let contentParagraph = greetingMessage.find("h1+p");
        expect(contentParagraph.exists()).to.be.true;

        let contentParagraphText = contentParagraph.text();
        expect(contentParagraphText).to.equal(expectedText);
    });
});

Oh! Just a reminder of what the component is (frontend/src/gdayWorld/components/GreetingMessage.vue)! Very simple stuff, as the tests indicate:

<template>
    <h1>{{ message }}</h1>
    <p>{{ message }}</p>
</template>

<script>
export default {
  name: 'GreetingMessage',
  props : {
    message : {
      type: String,
      required: true
    }
  }
}
</script>

One thing I found was that every time I touched the test file, I was getting this compilation error:

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

WEBPACK  Compiling...

  [=========================] 98% (after emitting)

ERROR  Failed to compile with 1 error

error  in ./test/unit/GreetingMessage.spec.js

Module Error (from ./node_modules/eslint-loader/index.js):

/usr/share/fullstackExercise/test/unit/GreetingMessage.spec.js
   7:1  error  'describe' is not defined  no-undef
  12:5  error  'before' is not defined    no-undef
  16:5  error  'it' is not defined        no-undef
  24:5  error  'it' is not defined        no-undef

✖ 4 problems (4 errors, 0 warnings)

But if I ran it again, the problem went away. Somehow ESLint was getting confused by things; it only lints things when they've changed, and on the second - and subsequent - runs it doesn't run so the problem doesn't occur. More googling, and I found this: "JavaScript Standard Style does not recognize Mocha". The guidance here is to let the linter know I'm running Mocha, with the inference that there will be some global functions it can just assume are legit. This is just an entry in package.json:

  "eslintConfig": {
    "root": true,
    "env": {
      "node": true,
      "mocha": true
    },
    // etc

Having done that, everything works perfectly, and despite the fact that it's a very very simple unit test… I'm quite pleased with myself that I got it running OK.

After sorting it all out, I reverted everything in source control back to how it was before I started the exercise, so as to replicate it and write it up here. This is when I was completely unable to reproduce that shebang issue at all. I cannot - for the life of me - work out why not. Weird. Anyway, I decided to not waste too much time trying to reproduce a bug I had solved, and just decided to mention it here as a possible "thing" that could happen, but otherwise move on with my life.

I have now got to where I wanted to get at the beginning of this series of articles, so I am gonna stop now. My next job with Vue.js testing is to work out how to mock things, because the upcoming work will require me to make remote calls and/or hit a DB. And I don't want to be doing that when I'm running tests. But that was never an aim of this article series, so I'll revert to stand-alone articles now.

Righto.

--
Adam

Thursday, 11 February 2021

Thoughts on Working Code podcast's Testing episode

G'day:

Working Code (@WorkingCodePod on Twitter) is a podcast by some friends and industry colleagues: Tim Cunningham, Carol Hamilton, Ben Nadel and Adam Tuttle.


(apologies for swiping your image without permission there, team)

It's an interesting take on a techo podcast, in their own words from their strapline:

Working Code is a technology podcast unlike all others. Instead of diving deep into specific technologies to learn them better, or focusing on soft-skills, this one is like hanging out together at the water cooler or in the hallway at a technical conference. Working Code celebrates the triumphs and fails of working as a developer, and aims to make your career in coding more enjoyable.

I think they achieve this, and it makes for a good listen.

So that's that, I just wanted to say they've done good work, and go listen.

Oh, just one more thing.

Yesterday they released their episode "Testing". I have to admit my reaction to a lot of what was said was… "poor", so I pinged my namesake and said "I have some feedback". After a brief discussion on Signal, Adam & I concluded that I might try to do a "reaction blog article" on the topic, and they might see if they can respond to the feedback, if warranted, at a later date. They are recording tonight apparently, and I'm gonna try to get this across to them for their morning coffee.

Firstly as a reminder: I'm pretty keen on testing, and I am also keen on TDD as a development practice. I've written a fair bit on unit testing and TDD both. I'm making a distinction between unit testing and TDD very deliberately. I'll come back to this. But anyway this is why I was very very interested to see what the team had to say about testing. Especially as I already knew one of them doesn't do automated testing (Ben), and another (Carol) I believe has only recently got into it (I think that's what she said, a coupla episodes ago). I did not know Adam or Tim's position on it.

And just before I get under way, I'll stick a coupla Twitter messages here I saw recently. At the time I saw them I was thinking about Ben's claim to a lack of testing, and they struck a particular chord with me.




I do not know Maaret or Mathias, but I think they're on the money here.

OK So I'm now listening to the podcast now. I'm going to pull quotes from it, and comment on them where I think it's "necessary".

[NB I'm exercising my right to reproduce small parts of the Working Code Podcast transcript here, as I'm doing so for the purposes of commentary / criticism]


Ahem.

Adam @ 11:27:

…up front we should acknowledge you know we're not testing experts. None of us […] have been to like 'testing college'. […] There's a good chance we're going to get something wrong.

I think, in hindsight, this podcast needed this caveat to be made louder and clearer. To be blunt - and I don't think any would disagree with me here - all four are very far from being testing experts. Indeed one is even a testing naysayer. I think there's some dangerously ill-informed opinions being expressed as the podcast progresses, and as these are all people who are looked-up-to in their community, I think there's a risk people will take onboard what they say as "advice". Even if there's this caveat at the beginning. This might seem like a very picky thing to draw on, but perhaps I should have put it at the end of the article, after there's more context.

Carol @ 12:07:

Somebody find that monster already.

Hi guys.

Ben @ 12:41

I test nothing. And it's not like a philosophical approach to life, it's more just I'm not good at testing

Adam @ 13:09:

Clearly that's working out pretty well for you, you've got a good career going.

I hear this a bit. "I don't test and I get by just fine". This is pretty woolly thinking, and it's false logic people will jump on to justify why they don't do things. And Adam is just perpetuating the myth here. The problem with this rationalisation is demonstrated with an analogy: going on a journey without a map, wandering aimlessly, but still (largely accidentally) arriving at your intended destination. In contrast, had you used a map, you'd've been more efficient with your time and effort, and been able to progress even further on the next leg of your journey sooner. Ben's built a good career for himself. Undoubtedly. Who knows how much better it would be had he… used a map.

Also, going back to Ben's comment kind of explaining away why he doesn't test because he's not good at it: everyone starts off not being good at testing, mate. The rest of us do something about it. This is a disappointing attitude from someone as clued-up as Ben. Also, if you don't know about something… don't talk about it, mate. Inform yourself first and then talk about it.

Ben @ 13:21:

[…] some additional context. So - one - I work on a very small team. Two: all the people who work on my team are very very familiar with the software. Three: we will never ever hire a new engineer specifically for my team. Cos I work on the legacy codebase. The legacy codebase is in the process of being phased out. […] I am definitely in a context where I don't have to worry about hiring a new person and training them up on a system and then thinking they'll touch something in the code that they don't understand how it works. That's like the farthest possible thing from my day-to-day operations currently.

Um… so? I'm not being glib. You're still writing new logic, or altering existing logic. If you do that, it intrinsically needs testing. I mean, you admitted you do manual testing, but it beggars belief that a person in the computer industry would favour a (prone-to-error) human manually performing a pre-defined repetitive task, rather than automating it. We're in the business of automating repetitive tasks!

I'd also add that this would be a brilliant, low-risk, environment for you to get yourself up to speed with TDD and unit testing, and work towards the point where it's just (brain) muscle memory to work that way. And then you'll be all ready once you progress to more mission-critical / contemporary codebases.

Ben @ 14:42:

I can wrap my head around testing when it comes to testing a data workflow that is completely pure, meaning you have a function or you have a component that has functions and you give it some inputs and it generates some outputs. I can 100% wrap my head around testing that. And sometimes actually when I'm writing code that deals with something like that, even though I'm not writing tests per se, I might write a scratch file that instantiates that component and sends data to it and checks the output just during the development process that I don't have to load-up the whole application.

Ben. That's a unit test. You have written a unit test there. So why don't you put it in a test class instead of a scratch file, and - hey presto - you have a persistent test that will guard against that code's behaviour somehow changing to break those rules later on. You are doing the work here, you're just not doing it in a sensible fashion.
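
To labour the point with a hypothetical (this is my code, not Ben's actual scratch file; the maths module and add function are made-up): the distance between "scratch file" and "unit test" is one assertion and a file move. This sort of scratch-file exercise:

// scratch.js: instantiate the thing, chuck some data at it, eyeball the output
const { add } = require("./src/maths"); // hypothetical module
console.log(add(2, 3)); // should be 5

…becomes this, once it's allowed to live in the test suite:

// test/maths.spec.js
const { expect } = require("chai");
const { add } = require("../src/maths"); // same hypothetical module

describe("add", () => {
    it("returns the sum of its inputs", () => {
        expect(add(2, 3)).to.equal(5);
    });
});

Same thinking, same effort; but now Mocha re-runs the check forever, instead of it dying with the scratch file.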

Ben @ 15:18:

Where it breaks down immediately for me is when I have to either a) involve a database, or b) involve a user interface. And I know that there's all kinds of stuff that the industry has brought to cater to those problems. I've just never taken the time to learn.

There's a bit of a blur here between the previous train of thought - which was definitely talking about unit tests of a unit of code - and now we're talking about end-to-end testing. These are two different things. I am not saying Ben doesn't realise this, but they're jammed up next to each other in the podcast so the distinction is not being made. These two kinds of testing are separate ideas, only really coupled by the fact they are both types of testing. Ben's right, the tooling is there, and - in my experience at least with the browser emulation stuff - it's pretty easy and almost fun to use. Ben already tests his stuff manually every time he does a release, so it would seem sensible to me to take the small amount of time it takes to get up to speed with these things, and then instead of testing something manually, take the time to automate the same testing, so it's taken care of thenceforth. It's just a matter of being a bit more wise with one's time usage.
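
To give an idea of the scale of that "small amount of time": a browser-emulation smoke test - sketched here with Puppeteer, and with a hypothetical URL and expected heading - is only about this much code:

// test/smoke.spec.js: a minimal sketch using Puppeteer; URL and heading are made-up
const puppeteer = require("puppeteer");
const { expect } = require("chai");

describe("Smoke test of the home page", function () {
    this.timeout(10000); // launching a browser is slower than unit-test work

    it("renders the welcome heading", async function () {
        const browser = await puppeteer.launch();
        try {
            const page = await browser.newPage();
            await page.goto("http://localhost:8080/");

            // grab the rendered heading and check it's what we expect
            const headingText = await page.$eval("h1", (el) => el.textContent);
            expect(headingText).to.equal("G'day world");
        } finally {
            await browser.close();
        }
    });
});

That's one item off the manual release checklist, automated, and it runs identically every time.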

Adam @ 16:39

The reason that we don't have a whole lot of automated tests for our CFML code is simply performance. So when we started our product I tried really hard to do TDD. If I was writing a new module or a new section of that module I would work on tests along with that code, and would try to stay ahead of the game there. And what ended up happening was I had for me - let's say - 500 functions that could run, I had 400 tests. And I don't want to point a finger at any particular direction, but when you take the stack as a whole and you say "OK now run my test suite" and it takes ten minutes to run those tests and [my product, the project I was working on] is still in its infancy, and you can see this long road of so much more work that has to be done, and it takes ten minutes to run the tests - you know, early on - there was no way that that was going to be sustainable. So we kind of abandoned hope there. [...] I have, in more recent years, on a more recent stack seen way better performance of tests. [...] So we are starting to get more into automated testing and finding it actually really helpful. [...] I guess what I wanted to say there is that a perfectly valid reason to have fewer or no tests is if it doesn't work well on your platform.

Adam starts off well here, both in what he's saying and his historical efforts with tests, but he then goes on to pretty much blame ColdFusion for not being very good at running tests. This is just untrue, sorry mate. We had thousands upon thousands of tests running on ColdFusion, and they ran in an amount of time best measured in seconds. And when we ported that codebase to PHP, we had a similar number of test cases, and they ran in round about the same amount of time (PHP was faster, that said, but also the tests were better). I think the issue here - and Adam confirms this about 30sec after the quote above, and again about 15min later when he comes back to it - is that the tests weren't written so well: they were not focused on the logic (as TDD-oriented tests ought to be), they were basically full integration tests. Full integration tests are excellent, but you don't want your tight red / green / refactor testing cycle to be slowed down by external services like databases. That's the wrong sort of testing there. My reaction to you saying your test runs are slow is not to say "ColdFusion's fault", it's to say "your tests' fault". And that's not a reason to not test. It's a reason to check what you've been doing, and fix it. I'm applying hindsight here for you obviously, but this ain't the conclusion/message you should be delivering here.

Carol @ 20:03:

I also want to say that if you are starting out and you're starting to add tests, don't let slowness stop you from doing it.

Spot on. I hope Ben was listening there. When you start to learn something new, it is going to take more time. I think this is sometimes why people conclude that testing takes a lot of time: the people arriving at that conclusion are basing it on their time spent on the learning curve. Accept that things go slow when you are learning, but also accept that things will become second nature. And especially with writing automated tests it's not exactly rocket-science, the initial learning time is not that long. Just… decide to learn how to test stuff, start automating your testing, and stick at it.

Ben @ 21:24:

I was thinking about debugging incidents and getting a page in the middle of the night and having to jump on a call and you seeing the problem, and now you have to do a hotfix, and push a deployment in the middle of the night […]. And imagine having to sit there for 30 minutes for your tests to run just so you can push out a hotfix. Which I thought to myself: that would drive me crazy.

At this point I think Ben is just trying to invent excuses to justify to himself why he's right to eschew testing. I'm reminded of Maaret's Twitter message I included above. The subtext of Ben's position here is that if one tests manually, then one is more flexible in what one can choose to re-test when hotfixing. Well, obviously, if you can make that call re manual tests, then you can make the same call with automated tests! So his position here is just specious. Doubly so because automated tests are intrinsically going to be faster than the equivalent manual tests to start with. Another thing I'll note with this entire analogy: you've already got yourself in a shit situation by needing to hotfix stuff in the middle of the night. Are you really sure you want to be less diligent in how you implement that fix? In my experience that approach can lead to a second / third / fourth hotfix being needed in rapid succession. Hotfix situations are definitely ones of "work smarter, not faster".
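
And for what it's worth, the tooling already caters for "only run the critical stuff right now". One common (if low-tech) approach with Mocha is to tag the critical-path cases in their titles - the tag and the test here are hypothetical:

it("should process the payment @smoke", () => { /* ... */ });

Then, during that midnight hotfix, one runs just those (paths hypothetical too):

npx mocha --grep "@smoke" test/**/*.spec.js

And the full suite still runs before anything further gets merged. So "the tests are sometimes slow" is an argument for being able to select which tests to run; it is not an argument for not having the tests.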

Ben @ 21:56:

I'm wondering if there should be a test budget that you can have for your team where you like have "here is the largest amount of time we're willing to let testing block a deployment". And anything above that have to be tests that sit in an optional bucket where it's up to the developer to run them as they see fit, but isn't necessarily tests that would block deployment. I don't know if that's totally crazy.

Adam continues @ 22:36:

You have to figure out which tests are critical path, which ones are "must pass", and these ones are like "low risk areas" […] are the things I would look for to make optional.

Yep, OK, there's some sense here, but I can't help thinking that we are talking about testing in this podcast, and we're spending our time inventing situations in which we're not gonna test. It all seems a bit inverted to me. How about instead you just do your testing, and then if/when a situation arises you deal with it; rather than deciding up front there will be situations, and justifying to yourselves why you oughtn't test in the first place.

But I also have to wonder: why the perceived rush here? What's wrong with putting in over 30min to test stuff if it "proves" that your work has maintained stability, and makes you less likely to need that midnight hotfix? What percentage of the whole cycle time from feature request to delivery is that 30 minutes? Especially if taking the effort to write the tests in the first place will innately improve the stability of your code, and then help to keep it stable. It's a false economy.

Tim @ 23:15:

When we have contractors do work for us. I require unit tests. I require so much testing just because it's a way for me to validate the truth of what they're saying they've done. So that everything that we have that's done by third parties is very well tested, and it's fantastic because I have a high level of confidence.

Well: precisely. Why do you not want that same level of confidence in your in-house work? Like you say: confidence is fantastic. Be fantastic, Tim. Also: any leader should eat their own dogfood I think. If there's sense in you making the contractors work like this, clearly you ought to be working that way yourself.

Tim @ 23:36:

Any time I start a new project, if I have a greenfields project, I always start with some level of unit tests, and then I get so involved in the actual architecture of the system that I put it off, and like "well I don't really need a test for this", "I'm not really sure where I'm going with this, so I'm not going to write a test first" because I'm kinda experimenting. Then my experiment becomes reality, then my reality becomes the released version. And then it's like "well what's the point of writing a test now?"

I think we've all been there. I think what Tim needs here is just a bit more self-discipline in identifying what is "architectural spike" and what's "now doing the work". If one is doing TDD, then the spike can be used to identify the test cases (eg "it's going to need to capture their phone number") without necessarily writing the test to prove the phone number has been captured. So you write this:

describe("my new thing", function () {
    it ("needs to capture the phone number", function () {
        // @todo need to test this
    });
});
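
Incidentally, Mocha goes one better here: leave the callback off entirely, and the case gets reported as "pending" in the test run rather than quietly passing, so the placeholder can't be mistaken for a tested behaviour:

describe("my new thing", function () {
    it("needs to capture the phone number"); // no callback: Mocha reports this as pending
});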

And then when you detect you are not spiking any more, you write the test, and then introduce the code to make the test pass. I also think Tim is overlooking that the tests are not simply there for that first iteration; they stick around, proving that the code is stable for the rest of its life. This… builds confidence.

Adam @ 24:22:

That's what testing is all about, right? It's increasing confidence that you can deploy this code and nothing is going to be wrong with it. […] When I think about testing, the pinnacle of testing for me is 100% confidence that I can deploy on my way out the door at 4:55pm on Friday afternoon, with [a high degree of ~] confidence that I am not going to get paged on Saturday at 4am because some of that code that I just deployed… it went "wrong".

Exactly.

Carol @ 25:12:

The difference between the team I'm on and the team you guys have is we have, I think it's 15-ish, people touching the exact same code daily. So a patch I can put out today may have not even been in the codebase they pulled yesterday when they started working on a bug, or a week ago when they had theirs. So me writing that little extra bit of test gives them some accountability for what I've done, and me some.

Again: exactly.

Ben @ 26:36:

Even if you have a huge test suite, I can't help but think you have to do the manual testing, because what if something critical was missed. [...] I think the exhaustive test suite, what that does is it catches unexpected bugs unrelated. Or things that broke because you didn't expect them to break in a certain way. And I think that's very important.

To Ben's first point, you could just as easily (and arguably more validly) switch that around: a human doing ad-hoc manual testing is more likely to miss something, because every manual test run is at their whim, and subject to their focus and attention at the time. Whereas the automated tests - which, let's not forget, were written by a diligent human, right at the time they were most focused on the requirements - are run by the computer, and it will do exactly the same job every time. What having the historical corpus of automated tests gives you is increased confidence that all that stuff being tested still works the way it is supposed to, so the manual testing - which is always necessary - can be more a case of dotting the Is and crossing the Ts. With no automated tests, the manual tests need to be exhaustive. And the effort needs to be repeated every release (Adam mentions this a few minutes later as well).

To the second point: yeah, precisely. Automated tests will pick up regressions. And the effort to do this only needs to be made once (writing the test). Without automated tests, you rely on the manual testing to pick this stuff up, but - being realistic - if your release is focused on PartX of the code, your manual tests are going to focus there, and possibly not bother to re-test PartZ, which has just inadvertently been broken by PartX's work.

Ben also mentions this quote from Rich Hickey: "Q: What happened to every bug out there? A: it passed the type checker, and it passed all tests." (I found this reference on Google: Simple Made Easy; it's at about 15:45). It's a nifty quote, but what it's also saying is that there wasn't actually a test for the buggy behaviour. Because if there was one: the test would have caught it. The same could be said - more readily - of manual-only testing. Obviously nothing is going to be 100%, but automated tests are going to be more reliable at maintaining the same confidence level, and be less effort, than manual-only testing.

Ben @ 28:15:

When people say it increases the velocity of development over time. I have trouble embracing that.

(Ben's also alluding back to a comment he made immediately prior to that, relating to always needing to manually test anyhow). "Over time" is one of the keys here. Once a test is written, it's there. It sticks around. Every subsequent test round there is no extra effort to test that element of the application (Tim draws attention to this a coupla minutes later too). With manual testing the effort needs to be duplicated every time you test. Surely this is not complicated to understand. Ben's point about "you still need to manually test" misses the fact that if there's a foundation of automated tests, your manual testing can become far more perfunctory. Without the tests: the manual testing is monolithic. Every. Single. Time. To be honest though, I don't know why I need to point this out. It's a) obvious; and b) very well-trod ground. There's an entire industry that thinks automated tests are the foundation of testing. And then there's Ben, who's "just not sure". This is like someone being "just not sure" that the world isn't actually flat. It's no small amount of hubris on his part, if I'm honest. And obviously Ben is not the only person out there in the same situation. But he's the one here on this podcast supposedly discussing testing.

Ben @ 37:08:

One thing that I've never connected with: when I hear people talk about testing, there's this idea of being able to - I think they call them spies? - create these spies where you can see if private methods get called in certain ways. And I always think to myself: "why do you care about your private methods?" That's an implementation detail. That private method may not exist next week. Just care about what your public methods are returning and that should inherently test your private methods. And people have tried to explain it to me why actually sometimes you wanna know, but I've just never understood it.

Yes, good point. I can try to explain. I think there's some nuance missing in your understanding of what's going on, and what we're testing. It starts with your position that testing only concerns itself with (my wording, paraphrasing you from earlier) "what values a public method takes, and what it returns". Not quite. You care about whether, given inputs to a unit, the behaviour within the unit correctly provides the expected outputs from the unit. And the outputs might not be the return value. Think about a unit that takes a username and password, hashes the password, and saves it to the DB, then returns the new ID of the record. Now… we're less interested in the ID returned by the method; we are concerned that the hashing takes place correctly. There is an output boundary of this unit at the database interface. We don't want our tests to actually hit the database (too slow, as Adam found out), so we mock-out the DB connector or the DAO method that takes the value the model layer has hashed. We then spy on the values passed to the DB boundary, and make sure it's all worked OK. Something like this (a sketch using Sinon-style stubs; excellentHashingFunction and db are stand-ins):

describe("my new thing", function () {
    it ("hashes the password", function () {
        testPassword = "letmein"
        expectedHash = "whatevs"
        
        myDAO = new Mock(DAO)
        myDAO.insertRecord.should.be.passed(anything(), expectedHash)
        
        myService = new Service(myDAO)
        
        newId = myService.addUser("LOGIN_ID_NOT_TESTED", testPassword)
        
        newId.should.be.integer() // not really that useful
    });
});

class Service {

    #dao

    constructor(dao) {
        this.#dao = dao;
    }

    addUser(loginId, password) {
        const hashedPassword = excellentHashingFunction(password);

        return this.#dao.insertRecord(loginId, hashedPassword);
    }
}

class DAO {
    insertRecord(loginId, password) {
        return db.insertQuery("INSERT INTO users (loginId, password) VALUES (:loginId, :password)", [loginId, password]);
    }
}

OK so insertRecord isn't a private method here, but the DAO is just an abstraction from the public interface of the unit anyhow, so it amounts to the same thing, and it makes my example clearer. insertRecord could be a private method of Service.

So the thing is that you are checking boundaries, not specifically method inputs/outputs.

Also, yes, the implementation of DAO might change tomorrow. But if we're doing TDD - and we should be - the tests will be updated at the same time. More often than not, though, the implementation isn't as temporary as this line of thought assumes (for the convenience of the argument, I suspect).

Adam @ 48:41:

The more that I learn how to test well, and the more that I write good tests, the more I become a believer in automated testing (Carol: Amen). […] The more I do it the better I get. And the better I get the more I appreciate what I can get from it.

Indeed.

Tim @ 49:32:

In a business I think that short term testing is a sunk cost maybe, but long term I have seen the benefit of it. Particularly whenever you are adding stuff to a mature system, those tests pay dividends later. They don't pay dividends now […] (well they don't pay as many dividends now) […] but they do pay dividends in the long run.

Also a good quote / mindset. Testing is about the subsequent rounds of development as much as the current one.

Ben @ 50:05:

One thing I've never connected with emotionally, when I hear people talk about testing, is when they refer to tests as providing documentation about how a feature is supposed to work. And as someone who has tried to look at tests to understand why something's not working, I have found that they provide no insight into how the feature is supposed to work. Or I guess I should say specifically they don't provide answers to the question that I have.

Different docs. They don't provide developer docs, but if following BDD practices, they can indicate the expected behaviour of the piece of functionality. Here's the test run from some tests I wrote recently:

root@1b011f8852b1:/usr/share/nodeJs# npm test

> nodejs@1.0 test
> mocha test/**/*.js



  Tests for Date methods functions
    Tests Date.getLastDayOfMonth method
      ✓ returns Jan 31, given Jan 1
      ✓ returns Jan 31, given Jan 31
      ✓ returns Feb 28, given Feb 1 in 2021
      ✓ returns Feb 29, given Feb 1 in 2020
      ✓ returns Dec 31, given Dec 1
      ✓ returns Dec 31, given Dec 31
    Tests Date.compare method
      ✓ returns -1 if d1 is before d2
      ✓ returns 1 if d1 is after d2
      ✓ returns 0 if d1 is the same d2
      ✓ returns 0 if d1 is the same d2 except for the time part
    Tests Date.daysBetween method
      ✓ returns -1 if d1 is the day before d2
      ✓ returns 1 if d1 is the day after d2
      ✓ returns 0 if d1 is the same day as d2
      ✓ returns 0 if d1 is the same day as d2 except for the time part
    Tests for addDays method
      ✓ works within a month
      ✓ works across the end of a month
      ✓ works across the end of the year
      ✓ works with zero
      ✓ works with negative numbers

  Tests a method Reading.getEstimatesFromReadingsArray that returns an array of Readings representing month-end estimates for the input range of customer readings
    Tests for validation cases
      ✓ should throw a RangeError if the readings array does not have at least two entries
      ✓ should not throw a RangeError if the readings array has at least two entries
      ✓ should throw a RangeError if the second date is not after the first date
    Tests for returned estimation array cases
      ✓ should not include a final month-end reading in the estimates
      ✓ should return the estimate between two monthly readings
      ✓ should return three estimates between two reading dates with three missing estimates
      ✓ should return the integer part of the estimated reading
      ✓ should return all estimates between each pair of reading dates, for multiple reading dates
      ✓ should not return an estimate if there was an actual reading on that day
      ✓ should return an empty array if all readings are on the last day of the month
      ✓ tests a potential off-by-one scenario when the reading is the day before the end of the month

  Tests for helper functions
    Tests for Reading.getEstimationDatesBetweenDates method
      ✓ returns nothing when there are no estimates dates between the test dates
      ✓ correctly omits the first date if it is an estimation date
      ✓ correctly omits the second date if it is an estimation date
      ✓ correctly returns the last date of the month for all months between the dates

  Test Timer
    ✓ handles a lap (100ms)

  Test TimerViaPrototype
    ✓ handles a lap (100ms)


  36 passing (221ms)

root@1b011f8852b1:/usr/share/nodeJs#

When I showed this to the person I was doing the work for, he immediately said "no, that test case is wrong, you have it around the wrong way", and he was right, and I fixed it. That's the documentation "they" are talking about.

Oh, and Carol goes on to confirm this very thing one minute later.

Also bear in mind that just cos a test could be written in such a way as to impart good clear information doesn't mean that all tests do. In my experience of looking at open-source projects' tests to get clarity on things (and I include testing frameworks' own tests in this!), I am left knowing less than I did before I looked. It's almost like there's a rule in OSS projects that the code needs to be shite or they won't accept it ;-)


And that's it. It was an interesting podcast, but I really, really strongly disagreed with most of what Ben said, and why he said it. It would be one thing if he was held to account (and the others tried this at times), but as it is - other than joking that Ben is a naysayer - I think there's some dangerous content in here.

Oh, one last thing… in the outro the team suggests some resources for testing. Most of what they suggested seems to be "what to do", not "why you do it". I think the first thing one should do when considering testing is to read Test Driven Development by Kent Beck. Start with that. Oh, this reminds me… there was not actually much discussion of TDD in this episode. TDD is tangential to testing per se, but it's an important topic. Maybe they can do another episode focusing on that.

Follow-up

The Working Code Podcast team have responded to my observations here in a subsequent podcast episode, which you can listen to here: 011: Listener Questions #1. Go have a listen.

Righto.

--
Adam