Showing posts with label TDD.

Thursday 8 April 2021

TDD and external services

G'day

You might have noticed I spend a bit of my time encouraging people to use TDD, or at the very least making sure yer code is tested somehow. But use TDD ;-)

As an interesting aside, I recently failed a technical interview because the interviewer didn't feel I was strong enough at the testing side of things. Given what I see around the industry… that seems to be a moderately high bar yer setting for yerselves there, peeps. Or perhaps I'm just shit at articulating myself. Hrm. But anyway.

OK, so I rattled out a quick article a few days ago - "TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode" - which revisits some existing ground and by-and-large is not relevant to what I'm going to say here, other than the "TDD & professionalism" being why I bang on about it so much. And you might think I bang on about it here, but I also bang on about it at work (when I have work I mean), and in my background conversations too. I try to limit it to only my technical associates, that said.

Right so Mingo hit me up in a comment on that article, asking this question:

Something I ran into was needing to access the external API for the tests and I understand that one usually uses mocking for that, right? But, my question is then: how do you then **know** that you're actually calling the API correctly? Should I build the error handling they have in their API into my mocked up API as well (so I can test my handling of invalid inputs)? This feels like way too much work. I chose to just call the API and use a test account on there, which has it's own issues, because that test account could be setup differently than the multiple different live ones we have. I guess I should just verify my side of things, it's just that it's nice when it's testing everything together.

Yep, good question. With new code, my approach to TDD is based on the public interface doing what's been asked of it. One can see me working through this process in my earlier article "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements". Here I'm building a web service end point - by definition the public interface to some code - and I am always hitting the controller (via the routing). And whatever I start testing, I just "fake it until I make it". My first test case here is "It needs to return a 200-OK status for GET requests on the /workshops endpoint", and the test is this:

/**
 * @testdox it needs to return a 200-OK status for successful GET requests
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturns200()
{
    $this->client->request('GET', '/workshops/');

    $this->assertEquals(Response::HTTP_OK, $this->client->getResponse()->getStatusCode());
}

To get this to pass, the first iteration of the implementation code is just this:

public function doGet() : JsonResponse
{
    return new JsonResponse(null);
}

The next case is "It returns a collection of workshop objects, as JSON", implemented thus:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturnsJson()
{
    $workshops = [
        new Workshop(1, 'Workshop 1'),
        new Workshop(2, 'Workshop 2')
    ];

    $this->client->request('GET', '/workshops/');

    $resultJson = $this->client->getResponse()->getContent();
    $result = json_decode($resultJson, false);

    $this->assertCount(count($workshops), $result);
    array_walk($result, function ($workshopValues, $i) use ($workshops) {
        $workshop = new Workshop($workshopValues->id, $workshopValues->name);
        $this->assertEquals($workshops[$i], $workshop);
    });
}

And the code to make it work shows I've pushed the mocking one level back into the application:

class WorkshopsController extends AbstractController
{

    private WorkshopCollection $workshops;

    public function __construct(WorkshopCollection $workshops)
    {
        $this->workshops = $workshops;
    }

    public function doGet() : JsonResponse
    {
        $this->workshops->loadAll();

        return new JsonResponse($this->workshops);
    }
}

class WorkshopCollection implements \JsonSerializable
{
    /** @var Workshop[] */
    private $workshops;

    public function loadAll()
    {
        $this->workshops = [
            new Workshop(1, 'Workshop 1'),
            new Workshop(2, 'Workshop 2')
        ];
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

(I've skipped a step here… the first iteration could/should be to mock the data right there in the controller, and then refactor it into the model, but this isn't about refactoring, it's about mocking).

From here I refactor further, so that instead of having the data itself in loadAll, the WorkshopCollection calls a repository, and the repository calls a DAO, which for now ends up being:

class WorkshopsDAO
{
    public function selectAll() : array
    {
        return [
            ['id' => 1, 'name' => 'Workshop 1'],
            ['id' => 2, 'name' => 'Workshop 2']
        ];
    }
}

The next step is where Mingo's question comes in. The next refactor is to swap out the mocked data for a DB call. We'll end up with this:

class WorkshopsDAO
{
    private Connection $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function selectAll() : array
    {
        $sql = "
            SELECT
                id, name
            FROM
                workshops
            ORDER BY
                id ASC
        ";
        $statement = $this->connection->executeQuery($sql);

        return $statement->fetchAllAssociative();
    }
}

But wait. If we do that, our unit tests will be hitting the DB. Which we are not gonna do. We've run out of things to directly mock as we're at the lower boundary of our application, and the connection object is "someone else's code" (Doctrine/DBAL in this case). We can't mock that, but fortunately this is why I have the DAO tier. It acts as the frontier between our app and the external service provider, and we still mock that:

public function testDoGetReturnsJson()
{
    $workshopDbValues = [
        ['id' => 1, 'name' => 'Workshop 1'],
        ['id' => 2, 'name' => 'Workshop 2']
    ];

    $this->mockWorkshopDaoInServiceContainer($workshopDbValues);

    // ... unchanged ...

    array_walk($result, function ($workshopValues, $i) use ($workshopDbValues) {
        $this->assertEquals($workshopDbValues[$i], $workshopValues);
    });
}

private function mockWorkshopDaoInServiceContainer($returnValue = []): void
{
    $mockedDao = $this->createMock(WorkshopsDAO::class);
    $mockedDao->method('selectAll')->willReturn($returnValue);

    $container = $this->client->getContainer();
    $workshopRepository = $container->get('test.WorkshopsRepository');

    $reflection = new \ReflectionClass($workshopRepository);
    $property = $reflection->getProperty('dao');
    $property->setAccessible(true);
    $property->setValue($workshopRepository, $mockedDao);
}

We just use a mocking library (baked into PHPUnit in this case) to create a runtime mock, and we put that into our repository.

The tests pass, the DB is left alone, and the code is "complete" so we can push it to production perhaps. But we are not - as Mingo observed - actually testing that what we are asking the DB to do is being done. Because all our tests mock the DB part of things out.

The solution is easy, but it's not done via a unit test. It's done via an integration test (or end-to-end test, or acceptance test, or whatever you wanna call it), which hits the real endpoint, which queries the real database and gets the real data. Adjacent to that in the test, we hit the DB directly to fetch the records we're expecting, and then we check that the JSON the endpoint returns represents the same data we manually fetched from the DB. This tests the SQL statement in the DAO, that the fetched data models OK in the repo, and that the model (WorkshopCollection here) applies whatever business logic is necessary to the data from the repo before passing it back to the controller to return with the response, which was requested via the external URL. IE: it tests end-to-end.

public function testDoGetExternally()
{
    $client = new Client([
        'base_uri' => 'http://fullstackexercise.backend/'
    ]);

    $response = $client->get('workshops/');
    $this->assertEquals(Response::HTTP_OK, $response->getStatusCode());
    $workshops = json_decode($response->getBody(), false);

    /** @var Connection */
    $connection = static::$container->get('database_connection');
    $expectedRecords = $connection->query("SELECT id, name FROM workshops ORDER BY id ASC")->fetchAll();

    $this->assertCount(count($expectedRecords), $workshops);
    array_walk($expectedRecords, function ($record, $i) use ($workshops) {
        $this->assertEquals($record['id'], $workshops[$i]->id);
        $this->assertSame($record['name'], $workshops[$i]->name);
    });
}

Note that even though I'm saying "it's not a unit test, it's an integration test", I'm still implementing it via PHPUnit. The testing framework should just provide testing functionality: it should not dictate what kind of testing you implement with it. And similarly not all tests written with PHPUnit are unit tests. They are xUnit style tests, eg: they're in a class called SomethingTest, and the methods are prefixed with test and use assertion methods to implement the test constraints.

Also: why don't I just use end-to-end tests then? They seem more reliable? Yep, they are. However they are also more fiddly to write as they have more set-up / tear-down overhead, so they take longer to write. They also generally take longer to run, and given TDD is supposed to be a very quick cadence of test / run / code / run / refactor / run, the less overhead the better. The slower your tests are, the more likely you are to abandon that cadence and switch to writing the code first and testing later, once you need to clear your head. In the meantime your code design has gone out the window. Also unit tests are more focused - addressing only a small part of the codebase overall - and that has merit in itself. Also, I used a really trivial example here, but some end-to-end tests are really very tricky to write, given the overall complexity of the functionality being tested. I've been in the lucky position that at my last gig we had a dedicated QA development team, and they wrote the end-to-end tests for us, but this also meant that those tests were executed after the dev considered the tasks "code complete", and QA ran the tests to verify this. There is no definitive way of doing this stuff, that said.

To round this out, I'm gonna digress into another chat I had with Mingo yesterday:

Normally I'd say this:

Unit tests
Test logic of one small part of the code (maybe a public method in one class). If you were doing TDD and about to add a condition into your logic, you'd write a unit test to cover the new expectations that the condition brings to the mix.
Functional tests
These are a subset of unit tests which might test a broader section of the application, eg from the public frontier of the application (so like an endpoint) down to where the code leaves the system (to a logger, or a DB, or whatever). The difference between unit tests and functional tests - to me - is just how distributed the logic being tested is throughout the system.
Integration tests
Test that the external connections all work fine. So if you use the app's DB configuration, the correct database is usable. I'd personally consider a test an integration test if it only focused on a single integration.
Acceptance tests (or end-to-end tests)
Are to integration tests what functional tests are to unit tests: a broader subset. That test above is an end-to-end test: it tests the web server, the application, and the DB.

And yes I know the usages of these terms vary a bit.

Furthermore, considering the distinction between BDD and TDD:

  • The BDD part is the nicely-worded case labels, which in theory (but seldom in practice, I find) are written in direct collaboration with the client user.
  • The TDD part is about when in the design phase they are created: with TDD it's before the implementation is written; I am not sure whether in BDD it matters or is stipulated.
  • But both of them are design / development strategies, not testing strategies.
  • The tests can be implemented as any sort of test, not specifically unit tests or functional tests or end-to-end tests. The point is the test defines the design of the piece of code being written: it codifies the expectations of the behaviour of the code.
  • BDD and TDD tests are generally implemented via some unit testing framework, be it xUnit-style (testMyMethodDoesSomethingRight) or Jasmine-esque (it("does something right", function (){})); there's a quick sketch of the two styles just below.
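
To illustrate what I mean by the two styles (a sketch only: the function and class names here are illustrative, not from any real codebase), the same case might be expressed like this in JavaScript:

const { expect } = require("chai");

// The implementation under test (illustrative only)
const add = (x, y) => x + y;

// Jasmine-esque (Mocha / Jasmine / Jest): the case label is a sentence, and the
// body holds the assertions
describe("add", () => {
    it("returns the sum of the two operands", () => {
        expect(add(2, 3)).to.equal(5);
    });
});

// The xUnit-esque equivalent would be an AdderTest class with a test-prefixed
// method, eg: testReturnsTheSumOfTheTwoOperands(), as per the PHPUnit examples
// earlier in this post.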

One can also do testing that is not TDD or BDD, but it's a less than ideal way of going about things, and I would imagine it results in subpar tests, fragmented test coverage, and tests that don't really help one understand the application, so are harder to maintain in a meaningful way. But they are still better than no tests at all.

When I am designing my code, I use TDD, and I consider my test cases in a BDD-ish fashion (except I do it on the client's behalf generally, and sadly), and I use PHPUnit (xUnit) to do so on PHP, and Mocha (Jasmine-esque) to do so on JavaScript.

Hopefully that clarifies some things for people. Or people will leap at me and tell me where I'm wrong, and I can learn the error in my ways.

Righto.

--
Adam

Tuesday 6 April 2021

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD is not a testing strategy

TDD. Is. Not. A. Testing. Strategy.

Just a passing thought. Apropos of absolutely nothing. 'Onest guv.(*)

Dunno if it occurred to you, but that TDD thing? It's not a testing strategy. It's a design strategy.

Let's look at the name. In the name, "test-driven" is a compound adjective: it just modifies the subject. The subject is development. It's about development. It's not about testing.

It's probably better described by BDD, although to me that's a documentation strategy, rather than a development (or testing) one. BDD is about arriving at the test cases (with the client), TDD is about implementing those cases.

The purpose of TDD is to define a process that leads you - as a developer - to pay more attention to the design of your code. It achieves this by forcing you to address the requirement as a set of needs (or cases), eg "it needs to return the product of the two operands". Then you demonstrate your understanding of the case by codifying what it means for the case to "work" (a test that when you pass 2 and 3 to the function it returns 6), and then you implement the code to address that case. Then you refine the case, refactor the implementation so it's "nicer", or move on to the next case, and cycle through that one. Rinse and repeat.
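
As a minimal sketch of that first cycle (in Mocha / Chai here; the function name and file layout are mine for illustration, not prescribed by TDD itself):

const { expect } = require("chai");

// The implementation: in the TDD flow this starts as just enough code to make
// the case below pass, and gets refined / refactored in later cycles
const multiply = (x, y) => x * y;

// The case: "it needs to return the product of the two operands"
describe("multiply", () => {
    it("needs to return the product of the two operands", () => {
        expect(multiply(2, 3)).to.equal(6);
    });
});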

But all along the object of the exercise is to think about what needs to be done, break it into small pieces, and code just what's needed to implement the customer need. It also provides a firm foundation to be able to safely refactor the code once it's working. You know: the bit that you do to make your code actually good; rather than just settling for "doesn't break", which is a very low bar to set yourself.

That you end up with repeatable tests along the way is a by-product of TDD. Not the reason you're doing it. Although obviously it's key to offering that stability and confidence for the refactor phase.

Too many people I interact with, when they're explaining why it's OK that they don't do TDD [because reasons], fall back to the validity / longevity of the tests. It's… not about the tests. It's about how you design your solutions.

Lines of code are not a measure of productivity

Tangential to this, we all know that LOC are not a measure of productivity. There's not a uniform relationship between one line of code and another adjacent line of code. Ten lines of code in one logic block that represent the implementation of a method are likely to represent less of a productivity burden than a single line of code nested 14 levels deep in some flow-control-logic monstrosity. We all know this. Not all lines of code are created equal. More is definitely not better. But fewer is also not intrinsically better. It's just an invalid metric.

So why is it that so many people are prepared to count the lines of code a test adds to the codebase as a rationalisation (note: not a justification, because it's invalid) as to why they don't have time to write that test? Or claim that the test represents an undue burden on the codebase? Why are they back to measuring productivity with LOC? Why won't they let it occur to them that - especially when employing TDD - the investment in the LOC for the test code reduces the investment in the LOC for the production code? And note I do not mean this as a benefit that one only realises over time, having amortised it over a long code lifespan. I mean on the first iteration of code<->test<->release, because the bouncing back and forth between each step there will be reduced. Even for code for which this might (although probably won't) be the only iteration the production code sees.

It's just "measure twice, cut once" for code. Of course one doesn't have the limitation in code that one can only cut once; the realisation here needs to be that "measuring" takes really a lot less time than "cutting" when it comes to code.

In closing, if you are rationalising to me (or, in reality: to yourself) why you don't do TDD, and that rationalisation involves lines of code or how often code will be revisited, then you are not talking about TDD. You are pretty much just setting up a strawman to tilt at to make yourself feel better. Hopefully that tactic will seem a little more transparent to you now.

Design your code. Measure your code.

Righto.

--
Adam

(*) that's a lie. It's obviously a retaliation to a coupla comments I've been getting on recent articles I've written about TDD.

Sunday 4 April 2021

TDD & professionalism: a brief follow-up to Thoughts on Working Code podcast's Testing episode

G'day:

Yer gonna need to go back and read the comments on Thoughts on Working Code podcast's Testing episode for context here. Especially as I quote a couple of them. I kinda let the comments run themselves there a bit and didn't reply to everyone, as I didn't want to dominate the conversation. But one earlier comment that made me itchy, and now one comment that came in in the last week or so, have made me decide to - briefly - follow up on one point that I think warrants drawing attention to and building on.

Briefly, Working Code Pod did an episode on testing, and I got all surly about some of the things that were said, and wrote them up in the article I link to above. BTW Ben's reaction to my feedback in their follow-up episode ("Listener Questions #1") was the source of my current strapline quote: "the tone... it sounds very heated and abrasive". That should frame things nicely.

Right so in the comments to that previous article, we have these discussion fragments:

  • Sean Corfield - Heck, there's still a sizable portion that still doesn't use version control or has some whacked-out manual approach to "source code control".
  • Ben replied to that with - Yikes, I have trouble believing that there are developers _anywhere_ that don't use source-control in this day-and-age. That seems like "table stakes" for development. Period.
  • [off-screen, Adam bites his tongue]
  • Then Sean Hogge replied to the article rather than that particular comment thread. I'm gonna include a chunk of what he said here:

    18 months ago, I was 100% Ben-shaped. Testing simply held little ROI. I have a dev server that's a perfect replica of production, with SSL and everything. I can just log in, open the dashboard, delete the cache and check things with a few clicks.

    But as I started developing features that interact with other features, or that use the same API call in different ways, or present the same data with a partial or module with options, I started seeing an increase in production errors. I could feel myself scrambling more and more. When I stepped back and assessed objectively, tests were the only efficient answer.

    After about 3 weeks of annoying, frustrating, angry work learning to write tests, every word of Adam C's blog post resonates with me. I am not good at it (yet), I am not fast at it (yet), but it is paying off exactly as he and those he references promised it would.

    I recommend reading his entire comment, because it's bloody good.

  • Finally last week I listened to a YouTube video "Jim Coplien and Bob Martin Debate TDD", from which I extracted these two quotes from Martin that drew me back to this discussion:
    • (@ 43sec) My thesis is that it has become infeasible […] for a software developer to consider himself professional if [(s)he] does not practice test-driven development.
    • (@ 14min 42sec) Nowadays it is […] irresponsible for a developer to ship a line of code that [(s)he] has not executed any unit test [upon]. It's important to note that "nowadays" means 2012 in this case: that's when the video was from.
    And, yes, OK the two quotes say much the same thing. I just wanted to emphasise the words "professional" and "irresponsible".

This I think is what Ben is missing. He shows incredulity that someone in 2021 would not use source control. People's reaction is going to be the same to his suggestion that he doesn't put much focus on test automation, or practise TDD as a matter of course when he's designing his code. And Sean (Hogge) nails it for me.

(And not just Ben. I'm not ragging on him here, he's just the one providing the quote for me to start from).

TDD is not something to be framed in a context alongside other peripheral things one might pick up like this week's kewl JS framework, or Docker or some other piece of utility one might optionally use when solving a client's problem. It's lower level than that, so it's false equivalence to bracket it like that conceptually. Especially as a rationalisation for not addressing your shortcomings in this area.

Testing yer code and using TDD is as fundamental to your professional responsibilities as using source control. That's how one ought to contextualise this. Now I know plenty of people who I'd consider professional and strong pillars of my dev community who aren't as good as they could be with testing/TDD. So I think Martin's first quote is a bit strong. However I think his second quote nails it. If you're not doing TDD you are eroding your professionalism, and you are being professionally irresponsible by not addressing this.

In closing: thanks to everyone for putting the effort you did into commenting on that previous article. I really appreciate the conversation even if I didn't say thanks etc to everyone participating.

Righto.

--
Adam

Friday 12 March 2021

Vue.js and TDD: adding client-side form field validation

G'day:

In the previous article ("TDDing the reading of data from a web service to populate elements of a Vue JS component"), I started connecting my front-end form to the back-end web service that does its data-handling. The first step was easy: just fetching some data the form needed to display a multi-select:

The next step for the back-end will be to receive the form-field values, but before we do that we need to ensure we're doing a superficial level of value validation on the front-end. I say "superficial" because this sort of validation is only there to guide the user, not prevent any nefarious activity: any attempt to prevent nefarious activity on the front-end is too easy to circumvent by the baddy, so it's not worth sinking too much time into. The reallyreally validation needs to be done on the back-end, and that will be done in the next article.

The first TDD step is to identify a test case. I'm gonna cheat a bit here and list all the cases I'm gonna address up front, here. This is kinda getting ahead of myself, but it makes things clearer for the article. I'll only be addressing one of them at a time though. One thing to note with TDD: don't try to compile an exhaustive list of test cases first, then work through them: that'll just delay you getting on with it, and there's no need to identify every case before you start anyhow: sometimes that can seem like a daunting task in and of itself (not so much with this exercise, granted, as it's so trivial). Identify a case. Test it. Implement it. Identify the next case. Test it. Implement it. Maybe see if there's any refactoring that is bubbling up; but probably wait until your third case for those refactoring opportunities to surface more clearly. Rinse and repeat. Eat that elephant one bite at a time.

Actually before I get on with the new validation cases, we've already got some covered:

  • should have a required text input for fullName, maxLength 100, and label 'Full name'
  • should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
  • should have a required text input for emailAddress, maxLength 320, and label 'Email address'
  • should have a required password input for password, maxLength 255, and label 'Password'
  • should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
  • should list the workshop options fetched from the back-end
  • should have a button to submit the registration
  • should leave the submit button disabled until the form is filled
  • should disable the form and indicate data is processing when the form is submitted
  • should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
  • should display the registration summary 'template' after the registration has been submitted
  • should display the summary values in the registration summary

We're at least validating requiredness and (max) string-length. And we're part way there with "should leave the submit button disabled until the form is filled", if we just update that to be suffixed with "… with valid data".

Now that I look at the form, I can only see one validation case we need to address: password strength: "it has a password that is at least 8 characters long, has at least one each of lowercase letter, uppercase letter, digit, and one other character not in those subsets (eg: punctuation etc)". One might think we need to validate that the email address is indeed an email address, but trying to get that 100% right is a mug's game, and anyhow there's no way in form field validation to check whether it's their email address - ie: that they can receive email at it. And we're not asking for an RFC-5322-compliant email address, we're asking for their email address. We need to trust that if they want to engage with us, they'll allow us to communicate with them. That said, I am at least going to change the input type to email for that field; first having updated the relevant test to expect it (see the sketch below). This is a matter of semantics, not validation though. They can still type-in anything they like there.
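
Something like this is the sort of assertion that updated test makes (a sketch only: the selector and surrounding set-up are assumed to follow the existing tests in this suite, they're not copied from the real test):

it("should have a required email input for emailAddress, maxLength 320, and label 'Email address'", async () => {
    let emailInput = component.find("form.workshopRegistration input[name='emailAddress']");

    expect(emailInput.exists()).to.be.true;
    expect(emailInput.attributes("type"), "input type should now be email").to.equal("email");
});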

For the password validation, I am going to test by example:

describe("Password-strength validation tests", () => {
    const examples = [
        {case: "cannot have fewer than 8 characters", password: "Aa1!567", valid: false},
        {case: "can have exactly 8 characters", password: "Aa1!5678", valid: true},
        {case: "can have more than 8 characters", password: "Aa1!56789", valid: true},
        {case: "must have at least one lowercase letter", password: "A_1!56789", valid: false},
        {case: "must have at least one uppercase letter", password: "_a1!56789", valid: false},
        {case: "must have at least one digit", password: "Aa_!efghi", valid: false},
        {case: "must have at least one non-alphanumeric character", password: "Aa1x56789", valid: false}
    ];

    examples.forEach((testCase) => {
        it(testCase.case, async () => {
            await populateForm(testCase.password);
            await flushPromises();

            let buttonDisabledAttribute = component.find("form.workshopRegistration button").attributes("disabled");
            let failureMessage = `${testCase.password} should be ${testCase.valid ? "valid" : "invalid"}`;

            testCase.valid
                ? expect(buttonDisabledAttribute, failureMessage).to.not.exist
                : expect(buttonDisabledAttribute, failureMessage).to.exist;
        });
    });
});

This:

  • specifies some examples (both valid and invalid examples);
  • loops over them;
  • populates the form with otherwise known-good values (so, all things being equal, the form can be submitted);
  • sets the password to be the test password;
  • tests whether the form can be submitted (which it only should be able to be if the password is valid).

I've just thought of another case: we better explain to the user why the form is only submittable when their password follows our rules. We'll deal with that once we get the functionality operational.

I've also refactored the populateForm function slightly:

let populateForm = async (password = VALID_PASSWORD) => {
    let form = component.find("form.workshopRegistration");
    form.find("input[name='fullName").setValue(TEST_INPUT_VALUE +"_fullName");
    form.find("input[name='phoneNumber").setValue(TEST_INPUT_VALUE +"_phoneNumber");
    form.find("input[name='emailAddress").setValue(TEST_INPUT_VALUE +"_emailAddress");
    form.find("input[name='password").setValue(password);
    form.find("select").setValue(TEST_SELECT_VALUE);
    await flushPromises();
};

Previously it was like this:

let populateForm = async () => {
    let form = component.find("form.workshopRegistration")
    form.findAll("input").forEach((input) => {
        let name = input.attributes("name");
        input.setValue(TEST_INPUT_VALUE + name);
    });
    form.find("select").setValue(TEST_SELECT_VALUE);

    await flushPromises();
};

I was being lazy and looping over the input fields setting dummy values, but now that I wanted to set the password to a different sort of value, I could not just set all the input elements the same way; it was only the text and email ones that needed the generic dummy values. I could make my element selector more clever, or… I could just make the whole thing more explicit. Part of the reason for simplification was that I messed up the refactor of this three times trying to get the loop to hit only the correct inputs, so I decided just to keep it simple. Conveniently, this tweak also meant that the other tests would not start failing due to not being able to submit the form because they lacked a valid password :-)

Cool, so when I run my tests now, all the ones expecting the invalid passwords to prevent the form from being submitted now fail. The two cases with valid passwords do not fail, because they meet the current validation requirements already: they have length, and that's all we're validating. I'm currently pondering whether it's OK that we have passing test cases here already. I'm not sure. As long as they keep passing once we have the correct validation in, I guess that's all right.

The implementation here is easy, I just add a function that checks the password follows the rules, and then use that in the isFormUnready function instead of the length check:

computed : {
    isFormUnready: function () {
        let unready = this.formValues.fullName.length === 0
            || this.formValues.phoneNumber.length === 0
            || this.formValues.workshopsToAttend.length === 0
            || this.formValues.emailAddress.length === 0
            // was: || this.formValues.password.length === 0;
            || !this.isPasswordValid;

        return unready;
    },
    // ...
    isPasswordValid: function () {
        const validPasswordPattern = new RegExp("^(?=.*[a-z])(?=.*[A-Z])(?=.*\\d)(?=.*\\W)(?:.){8,}$");
        return validPasswordPattern.test(this.formValues.password);
    }
}
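
As a quick sanity check of that pattern outside the component (a standalone Node snippet of my own, not part of the app or its test suite), it behaves as the example table above expects:

const validPasswordPattern = /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*\W)(?:.){8,}$/;

console.log(validPasswordPattern.test("Aa1!5678"));  // true: exactly 8 chars, all rules met
console.log(validPasswordPattern.test("Aa1!567"));   // false: only 7 characters
console.log(validPasswordPattern.test("Aa1x56789")); // false: no non-alphanumeric character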

Actually I had a sudden thought about how I could feel happier about those cases that are currently passing. I updated isPasswordValid to just return false for a moment, and verified those cases now failed. It seems pedantic and obvious, but it does demonstrate they're using the correct logic flow, and the behaviour is working. I could also have pushed the validation rule out to only accept, say, 10-character passwords, and noted those cases failing, or something like that too.

Anyway, that implementation worked, and the tests all pass, and the UI behaviour also seems correct.

Next we need to address this other case I spotted: "it shows the user a message if the password value is not valid after they have typed one in". Although as I was writing the test, it occurred to me there are three cases to address here:

  • it does not show the user a message if the password value is valid after they have typed one in
  • it shows the user a message if the password value is not valid after they have typed one in
  • it hides the previously displayed bad password message if the password is updated to be valid

The message will be implemented as an <aside> element, and the current password will be evaluated for validity when the keyup event fires. So it won't display until the user starts typing in the box; and will re-evaluate every keystroke. The code below shows the final rendition of these tests. Initially I had all the code inline for each of them, but each of them followed a similar sequence of the same two steps, so I extracted those:


it("does not show the user a message if the password value is valid after they have typed one in", async () => {
    confirmMessageVisibility(false);

    await enterAPassword(VALID_PASSWORD);
    confirmMessageVisibility(false);
});

it("shows the user a message if the password value is not valid after they have typed one in", async () => {
    confirmMessageVisibility(false);

    await enterAPassword(INVALID_PASSWORD);
    confirmMessageVisibility(true);
});

it("hides the previously displayed bad password message if the password is updated to be valid", async () => {
    confirmMessageVisibility(false);

    await enterAPassword(INVALID_PASSWORD);
    confirmMessageVisibility(true);

    await enterAPassword(VALID_PASSWORD);
    confirmMessageVisibility(false);
});

let enterAPassword = async function (password) {
    let passwordField = component.find("form.workshopRegistration input[type='password']");
    await passwordField.setValue(password);
    await passwordField.trigger("keyup");

    await flushPromises();
};

let confirmMessageVisibility = function (visible) {
    let messageField = component.find("form.workshopRegistration aside.passwordMessage");
    visible
        ? expect(messageField.exists(), "message should be visible").to.be.true
        : expect(messageField.exists(), "message should not be visible").to.be.false;
};

  • Each test confirms the message is not displayed to start with.
  • I mimic keying in a password by setting the value, then triggering a keyup event;
  • and once that's done, I check whether I can now see the message.
  • The last test checks first an invalid, then a valid password.

The implementation is really easy. Here are the template changes. I just added an event listener, and the <aside/>, which is set to be visible based on showPasswordMessage:

<input @keyup="checkPassword" type="password" name="password" required="required" maxlength="255" id="password" v-model="formValues.password" autocomplete="new-password">

<aside class="passwordMessage" v-if="showPasswordMessage">
    Password be at least eight characters long
    and must comprise at least one uppercase letter, one lowercase letter, one digit, and one other non-alphanumeric character
</aside>

And in the code part, we just check if the password is legit, and compute that variable accordingly:

checkPassword() {
    this.showPasswordMessage = !this.isPasswordValid;
}

That seems way too easy, but it all works. The tests pass, and it works on-screen too:

 

Haha. You know what? I wondered if I should test the text of the message as well, and went "nah, screw that". And now that I have that image in there, I see the typo in it. Ahem.

That will do for that lot. I quite enjoyed working out how to test the password changes there (how I trigger the keyup event). And doing stuff in Vue.js is pretty easy.

Tomorrow I'll go back to the web service end of things and do the back-end validation, which will need to be a lot more comprehensive than this. But it'll be easier to test as it's all PHP, which I know better, and it's just value checking, not messing around with a UI.

Oh. The code for the two files I was working in is here, on Github:

Righto.

--
Adam

Wednesday 10 March 2021

TDDing the reading of data from a web service to populate elements of a Vue JS component

G'day:

This article leads on from "Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements", which itself continues a saga I started back with "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 1: Intro & Nginx" (that's a 12-parter), and a coupla other intermediary articles also related to this body of work. What I'm doing here specifically leads on from "Vue.js: using TDD to develop a data-entry form". In that article I built a front-end Vue.js component for a "workshop registration" form, submission, and summary:


Currently the front-end transitions work, but it's not connected to the back-end at all. It needs to read that list of "Workshops to attend" from the DB, and it needs to save all the data between form and summary.

In the immediately previous article I set up the endpoint the front end needs to call to fetch that list of workshops, and today I'm gonna remove the mocked data the form is using, and get it to use reallyreally data from the DB, via that end point.

To recap, this is what we currently have:

In the <template/> section of WorkshopRegistrationForm.vue:

<label for="workshopsToAttend" class="required">Workshops to attend:</label>

<select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
    <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
</select>

And in the code part of it:

mounted() {
      this.workshops = this.workshopService.getWorkshops();
},

And WorkshopService just has some mocked data:

getWorkshops() {
    return [
        {value: 2, text:"Workshop 1"},
        {value: 3, text:"Workshop 2"},
        {value: 5, text:"Workshop 3"},
        {value: 7, text:"Workshop 4"}
    ];
}

In this exercise I will be taking very small increments in my test / code / test / code cycle here. It's slightly arbitrary, but it's just because I like to keep separate the "refactoring" parts from the "new development parts", and I'll be doing both. The small increments help surface problems quickly, and also assist my focus so that I'm less likely to stray off-plan and start implementing stuff that is not part of the increment's requirements. This might, however, make for tedious reading. Shrug.

We're gonna refactor things a bit to start with, to push the mocked data closer to the boundary between the application and the DB. We're gonna follow a similar tiered application approach as the back-end work, in that we're going to have these components:

WorkshopRegistrationForm.vue
The Vue component providing the view part of the workshop registration form.
WorkshopService
This is the boundary between WorkshopRegistrationForm and the application code. The component calls it to get / process data. In this exercise, all it needs to do is to return a WorkshopCollection.
WorkshopCollection
This is the model for the collection of Workshops we display. It loads its data via WorkshopRepository
Workshop
This is the model for a single workshop. Just an ID and name.
WorkshopsRepository
This deals with fetching data from storage, and modelling it for the application to use. It knows about the application domain, and it knows about the DB schema. And translates between the two.
WorkshopsDAO
Because the repository has logic in it that we will be testing, our code that actually makes the calls to the back-end API has been extracted into this DAO class, so that when testing the repository it can be mocked-out.
Client
This is not our code. We're using Axios to make calls to the API, and the DAO is a thin layer around this.

As a first step we are going to push that mocked data back to the repository. We can do this without having to change our tests or writing any data manipulation logic.

Here's the current test run:

  Tests of WorkshopRegistrationForm component
     should have a required text input for fullName, maxLength 100, and label 'Full name'
     should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
     should have a required text input for emailAddress, maxLength 320, and label 'Email address'
     should have a required password input for password, maxLength 255, and label 'Password'
     should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
     should list the workshop options fetched from the back-end
     should have a button to submit the registration
     should leave the submit button disabled until the form is filled
     should disable the form and indicate data is processing when the form is submitted
     should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
     should display the registration summary 'template' after the registration has been submitted
     should display the summary values in the registration summary

We have to keep that green, and all we can do is move code around. No new code for now (other than a wee bit of scaffolding to let the app know about the new classes). Here's the first round of changes.

Currently the deepest we go in the new code is the repository, which just returns the mocked data at the moment:

class WorkshopsRepository {

    selectAll() {
        return [
            {value: 2, text:"Workshop 1"},
            {value: 3, text:"Workshop 2"},
            {value: 5, text:"Workshop 3"},
            {value: 7, text:"Workshop 4"}
        ];
    }
}

export default WorkshopsRepository;

The WorkshopCollection grabs that data and uses it to populate itself. It extends the native array class so it can be used as an array by the Vue template code:

class WorkshopCollection extends Array {

    constructor(repository) {
        super();
        this.repository = repository;
    }

    loadAll() {
        let workshops = this.repository.selectAll();

        this.length = 0;
        this.push(...workshops);
    }
}

module.exports = WorkshopCollection;

And the WorkshopService has had the mocked values removed, and in their place it tells its WorkshopCollection to load itself, and it returns the populated collection:

class WorkshopService {

    constructor(workshopCollection) {
        this.workshopCollection = workshopCollection;
    }

    getWorkshops() {
        this.workshopCollection.loadAll();
        return this.workshopCollection;
    }

    // ... rest of it unchanged...
}

module.exports = WorkshopService;

WorkshopRegistrationForm.vue has not changed, as it still receives an array of data to its this.workshops property, which is still used in the same way by the template code:

<template>
    <form method="post" action="" class="workshopRegistration" v-if="registrationState !== REGISTRATION_STATE_SUMMARY">
        <fieldset :disabled="isFormDisabled">

            <!-- ... -->

            <select name="workshopsToAttend[]" multiple="true" required="required" id="workshopsToAttend" v-model="formValues.workshopsToAttend">
                <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option>
            </select>

            <!-- ... -->

        </fieldset>
    </form>

    <!-- ... -->
</template>

<script>
// ...

export default {
    // ...
    mounted() {
        this.workshops = this.workshopService.getWorkshops();
    },
    // ...
}
</script>

Before we wire this up, we need to change our test to reflect where the data is coming from now. It used to be just stubbing the WorkshopService's getWorkshops method, but now we will need to be mocking the WorkshopRepository's selectAll method instead:

workshopService = new WorkshopService();
// was: sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
sinon.stub(repository, "selectAll").returns(expectedOptions);

The tests now break as we have not yet implemented this change. We need to update App.vue, which has a bit more work to do to initialise that WorkshopService now, with its dependencies:

<template>
    <workshop-registration-form></workshop-registration-form>
</template>

<script>
import WorkshopRegistrationForm from "./WorkshopRegistrationForm";
import WorkshopService from "./WorkshopService";
import WorkshopCollection from "./WorkshopCollection";
import WorkshopsRepository from "./WorkshopsRepository";

let workshopService = new WorkshopService(
    new WorkshopCollection(
        new WorkshopsRepository()
    )
);

export default {
    name: 'App',
    components: {
        WorkshopRegistrationForm
    },
    provide: {
        workshopService: workshopService
    }
}
</script>

Having done this, the tests are passing again.

Next we need to deal with the fact that so far the mocked data we're passing around is keyed on how it is being used in the <option> tags it's populating:

return [
    {value: 2, text:"Workshop 1"},
    {value: 3, text:"Workshop 2"},
    {value: 5, text:"Workshop 3"},
    {value: 7, text:"Workshop 4"}
]

value and text are derived from the values' use in the mark-up, whereas what we ought to be mocking is an array of Workshop objects, which have keys id and name:

class Workshop {

    constructor(id, name) {
        this.id = id;
        this.name = name;
    }
}

module.exports = Workshop;

We will need to update our tests to expect this change:

// was: sinon.stub(workshopService, "getWorkshops").returns(expectedOptions);
sinon.stub(repository, "selectAll").returns(
    expectedOptions.map((option) => new Workshop(option.value, option.text))
);

The tests fail again, so we're ready to change the code to make the tests pass. The repo now returns objects:

class WorkshopsRepository {

    selectAll() {
        return [
            new Workshop(2, "Workshop 1"),
            new Workshop(3, "Workshop 2"),
            new Workshop(5, "Workshop 3"),
            new Workshop(7, "Workshop 4")
        ];
    }
}

And we need to make a quick change to the stubbed saveWorkshopRegistration method in WorkshopService too, so the selectedWorkshops are now filtered on id, not value:

saveWorkshopRegistration(details) {
    let allWorkshops = this.getWorkshops();
    let selectedWorkshops = allWorkshops.filter((workshop) => {
        // was: return details.workshopsToAttend.indexOf(workshop.name) >= 0;
        return details.workshopsToAttend.indexOf(workshop.id) >= 0;
    });
    // ...

And the template now expects them:

<!-- was: <option v-for="workshop in workshops" :value="workshop.value" :key="workshop.value">{{workshop.text}}</option> -->
<option v-for="workshop in workshops" :value="workshop.id" :key="workshop.id">{{workshop.name}}</option>

I almost forgot about this, but the summary section also uses the WorkshopCollection data, and currently it's also coded to expect mocked workshops with value/text properties, rather than id/name ones. So the test needs updating:

it("should display the summary values in the registration summary", async () => {
    const summaryValues = {
        registrationCode : "TEST_registrationCode",
        fullName : "TEST_fullName",
        phoneNumber : "TEST_phoneNumber",
        emailAddress : "TEST_emailAddress",
        // was: workshopsToAttend : [{value: "TEST_workshopToAttend_VALUE", text:"TEST_workshopToAttend_TEXT"}]
        workshopsToAttend : [{id: "TEST_workshopToAttend_ID", name:"TEST_workshopToAttend_NAME"}]
    };
    
	// ...

    let ddValue = actualWorkshopValue.find("ul>li");
    expect(ddValue.exists()).to.be.true;
    // was: expect(ddValue.text()).to.equal(expectedWorkshopValue[0].value);
    expect(ddValue.text()).to.equal(expectedWorkshopValue[0].name);

	// ...
});

And the code change in the template:

<!-- was: <li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.value">{{workshop.text}}</li> -->
<li v-for="workshop in summaryValues.workshopsToAttend" :key="workshop.id">{{workshop.name}}</li>

And the tests pass again.

Now we will introduce the WorkshopsDAO with the data that it will get back from the API call mocked. It'll return this to the WorkshopsRepository which will implement the modelling now:

class WorkshopDAO {

    selectAll() {
        return [
            {id: 2, name: "Workshop 1"},
            {id: 3, name: "Workshop 2"},
            {id: 5, name: "Workshop 3"},
            {id: 7, name: "Workshop 4"}
        ];
    }
}

export default WorkshopDAO;

We also now need to update the test initialisation to pass a DAO instance into the repository, and for now we can remove the Sinon stubbing because the DAO is appropriately stubbed already:

// was:
// let repository = new WorkshopsRepository();
// sinon.stub(repository, "selectAll").returns(
//     expectedOptions.map((option) => new Workshop(option.value, option.text))
// );

workshopService = new WorkshopService(
    new WorkshopCollection(
        // was: repository
        new WorkshopsRepository(
            new WorkshopsDAO()
        )
    )
);

That's the only test change here (and the tests now break). We'll update the code to also pass-in a DAO when we initialise WorkshopService's dependencies, and also implement the modelling code in the WorkshopRepository class's selectAll method:

let workshopService = new WorkshopService(
    new WorkshopCollection(
        // was: new WorkshopsRepository()
        new WorkshopsRepository(
            new WorkshopsDAO()
        )
    )
);

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        return this.dao.selectAll().map((unmodelledWorkshop) => {
            return new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name);
        });
    }
}

We're stable again. The next bit of development is the important bit: add the code in the DAO that actually makes the call to the back-end API (which is where the DB data comes from). We will be changing the DAO to be this:

class WorkshopDAO {

    constructor(client, config) {
        this.client = client;
        this.config = config;
    }

    selectAll() {
        return this.client.get(this.config.workshopsUrl)
            .then((response) => {
                return response.data;
            });
    }
}

export default WorkshopDAO;

And that Config class will be this:

class Config {
    static workshopsUrl = "http://fullstackexercise.backend/workshops/";
}

module.exports = Config;

Now. Because Axios's get method returns a promise, we're going to need to cater to this in the up-stream methods that need to manipulate the data being promised. Plus, ultimately, WorkshopRegistration.vue's code is going to receive a promise, not just the raw data. IE, this code:

mounted() {
    // was: this.workshops = this.workshopService.getWorkshops();
    this.workshops = this.workshopService.getWorkshops()
        .then((workshops) => {
            this.workshops = workshops;
        });
},

A bunch of the tests rely on the state of the component after the data has been received:

  • should have a required text input for fullName, maxLength 100, and label 'Full name'
  • should have a required text input for phoneNumber, maxLength 50, and label 'Phone number'
  • should have a required text input for emailAddress, maxLength 320, and label 'Email address'
  • should have a required password input for password, maxLength 255, and label 'Password'
  • should have a required workshopsToAttend multiple-select box, with label 'Workshops to attend'
  • should list the workshop options fetched from the back-end
  • should have a button to submit the registration
  • should leave the submit button disabled until the form is filled
  • should disable the form and indicate data is processing when the form is submitted
  • should send the form values to WorkshopService.saveWorkshopRegistration when the form is submitted
  • should display the registration summary 'template' after the registration has been submitted
  • should display the summary values in the registration summary

In these tests we are gonna have to put the test code within a watch handler, so it waits until the promise returns the data (and accordingly workshops gets set, and the watch-handler fires) before trying to test with it, eg:

it("should list the workshop options fetched from the back-end", () => {
    component.vm.$watch("workshops", async () => {
        await flushPromises();
        let options = component.findAll("form.workshopRegistration select[name='workshopsToAttend[]']>option");

        expect(options).to.have.length(expectedOptions.length);
        options.forEach((option, i) => {
            expect(option.attributes("value"), `option[${i}] value incorrect`).to.equal(expectedOptions[i].value.toString());
            expect(option.text(), `option[${i}] text incorrect`).to.equal(expectedOptions[i].text);
        });
    });
});

Other than that the tests shouldn't need changing. Also now that the DAO is not itself a mock as it was in the last iteration, we need to go back to mocking it again, this time to return a promise of things to come:

sinon.stub(dao, "selectAll").returns(Promise.resolve(
    expectedOptions.map((option) => ({id: option.value, name: option.text}))
));

It might seem odd that we are making changes to the DAO class but we're not unit testing those changes: the test changes here are only to accommodate the other objects' changes from being synchronous to being asynchronous. Bear in mind that these are unit tests, and the DAO only exists as a thin wrapper around the Axios Client object; its purpose is to give us some of our own code that we can mock, to prevent Axios from making an actual API call when we test the objects above it. We'll do an integration test of the DAO separately, after we deal with the unit testing and implementation.

After updating those tests to deal with the promises, the tests collapse in a heap, but this is to be expected. We'll make them pass again by bubbling back the promise returned from the DAO.

For the code revisions to the other classes I'm just going to show an image of the diff between the files from my IDE. I hate using pictures of code, but this is the clearest way of showing the diffs. All the actual code is on Github @ frontend/src/workshopRegistration

Here in the repository we need to use the values from the promise, so we need to wait for them to be resolved.
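
The actual code is on GitHub; as a rough sketch of the shape of the change (my reconstruction from the description above, not a copy of the real diff), selectAll now returns the DAO's promise and only does the modelling once the records have arrived:

const Workshop = require("./Workshop");

class WorkshopsRepository {

    constructor(dao) {
        this.dao = dao;
    }

    selectAll() {
        // return the promise so callers can chain off it; map the resolved
        // records into Workshop objects when they actually turn up
        return this.dao.selectAll()
            .then((unmodelledWorkshops) => unmodelledWorkshops.map(
                (unmodelledWorkshop) => new Workshop(unmodelledWorkshop.id, unmodelledWorkshop.name)
            ));
    }
}

module.exports = WorkshopsRepository;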

Similarly with the collection, service, and component (in two places), as per below:
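
Again a sketch only (the real diffs are on GitHub; these method bodies are my reconstruction): each layer just returns the promise from the layer below it and waits on it before using the values.

// WorkshopCollection: loadAll now waits for the repository's promise before
// repopulating itself, and returns that promise so callers can chain off it
loadAll() {
    return this.repository.selectAll()
        .then((workshops) => {
            this.length = 0;
            this.push(...workshops);
        });
}

// WorkshopService: getWorkshops now resolves to the populated collection
getWorkshops() {
    return this.workshopCollection.loadAll()
        .then(() => this.workshopCollection);
}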



It's important to note that the template code in the component did not need changing at all: it was happy to wait until the promise resolved before rendering the workshops in the select box.

Having done all this, the tests pass wonderfully, but the UI breaks. Doh! But in an "OK" way:

 


Entirely fair enough: the browser is blocking the response because it doesn't have an Access-Control-Allow-Origin header on it. I need to send that header back with the response.

class WorkshopControllerTest extends TestCase
{

    private WorkshopsController $controller;
    private WorkshopCollection $collection;

    protected function setUp(): void
    {
        $this->collection = $this->createMock(WorkshopCollection::class);
        $this->controller = new WorkshopsController($this->collection);
    }

    /**
     * @testdox It makes sure the CORS header returns with the origin's address
     * @covers ::doGet
     */
    public function testDoGetSetsCorsHeader()
    {
        $testOrigin = 'TEST_ORIGIN';
        $request = new Request();
        $request->headers->set('origin', $testOrigin);

        $response = $this->controller->doGet($request);

        $this->assertSame($testOrigin, $response->headers->get('Access-Control-Allow-Origin'));
    }
}
class WorkshopsController extends AbstractController
{

    // ...

    public function doGet(Request $request) : JsonResponse
    {
        $this->workshops->loadAll();

        $origin = $request->headers->get('origin');
        return new JsonResponse(
            $this->workshops,
            Response::HTTP_OK,
            [
                'Access-Control-Allow-Origin' => $origin
            ]
        );
    }
}

That sorts it out. And… we're code complete.

But before I finish, I'm gonna do an integration test of that DAO method, just to automate proof that it does what it needs to do. I don't count "looking at the UI and going 'seems legit'" as testing.

import {expect} from "chai";
import Config from "../../src/workshopRegistration/Config";
import WorkshopsDAO from "../../src/workshopRegistration/WorkshopsDAO";
const client = require('axios').default;

describe("Integration tests of WorkshopsDAO", () => {

    it("returns the correct records from the API", async () => {
        let directCallObjects = await client.get(Config.workshopsUrl);

        let daoCallObjects = await new WorkshopsDAO(client, Config).selectAll();

        expect(daoCallObjects).to.eql(directCallObjects.data);
    });
});

And that passes.

OK we're done here. I learned a lot about JS promises in this exercise: I hid a lot of it from you here, but working out how to get those tests working for async stuff coming from the DAO took me an embarrassing amount of time and frustration. Once I nailed it, it all made sense though, and I was kinda like "duh, Cameron". So that's… erm… good…?

The next thing we need to do is sort out how to write the data to the DB. But before we do that, I'm feeling a bit naked without having any data validation on either the form or the web service. Well: the web service end of things doesn't exist yet, but I'm going to start that work by putting some expected validation in. But I'll put some on the client-side first. As with most of the stuff in this current wodge of work: I do not have the slightest idea of how to do form validation with Vue.js. But tomorrow I will start to find out (see "Vue.js and TDD: adding client-side form field validation" for the results of that). Once again, I have a beer appointment to get myself to for now.

Righto.

--
Adam

Saturday 6 March 2021

Symfony & TDD: adding endpoints to provide data for front-end workshop / registration requirements

G'day:

That's probably a fairly enigmatic title if you have not read the preceding article: "Vue.js: using TDD to develop a data-entry form". It's pretty long-winded, but the gist of it is summarised in this copy and pasted tract:

Right so the overall requirement here is (this is copy and pasted from the previous article) to construct an event registration form (personal details, a selection of workshops to register for), save the details to the DB and echo back a success page. Simple stuff. Less so for me given I'm using tooling I'm still only learning (Vue.js, Symfony, Docker, Kahlan, Mocha, MariaDB).

There's been two articles around this work so far:

There's also a much longer series of articles about me getting the Docker environment running with containers for Nginx, Node.js (and Vue.js), MariaDB and PHP 8 running Symfony 5. It starts with "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 1: Intro & Nginx" and runs for 12 articles.

In the previous article I did the UI for the workshop registration form…

… and the summary one gets after submitting one's registration:

Today we're creating the back-end endpoint to fetch the list of workshops in that multiple select, and also another endpoint to save the registration details that have been submitted (ran out of time for this bit). The database we'll be talking to is as follows:

(BTW, dbdiagram.io is a bloody handy website... I just did a mysqldump of my tables (no data), imported it into their online charting tool, and... done. Cool).

Now… as for me and Symfony… I'm really only starting out with it. The entirety of my hands-on exposure to it is documented in "Part 6: Installing Symfony" and "Part 7: Using Symfony". And the "usage" was very superficial. I'm learning as I go here.

And as always: I will be TDDing every step, using a tight cycle of: identify a case (eg: "it needs to return a 200-OK status for GET requests on the endpoint /workshops"); create tests for that case; do the implementation code just for that case.


It needs to return a 200-OK status for GET requests on the /workshops endpoint

This is a pretty simple test:

namespace adamCameron\fullStackExercise\spec\functional\Controller;

use adamCameron\fullStackExercise\Kernel;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

describe('Tests of WorkshopController', function () {

    beforeAll(function () {
        $this->request = Request::createFromGlobals();
        $this->kernel  = new Kernel('test', false);
    });

    describe('Tests of doGet', function () {
        it('needs to return a 200-OK status for GET requests', function () {

            $request = $this->request->create("/workshops/", 'GET');
            $response = $this->kernel->handle($request);

            expect($response->getStatusCode())->toBe(Response::HTTP_OK);
        });
    });
});

I make a request, I check the status code of the response. That's it. And obviously it fails:

root@58e3325d1a16:/usr/share/fullstackExercise# vendor/bin/kahlan --spec=spec/functional/Controller/workshopController.spec.php --lcov="var/tmp/lcov/coverage.info" --ff

[error] Uncaught PHP Exception Symfony\Component\HttpKernel\Exception\NotFoundHttpException: "No route found for "GET /workshops/"" at /usr/share/fullstackExercise/vendor/symfony/http-kernel/EventListener/RouterListener.php line 136

F                                                                   1 / 1 (100%)


Tests of WorkshopController
  Tests of doGet
    ✖ it needs to return a 200-OK status for GET requests
      expect->toBe() failed in `.spec/functional/Controller/workshopController.spec.php` line 22

      It expect actual to be identical to expected (===).

      actual:
        (integer) 404
      expected:
        (integer) 200

Perfect. Now let's add a route. And probably wire it up to a controller class and method I guess. Here's what I've got:

# backend/config/routes.yaml
workshops:
  path: /workshops/
  controller: adamCameron\fullStackExercise\Controller\WorkshopsController::doGet
// backend/src/Controller/WorkshopsController.php
namespace adamCameron\fullStackExercise\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\JsonResponse;

class WorkshopsController extends AbstractController
{
    public function doGet() : JsonResponse
    {
        return new JsonResponse(null);
    }
}

Initially this continued to fail with a 404, but I worked out that Symfony caches a bunch of stuff when it's not in debug mode, so it wasn't seeing the new route and/or the controller until I changed the Kernel object initialisation to switch debug on:

// from workshopsController.spec.php, as above:

$this->kernel  = new Kernel('test', true);

And then the actual code ran. Which is nice:

Passed 1 of 1 PASS in 0.108 seconds (using 8MB)

(from now on I'll just let you know if the tests pass, rather than spit out the output).

Bye Kahlan, hi again PHPUnit

I've been away from this article for an entire day, a lot of which was down to trying to get Kahlan to play nicely with Symfony, and also getting Kahlan's own functionality to not obstruct me from forward progress. However I've hit a wall with some of the bugs I've found with it, specifically "Documented way of disabling patching doesn't work #378" and "Bug(?): Double::instance doesn't seem to cater to stubbing methods with return-types #377". These render Kahlan unusable for me.

The good news is I sniffed around PHPUnit a bit more, and discovered its testdox functionality which allows me to write my test cases in good BDD fashion, and have those show up in the test results. It'll either "rehydrate" a human-readable string from the test name (testTheMethodDoesTheThing becomes "test the method does the thing"), or one can specify an actual case via the @testdox annotation on methods and the classes themselves (I'll show you below, outside this box). This means PHPUnit will achieve what I need, so I'm back to using that.

OK, backing up slightly and switching over to the PHPUnit test (backend/tests/functional/Controller/WorkshopsControllerTest.php):

/**
 * @testdox it needs to return a 200-OK status for GET requests
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturns200()
{
    $this->client->request('GET', '/workshops/');

    $this->assertEquals(Response::HTTP_OK, $this->client->getResponse()->getStatusCode());
}
> vendor/bin/phpunit --testdox 'tests/functional/Controller/WorkshopsControllerTest.php' '--filter=testDoGetReturns200'
PHPUnit 9.5.2 by Sebastian Bergmann and contributors.

Tests of WorkshopController
it needs to return a 200-OK status for GET requests

Time: 00:00.093, Memory: 12.00 MB

OK (1 test, 1 assertion)

Generating code coverage report in HTML format ... done [00:00.682]
root@5f9133aa9de3:/usr/share/fullstackExercise#

Cool.


It returns a collection of workshop objects, as JSON

The next case is:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 */
public function testDoGetReturnsJson()
{
}

This is a bit trickier, in that I actually need to write some application code now. And wire it into Symfony so it works. And also test it. Via Symfony's wiring. Eek.

Here's a sequence of thoughts:

  • We are getting a collection of Workshop objects from [somewhere] and returning them in JSON format.
  • IE: that would be a WorkshopCollection.
  • The values for the workshops are stored in the DB.
  • The WorkshopCollection will need a way of getting the data into itself. Calling some method like loadAll
  • That will need to be called by the controller, so the controller will need to receive a WorkshopCollection via Symfony's DI implementation.
  • A model class like WorkshopCollection should not be busying itself with the vagaries of storage. It should hand that off to a repository class (see "The Repository Pattern"), which will handle the fetching of DB data and translating it from a recordset to an array of Workshop objects.
  • As WorkshopsRepository will contain testable data-translation logic, it will need unit tests. However we don't want to have to hit the DB in our tests, so we will need to abstract the part of the code that gets the data into something we can mock away.
  • As we're using Doctrine/DBAL to connect to the database, and I'm a believer in "don't mock what you don't own", we will put a thin (-ish) wrapper around that as WorkshopsDAO. This is not completely "thin" because it will "know" the SQL statements to send to its connector to get the data, and will also "know" the DBAL statements to get the data out and pass back to WorkshopsRepository for modelling.

That seems like a chunk to digest, and I don't want you to think I have written any of this code yet, but this is the sequence of thoughts that leads me to arrive at the strategy for handling the next case. I think from the first bits of that bulleted list I can derive sort of how the test will need to work. The controller doesn't care how this WorkshopCollection gets its data, but it needs to be able to tell it to do it. We'll mock that bit out for now, just so we can focus on the controller code. We will work our way back from the mock in another test. For now we have backend/tests/functional/Controller/WorkshopsControllerTest.php

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 */
public function testDoGetReturnsJson()
{
    $workshops = [
        new Workshop(1, 'Workshop 1'),
        new Workshop(2, 'Workshop 2')
    ];

    $this->client->request('GET', '/workshops/');

    $resultJson = $this->client->getResponse()->getContent();
    $result = json_decode($resultJson, false);

    $this->assertCount(count($workshops), $result);
    array_walk($result, function ($workshopValues, $i) use ($workshops) {
        $workshop = new Workshop($workshopValues->id, $workshopValues->name);
        $this->assertEquals($workshops[$i], $workshop);
    });
}

To make this pass we need just enough code for it to work:

class WorkshopsController extends AbstractController
{

    private WorkshopCollection $workshops;

    public function __construct(WorkshopCollection $workshops)
    {
        $this->workshops = $workshops;
    }

    public function doGet() : JsonResponse
    {
        $this->workshops->loadAll();

        return new JsonResponse($this->workshops);
    }
}
class WorkshopCollection implements \JsonSerializable
{
    /** @var Workshop[] */
    private $workshops;

    public function loadAll()
    {
        $this->workshops = [
            new Workshop(1, 'Workshop 1'),
            new Workshop(2, 'Workshop 2')
        ];
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

And thanks to Symfony's dependency-injection service container's autowiring, all that just works, just like that. That's the test for that end point done.

Now there was all that bumpf I mentioned about repositories and DAOs and connectors and stuff. As part of the refactoring step, we are going to push our implementation right back to the DAO. This allows us to complete the parts of the code in the WorkshopCollection and WorkshopsRepository, and just mock-out the DAO for now.

class WorkshopCollection implements \JsonSerializable
{
    private WorkshopsRepository $repository;

    /** @var Workshop[] */
    private $workshops;

    public function setRepository(WorkshopsRepository $repository) : void
    {
        $this->repository = $repository;
    }

    public function loadAll()
    {
        $this->workshops = $this->repository->selectAll();
    }

    public function jsonSerialize()
    {
        return $this->workshops;
    }
}

My thinking here is:

  • It's going to need a WorkshopsRepository to get stuff from the DB.
  • It doesn't seem right to me to pass in a dependency to a model as a constructor argument. The model should work without needing a DB connection; just the methods around storage interaction should require the repository. On the other hand the only thing the collection does now is to be able to load the stuff from the DB and serialise it, so I'm kinda coding for the future here, and I don't like that. But I'm sticking with it for reasons we'll come to below.
  • I also really hate model classes with getters and setters. This is usually a sign of bad OOP. But here I have a setter, to get the repository in there.

The reason (it's not a reason, it's an excuse) I'm not passing in the repo as a constructor argument and instead using a setter is because I wanted to check out how Symfony's service config dealt with the configuration of this. If yer classes all have type-checked constructor args, Symfony just does it all automatically with no code at all (just a config switch). However to handle using the setRepository method I needed a factory method to do so. The config for it is thus (in backend/config/services.yaml):

adamCameron\fullStackExercise\Factory\WorkshopCollectionFactory: ~
adamCameron\fullStackExercise\Model\WorkshopCollection:
    factory: ['@adamCameron\fullStackExercise\Factory\WorkshopCollectionFactory', 'getWorkshopCollection']

Simple! And the code for WorkshopCollectionFactory:

class WorkshopCollectionFactory
{
    private WorkshopsRepository $repository;

    public function __construct(WorkshopsRepository $repository)
    {
        $this->repository = $repository;
    }

    public function getWorkshopCollection() : WorkshopCollection
    {
        $collection = new WorkshopCollection();
        $collection->setRepository($this->repository);

        return $collection;
    }
}

Also very simple. But, yeah, it's an exercise in messing about, and there's no way I should have done this. I should have just used a constructor argument. Anyway, moving on.

The WorkshopsRepository is very simple too:

class WorkshopsRepository
{
    private WorkshopsDAO $dao;

    public function __construct(WorkshopsDAO $dao)
    {
        $this->dao = $dao;
    }

    /** @return Workshop[] */
    public function selectAll() : array
    {
        $records = $this->dao->selectAll();
        return array_map(
            function ($record) {
                return new Workshop($record['id'], $record['name']);
            },
            $records
        );
    }
}

I get some records from the DAO, and map them across to Workshop objects. Oh! Workshop:

class Workshop implements \JsonSerializable
{
    private int $id;
    private string $name;

    public function __construct(int $id, string $name)
    {
        $this->id = $id;
        $this->name = $name;
    }

    public function jsonSerialize()
    {
        return (object) [
            'id' => $this->id,
            'name' => $this->name
        ];
    }
}

And lastly I mock WorkshopsDAO. I can't implement any further down the stack of this process because the DAO is what uses the DBAL Connector object, and I don't own that, so if I actually started to use it, I'd be hitting the DB. Or hitting the ether and getting an error. Either way: no good for our test. So a mocked DAO:

class WorkshopsDAO
{
    public function selectAll() : array
    {
        return [
            ['id' => 1, 'name' => 'Workshop 1'],
            ['id' => 2, 'name' => 'Workshop 2']
        ];
    }
}

Having done all that refactoring, we check if our test is still good, and it is. I can verify this is not a trick of the light by changing some of that data in the DAO and watching the test break (which it does). I can also now go back to the test and stick some more code-coverage annotations in:

/**
 * @testdox it returns a collection of workshop objects, as JSON
 * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
 * @covers \adamCameron\fullStackExercise\Factory\WorkshopCollectionFactory
 * @covers \adamCameron\fullStackExercise\Repository\WorkshopsRepository
 * @covers \adamCameron\fullStackExercise\Model\WorkshopCollection
 * @covers \adamCameron\fullStackExercise\Model\Workshop
 */

And see that all the code is indeed covered:


It returns the expected workshops from the database

But now we need to implement the real DAO. Once we do that, our test will break because the DAO will suddenly start hitting the DB, and we'll be getting back whatever is in the DB, not our expected canned response. Plus we don't want this test to hit the DB anyhow. So first we're gonna mock-out the DAO using PHPUnit's mocks instead of our code-mock. Doing this turned out to be a bit tricky initially, given Symfony's DI container is looking after all the dependencies for us, but fortunately when in test mode, Symfony allows us to hack into that container. I've updated my test, thus:

public function testDoGetReturnsJson()
{
    $workshopDbValues = [
        ['id' => 1, 'name' => 'Workshop 1'],
        ['id' => 2, 'name' => 'Workshop 2']
    ];

    $this->mockWorkshopDaoInServiceContainer($workshopDbValues);

    // ... unchanged ...

    array_walk($result, function ($workshopValues, $i) use ($workshopDbValues) {
        $this->assertEquals($workshopDbValues[$i], $workshopValues);
    });
}

private function mockWorkshopDaoInServiceContainer($returnValue = []): void
{
    $mockedDao = $this->createMock(WorkshopsDAO::class);
    $mockedDao->method('selectAll')->willReturn($returnValue);

    $container = $this->client->getContainer();
    $workshopRepository = $container->get('test.WorkshopsRepository');

    $reflection = new \ReflectionClass($workshopRepository);
    $property = $reflection->getProperty('dao');
    $property->setAccessible(true);
    $property->setValue($workshopRepository, $mockedDao);
}

We are popping a mocked WorkshopsDAO into the WorkshopsRepository object in the container, so when the repo calls it, it'll be calling the mock.

Oh! To be able to access that 'test.WorkshopsRepository' container key, we need to expose it via the services_test.yaml container config:

services:
  test.WorkshopsRepository:
    alias: adamCameron\fullStackExercise\Repository\WorkshopsRepository
    public: true

And running that, the test works, and is ignoring the really-real DAO.

To test the final DAO implementation, we're gonna do an end-to-end test:

class WorkshopControllerTest extends WebTestCase
{
    private KernelBrowser $client;

    public static function setUpBeforeClass(): void
    {
        $dotenv = new Dotenv();
        $dotenv->load(dirname(__DIR__, 3) . "/.env.test");
    }

    protected function setUp(): void
    {
        $this->client = static::createClient(['debug' => false]);
    }

    /**
     * @testdox it returns the expected workshops from the database
     * @covers \adamCameron\fullStackExercise\Controller\WorkshopsController
     * @covers \adamCameron\fullStackExercise\Factory\WorkshopCollectionFactory
     * @covers \adamCameron\fullStackExercise\Model\WorkshopCollection
     * @covers \adamCameron\fullStackExercise\Repository\WorkshopsRepository
     * @covers \adamCameron\fullStackExercise\Model\Workshop
     */
    public function testDoGet()
    {
        $this->client->request('GET', '/workshops/');
        $response = $this->client->getResponse();
        $workshops = json_decode($response->getContent(), false);

        /** @var Connection */
        $connection = static::$container->get('database_connection');
        $expectedRecords = $connection->query("SELECT id, name FROM workshops ORDER BY id ASC")->fetchAll();

        $this->assertCount(count($expectedRecords), $workshops);
        array_walk($expectedRecords, function ($record, $i) use ($workshops) {
            $this->assertEquals($record['id'], $workshops[$i]->id);
            $this->assertSame($record['name'], $workshops[$i]->name);
        });
    }
}

The test is pretty familiar, except it's actually getting its expected data from the database, and making sure the whole process, end to end, is doing what we want. Currently when we run this it fails because we still have our mocked DAO in place (the real mock, not the… mocked mock. Um. You know what I mean: the actual DAO class that just returns hard-coded data). Now we put the proper DAO code in:

class WorkshopsDAO
{
    private Connection $connection;

    public function __construct(Connection $connection)
    {
        $this->connection = $connection;
    }

    public function selectAll() : array
    {
        $sql = "
            SELECT
                id, name
            FROM
                workshops
            ORDER BY
                id ASC
        ";
        $statement = $this->connection->executeQuery($sql);

        return $statement->fetchAllAssociative();
    }
}

And now if we run the tests:

> vendor/bin/phpunit --testdox
PHPUnit 9.5.2 by Sebastian Bergmann and contributors.

Tests of WorkshopController
it needs to return a 200-OK status for GET requests
it returns a collection of workshop objects, as JSON

Tests of baseline Symfony install
it displays the Symfony welcome screen
it returns a personalised greeting from the /greetings end point

PHP config tests
gdayWorld.php outputs G'day world!

Webserver config tests
It serves gdayWorld.html with expected content

End to end tests of WorkshopController
it returns the expected workshops from the database

Tests that code coverage analysis is operational
It reports code coverage of a simple method correctly

Time: 00:00.553, Memory: 22.00 MB

OK (8 tests, 26 assertions)

Generating code coverage report in HTML format ... done [00:00.551]
root@5f9133aa9de3:/usr/share/fullstackExercise#

Nice one!

And if we look at code coverage:

I was gonna try to cover the requirements for the process to save the form fields in this article too, but it took ages to work out how some of the Symfony stuff worked, plus I sunk about a day into trying to get Kahlan to work, and this article is already super long anyhow. I now have the back-end processing sorted out, so I can update the front-end form to actually use the values from the DB instead of test values. I might look at that tomorrow (see "TDDing the reading of data from a web service to populate elements of a Vue JS component" for that exercise). I need a rest from Symfony.


It needs to drink some beer

I'm rushing the outro of this article because I am supposed to be on a webcam with a beer in my hand in 33min, and I need to proofread this still…

Righto.

--
Adam