Showing posts with label Unit Testing. Show all posts

Wednesday 24 February 2021

Docker: using TDD to initialise my app's DB with some data

G'day:

I'll start by saying I am not convinced this exercise really ought to be a TDD-oriented one, but I'm gonna approach it that way anyhow because I suspect I'm going to need to mess around a bit to get this working. Secondly, this is very much a log of what I'm working on (or trying to ~) today, and I doubt there will be any shrewd insights going on, given I'm basically googling and RTFMing, then doing what the docs say.

The exercise here is to take the MariaDB database I already have in my Docker set up (see "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 5: MariaDB"), which is currently empty and only accessible via the root login, and add some baseline tables and data to it. At the same time I'll create a user for the code to connect to the DB with, so I don't need code using root access. Another thing I want to do is stop storing the DB passwords in the Docker .env file like I am now:

COMPOSE_PROJECT_NAME=fullStackExercise
DATABASE_ROOT_PASSWORD=123

The data I need is to fulfil an exercise I have given myself (well: it wasn't me who gave me the original exercise, but I'm reinventing it a bit here) to construct an event registration form (personal details, a selection of workshops to register for), save the details to the DB and echo back a success page. Simple stuff. Less so for me given I'm using tooling I'm still only learning (Vue.js, Symfony, Docker, Kahlan, Mocha, MariaDB).

For my tests, I can already derive a bunch of test specs from those first few paragraphs above, so let's put them together now in spec/integration/baselineDatabase.spec.php:

<?php

namespace adamCameron\fullStackExercise\spec\integration;

use PDO;

describe('Tests for registration database', function () {
    describe('Connectivity tests', function () {
        it('can connect to the database with environment-based credentials', function () {
        });
    });

    describe('Schema tests', function () {
        it('has a workshops table with the required schema', function () {
        });

        it('has a registrations table with the required schema', function () {
        });

        it('has a registeredWorkshops table with the required schema', function () {
        });
    });

    describe('Data tests', function () {
        it('has the required baseline workshop data', function () {
        });
    });
});

And I can now run those to see them… not be implemented:

root@fde4be76c908:/usr/share/fullstackExercise# composer spec -- --spec=spec/integration/baselineDatabase.spec.php --reporter=verbose
> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✓ it can connect to the database with environment-based credentials
    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema
    Data tests
      ✓ it has the required baseline workshop data


  Pending specifications: 5
  .spec/integration/baselineDatabase.spec.php, line 8
  .spec/integration/baselineDatabase.spec.php, line 13
  .spec/integration/baselineDatabase.spec.php, line 16
  .spec/integration/baselineDatabase.spec.php, line 19
  .spec/integration/baselineDatabase.spec.php, line 24


Expectations   : 0 Executed
Specifications : 5 Pending, 0 Excluded, 0 Skipped

Passed 0 of 0 PASS in 0.015 seconds (using 4MB)

root@fde4be76c908:/usr/share/fullstackExercise#

And now I can implement that first test:

describe('Tests for registration database', function () {

    $this->getConnectionDetailsFromEnvironment = function () {
        return (object) [
            'database' => $_ENV['MYSQL_DATABASE'],
            'user' => $_ENV['MYSQL_USER'],
            'password' => $_ENV['MYSQL_PASSWORD']
        ];
    };

    describe('Connectivity tests', function () {
        it('can connect to the database with environment-based credentials', function () {
            $connectionDetails = $this->getConnectionDetailsFromEnvironment();
            $connection = new PDO(
                "mysql:dbname=$connectionDetails->database;host=database.backend",
                $connectionDetails->user,
                $connectionDetails->password
            );
            $statement = $connection->query("SELECT 'OK' AS test FROM dual");
            $statement->execute();

            $testResult = $statement->fetch(PDO::FETCH_ASSOC);

            expect($testResult)->toContainKey('test');
            expect($testResult['test'])->toBe('OK');
        });
    });

There's not much to this. I'm reading the DB connectivity details from the environment variables Docker has set for me, and using those to run a simple query against the database, just to verify the DB is responding as expected. To be honest I don't think I need / ought to be using the environment variable for the database name here: that environment variable is just there for MariaDB to create a DB of that name when it first starts up. In the app itself, we'll have a static value for the database name, because the app wants to use that exact database, not simply whatever DB is named in that environment variable. Hopefully you see the subtle difference in intent there. Anyhow, we now run our tests:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✖ it can connect to the database with environment-based credentials
        an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 11

        message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` Undefined array key \"MYSQL_DATABASE\""

          [NA] - spec/integration/baselineDatabase.spec.php, line 7 to 11
          […etc…]

Cool. Now we can sort those credentials out and watch that test pass. The first thing I have done is to update docker/.env (see above) to get rid of the root password, and add the other credentials MariaDB expects to initialise a database when its container is first built (see the "Environment Variables" section in mariadb - Docker Official Images for info about that):

COMPOSE_PROJECT_NAME=fullStackExercise
MYSQL_DATABASE=fullstackExercise
# the following are to be provided to `docker-compose up`
DATABASE_ROOT_PASSWORD=
MYSQL_USER=
MYSQL_PASSWORD=

Those empty entries are not necessary, I've just left them there for the sake of documentation. The bit I do actually need to do is in docker/docker-compose.yml. This is best shown with a diff I think:

$ git diff docker/docker-compose.yml
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index bc399a0..0eec553 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -24,7 +24,9 @@ services:
       context: ../backend
       dockerfile: ../docker/php-fpm/Dockerfile
     environment:
-      - DATABASE_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
+      - MYSQL_DATABASE=${MYSQL_DATABASE}
+      - MYSQL_USER=${MYSQL_USER}
+      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
     volumes:
       - ../backend/config:/usr/share/fullstackExercise/config
       - ../backend/public:/usr/share/fullstackExercise/public
@@ -52,6 +54,9 @@ services:
       context: ./mariadb
     environment:
       - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
+      - MYSQL_DATABASE=${MYSQL_DATABASE}
+      - MYSQL_USER=${MYSQL_USER}
+      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
     ports:
       - "3306:3306"
     volumes:

I've taken out PHP's access to the DB root password as it doesn't need it any more. There are two tests that will now fail, but they were only ever temporary ones until I did this work anyhow, so I'll be deleting them once I verify they now fail. And I've also added the three new environment variables to both the MariaDB service and the PHP one. MariaDB uses them to create the fullstackExercise DB, and PHP will use the same credentials to connect to it. I now have no DB credentials anywhere in the codebase. Instead, I pass them in when I first bring the containers up:

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise/docker$ DATABASE_ROOT_PASSWORD=123 MYSQL_USER=fullstackExercise MYSQL_PASSWORD=1234 docker-compose up --build --detach

This is not completely secure. One can still see the passwords if one terminals into the containers, eg:

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise/docker$ docker exec --interactive --tty fullstackexercise_php-fpm_1 /bin/bash
root@ac3872091c8e:/usr/share/fullstackExercise# set | grep MYSQL
MYSQL_DATABASE=fullstackExercise
MYSQL_PASSWORD=1234
MYSQL_USER=fullstackExercise
root@ac3872091c8e:/usr/share/fullstackExercise#

A better way would perhaps be to use Docker Secrets, but I could not work out how to get the values from the files it creates into environment variables in the docker-compose.yml file. I'll also admit I pretty much read the docs and went "yeah, CBA with that right now". It might be dead easy (UPDATE: just now when linking to the MariaDB docker image page a coupla paragraphs up, I noticed it's all actually explained there, and it is dead easy. I might look at doing this later then).
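For the record, the approach the image docs describe is to append _FILE to the variable name and point it at a file in the container, which dovetails with Compose's secrets support. A rough sketch of what that might look like (the secret names and file paths here are my own illustration, not my actual config):

services:
  mariadb:
    build:
      context: ./mariadb
    environment:
      # the image reads the password from this file instead of from the variable itself
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_root_password
    secrets:
      - db_root_password

secrets:
  db_root_password:
    file: ./secrets/db_root_password.txt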

Now I will run my tests again. My expectations are that that test that failed before will now be passing; and one each of the Kahlan and PHPUnit tests will start to fail because they are testing connecting to the DB using the root credentials, which I've removed.

root@ac3872091c8e:/usr/share/fullstackExercise# composer spec
> vendor/bin/kahlan

..........E.PPPP.                                                 17 / 17 (100%)


  Pending specifications: 4
  .spec/integration/baselineDatabase.spec.php, line 37
  .spec/integration/baselineDatabase.spec.php, line 40
  .spec/integration/baselineDatabase.spec.php, line 43
  .spec/integration/baselineDatabase.spec.php, line 48

Tests database availability
  ✖ it should return the expected database version
    an uncaught exception has been thrown in `spec/integration/database.spec.php` line 14

    message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` Undefined array key \"DATABASE_ROOT_PASSWORD\""

      [NA] - spec/integration/database.spec.php, line 11 to 14
      Kahlan\Filter\Filters::run() - vendor/kahlan/kahlan/src/Suite.php, line 236
      […etc…]


Expectations   : 18 Executed
Specifications : 4 Pending, 0 Excluded, 0 Skipped

Passed 12 of 13 FAIL (EXCEPTION: 1) in 0.491 seconds (using 6MB)

Script vendor/bin/kahlan handling the spec event returned with error code 255

This is good: only one failing test - the one we expect to fail - and it's failing for the right reason. And with PHPUnit:

PHPUnit 9.5.2 by Sebastian Bergmann and contributors.

.....E                                                              6 / 6 (100%)

Time: 00:00.268, Memory: 14.00 MB

There was 1 error:

1) adamCameron\fullStackExercise\tests\integration\DatabaseTest::testDatabaseVersion
Undefined array key "DATABASE_ROOT_PASSWORD"

/usr/share/fullstackExercise/tests/integration/DatabaseTest.php:16

ERRORS!
Tests: 6, Assertions: 13, Errors: 1.

Generating code coverage report in HTML format ... done [00:00.374]
Script vendor/bin/phpunit handling the test event returned with error code 2

I'll get rid of those failing tests. They are redundant now.

The next test cases we have to address are these ones:

    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema

Looking at the docs for MariaDB's Docker image ("Docker Official Images > mariadb > Initializing a fresh instance"), when the DB starts up, it looks for files in a docker-entrypoint-initdb.d directory, and runs any scripts it finds in there. This makes things easy.

However let's not get ahead of ourselves. We need tests first. But first… a bit of an aside. I'm actually questioning the merits of these tests. They are handy while I'm doing the initial DB setup though. Later, as the application develops, we'll have more finely-tuned integration tests that will implicitly test that the table schemata are correct; but I guess at the moment all we need to have is the schema (then some baseline data), so as transient tests I suppose they have some merit. I'm not sure. On one hand it might be overkill; on the other hand we're supposed to be developing the application iteratively, and these are a first iteration. I guess the situation is similar to the DB tests I had that were using the root connectivity details: for that iteration, that's where we were at. Now we've moved on, so those tests are redundant, and these new tests replace them. And these tests will likely be replaced in the next coupla iterations as we go. Anyhow: I'm writing them. Here we go.

describe('Schema tests', function () {
    $schemata = [
        [
            'tableName' => 'workshops',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'name', 'Type' => 'varchar(500)']
            ]
        ],
        [
            'tableName' => 'registrations',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'fullName', 'Type' => 'varchar(100)'],
                ['Field' => 'phoneNumber', 'Type' => 'varchar(50)'],
                ['Field' => 'emailAddress', 'Type' => 'varchar(320)'],
                ['Field' => 'password', 'Type' => 'varchar(255)'],
                ['Field' => 'ipAddress', 'Type' => 'varchar(15)'],
                ['Field' => 'uniqueCode', 'Type' => 'varchar(36)'],
                ['Field' => 'created', 'Type' => 'timestamp']
            ]
        ],
        [
            'tableName' => 'registeredWorkshops',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'registrationId', 'Type' => 'int(11)'],
                ['Field' => 'workshopId', 'Type' => 'int(11)']
            ]
        ]
    ];

    array_walk($schemata, function ($tableSchema) {
        $tableName = $tableSchema['tableName'];
        $expectedSchema = $tableSchema['schema'];

        it("has a $tableName table with the required schema", function () use ($tableName, $expectedSchema) {
            $statement = $this->connection->query("SHOW COLUMNS FROM $tableName");
            $statement->execute();

            $columns = $statement->fetchAll(PDO::FETCH_ASSOC);

            expect($columns)->toHaveLength(count($expectedSchema));
            foreach ($expectedSchema as $i => $column) {
                expect($columns[$i]['Field'])->toBe($column['Field']);
                expect($columns[$i]['Type'])->toBe($column['Type']);
            }
        });
    });
});

There was an intermediary refactoring here: initially I had three "hard-coded" cases, as listed further up. As I wrote the test for the second case I noticed I was duplicating everything from the first test except the table name and the details of the schema, so I extracted those as test data and looped over them. All the test does is get the table's column description, verify the column count is right, and check each column matches the name and type I expect. The expectations were taken directly from the requirement I had been given to implement.
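For context, SHOW COLUMNS returns one row per column, and it's the Field and Type values from those rows that the test asserts against. Once the workshops table exists, the output will be along these lines (an illustration, not a capture from my actual DB):

MariaDB [fullstackExercise]> SHOW COLUMNS FROM workshops;
+-------+--------------+------+-----+---------+----------------+
| Field | Type         | Null | Key | Default | Extra          |
+-------+--------------+------+-----+---------+----------------+
| id    | int(11)      | NO   | PRI | NULL    | auto_increment |
| name  | varchar(500) | NO   |     | NULL    |                |
+-------+--------------+------+-----+---------+----------------+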

If I now run the tests, those three cases fail, as we'd expect given the tables don't yet exist:

Tests for registration database
  Schema tests
    ✖ it has a workshops table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.workshops' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]

    ✖ it has a registrations table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.registrations' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]

    ✖ it has a registeredWorkshops table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.registeredWorkshops' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]
[…etc…]

Now to add the tables. I've set up these files:

adam@DESKTOP-QV1A45U:/mnt/c/src/ttct$ tree docker/mariadb/do*
docker/mariadb/docker-entrypoint-initdb.d
├── 1.createAndPopulateWorkshops.sql
├── 2.createRegistrations.sql
└── 3.createRegisteredWorkshops.sql

Note: for now that first file is slightly misnamed, as it'll only have the DDL statement in it; the data-insertion will come in a subsequent step. The file contents are as follows:

/* docker/mariadb/docker-entrypoint-initdb.d/1.createAndPopulateWorkshops.sql */

USE fullstackExercise;

CREATE TABLE workshops (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    
    PRIMARY KEY (id)
) ENGINE=InnoDB;


/* docker/mariadb/docker-entrypoint-initdb.d/2.createRegistrations.sql */

USE fullstackExercise;

CREATE TABLE registrations (
   id INT NOT NULL AUTO_INCREMENT,
   fullName VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   phoneNumber VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   emailAddress VARCHAR(320) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   password VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   ipAddress VARCHAR(15) NOT NULL,
   uniqueCode VARCHAR(36) NOT NULL DEFAULT (UUID()),
   created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
   
   PRIMARY KEY (id)
) ENGINE=InnoDB;


/* docker/mariadb/docker-entrypoint-initdb.d/3.createRegisteredWorkshops.sql */

USE fullstackExercise;

CREATE TABLE registeredWorkshops (
   id INT NOT NULL AUTO_INCREMENT,
   registrationId INT NOT NULL,
   workshopId INT NOT NULL,
   PRIMARY KEY (id),
   FOREIGN KEY (registrationId) REFERENCES registrations(id),
   FOREIGN KEY (workshopId) REFERENCES workshops(id)
);

And lastly I need to copy that directory into my MariaDB container when I build it (docker/mariadb/Dockerfile):

FROM mariadb:latest
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
CMD ["mysqld"]
EXPOSE 3306

After I rebuild my containers, I run the tests and we're all good:

    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema

Finally I need some seed data in the workshops table. First I'm going to write my test cases for this:

describe('Data tests', function () {
    it('has the required baseline workshop data', function () {
        $expectedWorkshops = [
            ['id' => '2', 'name' => 'TEST_WORKSHOP 1'],
            ['id' => '3', 'name' => 'TEST_WORKSHOP 2'],
            ['id' => '5', 'name' => 'TEST_WORKSHOP 3'],
            ['id' => '7', 'name' => 'TEST_WORKSHOP 4']
        ];

        $statement = $this->connection->query("SELECT id, name FROM workshops ORDER BY id");
        $statement->execute();
        $workshops = $statement->fetchAll(PDO::FETCH_ASSOC);

        expect($workshops)->toEqual($expectedWorkshops);
    });

    it('correctly auto-increments the ID on new insertions', function () {
        $expectedWorkshopName = 'TEST_WORKSHOP 5';

        $this->connection->beginTransaction();

        $statement = $this->connection->prepare("INSERT INTO workshops (name) VALUES (:name)");
        $statement->execute(['name' => $expectedWorkshopName]);
        $id = $this->connection->lastInsertId();

        $statement = $this->connection->prepare("SELECT id, name FROM workshops WHERE id = :id");
        $statement->execute(['id' => $id]);
        $workshops = $statement->fetchAll(PDO::FETCH_ASSOC);

        expect($workshops)->toHaveLength(1);
        expect($workshops[0])->toContainKey('name');
        expect($workshops[0]['name'])->toBe($expectedWorkshopName);

        $this->connection->rollback();
    });
});

Those are reasonably self-explanatory. I need to insert four baseline workshop records, and in the first case I just SELECT the data and check it's what I expect it to be. The second case only occurred to me when I went to look at the changes I needed to make to the SQL in 1.createAndPopulateWorkshops.sql to insert that data. I needed to take the auto-increment off the table-create statement so I could insert records with the specific IDs I need, then after doing that I altered the table to have the ID auto-increment. I figured I had better test that that worked too. So I insert a new record (just the name, letting the DB handle the ID), get the ID back, and use that to fetch the whole record for that ID, verifying it's also got the correct name. I do not want that data cluttering my DB, so I put the whole thing in a transaction that rolls back when I'm done (and if an expectation fails before the rollback runs, the uncommitted transaction is discarded when the connection closes anyhow).

Running those, only the first one fails:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php'

F.                                                                  2 / 2 (100%)


Tests for registration database
  Data tests
    ✖ it has the required baseline workshop data
      expect->toEqual() failed in `.spec/integration/baselineDatabase.spec.php` line 102

      It expect actual to be equal to expected (==).

      actual:
        (array) []
      expected:
        (array) [
            0 => [
                "id" => "2",
                "name" => "TEST_WORKSHOP 1"
            ],
            […etc…]


Expectations   : 4 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 1 of 2 FAIL (FAILURE: 1) in 0.025 seconds (using 4MB)

Focus Mode Detected in the following files:
fdescribe - spec/integration/baselineDatabase.spec.php, line 89 to 124
exit(-1)

Script vendor/bin/kahlan handling the spec event returned with error code 255
root@e7d6aa6cf839:/usr/share/fullstackExercise#

This puzzled me at first, but then it occurred to me that the auto-increment test case really ought to have been added when I did the first round of tests, before creating the table, because that is when that functionality was added. All I'm doing with the changes I'm about to make is to insert some data-insertion code into the script. The table is already auto-incrementing the ID; all I'm changing is when that's applied: it moves from the table-creation statement to its own statement after the inserts are done. See below for what I mean.

And now I'll update that docker/mariadb/docker-entrypoint-initdb.d/1.createAndPopulateWorkshops.sql to also insert the baseline data:

USE fullstackExercise;

CREATE TABLE workshops (
    id INT NOT NULL /* AUTO_INCREMENT <- this has been removed from here */,
    name VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    
    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO workshops (id, name)
VALUES
    (2, 'TEST_WORKSHOP 1'),
    (3, 'TEST_WORKSHOP 2'),
    (5, 'TEST_WORKSHOP 3'),
    (7, 'TEST_WORKSHOP 4')
;

ALTER TABLE workshops MODIFY COLUMN id INT auto_increment;

(Note I've removed the auto-increment from the ID field in the table-creation statement, so the seed data can have specific IDs. Once the inserts are done, I alter the column to be auto-increment.)

Once I rebuild my containers, all the tests now pass:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✓ it can connect to the database with environment-based credentials
    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema
    Data tests
      ✓ it has the required baseline workshop data
      ✓ it correctly auto-increments the ID on new insertions



Expectations   : 35 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 6 of 6 PASS in 0.033 seconds (using 5MB)

root@44850303b17a:/usr/share/fullstackExercise#

And I think that's about it. I'm not doing anything with the data yet, but that'll start to be fleshed out in the next article (or maybe the following one. Not sure). This was just an exercise in doing some stuff with Docker and MariaDB, and thinking about the merits of TDDing exercises like this. I think it was worth it, especially during this early phase of working with these containers as I'm still reconfiguring stuff a lot, so it's good to know things don't get messed up when I'm monkeying with stuff.

Righto.

--
Adam

Friday 19 February 2021

Part 12: unit testing Vue.js components

G'day:

OKOK, another article in that bloody series I've been doing. Same caveats as usual: go have a breeze through the earlier articles if you feel so inclined. That said, this is reasonably stand-alone.

  1. Intro / Nginx
  2. PHP
  3. PHPUnit
  4. Tweaks I made to my Bash environment in my Docker containers
  5. MariaDB
  6. Installing Symfony
  7. Using Symfony
  8. Testing a simple web page built with Vue.js using Mocha, Chai and Puppeteer
  9. I mess up how I configure my Docker containers
  10. An article about moving files and changing configuration
  11. Setting up a Vue.js project and integrating some existing code into it
  12. Unit testing Vue.js components (this article)

This is really very frustrating. You might recall I ended my previous article with this:

Now I just need to work out how to implement a test of just the GreetingMessage.vue component discretely, as opposed to the way I'm doing it now: curling a page it's within and checking the page's content. […]

Excuse me whilst I do some reading.

[Adam runs npm install of many combinations of NPM libs]

[Adam downgrades his version of Vue.js and does the npm install crap some more]

OK screw that. Seems to me - on initial examination - that getting all the libs together to make stand-alone components testable is going to take me longer to work out than I have patience for. I'll do it later. […] Sigh.

I have returned to this situation, and have got everything working fine. Along the way I had an issue with webpack that I eventually worked around, but once I circled back to replicate the work and write this article, I could no longer replicate the issue - even after rolling back to the previous version of the application code and repeating my steps one by one to get to where the problem had been. This is very frustrating. However other people have had similar issues in the past, so I'm going to include the steps to solve the problem here, even if I have to fake the issue to get error messages, etc.

Right so I'm back with the Vue.js docs regarding testing: "Vue.JS > Testing > Component Testing". The docs are incredibly content-lite, and pretty much just fob you off onto other people to work out what to do. Not that cool, and kinda suggests Vue.js considers the notion of unit testing pretty superficially. I did glean I needed to install a coupla Node modules.

First, @testing-library/vue, for which I ran into a glitch immediately:

root@00e2ea0a3109:/usr/share/fullstackExercise# npm install --save-dev @testing-library/vue
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: full-stack-exercise@2.13.0
npm ERR! Found: vue@3.0.5
npm ERR! node_modules/vue
npm ERR!   vue@"^3.0.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer vue@"^2.6.10" from @testing-library/vue@5.6.1
npm ERR! node_modules/@testing-library/vue
npm ERR!   dev @testing-library/vue@"*" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /var/cache/npm/eresolve-report.txt for a full report.

npm ERR! A complete log of this run can be found in:
npm ERR!     /var/cache/npm/_logs/2021-02-19T11_29_15_901Z-debug.log
root@00e2ea0a3109:/usr/share/fullstackExercise#

The current version of @testing-library/vue doesn't work with the current version of Vue.js. Neato. Nice one Vue team. After some googling of "wtf?", I landed on an issue someone else had raised already: "Support for Vue 3 #176": I need to use the next branch (npm install --save-dev @testing-library/vue@next). This worked OK.

The other module I needed was @vue/cli-plugin-unit-mocha. That installed with no problem. This all gives me the ability to run vue-cli-service test:unit, which will call up Mocha and run some tests. The guidance is to set this up in package.json, thus:

  "scripts": {
    "test": "mocha test/**/*Test.js",
    "serve": "vue-cli-service serve --watch",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint",
    "test:unit": "vue-cli-service test:unit test/unit/**/*.spec.js"
  },

Then one can just run it as npm run test:unit.

I looked at the docs for how to test a component ("Unit Testing Vue Components" - for Vue v2, but there's no equivalent page in the v3 docs), and knocked together an initial test which would just load the component and do nothing else: just to check everything was running (frontend/test/unit/GreetingMessage.spec.js (final version) on Github):

import GreetingMessage from "../../src/gdayWorld/components/GreetingMessage";

import { shallowMount } from "@vue/test-utils";

import {expect} from "chai";

describe("Tests of GreetingMessage component", () => {
    it("should successfully load the component", () => {
        let greetingMessage = shallowMount(GreetingMessage, {propsData: {message: "G'day world"}});

        expect(true).to.be.true;
    });
});

Here's where I got to the problem I now can't replicate. When I ran this, I got something like:

root@eba0490b453d:/usr/share/fullstackExercise# npm run test:unit

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

 WEBPACK  Compiling...

  [=========================] 98% (after emitting)

 ERROR  Failed to compile with 1 error

error  in ./node_modules/mocha/lib/cli/cli.js

Module parse failed: Unexpected character '#' (1:0)
File was processed with these loaders:
* ./node_modules/cache-loader/dist/cjs.js
* ./node_modules/babel-loader/lib/index.js
* ./node_modules/eslint-loader/index.js
You may need an additional loader to handle the result of these loaders.
> #!/usr/bin/env node
| 'use strict';
|
[…etc…]

There's a JS file that has a shell-script shebang thing at the start of it, and the Babel transpiler doesn't like that. Fair enough - but I really didn't understand why it was trying to transpile stuff in the node_modules directory at all. At the time, though, I just thought "hey-ho, it knows what it's doing, so I'll take its word for it".

Googling about I found a bunch of other people having a similar issue with Webpack compilation, and the solution seemed to be to use a shebang-loader in the compilation process (see "How to keep my shebang in place using webpack?", "How to Configure Webpack with Shebang Loader to Ignore Hashbang…", "Webpack report an error about Unexpected character '#'"). All the guidance for this solution was oriented around sticking stuff in the webpack.config.js file, but of course Vue.js hides that away from you, and you need to do things in a special Vue.js way, by adding stuff with a special syntax to the vue.config.js file. The docs for this are at "Vue.js > Working with Webpack". The docs there showed how to do it using chainWebpack, but I never got this approach to actually solve the problem, so I mention it only because it's "something I tried".
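For completeness, that attempt looked roughly like this (a from-memory sketch, not working config: shebang-loader is a separate npm package you'd need to install, and the test pattern here is illustrative):

// vue.config.js
module.exports = {
    chainWebpack: (config) => {
        // run shebang-loader over the offending file so the leading #! is stripped
        config.module
            .rule("shebang")
            .test(/node_modules\/mocha\/lib\/cli\/cli\.js$/)
            .use("shebang-loader")
            .loader("shebang-loader");
    }
};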

From there I started thinking, "no, seriously: why is it trying to transpile stuff in the node_modules directory?" This did not seem right. I changed my googling tactic to try to find out what was going on there, and came across "Webpack not excluding node_modules", and that led me to update my vue.config.js file to actively exclude node_modules (copied from that answer):

var nodeExternals = require('webpack-node-externals');
...
module.exports = {
    ...
    target: 'node', // in order to ignore built-in modules like path, fs, etc. 
    externals: [nodeExternals()], // in order to ignore all modules in node_modules folder 
    ...
};

And that worked. Now when I ran the test, I made progress:

root@eba0490b453d:/usr/share/fullstackExercise# npm run test:unit

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

 WEBPACK   Compiling...

  [=========================] 98% (after emitting)

 DONE   Compiled successfully in 1161ms

  [=========================] 100% (completed)

WEBPACK  Compiled successfully in 1161ms

MOCHA  Testing...



  Tests of GreetingMessage component
    ✓ should successfully load the component


  1 passing (17ms)

MOCHA  Tests completed successfully

root@eba0490b453d:/usr/share/fullstackExercise#

From there I rounded out the tests properly (frontend/test/unit/GreetingMessage.spec.js):

import GreetingMessage from "../../src/gdayWorld/components/GreetingMessage";

import { shallowMount } from "@vue/test-utils";

import {expect} from "chai";

describe("Tests of GreetingMessage component", () => {

    let greetingMessage;
    let expectedText = "TEST_MESSAGE";

    before("Load test GreetingMessage component", () => {
        greetingMessage = shallowMount(GreetingMessage, {propsData: {message: expectedText}});
    });

    it("should return the correct heading", () => {
        let heading = greetingMessage.find("h1");
        expect(heading.exists()).to.be.true;

        let headingText = heading.text();
        expect(headingText).to.equal(expectedText);
    });

    it("should return the correct content", () => {
        let contentParagraph = greetingMessage.find("h1+p");
        expect(contentParagraph.exists()).to.be.true;

        let contentParagraphText = contentParagraph.text();
        expect(contentParagraphText).to.equal(expectedText);
    });
});

Oh! Just a reminder of what the component is (frontend/src/gdayWorld/components/GreetingMessage.vue)! Very simple stuff, as the tests indicate:

<template>
    <h1>{{ message }}</h1>
    <p>{{ message }}</p>
</template>

<script>
export default {
  name: 'GreetingMessage',
  props : {
    message : {
      type: String,
      required: true
    }
  }
}
</script>

One thing I found was that every time I touched the test file, I was getting this compilation error:

> full-stack-exercise@2.13.0 test:unit
> vue-cli-service test:unit test/unit/**/*.spec.js

WEBPACK  Compiling...

  [=========================] 98% (after emitting)

ERROR  Failed to compile with 1 error

error  in ./test/unit/GreetingMessage.spec.js

Module Error (from ./node_modules/eslint-loader/index.js):

/usr/share/fullstackExercise/test/unit/GreetingMessage.spec.js
   7:1  error  'describe' is not defined  no-undef
  12:5  error  'before' is not defined    no-undef
  16:5  error  'it' is not defined        no-undef
  24:5  error  'it' is not defined        no-undef

✖ 4 problems (4 errors, 0 warnings)

But if I ran it again, the problem went away. Somehow ESLint was getting confused: it only lints things when they've changed, so on the second - and subsequent - runs it doesn't re-lint the unchanged file, and the problem doesn't occur. More googling, and I found this: "JavaScript Standard Style does not recognize Mocha". The guidance here is to let the linter know I'm running Mocha, the inference being that there will then be some global functions it can just assume are legit. This is just an entry in package.json:

  "eslintConfig": {
    "root": true,
    "env": {
      "node": true,
      "mocha": true
    },
    // etc

Having done that, everything works perfectly, and despite the fact that it's a very very simple unit test… I'm quite pleased with myself that I got it running OK.

After sorting it all out, I reverted everything in source control back to how it was before I started the exercise, so as to replicate it and write it up here. This is when I was completely unable to reproduce that shebang issue at all. I cannot - for the life of me - work out why not. Weird. Anyway, I decided to not waste too much time trying to reproduce a bug I had solved, and just decided to mention it here as a possible "thing" that could happen, but otherwise move on with my life.

I have now got to where I wanted to get at the beginning of this series of articles, so I am gonna stop now. My next task with Vue.js testing is to work out how to mock things, because the work coming up will require me to make remote calls and/or hit a DB, and I don't want to be doing that when I'm running tests. But that was never an aim of this article series, so I'll revert to stand-alone articles now.

Righto.

--
Adam

Thursday 11 February 2021

Thoughts on Working Code podcast's Testing episode

G'day:

Working Code (@WorkingCodePod on Twitter) is a podcast by some friends and industry colleagues: Tim Cunningham, Carol Hamilton, Ben Nadel and Adam Tuttle.


(apologies for swiping your image without permission there, team)

It's an interesting take on a techo podcast, in their own words from their strapline:

Working Code is a technology podcast unlike all others. Instead of diving deep into specific technologies to learn them better, or focusing on soft-skills, this one is like hanging out together at the water cooler or in the hallway at a technical conference. Working Code celebrates the triumphs and fails of working as a developer, and aims to make your career in coding more enjoyable.

I think they achieve this, and it makes for a good listen.

So that's that, I just wanted to say they've done good work, and go listen.

Oh, just one more thing.

Yesterday they released their episode "Testing". I have to admit my reaction to a lot of what was said was… "poor", so I pinged my namesake and said "I have some feedback". After a brief discussion on Signal, Adam & I concluded that I might try to do a "reaction blog article" on the topic, and they might see if they can respond to the feedback, if warranted, at a later date. They are recording tonight apparently, and I'm gonna try to get this across to them for their morning coffee.

Firstly as a reminder: I'm pretty keen on testing, and I am also keen on TDD as a development practice. I've written a fair bit on unit testing and TDD both. I'm making a distinction between unit testing and TDD very deliberately. I'll come back to this. But anyway this is why I was very very interested to see what the team had to say about testing. Especially as I already knew one of them doesn't do automated testing (Ben), and another (Carol) I believe has only recently got into it (I think that's what she said, a coupla episodes ago). I did not know Adam or Tim's position on it.

And just before I get under way, I'll stick a coupla Twitter messages here I saw recently. At the time I saw them I was thinking about Ben's claim to a lack of testing, and they struck a particular chord with me.

[embedded Twitter messages]
I do not know Maaret or Mathias, but I think they're on the money here.

OK. So I'm listening to the podcast now. I'm going to pull quotes from it, and comment on them where I think it's "necessary".

[NB I'm exercising my right to reproduce small parts of the Working Code Podcast transcript here, as I'm doing so for the purposes of commentary / criticism]


Ahem.

Adam @ 11:27:

…up front we should acknowledge you know we're not testing experts. None of us […] have been to like 'testing college'. […] There's a good chance we're going to get something wrong.

I think, in hindsight, this podcast needed this caveat to be made louder and clearer. To be blunt - and I don't think any of them would disagree with me here - all four are very far from being testing experts. Indeed one is even a testing naysayer. I think there are some dangerously ill-informed opinions being expressed as the podcast progresses, and as these are all people who are looked-up-to in their community, I think there's a risk people will take onboard what they say as "advice", even with this caveat at the beginning. This might seem like a very picky thing to draw on, but perhaps I should have put it at the end of the article, after there's more context.

Carol @ 12:07:

Somebody find that monster already.

Hi guys.

Ben @ 12:41

I test nothing. And it's not like a philosophical approach to life, it's more just I'm not good at testing

Adam @ 13:09:

Clearly that's working out pretty well for you, you've got a good career going.

I hear this a bit: "I don't test and I get by just fine". This is pretty woolly thinking, and it's false logic people will jump on to justify why they don't do things. And Adam is just perpetuating the myth here. The problem with this rationalisation is demonstrated with an analogy: going on a journey without a map, wandering aimlessly, and still (largely accidentally) arriving at your intended destination. In contrast, had you used a map, you'd've been more efficient with your time and effort, and been able to progress even further on the next leg of your journey sooner. Ben's built a good career for himself. Undoubtedly. Who knows how much better it would be had he… used a map.

Also, going back to Ben's comment explaining away why he doesn't test - because he's not good at it: everyone starts off not being good at testing, mate. The rest of us do something about it. This is a disappointing attitude from someone as clued-up as Ben. Also, if you don't know about something… don't talk about it, mate. Inform yourself first, and then talk about it.

Ben @ 13:21:

[…] some additional context. So - one - I work on a very small team. Two: all the people who work on my team are very very familiar with the software. Three: we will never ever hire a new engineer specifically for my team. Cos I work on the legacy codebase. The legacy codebase is in the process of being phased out. […] I am definitely in a context where I don't have to worry about hiring a new person and training them up on a system and then thinking they'll touch something in the code that they don't understand how it works. That's like the farthest possible thing from my day-to-day operations currently.

Um… so? I'm not being glib. You're still writing new logic, or altering existing logic. If you do that, it intrinsically needs testing. I mean you admitted you do manual testing, but it beggars belief that a person in the computer industry will favour manually performing a pre-defined repetitive task as a (prone-to-error) human, rather than automating this. We're in the business of automating repetitive tasks!

I'd also add that this would be a brilliant, low-risk, environment for you to get yourself up to speed with TDD and unit testing, and work towards the point where it's just (brain) muscle memory to work that way. And then you'll be all ready once you progress to more mission-critical / contemporary codebases.

Ben @ 14:42:

I can wrap my head around testing when it comes to testing a data workflow that is completely pure, meaning you have a function or you have a component that has functions and you give it some inputs and it generates some outputs. I can 100% wrap my head around testing that. And sometimes actually when I'm writing code that deals with something like that, even though I'm not writing tests per se, I might write a scratch file that instantiates that component and sends data to it and checks the output just during the development process that I don't have to load-up the whole application.

Ben. That's a unit test. You have written a unit test there. So why don't you put it in a test class instead of a scratch file, and - hey presto - you have a persistent test that will guard against that code's behaviour somehow changing to break those rules later on. You are doing the work here, you're just not doing it in a sensible fashion.
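To illustrate (this is my sketch of the idea, not Ben's actual code - the TaxCalculator component is a made-up example): the scratch-file version and the spec version are nearly the same code; the difference is that the spec version hangs around and keeps guarding the behaviour:

<?php

// a hypothetical component being given inputs and having its outputs checked:
// exactly the "scratch file" exercise, persisted as a Kahlan spec

use App\TaxCalculator;

describe('TaxCalculator', function () {
    it('derives the correct gross amount for a known net amount', function () {
        $calculator = new TaxCalculator(15); // rate as a percentage

        $gross = $calculator->applyTo(10000); // amounts in cents, to keep the comparison exact

        expect($gross)->toBe(11500);
    });
});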

Ben @ 15:18:

Where it breaks down immediately for me is when I have to either a) involve a database, or b) involve a user interface. And I know that there's all kinds of stuff that the industry has brought to cater to those problems. I've just never taken the time to learn.

There's a bit of a blur here between the previous train of thought, which was definitely about unit tests of a unit of code, and now we're talking about end-to-end testing. These are two different things. I am not saying Ben doesn't realise this, but they're jammed up next to each other in the podcast, so the distinction is not being made. These two kinds of testing are separate ideas, only really coupled by the fact that they are both types of testing. Ben's right: the tooling is there, and - in my experience, at least with the browser-emulation stuff - it's pretty easy and almost fun to use. Ben already tests his stuff manually every time he does a release, so it would seem sensible to me to take the small amount of time needed to get up to speed with these tools, automate that same testing once, and have it taken care of thenceforth. It's just a matter of being a bit wiser with one's time usage.
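To give a sense of scale, here's roughly the shape of a browser-emulation test with Mocha, Chai and Puppeteer (as per my earlier articles; the URL and expected content are made up for the example):

const puppeteer = require("puppeteer");
const { expect } = require("chai");

describe("Greeting page", function () {
    this.timeout(10000); // browser start-up is slower than Mocha's 2s default

    let browser;
    let page;

    before("Load the page in headless Chromium", async () => {
        browser = await puppeteer.launch();
        page = await browser.newPage();
        await page.goto("http://webserver.backend/gdayWorld.html");
    });

    after("Close the browser", async () => {
        await browser.close();
    });

    it("renders the expected heading", async () => {
        const headingText = await page.$eval("h1", (el) => el.textContent);
        expect(headingText).to.equal("G'day world");
    });
});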

Adam @ 16:39:

The reason that we don't have a whole lot of automated tests for our CFML code is simply performance. So when we started our product I tried really hard to do TDD. If I was writing a new module or a new section of that module I would work on tests along with that code, and would try to stay ahead of the game there. And what ended up happening was I had for me - let's say - 500 functions that could run, I had 400 tests. And I don't want to point a finger at any particular direction, but when you take the stack as a whole and you say "OK now run my test suite" and it takes ten minutes to run those tests and [my product, the project I was working on] is still in its infancy, and you can see this long road of so much more work that has to be done, and it takes ten minutes to run the tests - you know, early on - there was no way that that was going to be sustainable. So we kind of abandoned hope there. [...] I have, in more recent years, on a more recent stack seen way better performance of tests. [...] So we are starting to get more into automated testing and finding it actually really helpful. [...] I guess what I wanted to say there is that a perfectly valid reason to have fewer or no tests is if it doesn't work well on your platform.

Adam starts off well here, both in what he's saying and in his historical efforts with tests, but he then goes on to pretty much blame ColdFusion for not being very good at running tests. This is just untrue, sorry mate. We had thousands upon thousands of tests running on ColdFusion, and they ran in an amount of time best measured in seconds. And when we ported that codebase to PHP, we had a similar number of test cases, and they ran in round about the same amount of time (PHP was faster, that said, but the tests were also better). I think the issue here - and he confirms this about 30sec after the quote above, and again about 15min later when he comes back to it - is that your tests weren't written so well, and they were not focused on the logic (as TDD-oriented tests ought to be): they were basically full integration tests. Full integration tests are excellent, but you don't want your tight red / green / refactor testing cycle to be slowed down by external services like databases. That's the wrong sort of testing there. My reaction to you saying your test runs are slow is not to say "ColdFusion's fault", it's to say "your tests' fault". And that's not a reason to not test. It's a reason to check what you've been doing, and fix it. I'm applying hindsight here for you obviously, but this ain't the conclusion/message you should be delivering here.

Carol @ 20:03:

I also want to say that if you are starting out and you're starting to add tests, don't let slowness stop you from doing it.

Spot on. I hope Ben was listening there. When you start to learn something new, it is going to take more time. I think this is sometimes why people conclude that testing takes a lot of time: the people arriving at that conclusion are basing it on their time spent on the learning curve. Accept that things go slow when you are learning, but also accept that things will become second nature. And especially with writing automated tests it's not exactly rocket-science, the initial learning time is not that long. Just… decide to learn how to test stuff, start automating your testing, and stick at it.

Ben @ 21:24:

I was thinking about debugging incidents and getting a page in the middle of the night and having to jump on a call and you seeing the problem, and now you have to do a hotfix, and push a deployment in the middle of the night […]. And imagine having to sit there for 30 minutes for your tests to run just so you can push out a hotfix. Which I thought to myself: that would drive me crazy.

At this point I think Ben is just trying to invent excuses to justify to himself why he's right to eschew testing. I'm reminded of Maaret's Twitter message I included above. The subtext of Ben's comment here is that if one tests manually, then one is more flexible in what one can choose to re-test when hotfixing. Well, obviously: if you can make that call with manual tests, then you can make the same call with automated tests! So his position here is just specious. Doubly so because automated tests are intrinsically going to be faster than the equivalent manual tests to start with. Another thing I'll note about this entire analogy: you've already got yourself into a shit situation by needing to hotfix stuff in the middle of the night. Are you really sure you want to be less diligent in how you implement that fix? In my experience that approach can lead to a second / third / fourth hotfix being needed in rapid succession. Hotfix situations are definitely ones of "work smarter, not faster".

Ben @ 21:56:

I'm wondering if there should be a test budget that you can have for your team where you like have "here is the largest amount of time we're willing to let testing block a deployment". And anything above that have to be tests that sit in an optional bucket where it's up to the developer to run them as they see fit, but isn't necessarily tests that would block deployment. I don't know if that's totally crazy.

Adam continues @ 22:36:

You have to figure out which tests are critical path, which ones are "must pass", and these ones are like "low risk areas" […] are the things I would look for to make optional.

Yep, OK there's some sense here, but I can't help thinking that we are talking about testing in this podcast, and we're spending our time inventing situations in which we're not gonna test. It all seems a bit inverted to me. How about instead you just do your testing and then if/when a situation arises you then deal with it. Instead of deciding there will be situations and justify to yourselves why you oughtn't test in the first place.

But I also have to wonder: why the perceived rush here? What's wrong with spending upwards of 30min testing stuff if it "proves" that your work has maintained stability, and makes you less likely to need that midnight hotfix? What percentage of the whole cycle time from feature request to delivery is that 30 minutes? Especially if the effort of writing the tests in the first place innately improves the stability of your code, and then helps to keep it stable. It's a false economy.

Tim @ 23:15:

When we have contractors do work for us, I require unit tests. I require so much testing just because it's a way for me to validate the truth of what they're saying they've done. So that everything that we have that's done by third parties is very well tested, and it's fantastic because I have a high level of confidence.

Well: precisely. Why do you not want that same level of confidence in your in-house work? Like you say: confidence is fantastic. Be fantastic, Tim. Also: any leader should eat their own dogfood I think. If there's sense in you making the contractors work like this, clearly you ought to be working that way yourself.

Tim @ 23:36:

Any time I start a new project, if I have a greenfields project, I always start with some level of unit tests, and then I get so involved in the actual architecture of the system that I put it off, and like "well I don't really need a test for this", "I'm not really sure where I'm going with this, so I'm not going to write a test first" because I'm kinda experimenting. Then my experiment becomes reality, then my reality becomes the released version. And then it's like "well what's the point of writing a test now?"

I think we've all been there. I think what Tim needs here is just a bit more self-discipline in identifying what is "architectural spike" and what's "now doing the work". If one is doing TDD, then the spike can be used to identify the test cases (eg "it's going to need to capture their phone number") without necessarily writing the test to prove the phone number has been captured. So you write this:

describe("my new thing", function () {
    it ("needs to capture the phone number", function () {
        // @todo need to test this
    });
});

And then when you detect you are not spiking any more, you write the test, and then introduce the code to make the test pass. I also think Tim is overlooking that the tests are not simply there for that first iteration, they are then there proving that code is stable for the rest of the life of the code. This… builds confidence.
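Eg, once the spiking is done, that placeholder becomes a real test (sketching this in Kahlan; RegistrationService and the shape of its result are made-up names for illustration):

describe("my new thing", function () {
    it("needs to capture the phone number", function () {
        // hypothetical service and result shape, purely for illustration
        $service = new RegistrationService();

        $registration = $service->register([
            'fullName' => 'Zachary Cameron Lynch',
            'phoneNumber' => '027 555 0123'
        ]);

        expect($registration->phoneNumber)->toBe('027 555 0123');
    });
});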

Adam @ 24:22:

That's what testing is all about, right? It's increasing confidence that you can deploy this code and nothing is going to be wrong with it. […] When I think about testing, the pinnacle of testing for me is 100% confidence that I can deploy on my way out the door at 4:55pm on Friday afternoon, with [a high degree of ~] confidence that I am not going to get paged on Saturday at 4am because some of that code that I just deployed… it went "wrong".

Exactly.

Carol @ 25:12:

The difference between the team I'm on and the team you guys have is that we have - I think it's 15-ish - people touching the exact same code daily. So a patch I put out today may not even have been in the codebase they pulled yesterday when they started working on a bug, or a week ago when they had theirs. So me writing that little extra bit of test gives them some accountability for what I've done, and me some.

Again: exactly.

Ben @ 26:36:

Even if you have a huge test suite, I can't help but think you have to do the manual testing, because what if something critical was missed. [...] I think the exhaustive test suite, what that does is it catches unexpected bugs unrelated. Or things that broke because you didn't expect them to break in a certain way. And I think that's very important.

To Ben's first point, you could just as easily (and arguably more validly) switch that around: a human doing ad-hoc manual testing is more likely to miss something, because every manual test run is at their whim, and subject to their focus and attention at the time. Whereas the automated tests - which, let's not forget, were written by a diligent human, right at the time they were most focused on the requirements - are run by the computer, and it will do exactly the same job every time. What having the historical corpus of automated tests gives you is increased confidence that all that stuff being tested still works the way it is supposed to, so the manual testing - which is always necessary - can be more a case of dotting the Is and crossing the Ts. With no automated tests, the manual tests need to be exhaustive. And the effort needs to be repeated every release (Adam mentions this a few minutes later as well).

To the second point: yeah, precisely. Automated tests will pick up regressions. And the effort to do this only needs to be made once (writing the test). Without automated tests, you rely on the manual testing to pick this stuff up, but - being realistic - if your release is focused on PartX of the code, your manual tests are going to focus there, and possibly not bother to re-test PartZ, which has just inadvertently been broken by PartX's work.

Ben also mentions this quote from Rich Hickey "Q: What happened to every bug out there? A: it passed the type checker, and it passed all tests." (I found this reference on Google: Simple Made Easy, it's at about 15:45). It's a nifty quote, but what it's also saying is that there wasn't actually a test for the buggy behaviour. Because if there was one: the test would have caught it. The same could be said more readily of manual-only testing. Obviously nothing is going to be 100%, but automated tests are going to be more reliable at maintaining the same confidence level, and be less effort, than manual-only testing.
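
To make that concrete: when a bug does slip through, the usual discipline is to write the test that should have existed, watch it fail, and then fix the code, so that specific bug can never quietly come back. A sketch (the daysBetween method and the sign bug here are hypothetical, purely for illustration):

const { expect } = require("chai");

// hypothetical regression test, written the day the bug was reported:
// daysBetween was returning the wrong sign when d1 was after d2
describe("regression: daysBetween sign bug", function () {
    it("returns 1 if d1 is the day after d2", function () {
        const d1 = new Date("2021-02-25");
        const d2 = new Date("2021-02-24");

        expect(Date.daysBetween(d1, d2)).to.equal(1);
    });
});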

Ben @ 28:15:

When people say it increases the velocity of development over time. I have trouble embracing that.

(Ben's also alluding back to a comment he made immediately prior to that, relating to always needing to manually test anyhow). "Over time" is one of the keys here. Once a test is written, it's there. It sticks around. In every subsequent test round there is no extra effort to test that element of the application (Tim draws attention to this a coupla minutes later too). With manual testing the effort needs to be duplicated every time you test. Surely this is not complicated to understand. Ben's point about "you still need to manually test" overlooks that if there's a foundation of automated tests, your manual testing can become far more perfunctory. Without the tests: the manual testing is monolithic. Every. Single. Time. To be honest though, I don't know why I need to point this out. It's a) obvious; and b) very well-trod ground. There's an entire industry that thinks automated tests are the foundation of testing. And then there's Ben who's "just not sure". This is like someone "just not sure" that the world isn't actually flat. It's no small amount of hubris on his part, if I'm honest. And obviously Ben is not the only person out there in the same situation. But he's the one here on this podcast supposedly discussing testing.

Ben @ 37:08:

One thing that I've never connected with: when I hear people talk about testing, there's this idea of being able to - I think they call them spies? - create these spies where you can see if private methods get called in certain ways. And I always think to myself: "why do you care about your private methods?" That's an implementation detail. That private method may not exist next week. Just care about what your public methods are returning and that should inherently test your private methods. And people have tried to explain it to me why actually sometimes you wanna know, but I've just never understood it.

Yes, good point. I can try to explain. I think there's some nuance missing in your understanding of what's going on, and what we're testing. It starts with your position that testing only concerns itself with (my wording, paraphrasing you from earlier) "you're interested in what values a public method takes, and what it returns". Not quite. You care about whether, given inputs to a unit, the behaviour within the unit correctly provides the expected outputs from that unit. The outputs might not be the return value. Think about a unit that takes a username and password, hashes the password, and saves it to the DB. We then return the new ID of the record. Now… we're less interested in the ID returned by the method; we are concerned that the hashing takes place correctly. There is an output boundary of this unit at the database interface. We don't want our tests to actually hit the database (too slow, as Adam found out), so we mock out the DB connector, or the DAO method being called with the value that the model layer has hashed. We then spy on the values passed to that DB boundary, and make sure it's all worked OK. Something like this (a sketch in Mocha-flavoured JavaScript, using Sinon for the mocking and spying; the hashing function and DB library are stand-ins):

describe("my new thing", function () {
    it ("hashes the password", function () {
        testPassword = "letmein"
        expectedHash = "whatevs"
        
        myDAO = new Mock(DAO)
        myDAO.insertRecord.should.be.passed(anything(), expectedHash)
        
        myService = new Service(myDAO)
        
        newId = myService.addUser("LOGIN_ID_NOT_TESTED", testPassword)
        
        newId.should.be.integer() // not really that useful
    });
});

class Service {

    constructor(dao) {
        this.dao = dao;
    }

    addUser(loginId, password) {
        const hashedPassword = excellentHashingFunction(password);

        return this.dao.insertRecord(loginId, hashedPassword);
    }
}

class DAO {
    // db and excellentHashingFunction are stand-ins for whatever
    // DB library and hashing implementation one is actually using
    insertRecord(loginId, password) {
        return db.insertQuery(
            "INSERT INTO users (loginId, password) VALUES (:loginId, :password)",
            [loginId, password]
        );
    }
}

OK, so insertRecord isn't a private method here, but the DAO is just an abstraction sitting behind the public interface of the unit anyhow, so it amounts to the same thing, and it makes my example clearer. insertRecord could just as easily be a private method of Service.

So the thing is that you are checking boundaries, not specifically method inputs/outputs.

Also, yes, the implementation of DAO might change tomorrow. But if we're doing TDD - and we should be - the tests will be updated at the same time. More often than not though, the implementation isn't as temporary as this line of thought often assumes (for the convenience of the argument, I suspect).

Adam @ 48:41:

The more that I learn how to test well, and the more that I write good tests, the more I become a believer in automated testing (Carol: Amen). […] The more I do it the better I get. And the better I get the more I appreciate what I can get from it.

Indeed.

Tim @ 49:32:

In a business I think that short term testing is a sunk cost maybe, but long term I have seen the benefit of it. Particularly whenever you are adding stuff to a mature system, those tests pay dividends later. They don't pay dividends now […] (well they don't pay as many dividends now) […] but they do pay dividends in the long run.

Also a good quote / mindset. Testing is about the subsequent rounds of development as much as the current one.

Ben @ 50:05:

One thing I've never connected with emotionally, when I hear people talk about testing, is when they refer to tests as providing documentation about how a feature is supposed to work. And as someone who has tried to look at tests to understand why something's not working, I have found that they provide no insight into how the feature is supposed to work. Or I guess I should say specifically they don't provide answers to the question that I have.

Different docs. They don't provide developer docs, but if following BDD practices, they can indicate the expected behaviour of the piece of functionality. Here's the test run from some tests I wrote recently:

root@1b011f8852b1:/usr/share/nodeJs# npm test

> nodejs@1.0 test
> mocha test/**/*.js



  Tests for Date methods functions
    Tests Date.getLastDayOfMonth method
      ✓ returns Jan 31, given Jan 1
      ✓ returns Jan 31, given Jan 31
      ✓ returns Feb 28, given Feb 1 in 2021
      ✓ returns Feb 29, given Feb 1 in 2020
      ✓ returns Dec 31, given Dec 1
      ✓ returns Dec 31, given Dec 31
    Tests Date.compare method
      ✓ returns -1 if d1 is before d2
      ✓ returns 1 if d1 is after d2
      ✓ returns 0 if d1 is the same d2
      ✓ returns 0 if d1 is the same d2 except for the time part
    Tests Date.daysBetween method
      ✓ returns -1 if d1 is the day before d2
      ✓ returns 1 if d1 is the day after d2
      ✓ returns 0 if d1 is the same day as d2
      ✓ returns 0 if d1 is the same day as d2 except for the time part
    Tests for addDays method
      ✓ works within a month
      ✓ works across the end of a month
      ✓ works across the end of the year
      ✓ works with zero
      ✓ works with negative numbers

  Tests a method Reading.getEstimatesFromReadingsArray that returns an array of Readings representing month-end estimates for the input range of customer readings
    Tests for validation cases
      ✓ should throw a RangeError if the readings array does not have at least two entries
      ✓ should not throw a RangeError if the readings array has at least two entries
      ✓ should throw a RangeError if the second date is not after the first date
    Tests for returned estimation array cases
      ✓ should not include a final month-end reading in the estimates
      ✓ should return the estimate between two monthly readings
      ✓ should return three estimates between two reading dates with three missing estimates
      ✓ should return the integer part of the estimated reading
      ✓ should return all estimates between each pair of reading dates, for multiple reading dates
      ✓ should not return an estimate if there was an actual reading on that day
      ✓ should return an empty array if all readings are on the last day of the month
      ✓ tests a potential off-by-one scenario when the reading is the day before the end of the month

  Tests for helper functions
    Tests for Reading.getEstimationDatesBetweenDates method
      ✓ returns nothing when there are no estimates dates between the test dates
      ✓ correctly omits the first date if it is an estimation date
      ✓ correctly omits the second date if it is an estimation date
      ✓ correctly returns the last date of the month for all months between the dates

  Test Timer
    ✓ handles a lap (100ms)

  Test TimerViaPrototype
    ✓ handles a lap (100ms)


  36 passing (221ms)

root@1b011f8852b1:/usr/share/nodeJs#

When I showed this to the person I was doing the work for, he immediately said "no, that test case is wrong, you have it around the wrong way", and they were right, and I fixed it. That's the documentation "they" are talking about.
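
To be concrete about what that documentation looks like: it's the case descriptions themselves, sitting right next to the assertions that enforce them. Here's a hypothetical sketch of what might sit behind one of those lines above (this is not my actual test code; getLastDayOfMonth is the helper method being tested):

const { expect } = require("chai");

describe("Tests Date.getLastDayOfMonth method", function () {
    // the it description is the documentation: it states the expected
    // behaviour in plain language, and the test run proves it still holds
    it("returns Feb 29, given Feb 1 in 2020", function () {
        const lastDay = new Date("2020-02-01").getLastDayOfMonth();

        expect(lastDay.getDate()).to.equal(29);
        expect(lastDay.getMonth()).to.equal(1); // JS months are zero-based: 1 is February
    });
});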

Oh, and Carol goes on to confirm this very thing one minute later.

Also bear in mind that just cos a test could be written in such a way as to impart good clear information doesn't mean that all tests do. In my experience, when I've looked at open-source projects' tests to get any clarity on things (and I include testing frameworks' own tests in this!), I've been left knowing less than I did before I looked. It's almost like there's a rule in OSS projects that the code needs to be shite or they won't accept it ;-)


And that's it. It was an interesting podcast, but I really, really strongly disagreed with most of what Ben said, and why he said it. It would be one thing if he was held to account (and the others tried this at times), but as it is - other than some joking that Ben is a naysayer - I think there's some dangerous content in here.

Oh, one last thing… in the outro the team suggests some resources for testing. Most of what they suggested seems to be "what to do", not "why you do it". I think the first thing one should do when considering testing is to read Test Driven Development by Kent Beck. Start with that. Oh, this reminds me… there was not actually much discussion on TDD in this episode. TDD is tangential to testing per se, but it's an important topic. Maybe they can do another episode focusing on that.

Follow-up

The Working Code Podcast team have responded to my observations here in a subsequent podcast episode, which you can listen to here: 011: Listener Questions #1. Go have a listen.

Righto.

--
Adam