
Saturday 17 April 2021

Using Docker to strum up an Nginx website serving CFML via Lucee

G'day

OK so this is not the blog article I expected to be writing, had you asked me two weeks ago. But here we are. I'll go into the reason why I'm doing this a bit later.

This will be a CFML-oriented version of the "VueJs/Symfony/Docker/TDD series", and has its own tag: "Lucee/CFWheels/Docker series":

  • Nginx website.
  • Proxying for Lucee as the CFML-processing application layer.
  • Running inside Docker containers.
  • TDD the whole enterprise.

If I have time (and any will-to-live remaining), I will add this lot into the mix:

  • Work out how Forgebox works, which seems to be CFML's equivalent of Composer / NPM
  • Use that to install Testbox (CFML-based Jasmine-ish testing framework)
  • And also install CFWheels, a CFML-based framework akin to Ruby on Rails.

I'll also be returning to SublimeText for the first time in seven-or-so years. Apparently it's still a reasonable text editor to use for CFML code.

For those few of you that have started paying attention to me more recently: CFML is not new to me. I spent over a decade as a CFML developer (2001-2013). I shifted to PHP because my erstwhile employer (HostelBookers, a CFML shop) was bought by Hostelworld (a PHP shop) back then, and I've been doing PHP since. That said, I am very rusty with CFML, and - well, hopefully - the CFML landscape has moved on since then too. So whilst I'm not a newbie with CFML stuff, getting Lucee running in a container, plus Forgebox and CFWheels, is all entirely new to me.

I'm still gonna be using PHP to do the initial testing of things, because I won't have Testbox running for the first while. So I'll need a PHP container too. I'll refactor this out once I get Testbox in.

It needs a PHP container for running tests

There's nothing new here, and what I've done is largely irrelevant to this exercise, so I'll just list the files and link through to the current state of the files in source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── php-cli
│   │   ├── root_home
│   │   │   ├── .bash_history
│   │   │   ├── .bashrc
│   │   │   ├── .gitignore
│   │   │   └── .vimrc
│   │   └── Dockerfile
│   ├── .env
│   └── docker-compose.yml
├── test
│   └── php
│       └── SelfTest.php
├── .gitignore
├── LICENSE
├── README.md
├── composer.json
├── composer.lock
└── phpunit.xml.dist

5 directories, 14 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The test is just this:

/** @testdox Tests PHPUnit install */
class SelfTest extends TestCase
{
    /** @testdox it self-tests PHPUnit */
    public function testSelf()
    {
        $this->assertTrue(true);
    }
}

And it passes:

root@18c5eabeb9f2:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests PHPUnit install
it self-tests PHPUnit

Time: 00:00.002, Memory: 6.00 MB

OK (1 test, 1 assertion)
root@18c5eabeb9f2:/usr/share/cfml-in-docker#

In this instance I could not actually run the test before I implemented the work, for what should seem obvious reasons. However I followed the TDD mindset of just doing the least amount of work possible to make the test pass. I also monkeyed around with the test itself to see it fail if I had an assertion that was no good (I changed the argument to that assertion to false, basically).

The TDD lesson here is: I've set myself a case - "It needs a PHP container for running tests" - and only resolved that case before pausing and assessing the situation. I also didn't move any further forward than I needed to in order to address that case.

It returns a 200-OK from requests to /gdayWorld.html

Next I need an Nginx container running, and serving a test file. Well: I need the test for that.

/** @testdox Tests Nginx is serving html */
class NginxTest extends TestCase
{
    /** @testdox It serves gdayWorld.html as 200-OK */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.backend/']);

        $response = $client->get('gdayWorld.html');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

Once again, I'll largely just list the added files here, and link through to source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── nginx
│   │   ├── root_home
│   │   │   ├── .gitignore
│   │   │   ├── .profile
│   │   │   └── .vimrc
│   │   ├── sites
│   │   │   └── default.conf
│   │   ├── Dockerfile
│   │   └── nginx.conf
│   └── [...]
├── public
│   └── gdayWorld.html
├── test
│   └── php
│       ├── NginxTest.php
│       └── [...]
├── var
│   └── log
│       └── nginx
│           ├── .gitkeep
│           ├── access.log
│           └── error.log
└── [...]

12 directories, 25 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The contents of gdayWorld.html should be obvious from the test, but it's just:

G'day world!

OK so that was all stuff I've done a few times before now. Next… Lucee

It has a Lucee container which serves CFML code via its internal web server

I'm kinda guessing at this next case. I'm gonna need to have a Lucee container, this is a cert. And I recollect Adobe's ColdFusion CFML engine ships with a wee stubbed web server for dev use. I can't recall if Lucee does too; I'm assuming it does. You can see how prepared I am for all this: I've not even RTFMed about the Lucee Docker image on DockerHub yet (I did at least make sure there was one though ;-). The idea is that there's a two-step here: getting the Lucee container up and doing "something", and after that, wiring it through from Nginx. But that's a separate case.

Right so this is all new to me, so I'll actually list the files I've created. First the test:

/** @testdox Tests Lucee is serving cfml */
class LuceeTest extends TestCase
{
    /** @testdox It serves gdayWorld.cfm as 200-OK on Lucee's internal web server */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.lucee:8888/']);

        $response = $client->get('gdayWorld.cfm');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

It's the same as the HTML one except I'm hitting a different host, and on port 8888 (I have now done that RTFM I mentioned, and found the port Lucee serves on by default).

The Dockerfile is simple:

FROM lucee/lucee:5.3

RUN apt-get update
RUN apt-get install vim --yes

COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc

WORKDIR  /var/www

EXPOSE 8888

It's more complex than it needs to be as I always like vi installed in my containers because I inevitably need it (this is prescient as it turns out: I definitely did need it).

And the relevant bit from docker-compose.yml:

lucee:
    build:
        context: ./lucee
    volumes:
        - ../public:/var/www
        - ../var/log/tomcat:/usr/local/tomcat/log
        - ../var/log/lucee:/opt/lucee/web/logs
        - ./lucee/root_home:/root
    ports:
        - "8888:8888"
    stdin_open: true
    tty: true
    networks:
        backend:
            aliases:
                - cfml-in-docker.lucee

That's mostly just me mapping logging directories back to my host for convenience's sake.

Currently my test file - gdayWorld.cfm - is just plonked in the web root, which is not where one would normally put CFML files (except the application entry point file I mean), but it'll do for now:

<cfset message="G'day world!">
<cfoutput>#message#</cfoutput>

And that's it. After rebuilding my containers and running the tests, everything passes now:

root@a034afe670d4:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Lucee is serving cfml
It serves gdayWorld.cfm as 200-OK on Lucee's internal web server

Tests Nginx is serving html
It serves gdayWorld.html as 200-OK

Tests PHPUnit install
it self-tests PHPUnit

Time: 00:00.028, Memory: 6.00 MB

OK (3 tests, 5 assertions)
root@a034afe670d4:/usr/share/cfml-in-docker#

It proxies .cfm requests from Nginx to Lucee

OK so Lucee is working. Painless. Now I need to tell Nginx about it. I have NFI how to do that… I hope Google and/or Stack Overflow does.

After some googling, my recollection that some sort of connector was needed to run between the web server and the application server seems outdated, and all I need to do is use proxy_pass from Nginx to the address Lucee has configured Tomcat to listen on (Lucee runs atop Tomcat: it's basically a Java servlet). I can never remember the syntax for this, but fortunately Nando Breiter has documented it in his article "Using Nginx With ColdFusion or Lucee". It's also reminded me of a few other cases I need to test for, but first the baseline. Well, actually: first the test:

/** @testdox It proxies a CFM request to Lucee */
public function testCfmReturns200OK()
{
    $client = new Client(['base_uri' => 'http://cfml-in-docker.frontend/']);

    $response = $client->get('gdayWorld.cfm');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
}

This is the same as the previous one except I'm using the Nginx website's host, and on port 80. Also note I've changed the name of the host to be cfml-in-docker.frontend not cfml-in-docker.backend. This is cosmetic, and just to distinguish between references to stuff happening on the network within the containers (called backend), and addresses browsed from the public-facing websites.
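For reference, those host names are nothing more than network aliases on the Nginx service in docker-compose.yml. I won't reproduce the whole service definition here, but the relevant bit is along these lines (indicative only; the full version is in source control):

nginx:
    build:
        context: ./nginx
    ports:
        - "80:80"
    networks:
        backend:
            aliases:
                - cfml-in-docker.frontend
                - cfml-in-docker.backend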

The implementation for this case is simply this, in the website config default.conf:

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name;
}

Adding this and restarting Nginx has that test passing, as well as not interfering with any non-CFML requests (ie: the other Nginx tests still pass).

This config has some shortfalls though. Well I say "shortfalls". Basically I mean it doesn't work properly for a real-world situation. More test cases…

It passes query values to Lucee

The test demonstrates this:

/** @testdox It passes query values to Lucee */
public function testCfmReceivesQueryParameters()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('queryTest.cfm?testParam=expectedValue');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("expectedValue", trim($content));
}

and queryTest.cfm is just this:

<cfoutput>#URL.testParam#</cfoutput>

If I run this test I get a failure because the 500 INTERNAL SERVER ERROR response from Lucee doesn't match the expected 200. This happens because Lucee can't see that param value. Because Nginx is not passing it. Easily fixed.

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}

It passes the upstream remote address to Lucee

As it currently stands, Lucee will be receiving all requests as if they came from Nginx, rather than from whoever requested them. This is the nature of proxying, but we can work around it. First the test to set expectations:

/** @testdox It passes the upstream remote address to Lucee */
public function testLuceeReceivesCorrectRemoteAddr()
{
    $directClient = new Client([
        'base_uri' => 'http://cfml-in-docker.lucee:8888/',
        'http_errors' => false
    ]);
    $response = $directClient->get('remoteAddrTest.cfm');
    $expectedRemoteAddr = $response->getBody()->getContents();

    $proxiedClient = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $testResponse = $proxiedClient->get('remoteAddrTest.cfm');

    $this->assertEquals(200, $testResponse->getStatusCode());
    $actualRemoteAddr = $testResponse->getBody()->getContents();
    $this->assertSame($expectedRemoteAddr, $actualRemoteAddr);
}

And remoteAddrTest.cfm is just this:

<cfoutput>#CGI.remote_addr#</cfoutput>

This is slightly more complicated than the previous tests, but only in that I can't know in advance what the remote address of the service running the test is, because it could be "anything" (in reality inside these Docker containers, if they're brought up in the same order with the default bridging network, it'll always be the same, but we don't want these tests to break if unrelated config should happen to change). The best way is to just check what the remote address is if we make the call directly to Lucee, and then expect that same value if we make the same call via the Nginx proxy. As of now it fails, because Lucee correctly sees the request as coming from the PHP container when we hit Lucee directly, but sees it as coming from the Nginx container when going via Nginx's proxy. No surprise there. Fortunately Nando had the solution to this baked into his blog article already, so I can just copy and paste his work:

location ~ \.(?:cfm|cfc) {
    proxy_http_version  1.1;
    proxy_set_header    Connection "";
    proxy_set_header    Host                $host;
    proxy_set_header    X-Forwarded-Host    $host;
    proxy_set_header    X-Forwarded-Server  $host;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;     ## CGI.REMOTE_ADDR
    proxy_set_header    X-Forwarded-Proto   $scheme;                        ## CGI.SERVER_PORT_SECURE
    proxy_set_header    X-Real-IP           $remote_addr;
    expires             epoch;

    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}

And if I restart Nginx: all good. One more issue to deal with…

It passes URL path_info to Lucee correctly

Something too few people know about is that there's an optional part of a URL between the script name and the query string: path info. An example is: http://example.com/script/name/path/document.html/additional/path/info?queryParam=paramValue. That path has nothing to do with the script to be executed or where it's located; it's just… some extra pathing information for the script to do something with. It's seldom used, but it's part of the spec (RFC-3875, section 4.1.5). The spec says this:

The PATH_INFO variable specifies a path to be interpreted by the CGI script. It identifies the resource or sub-resource to be returned by the CGI script, and is derived from the portion of the URI path hierarchy following the part that identifies the script itself.
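To make that concrete, the example URL above breaks down into the relevant CGI variables roughly like this (the script here being document.html, sitting at /script/name/path/):

URL:           http://example.com/script/name/path/document.html/additional/path/info?queryParam=paramValue
SCRIPT_NAME:   /script/name/path/document.html
PATH_INFO:     /additional/path/info
QUERY_STRING:  queryParam=paramValue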

Anyway, from what I could see of what I have in the Nginx config, I suspected that we're not passing that on to Lucee, so its CGI.path_info value would be blank. A test for this is easy, and much the same as the earlier ones:

/** @testdox It passes URL path_info to Lucee correctly */
public function testLuceeReceivesPathInfo()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('pathInfoTest.cfm/additional/path/info/');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("/additional/path/info/", trim($content));
}

And pathInfoTest.cfm is similarly familiar:

<cfoutput>#CGI.path_info#</cfoutput>

And as I predicted (although as we'll see below, not for the reasons I thought!) the test fails:

> vendor/bin/phpunit --testdox '--filter=testLuceeReceivesPathInfo'
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Nginx proxies CFML requests to Lucee
It passes URL path_info to Lucee correctly
  
   Failed asserting that 404 matches expected 200.
  
   /usr/share/cfml-in-docker/test/php/NginxProxyToLuceeTest.php:71
  

Time: 00:00.090, Memory: 8.00 MB


FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
Script vendor/bin/phpunit --testdox handling the test event returned with error code 1
root@29840662fdf9:/usr/share/cfml-in-docker#

At this point I disappeared down a rabbit hole of irritation, as detailed in article "Repro for Lucee weirdness". There are two bottom lines to this:

  1. For reasons best known to [someone other than me], Lucee only handles path_info on requests to index.cfm, but not to any other .cfm file! This can be shown by changing that test by renaming pathInfoTest.cfm to index.cfm, and calling that instead.
  2. Actually Nginx already handles it correctly anyhow. In that the value is passed on already, and I don't need to do anything extra to make it work (as far as Nginx is concerned, anyhow).

I can fix the situation for pathInfoTest.cfm if I hack Lucee's web.xml file (this is down at line 4643):

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <url-pattern>*.cfm</url-pattern>
    <url-pattern>*.cfml</url-pattern>
    <url-pattern>*.cfc</url-pattern>
    <url-pattern>/index.cfm/*</url-pattern>
    <url-pattern>/index.cfc/*</url-pattern>
    <url-pattern>/index.cfml/*</url-pattern>
</servlet-mapping>

I could slap a special mapping for it in there - something like the sketch below - but that's a daft way to deal with this. I'm going to just mark that test as "incomplete", and move on.
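Purely for illustration - I'm not actually doing this - the hack would be one more url-pattern targeting that specific script:

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <!-- [... existing url-pattern entries ...] -->
    <url-pattern>/pathInfoTest.cfm/*</url-pattern>
</servlet-mapping>

One extra mapping per .cfm file that needs path_info clearly doesn't scale, hence "daft".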

Thanks to Pete Freitag, Adam Tuttle, Zac Spitzer and Sean Corfield for putting me on the right direction for working out this particular "WTF, Lucee?" episode.


Speaking of "moving on", I said I'd get the code this far, but only progress onto the more CFML-oriented stuff if I still had will to live. Well Lucee has eroded that for now, so I'll get back to that part later, when I've stopped shaking my fist at the screen.

NB: this has become part of a series of articles, as things get more complicated, and require more effort on my part to achieve my end goal: Lucee/CFWheels/Docker series.

Righto.

--
Adam

Wednesday 24 February 2021

Docker: using TDD to initialise my app's DB with some data

G'day:

I'll start by saying I am not convinced this exercise really ought to be a TDD-oriented one, but I'm gonna approach it that way anyhow because I suspect I'm going to need to mess around a bit to get this working. Secondly, this is very much a log of what I'm (trying to) work on today, and I doubt there will be any shrewd insights going on, given I'm basically googling and RTFMing, then doing what the docs say.

The exercise here is to take the MariaDB database that I already have in my Docker set up (see "Creating a web site with Vue.js, Nginx, Symfony on PHP8 & MariaDB running in Docker containers - Part 5: MariaDB"), which is currently empty and only accessible via the root login; and add some baseline tables and data to it. At the same time also create a user for code to connect to the DB with so I don't need code using root access. Another thing I want to do is stop storing the DB passwords in the Docker .env file like I am now:

COMPOSE_PROJECT_NAME=fullStackExercise
DATABASE_ROOT_PASSWORD=123

The data I need is to fulfil an exercise I have given myself (well: it wasn't me who gave me the original exercise, but I'm reinventing it a bit here) to construct an event registration form (personal details, a selection of workshops to register for), save the details to the DB and echo back a success page. Simple stuff. Less so for me given I'm using tooling I'm still only learning (Vue.js, Symfony, Docker, Kahlan, Mocha, MariaDB).

For my tests, I can already derive a bunch of test specs from those first few paragraphs above, so let's put them together now in spec/integration/baselineDatabase.spec.php:

<?php

namespace adamCameron\fullStackExercise\spec\integration;

describe('Tests for registration database', function () {
    describe('Connectivity tests', function () {
        it('can connect to the database with environment-based credentials', function () {
        });
    });

    describe('Schema tests', function () {
        it('has a workshops table with the required schema', function () {
        });

        it('has a registrations table with the required schema', function () {
        });

        it('has a registeredWorkshops table with the required schema', function () {
        });
    });

    describe('Data tests', function () {
        it('has the required baseline workshop data', function () {
        });
    });
});

And I can now run those to see them… not be implemented:

root@fde4be76c908:/usr/share/fullstackExercise# composer spec -- --spec=spec/integration/baselineDatabase.spec.php --reporter=verbose
> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✓ it can connect to the database with environment-based credentials
    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema
    Data tests
      ✓ it has the required baseline workshop data


  Pending specifications: 5
  .spec/integration/baselineDatabase.spec.php, line 8
  .spec/integration/baselineDatabase.spec.php, line 13
  .spec/integration/baselineDatabase.spec.php, line 16
  .spec/integration/baselineDatabase.spec.php, line 19
  .spec/integration/baselineDatabase.spec.php, line 24


Expectations   : 0 Executed
Specifications : 5 Pending, 0 Excluded, 0 Skipped

Passed 0 of 0 PASS in 0.015 seconds (using 4MB)

root@fde4be76c908:/usr/share/fullstackExercise#

And now I can implement that first test:

describe('Tests for registration database', function () {

    $this->getConnectionDetailsFromEnvironment = function () {
        return (object) [
            'database' => $_ENV['MYSQL_DATABASE'],
            'user' => $_ENV['MYSQL_USER'],
            'password' => $_ENV['MYSQL_PASSWORD']
        ];
    };

    describe('Connectivity tests', function () {
        it('can connect to the database with environment-based credentials', function () {
            $connectionDetails = $this->getConnectionDetailsFromEnvironment();
            $connection = new PDO(
                "mysql:dbname=$connectionDetails->database;host=database.backend",
                $connectionDetails->user,
                $connectionDetails->password
            );
            $statement = $connection->query("SELECT 'OK' AS test FROM dual");
            $statement->execute();

            $testResult = $statement->fetch(PDO::FETCH_ASSOC);

            expect($testResult)->toContainKey('test');
            expect($testResult['test'])->toBe('OK');
        });
    });

There's not much to this. I'm reading the DB connectivity details from the environment variables Docker has set for me, and using those to run a simple query against the database, just to verify the DB is responding as expected. To be honest I don't think I need / ought to be using the environment variable for the database name here: that environment variable is just for MariaDB to create a DB of that name when it first starts up. In the app itself, we'll have a static value for the database name, because the app wants to use that exact database, not simply whatever DB is named in that environment variable. Hopefully you see the subtle difference in intent there. Anyhow, we now run our tests:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✖ it can connect to the database with environment-based credentials
        an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 11

        message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` Undefined array key \"MYSQL_DATABASE\""

          [NA] - spec/integration/baselineDatabase.spec.php, line 7 to 11
          […etc…]

Cool. Now we can sort those credentials out and watch that test pass. The first thing I have done is to update docker/.env (see above) to get rid of the root password, and add the other credentials MariaDB expects to initialise a database when its container is first built (see the "Environment Variables" section in mariadb - Docker Official Images for info about that):

COMPOSE_PROJECT_NAME=fullStackExercise
MYSQL_DATABASE=fullstackExercise
# the following are to be provided to `docker-compose up`
DATABASE_ROOT_PASSWORD=
MYSQL_USER=
MYSQL_PASSWORD=

Those empty entries are not necessary, I've just left them there for the sake of documentation. The bit I do actually need to do is in docker/docker-compose.yml. This is best shown with a diff I think:

$ git diff docker/docker-compose.yml
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index bc399a0..0eec553 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -24,7 +24,9 @@ services:
       context: ../backend
       dockerfile: ../docker/php-fpm/Dockerfile
     environment:
-      - DATABASE_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
+      - MYSQL_DATABASE=${MYSQL_DATABASE}
+      - MYSQL_USER=${MYSQL_USER}
+      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
     volumes:
       - ../backend/config:/usr/share/fullstackExercise/config
       - ../backend/public:/usr/share/fullstackExercise/public
@@ -52,6 +54,9 @@ services:
       context: ./mariadb
     environment:
       - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
+      - MYSQL_DATABASE=${MYSQL_DATABASE}
+      - MYSQL_USER=${MYSQL_USER}
+      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
     ports:
       - "3306:3306"
     volumes:

I've taken out PHP's access to the DB root password as it doesn't need it any more. There are two tests that will now fail, but they were only ever temporary ones until I did this work anyhow, so I'll be deleting them once I verify they now fail. And I've also added the three new environment variables to both the MariaDB service and the PHP one. MariaDB uses them to create the fullstackExercise DB, and PHP will use the same credentials to connect to it. I now have no DB credentials anywhere in the codebase. Instead, I pass them in when I first bring the containers up:

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise/docker$ DATABASE_ROOT_PASSWORD=123 MYSQL_USER=fullstackExercise MYSQL_PASSWORD=1234 docker-compose up --build --detach

This is not completely secure. One can still see the passwords if one terminals into the containers, eg:

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise/docker$ docker exec --interactive --tty fullstackexercise_php-fpm_1 /bin/bash
root@ac3872091c8e:/usr/share/fullstackExercise# set | grep MYSQL
MYSQL_DATABASE=fullstackExercise
MYSQL_PASSWORD=1234
MYSQL_USER=fullstackExercise
root@ac3872091c8e:/usr/share/fullstackExercise#

A better way would perhaps be to use Docker Secrets, but I could not work out how to get the values from the files it creates into environment variables in the docker-compose.yml file. I'll also admit I pretty much read the docs and went "yeah, CBA with that right now". It might be dead easy (UPDATE: just now when linking to the MariaDB docker image page a coupla paragraphs up, I noticed it's all actually explained there, and it is dead easy. I might look at doing this later then).
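For the record, going by that section of the image docs, the trick is the _FILE variants of the environment variables, plus the secrets support in docker-compose.yml. Something along these lines ought to do it (a sketch only: I've not tested it, I'm guessing at the service name, and the ./secrets/* file paths are hypothetical):

  mariadb:
    build:
      context: ./mariadb
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/database_root_password
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD_FILE=/run/secrets/mysql_password
    secrets:
      - database_root_password
      - mysql_password

secrets:
  database_root_password:
    file: ./secrets/database_root_password.txt
  mysql_password:
    file: ./secrets/mysql_password.txt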

Now I will run my tests again. My expectations are that that test that failed before will now be passing; and one each of the Kahlan and PHPUnit tests will start to fail because they are testing connecting to the DB using the root credentials, which I've removed.

root@ac3872091c8e:/usr/share/fullstackExercise# composer spec
> vendor/bin/kahlan

..........E.PPPP.                                                 17 / 17 (100%)


  Pending specifications: 4
  .spec/integration/baselineDatabase.spec.php, line 37
  .spec/integration/baselineDatabase.spec.php, line 40
  .spec/integration/baselineDatabase.spec.php, line 43
  .spec/integration/baselineDatabase.spec.php, line 48

Tests database availability
  ✖ it should return the expected database version
    an uncaught exception has been thrown in `spec/integration/database.spec.php` line 14

    message:`Kahlan\PhpErrorException` Code(0) with message "`E_WARNING` Undefined array key \"DATABASE_ROOT_PASSWORD\""

      [NA] - spec/integration/database.spec.php, line 11 to 14
      Kahlan\Filter\Filters::run() - vendor/kahlan/kahlan/src/Suite.php, line 236
      […etc…]


Expectations   : 18 Executed
Specifications : 4 Pending, 0 Excluded, 0 Skipped

Passed 12 of 13 FAIL (EXCEPTION: 1) in 0.491 seconds (using 6MB)

Script vendor/bin/kahlan handling the spec event returned with error code 255

This is good: only one failing test: the one we expect to fail, and it's failing for the right reason. And with PHPUnit:

PHPUnit 9.5.2 by Sebastian Bergmann and contributors.

.....E                                                              6 / 6 (100%)

Time: 00:00.268, Memory: 14.00 MB

There was 1 error:

1) adamCameron\fullStackExercise\tests\integration\DatabaseTest::testDatabaseVersion
Undefined array key "DATABASE_ROOT_PASSWORD"

/usr/share/fullstackExercise/tests/integration/DatabaseTest.php:16

ERRORS!
Tests: 6, Assertions: 13, Errors: 1.

Generating code coverage report in HTML format ... done [00:00.374]
Script vendor/bin/phpunit handling the test event returned with error code 2

I'll get rid of those failing tests. They are redundant now.

The next test cases we have to address are these ones:

    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema

Looking at the docs for MariaDB's Docker image ("Docker Official Images > mariadb > Initializing a fresh instance"), when the DB starts up, it looks for files in a docker-entrypoint-initdb.d directory, and runs any scripts it finds in there. This makes things easy.

However, let's not get ahead of ourselves. We need tests first. But first… a bit of an aside. I'm actually questioning the merits of these tests. They are handy while I'm doing the initial DB setup, though. Later, as the application develops, we'll have more finely-tuned integration tests that will implicitly test that the table schemata are correct; but I guess at the moment all we need to have is the schema (then some baseline data), so as transient tests I suppose they have some merit. I'm not sure. On one hand it might be overkill; on the other hand we're supposed to be developing the application iteratively, and these are a first iteration. I guess the situation is similar to the DB tests I had that were using the root connectivity details: for that iteration, that's where we were at. Now we've moved on, so those tests are redundant, and these new tests replace them. And these tests will likely be replaced in the next coupla iterations as we go. Anyhow: I'm writing them. Here we go.

describe('Schema tests', function () {
    $schemata = [
        [
            'tableName' => 'workshops',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'name', 'Type' => 'varchar(500)']
            ]
        ],
        [
            'tableName' => 'registrations',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'fullName', 'Type' => 'varchar(100)'],
                ['Field' => 'phoneNumber', 'Type' => 'varchar(50)'],
                ['Field' => 'emailAddress', 'Type' => 'varchar(320)'],
                ['Field' => 'password', 'Type' => 'varchar(255)'],
                ['Field' => 'ipAddress', 'Type' => 'varchar(15)'],
                ['Field' => 'uniqueCode', 'Type' => 'varchar(36)'],
                ['Field' => 'created', 'Type' => 'timestamp']
            ]
        ],
        [
            'tableName' => 'registeredWorkshops',
            'schema' => [
                ['Field' => 'id', 'Type' => 'int(11)'],
                ['Field' => 'registrationId', 'Type' => 'int(11)'],
                ['Field' => 'workshopId', 'Type' => 'int(11)']
            ]
        ]
    ];

    array_walk($schemata, function ($tableSchema) {
        $tableName = $tableSchema['tableName'];
        $expectedSchema = $tableSchema['schema'];

        it("has a $tableName table with the required schema", function () use ($tableName, $expectedSchema) {
            $statement = $this->connection->query("SHOW COLUMNS FROM $tableName");
            $statement->execute();

            $columns = $statement->fetchAll(PDO::FETCH_ASSOC);

            expect($columns)->toHaveLength(count($expectedSchema));
            foreach ($expectedSchema as $i => $column) {
                expect($columns[$i]['Field'])->toBe($expectedSchema[$i]['Field']);
                expect($columns[$i]['Type'])->toBe($expectedSchema[$i]['Type']);
            }
        });
    });
});

There was an intermediate refactoring here: initially I had three "hard-coded" cases, as listed further up (the first of which is sketched below). As I wrote the test for the second case I noticed I was duplicating everything from the first test except the table name and the details of the schema, so I extracted those as test data, and looped over them. All the test does here is to get the table's column description, and verify the columns match the names and types I expect. The expectations were taken directly from the requirement I had been given to implement.
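For what it's worth, before that refactoring the first case was just a hard-coded version of the same thing, more or less like this:

it('has a workshops table with the required schema', function () {
    $expectedSchema = [
        ['Field' => 'id', 'Type' => 'int(11)'],
        ['Field' => 'name', 'Type' => 'varchar(500)']
    ];

    $statement = $this->connection->query("SHOW COLUMNS FROM workshops");
    $statement->execute();

    $columns = $statement->fetchAll(PDO::FETCH_ASSOC);

    expect($columns)->toHaveLength(count($expectedSchema));
    foreach ($expectedSchema as $i => $column) {
        expect($columns[$i]['Field'])->toBe($column['Field']);
        expect($columns[$i]['Type'])->toBe($column['Type']);
    }
});

Repeat that three times with only the table name and the schema array changing, and the case for extracting the data and looping over it becomes pretty obvious.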

If I now run the tests, those three cases fail, as we'd expect given the tables don't yet exist:

Tests for registration database
  Schema tests
    ✖ it has a workshops table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.workshops' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]

    ✖ it has a registrations table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.registrations' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]

    ✖ it has a registeredWorkshops table with the required schema
      an uncaught exception has been thrown in `spec/integration/baselineDatabase.spec.php` line 74

      message:`PDOException` Code(42S02) with message "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'fullstackexercise.registeredWorkshops' doesn't exist"

        [NA] - spec/integration/baselineDatabase.spec.php, line 73 to 74
        […etc…]
[…etc…]

Now to add the tables. I've set up these files:

adam@DESKTOP-QV1A45U:/mnt/c/src/ttct$ tree docker/mariadb/do*
docker/mariadb/docker-entrypoint-initdb.d
├── 1.createAndPopulateWorkshops.sql
├── 2.createRegistrations.sql
└── 3.createRegisteredWorkshops.sql

Note: for now that first file is slightly misnamed, as it'll only have the DDL statement in it at the moment; the data-insertion will come in a subsequent step. The file contents are as follows:

/* docker/mariadb/docker-entrypoint-initdb.d/1.createAndPopulateWorkshops.sql */

USE fullstackExercise;

CREATE TABLE workshops (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    
    PRIMARY KEY (id)
) ENGINE=InnoDB;


/* docker/mariadb/docker-entrypoint-initdb.d/2.createRegistrations.sql */

USE fullstackExercise;

CREATE TABLE registrations (
   id INT NOT NULL AUTO_INCREMENT,
   fullName VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   phoneNumber VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   emailAddress VARCHAR(320) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   password VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
   ipAddress VARCHAR(15) NOT NULL,
   uniqueCode VARCHAR(36) NOT NULL DEFAULT (UUID()),
   created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
   
   PRIMARY KEY (id)
) ENGINE=InnoDB;


/* docker/mariadb/docker-entrypoint-initdb.d/3.createRegisteredWorkshops.sql */

USE fullstackExercise;

CREATE TABLE registeredWorkshops (
   id INT NOT NULL AUTO_INCREMENT,
   registrationId INT NOT NULL,
   workshopId INT NOT NULL,
   PRIMARY KEY (id),
   FOREIGN KEY (registrationId) REFERENCES registrations(id),
   FOREIGN KEY (workshopId) REFERENCES workshops(id)
);

And lastly I need to copy that directory into my MariaDB container when I build it (docker/mariadb/Dockerfile):

FROM mariadb:latest
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
CMD ["mysqld"]
EXPOSE 3306

After I rebuild my containers, I run the tests and we're all good:

    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema

Finally I need some seed data in the workshops table. First I'm going to write my test cases for this:

describe('Data tests', function () {
    it('has the required baseline workshop data', function () {
        $expectedWorkshops = [
            ['id' => '2', 'name' => 'TEST_WORKSHOP 1'],
            ['id' => '3', 'name' => 'TEST_WORKSHOP 2'],
            ['id' => '5', 'name' => 'TEST_WORKSHOP 3'],
            ['id' => '7', 'name' => 'TEST_WORKSHOP 4']
        ];

        $statement = $this->connection->query("SELECT id, name FROM workshops ORDER BY id");
        $statement->execute();
        $workshops = $statement->fetchAll(PDO::FETCH_ASSOC);

        expect($workshops)->toEqual($expectedWorkshops);
    });

    it('correctly auto-increments the ID on new insertions', function () {
        $expectedWorkshopName = 'TEST_WORKSHOP 5';

        $this->connection->beginTransaction();

        $statement = $this->connection->prepare(query: "INSERT INTO workshops (name) VALUES (:name)");
        $statement->execute(['name' => $expectedWorkshopName]);
        $id = $this->connection->lastInsertId();

        $statement = $this->connection->prepare("SELECT id, name FROM workshops WHERE id = :id");
        $statement->execute(['id' => $id]);
        $workshops = $statement->fetchAll(PDO::FETCH_ASSOC);

        expect($workshops)->toHaveLength(1);
        expect($workshops[0])->toContainKey('name');
        expect($workshops[0]['name'])->toBe($expectedWorkshopName);

        $this->connection->rollback();
    });
});

Those are reasonably self-explanatory. I need to insert four baseline workshop records, and in the first case I just SELECT the data and check it's what I expect it to be. The second case only occurred to me when I went to look at the changes I needed to make to the SQL in 1.createAndPopulateWorkshops.sql to insert that data. I needed to take the auto-increment off the table-create statement so I could insert records with the specific IDs I need, then after doing that I alter the table to make the ID auto-increment. I figured I had better test that that worked too. So I insert a new record (just the name, letting the DB handle the ID), get the ID back, and use that to fetch the whole record for that ID, verifying it's also got the correct name. I do not want that data cluttering my DB, so I put the whole thing in a transaction that rolls back when I'm done or if there's an error.

Running those, only the first one fails:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php'

F.                                                                  2 / 2 (100%)


Tests for registration database
  Data tests
    ✖ it has the required baseline workshop data
      expect->toEqual() failed in `.spec/integration/baselineDatabase.spec.php` line 102

      It expect actual to be equal to expected (==).

      actual:
        (array) []
      expected:
        (array) [
            0 => [
                "id" => "2",
                "name" => "TEST_WORKSHOP 1"
            ],
            […etc…]


Expectations   : 4 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 1 of 2 FAIL (FAILURE: 1) in 0.025 seconds (using 4MB)

Focus Mode Detected in the following files:
fdescribe - spec/integration/baselineDatabase.spec.php, line 89 to 124
exit(-1)

Script vendor/bin/kahlan handling the spec event returned with error code 255
root@e7d6aa6cf839:/usr/share/fullstackExercise#

This puzzled me at first, but then it occurred to me that the auto-increment test case really ought to have been added when I did the first round of tests before creating the table, because that is when that functionality was added. All I'm doing with the changes I'm about to make is inserting some data-insertion code into the script. The ID is already auto-incrementing; all I'm changing is when that's applied: from the table-creation statement to its own statement after the inserts are done. See below for what I mean.

And I'll now update that docker/mariadb/docker-entrypoint-initdb.d/1.createAndPopulateWorkshops.sql to also insert the baseline data:

USE fullstackExercise;

CREATE TABLE workshops (
    id INT NOT NULL /* AUTO_INCREMENT <- this has been removed from here */,
    name VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    
    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO workshops (id, name)
VALUES
    (2, 'TEST_WORKSHOP 1'),
    (3, 'TEST_WORKSHOP 2'),
    (5, 'TEST_WORKSHOP 3'),
    (7, 'TEST_WORKSHOP 4')
;

ALTER TABLE workshops MODIFY COLUMN id INT auto_increment;

(Note I've removed the auto-increment from the ID column in the table-creation statement, so the seed data can have specific IDs. Once the insert is done, I alter the column to be auto-increment.)

Once I rebuild my containers, all the tests now pass:

> vendor/bin/kahlan '--spec=spec/integration/baselineDatabase.spec.php' '--reporter=verbose'


  Tests for registration database
    Connectivity tests
      ✓ it can connect to the database with environment-based credentials
    Schema tests
      ✓ it has a workshops table with the required schema
      ✓ it has a registrations table with the required schema
      ✓ it has a registeredWorkshops table with the required schema
    Data tests
      ✓ it has the required baseline workshop data
      ✓ it correctly auto-increments the ID on new insertions



Expectations   : 35 Executed
Specifications : 0 Pending, 0 Excluded, 0 Skipped

Passed 6 of 6 PASS in 0.033 seconds (using 5MB)

root@44850303b17a:/usr/share/fullstackExercise#

And I think that's about it. I'm not doing anything with the data yet, but that'll start to be fleshed out in the next article (or maybe the following one. Not sure). This was just an exercise in doing some stuff with Docker and MariaDB, and thinking about the merits of TDDing exercises like this. I think it was worth it, especially during this early phase of working with these containers as I'm still reconfiguring stuff a lot, so it's good to know things don't get messed up when I'm monkeying with stuff.

Righto.

--
Adam

Tuesday 9 February 2021

Part 10: An article about moving files and changing configuration

G'day:

I hope I correctly set the excitement expectations with the title of this one. It's gonna be dead dull. In the previous article ("I mess up how I configure my Docker containers"), I detailed a fundamental flaw in how I was configuring my Dockerfiles and docker-compose.yml file, which pretty much had a logic-conflict in them. Instead of using the fullstack-exercise codebase I've been working on in this series, I used a cut down one that focused specifically on the issue. In this article I am detailing the file-system and config reorganisation I then performed on the fullstack-exercise codebase to fix the issue. TBH I'd probably not bother reading it if I was you (my fictitious reader), cos it's even more dry than my usual efforts. I'm pretty much only writing it out of a sense of completeistness (!), and also in case someone happens to be reading along with the rest of the series and - if they came to the next article - suddenly thought "hang on all the files have moved around? What subterfuge is this?". So it's a full-disclosure exercise I guess. If you do insist on reading this, read the previous article first though, eh? Good luck.

As per usual: I'll remind you that this is part 10 of an 11(?) part series, with the earlier articles linked below:

  1. Intro / Nginx
  2. PHP
  3. PHPUnit
  4. Tweaks I made to my Bash environment in my Docker containers
  5. MariaDB
  6. Installing Symfony
  7. Using Symfony
  8. Testing a simple web page built with Vue.js using Mocha, Chai and Puppeteer
  9. I mess up how I configure my Docker containers
  10. An article about moving files and changing configuration (this article)
  11. Setting up a Vue.js project and integrating some existing code into it
  12. Unit testing Vue.js components

The TL;DR of the previous article is kinda:

Don't map volumes in docker-compose.yml over the top of the working directory specified in the Dockerfile, if the Dockerfile actually creates stuff you need in that working directory (like a node_modules subdirectory, for example). This is because a volume mapping replaces what's there; it does not merge with it.

Schoolkid dumbarsery from me there.
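To illustrate the trap with a stripped-down (and entirely hypothetical) example, a Dockerfile like this creates node_modules in its working directory at build time:

FROM node
WORKDIR /usr/share/fullstackExercise/
COPY package.json package-lock.json ./
RUN npm install

And then a volume mapping like this in docker-compose.yml mounts the host directory over that same path at run time, replacing its contents wholesale - node_modules included:

  node:
    build:
      context: ./node
    volumes:
      - ..:/usr/share/fullstackExercise

Hence the reorganisation below: stuff the image needs at build time gets copied in via the Dockerfile, and only the code directories get mapped in as volumes.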

Now I'm gonna apply the lessons learned there to the main codebase for this project. This is the directory structure I had previously (on github). Note some stuff not relevant to this exercise has been removed:

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise$ tree -F --dirsfirst -L 2
.
├── bin/
│   └── console*
├── config/
│   └── [… Symfony stuff …]
├── docker/ [… subdirectory contents elided for brevity …]
│   ├── mariadb/
│   ├── nginx/
│   ├── node/
│   ├── php-fpm/
│   └── docker-compose.yml*
├── public/
│   ├── button.html*
│   ├── gdayWorld.html*
│   ├── gdayWorld.php*
│   ├── gdayWorldViaVue.html*
│   ├── githubProfiles.html*
│   ├── index.php*
│   ├── invalidNotificationType.html*
│   └── notification.html*
├── src/
│   ├── Kernel.php*
│   └── MyClass.php*
├── tests/
│   ├── functional/
│   │   ├── public/
│   │   │   ├── ButtonTest.js*
│   │   │   ├── GdayWorldViaVueTest.js*
│   │   │   ├── GithubProfilesTest.js*
│   │   │   ├── NotificationTest.js*
│   │   │   ├── PhpTest.php*
│   │   │   └── WebServerTest.php*
│   │   └── SymfonyTest.php*
│   ├── integration/
│   │   └── DatabaseTest.php*
│   └── unit/
│       └── MyClassTest.php*
├── LICENSE*
├── README.md*
├── composer.json*
├── composer.lock*
├── package-lock.json*
├── package.json*
├── phpcs.xml.dist*
├── phpmd.xml*
├── phpunit.xml.dist*
└── symfony.lock*

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise$

The two points that make it most obvious that things are poorly-organised here are:

  • the root directory, which has a mix of stuff intended for the PHP container and other stuff intended for the Node.js container;
  • and, similarly, the tests subdirectory, which has a mix of back-end PHPUnit tests and front-end Mocha tests in the same substructure.

Where there's a mess, or a mix of things intended for two different purposes in the same place, it's a flag that something's possibly not right. Now, I will be honest and say that this decision was originally purposeful on my part. I'm not distinguishing between the front-end part of the app (the Node.js, Vue and Mocha stuff) and the back-end running Symfony and PHP. The front-end stuff is the web site for this app; the back-end will be the web service supporting the front-end. They are not two distinct apps in my view (or that's one way of looking at it, anyhow). This is not to say they can't still be organised a bit more coherently than I have done.

To separate my concerns, I've decided to move all the code-related stuff into one of backend or frontend subdirectories. First the backend directory (see backend on Github):

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise$ tree -F --dirsfirst
.
└── backend/
   ├── config/
   │   ├── packages/
   │   │   ├── prod/
   │   │   │   └── routing.yaml*
   │   │   ├── test/
   │   │   │   └── framework.yaml*
   │   │   ├── cache.yaml*
   │   │   ├── framework.yaml*
   │   │   └── routing.yaml*
   │   ├── routes/
   │   │   └── dev/
   │   │       └── framework.yaml*
   │   ├── bundles.php*
   │   ├── preload.php*
   │   ├── routes.yaml*
   │   └── services.yaml*
   ├── public/
   │   ├── test-coverage-report/
   │   ├── gdayWorld.html*
   │   ├── gdayWorld.php*
   │   └── index.php*
   ├── src/
   │   ├── Controller/
   │   │   └── GreetingsController.php*
   │   ├── Kernel.php*
   │   └── MyClass.php*
   └── tests/
       ├── functional/
       │   ├── public/
       │   │   ├── PhpTest.php*
       │   │   └── WebServerTest.php*
       │   └── SymfonyTest.php*
       ├── integration/
       │   └── DatabaseTest.php*
       ├── unit/
       │   └── MyClassTest.php*
       └── bootstrap.php*

In the backend subdirectory I have all the PHP / Symfony / PHPUnit stuff, plus a public directory that is purely for the back-end web root. And - below - the Docker Nginx config now has separate websites for back-end and front-end; and in the php-fpm section we now have all the PHP / Symfony config stuff moved out of the application root, and into its own specific root:

└── docker/
    ├── nginx/
    │   ├── sites/
    │   │   ├── backend.conf*
    │   │   └── frontend.conf*
    │   └── Dockerfile*
    ├── php-fpm/
    │   ├── app_root/
    │   │   ├── bin/
    │   │   │   └── console*
    │   │   ├── var/
    │   │   │   └── cache/
    │   │   ├── composer.json*
    │   │   ├── composer.lock*
    │   │   ├── phpcs.xml.dist*
    │   │   ├── phpmd.xml*
    │   │   ├── phpunit.xml.dist*
    │   │   └── symfony.lock*
    │   ├── root_home/
    │   ├── Dockerfile*
    │   └── phpunit-code-coverage-xdebug.ini*
    └── docker-compose.yml*

The key part of the Nginx configuration changes here is that the two sites now have distinct host names: fullstackexercise.backend (see backend.conf on Github) for the PHP-oriented stuff, and fullstackexercise.frontend (see frontend.conf on Github) for the Vue- / Node-based stuff. Each website only serves the type of files appropriate to its purpose.

The Nginx Dockerfile (on Github) has not changed significantly, but the PHP-FPM one (on Github) now copies all the application-root stuff into the working directory, rather than the docker-compose.yml file using a volume to do this:

WORKDIR  /usr/share/fullstackExercise/
COPY ./app_root/ /usr/share/fullstackExercise/

It's worth looking at the whole lot of the service definitions for these from docker-compose.yml:

services:
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../frontend/public:/usr/share/nginx/html/frontend
      - ../backend/public:/usr/share/nginx/html/backend
      - ../log:/var/log
      - ./nginx/root_home:/root
    ports:
      - "80:80"
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    networks:
      backend:
        aliases:
          - fullstackexercise.frontend
          - fullstackexercise.backend

  php-fpm:
    build:
      context: ./php-fpm
    environment:
      - DATABASE_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    volumes:
      - ../backend/config:/usr/share/fullstackExercise/config
      - ../backend/public:/usr/share/fullstackExercise/public
      - ../backend/src:/usr/share/fullstackExercise/src
      - ../backend/tests:/usr/share/fullstackExercise/tests
      - ./php-fpm/root_home:/root
    stdin_open: true
    tty: true
    networks:
      - backend

For Nginx we are mapping two separate volumes into the html directory: as per above, one for the back-end site, one for the front-end site. These are then used as the webroots in the site configuration for each website. We are also setting an alias for each website; this is just so the other containers can access the websites too.

In the PHP block, we now have separate volumes for each of the code directories in the application root (note that the config sub-directory there is Symfony app config, not the likes of the composer.json, phpunit.xml.dist etc stuff that gets copied to the application root by - spoilers - php-fpm/Dockerfile). And, yeah, now the Dockerfile (on Github) for the PHP stuff. The only significant line is this one:

COPY ./app_root/ /usr/share/fullstackExercise/

That copies all the config files the PHP components need to run into the application root. One downside of this is that I can't make on-the-fly changes to things like the PHPUnit config from within PHPStorm: I need to use vi in the container, test it, then copy it back to the host machine. But that stuff changes so seldom it's fine by me.
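The copy-back step is nothing clever, by the way: it's just a docker cp run from the repo root on the host, something like this (adjust the container name to whatever yours ends up being):

docker cp fullstackexercise_php-fpm_1:/usr/share/fullstackExercise/phpunit.xml.dist docker/php-fpm/app_root/phpunit.xml.dist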

The changes on the front-end side of things are along the same lines. Here's the file structure (and on Github):

adam@DESKTOP-QV1A45U:/mnt/c/src/fullstackExercise$ tree -F --dirsfirst
. # a lot of stuff has been removed for the sake of brevity
├── docker/
│   └── node/
│       ├── config/
│       │   ├── babel.config.js*
│       │   ├── package-lock.json*
│       │   ├── package.json*
│       │   └── vue.config.js*
│       └── Dockerfile*
└── frontend/
    ├── public/
    │   ├── assets/
    │   │   └── scripts/
    │   │       ├── button.js*
    │   │       ├── gdayWorldViaVue.js*
    │   │       ├── githubProfiles.js*
    │   │       └── notification.js*
    │   ├── button.html*
    │   ├── gdayWorld.html*
    │   ├── gdayWorldViaVue.html*
    │   ├── githubProfiles.html*
    │   ├── invalidNotificationType.html*
    │   └── notification.html*
    ├── src/
    └── test/
        └── functional/
            ├── ButtonTest.js*
            ├── GdayWorldViaVueTest.js*
            ├── GithubProfilesTest.js*
            └── NotificationTest.js*

And the relevant bit of the node/Dockerfile (on Github):

WORKDIR  /usr/share/fullstackExercise/
COPY config/* ./

And docker-compose.yml (on Github):

  node:
    build:
      context: ./node
    environment:
      - GITHUB_PERSONAL_ACCESS_TOKEN=${GITHUB_PERSONAL_ACCESS_TOKEN}
    volumes:
      - ../frontend/public:/usr/share/fullstackExercise/public
      - ../frontend/src:/usr/share/fullstackExercise/src
      - ../frontend/test:/usr/share/fullstackExercise/test
      - ./node/root_home:/root
    stdin_open: true
    tty: true
    networks:
      backend:
        aliases:
          - vuejs.backend

Here we see how - same as with the PHP stuff - we copy the config files over in Dockerfile, and then map volumes for the code directories in docker-compose.yml.

That's pretty much it really. The good thing with all this is that, because I have full test coverage of my code, plus some functional and integration tests as well, I have tests covering all the config and all the interactions between the containers. So at any moment when I go to refactor something - because all this really is, is an exercise in refactoring - I can check at every step that everything still works, or spend time working out why something doesn't. But that safety net is always there.

OK. I promise the next article is actually gonna get around to looking at Vue.js components, testing thereof, and hopefully draw a line under this series. BTW if you read this article all the way to here, you're bloody weird. But well done ;-)

Righto.

--
Adam