
Saturday 8 April 2023

Changing where I home my source code dramatically speeds up my Windows / WSL2 / Docker environment

G'day:

This is more an admission of "not initially thinking things through" on my part, but the outcome has helped me a lot, so in case there are others out there who don't think things through, maybe this will be helpful to them as well.

Or people can just point and laugh at me for being so thick.

Either way, perhaps someone will get something out of this.

My dev environment is Windows (nono, that's not the "not thinking things through" bit, just behave please). All my applications run in Docker containers, and the way I get the code into the container during dev is via a volume from my file system. For example this snippet from one of my docker-compose.yml files:

version: "3"
services:

    # ...

    php:
        build:
            context: php
            dockerfile: Dockerfile

        env_file:
            - envVars.public
            - envVars.private

        stdin_open: true
        tty: true

        volumes:
            - ..:/var/www

I'm just using a volume there to mount my app directory as /var/www in the container.

So the source code for the app is in - say - C:\src\myApp.

When I'm building and starting my containers, I drop into a shell in WSL, navigate to /mnt/c/src/myApp/docker, and do the docker compose up from there.

On Windows 10 and with older versions of WSL2 and Docker, this worked reasonably well. The app was a bit slow, but only as much as a shrug seemed to be a reasonable reaction to it. It's only dev.

When I migrated to Windows 11 things slowed down a chunk more, and it's been getting progressively worse. I've been working on a Symfony app recently, and clearing its cache is taking about 3-4min. Clearly this is ballocks cos it's PHP and nothing is measured in minutes with PHP.

Also my rig was comparatively slower than the other bods in my team. For me the unit tests in our CFML project have gone from taking - about a year ago - 5min to run (already not great) to about 10min now. Obviously a lot of this is that the tests we inherited were not great (almost all hit the DB), and we've also been adding a lot more tests in that intervening year. Recently though I found out that for the other team members it was also slow, but by "slow" they meant something like 3min. Oh I wish they only took me 3min to run.

Clearly something is wrong on this machine. It's 4yrs old, but it was reasonably high spec when I bought it, and its drive is an SSD. So: no excuses there. And it's not like I'm Bitcoin mining; I'm just doing file system operations.

Whatever it is: I need to fix it.

I concluded it was something to do with misconfiguration of Docker or WSL making file operations on my host machine's file system dog slow when run from the container. I googled around a bit and it seems a lot of other people have had similar problems; but various settings, registry hacks, and even disabling Windows Defender (not a viable solution long-term, but something to try) were not helping.

Then someone mentioned "when the files are in the native part of the WSL file system, not the /mnt/c partition, then the overhead of the WSL->Windows file system processing doesn't occur". Their solution was to develop the code locally, then automatically deploy it via SSH into the container.

At the same time, I read that whilst there is the /mnt/c mount inside WSL, there is also the reverse: \\wsl.localhost points to the WSL file system, specifically for me \\wsl.localhost\Ubuntu is the filesystem for the Ubuntu distro I am using.

Putting two and two together to see how close to four I could get it, I did this (there's a command-line sketch of it after the list):

  • Got rid of my code from C: drive.
  • Instead: I checked-out my code within WSL into ~/src/myApp.
  • Ran all my docker stuff from there, in ~/src/myApp/docker.
  • In VSCode and IntelliJ, homed my projects in \\wsl.localhost\Ubuntu\home\adam\src\myApp.
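
In shell-command terms, that sequence looks something like this (the repo URL and paths here are illustrative, not the real ones):

# from a WSL (Ubuntu) shell
mkdir -p ~/src
cd ~/src
git clone git@github.com:example/myApp.git myApp   # illustrative URL
cd ~/src/myApp/docker
docker compose up --detach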

When I run those tests that before took >10min to run, now they take around 50sec. That is more than an order of magnitude faster.

In my Symfony project the cache-clear now takes a few seconds. And the tests there run in a second or so too.

I realise I am perhaps inheriting some slowness in reverse by accessing \\wsl.localhost\Ubuntu from Windows, but I am only dealing with occasional file edits and such like. Speed there is not a problem. Not one I could perceive anyhow.

I wish I had sat down to sort this out a few months back now. I had aimlessly googled in the past for 10min or so trying to find an easy silver bullet, but never found it and each time I looked I saw the same stuff. Today I rolled up my sleeves and said "right, I'm fixing this", and after about an extra 45min of googling and trying stuff (and then backing-out each thing that didn't work again), I landed on the solution.

Righto.

--
Adam

Sunday 22 January 2023

Docker: adding a MariaDB container to my PHP & Nginx ones

G'day:

I'm pretty much just noting down how I've progressed my PHP8 test app in this one (see PHP: returning to PHP and setting up a PHP8 dev environment and other articles around this date tagged with the PHP8 label). I need a DB added to the PHP8 and Nginx containers I already have, for the next bit of stuff I want to look at.


docker/docker-compose.yml

I've added a mariadb service, and set some environment variables in the PHP8 service as well:

version: "3"
services:
  nginx:
    # […]

  php:
    build:
      context: php
      dockerfile: Dockerfile

    environment:
      - MARIADB_DATABASE=${MARIADB_DATABASE}
      - MARIADB_USER=${MARIADB_USER}
      - MARIADB_PASSWORD=${MARIADB_PASSWORD}

    stdin_open: true
    tty: true

    volumes:
      - ..:/var/www

  mariadb:
    build:
      context: mariadb
      dockerfile: Dockerfile

    environment:
      - MARIADB_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MARIADB_DATABASE=${MARIADB_DATABASE}
      - MARIADB_USER=${MARIADB_USER}
      - MARIADB_PASSWORD=${MARIADB_PASSWORD}

    ports:
      - "3382:3306"

    stdin_open: true
    tty: true

    volumes:
      - mariaDbData:/var/lib/mariadb

volumes:
  mariaDbData:

Those env vars are ones the MariaDB image docs on Dockerhub mandate. I'm also passing them into the PHP container, so the app can read them from its environment rather than needing them recorded anywhere.
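
To sanity-check that those variables do make it into the PHP container, something like this does the job (the container name follows docker-compose's project-service-index naming):

docker exec php8-php-1 printenv | grep MARIADB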


docker/.env

COMPOSE_PROJECT_NAME=php8

MARIADB_DATABASE=db1
MARIADB_USER=user1

# the following are to be provided to `docker-compose up`
MARIADB_ROOT_PASSWORD=
MARIADB_PASSWORD=

The only notable thing here is that - because this file is going into source control - I am not specifying the passwords; I'm just signifying they need to exist.


docker/mariadb/Dockerfile

FROM mariadb:latest

COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/

CMD ["mysqld"]

EXPOSE 3306

And in ./docker-entrypoint-initdb.d/ I have these:

# docker/mariadb/docker-entrypoint-initdb.d/1.createAndPopulateTestTable.sql

USE db1;

CREATE TABLE test (
    id INT NOT NULL,
    value VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,

    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO test (id, value)
VALUES
    (101, 'Test row 1'),
    (102, 'Test row 2')
;

ALTER TABLE test MODIFY COLUMN id INT auto_increment;
# docker/mariadb/docker-entrypoint-initdb.d/2.createAndPopulateNumbersTable.sql
USE db1;

CREATE TABLE numbers (
    id INT NOT NULL,
    en VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    mi VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,

    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO numbers (id, en, mi)
VALUES
    (1, 'one', 'tahi'),
    (2, 'two', 'rua'),
    (3, 'three', 'toru'),
    (4, 'four', 'wha'),
    (5, 'five', 'rima'),
    (6, 'six', 'ono')
;

ALTER TABLE numbers MODIFY COLUMN id INT auto_increment;

I'm seeding the DB with some test data. Any files dropped into that /docker-entrypoint-initdb.d/ directory in the image will be picked up by the MariaDB process when it first creates the DB (see the docs for the image again: Docker hub › MariaDB › Initializing a fresh instance).
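
Once the containers are up, a quick way to eyeball that the seed data landed is to query it via the client in the MariaDB container. Something along these lines (container name per docker-compose's naming; the user and DB come from .env, and the password is whatever got passed to docker-compose up):

docker exec php8-mariadb-1 mariadb -uuser1 -p1234 db1 -e "SELECT * FROM test"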


Shell scripts

Because this rig requires one to pass passwords to docker-compose up, I've created a coupla shell scripts to remind me to do it right:

#!/bin/bash
# docker/bin/rebuildContainers.sh

# usage
# cd to directory containing docker-compose.yml
# bin/rebuildContainers.sh [DB root password] [DB user password]
# EG:
# cd ~/src/php8/docker
# bin/rebuildContainers.sh 123 1234

clear; printf "\033[3J"
docker-compose down --remove-orphans --volumes
docker-compose build --no-cache
MARIADB_ROOT_PASSWORD=$1 MARIADB_PASSWORD=$2 docker-compose up --force-recreate --detach
#!/bin/bash
# docker/bin/restartContainers.sh

# usage
# cd to directory containing docker-compose.yml
# bin/restartContainers.sh [DB root password] [DB user password]
# use same passwords as when initially calling rebuildContainers.sh

# EG:
# cd ~/src/php8/docker
# bin/restartContainers.sh 123 1234

clear; printf "\033[3J"
docker-compose stop
docker-compose up --detach nginx
MARIADB_PASSWORD=$2 docker-compose up --detach php
MARIADB_ROOT_PASSWORD=$1 MARIADB_PASSWORD=$2 docker-compose up --detach mariadb

This just saves some typing. In all honesty I am using 123 and 1234 for the respective passwords, but it doesn't matter. It's good practice to not have passwords anywhere in source code, and this seems a reasonable way to make sure the values end up where they need to be.


readme.md

I'll spare you the content here (the heading there is linked to the file), all I did to that was update my installation instructions to use the docker/bin/rebuildContainers.sh script instead of individual statements, and added a section about docker/bin/restartContainers.sh.


Test

Where would I be without a test. I've thrown a quick one together to test that the test data is there. And I will run this in conjunction with the rest of the tests, to make sure I have not caused any regressions.

// test/integration/DbTest.php

namespace adamcameron\php8\test\integration;

use Doctrine\DBAL\Connection;
use Doctrine\DBAL\DriverManager;
use PHPUnit\Framework\TestCase;
use \stdClass;

/** @testdox Tests the stub DB */
class DbTest extends TestCase
{

    /** @testdox it can fetch records from the test table */
    public function testFetchRecords()
    {
        $expectedRecords = [
            ["id" => 101, "value" => "Test row 1"],
            ["id" => 102, "value" => "Test row 2"]
        ];

        $connection = $this->getDbalConnection();
        $result = $connection->executeQuery("SELECT id, value FROM test ORDER BY id");

        $actualRecords = $result->fetchAllAssociative();

        $this->assertEquals($expectedRecords, $actualRecords);
    }

    private function getDbalConnection() : Connection
    {
        $parameters = $this->getConnectionParameters();
        return DriverManager::getConnection([
            'dbname' => $parameters->database,
            'user' => $parameters->username,
            'password' => $parameters->password,
            'host' => $parameters->host,
            'port' => $parameters->port,
            'driver' => 'pdo_mysql'
        ]);
    }

    private function getConnectionParameters() : stdClass
    {
        return (object) [
            "host" => "mariadb",
            "port" => "3306",
            "database" => getenv("MARIADB_DATABASE"),
            "username" => getenv("MARIADB_USER"),
            "password" => getenv("MARIADB_PASSWORD")
        ];
    }
}

All pretty straightforward. Note how I'm reading the DB info from the environment variables in getConnectionParameters. Take my word for it for now on the DB-handling code. I'll get to that in a different article.


That's it. Nothing insightful. I'm just documenting what I've done and why.

Righto.

--
Adam

Saturday 21 January 2023

PHP: returning to PHP and setting up a PHP8 dev environment

G'day:

I need to do some PHP work, and for that I need to have a PHP dev environment. I'm very rusty when it comes to PHP - I've not touched it for 2-3 years or so and my old brain doesn't hold on to things very well - and since that time I have shifted to using Docker for my environments anyhow. I've never used PHP in Docker before. So there's a challenge. And what's this? PHP is now up to version 8.2, with 8.3 being tested. The last time I touched PHP, 7.2 was the new thing (we were still mostly on 5.5 at that time, that said).

Therefore I have a mini project ahead of me:

  • Get PHP8.2 running in a Docker container.
  • Get Nginx running in a different container, proxying requests to the PHP one.
  • Have Composer up and running.
  • So I can install PHPUnit.
  • And run some basic tests of the installation.
  • With code-coverage reporting on the tests (code coverage requires a debug module to be installed and running too).
  • Also get PHPMD and PHPCS running too.
  • Bonus: be able to run the tests from my IDE, on my host machine.

Success here will be to be able to view the HTML code coverage report, served by Nginx, showing code being covered by testing.

Full disclosure: I did all this a few nights ago, and I am repeating the exercise now for the purposes of this article.

Application file structure

This shows the file system layout I'm aiming for:

/var/www# tree -L 1
.
|-- docker
|-- html
|-- src
|-- test
`-- vendor

5 directories, 0 files
/var/www#
  • docker. Docker stuff like docker-compose.yml and sub-directories for the various containers' Dockerfiles and other config / assets are in here.
  • html. Files that will be served by Nginx go in here.
  • src. Application code goes here.
  • test. Test code goes here.
  • vendor. The app's Composer dependencies go in here.

This is all standard PHP-app stuff, except my personal decision of how to organise the Docker files.


PHP in a container

docker-compose.yml

The docker-compose.yml service definition is pretty simple:

version: "3"

services:
  php:
    build:
      context: php
      dockerfile: Dockerfile

    stdin_open: true
    tty: true

    volumes:
      - ..:/var/www

/var/www is the directory the container expects to see PHP stuff in, so I ran with it. It doesn't really matter.

.env

Oh I have a wee .env file too:

COMPOSE_PROJECT_NAME=php8

Just so the container names are a bit more on-point when they get created.

Dockerfile

The Dockerfile, on the other hand, is a bit complicated, and took me ages to google all the crap I needed to get together to make PHP 8.2 work in a container with a real-world set of extensions loaded, etc. Deep breath…

FROM php:8.2.1-fpm

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "zip", "unzip", "git", "vim"]

COPY php.ini /usr/local/etc/php/php.ini

COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

RUN pecl install xdebug && docker-php-ext-enable xdebug
COPY conf.d/xdebug.ini /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
COPY conf.d/error_reporting.ini /usr/local/etc/php/conf.d/error_reporting.ini

RUN apt-get install -y libicu-dev && docker-php-ext-configure intl && docker-php-ext-install intl

RUN ["apt-get", "install", "-y", "libz-dev", "libzip-dev"]
RUN docker-php-ext-configure zip && docker-php-ext-install zip
RUN docker-php-ext-configure bcmath && docker-php-ext-install bcmath
RUN docker-php-ext-configure pdo_mysql && docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure opcache && docker-php-ext-install opcache

RUN curl -1sLf 'https://dl.cloudsmith.io/public/symfony/stable/setup.deb.sh' | bash
RUN ["apt-get", "install", "-y", "symfony-cli"]

WORKDIR /var/www
ENV COMPOSER_ALLOW_SUPERUSER 1

I'll go line-by-line, where there is anything noteworthy, or to explain my decisions.

  • php-fpm. I readily concede I have no idea what all the tag variants of PHP images are on Docker Hub. But I have used php-fpm in the past and know it to work. So: running with it. I am specifically not using the Alpine variant as it doesn't come with BASH, and life is too short to negotiate ASH instead. And I am not trying to economise on disk space for this container anyhow.
  • Baseline APT stuff. Composer needs zip, unzip and git (I learned this by trial and error). I need vim.
  • PHP doesn't have an active php.ini file by default, although it needs one. It ships with php.ini-development and php.ini-production, and it's down to the dev to pick which to use. This file is based on php.ini-development, with the following changes (mostly from recommendations in Symfony › Performance › Use the OPcache Byte Code Cache):
    • realpath_cache_size = 4096k
    • realpath_cache_ttl = 600
    • date.timezone = Europe/London
    • opcache.enable=1
    • opcache.memory_consumption=256
    • opcache.max_accelerated_files=20000

    These are all just a matter of "uncommenting the example setting and tweaking its value": normal php.ini stuff. No doubt I will further tweak that as I go, but that's a start.
  • Install Composer. It seems odd that Composer isn't ubiquitous enough for there to be an APT package for it.
  • Install Xdebug. PHPUnit needs this for code coverage analysis. Plus at some stage I might start coding like a grown-up and use line debugging. Maybe.
  • I was following along the instructions on "Setup Step Debugging in PHP with Xdebug 3 and Docker Compose" to install Xdebug, and it suggested these settings to use. Yeah cool. I do not know any better. See below this list for the file contents.
  • All of this lot are just libraries that the cited PHP extensions need to be able to run.
  • The ultimate object of the exercise (well: the next part of the exercise) is to get Symfony installed in this app. Whilst setting up the PHP extensions I knew I would be wanting, I thought to look up what Symfony would need too, and its guidance was to install the Symfony CLI and it would tell me (via symfony check:requirements; there's a sketch of running that just after this list). Hence installing this now. It was Symfony that reminded me to install all the highlighted extensions. Handy. It's also handy that the PHP Docker image comes with those docker-php-ext-configure and docker-php-ext-install utils, as it makes it a lot easier. There's also a bit of dependency-heck (not quite bad enough to use the word "hell" here) going on installing them, cos sometimes - like with libz-dev and libzip-dev - there are upstream dependencies needed too. I think I got off pretty lightly here, just needing those two.
  • /var/www is where I want to land when I start a shell on the container.
  • I need to set this otherwise Composer complains about installing stuff as root. This'd be an issue in a production container, maybe. But it's not an issue on dev IMO.
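
For what it's worth, once the container is built, that Symfony requirements check can be run from the host along these lines (container name as per the compose project naming):

docker exec php8-php-1 symfony check:requirements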

Here are those PHP config files I mentioned above in the Xdebug bit:

# docker/php/conf.d/xdebug.ini

zend_extension=xdebug

[xdebug]
xdebug.mode=develop,debug,coverage
xdebug.client_host=host.docker.internal
xdebug.start_with_request=no
  • I added coverage to this, for PHPUnit.
  • The suggested setting for start_with_request was yes for this, but this meant IntelliJ would interrupt PHPUnit every time I ran my tests from the shell, so I've switched it off.
# docker/php/conf.d/error_reporting.ini
error_reporting=E_ALL

Normally I'd set this directly in php.ini, but I actually did this part of the config before I remembered about php.ini needing to be configured, so stuck with it.


After doing that lot I could build the container and bring it up, and run composer install:

/mnt/c/src/containers/php8/docker$ docker-compose build
[+] Building 4.0s (25/25) FINISHED
[...]
 => exporting to image                                                                                                                                                                                      0.1s
 => => exporting layers                                                                                                                                                                                     0.0s
 => => writing image sha256:3e9dfd6aca1527f8d0906a0d9f2b2ec2c74ddc0bd9ea9d2b8f0d8b1dce773951                                                                                                                0.0s
 => => naming to docker.io/library/php8-php                                                                                                                                                                 0.0s

/mnt/c/src/containers/php8/docker$ docker compose up --detach
[+] Running 2/2
 ⠿ Network php8_default  Created                                                                                                                                                                            0.0s
 ⠿ Container php8-php-1  Started                                                                                                                                                                            0.5s

/mnt/c/src/containers/php8/docker$ docker exec -it php8-php-1 /bin/bash

/var/www# composer install
Installing dependencies from lock file (including require-dev)
Verifying lock file contents can be installed on current platform.
Package operations: 49 installs, 0 updates, 0 removals
  - Downloading 
    [...]
Generating autoload files
41 packages you are using are looking for funding.
Use the `composer fund` command to find out more!
/var/www#
/var/www# composer validate
./composer.json is valid

composer.json

Speaking of Composer, here's the composer.json file thusfar (don't worry too much about it: I'm including it here for completeness):

{
    "name" : "adamcameron/php8",
    "description" : "PHP8 containers",
    "type" : "project",
    "license" : "proprietary",
    "require": {
        "php" : "^8.2",
        "ext-iconv": "*",
        "ext-pdo_mysql": "*",
        "ext-mbstring": "*",
        "ext-intl": "*",
        "ext-json": "*",
        "ext-curl": "*",
        "ext-simplexml": "*",
        "ext-zip": "*",
        "ext-pcre": "*",
        "ext-ctype": "*",
        "ext-session": "*",
        "ext-tokenizer": "*",
        "ext-bcmath": "*",
        "ext-zend-opcache": "*",
        "monolog/monolog": "^3.2.0",
        "doctrine/dbal": "^3.5.3"
    },
    "require-dev": {
        "phpunit/phpunit": "^9.5.28",
        "phpmd/phpmd": "^2.13.0",
        "squizlabs/php_codesniffer": "^3.7.1"
    },
    "autoload": {
        "psr-4": {
            "adamcameron\\php8\\": "src/"
        }
    },
    "autoload-dev": {
        "psr-4": {
            "adamcameron\\php8\\test\\": "test/"
        }
    },
    "scripts" : {
        "test": "phpunit --testdox test",
        "phpmd": "phpmd src,test text phpmd.xml",
        "phpcs": "phpcs src test",
        "test-all": [
            "@test",
            "@phpmd",
            "@phpcs"
        ]
    }
}

The require section there is not just stuff I needed for the install; it's also a bunch of baseline stuff I know I will need for the app I'm heading towards. I don't think there's anything surprising there.

I also already have some PHPUnit, phpmd and phpcs scripts in there. I'll get to those next…

Testing the PHP container

It would not be me if I didn't test stuff. I will admit I did not TDD the bits above, cos that would just be mad. However I wanted to test things worked, so have put a few tests in. Plus part of this is testing the debug module and PHPUnit work together as well.

PHPUnit

Here's the phpunit.xml.dist file:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/9.5/phpunit.xsd"
        colors="true"
        cacheResult="false"
        testdox="true"
        stopOnFailure="true"
        stopOnError="true"
        stopOnWarning="true"
        failOnWarning="true"
>
    <coverage>
        <include>
            <directory suffix=".php">src</directory>
        </include>
        <report>
            <html outputDirectory="html/test-coverage-report/" />
        </report>
    </coverage>
    <testsuites>
        <testsuite name="Integration tests">
            <directory>test/integration/</directory>
        </testsuite>
        <testsuite name="Unit tests">
            <directory>test/unit/</directory>
        </testsuite>
    </testsuites>
</phpunit>

All standard stuff, and there's nothing I can say about it that the docs don't already say.

Tests

A few minimal tests, which hopefully are self-explanatory:

// test/integration/PhpTest.php

namespace adamcameron\php8\test\integration;

use PHPUnit\Framework\TestCase;

/** @testdox Tests of the PHP installation */
class PhpTest extends TestCase
{
    /** @testdox It has the expected PHP version */
    public function testPhpVersion()
    {
        $expectedPhpVersion = "8.2";
        $actualPhpVersion = phpversion();
        $this->assertStringStartsWith(
            $expectedPhpVersion,
            $actualPhpVersion,
            "Expected PHP version to start with $expectedPhpVersion, but got $actualPhpVersion"
        );
    }
}
// test/integration/ComposerTest.php

namespace adamcameron\php8\test\integration;

use PHPUnit\Framework\TestCase;

/** @testdox Tests of the Composer installation */
class ComposerTest extends TestCase
{
    /** @testdox It passes composer validate */
    public function testComposerValidates()
    {
        exec("composer validate 2> /dev/null", $output, $returnCode);
        $this->assertEquals(
            0,
            $returnCode,
            "Composer validate failed: " . implode("\n", $output)
        );
    }
}

These two test three things: PHP is running the version I expect; Composer is happy it's configured properly; and PHPUnit itself is running otherwise all this would go splat.

For the code coverage testing I need some source code to test:

// src/Greeter.php

namespace adamcameron\php8;

class Greeter
{
    public const FORMAL = 1;
    public const INFORMAL = 2;

    public static function greet(string $name, int $style = self::FORMAL): string
    {
        if ($style === self::FORMAL) {
            return "Hello, $name";
        }
        return "Hi, $name";
    }
    
}

And a test:

// test/unit/GreeterTest.php

namespace adamcameron\php8\test\unit;

use adamcameron\php8\Greeter;

use PHPUnit\Framework\TestCase;

/** @testdox Tests of the Greeter class */
class GreeterTest extends TestCase
{
    /** @testdox It greets formally */
    public function testFormalGreeting()
    {
        $name = "Zachary";
        $expectedGreeting = "Hello, $name";
        $actualGreeting = Greeter::greet($name, Greeter::FORMAL);
        $this->assertEquals(
            $expectedGreeting,
            $actualGreeting,
            "Expected greeting to be $expectedGreeting, but got $actualGreeting"
        );
    }

    /** @testdox It greets informally */
    public function testInformalGreeting()
    {
        $this->markTestSkipped("skipping this so the coverage report is more interesting");
        $name = "Zachary";
        $expectedGreeting = "Hey, $name";
        $actualGreeting = Greeter::greet($name, Greeter::INFORMAL);
        $this->assertEquals(
            $expectedGreeting,
            $actualGreeting,
            "Expected greeting to be $expectedGreeting, but got $actualGreeting"
        );
    }
}

Note how I am skipping one of the tests. This is so code coverage is not 100%.

Running the tests

root@e8896f5d5bd6:/var/www# composer test
> phpunit --testdox test
PHPUnit 9.5.28 by Sebastian Bergmann and contributors.

Tests of the Composer installation
  It passes composer validate

Tests of the PHP installation
  It has the expected PHP version

Tests of the Greeter class
  It greets formally
  It greets informally

Time: 00:05.675, Memory: 10.00 MB

Summary of non-successful tests:

Tests of the Greeter class
  It greets informally
OK, but incomplete, skipped, or risky tests!
Tests: 4, Assertions: 3, Skipped: 1.

Generating code coverage report in HTML format ... done [00:01.492]
root@e8896f5d5bd6:/var/www#

Cool! It all worked. Let's have a look at the code coverage report. Because I don't have Nginx installed yet I can't browse to it, but I can just open the file in a browser:

I've drilled down the report slightly to show the file I was testing. It's correctly showing that I have only tested one path in the logic. Excellent. This proves that the Xdebug extension is running.

phpmd and phpcs

I've installed these too, and have used a fairly stock config file for each (see: phpmd.xml and phpcs.xml). Running them is dead boring as there's hardly any code, and IntelliJ makes sure it's formatted well:

/var/www# composer phpmd
> phpmd src,test text phpmd.xml
/var/www# composer phpcs
> phpcs src test

FILE: /var/www/src/Greeter.php
------------------------------------------------------------------------------------------
FOUND 1 ERROR AFFECTING 1 LINE
------------------------------------------------------------------------------------------
 18 | ERROR | [x] The closing brace for the class must go on the next line after the body
------------------------------------------------------------------------------------------
PHPCBF CAN FIX THE 1 MARKED SNIFF VIOLATIONS AUTOMATICALLY
------------------------------------------------------------------------------------------

Time: 2.51 secs; Memory: 6MB

Script phpcs src test handling the phpcs event returned with error code 2
/var/www#

Ha! I didn't actually expect that. I clearly didn't run this before I did my final commit. If you scroll up to the Greeter.php file, it's complaining that there's an empty line between the last method closing brace and the class's closing brace:

That breaks one of PSR-12's rules. Fair cop. And hey: a good test that it's working!
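
As the output says, phpcbf can auto-fix that one, so running something along these lines inside the container should sort it (after which phpcs passes):

/var/www# vendor/bin/phpcbf src test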


Nginx in a container

docker-compose.yml

The relevant bit is this:

nginx:
  build:
    context: nginx
    dockerfile: Dockerfile

  ports:
    - "8008:80"

  stdin_open: true
  tty: true

  volumes:
    - ../html:/usr/share/nginx/html/

Nothing interesting there.

Dockerfile

FROM nginx:alpine
WORKDIR /usr/share/nginx/
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./sites/ /etc/nginx/sites-available/
COPY ./conf.d/ /etc/nginx/conf.d/
CMD ["nginx"]
EXPOSE 80

Also nothing noteworthy here. Everything is in the config files.

Nginx config files

I freely admit to pretty much lifting these from other projects I already had. I don't really know what I'm doing with Nginx. I learn enough to achieve some goal, then I forget it all within about 5min.

// docker/nginx/nginx.conf
user  nginx;
worker_processes  4;
daemon off;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    access_log  /var/log/nginx/access.log;
    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*.conf;
}
// docker/nginx/conf.d/default.conf
upstream php-upstream {
    server php:9000;
}
// docker/nginx/sites/default.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name localhost;
    root /usr/share/nginx/html;
    index index.html index.php;

    location / {
        autoindex on;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME /var/www/html/$fastcgi_script_name;
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

I hope it doesn't seem dismissive, or like I'm tired of writing, that I add nothing here. I literally don't know what most of that stuff does, other than where it's obvious.

A test PHP file to browse to

I need to be able to test that Nginx is passing stuff to PHP:

<?php
// html/test.php
phpinfo();

Having done all that, I rebuild the containers (and note I now have an Nginx one as well), and bring them up.

If I browse to http://localhost:8008/test.php, I get this:

And if I run the PHPUnit tests again, I can now browse to the report via http://localhost:8008/test-coverage-report/. Cool.
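
The same checks can be done from a shell, if that's more your thing; roughly (port as per the compose file):

curl -s http://localhost:8008/test.php | grep "PHP Version"
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8008/test-coverage-report/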


Success

OK so I'm gonna consider that a victory. I did some tinkering around in IntelliJ and I can run the unit tests from in there as well, all via drilling into the Docker container and running it from in there, and presenting the results in IntelliJ:

However that took more dicking around than I can be arsed with re-doing right now. I needed a coupla extensions installed, and set the PHP interpreter to be locatable via the config in docker-compose.yml. It's handy anyhow. I need my team to set all this up, so I'll get the first one of them I dump all this on to work it out and write it down, and I'll report back.

I've linked to all the individual files as I reference them, but you could also clone the repo, checkout the 1.0 tag, and you should be able to set this up locally and have a look if you so choose. There are full instructions in the readme.md file. I have had one of my team test the instructions out on their own PC, and they seemed to have worked.

Righto.

--
Adam

Sunday 16 October 2022

Kotlin / Ktor: G'day world from a Docker container

G'day:

Not sure what this article is going to end up being about. However I am hovering over the "New Project" button in IntelliJ, and am gonna attempt to at least get to a "G'day world" sort of situation with a Ktor-driven web service today.

Why Ktor

We need to port our monolithic CFML/CFWheels app to a more… erm… forward-thinking and well-designed solution. The existing app got us to where we are, and pays our salaries, but its design reflects a very "CFML-dev" approach to application design. We've decided to shift to Kotlin, as you know. We also need to adopt some sort of framework to implement the application atop-of, and we've chosen Ktor for a few reasons:

  • Its focus is micro-services and a small footprint.
  • From what I've read, it focuses on being a framework instead of being an opinion-mill, the way other frameworks can tend to be.
  • It's written for Kotlin; unlike say Spring, which is written for Java and it shows. We're using Kotlin to have the benefits of the JVM, but to steer clear of the Java Way™ of doing things.
  • It's created by JetBrains, who created Kotlin, so hopefully the Ktor design team will be aligned with the Kotlin design team, and it should be a pretty Kotlin-idiomatic way of doing things.
  • Support for it is baked-in to IntelliJ, so it's a "first class citizen" in the IDE.

Also basically we need to pick something, so we're cracking on with it. If we do some quick investigation and it turns out Ktor ain't for us: I'd rather know sooner rather than later.

Let's get on with it.


Project

One can create a new Ktor project via IntelliJ ("New Project"):

I've only filled in the situation-specific stuff here, and left everything else as default. I've clicked the "Create Git repository" option: I hope it gives me the option to provide a name for it before it charges off and does it, cos I don't want it just to be called "gdayworld". So I might back out of that choice if it doesn't work for me:

Let's press "Next"…

Argh! I have to make decisions! I haven't even finished my first coffee of the day yet!

There are roughly one million plug-ins on offer here, and I don't even know what most of them are. For now, all I need this thing to do is to have testing for a greeting endpoint that says "G'day world" or something, so I doubt I'll need most of this stuff. Let's have a scan through.

OK, I've selected these ones:

  • Routing
  • DefaultHeaders
  • CallLogging
  • CallId
  • kotlinx.serialization - this also required the ContentNegotiation plug-in

After clicking "create" it got on with it, downloaded some stuff, built the project and declared everything was fine. I now have this lot:


Baseline checks

Right, let's see what tests it installed by default:

package me.adamcameron

import io.ktor.server.routing.*
import io.ktor.http.*
import io.ktor.server.plugins.callloging.*
import org.slf4j.event.*
import io.ktor.server.request.*
import io.ktor.server.plugins.callid.*
import io.ktor.serialization.kotlinx.json.*
import io.ktor.server.plugins.contentnegotiation.*
import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import kotlin.test.*
import io.ktor.server.testing.*
import me.adamcameron.plugins.*

class ApplicationTest {
    @Test
    fun testRoot() = testApplication {
        application {
            configureRouting()
        }
        client.get("/").apply {
            assertEquals(HttpStatusCode.OK, status)
            assertEquals("Hello World!", bodyAsText())
        }
    }
}

Most of those imports aren't necessary btw, that's a wee bit sloppy. It only claims to need these ones:

import io.ktor.http.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import kotlin.test.*
import io.ktor.server.testing.*
import me.adamcameron.plugins.*

I'll leave it as-is for now. The test looks sound actually. Well: I've purposely not looked at the code yet, but a test that tests that a GET to / returns "Hello World!" seems reasonable. Let's run it:

Cool. OK, let's run the app then, given it looks like it'll work:

C:\Users\camer\.jdks\semeru-11.0.17\bin\java.exe […]
2022-10-16 11:01:49.119 [main]  INFO  ktor.application - Autoreload is disabled because the development mode is off.
2022-10-16 11:01:49.205 [main]  INFO  ktor.application - Application started in 0.148 seconds.
2022-10-16 11:01:49.205 [main]  INFO  ktor.application - Application started: io.ktor.server.application.Application@57312fcd
2022-10-16 11:01:50.448 [DefaultDispatcher-worker-1]  INFO  ktor.application - Responding at http://127.0.0.1:8080  

It ran. Does it actually respond on http://127.0.0.1:8080?
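
For the record, a quick curl from another shell is enough to check; per the test above, the body should just be "Hello World!":

curl http://127.0.0.1:8080/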

Cool. OK, so I have a test that passes and an app that works. Gonna push that to GitHub as v0.2 (v0.1 was the empty repo). And I'm gonna have a shufti around the files it's created and see what's what.


Tweaking

OK, I'm not gonna stick with those old-school xUnit-style tests. I'm gonna adapt them to use the more declarative BDD style I've been using so far when testing stuff with Kotlin. So this means I'm going to add some Kotest dependencies. The test is now:

@DisplayName("Tests of the / route")
class ApplicationTest {
    @Test
    fun `Tests the root route responds with the correct status and message`() = testApplication {
        application {
            configureRouting()
        }
        client.get("/").apply {
            status shouldBe HttpStatusCode.OK
            bodyAsText() shouldBe "Hello World!"
        }
    }
}

I'm also refactoring the class name and location to src/test/kotlin/acceptance/IndexRouteTest.kt. It's not testing the app, it's testing the route. Plus it's an acceptance test, and I wanna keep those separate from unit tests / integration tests etc (poss premature optimisation here I guess). I've also lost the subdirectory structure from /src/main/kotlin/me/adamcameron/Application.kt to be just /src/main/kotlin/Application.kt. Kotlin's own style guide recommends this:

In pure Kotlin projects, the recommended directory structure follows the package structure with the common root package omitted. For example, if all the code in the project is in the org.example.kotlin package and its subpackages, files with the org.example.kotlin package should be placed directly under the source root, and files in org.example.kotlin.network.socket should be in the network/socket subdirectory of the source root.

Next I feel there's a design bug in the index route, but I'm gonna push my current tweaks first, and sort that out in the next section.


Giving control to a controller

This design bug: here's the entirety of the implementation of that index route and its handling:

fun Application.configureRouting() {

    routing {
        get("/") {
            call.respondText("Hello World!")
        }
    }
}

That's in /src/main/kotlin/plugins/Routing.kt

Routing should limit itself to what it says on the tin: routing. It should not be providing the response. It should route the request to a controller, which should control how the response is handled. I know this is only example code, but example code should still follow appropriate design practices. So erm: now I have to work out how to create a controller in Ktor. I'm pleased I have a green test on that index route though, cos this is all pretty much a refactoring exercise, so whatever I do: in the end I'll know I have done good if the test still passes.

Hrm. Having not found any examples in the Ktor docs of how to extract controller code out of the routing class, I found myself reading Application structure, specifically these paras:

Different to many other server-side frameworks, it doesn't force us into a specific pattern such as having to place all cohesive routes in a single class name CustomerController for instance. While it is certainly possible, it's not required.
Frameworks such as ASP.NET MVC or Ruby on Rails, have the concept of structuring applications using three folders - Model, View, and Controllers (Routes).

My emphasis. I see. Ktor does not separate-out the idea of routing from the idea of controllers, I see. To me they're different things, but I guess I can see there's overlap. I'm not hugely enamoured with their thinking that "despite the rest of the world using the term MVC, we know better: we're gonna think of it as MVR". Just… why. If you wanna conflate routing and controllers, yeah fine. But in that case they conflate into the controller part of MVC. You don't just go "ah nah it's MVR, trust me". Remember what I said before about opinionated frameworks? This is why I don't like it when frameworks have opinions.

But anyway.

We can still separate out groups of "route-handlers" (sigh) into separate functions. Now I have this:

package routes

import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun Route.indexRouting() {
    route("/") {
        get {
            call.respondText("Hello World!")
        }
    }
}

And my original configureRouting function is just this:

fun Application.configureRouting() {

    routing {
        indexRouting()
    }
}

That's good enough.


Auto-reload

One good thing my RTFMing about controllers led me to was how to get my app to rebuild / reload when I make code changes. By default every time I changed my code I had to shut down the app (remember it's serving a web app now), rebuild, and then re-run the app. That was not the end of the world, but was pretty manual.

Ktor have thought about this, and the solution is easy.

First, I tell my app it's in development mode (in gradle.properties):

junitJupiterVersion=5.9.0
kotestVersion=5.5.0
kotlinVersion=1.7.20
ktorVersion=2.1.2
logbackVersion=1.2.11

kotlin.code.style=official

org.gradle.warning.mode=all

development=true

This in turn is picked up by code in build.gradle.kts

application {
    mainClass.set("ApplicationKt")

    val isDevelopment: Boolean = project.ext.has("development")
    applicationDefaultJvmArgs = listOf("-Dio.ktor.development=$isDevelopment")
}

(that code was already there).

Then I needed to tell the app what to pay attention to for reloading (in Application.kt):

fun main() {
    embeddedServer(Netty, port = 8080, host = "0.0.0.0", watchPaths = listOf("classes")) {
        configureMonitoring()
        configureSerialization()
        configureRouting()
    }.start(wait = true)
}

classes there is a reference to build/classes in the project file system.

Then get Gradle to rebuild when any source code changes:

PS C:\src\kotlin\ktor\gdayworld> ./gradlew --continuous :build
BUILD SUCCESSFUL in 1s
13 actionable tasks: 13 up-to-date

Waiting for changes to input files... (ctrl-d then enter to exit)
<-------------> 0% WAITING
> IDLE

And another instance of Gradle to run the app with it watching the build results:

PS C:\src\kotlin\ktor\gdayworld> ./gradlew :run                  
> Task :run
2022-10-16 14:21:19.374 [main]  DEBUG ktor.application - Java Home: C:\apps\openjdk\EclipseAdoptium
2022-10-16 14:21:19.374 [main]  DEBUG ktor.application - Class Loader: jdk.internal.loader.ClassLoaders$AppClassLoader@73d16e93:...]
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\me\adamcameron\plugins for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\routes for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\me for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\META-INF for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\me\adamcameron for changes.
2022-10-16 14:21:19.390 [main]  DEBUG ktor.application - Watching C:\src\kotlin\ktor\gdayworld\build\classes\kotlin\main\plugins for changes.
2022-10-16 14:21:19.562 [main]  INFO  ktor.application - Application started in 0.298 seconds.
2022-10-16 14:21:19.562 [main]  INFO  ktor.application - Application started: io.ktor.server.application.Application@5a45133e
2022-10-16 14:21:19.937 [main]  INFO  ktor.application - Responding at http://127.0.0.1:8080
<===========--> 85% EXECUTING [15s]
> :run

When I change any source code now, the project rebuilds, and the app notices the recompiled classes, and restarts itself:

modified: C:\src\kotlin\ktor\gdayworld\src\main\kotlin\routes\IndexRoutes.kt
Change detected, executing build...


BUILD SUCCESSFUL in 6s
13 actionable tasks: 12 executed, 1 up-to-date

Waiting for changes to input files... (ctrl-d then enter to exit)
<=============> 100% EXECUTING [8m 55s]
2022-10-16 14:26:20.438 [eventLoopGroupProxy-4-2]  INFO  ktor.application - 200 OK: GET - /
2022-10-16 14:26:34.861 [eventLoopGroupProxy-3-1]  INFO  ktor.application - Changes in application detected.
2022-10-16 14:26:35.073 [eventLoopGroupProxy-3-1]  DEBUG ktor.application - Changes to 18 files caused application restart.
[...]
2022-10-16 14:26:35.106 [eventLoopGroupProxy-3-1]  INFO  ktor.application - Application auto-reloaded in 0.012 seconds.
2022-10-16 14:26:35.106 [eventLoopGroupProxy-3-1]  INFO  ktor.application - Application started: io.ktor.server.application.Application@33747fec
2022-10-16 14:26:35.107 [eventLoopGroupProxy-4-2]  INFO  ktor.application - 200 OK: GET - /
<===========--> 85% EXECUTING [5m 32s]
> :run

Note how the app doesn't restart until I actually use it, which is good thinking.

One might ask why I have dropped down to a shell to do this auto-reload stuff. As far as I can tell it's not baked into IntelliJ yet, so needs to be handled directly by Gradle for now. It's not a hardship. I mean: the shells I am running there are being run from within IntelliJ, it's just slightly more complicated than a key-combo or some mouseclicks.

OK. That's all good progress. I'm gonna take a break and come back and create my own controller / response / etc, which is what the object of the exercise was today.


Docker

Ktor's way

I was not expecting this to be the next step, but I just spotted some stuff about Docker in the Ktor docs ("Docker"), so I decided to see what they said.

[time passes whilst I do battle with the docs]

OK, screw that. It's a very perfunctory handling of it. I can build a jar and create an image that will run it, and then run the container - and it all works - but it's… a bit… "proof of concept", going from the docs and the code snippets linked from them (Deployment - Ktor plugin › Build and run a Docker image).

I had to add this to my build.gradle.kts file:

ktor {
    fatJar {
        archiveFileName.set("gday-world-ktor.jar")
    }
    docker {
        jreVersion.set(io.ktor.plugin.features.JreVersion.JRE_17)
        localImageName.set("gday-world-ktor")
        imageTag.set("${project.version}-preview")
        portMappings.set(listOf(
            io.ktor.plugin.features.DockerPortMapping(
                8080,
                8080,
                io.ktor.plugin.features.DockerPortMappingProtocol.TCP
            )
        ))
    }
}

And then from the shell I could run this lot:

PS C:\src\kotlin\ktor\gdayworld> ./gradlew :buildFatJar     
[…]
PS C:\src\kotlin\ktor\gdayworld> ./gradlew :runDocker

And I would indeed end up with a running Docker container. Which is handy, but I had no control over what params were passed to docker run, so I couldn't even give the container a name, so I just ended up with one of Docker's random ones. That's a bit amateurish. I checked to see if I was missing anything with the plugin, but this is the code (from Ktor's repo on GitHub):

private abstract class RunDockerTask : DefaultTask() {
    @get:Inject
    abstract val execOperations: ExecOperations

    @get:Input
    abstract val fullImageName: Property<String>

    @TaskAction
    fun execute() {
        val dockerExtension = project.getKtorExtension<DockerExtension>()
        execOperations.exec {
            it.commandLine(buildList {
                add("docker")
                add("run")
                for (portMapping in dockerExtension.portMappings.get()) {
                    add("-p")
                    with(portMapping) {
                        add("${outsideDocker}:${insideDocker}/${protocol.name.lowercase()}")
                    }
                }
                add(fullImageName.get())
            })
        }
    }
}

It looks to me like it simply builds a string docker run [port mappings] [image name], and that's it. No scope for me to specify any other of docker run's parameters in my build config.

So: nah, not doing that; I'll DIY. It's at least shown me what I need to do in a Dockerfile and I can organise my own docker-compose.yml file.


My way

[…]

I have this docker/Dockerfile:

FROM gradle:7-jdk17 AS build
COPY --chown=gradle:gradle .. /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle test --no-daemon
RUN gradle buildFatJar --no-daemon

FROM openjdk:17
EXPOSE 8080:8080
RUN mkdir /app
COPY --from=build /home/gradle/src/build/libs/*.jar /app/gday-world-ktor.jar
ENTRYPOINT ["java","-jar","/app/gday-world-ktor.jar"]

This is pretty much lifted from the Ktor Docker › Prepare Docker image docs I linked to above; I've just added the test run in first.

And this docker/docker-compose.yml:

version: '3'

services:
  gday-world-ktor:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    ports:
      - "8080:8080"
    stdin_open: true
    tty: true

And when I run docker-compose up --build --detach, after a couple of minutes, I get an up and running container with my app in it. Bonus: it halts if my tests don't first pass.
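
A quick way to check the containerised app is actually serving (port per the compose file above; at this point the index route still returns the stock greeting):

curl http://localhost:8080/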

I'm not enamoured with the "after a couple of minutes" part of this: seems really slow for what it needs to do. I am "sure" there must be a way of telling Gradle to do the build, test-run and jar-build all in one operation. However I'm over googling things starting with "gradle" today, so I'm gonna leave it for now.

I'm pretty happy with the progress I made today.

Righto.

--
Adam

Sunday 25 April 2021

Misc changes to environment for my ongoing Docker / Lucee / CFWheels series

G'day:

This will be a bit of a scrappy article just summarising some changes to my project environment since the last article in this series on Lucee / CFWheels / Docker; "Adding TestBox, some tests and CFConfig into my Lucee container". By the end of that article I'd got Nginx proxying calls to Lucee, and some tests to verify its integrity and my expectations of how it ought to be working. I'm about to continue with an article about getting CFWheels to work (URL TBC), but before that - and for the sake of full disclosure - I'll detail these wee changes I've made.

It can connect Lucee to a MariaDB database and fetch records

The test summarises the aim here. /test/integration/TestDatabaseConnection.cfc:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests we can connect to the database", () => {
            it("can retrieve test records", () => {
                expectedRecords = queryNew("id,value", "int,varchar", [
                    [101, "Test row 1"],
                    [102, "Test row 2"]
                ])

                actualRecords = queryExecute("SELECT id, value FROM test ORDER BY id")
                
                expect(actualRecords).toBe(expectedRecords)
            })
        })
    }
}

Note that this is filed under test/integration because it's testing the integration between Lucee and the DB, rather than any business logic.

I've added some config to the test suite's Application.cfc too:

component {

    this.mappings = {
        "/cfmlInDocker/test" = expandPath("/test"),
        "/testbox" = expandPath("/vendor/testbox")
    }

    this.localmode = "modern"

    this.datasources["cfmlInDocker"] = {
        type = "mysql",
        host = "database.backend",
        port = 3306,
        database = "cfmlindocker",
        username = "cfmlindocker",
        password = server.system.environment.MYSQL_PASSWORD,
        custom = {
            useUnicode = true,
            characterEncoding = "UTF-8"
        }
    }
    this.datasource = "cfmlInDocker"
}

One key thing to note here is that I am setting this.localmode in here. Previously I was setting this in Lucee's global config via CFConfig, but Zac Spitzer dropped me a line and pointed out it could be set at runtime in Application.cfc. This is a much more elegant approach, so I'm running with it.

Other than that I'm setting a data source. Note I'm picking up the password from the environment, not hard-coding it. This is passed by the docker-compose.yml file:

lucee:
    build:
        context: ./lucee
        args:
            - LUCEE_PASSWORD=${LUCEE_PASSWORD}
    environment:
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}

For the implementation of this requirement I've added a Docker container for MariaDB, added a test table into it and tested that Lucee can read data from it. This was all straightforward. Here are the file changes:

/docker/mariadb/Dockerfile:

FROM mariadb:latest

COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
COPY ./conf/logging.cnf /etc/mysql/conf.d/logging.cnf
RUN chmod -R 644 /etc/mysql/conf.d/logging.cnf

CMD ["mysqld"]

EXPOSE 3306

Nothing mysterious there. I'm using the entrypoint to create the DB table and populate it (docker-entrypoint-initdb.d/1.createAndPopulateTestTable.sql):

USE cfmlindocker;

CREATE TABLE test (
    id INT NOT NULL,
    value VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,

    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO test (id, value)
VALUES
    (101, 'Test row 1'),
    (102, 'Test row 2')
;

ALTER TABLE test MODIFY COLUMN id INT auto_increment;

I'm also moving logging to a different directory so I can see the logs on my host machine (via conf/logging.cnf):

[mysqld]
log_error = /var/log/mariadb/error.log

This is all wired-together in docker-compose.yml

mariadb:
    build:
        context: ./mariadb
    environment:
        - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
        - MYSQL_DATABASE=${MYSQL_DATABASE}
        - MYSQL_USER=${MYSQL_USER}
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
        - "3306:3306"
    volumes:
        - mysqlData:/var/lib/mariadb
        - ./mariadb/root_home:/root
        - ../var/log:/var/log
    stdin_open: true
    tty: true
    networks:
        backend:
            aliases:
                - database.backend

volumes:
    mysqlData:

Note that I am sticking the DB data into a Docker volume instead of in a volume from my host machine. This means I need to take some care if I ever get around to adding non-test data into it, but for the time being it saves cluttering up my host machine with DB files, plus it's easier during initial configuration to completely reset the DB. It's easy enough to change later on when I need to.
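
That reset is just a matter of dropping the named volume along with the containers, something like:

docker-compose down --volumes    # --volumes drops the named mysqlData volume too
# then bring things back up with the usual docker-compose up --build --detach invocation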

I'm setting some of those magic environment variables in .env:

COMPOSE_PROJECT_NAME=cfml-in-docker
MYSQL_DATABASE=cfmlindocker
MYSQL_USER=cfmlindocker

# the following are to be provided to `docker-compose up`
LUCEE_PASSWORD=
DATABASE_ROOT_PASSWORD=
MYSQL_PASSWORD=

And the passwords when I build the containers:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker/docker$ DATABASE_ROOT_PASSWORD=123 MYSQL_PASSWORD=1234 LUCEE_PASSWORD=12345 docker-compose up --build --detach --force-recreate
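For the record, the "Lucee can read data from it" check I mentioned at the top of this section is just a normal TestBox case. A sketch of the shape of it (the CFC name and wording are illustrative rather than lifted verbatim from the project; it relies on the cfmlInDocker default datasource configured in Application.cfc above):

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests Lucee can read from the MariaDB container", () => {
            it("reads the rows seeded by the entrypoint script", () => {
                records = queryExecute("SELECT id, value FROM test ORDER BY id")    // uses the default datasource set in Application.cfc

                expect(records.recordCount).toBe(2, "Expected both seeded rows")
                expect(records.value[1]).toBe("Test row 1", "First row value incorrect")
            })
        })
    }
}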

It got rid of CFConfig

Both the Lucee settings I needed to change with CFConfig beforehand can be done natively with Lucee, so I didn't need CFConfig any more. I might need it again later, in which case I will re-install it. But for now it's dead weight. These are the lines that have gone from the Dockerfile:

RUN box install commandbox-cfconfig
RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web
RUN box cfconfig set adminPassword=${LUCEE_PASSWORD} to=/opt/lucee/web
RUN echo ${LUCEE_PASSWORD} > /opt/lucee/server/lucee-server/context/password.txt # this handles the passwords for both server and web admins

It can run the tests from the shell

Running TestBox's tests in a browser is all very pretty, but not very practical. Fortunately I read the TestBox docs some more and found out how to run them from the shell. The docs show how to run them from within CommandBox's own special shell ("TestBox integration › Test runner"), but that's weird and no help to me. However I finally twigged that whatever one might do within the special shell, one can also call from the normal shell via the box command. All I needed to do to enable this was to tell CommandBox how to run the tests, in docker/lucee/box.json, which is then used by CommandBox in the docker/lucee/Dockerfile:

{
    "devDependencies":{
        "testbox":"^4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    },
    "testbox":{
        "runner":"http://localhost:8888/test/runTests.cfm"
    }
}

And the corresponding lines in the Dockerfile:

COPY ./box.json /var/www/box.json
RUN mkdir -p /var/www/vendor
RUN box install

This has the benefit that the test run doesn't simply return a 200-OK all the time whether the tests all passed or not; it exits with a 1 if there are any test failures. So it's usable in a CI/CD situation.
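For completeness, the invocation itself is nothing fancier than calling the testbox command via box from a shell in the container; something like this (a sketch - the test output itself isn't the interesting bit, the exit code is):

box testbox run    # hits the runner URL configured in box.json
echo $?            # non-zero if any tests failed, which is what a CI/CD job cares about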

It resolves the slowness with CommandBox

In the previous article I observed that running stuff with CommandBox seemed to have about a 45sec overhead for any action. I tracked this down to the fact that I have the container's /root home directory as a volume from my host machine, so that my various shell histories persist across container rebuilds. And I then realised that CommandBox dumps a whole lot of shite in that directory which it needs to load every time it runs. Because of the shenanigans Docker needs to do when bridging from its file system across to WSL across to the native Windows file systems, these operations are S-L-O-W. OK for a few files. Not OK for stacks of them.

Fortunately CommandBox can be configured to put its temp files elsewhere, so I have configured it to put them in /var/tmp instead. As they regenerate if they are missing, this seems like the best place for them. It also prevents clutter leaking out of my container and onto my host machine. This is done via a commandbox.properties file:

commandbox_home=/var/tmp/commandbox

Which I copy into place in the Dockerfile. CommandBox picks it up automatically when I place it there:

COPY ./commandbox.properties /usr/local/bin/commandbox.properties

Good stuff. Now it only takes about 5sec for box to start doing anything, which is fine.

It no longer has the problem with path_info

I covered the shortfall in how Lucee handles path_info in "Repro for Lucee weirdness". I've managed to work around this. Kind of. In a way that solves the problem for this project anyhow.

Well I guess really it is just "learning to live with it". I've done some other experimentation with CFWheels, and all it uses path_info for is indeed to implement semi-user-friendly URLs tacked on to index.cfm. It has no need for any other .cfm file to use its path_info, so the default mappings are actually fine as they are.

However it occurred to me when I was configuring Nginx to do its part of the user-friendly URLs that all requests coming into Lucee from the web server will land in /public, so I could just put in a servlet mapping for the index.cfm in that directory (from web.xml):

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <url-pattern>*.cfm</url-pattern>
    <url-pattern>*.cfml</url-pattern>
    <url-pattern>*.cfc</url-pattern>
    <url-pattern>/index.cfm/*</url-pattern>
    <url-pattern>/index.cfc/*</url-pattern>
    <url-pattern>/index.cfml/*</url-pattern>

    <url-pattern>/public/index.cfm/*</url-pattern>
</servlet-mapping>

One might think that instead of using <url-pattern>/public/index.cfm/*</url-pattern>, I might be able to just specify a match for the entire directory, like this: <url-pattern>/public/*</url-pattern>. From the POV of Tomcat's expectations this ought to be good enough, but Lucee doesn't see that as meaning "anything in that directory": it expects the pattern to match a CFML file, so when I tried that I just got an error along the lines of "/public is a directory". Ah well. FWIW, ColdFusion said pretty much the same thing.

One downside to this is that I cannot work out how to add a servlet mapping just for this Lucee application, so I need to replace the entire Tomcat web.xml file, with another one with just one additional line (the original file is 4655 lines long). This is less than ideal, and I've followed it up on the Lucee Slack channel. I just copy the file over in the Dockerfile:


COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc
COPY ./web.xml /usr/local/tomcat/conf/web.xml

I had to rename my test file to index.cfm (which means this test will have to go once I install CFWheels, as it needs that file for itself), but for now I was able to test the change:


it("passes URL path_info to Lucee correctly", () => {
    testPathInfo = "/additional/path/info/"

    http url="http://cfml-in-docker.frontend/index.cfm#testPathInfo#" result="response";

    expect(response.status_code).toBe(200, "HTTP status code incorrect")
    expect(response.fileContent.trim()).toBe(testPathInfo, "PATH_INFO value was incorrect")
})

And the index.cfm it requests is simply:

<cfoutput>#CGI.path_info#</cfoutput>

Oh! And the Nginx changes! docker/nginx/sites/default.conf:

location / {
    # previously: try_files $uri $uri/ =404;
    try_files $uri $uri/ @rewrite;
}

location @rewrite {
    rewrite ^/(.*)? /index.cfm$request_uri last;
    rewrite ^ /index.cfm last;
}

(Thanks to the ColdBox docs for those)
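A quick manual sanity-check from the shell (a sketch, assuming the rewrite and servlet mapping above are in place) would be something like:

curl -s http://cfml-in-docker.frontend/additional/path/info/
# should echo /additional/path/info/ back, the same expectation as the TestBox case above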

It no longer needs PHP to test things

I'm happy that TestBox is working well enough now that I don't need to test things with PHPUnit, and that's all the PHP container was for, so I've removed all that stuff.


That's it. In the next article I shall continue from here, and get CFWheels set up in a way that doesn't require the entire application being a) messed in with my own code; b) in a web-browsable directory. Stay tuned…

Righto.

--
Adam

Monday 19 April 2021

Adding TestBox, some tests and CFConfig into my Lucee container

G'day:

On Fri/Sat (it's currently Sunday evening, but I'll likely not finish this until Monday now) I started looking at getting some CFML stuff running on Lucee in a Docker container (see earlier/other articles in this series: Lucee/CFWheels/Docker series). If you like you can read about that stuff: "Using Docker to strum up an Nginx website serving CFML via Lucee" and "Repro for Lucee weirdness". This article resumes from where I got to with the former one, so that one might be good for some context.

Full disclosure: I spent all today messing around in a spike: experimenting with stuff, and now am finally happy I have got something to report back on, so I have rolled-back the spike and am going to do the "production" version of it via TDD again. I just say this - and it's not the first time - if yer doing TDD it's OK to spike-out and do a bunch of stuff to work out how to do things without testing every step. Especially if yer like me and start from a position of having NFI what you need to do. However once yer done: revert everything and start again, testing-first as you go. What I've done here is save all my stuff in a branch, and now I'm looking at a diff of that and main, as a reference to what I actually need to do, and what is fluff that represents a dead end, or something I didn't need to do anyhow, or whatever.

It needs to only expose public stuff to the public

As per the article subject line, today I'm gonna install a bit more tooling and get me in a better position to do some dev work. The first thing I noticed is that as things stand, the Nginx website is serving everything in the Lucee application root (/var/www), whereas that directory is going to be a home directory for the application code, test code and third-party apps, so I don't want that browsable. I'm going to shift things around a bit. A notional directory structure would be along these lines:

root
├── public
│   ├── css
│   ├── images
│   ├── js
│   ├── Application.cfc
│   ├── favicon.ico
│   └── index.cfm
├── src
│   └── Application.cfc
├── test
└── vendor
    ├── cfwheels
    └── testbox

I've taken a fairly Composer / PHP approach there, but I could equally follow a more Java-centric approach to the directory structure:

root
├── com
│   └── ortussolutions
│       └── testBox
├── me
│   └── adamcameron
│       └── cfmlInDocker
│           ├── src
│           │   └── Application.cfc
│           └── test
├── org
│   └── cfwheels
│       └── cfwheels
└── public
    ├── css
    ├── images
    ├── js
    ├── Application.cfc
    ├── favicon.ico
    └── index.cfm

The point being: the website directory and my code and other people's code should be kept well away from one another. That's just common-sense.

Anyway, back to the point. Whichever way I organise the rest of things, only stuff that is supposed to be browsed-to should be browsable. Everything else should not be. So I'm gonna move the website's docroot, as well as the files that need to be served. This is just a "refactoring" exercise, so no tests should change here. We just want to make sure they still all pass.

This just means some changes to docker-compose.yml:

lucee:
    build:
        context: ./lucee
    volumes:
        # previously: - ../public:/var/www
        - ../public:/var/www/public
        - ../root/src:/var/www/src
        - ../test:/var/www/test
        - ../var/log/tomcat:/usr/local/tomcat/log
        - ../var/log/lucee:/opt/lucee/web/logs
        - ./lucee/root_home:/root

And the website config (docker/nginx/sites/default.conf):

location ~ \.(?:cfm|cfc) {
    # ...

    # previously: proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
    proxy_pass  http://cfml-in-docker.lucee:8888/public$fastcgi_script_name$is_args$args;
}

Once I rebuild the containers, I get two failing tests. Oops: I did not expect that. What's going on? Checking the front-end, the public-facing website still behaves the same. So… erm …?

One thing that didn't occur to me when doing this change is that a couple of the tests are hitting the internal Lucee website (reminder: the public-facing website for me is http://cfml-in-docker.frontend/ and the internal Lucee web site is http://cfml-in-docker.lucee:8888/). And that internal website still points to /var/www/, so where previously I'd access http://cfml-in-docker.lucee:8888/remoteAddrTest.cfm, now the URL for the backend site is http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm. This is by design (kinda, just… for now), but I forgot about this when I made the change.

This means my change is not simply a refactoring: therefore I need to start with failing tests. I roll back my config changes, fix the tests so they hit http://cfml-in-docker.lucee:8888/public/gdayWorld.cfm and http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm respectively, and watch them fail. Good. Now I roll forward my config changes again and see the tests pass: cool. Job done.

Later when I'm reconfiguring things I might remap it to /var/www/public, if I can work out how to do that without hacking Tomcat config files too much. But remember the test case here: It needs to only expose public stuff to the public. And we've achieved that. Let's not worry about a test case we don't need to address for now. Moving on…

It can run tests with TestBox

Currently I am running my tests via a separate container running PHP and PHPUnit. This has been curling the website to test Nginx and Lucee behaviour. Now that I have Lucee working, I can shift the tests to TestBox, which is - as far as I know - the current state of the art when it comes to testing CFML code. It provides both xUnit and Jasmine-style testing syntax.

The test for this is going to be a "physician heal thyself" kind of affair. I'm going to write a TestBox test. Once I can run it and it doesn't just go splat: job done. The test is simply this:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Baseline TestBox tests", () => {
            it("can verify that true is, indeed, true", () => {
                expect(true).toBe(true)
            })
        })
    }
}

TestBox seems to be web-based. I'd much prefer just running my tests from the shell like I would any other testing framework I've used in the last 7-8 years, but CFML has much more of an "it's aimed at websites, so everything is implemented as a website" mentality (hammers and nails spring to mind here). So be it I guess. I do see that TestBox does have the ability to integrate with Ant, but that seems to be via an HTTP request as well. Hrm. What I know is I can't simply do something like testbox /path/to/my/tests or similar. What I do need to do is write a small runner file (runTests.cfm), which I then browse to:

<cfscript>
    testBox = new testbox.system.TestBox(directory="cfmlInDocker.test")
    result = testBox.run(
        reporter = "testbox.system.reports.SimpleReporter"
    )
    writeOutput(result)
</cfscript>

To use that testbox.system and cfmlInDocker.test paths, I need to define mappings for them at application level (ie: not in the same file that uses it, but a different unrelated file, Application.cfc):

component {
    this.mappings = {
        "/cfmlInDocker/test" = expandPath("/test"),
        "/testbox" = expandPath("/vendor/testbox")
    }
}

And when I browse to that, I get a predictable error:

Let's call that our "failing test".

OK so right, we install TestBox from ForgeBox (think packagist.org or npmjs.com). And to install stuff from ForgeBox I need CommandBox. And that is pretty straightforward; just a change to my Lucee Dockerfile:

FROM lucee/lucee:5.3

RUN apt-get update
RUN apt-get install vim --yes

COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc

WORKDIR  /var/www

RUN curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
RUN echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
RUN apt-get update && apt-get install apt-transport-https commandbox --yes
RUN echo exit | box
EXPOSE 8888

That last step there is because CommandBox needs to configure itself before it works, so I might as well do that when it's first installed.

Once I rebuild the container with that change, we can get CommandBox to install TestBox for us:

root@b73f0836b708:/var/www# box install id=testbox directory=vendor savedev=true
√ | Installing package [forgebox:testbox]
   | √ | Installing package [forgebox:cbstreams@^1.5.0]
   | √ | Installing package [forgebox:mockdatacfc@^3.3.0+22]

root@b73f0836b708:/var/www#
root@b73f0836b708:/var/www#
root@b73f0836b708:/var/www# ll vendor/
total 12
drwxr-xr-x 3 root root 4096 Apr 19 09:23 ./
drwxr-xr-x 1 root root 4096 Apr 19 09:23 ../
drwxr-xr-x 9 root root 4096 Apr 19 09:23 testbox/
root@b73f0836b708:/var/www#

Note that commandbox is glacially slow to do anything, so be patient rather than be like me going "WTH is going on here?" Check this out:

root@b73f0836b708:/var/www# time box help

**************************************************
* CommandBox Help
**************************************************

Here is a list of commands in this namespace:

// help stuff elided…

To get further help on any of the items above, type "help command name".

real    0m48.508s
user    0m10.298s
sys     0m2.448s
root@b73f0836b708:/var/www#

48 bloody seconds?!?! Now… fine. I'm doing this inside a Docker container. But even still. Blimey fellas. This is the equivalent for composer:

root@b21019120bca:/usr/share/cfml-in-docker# time composer --help
Usage:
  help [options] [--] [<command_name>]

// help stuff elided…


To display the list of available commands, please use the list command.

real    0m0.223s
user    0m0.053s
sys     0m0.035s
root@b21019120bca:/usr/share/cfml-in-docker#

That is more what I'd expect. I suspect they are strumming up a CFML server inside the box application to execute CFML code to do the processing. Again: hammer and nails eh? But anyway, it's not such a big deal. The important thing is: did it work?

Yes it bloody did! Cool! Worth the wait, I say.

I still need to find a way to run it from the shell, and I also need to work out how to integrate it into my IDE, but I have a minimum baseline of being able to run tests now, so that is cool.

The installation process also generated a box.json file, which is the equivalent of a composer.json / package.json file:

root@b73f0836b708:/var/www# cat box.json
{
    "devDependencies":{
        "testbox":"^4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    }
}

It doesn't seem to have a corresponding lock file though, so I'm wondering how deployment works. The .json dictates what could be installed (eg: for testbox it's stating anything from 4.2.1+400 up to, but not including, 5.0), but there's nothing recording what actually is installed, eg: specifically 4.2.1+400. If I run this process tomorrow, I might get 4.3 instead. It doesn't matter so much with dev dependencies, but for production dependencies one wants to make sure that whatever version is being used on one box is also what gets installed on another box, which is why one needs some lock-file concept. The Composer docs explain this better than I do (and NPM works the same way). Anyway, it's fine for now.
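In the absence of a lock file, one workaround (a sketch, not something I'm actually doing here) would be to pin an exact version in box.json rather than using a caret range, so repeated installs fetch the same thing:

{
    "devDependencies":{
        "testbox":"4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    }
}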

Now that I have the box.json file, I can simply run box install in my Dockerfile:

# …
WORKDIR  /var/www

RUN curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
RUN echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
RUN apt-get update && apt-get install apt-transport-https commandbox --yes
RUN echo exit | box

COPY ./box.json /var/www/box.json
RUN mkdir -p /var/www/vendor
RUN box install

EXPOSE 8888

It runs all the same tests via TestBox as it does via PHPUnit

I'm not going to do some fancy test that actually tests that my tests match some other tests (I'm not that retentive about TDD!). I'm just going to implement the same tests I've already got in PHPUnit in TestBox. Just as some practice at TestBox really. I've used it in the past, but I've forgotten almost everything I used to know about it.

Actually that was pretty painless. I'm glad I took the time to properly document CFScript syntax a few years ago, as I'd forgotten how Railo/Lucee handled tags-in-script, and the actual Lucee docs weren't revealing this to me very quickly. That was the only hitch I had along the way.

All the tests are variations on the same theme, so I'll just repeat one of the CFCs here (NginxProxyToLuceeTest.cfc):

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests Nginx proxies CFML requests to Lucee", () => {
            it("proxies a CFM request to Lucee", () => {
                http url="http://cfml-in-docker.frontend/gdayWorld.cfm" result="response";

                expect(response.status_code).toBe( 200, "HTTP status code incorrect")
                expect(response.fileContent.trim()).toBe( "G'day world!", "Response body incorrect")
            })

            it("passes query values to Lucee", () => {
                http url="http://cfml-in-docker.frontend/queryTest.cfm?testParam=expectedValue" result="response";

                expect(response.status_code).toBe( 200, "HTTP status code incorrect")
                expect(response.fileContent.trim()).toBe( "expectedValue", "Query parameter value was incorrect")
            })

            it("passes the upstream remote address to Lucee", () => {
                http url="http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm" result="response";
                expectedRemoteAddr = response.fileContent

                http url="http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm" result="testResponse";
                actualRemoteAddr = testResponse.fileContent

                expect(actualRemoteAddr).toBe(expectedRemoteAddr, "Remote address was incorrect")
            })
        })
    }
}

Making an http request uses that weirdo tag-in-script syntax that is neither fish nor fowl (and accordingly confuses Lucee itself as to what's a statement and what isn't, hence the trailing semi-colon), but other than that it's all quite tidy.

And evidence of them all running:

I can get rid of the PHP container now!

It uses CFConfig to make some Lucee config tweaks

An observant CFMLer will notice that I did not var my variables in the code above. To non-CFMLers: one generally needs to actively declare a variable as local to the function it's in (var myLocalVariable = "something"), otherwise without that var keyword it's global to the object it's in. This was an historically poor design decision by Macromedia, but we're stuck with it now. Kinda. Lucee has a setting such that the var is optional. And I've switched this setting on for this code.
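By way of illustration for the non-CFMLers (a contrived sketch, not code from this project):

function demonstratesScoping() {
    var definitelyLocal = "always local to this function"
    accidentallyGlobal = "without the var - and with the default 'classic' local scope mode - this leaks into the component's variables scope"
}
// with local scope mode set to "modern", the second assignment stays local too, so the var becomes optional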

Traditionally settings like this need to be managed through the Lucee Administrator GUI, but I don't wanna have to horse around with that: it's a daft way of setting config. There's no easy out-of-the-box way of making config changes like this outside the GUI, but there's a tool CFConfig that lets me do it with "code". Aside: why is this not called ConfigBox?

Before I do the implementation, I can actually test for this:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests Lucee's config has been tweaked'", () => {
            it("has 'Local scope mode' set to 'modern'", () => {
                testVariable = "value"

                expect(variables).notToHaveKey("testVariable", "testVariable should not be set in variables scope")
                expect(local).toHaveKey("testVariable", "testVariable should be set in local scope")
            })
        })
    }
}

Out of the box that first expectation will fail. Let's fix that.

Installing CFConfig is done via CommandBox/Forgebox, and I can do that within the Dockerfile:

RUN box install commandbox-cfconfig

Then I can make that setting change, thus:

RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web

I'm going to do one more tweak whilst I'm here. The Admin UI requires a coupla passwords to be set, and by default one needs to do the initial setting via putting it in a file on the server and importing it. Dunno what that's all about, but I'm not having a bar of it. We can sort this out with the Dockerfile and CFConfig too:

FROM lucee/lucee:5.3

ARG LUCEE_PASSWORD

# a bunch of stuff elided for brevity…

RUN box install commandbox-cfconfig
RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web
RUN box cfconfig set adminPassword=${LUCEE_PASSWORD} to=/opt/lucee/web # for web admin
RUN echo ${LUCEE_PASSWORD} > /opt/lucee/server/lucee-server/context/password.txt # for server admin (actually seems to deal with both, now that I check)

EXPOSE 8888

That argument is passed by docker-compose, via docker-compose.yml:

lucee:
    build:
        context: ./lucee
        args:
            - LUCEE_PASSWORD=${LUCEE_PASSWORD}

And that in turn is passed-in via the shell when the containers are built:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker/docker$ LUCEE_PASSWORD=12345678 docker-compose up --build --detach --force-recreate

I'd rather use CFConfig for both the passwords, but I could not find a setting to set the server admin one. I'll ask the CFConfig bods. I did find a setting to just disable the login completely (adminLoginRequired), but I suspect that setting is only for ColdFusion, not Lucee. It didn't work for me on Lucee anyhow.

It has written enough for today

I was gonna try to tackle getting CFWheels installed and running in this exercise too, but this article is already long enough and this seems like a good place to pause. Plus I've just spotted someone being wrong on the internet, and I need to go interject there first.

Righto.

--
Adam