Thursday, 29 April 2021

Definitions of ~

G'day

I'm just building on some thoughts here. Some of the thoughts are from learning from people who know a lot more than me, and from watching my teams through their successes and… "less than successes".

When we work, whether we realise it or not, the intent is to deliver value to some client. The client could be an external user who we hope will hand money over to us as a result of our efforts. Or the client might be the boss and their latest revenue-making idea. Or the client might be our own application that is in need of some love because the codebase is not exactly as good as we'd like it to be. But there is always a client, and the end result of our work needs to benefit the client. Sometimes it's hard to get a handle on what the client wants, and what exactly we need to do to solve their problems and to add value.

But all our work needs to add value. And that value needs to be able to be both measured, and realised.

To do this, when we set out to do & deliver some work, we need a coupla benchmark points along the way. Most notably, before we even agree to start: a "Definition of Ready"; and before we agree it's finished: a "Definition of Done".

My notes below are from an engineer's perspective, and there's no doubt some nuance I'm missing, or points that are specific to situations I've been in. It's not really a generic template, but a theoretical example of what might go into these documents. But to get our work done, I figure we ought to be considering some super/sub -set of these kinds of things. These points represent a statement of the obvious, and an aide-mémoire for stuff that might be less obvious.

Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. I like including this, because it makes sure these words that might be read as being fairly loaded are just used as a reference vocab. I'm not looking at you when I say this.

Definition of Ready

  • A story SHOULD represent effort that fits in one sprint (ie: from "Ready" to "Done" in one sprint).
  • Within the above constraint, some estimation of complexity, risk and effort SHOULD be indicated.
  • A Behaviour Driven Development (BDD) phraseology SHOULD be used for User Stories. This is so we actively and deliberately make clear each case that needs to be addressed to fulfil the requirement, and to reduce ambiguity.
  • Inbound and outbound parameters (eg: for a new web page or API endpoint: route slug, query string params, request method, POST body, headers, cookies, anything else) MUST be clearly defined, where the definition MUST include any validation constraints (eg: "[some date] must be in the past").
  • Where relevant, outbound status codes SHOULD also be detailed (200-OK, 201-CREATED, 400-BAD-REQUEST etc). For non-2xx responses, error information (or lack thereof when there are TMI concerns) SHOULD be defined if appropriate.
  • Logging requirements of non-2xx statuses SHOULD be defined. EG: 404s probably nothing. Some 400s: maybe? 401 & 403: probably? 5xx: definitely.
  • Interactions with external systems - eg DBs - SHOULD be considered.
  • If the story represents partial work, consideration MUST be made as to how it will be deployed in an "unavailable state" to external users (eg: what feature toggles are in place, and the criteria for toggle on/off).
  • Similarly, consideration SHOULD be made as to how the partial work can be internally QAed (eg: feature toggle rules include the environment, so that features behave "live" in a QA environment).
  • For stories that are investigative rather than development-oriented, BDD phraseology SHOULD still be used.
  • For investigative stories, there MUST still be a deliverable ("we have a clear idea how to integrate Varnish on the front end to remove load from the application servers on seldom-changing content"). The deliverable MUST be tangible (eg: written down and filed), not just "oh just google some stuff and get an idea how it works".
  • For stories related to bugs: steps to reproduce, expected behaviour and actual behaviour, any error messages and log entries SHOULD be included in the ticket detail, if possible.
  • For stories relating to UI changes, an example of how it will look SHOULD be included on the ticket. This need not be pixel-perfect, just indicative.
  • Engineering MUST have an idea of how any legacy code changes will have automated testing. This is because we accept legacy code has not been written with testing in mind, but we still MUST test our work.
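
To make the BDD-phraseology and parameter-definition points concrete, here's a sketch of how a hypothetical story might read. The endpoint, parameter and role are all made up for illustration; the shape is the familiar given/when/then:

```gherkin
Feature: Account expiry report

Scenario: Valid request returns matching accounts
    Given an authenticated user with the "reports" role
    When they GET /api/accounts?expiredBefore={a date in the past}
    Then the response status is 200-OK
    And the body is a JSON array of accounts that expired before that date

Scenario: expiredBefore must be in the past
    Given an authenticated user with the "reports" role
    When they GET /api/accounts?expiredBefore={a date in the future}
    Then the response status is 400-BAD-REQUEST
    And the body identifies the validation failure on expiredBefore

Scenario: Unauthenticated requests are rejected
    Given an unauthenticated client
    When they GET /api/accounts
    Then the response status is 401-UNAUTHORIZED
    And the failure is logged
```

Each scenario becomes a test case, and between them they pin down the inbound parameters, the status codes, and the logging expectations before any code is written.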

One thing to note here is that all this stuff requires collaboration from the client and the squad. Some stuff is business need, some stuff is technical fulfilment. For example one would not expect the client to know about DB rollback minutiae, but it's our job to know that it might need to be done, and how to do it. We are their technical advocates and advisors. And gatekeepers. And backstops. And there might be a toner cartridge needing to be changed. Shrug. You know the deal. But it's an important deal.

But anyway… if some work meets a "Definition of Ready", then a squad MAY consider it to be added into the workflow for an upcoming release.

For some work to be considered completed, then we need to consider if it's been "done".

Definition of Done

  • There SHOULD be development guidelines (whatever they are) and they MUST be followed.
  • There SHOULD be automated code quality checks, and they MUST pass.
  • All new code SHOULD have automated tests, and test coverage MUST pass.
  • Code relating to other processes such as DB updates and rollbacks SHOULD be in source control and/or readily to hand.
  • In feature-toggled situations, both toggle-on and toggle-off states MUST be tested.
  • Work MUST have been QAed and actively signed-off as good-to-go.
  • Work MUST be actively accepted by the client (whoever the client is, and that could well be a colleague).
  • Work that is under feature toggle MAY be considered "done" even if toggled off, provided it's passed QA in the "off" state.
  • Work MUST be in production to be considered "done".

The bottom line here is particularly relevant for devs: you've not finished your work until it's live. It's not done when you've finished typing. It's not done when it's in code review. It's not done when it's in QA. It's not done when yer bored of it and would rather be doing something else. It's only done when the client sees the work and accepts it. You are on-the-hook until then, and it's your responsibility until then. You should be actively chasing whoever is necessary to get your work into production and earning value, because if it's not: there's no point in having done it.

For the squad as a whole, every person has a part in seeing the squad's work seeing the light of day. Make sure you're doing what you can to expedite work getting in front of the client and to help them either sign it off, or kick it back for revision.
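
Circling back to the "both toggle-on and toggle-off states MUST be tested" item in the Definition of Done: a sketch of what that might look like as a TestBox spec. The names here (featureToggles, renderHomePage, the search-v1/search-v2 markers) are all invented for illustration, not from any real codebase:

```cfml
component extends=testbox.system.BaseSpec {

    function run() {
        describe("Home page under the newSearch feature toggle", () => {
            it("shows the new search UI when the toggle is on", () => {
                featureToggles.set("newSearch", true) // hypothetical toggle service

                html = renderHomePage()

                expect(html).toInclude("search-v2")
            })

            it("falls back to the old search UI when the toggle is off", () => {
                featureToggles.set("newSearch", false)

                html = renderHomePage()

                expect(html).toInclude("search-v1")
                expect(html).notToInclude("search-v2")
            })
        })
    }
}
```

The point is less the mechanics and more that "done" includes a green run of both branches of the toggle.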

There will be more factors on both sides of this table, measuring whether work ought to be done, or is done. My point here is more that it needs to be a series of active checks along the way. We don't undertake work if we can't see the light at the end of the tunnel, or if we don't actually know it will benefit our client. And we need to think about this all the way.

Let me know what other considerations there might be in this. I'm very web-app-centric in my programming / business / leadership (such as it is) exposure, and there's probably more - or fewer! - things to think about.

Righto.

--
Adam

Monday, 26 April 2021

Change in look...

G'day:

I see my change in look has been leaked to the public:

I hope Adam Tuttle doesn't mind me copying his work.

This was taken from Top 30 Coldfusion RSS Feeds. I will touch base with them and ask them "WTF?"

Righto.

--
Adam

Sunday, 25 April 2021

On code review

G'day:

I'm pretty big on code review; I see it as a critical part of the process of developing solutions for our clients, and making our own and our colleagues' lives easier. It's a useful communications exercise, and it's a useful technical exercise.

I've used a variation of these notes with a variety of teams in the past, but I've never - until recently - got around to writing a decent (semi-) generic document about it. I've polished things a bit here, and thought I'd get it down in a public-facing fashion. There are references in here to systems I am familiar working with like Bitbucket and Jira and PHP. But I think the guidance carries across, irrespective of what combination of tooling one uses; if not in the precise mechanics, then in the processes the mechanics facilitate.

The CSS for the <h4> through <h6> is not as good as it could be. It's annoying that Blogspot gets me down as far as an <h3> even before we get to the title of the article (the date gets the <h2> for some reason), but: so be it. Sorry about that (you'll see what I mean as you get further down).

Introduction

Before code can be considered ready for release, the dev team needs to give the work a once-over or a "second set of eyes" to make sure it meets code quality guidelines, is appropriately tested, and that it fulfils the requirements of the user story that the code addresses, using the domain design / architecture / approach originally agreed before coding started.

What code review is

It is part of the collaborative team approach to delivering value to the client, just like the initial planning stage is before any code is even written, and any pair-programming that takes place during the initial development cycle. Given a lot of the physical coding work is done separately from the rest of the team, code review is a "re-union" exercise where the devs get back together to see how their solution has ended up. It's important to remember that any given code is our code, not an individual developer's code.

Code review gives the coding dev a stage to show how they've implemented the solution, and to draw attention to anything noteworthy or that needs flagging for any other reason to the rest of the team. And it gives the reviewers a chance to check things out, and potentially spot anything in the work that might still need addressing, or tweaking, or doing slightly differently maybe, or whatever.

Another thing code review is, is a learning exercise for everyone concerned. Some nuance of the code, or a suggestion to amend the code, could be new knowledge to one or all of the people on the review. Or it might just clarify how some less-often-trod part of the codebase works. And if nothing else, it'll add to everyone's familiarity with the code in question, and it might be you working on it next. Win.

What code review is not

I have experienced some code reviews that have taken an adversarial approach: pitting the coding developer against the reviewing developers, with the reviewers going "you got this wrong", "this doesn't work", "nup", etc, and trying to seem more clever than their colleagues, with arguments going back and forth in the review comments. This is not what code review is about.

It's also not about people checking up on each other. Nor is it about more senior people pronouncing to less senior people.

We are on the same team, working on the same stories, with the same end-goal in mind.

Where the disconnect happens

Still. If I've done some work and someone else adds a comment saying "this could be done differently because [reasons]", it's easy to get a wee bit defensive / protective of our work (for a moment you're claiming the team's work as your own, because it happened to be you who typed it in). This is pretty normal. I mean when I'm doing my own hobby-work and something goes unexpectedly wrong, I'm usually pretty quick to blame the language or the framework or some library, or something other than just myself, for getting something slightly wrong, and forgetting that it's OK to be wrong. I like to think that the blame-casting phase of this is a pretty short cycle these days, and I'm quickly back to going "OK, so what do I need to do to sort this out?". Code review is the same.

I have a good quote here:

An amateur is defensive. The professional finds learning (and even, occasionally, being shown up) to be enjoyable; they like being challenged and humbled, and engage in education as an ongoing and endless process.

Anyway, bottom line: code review is participants on the same team working together.

Code review mindset

A pull request is a way to communicate application changes to the team. The comments that are added to the code in the pull request are made with good intentions: the intent is to improve our work, and the application. Everyone should be open to a discussion like that. However, because written communication can lack some of the nuance of a spoken in-person conversation, remember to over-compensate with the text and make sure it reads as polite, constructive, and respectful. I've found that sometimes even being too terse in a comment ("needs a test") can be seen as being abrupt, rather than just being to-the-point. Also remember to address the code, not the person. If something's not clear: perhaps ask about it rather than telling them about it. There's no harm in asking for clarification on something. Code is a communications mechanism - it's designed to articulate concepts between people - so if something needs clarification, this could indeed be something that needs to be addressed in the code. Or you might just've not clocked what the code was doing initially and just need a quick pointer from the dev, before going "duh, of course".

By the same token it's important for the recipient of a comment to frame things the same way. If you read a comment as being adversarial or confrontational, you are probably just using the wrong internal voice to read it. Read it again with a neutral or collaborative voice. It's hardly ever the case that the comment writer was committing a personal attack to writing in a code review comment. It's more likely the person chose the wrong wording, or didn't take time to polish-up the comment before pressing send. Devs are notoriously bad at articulating themselves in writing, after all, right?

Comments

If you're reviewing and making a comment, read it back to yourself before pressing submit. Would you like to be on the receiving end of it?

If you're replying to a comment, act likewise.

Before adding a reply to a reply (… to a reply… to a reply…), remember it's a code review not a discussion thread. Consider getting on a call and actually talking to the person. Not only can this really quickly resolve the misunderstanding, it's also a really good way of defusing any tensions that could possibly be building.

Compromise

Some back-and-forth discussions are really useful and interesting! Some aren't. Either way, they're probably not what you all ought to be doing right at that moment. Give some thought as to whether a) the discussion is actually moving us closer to getting the work approved; b) it needs to happen right then and there; c) a text-based discussion is the right medium for it. Possibly just go "yeah actually let's just run with it" (whether "it" is how the code already is, or if it's the fix being suggested). Or possibly go "OK, but let's consider revisiting this in another story later… it might be worth it".

Code review logistics

  • All work going into the codebase must be reviewed and approved by at least one other person (not the dev who wrote the code!). Bitbucket can be configured to disable the Merge button until there are n approvals.
  • If a review yields any rework, this should be undertaken before any other new work is commenced. One should focus on completing existing work before starting the next piece of work.
  • From a reviewer's perspective: reviewing our colleagues' existing work ought to generally take priority over continuing to develop our own new code. At any given moment, work that is closer to being completed should be the focus, over starting or continuing newer work. This is not to say that one must drop everything when a code review comes in. What it does mean is that when re-commencing from a break (arriving at work, after lunch or some other intermediary break), and before refocusing on new code: check what code review needs to be done, and deal with that first.
  • That said, it's important to minimise task-switching, so be pragmatic about this. But bear in mind that it is also part of our job to work as a team as well as work as an individual.

Handling code review tasks

Tasks are a feature in Bitbucket that allows a code review comment to have a "task" attached to it. This indicates work that needs to be done before the code review can be completed.

Bitbucket can be configured to disable the Merge button whilst there are outstanding tasks.

As a reviewer, if the nature of an observation you make seems to require action, add a task to the comment. This signifies that you're not just making the observation, you mean it needs action.

As the developer, when addressing a task, mark it "done" in one of two situations:

  • whatever was requested has been done;
  • after further discussion, it's concluded there's no action to take. But add a comment to that effect in reply to the task comment.

Do not mark a task done unless one of those two resolutions has been reached. IE: don't simply decide to not address a task because [reasons]. Get agreement from the reviewer as to the legitimacy of those reasons first.

Code review guidelines

Before the code gets there
  • The branch name, commit comments (and they should all have comments!), and pull request should be prefixed with the ticket number for the story the work is for. This is just to make it easier to tie everything together later. The important one of these is the pull request name: it should have the ticket reference in it so the reviewers can check what they're reviewing. In the past I've used a rule that the pull request title should be "JRA-1234 The Title Of The Ticket" (ie: exactly the title of the ticket). Hopefully Bitbucket has integrations with the ticketing system so it can automatically link through to the ticket just from the ID being there.
  • If there's no ticket, the work should not have been even started, let alone submitted for code review.
  • The ticket needs to have a story in it. The reviewers need to be able to know what the work is setting out to do, otherwise how can they know it's fulfilled all its cases? For that matter, how did the dev who did the work?
  • Consider squashing your commits when doing a pull request (or configure Bitbucket to do it automatically). By the time the code is going into master, it's no longer useful to log the sequence the work was done in. And you still have the commit log in your own repo anyhow.
Before the review starts
  • There ought to be an automated build that runs code quality checks and unit tests. Hopefully there's integration between the CI tool and Bitbucket so the pull request automatically indicates a green status, or the dev should include a link to the green status report.
  • If that's not green: don't review anything. However the dev possibly did not notice the build is broken, so give them a (gentle) nudge.
  • As the developer, go through the pull request and put your own comments in, where necessary. Sometimes the code can legitimately not be obvious, or it just helps to contextualise which part of the story the code is addressing.
What to look for in a review
General
  • Does the work fulfil all of the requirements of the story on the ticket? This might seem obvious, but sometimes a dev can forget something. Or perhaps their reading of the requirement is different from yours, so what's been delivered might not meet actual expectations.
  • Does the work go off-piste anywhere? It should only address the requirement of the ticket. Not "oh, whilst I was in this file I saw something I could fix in this other bit of code". That can be done later on another ticket. Or: not at all. So check that the work has maintained focus.
  • As a reviewer, do you understand the code? This is vital. Remember code is a tool for communicating between developers, and it's not doing its job if you don't understand it. Code can be as fancy as you like, but if no-one can understand it, it should get flagged in code review.
  • All code needs to adhere to any prescribed guidelines:
    • In-house guidelines (whatever they are, but all teams should have them).
    • Probably by implication, external community-recognised guidelines such as PHP's PSR-12. People who are not currently in the team will need to understand this code later on, so let's write code that newcomers have more of a chance of understanding.
Specific
  • Are all the test cases in place and implemented?
  • Some code-sizing boundaries in coding guidelines docs won't literally be checked by the automated quality tests, so it's up to the reviewer to think about them. A method will be rejected by the automated tests if it's - for example - over 100 lines long. But if the target is 20, warning bells should be going off at 40. If it's 80 lines long, this probably warrants a closer look. The size restrictions are not there to be gamed (101 bad; 99 good), they're just something we can get code analysis tooling (eg: PHPMD) to flag up. It's down to the humans to agree whether the method is A-OK at 80 lines.
  • Equally, if new work takes a method that is 50 lines long and takes it up to 60 lines… it was already too big, so the dev ought to have extracted the piece of logic they needed to work on into a separate method, and done the work there. Or if the new work definitely belongs in the mainline of the method, then extract a different section of the method (so employing the boy scout rule). There is seldom any good reason for a method that is already oversized to get even bigger.
  • Do class / method / variable names make sense? Are they spelt right? Yeah, it matters.
  • Are classes / methods doing just one thing?
    • A controller should not be getting stuff from the database or implementing business logic.
    • A method called getAccount should be getting the account. Not creating it if it's missing and then returning it.
    • A repository class should not be implementing logging, etc.
    • A class that hits the DB and also sets constants for other classes to use is doing two things. One or other of them does not belong in that class.
    • Any method that takes a Boolean parameter is almost certainly doing two things: one if the flag is true; another if it's false. This is definitely a warning sign.
    • Look at the levels of indentation. Every level suggests that the method is doing too much.
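
The Boolean-parameter point is perhaps the least obvious one, so here's a contrived CFML example (getAccount and accountRepository are hypothetical, not from any real codebase). A flag argument usually means two methods are hiding in one:

```cfml
// before: one method, two behaviours, switched by a flag
function getAccount(required numeric id, boolean createIfMissing = false) {
    account = accountRepository.findById(id)
    if (isNull(account) && createIfMissing) {
        account = accountRepository.create(id)
    }
    return account
}

// after: each method does one thing, and the call site says which one it means
function getAccount(required numeric id) {
    return accountRepository.findById(id)
}

function getOrCreateAccount(required numeric id) {
    account = accountRepository.findById(id)
    return isNull(account) ? accountRepository.create(id) : account
}
```

getAccount(id, true) tells the reader nothing at the call site; getOrCreateAccount(id) tells them everything.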

The value of code review

  • The codebase will be better.
  • You might have learned some new coding techniques. You have improved as a developer.
  • You know more about the codebase, which mitigates the bus-factor risk for the company.
  • You help your colleagues stay on track with their work.
  • You get to practise your written communication skills.
  • We get a better historical record of why work was done.
  • Team cohesion. It's a team-building exercise.
  • Reading code. It's good practice reading other people's code.
  • Writing code. It's good practice writing code other people can read.

Let me know what you think about code review. Any considerations I've missed?

Righto.

--
Adam

Misc changes to environment for my ongoing Docker / Lucee / CFWheels series

G'day

This will be a bit of a scrappy article just summarising some changes to my project environment since the last article in this series on Lucee / CFWheels / Docker; "Adding TestBox, some tests and CFConfig into my Lucee container". By the end of that article I'd got Nginx proxying calls to Lucee, and some tests to verify its integrity and my expectations of how it ought to be working. I'm about to continue with an article about getting CFWheels to work (URL TBC), but before that - and for the sake of full disclosure - I'll detail these wee changes I've made.

It can connect Lucee to a MariaDB database and fetch records

The test summarises the aim here. /test/integration/TestDatabaseConnection.cfc:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests we can connect to the database", () => {
            it("can retrieve test records", () => {
                expectedRecords = queryNew("id,value", "int,varchar", [
                    [101, "Test row 1"],
                    [102, "Test row 2"]
                ])

                actualRecords = queryExecute("SELECT id, value FROM test ORDER BY id")
                
                expect(actualRecords).toBe(expectedRecords)
            })
        })
    }
}

Note that this is filed under test/integration because it's testing the integration between Lucee and the DB, rather than any business logic.

I've added some config to the test suite's Application.cfc too:

component {

    this.mappings = {
        "/cfmlInDocker/test" = expandPath("/test"),
        "/testbox" = expandPath("/vendor/testbox")
    }

    this.localmode = "modern"

    this.datasources["cfmlInDocker"] = {
        type = "mysql",
        host = "database.backend",
        port = 3306,
        database = "cfmlindocker",
        username = "cfmlindocker",
        password = server.system.environment.MYSQL_PASSWORD,
        custom = {
            useUnicode = true,
            characterEncoding = "UTF-8"
        }
    }
    this.datasource = "cfmlInDocker"
}

One key thing to note here is that I am setting this.localmode in here. Previously I was setting this in Lucee's global config via CFConfig, but Zac Spitzer dropped me a line and pointed out it could be set at runtime in Application.cfc. This is a much more elegant approach, so I'm running with it.

Other than that I'm setting a data source. Note I'm picking up the password from the environment, not hard-coding it. This is passed by the docker-compose.yml file:

lucee:
    build:
        context: ./lucee
        args:
            - LUCEE_PASSWORD=${LUCEE_PASSWORD}
    environment:
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}

For the implementation of this requirement I've added a Docker container for MariaDB, added a test table into it, and tested that Lucee can read data from it. This was all straightforward. Here are the file changes:

/docker/mariadb/Dockerfile:

FROM mariadb:latest

COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
COPY ./conf/logging.cnf /etc/mysql/conf.d/logging.cnf
RUN chmod -R 644 /etc/mysql/conf.d/logging.cnf

CMD ["mysqld"]

EXPOSE 3306

Nothing mysterious there. I'm using the entrypoint to create the DB table and populate it (docker-entrypoint-initdb.d/1.createAndPopulateTestTable.sql):

USE cfmlindocker;

CREATE TABLE test (
    id INT NOT NULL,
    value VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,

    PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO test (id, value)
VALUES
    (101, 'Test row 1'),
    (102, 'Test row 2')
;

ALTER TABLE test MODIFY COLUMN id INT auto_increment;

I'm also moving logging to a different directory so I can see them on my host machine (via conf/logging.cnf):

[mysqld]
log_error = /var/log/mariadb/error.log

This is all wired-together in docker-compose.yml

mariadb:
    build:
        context: ./mariadb
    environment:
        - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
        - MYSQL_DATABASE=${MYSQL_DATABASE}
        - MYSQL_USER=${MYSQL_USER}
        - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
        - "3306:3306"
    volumes:
        - mysqlData:/var/lib/mariadb
        - ./mariadb/root_home:/root
        - ../var/log:/var/log
    stdin_open: true
    tty: true
    networks:
        backend:
            aliases:
                - database.backend

volumes:
    mysqlData:

Note that I am sticking the DB data into a Docker volume instead of in a volume from my host machine. This means I need to take some care if I ever get around to adding non-test data into it, but for the time being it saves cluttering up my host machine with DB files, plus it's easier during initial configuration to completely reset the DB. It's easy enough to change later on when I need to.

I'm setting some of those magic environment variables in .env:

COMPOSE_PROJECT_NAME=cfml-in-docker
MYSQL_DATABASE=cfmlindocker
MYSQL_USER=cfmlindocker

# the following are to be provided to `docker-compose up`
LUCEE_PASSWORD=
DATABASE_ROOT_PASSWORD=
MYSQL_PASSWORD=

And the passwords when I build the containers:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker/docker$ DATABASE_ROOT_PASSWORD=123 MYSQL_PASSWORD=1234 LUCEE_PASSWORD=12345 docker-compose up --build --detach --force-recreate

It got rid of CFConfig

Both the Lucee settings I needed to change with CFConfig beforehand can be done natively with Lucee, so I didn't need CFConfig any more. I might need it again later, in which case I will re-install it. But for now it's dead weight. These are the lines I removed from the Dockerfile:

RUN box install commandbox-cfconfig
RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web
RUN box cfconfig set adminPassword=${LUCEE_PASSWORD} to=/opt/lucee/web
RUN echo ${LUCEE_PASSWORD} > /opt/lucee/server/lucee-server/context/password.txt # this handles the passwords for both server and web admins

It can run the tests from the shell

Running TestBox's tests in a browser is all very pretty, but not very practical. Fortunately I read the TestBox docs some more and found out how to run them from the shell. They show how to run them from within CommandBox's own special shell in "TestBox integration › Test runner", but that's weird and no help to me. However I finally twigged that whatever one might do within the special shell, one can also call from the normal shell via the box command (eg: box testbox run). All I needed to do to enable this was to tell CommandBox how to run the tests in docker/lucee/box.json, which is used by CommandBox in the docker/lucee/Dockerfile:

{
    "devDependencies":{
        "testbox":"^4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    },
    "testbox":{
        "runner":"http://localhost:8888/test/runTests.cfm"
    }
}

And the relevant lines in the Dockerfile:

COPY ./box.json /var/www/box.json
RUN mkdir -p /var/www/vendor
RUN box install

This has the benefit that the test run doesn't simply return a 200-OK all the time whether tests all passed or not; it exits with a 1 if there's any test failures. So it's usable in a CI/CD situation.

It resolves the slowness with CommandBox

In the previous article I observed that running stuff with CommandBox seemed to have about a 45sec overhead for any action. I tracked this down to the fact that I have my /root home directory as a volume from my host machine, so my various shell histories persist across container rebuilds. And I then realised that CommandBox dumps a whole lot of shite in that directory which it needs to load every time it runs. Because of the shenanigans Docker needs to do when bridging from its file system across to WSL across to the native Windows file systems, these operations are S-L-O-W. OK for a few files. Not OK for stacks of them.

Fortunately CommandBox can be configured to put its temp files elsewhere, so I have configured it to put them in /var/tmp instead. As they regenerate if they are missing, this seems like the best place for them. It also prevents clutter leaking out of my container and onto my host machine. This is done via a commandbox.properties file:

commandbox_home=/var/tmp/commandbox

Which I copy into place in the Dockerfile. CommandBox picks it up automatically when I place it there:

COPY ./commandbox.properties /usr/local/bin/commandbox.properties

Good stuff. Now it only takes about 5sec for box to start doing anything, which is fine.

It no longer has the problem with path_info

I covered the shortfall in how Lucee handles path_info in "Repro for Lucee weirdness". I've managed to work around this. Kind of. In a way that solves the problem for this project anyhow.

Well I guess really it is just "learning to live with it". I've done some other experimentation with CFWheels, and all it uses path_info for is indeed to implement semi-user-friendly URLs tacked on to index.cfm. It has no need for any other .cfm file to use its path_info, so the default mappings are actually fine as they are.

However it occurred to me when I was configuring Nginx to do its part of the user-friendly URLs that all requests coming into Lucee from the web server will land in /public, so I could just put in a servlet mapping for the index.cfm in that directory (from web.xml):

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <url-pattern>*.cfm</url-pattern>
    <url-pattern>*.cfml</url-pattern>
    <url-pattern>*.cfc</url-pattern>
    <url-pattern>/index.cfm/*</url-pattern>
    <url-pattern>/index.cfc/*</url-pattern>
    <url-pattern>/index.cfml/*</url-pattern>

    <url-pattern>/public/index.cfm/*</url-pattern>
</servlet-mapping>

One might think that instead of using <url-pattern>/public/index.cfm/*</url-pattern>, I might be able to just specify a match for the entire directory, like this: <url-pattern>/public/*</url-pattern>. From the POV of Tomcat's expectations this ought to be good enough, but Lucee doesn't treat that as an "anything in that directory" match: it expects the pattern to point at a CFML file, so when I tried that I just got an error along the lines of "/public is a directory". Ah well. FWIW, ColdFusion said pretty much the same thing.

One downside to this is that I cannot work out how to add a servlet mapping just for this Lucee application, so I need to replace the entire Tomcat web.xml file with another one that has just one additional line (the original file is 4655 lines long). This is less than ideal, and I've followed it up on the Lucee Slack channel. I just copy the file over in the Dockerfile:


COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc
COPY ./web.xml /usr/local/tomcat/conf/web.xml

I had to rename my test file to index.cfm (so this means the test will need to just go once I install CFWheels which needs that file), but for now I was able to test the change:


it("passes URL path_info to Lucee correctly", () => {
    testPathInfo = "/additional/path/info/"

    http url="http://cfml-in-docker.frontend/index.cfm#testPathInfo#" result="response";

    expect(response.status_code).toBe(200, "HTTP status code incorrect")
    expect(response.fileContent.trim()).toBe(testPathInfo, "PATH_INFO value was incorrect")
})

And index.cfm itself just echoes the value back:

<cfoutput>#CGI.path_info#</cfoutput>

Oh! And the Nginx changes! docker/nginx/sites/default.conf:

location / {
    try_files $uri $uri/ @rewrite;
}

location @rewrite {
    rewrite ^/(.*)? /index.cfm$request_uri last;
    rewrite ^ /index.cfm last;
}

(Thanks to the ColdBox docs for those)

It no longer needs PHP to test things

I'm happy that TestBox is working well enough now that I don't need to test things with PHPUnit, and that's all the PHP container was for, so I've removed all that stuff.


That's it. In the next article I shall continue from here, and get CFWheels set up in a way that doesn't require the entire application being a) messed in with my own code; b) in a web-browsable directory. Stay tuned…

Righto.

--
Adam

Tuesday, 20 April 2021

Heads-up: Disqus is being flaky ATM

G'day:

A quick note to advise that I am not getting email notifications of all comments people are making on my blog articles ATM. This is really annoying, and I think it makes me seem a bit disrespectful, in that it looks like I'm ignoring stuff. I really do value everyone's efforts to read and comment on the shite I push out here. If it seems like I'm ignoring you, ping me on Twitter @adam_cameron. I've downgraded my interactions on that platform, but I still pay attention to it. If it's CFML-related, I'm also back on the CFML Slack channel.

Righto.

--
Adam

Why I've been looking at CFML again recently

G'day:

Just a quick one. You might have noticed that the subject matter of this thing abruptly changed from the usual PHP-oriented stuff, back to looking at CFML-oriented stuff. This deserves an explanation.

In case you didn't know, I finished my role at Hostelworld.com about a year ago. Since then I've been largely pottering around, avoiding CV-19 and joining in with the ubiquitous societal disillusionment / confusion / malaise surrounding the realities of not being able to do whatever I want whenever I want to, and being largely locked-up most of the time.

For reasons I won't go into, I've not needed to look for work. So I decided to wait until "things settled down". Which basically wrote-off 2020. I am really lucky to be in that position, and I feel for people who had to take things more urgently in the commercial climate that was 2020.

2021 rolled around and I decided I had better get my head out of my arse and find a job. This was quite daunting as I'd not needed to interview for a job for ten years, and I was reasonably rusty at it. I also decided to be fairly picky about what I looked at, given there was no financial desperation in the mix.

I went for two job interviews, and didn't get either. One made me laugh cos they reckoned I didn't know enough about testing. It's probably for the best I didn't end up there.

A week or so ago I started to talk to Easy Direct Debits Ltd, and this has worked out well for me (and hopefully them…). I'm starting today - I'll clock-on in about 15min - as "Technical Team Lead". Cool. I've met (well: via Zoom) two of the bods above me in the foodchain there, and they both seem like good blokes. This makes for a positive start. I generally look forward to going to work each day, but I'm enthusiastic (as much as I get enthusiastic about stuff) about cracking on in an environment that's new to me.

But not that new, and back to the subject line of this article: it's a CFML shop. I'm a wee bit rusty with my CFML, hence giving myself some exercises this last week. And my boss has given me more to do today. Ha. I will also be maintaining my focus on TDD, automated testing, and code quality. This is a big part of my role there. And this is excellent.

I'll be rejoining the CFML Slack community shortly. Apologies in advance to everyone there ;-)

And just to close… I can't not link to this, in the circumstances:

Righto.

--
Adam

Monday, 19 April 2021

How (not to) apply a feature toggle in your code

G'day:

I've had these notes lying around for a while, but they were never interesting enough (to me) to flesh out into a full article. But feature toggling has been on my mind recently (OK, it's because of Working Code Podcast's coverage of same: "018: Feature Flags (Finally!)"), so I figured I'll see what I can do with it.

BTW you don't need to listen to the podcast first. It doesn't contextualise anything I say here (well maybe one thing down the bottom), it was just the catalyst for the decision to write this up. But go listen anyway. It's cool.

I see a lot of code where the dev seems to have thought "OK, I need to alternate behaviour here based on one of the toggles our FeatureToggleService provides, which is based on the request the controller receives". They are looking at this code:

class SomeDao {

    SomeDao(connector){
        this.connector = connector
    }

    getRecords() {
        // lines of code
        // that get records
        // from this.connector
        // and does something with them
        // and then something else
        // and return them
    }
}

And the alternative behaviour is to get the records from somewhere else, process them slightly differently, but the end result is the same sorta data structure. They decide to do this:

class SomeDao {

    SomeDao(someConnector, someOtherConnector) {
        this.someConnector = someConnector
        this.someOtherConnector = someOtherConnector
    }

    getRecords(someFeatureFlag) {
        // lines of code
        if (someFeatureFlag) {
            // that get records the new way
            // from this.otherConnector
            // and does something different with them
        } else {
            // that get records the old way
            // from this.connector
            // and does something with them
        }
        // and then something else
        // and return them
    }
}    

So the code that calls that needs to know the feature flag value to pass to getRecords. So they update it:

class SomeRepository {

    getObjects(someFeatureFlag) {
        records = this.dao.getRecords(someFeatureFlag)
        
        // code to make objects from the record data
        
        return objects
    }
}

But yeah the repo method needs to know about the someFeatureFlag value now, so we update the service that calls it:

class SomeService {

    getBusinessObjects(someFeatureFlag) {
        // code that does some business logic before we can get the objects

        objects = this.repo.getObjects(someFeatureFlag)
        
        // other code that does some business logic after we've got the objects
        
        return stuffTheControllerNeeds
    }
}

And the controller that calls the service:

class SomeController {

    SomeController(featureFlagService) {
        this.featureFlagService = featureFlagService
    }

    fulfilSomeRequest(request) {
        // some request stuff

        someFeatureFlag = this.featureFlagService.getSomeFeatureFlag(basedOn=request)
        results = this.service.getBusinessObjects(someFeatureFlag)
        
        // use those results to make the response
        
        return response
    }
}

And then somewhere between the routing and the call to the controller method, there's an IoC container stitching everything together:

class SomeIocContainer {

    getSomeController() {
        return new SomeController(
            new SomeService(
                new SomeRepository(
                    new SomeDao(
                        new SomeConnector(),
                        new SomeOtherConnector()
                    )
                )
            ),
            new FeatureFlagService()
        )
    }
}

And we conclude we need to do that, because the controller is the only thing that knows about the request, and we need the request to get the correct toggle value.

Hrm. But that's less than ideal (no shit, Sherlock)… maybe we could give the DAO constructor the FeatureToggleService? We'd then still need to pass the request through the call stack. Deep breath:

class SomeIocContainer {

    getSomeController() {
        return new SomeController(
            new SomeService(
                new SomeRepository(
                    new SomeDao(
                        new SomeConnector(),
                        new SomeOtherConnector(),
                        new FeatureFlagService()
                    )
                )
            )
        )
    }
}

class SomeController {

    fulfilSomeRequest(request) {
        // some request stuff

        results = this.service.getBusinessObjects(request)
        
        // use those results to make the response
        
        return response
    }
}

class SomeService {

    getBusinessObjects(request) {
        // code that does some business logic before we can get the objects

        objects = this.repo.getObjects(request)
        
        // other code that does some business logic after we've got the objects
        
        return stuffTheControllerNeeds
    }
}

class SomeRepository {

    getObjects(request) {
        records = this.dao.getRecords(request)
        
        // code to make objects from the record data
        
        return objects
    }
}

class SomeDao {

    SomeDao(someConnector, someOtherConnector, featureFlagService) {
        this.someConnector = someConnector
        this.someOtherConnector = someOtherConnector
        this.featureFlagService = featureFlagService
    }

    getRecords(request) {
        // lines of code
        if (this.featureFlagService.getSomeFeatureFlag(basedOn=request)) {
            // that get records the new way
            // from this.otherConnector
            // and does something different with them
        } else {
            // that get records the old way
            // from this.connector
            // and does something with them
        }
        // and then something else
        // and return them
    }
}

This is not an improvement. I'd go as far as to say it's worse because there's no way the request should ever leak out of its controller method.

And you might think I'm exaggerating here, but I have seen both those implementations in the past. I mean: I saw some today. And in the former - historical - cases the devs concerned sincerely went "it's not possible any other way: I need the toggle in the DAO, and the featureFlagService needs the request object to work out the toggle value".

I take the dev back to a pseudo-code version of their DAO method:

getRecords(something) {
    // lines of code
    if (something) {
        // that get records the new way
        // from this.otherConnector
        // and does something different with them
    } else { // something else
        // that get records the old way
        // from this.connector
        // and does something with them
    }
    // and then something else
    // and return them
}

The first red flag here is the else. Don't have elses in yer code. You don't need them, and they are a code smell. We extract that code:

getRecords(something) {
    // lines of code
    things = doTheThing(something)
    // and then something else
    // and return them
}

doTheThing(something){
    if (something) {
        // that get records the new way
        // from this.otherConnector
        // and does something different with them
        return them
    }
    // something else
    // that get records the old way
    // from this.connector
    // and does something with them
    return them
}

But this makes it more clear we're going the wrong way about this. doTheThing? How many things is it doing? Two possible things. And that is based on it doing a third thing: checking which of the two things it's supposed to do. So three things. How many things is any one bit of code supposed to do? One thing. And even if we refactor that doTheThing method some more:

doTheThing(something){
    if (something) {
        return getThemTheNewWay()
    }
    
    return getThemTheOldWay()
}

We've still got a DAO now that's doing things in two different ways; and it still needs to know how to make that decision. It's not the job of a DAO to check feature toggles. It's the job of a DAO to get data from storage. So even with a decent clean-code refactoring, it still smells from an architectural POV.

(Oh yeah and I happen to be using a DAO in this example, but the same applies to any sort of class. It's going to be its job to doTheThing, not make decisions about how to do thing based on a feature toggle).

How are we supposed to implement feature toggling in our code? In our object oriented code? Use OOP.

Use a factory method in the IOC container to get the correct DAO for the job:

class SomeIocContainer {

    getDao() {
        if (this.featureFlagService.getSomeFeatureFlag(basedOn=this.request)) {
            return new SomeOtherDao(new SomeOtherConnector())
        }
        return new SomeDao(new SomeConnector())
    }

    getSomeController() {
        return new SomeController(
            new SomeService(
                new SomeRepository(
                    this.getDao()
                )
            )
        )
    }
}

It is precisely the job of a factory method to make a call as to which implementation of a class to use. And it is precisely the job of an IOC container to compose services. Each are doing one job.

DAOs:

class SomeDao {

    SomeDao(connector){
        this.connector = connector
    }

    getRecords() {
        // lines of code
        records = getRecordsFromConnector()
        // and then something else
        // and return them
    }
    
    getRecordsFromConnector() {
        // that get records
        // from this.connector
        // and does something with them
    }
}

This is just a refactoring of the original method to pull some implementation detail into its own method. It has no idea about feature toggles, or new ways of doing things.

class SomeOtherDao extends SomeDao {
    
    getRecordsFromConnector() {
        // that get records the new way
        // from this.connector
        // and does something different with them
    }
}

This implementation only knows about the new way of doing things. Other than that it's the same DAO as the original. It also doesn't know anything about feature toggles.

One could put a case forward here that getRecordsFromConnector might be a member of a different class which is injected into the one original DAO. IE: favouring composition over inheritance. This is a case of "it depends", I think. And either way, that's just shifting the implementation: the mechanism is still the same.

Aside from the architectural improvements, when we come to remove the toggle, all we need to do is remove that one if statement, in an isolated and very simple factory method, nowhere near any implementation logic. That's it.
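Pulled together, the whole pattern can be sketched in a few lines of a general-purpose language (Python here; all class names are hypothetical, mirroring the pseudocode above). The point is that the toggle is consulted in exactly one place, the factory:

```python
class OldConnector:
    def fetch(self):
        return ["record"]

class NewConnector:
    def fetch(self):
        return ["record"]

class SomeDao:
    """Old implementation: only knows how to talk to its connector."""
    def __init__(self, connector):
        self.connector = connector

    def get_records(self):
        return self.connector.fetch()

class SomeOtherDao(SomeDao):
    """New implementation: same contract, different retrieval strategy."""
    def get_records(self):
        return [record.upper() for record in self.connector.fetch()]

class Container:
    """The only place that ever looks at the feature flag."""
    def __init__(self, flag_is_on):
        self.flag_is_on = flag_is_on

    def get_dao(self):
        # Removing the toggle later means deleting just this `if`.
        if self.flag_is_on:
            return SomeOtherDao(NewConnector())
        return SomeDao(OldConnector())

# Toggle off: old DAO. Toggle on: new DAO. Nothing downstream knows.
assert isinstance(Container(False).get_dao(), SomeDao)
assert isinstance(Container(True).get_dao(), SomeOtherDao)
```

Neither DAO, nor anything that consumes a DAO, ever sees the flag.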

Bottom line, here are two things to think about when implementing your toggling:

  • If you are needing to refer to the toggle in more than one place - passing it around, or checking it more than once - you are probably doing something in a less than ideal fashion.
  • If you have the code that checks the toggle and the code that acts on the toggle in the same place, you are also probably not nailing it.

Maybe you are nailing it for the situation you are in. But you should definitely be asking yourself about your architecture and your OOP if you find yourself in either of those situations. And it's more than likely gonna be really easy to design the solution cleanly.


Oh and of course you need to bloody test your toggling. You're adding a thumping great decision into yer app. Feature toggling doesn't free you from testing, it increases your testing burden, because it's another moving part in your app. The clue is in the acceptance criteria: "it should do things this new way in this new situation" and "it should keep doing stuff the old way in the usual run of events". Those are your test cases. And by the very nature of the toggling approach, it's obviously a sitter for test automation rather than eyeballing it every time you do a release. Anyone who suggests feature toggling somehow makes their code more robust and requires less testing as a result (/me looks directly down camera at Ben) is having a laugh ;-)
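Those two acceptance criteria translate directly into two test cases, one per toggle state. A trivially small sketch (Python, hypothetical names; the toggled function stands in for whatever behaviour is being switched):

```python
def get_greeting(toggle_is_on):
    # Stand-in for any toggled behaviour: new path vs old path.
    if toggle_is_on:
        return "g'day (new)"
    return "g'day (old)"

def test_new_behaviour_when_toggle_on():
    assert get_greeting(True) == "g'day (new)"

def test_old_behaviour_when_toggle_off():
    assert get_greeting(False) == "g'day (old)"

test_new_behaviour_when_toggle_on()
test_old_behaviour_when_toggle_off()
print("both toggle states covered")
```

Both states stay covered for the whole life of the toggle, and when the toggle goes, one of the tests goes with it.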

And on that semi-in-joke note, that's enough out of me for today. Got tomorrow to get my brain ready for, and practising my line "just when I thought I was out…".

Righto.

--
Adam

Adding TestBox, some tests and CFConfig into my Lucee container

G'day:

On Fri/Sat (it's currently Sunday evening, but I'll likely not finish this until Monday now) I started looking at getting some CFML stuff running on Lucee in a Docker container (see earlier/other articles in this series: Lucee/CFWheels/Docker series). If you like you can read about that stuff: "Using Docker to strum up an Nginx website serving CFML via Lucee" and "Repro for Lucee weirdness". This article resumes from where I got to with the former one, so that one might be good for some context.

Full disclosure: I spent all today messing around in a spike: experimenting with stuff, and now am finally happy I have got something to report back on, so I have rolled back the spike and am going to do the "production" version of it via TDD again. I mention this - and it's not the first time - because if yer doing TDD, it's OK to spike-out and do a bunch of stuff to work out how to do things without testing every step. Especially if yer like me and start from a position of having NFI what you need to do. However once yer done: revert everything and start again, testing-first as you go. What I've done here is save all my stuff in a branch, and now I'm looking at a diff of that and main, as a reference for what I actually need to do, and what is fluff that represents a dead end, or something I didn't need to do anyhow, or whatever.

It needs to only expose public stuff to the public

As per the article subject line, today I'm gonna install a bit more tooling and get myself in a better position to do some dev work. The first thing I noticed is that, as things stand, the Nginx website is serving everything in the Lucee application root (/var/www), whereas that directory is going to be a home directory for the application code, test code and third-party apps, so I don't want that browsable. I'm going to shift things around a bit. A notional directory structure would be along these lines:

root
├── public
│   ├── css
│   ├── images
│   ├── js
│   ├── Application.cfc
│   ├── favicon.ico
│   └── index.cfm
├── src
│   └── Application.cfc
├── test
└── vendor
    ├── cfwheels
    └── testbox

I've taken a fairly Composer / PHP approach there, but I could equally follow a more Java-centric approach to the directory structure:

root
├── com
│   └── ortussolutions
│       └── testBox
├── me
│   └── adamcameron
│       └── cfmlInDocker
│           ├── src
│           │   └── Application.cfc
│           └── test
├── org
│   └── cfwheels
│       └── cfwheels
└── public
    ├── css
    ├── images
    ├── js
    ├── Application.cfc
    ├── favicon.ico
    └── index.cfm

The point being: the website directory and my code and other people's code should be kept well away from one another. That's just common-sense.

Anyway, back to the point. Whichever way I organise the rest of things, only stuff that is supposed to be browsed-to should be browsable. Everything else should not be. So I'm gonna move the website's docroot, as well as the files that need to be served. This is just a "refactoring" exercise, so no tests should change here. We just want to make sure they still all pass.

This just means some changes to docker-compose.yml:

lucee:
    build:
        context: ./lucee
    volumes:
        - ../public:/var/www/public
        - ../src:/var/www/src
        - ../test:/var/www/test
        - ../var/log/tomcat:/usr/local/tomcat/log
        - ../var/log/lucee:/opt/lucee/web/logs
        - ./lucee/root_home:/root

And the website config (docker/nginx/sites/default.conf):

location ~ \.(?:cfm|cfc) {
    # ...

    proxy_pass  http://cfml-in-docker.lucee:8888/public$fastcgi_script_name$is_args$args;
}

Once I rebuild the containers, I get two failing tests. Oops: I did not expect that. What's going on? Checking the front-end, the public-facing website still behaves the same. So… erm …?

One thing that didn't occur to me when doing this change is that a couple of the tests are hitting the internal Lucee website (reminder: the public-facing website for me is http://cfml-in-docker.frontend/ and the internal Lucee website is http://cfml-in-docker.lucee:8888/). And that internal website still points to /var/www/, so where previously I'd access http://cfml-in-docker.lucee:8888/remoteAddrTest.cfm, now the URL for the backend site is http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm. This is by design (kinda, just… for now), but I forgot about this when I made the change.

This means my change is not simply a refactoring: therefore I need to start with a failing test. I roll back my config changes, fix the tests so they hit http://cfml-in-docker.lucee:8888/public/gdayWorld.cfm and http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm respectively, and watch them fail. Good. Now I roll forward my config changes again and see the tests pass: cool. Job done.

Later when I'm reconfiguring things I might remap it to /var/www/public, if I can work out how to do that without hacking Tomcat config files too much. But remember the test case here: It needs to only expose public stuff to the public. And we've achieved that. Let's not worry about a test case we don't need to address for now. Moving on…

It can run tests with TestBox

Currently I am running my tests via a separate container running PHP and PHPUnit. This has been curling the website to test Nginx and Lucee behaviour. Now that I have Lucee working, I can shift the tests to TestBox, which is - as far as I know - the current state of the art when it comes to testing CFML code. It provides both xUnit and Jasmine-style testing syntax.

The test for this is going to be a "physician heal thyself" kind of affair. I'm going to write a TestBox test. Once I can run it and it doesn't just go splat: job done. The test is simply this:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Baseline TestBox tests", () => {
            it("can verify that true is, indeed, true", () => {
                expect(true).toBe(true)
            })
        })
    }
}

TestBox seems to be web-based. I'd much prefer just running my tests from the shell like I would with any other testing framework I've used in the last 7-8 years, but CFML has much more of an "it's aimed at websites, so everything is implemented as a website" mentality (hammers and nails spring to mind here). So be it, I guess. I do see that TestBox has the ability to integrate with Ant, but that seems to be via an HTTP request as well. Hrm. What I know is I can't simply do something like testbox /path/to/my/tests or similar. What I do need to do is write a small runner file (runTests.cfm), which I then browse to:

<cfscript>
    testBox = new testbox.system.TestBox(directory="cfmlInDocker.test")
    result = testBox.run(
        reporter = "testbox.system.reports.SimpleReporter"
    )
    writeOutput(result)
</cfscript>

To use those testbox.system and cfmlInDocker.test paths, I need to define mappings for them at application level (ie: not in the same file that uses them, but in a different, unrelated file: Application.cfc):

component {
    this.mappings = {
        "/cfmlInDocker/test" = expandPath("/test"),
        "/testbox" = expandPath("/vendor/testbox")
    }
}

And when I browse to that, I get a predictable error:

Let's call that our "failing test".

OK so, right: we install TestBox from ForgeBox (think packagist.org or npmjs.com). And to install stuff from ForgeBox I need CommandBox. That is pretty straightforward; just a change to my Lucee Dockerfile:

FROM lucee/lucee:5.3

RUN apt-get update
RUN apt-get install vim --yes

COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc

WORKDIR  /var/www

RUN curl -fsSL https://downloads.ortussolutions.com/debs/gpg | apt-key add -
RUN echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
RUN apt-get update && apt-get install apt-transport-https commandbox --yes
RUN echo exit | box
EXPOSE 8888

That last step there is because CommandBox needs to configure itself before it works, so I might as well do that when it's first installed.

Once I rebuild the container with that change, we can get CommandBox to install TestBox for us:

root@b73f0836b708:/var/www# box install id=testbox directory=vendor savedev=true
√ | Installing package [forgebox:testbox]
   | √ | Installing package [forgebox:cbstreams@^1.5.0]
   | √ | Installing package [forgebox:mockdatacfc@^3.3.0+22]

root@b73f0836b708:/var/www#
root@b73f0836b708:/var/www#
root@b73f0836b708:/var/www# ll vendor/
total 12
drwxr-xr-x 3 root root 4096 Apr 19 09:23 ./
drwxr-xr-x 1 root root 4096 Apr 19 09:23 ../
drwxr-xr-x 9 root root 4096 Apr 19 09:23 testbox/
root@b73f0836b708:/var/www#

Note that commandbox is glacially slow to do anything, so be patient rather than be like me going "WTH is going on here?" Check this out:

root@b73f0836b708:/var/www# time box help

**************************************************
* CommandBox Help
**************************************************

Here is a list of commands in this namespace:

// help stuff elided…

To get further help on any of the items above, type "help command name".

real    0m48.508s
user    0m10.298s
sys     0m2.448s
root@b73f0836b708:/var/www#

48 bloody seconds?!?! Now… fine. I'm doing this inside a Docker container. But even still. Blimey, fellas. This is the equivalent for Composer:

root@b21019120bca:/usr/share/cfml-in-docker# time composer --help
Usage:
  help [options] [--] [<command_name>]

// help stuff elided…


To display the list of available commands, please use the list command.

real    0m0.223s
user    0m0.053s
sys     0m0.035s
root@b21019120bca:/usr/share/cfml-in-docker#

That is more what I'd expect. I suspect they are strumming up a CFML server inside the box application to execute CFML code to do the processing. Again: hammer and nails eh? But anyway, it's not such a big deal. The important thing is: did it work?

Yes it bloody did! Cool! Worth the wait, I say.

I still need to find a way to run it from the shell, and I also need to work out how to integrate it into my IDE, but I have a minimum baseline of being able to run tests now, so that is cool.

The installation process also generated a box.json file, which is the equivalent of a composer.json / package.json file:

root@b73f0836b708:/var/www# cat box.json
{
    "devDependencies":{
        "testbox":"^4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    }
}

It doesn't seem to have a corresponding lock file though, so I'm wondering how deployment works. The .json dictates what could be installed (eg: for testbox, ^4.2.1+400 means anything from 4.2.1+400 up to, but not including, 5.0), but there's nothing recording what is installed, eg: specifically 4.2.1+400. If I run this process tomorrow, I might get 4.3 instead. It doesn't matter so much with dev dependencies, but for production dependencies one wants to make sure that whatever version is being used on one box will also be what gets installed on another box. Which is why one needs some lock-file concept. The Composer docs explain this better than I have (and NPM works the same way). Anyway, it's fine for now.
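In the meantime, one way to approximate that pinning is to replace the semver range in box.json with an exact version (a sketch only; whether this also pins transitive dependencies I've not checked):

{
    "devDependencies":{
        "testbox":"4.2.1+400"
    },
    "installPaths":{
        "testbox":"vendor/testbox/"
    }
}

With an exact version, box install should fetch the same testbox build on every machine, which is the property a lock file would otherwise give you.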

Now that I have the box.json file, I can simply run box install in my Dockerfile:

# …
WORKDIR  /var/www

RUN curl -fsSL https://downloads.ortussolutions.com/debs/gpg | apt-key add -
RUN echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
RUN apt-get update && apt-get install apt-transport-https commandbox --yes
RUN echo exit | box

COPY ./box.json /var/www/box.json
RUN mkdir -p /var/www/vendor
RUN box install

EXPOSE 8888

It runs all the same tests via TestBox as it does via PHPUnit

I'm not going to write some fancy test that actually tests that my tests match some other tests (I'm not that retentive about TDD!). I'm just going to implement the same tests I've already got in PHPUnit, in TestBox. Just as some practice at TestBox, really. I've used it in the past, but I've forgotten almost everything I used to know about it.

Actually that was pretty painless. I'm glad I took the time to properly document CFScript syntax a few years ago, as I'd forgotten how Railo/Lucee handled tags-in-script, and the actual Lucee docs weren't revealing this to me very quickly. That was the only hitch I had along the way.

All the tests are variations on the same theme, so I'll just repeat one of the CFCs here (NginxProxyToLuceeTest.cfc):

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests Nginx proxies CFML requests to Lucee", () => {
            it("proxies a CFM request to Lucee", () => {
                http url="http://cfml-in-docker.frontend/gdayWorld.cfm" result="response";

                expect(response.status_code).toBe( 200, "HTTP status code incorrect")
                expect(response.fileContent.trim()).toBe( "G'day world!", "Response body incorrect")
            })

            it("passes query values to Lucee", () => {
                http url="http://cfml-in-docker.frontend/queryTest.cfm?testParam=expectedValue" result="response";

                expect(response.status_code).toBe( 200, "HTTP status code incorrect")
                expect(response.fileContent.trim()).toBe( "expectedValue", "Query parameter value was incorrect")
            })

            it("passes the upstream remote address to Lucee", () => {
                http url="http://cfml-in-docker.lucee:8888/public/remoteAddrTest.cfm" result="response";
                expectedRemoteAddr = response.fileContent

                http url="http://cfml-in-docker.frontend/remoteAddrTest.cfm" result="testResponse";
                actualRemoteAddr = testResponse.fileContent

                expect(actualRemoteAddr).toBe(expectedRemoteAddr, "Remote address was incorrect")
            })
        })
    }
}

The syntax for making an http request uses that weirdo syntax that is neither fish nor fowl (and accordingly confuses Lucee itself as to what's a statement and what isn't, hence needing the semi-colon), but other than that it's all quite tidy.

And evidence of them all running:

I can get rid of the PHP container now!

It uses CFConfig to make some Lucee config tweaks

An observant CFMLer will notice that I did not var my variables in the code above. For non-CFMLers: one generally needs to actively declare a variable as local to the function it's in (var myLocalVariable = "something"); without that var keyword it's global to the object it's in. This was an historically poor design decision by Macromedia, but we're stuck with it now. Kinda: Lucee has a setting such that the var is optional, and I've switched this setting on for this code.
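For non-CFMLers, here's a rough Python model of the two behaviours (purely illustrative: the names are mine, and real CFML scope resolution is more involved than this):

```python
# Rough model: in CFML's classic mode, an undeclared assignment inside a
# function lands in the component's "variables" scope; "modern" local scope
# mode sends it to the function-local scope instead. Purely illustrative.

def assign(name, value, *, local_scope, variables_scope, mode):
    if mode == "modern":                 # what localScopeMode=modern does
        local_scope[name] = value
    else:                                # classic CFML behaviour
        variables_scope[name] = value

# Classic mode: the un-var'd variable leaks into the component's scope.
local, variables = {}, {}
assign("testVariable", "value", local_scope=local, variables_scope=variables, mode="classic")
assert "testVariable" in variables and "testVariable" not in local

# Modern mode: the same assignment stays local to the function.
local, variables = {}, {}
assign("testVariable", "value", local_scope=local, variables_scope=variables, mode="modern")
assert "testVariable" in local and "testVariable" not in variables
```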

Traditionally settings like this need to be managed through the Lucee Administrator GUI, but I don't wanna have to horse around with that: it's a daft way of setting config. There's no easy out-of-the-box way of making config changes like this outside the GUI, but there's a tool, CFConfig, that lets me do it with "code". Aside: why is this not called ConfigBox?

Before I do the implementation, I can actually test for this:

component extends=testbox.system.BaseSpec {

    function run() {
        describe("Tests Lucee's config has been tweaked", () => {
            it("has 'Local scope mode' set to 'modern'", () => {
                testVariable = "value"

                expect(variables).notToHaveKey("testVariable", "testVariable should not be set in variables scope")
                expect(local).toHaveKey("testVariable", "testVariable should be set in local scope")
            })
        })
    }
}

Out of the box that first expectation will fail. Let's fix that.

Installing CFConfig is done via CommandBox/Forgebox, and I can do that within the Dockerfile:

RUN box install commandbox-cfconfig

Then I can make that setting change, thus:

RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web

I'm going to do one more tweak whilst I'm here. The Admin UI requires a coupla passwords to be set, and by default one needs to do the initial setting by putting the password in a file on the server and importing it. Dunno what that's all about, but I'm not having a bar of it. We can sort this out with the Dockerfile and CFConfig too:

FROM lucee/lucee:5.3

ARG LUCEE_PASSWORD

# a bunch of stuff elided for brevity…

RUN box install commandbox-cfconfig
RUN box cfconfig set localScopeMode=modern to=/opt/lucee/web
RUN box cfconfig set adminPassword=${LUCEE_PASSWORD} to=/opt/lucee/web # for web admin
RUN echo ${LUCEE_PASSWORD} > /opt/lucee/server/lucee-server/context/password.txt # for server admin (actually seems to deal with both, now that I check)

EXPOSE 8888

That argument is passed by docker-compose, via docker-compose.yml:

lucee:
    build:
        context: ./lucee
        args:
            - LUCEE_PASSWORD=${LUCEE_PASSWORD}

And that in turn is passed-in via the shell when the containers are built:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker/docker$ LUCEE_PASSWORD=12345678 docker-compose up --build --detach --force-recreate

I'd rather use CFConfig for both the passwords, but I could not find a setting to set the server admin one. I'll ask the CFConfig bods. I did find a setting to just disable the login completely (adminLoginRequired), but I suspect that setting is only for ColdFusion, not Lucee. It didn't work for me on Lucee anyhow.

It has written enough for today

I was gonna try to tackle getting CFWheels installed and running in this exercise too, but this article is already long enough and this seems like a good place to pause. Plus I've just spotted someone being wrong on the internet, and I need to go interject there first.

Righto.

--
Adam

Saturday, 17 April 2021

Using Docker to strum up an Nginx website serving CFML via Lucee

G'day

OK so this is not the blog article I expected to be writing, had you asked me two weeks ago. But here we are. I'll go into the reason why I'm doing this a bit later.

This will be a CFML-oriented version of the "VueJs/Symfony/Docker/TDD series", and has its own tag: "Lucee/CFWheels/Docker series":

  • Nginx website.
  • Proxying for Lucee as the CFML-processing application layer.
  • Running inside Docker containers.
  • TDD the whole enterprise.

If I have time (and any will-to-live remaining), I will add this lot into the mix:

  • Work out how Forgebox works, which seems to be CFML's equivalent of Composer / NPM
  • Use that to install Testbox (CFML-based Jasmine-ish testing framework)
  • And also install CFWheels, a CFML-based framework akin to Ruby on Rails.

I'll also be returning to SublimeText for the first time in seven-or-so years. Apparently it's still a reasonable text editor to use for CFML code.

For those few of you that have started paying attention to me more recently: CFML is not new to me. I spent over a decade as a CFML developer (2001-2013). I shifted to PHP because my erstwhile employer (HostelBookers, a CFML shop) was bought by Hostelworld (a PHP shop) back then, and I've been doing PHP since. That said, I am very rusty with CFML, and - well, hopefully - the CFML landscape has moved on since then too. So whilst I'm not a newbie with CFML stuff, getting Lucee running in a container, Forgebox and CFWheels are entirely new to me.

I'm still gonna be using PHP to do the initial testing of things, because I won't have Testbox running for the first while. So I'll need a PHP container too. I'll refactor this out once I get Testbox in.

It needs a PHP container for running tests

There's nothing new here, and what I've done is largely irrelevant to this exercise, so I'll just list the files and link through to the current state of the files in source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── php-cli
│   │   ├── root_home
│   │   │   ├── .bash_history
│   │   │   ├── .bashrc
│   │   │   ├── .gitignore
│   │   │   └── .vimrc
│   │   └── Dockerfile
│   ├── .env
│   └── docker-compose.yml
├── test
│   └── php
│       └── SelfTest.php
├── .gitignore
├── LICENSE
├── README.md
├── composer.json
├── composer.lock
└── phpunit.xml.dist

5 directories, 14 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The test is just this:

use PHPUnit\Framework\TestCase;

/** @testdox Tests PHPUnit install */
class SelfTest extends TestCase
{
    /** @testdox it self-tests PHPUnit */
    public function testSelf()
    {
        $this->assertTrue(true);
    }
}

And it passes:

root@18c5eabeb9f2:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests PHPUnit install
it self-tests PHPUnit

Time: 00:00.002, Memory: 6.00 MB

OK (1 test, 1 assertion)
root@18c5eabeb9f2:/usr/share/cfml-in-docker#

In this instance I could not actually run the test before I implemented the work, for what should seem obvious reasons. However I followed the TDD mindset of just doing the least amount of work possible to make the test pass. I also monkeyed around with the test itself to see it fail if I had an assertion that was no good (I changed the argument to that assertion to false, basically).

The TDD lesson here is: I've set myself a case - "It needs a PHP container for running tests" - and only resolved that case before pausing and assessing the situation. I also didn't move any further forward than I needed to in order to address that case.

It returns a 200-OK from requests to /gdayWorld.html

Next I need an Nginx container running, and serving a test file. Well: I need the test for that.

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

/** @testdox Tests Nginx is serving html */
class NginxTest extends TestCase
{
    /** @testdox It serves gdayWorld.html as 200-OK */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.backend/']);

        $response = $client->get('gdayWorld.html');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

Once again, I'll largely just list the added files here, and link through to source control:

adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$ tree -a --dirsfirst -I "vendor|.git|.idea"
.
├── docker
│   ├── nginx
│   │   ├── root_home
│   │   │   ├── .gitignore
│   │   │   ├── .profile
│   │   │   └── .vimrc
│   │   ├── sites
│   │   │   └── default.conf
│   │   ├── Dockerfile
│   │   └── nginx.conf
│   └── [...]
├── public
│   └── gdayWorld.html
├── test
│   └── php
│       ├── NginxTest.php
│       └── [...]
├── var
│   └── log
│       └── nginx
│           ├── .gitkeep
│           ├── access.log
│           └── error.log
└── [...]

12 directories, 25 files
adam@DESKTOP-QV1A45U:/mnt/c/src/cfml-in-docker$

The contents of gdayWorld.html should be obvious from the test, but it's just:

G'day world!

OK so that was all stuff I've done a few times before now. Next… Lucee

It has a Lucee container which serves CFML code via its internal web server

I'm kinda guessing at this next case. I'm gonna need to have a Lucee container: this is a cert. And I recollect Adobe's ColdFusion CFML engine ships with a wee stubbed web server for dev use. I can't recall if Lucee does too; I'm assuming it does. You can see how prepared I am for all this: I've not even RTFMed about the Lucee Docker image on DockerHub yet (I did at least make sure there was one though ;-). The idea is that there's a two-step here: getting the Lucee container up and doing "something", and after that, wiring it through from Nginx. But that's a separate case.

Right so this is all new to me, so I'll actually list the files I've created. First the test:

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

/** @testdox Tests Lucee is serving cfml */
class LuceeTest extends TestCase
{
    /** @testdox It serves gdayWorld.cfm as 200-OK on Lucee's internal web server */
    public function testReturns200OK()
    {
        $client = new Client(['base_uri' => 'http://cfml-in-docker.lucee:8888/']);

        $response = $client->get('gdayWorld.cfm');

        $this->assertEquals(200, $response->getStatusCode());
        $content = $response->getBody()->getContents();
        $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
    }
}

It's the same as the HTML one except I'm hitting a different host, and on port 8888 (I have now done that RTFM I mentioned, and found the port Lucee serves on by default).

The Dockerfile is simple:

FROM lucee/lucee:5.3

RUN apt-get update
RUN apt-get install vim --yes

COPY ./root_home/.bashrc /root/.bashrc
COPY ./root_home/.vimrc /root/.vimrc

WORKDIR  /var/www

EXPOSE 8888

It's more complex than it needs to be as I always like vi installed in my containers because I inevitably need it (this is prescient as it turns out: I definitely did need it).

And the relevant bit from docker-compose.yml:

lucee:
    build:
        context: ./lucee
    volumes:
        - ../public:/var/www
        - ../var/log/tomcat:/usr/local/tomcat/log
        - ../var/log/lucee:/opt/lucee/web/logs
        - ./lucee/root_home:/root
    ports:
        - "8888:8888"
    stdin_open: true
    tty: true
    networks:
        backend:
            aliases:
                - cfml-in-docker.lucee

That's mostly just me mapping logging directories back to my host, for convenience's sake.

Currently my test file - gdayWorld.cfm - is just plonked in the web root, which is not where one would normally put CFML files (except the application entry point file I mean), but it'll do for now:

<cfset message="G'day world!">
<cfoutput>#message#</cfoutput>

And that's it. After rebuilding my containers and running the tests, everything passes now:

root@a034afe670d4:/usr/share/cfml-in-docker# composer test
> vendor/bin/phpunit --testdox
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Lucee is serving cfml
It serves gdayWorld.cfm as 200-OK on Lucee's internal web server

Tests Nginx is serving html
It serves gdayWorld.html as 200-OK

Tests PHPUnit install
it self-tests PHPUnit

Time: 00:00.028, Memory: 6.00 MB

OK (3 tests, 5 assertions)
root@a034afe670d4:/usr/share/cfml-in-docker#

It proxies .cfm requests from Nginx to Lucee

OK so Lucee is working. Painless. Now I need to tell Nginx about it. I have NFI how to do that… I hope Google and/or Stack Overflow does.

After some googling, my recollection that some sort of connector was needed to run between the web server and the application server seems outdated: all I need to do is use proxy_pass from Nginx to the address Lucee has configured Tomcat to listen on (Lucee runs atop Tomcat: it's basically a Java servlet). I can never remember the syntax for this, but fortunately Nando Breiter has documented it in his article "Using Nginx With ColdFusion or Lucee". It's also reminded me of a few other cases I need to test for, but first the baseline. Well: actually first the test:

/** @testdox It proxies a CFM request to Lucee */
public function testCfmReturns200OK()
{
    $client = new Client(['base_uri' => 'http://cfml-in-docker.frontend/']);

    $response = $client->get('gdayWorld.cfm');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertMatchesRegularExpression("/^\\s*G'day world!\\s*$/", $content);
}

This is the same as the previous one except I'm using the Nginx website's host, and on port 80. Also note I've changed the name of the host to be cfml-in-docker.frontend not cfml-in-docker.backend. This is cosmetic, and just to distinguish between references to stuff happening on the network within the containers (called backend), and addresses browsed from the public-facing websites.

The implementation for this case is simply this, in the website config default.conf:

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name;
}

Adding this and restarting Nginx has that test passing, as well as not interfering with any non-CFML requests (ie: the other Nginx tests still pass).

This config has some shortfalls though. Well I say "shortfalls". Basically I mean it doesn't work properly for a real-world situation. More test cases…

It passes query values to Lucee

The test demonstrates this:

/** @testdox It passes query values to Lucee */
public function testCfmReceivesQueryParameters()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('queryTest.cfm?testParam=expectedValue');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("expectedValue", trim($content));
}

and queryTest.cfm is just this:

<cfoutput>#URL.testParam#</cfoutput>

If I run this test I get a failure, because the 500 INTERNAL SERVER ERROR response from Lucee doesn't match the expected 200. This happens because Lucee can't see the param value: Nginx isn't passing it on. Easily fixed:

location ~ \.(?:cfm|cfc) {
    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}
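For what it's worth, that proxy_pass expression is just string concatenation of Nginx variables: $fastcgi_script_name (the script part of the URI), $is_args (a literal "?" when there's a query string, else empty), and $args (the raw query string). Here's a Python sketch of how the upstream URI gets composed (the helper name is mine, and $fastcgi_script_name's real derivation is a bit more subtle than just "the path"):

```python
# Sketch of the upstream URI that proxy_pass builds from Nginx variables.
# Hypothetical helper; not Nginx code.
from urllib.parse import urlsplit

def upstream_uri(request_uri, upstream="http://cfml-in-docker.lucee:8888"):
    parts = urlsplit(request_uri)
    script_name = parts.path                # stands in for $fastcgi_script_name
    is_args = "?" if parts.query else ""    # $is_args: "?" or empty
    args = parts.query                      # $args: the raw query string
    return f"{upstream}{script_name}{is_args}{args}"

# With a query string, it's forwarded intact...
assert upstream_uri("/queryTest.cfm?testParam=expectedValue") == \
    "http://cfml-in-docker.lucee:8888/queryTest.cfm?testParam=expectedValue"

# ...and without one, no stray "?" is appended.
assert upstream_uri("/gdayWorld.cfm") == \
    "http://cfml-in-docker.lucee:8888/gdayWorld.cfm"
```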

It passes the upstream remote address to Lucee

As it currently stands, Lucee will be receiving all requests as if they came from Nginx, rather than from whoever actually requested them. This is the nature of proxying, but we can work around it. First, the test to set expectations:

/** @testdox It passes the upstream remote address to Lucee */
public function testLuceeReceivesCorrectRemoteAddr()
{
    $directClient = new Client([
        'base_uri' => 'http://cfml-in-docker.lucee:8888/',
        'http_errors' => false
    ]);
    $response = $directClient->get('remoteAddrTest.cfm');
    $expectedRemoteAddr = $response->getBody()->getContents();

    $proxiedClient = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $testResponse = $proxiedClient->get('remoteAddrTest.cfm');

    $this->assertEquals(200, $testResponse->getStatusCode());
    $actualRemoteAddr = $testResponse->getBody()->getContents();
    $this->assertSame($expectedRemoteAddr, $actualRemoteAddr);
}

And remoteAddrTest.cfm is just this:

<cfoutput>#CGI.remote_addr#</cfoutput>

This is slightly more complicated than the previous tests, but only in that I can't know what the remote address is of the service running the test, because it could be "anything" (in reality inside these Docker containers, if they're brought up in the same order with the default bridging network, then it'll always be the same, but we don't want to break these tests if unrelated config should happen to change). The best way is to just check what the remote address is if we make the call directly to Lucee, and then expect that value if we make the same call via the Nginx proxy. As of now it fails because Lucee correctly sees the request as coming from the PHP container when we hit Lucee directly; but it sees the request as coming from the Nginx container when using Nginx's proxy. No surprise there. Fortunately Nando had the solution to this baked into his blog article already, so I can just copy and paste his work:

location ~ \.(?:cfm|cfc) {
    proxy_http_version  1.1;
    proxy_set_header    Connection "";
    proxy_set_header    Host                $host;
    proxy_set_header    X-Forwarded-Host    $host;
    proxy_set_header    X-Forwarded-Server  $host;
    proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;     ## CGI.REMOTE_ADDR
    proxy_set_header    X-Forwarded-Proto   $scheme;                        ## CGI.SERVER_PORT_SECURE
    proxy_set_header    X-Real-IP           $remote_addr;
    expires             epoch;

    proxy_pass  http://cfml-in-docker.lucee:8888$fastcgi_script_name$is_args$args;
}
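Of those headers, X-Forwarded-For is the one feeding CGI.remote_addr. Nginx's $proxy_add_x_forwarded_for variable appends the current client address to any X-Forwarded-For header already on the request, so a chain of proxies preserves the whole path back to the original client. A Python sketch of that documented behaviour (function name is mine):

```python
# Behaviour of Nginx's $proxy_add_x_forwarded_for, per its documented
# semantics: append $remote_addr to an incoming X-Forwarded-For header,
# comma-separated, or start a new one if the header is absent.

def proxy_add_x_forwarded_for(incoming_xff, remote_addr):
    if incoming_xff:
        return f"{incoming_xff}, {remote_addr}"
    return remote_addr

# First (only) proxy in the chain: the header starts with the real client IP.
assert proxy_add_x_forwarded_for(None, "172.18.0.2") == "172.18.0.2"

# A second proxy would append its own upstream's address, preserving the chain.
assert proxy_add_x_forwarded_for("172.18.0.2", "172.18.0.3") == "172.18.0.2, 172.18.0.3"
```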

And if I restart Nginx: all good. One more issue to deal with…

It passes URL path_info to Lucee correctly

Something too few people know about is that there's an optional part of a URL between the script name and the query string: path info. An example: http://example.com/script/name/path/document.html/additional/path/info?queryParam=paramValue. That path has nothing to do with the script to be executed or where it's located; it's just… some extra pathing information for the script to do something with. It's seldom used, but it's part of the CGI spec (RFC-3875, section 4.1.5), which says this:

The PATH_INFO variable specifies a path to be interpreted by the CGI script. It identifies the resource or sub-resource to be returned by the CGI script, and is derived from the portion of the URI path hierarchy following the part that identifies the script itself.
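Mechanically, that means carving the URL path into a script part and a trailing path-info part. A rough Python sketch (real servers decide where the script ends by checking the filesystem; here I just cheat with file extensions, and the function is mine):

```python
# Rough sketch of deriving PATH_INFO: everything in the URL path after the
# part that identifies the script itself. Real servers resolve the script
# against the filesystem; this toy version spots a "known" extension.

def split_path(url_path, script_extensions=(".cfm", ".html")):
    segments = url_path.split("/")
    for i, segment in enumerate(segments):
        if segment.endswith(script_extensions):
            script_name = "/".join(segments[:i + 1])
            path_info = "/" + "/".join(segments[i + 1:]) if i + 1 < len(segments) else ""
            return script_name, path_info
    return url_path, ""

# The example URL from above splits like so:
assert split_path("/script/name/path/document.html/additional/path/info") == \
    ("/script/name/path/document.html", "/additional/path/info")

# A request with no trailing segments has empty path info.
assert split_path("/gdayWorld.cfm") == ("/gdayWorld.cfm", "")
```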

Anyway, from what I could see of my Nginx config, I suspected that we weren't passing that on to Lucee, so its CGI.path_info value would be blank. A test for this is easy, and much the same as the earlier ones:

/** @testdox It passes URL path_info to Lucee correctly */
public function testLuceeReceivesPathInfo()
{
    $client = new Client([
        'base_uri' => 'http://cfml-in-docker.frontend/',
        'http_errors' => false
    ]);

    $response = $client->get('pathInfoTest.cfm/additional/path/info/');

    $this->assertEquals(200, $response->getStatusCode());
    $content = $response->getBody()->getContents();
    $this->assertSame("/additional/path/info/", trim($content));
}

And pathInfoTest.cfm is similarly familiar:

<cfoutput>#CGI.path_info#</cfoutput>

And as I predicted (although as we'll see below, not for the reasons I thought!) the test errors:

> vendor/bin/phpunit --testdox '--filter=testLuceeReceivesPathInfo'
PHPUnit 9.5.4 by Sebastian Bergmann and contributors.

Tests Nginx proxies CFML requests to Lucee
It passes URL path_info to Lucee correctly
  
   Failed asserting that 404 matches expected 200.
  
   /usr/share/cfml-in-docker/test/php/NginxProxyToLuceeTest.php:71
  

Time: 00:00.090, Memory: 8.00 MB


FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
Script vendor/bin/phpunit --testdox handling the test event returned with error code 1
root@29840662fdf9:/usr/share/cfml-in-docker#

At this point I disappeared down a rabbit hole of irritation, as detailed in article "Repro for Lucee weirdness". There are two bottom lines to this:

  1. For reasons best known to [someone other than me], Lucee only handles path_info on requests to index.cfm, not on requests to any other .cfm file! This can be demonstrated by renaming pathInfoTest.cfm to index.cfm and calling that instead.
  2. Actually Nginx already handles it correctly anyhow. In that the value is passed on already, and I don't need to do anything extra to make it work (as far as Nginx is concerned, anyhow).

I can fix the situation for pathInfoTest.cfm if I hack Lucee's web.xml file (this is down at line 4643):

<servlet-mapping>
    <servlet-name>CFMLServlet</servlet-name>
    <url-pattern>*.cfm</url-pattern>
    <url-pattern>*.cfml</url-pattern>
    <url-pattern>*.cfc</url-pattern>
    <url-pattern>/index.cfm/*</url-pattern>
    <url-pattern>/index.cfc/*</url-pattern>
    <url-pattern>/index.cfml/*</url-pattern>
</servlet-mapping>
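That mapping explains point 1 above: *.cfm is a Servlet extension pattern, which only matches when the whole path ends in .cfm, whereas /index.cfm/* is a prefix pattern that matches and leaves the remainder of the path as path info. A simplified Python sketch of the two matching styles (not the Servlet spec's full matching algorithm):

```python
# Simplified Servlet url-pattern matching: extension patterns ("*.cfm")
# match only when the path ends with the extension; prefix patterns
# ("/index.cfm/*") match the prefix and leave the tail as path info.

def matches(pattern, path):
    if pattern.startswith("*."):                 # extension mapping
        return path.endswith(pattern[1:])        # e.g. endswith(".cfm")
    if pattern.endswith("/*"):                   # prefix mapping
        prefix = pattern[:-2]
        return path == prefix or path.startswith(prefix + "/")
    return path == pattern                       # exact mapping

assert matches("*.cfm", "/pathInfoTest.cfm")                            # plain request: matches
assert not matches("*.cfm", "/pathInfoTest.cfm/additional/path/info/")  # path info: no match
assert matches("/index.cfm/*", "/index.cfm/additional/path/info/")      # the index.cfm special case
```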

I could slap a special mapping for it in there. But that's a daft way to deal with this. I'm going to just mark that test as "incomplete", and move on.

Thanks to Pete Freitag, Adam Tuttle, Zac Spitzer and Sean Corfield for putting me on the right direction for working out this particular "WTF, Lucee?" episode.


Speaking of "moving on", I said I'd get the code this far, but only progress onto the more CFML-oriented stuff if I still had will to live. Well Lucee has eroded that for now, so I'll get back to that part later, when I've stopped shaking my fist at the screen.

NB: this has become part of a series of articles, as things get more complicated, and require more effort on my part to achieve my end goal: Lucee/CFWheels/Docker series.

Righto.

--
Adam