Monday, 6 October 2025

Setting up a Python learning environment: Docker, pytest, and ruff

G'day:

I'm learning Python. Not because I particularly want to, but because my 14-year-old son Zachary has IT homework and I should probably be able to help him with it. I've been a web developer for decades, but Python's never been part of my stack. Time to fix that gap.

This article covers getting a Python learning environment set up from scratch: Docker container with modern tooling, pytest for testing, and ruff for code quality. The goal is to have a proper development environment where I can write code, run tests, and not have things break in stupid ways. Nothing revolutionary here, but documenting it for when I inevitably forget how Python dependency management works six months from now.

The repo's at github.com/adamcameron/learning-python (tag 3.0.2), and I'm tracking this as Jira tickets because that's how my brain works. LP-1 was the Docker setup, LP-2 was the testing and linting toolchain.

Getting Docker sorted

First job was getting a Python container running. I'm not installing Python directly on my Windows machine - everything goes in Docker. This keeps the host clean and makes it easy to blow away and rebuild when something inevitably goes wrong.

I went with uv for dependency management. It's the modern Python tooling that consolidates what used to be pip, virtualenv, and a bunch of other stuff into one fast binary. It's written in Rust, so it's actually quick, and it handles the virtual environment isolation properly.

The docker-compose.yml is straightforward:

services:
    python:
        build:
            context: ..
            dockerfile: docker/python/Dockerfile

        volumes:
            - ..:/usr/src/app
            - venv:/usr/src/app/.venv

        stdin_open: true
        tty: true

volumes:
    venv:

The key bit here is that separate volume for .venv. Without it, you get the same problem as node_modules in Node.js land: the bind mount exposes the host's directory over the top of the container's, so host and container dependencies tread on each other. Using a named volume keeps the container's dependencies isolated while still letting me edit source files on the host.

The Dockerfile handles the initial setup:

FROM astral/uv:python3.11-bookworm

RUN echo "alias ll='ls -alF'" >> ~/.bashrc
RUN echo "alias cls='clear; printf \"\033[3J\"'" >> ~/.bashrc

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]

WORKDIR /usr/src/app

ENV UV_LINK_MODE=copy

RUN \
    --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync \
    --no-install-project

ENTRYPOINT ["bash"]

Nothing fancy. The astral/uv base image already has Python and uv installed. I'm using Python 3.11 because it's stable and well-supported. The uv sync at build time installs dependencies from pyproject.toml, and that cache mount makes rebuilds faster.

The ENTRYPOINT ["bash"] keeps the container running so I can exec into it and run commands. I'm used to having PHP-FPM containers that stay up with their own service loop, and this achieves the same thing.

One thing I'm doing here differently from usual is that I am using mount to temporarily expose files to the Docker build process. In the past I would have copied pyproject.toml into the image file system. Why the change? Cos I didn't realise I could do this until I saw it in this article I googled up: "Using uv in Docker › Intermediate layers"! I'm gonna use this strategy from now on, I think…

Project configuration and initial code

Python projects use pyproject.toml for configuration - it's the equivalent of package.json in Node.js or composer.json in PHP. Here's the initial setup:

[project]
name = "learning-python"
version = "0.1"
description = "And now I need to learn Python..."
readme = "README.md"
requires-python = ">=3.11"
dependencies = []

[project.scripts]
howdy = "learningpython.lp2.main:greet"

[build-system]
requires = ["uv_build>=0.8.15,<0.9.0"]
build-backend = "uv_build"

[tool.uv.build-backend]
namespace = true

The project.scripts section defines a howdy command that calls the greet function from learningpython.lp2.main. The syntax is module.path:function. This makes the function callable via uv run howdy from the command line.

The namespace = true bit tells uv to use namespace packages, which means I don't need __init__.py files everywhere. Modern Python packaging is less fussy than it used to be.

The actual code in src/learningpython/lp2/main.py is about as simple as it gets:

def greet():
    print("Hello from learning-python!")

if __name__ == "__main__":
    greet()

Nothing to it. The if __name__ == "__main__" bit means the function runs when you execute the file directly, but not when you import it as a module. Standard Python pattern.

With all this in place, I could build and run the container:

$ docker compose -f docker/docker-compose.yml up --detach
[+] Running 2/2
 ✔ Volume "learning-python_venv"  Created
 ✔ Container learning-python-python-1  Started

$ docker exec learning-python-python-1 uv run howdy
Hello from learning-python!

Right. Basic container works, simple function prints output. Time to sort out testing.

Installing pytest and development dependencies

Python separates runtime dependencies from development dependencies. Runtime deps go in the dependencies array, dev deps go in [dependency-groups]. Things like test frameworks and linters are dev dependencies - you need them for development but not for running the actual application.

To add pytest, I used uv add --dev pytest. This is the Python equivalent of composer require --dev in PHP or npm install --save-dev in Node. The --dev flag tells uv to put it in the dev dependency group rather than treating it as a runtime requirement.

I wanted to pin pytest to major version 8, so I checked PyPI (pypi.org/project/pytest/) to see what was current. As of writing it's 8.4.2. Python uses different version constraint syntax than Composer - instead of ^8.0 you write >=8.0,<9.0. More verbose but explicit.

I also wanted a file watcher like vitest has. There's pytest-watch but it hasn't been maintained since 2020 and doesn't work with modern pyproject.toml files. There's a newer alternative called pytest-watcher that handles the modern Python tooling properly.

After running uv add --dev pytest pytest-watcher, the pyproject.toml updated to include:

[dependency-groups]
dev = [
    "pytest>=8.4.2,<9",
    "pytest-watcher>=0.4.3,<0.5",
]

The uv.lock file pins the exact versions that were installed, giving reproducible builds. It's the Python equivalent of composer.lock or package-lock.json.

Writing the first test

pytest discovers test files automatically. It looks for files named test_*.py or *_test.py and runs functions in them that start with test_. No configuration needed for basic usage.

I created tests/lp2/test_main.py to test the greet() function. The test needed to verify that calling greet() outputs the expected message to stdout. pytest has a built-in fixture called capsys that captures output streams:

from learningpython.lp2.main import greet

def test_greet(capsys):
    greet()
    captured = capsys.readouterr()
    assert captured.out == "Hello from learning-python!\n"

The capsys parameter is a pytest fixture - you just add it as a function parameter and pytest provides it automatically. Calling readouterr() gives you back stdout and stderr as a named tuple. The \n at the end is because Python's print() adds a newline by default.

Running the test:

$ docker exec learning-python-python-1 uv run pytest
======================================= test session starts ========================================
platform linux -- Python 3.11.13, pytest-8.4.2, pluggy-1.6.0
rootdir: /usr/src/app
configfile: pyproject.toml
collected 1 item

tests/lp2/test_main.py .                                                                     [100%]

======================================== 1 passed in 0.01s =========================================

Green. The test found the pyproject.toml config automatically and discovered the test file without needing to tell it where to look.

For continuous testing, pytest-watcher monitors files and re-runs tests on changes:

$ docker exec learning-python-python-1 uv run ptw
[ptw] Watching directories: ['src', 'tests']
[ptw] Running: pytest
======================================= test session starts ========================================
platform linux -- Python 3.11.13, pytest-8.4.2, pluggy-1.6.0
rootdir: /usr/src/app
configfile: pyproject.toml
collected 1 item

tests/lp2/test_main.py .                                                                     [100%]

======================================== 1 passed in 0.01s =========================================

Any time I change a file in src or tests, it automatically re-runs the relevant tests. Much faster feedback loop than running tests manually each time.

Code formatting and linting with ruff

Python has a bunch of tools for code quality - black for formatting, flake8 for linting, isort for import sorting. Or you can just use ruff, which consolidates all of that into one fast tool written in Rust.

Installation was the same pattern: uv add --dev ruff. This added "ruff>=0.8.4,<0.9" to the dev dependencies.

ruff has two main commands:

  • ruff check - linting (finds unused variables, style issues, code problems)
  • ruff format - formatting (fixes indentation, spacing, line length)

Testing it out with some deliberately broken code:

$ docker exec learning-python-python-1 uvx ruff check src/learningpython/lp2/main.py
F841 Local variable `a` is assigned to but never used
 --> src/learningpython/lp2/main.py:2:8
  |
1 | def greet():
2 |        a = "wootywoo"
  |        ^
3 |        print("Hello from learning-python!")
  |
help: Remove assignment to unused variable `a`

It caught the unused variable. It also didn't complain about the 7-space indentation, because ruff check is about code issues, not formatting. That's what ruff format is for:

$ docker exec learning-python-python-1 uvx ruff format src/learningpython/lp2/main.py
1 file reformatted

This fixed the indentation to Python's standard 4 spaces. The check command can also auto-fix some issues with --fix, similar to eslint.

I configured IntelliJ to run ruff format on save. Had to disable a conflicting AMD Adrenalin hotkey first - video driver software stealing IDE shortcuts is always fun to debug. It took about an hour to work out WTF was going on there. I really don't understand why AMD thinks its driver software needs hotkeys. Dorks.

A Python gotcha: hyphens in paths

I reorganised the code by ticket number, so I moved the erstwhile main.py to src/learningpython/lp-2/main.py. Updated the pyproject.toml entry point to match:

[project.scripts]
howdy = "learningpython.lp-2.main:greet"

This did not go well:

$ docker exec learning-python-python-1 uv run howdy
      Built learning-python @ file:///usr/src/app
Uninstalled 1 package in 0.37ms
Installed 1 package in 1ms
  File "/usr/src/app/.venv/bin/howdy", line 4
    from learningpython.lp-2.main import greet
                            ^
SyntaxError: invalid decimal literal

Python's import system doesn't support hyphens in module names. When it sees lp-2, it tries to parse it as "lp minus 2" and chokes. Module names need to be valid Python identifiers, which means letters, numbers, and underscores only.

Renaming to lp2 fixed it. No hyphens in directory names if those directories are part of the import path. You can use hyphens in filenames that you access directly (like python path/to/some-script.py), but not in anything you're importing as a module.

This caught me out because hyphens are fine in most other ecosystems. Coming from PHP and JavaScript where some-module-name is perfectly normal, Python's stricter rules take some adjustment.

Wrapping up

So that's the development environment sorted. Docker container running Python 3.11 with uv for dependency management. pytest for testing with pytest-watcher for continuous test runs. ruff handling both linting and formatting. All the basics for writing Python code without things being annoying.

The final project structure looks like this:

learning-python/
├── docker/
│   ├── docker-compose.yml
│   └── python/
│       └── Dockerfile
├── src/
│   └── learningpython/
│       └── lp2/
│           └── main.py
├── tests/
│   └── lp2/
│       └── test_main.py
├── pyproject.toml
└── uv.lock

Everything's on GitHub at github.com/adamcameron/learning-python (tag 3.0.2).

Now I can actually start learning Python instead of fighting with tooling. Which is the point.

Righto.

--
Adam

Saturday, 4 October 2025

Violating Blogger's community guidelines. Apparently.

G'day:

Earlier this evening I published TypeScript decorators: not actually decorators. And about 5min after it went live, it vanished. Weird. Looking in the back-end of Blogger, I see this warning:

This post was unpublished because it violates Blogger's community guidelines. To republish, please update the content to adhere to the guidelines.

What? Seriously?

Looking through my junk folder in my email client, I had an email thus:

Hello,

As you may know, our Community Guidelines (https://blogger.com/go/contentpolicy) describe the boundaries for what we allow – and don't allow – on Blogger. Your post titled 'TypeScript decorators: not actually decorators' was flagged to us for review. We have determined that it violates our guidelines and have unpublished the URL https://blog.adamcameron.me/2025/10/typescript-decorators-not-actually.html, making it unavailable to blog readers.

If you are interested in republishing the post, please update the content to adhere to Blogger's Community Guidelines. Once the content has been updated, you may republish it at [URL removed]. This will trigger a review of the post.

You may have the option to pursue your claims in court. If you have legal questions or wish to examine legal options that may be available to you, you may want to consult your own legal counsel.

For more information, please review the following resources:

Sincerely,

The Blogger Team

"OK," I thought. "I'll play yer silly game", knowing full-well I had done nothing to violate any sane T&Cs / guidelines. You can review the guidance yerself: obvs there's nothing in the article that comes anywhere near close to butting up against any of those rules.

I did make a coupla edits and resubmitted it:

  • Updated text in the first para to read "what the heck". You can imagine what it said before the edit. Not the only instance of that word in this blog, as one can imagine.
  • I was using my son's name instead of "Jed Dough". I have used Z's name a lot in the past, so can't see it was that.
  • I'd used a very cliched common password as sample data; I replaced it with tough_to_guess.
  • I removed most of one para. The para starting "Worth learning?" went on to explain how some noted TypeScript frameworks used decorators heavily. Why did I remove this? Well: Claudia wrote it, and this came from her knowledge not my own. I didn't know those frameworks even existed, let alone used decorators. I admonished her for using original "research", but I also went through and verified that she was correct in what she was saying. To me this was harmless and useful info: but it wasn't my own work, so I thought I'd get rid. I had included a note there that it was her and not me. There's nothing in the T&Cs that says one cannot use AI to help write these articles, but I know people are getting a bit pearl-clutchy about the whole thing ATM, so figured it might be that. Daft though, given it was an admission it was AI-written; rather than trying to shadily pass AI work off as my own. Which, if you read this blog, I don't do. I always say when she's helped me draft things. And I always read what she's done and tweak where necessary anyhow. It's my work.

And that was it. But maybe 30min later I got another email from them:

Hello,

We have re-evaluated the post titled 'TypeScript decorators: not actually decorators' against our Community Guidelines (https://blogger.com/go/contentpolicy). Upon review, the post has been reinstated. You may access the post at https://blog.adamcameron.me/2025/10/typescript-decorators-not-actually.html.

Sincerely,
The Blogger Team

Cool. No harm done, but I'd really like to know what triggered it. Of course they can't tell me as that would be leaking info that bad-actors could then use to circumvent their system. I get that. And it's better to err on the side of caution in these matters I guess.

Anyway, that was a thing.

Righto.

--
Adam (who wrote every word of this one. How bloody tedious)

TypeScript decorators: not actually decorators

G'day:

I've been working through TypeScript classes, and when I got to decorators I hit the @ syntax and thought "hang on, what the heck is all this doing inside the class being decorated? The class shouldn't know it's being decorated. Fundamentally it shouldn't know."

Turns out TypeScript decorators have bugger all to do with the Gang of Four decorator pattern. They're not about wrapping objects at runtime to extend behavior. They're metaprogramming annotations - more like Java's @annotations or C#'s [attributes] - that modify class declarations at design time using the @ syntax.

The terminology collision is unfortunate. Python had the same debate back in PEP 318 - people pointed out that "decorator" was already taken by a well-known design pattern, but they went with it anyway because the syntax visually "decorates" the function definition. TypeScript followed Python's lead: borrowed the @ syntax, borrowed the confusing name, and now we're stuck with it.

So this isn't about the decorator pattern at all. This is about TypeScript's metaprogramming features that happen to be called decorators for historical reasons that made sense to someone, somewhere.

What TypeScript decorators actually do

A decorator in TypeScript is a function that takes a target (the thing being decorated - a class, method, property, whatever) and a context object, and optionally returns a replacement. They execute at class definition time, not at runtime.

The simplest example is a getter decorator:

function obscurer(
  originalMethod: (this: PassPhrase) => string,
  context: ClassGetterDecoratorContext
) {
  void context
  function replacementMethod(this: PassPhrase) {
    const duplicateOfThis: PassPhrase = Object.assign(
      Object.create(Object.getPrototypeOf(this) as PassPhrase),
      this,
      { _text: this._text.replace(/./g, '*') }
    ) as PassPhrase

    return originalMethod.call(duplicateOfThis)
  }

  return replacementMethod
}

export class PassPhrase {
  constructor(protected _text: string) {}

  get plainText(): string {
    return this._text
  }

  @obscurer
  get obscuredText(): string {
    return this._text
  }
}

(from accessor.ts)

The decorator function receives the original getter and returns a replacement that creates a modified copy of this, replaces the _text property with asterisks, then calls the original getter with that modified context. The original instance is untouched - we're not mutating state, we're intercepting the call and providing different data to work with. The @obscurer syntax applies the decorator to the getter.

The test shows this in action:

it('original text remains unchanged', () => {
  const phrase = new PassPhrase('tough_to_guess')
  expect(phrase.obscuredText).toBe('**************')
  expect(phrase.plainText).toBe('tough_to_guess')
})

(from accessor.test.ts)

The obscuredText getter returns asterisks, the plainText getter returns the original value. The decorator wraps one getter without affecting the other or mutating the underlying _text property.

Method decorators and decorator factories

Method decorators work the same way as getter decorators, except they handle methods with actual parameters. More interesting is the decorator factory pattern - a function that returns a decorator, allowing runtime configuration.

Here's an authentication service with logging:

interface Logger {
  log(message: string): void
}

const defaultLogger: Logger = console

export class AuthenticationService {
  constructor(private directoryServiceAdapter: DirectoryServiceAdapter) {}

  @logAuth()
  authenticate(userName: string, password: string): boolean {
    const result: boolean = this.directoryServiceAdapter.authenticate(
      userName,
      password
    )
    if (!result) {
      throw new AuthenticationException(
        `Authentication failed for user ${userName}`
      )
    }
    return result
  }
}

function logAuth(logger: Logger = defaultLogger) {
  return function (
    originalMethod: (
      this: AuthenticationService,
      userName: string,
      password: string
    ) => boolean,
    context: ClassMethodDecoratorContext<
      AuthenticationService,
      (userName: string, password: string) => boolean
    >
  ) {
    void context
    function replacementMethod(
      this: AuthenticationService,
      userName: string,
      password: string
    ) {
      logger.log(`Authenticating user ${userName}`)
      try {
        const result = originalMethod.call(this, userName, password)
        logger.log(`User ${userName} authenticated successfully`)
        return result
      } catch (e) {
        logger.log(`Authentication failed for user ${userName}: ${e}`)
        throw e
      }
    }
    return replacementMethod
  }
}

(from method.ts)

The factory function takes a logger parameter and returns the actual decorator function. The decorator wraps the method with logging: logs before calling, logs on success, logs on failure and re-throws. The @logAuth() syntax calls the factory which returns the decorator.

Worth noting: the logger has to be configured at module level because @logAuth() executes when the class is defined, not when instances are created. This means tests can't easily inject different loggers per instance - you're stuck with whatever was configured when the file loaded. It's a limitation of how decorators work, and honestly it's a bit crap for dependency injection.

Also note I'm just using the console as the logger here. It makes testing easy.
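
To make that concrete, a test for the happy path could look something like this. This is my sketch rather than code from the repo, and the stubbed DirectoryServiceAdapter is hypothetical - all it needs to do is say "yes" to the credentials:

it('logs a successful authentication', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})

  // hypothetical stub; the real adapter's interface isn't shown here
  const stubAdapter = {
    authenticate: () => true
  } as unknown as DirectoryServiceAdapter
  const service = new AuthenticationService(stubAdapter)

  service.authenticate('zachary', 'tough_to_guess')

  expect(consoleSpy).toHaveBeenCalledWith('Authenticating user zachary')
  expect(consoleSpy).toHaveBeenCalledWith('User zachary authenticated successfully')
})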

Class decorators and shared state

Class decorators can replace the entire class, including hijacking the constructor. This example is thoroughly contrived but demonstrates how decorators can inject stateful behavior that persists across all instances:

const maoriNumbers = ['tahi', 'rua', 'toru', 'wha']
let current = 0
function* generator() {
  while (current < maoriNumbers.length) {
    yield maoriNumbers[current++]
  }
  throw new Error('No more Maori numbers')
}

function maoriSequence(
  target: typeof Number,
  context: ClassDecoratorContext
) {
  void context

  return class extends target {
    _value = generator().next().value as string
  }
}

type NullableString = string | null

@maoriSequence
export class Number {
  constructor(protected _value: NullableString = null) {}

  get value(): NullableString {
    return this._value
  }
}

(from class.ts)

The class decorator returns a new class that extends the original, overriding the _value property with the next value from a generator. The generator and its state live at module scope, so they're shared across all instances of the class. Each time you create a new instance, the constructor parameter gets completely ignored and the decorator forces the next Maori number instead:

it('intercepts the constructor', () => {
  expect(new Number().value).toEqual('tahi')
  expect(new Number().value).toEqual('rua')
  expect(new Number().value).toEqual('toru')
  expect(new Number().value).toEqual('wha')
  expect(() => new Number()).toThrowError('No more Maori numbers')
})

(from class.test.ts)

First instance gets 'tahi', second gets 'rua', third gets 'toru', fourth gets 'wha', and the fifth throws an error because the generator is exhausted. The state persists across all instantiations because it's in the decorator's closure at module level.

This demonstrates that class decorators can completely hijack construction and maintain shared state, which is both powerful and horrifying. You'd never actually do this in real code - it's terrible for testing, debugging, and reasoning about behavior - but it shows the level of control decorators have over class behavior.

GitHub Copilot's code review was appropriately horrified by this. It flagged the module-level state, the generator that never resets, the constructor hijacking, and basically everything else about this approach. Fair cop - the code reviewer was absolutely right to be suspicious. This is demonstration code showing what's possible with decorators, not what you should actually do. In real code, if you find yourself maintaining stateful generators at module scope that exhaust after four calls and hijack constructors to ignore their parameters, you've gone badly wrong somewhere and need to step back and reconsider your life choices.

Auto-accessors and the accessor keyword

Auto-accessors are a newer feature that provides shorthand for creating getter/setter pairs with a private backing field. The accessor keyword does automatically what you'd normally write manually:

export class Person {
  @logCalls(defaultLogger)
  accessor firstName: string

  @logCalls(defaultLogger)
  accessor lastName: string

  constructor(firstName: string, lastName: string) {
    this.firstName = firstName
    this.lastName = lastName
  }

  getFullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

(from autoAccessors.ts)
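
If you've not met accessor before: it's roughly shorthand for the following hand-written pattern. A sketch of the idea only - the class name is made up, and the real compiler output differs in the details:

class PersonManual {
  #firstName: string = ''  // the hidden backing field `accessor` generates

  get firstName(): string {
    return this.#firstName
  }

  set firstName(value: string) {
    this.#firstName = value
  }

  constructor(firstName: string) {
    this.firstName = firstName  // goes through the setter, same as the accessor version
  }
}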

The accessor keyword creates a private backing field plus public getter and setter, similar to C# auto-properties. The decorator can then wrap both operations:

function logCalls(logger: Logger = defaultLogger) {
  return function <This, Value>(
    target: ClassAccessorDecoratorTarget<This, Value>,
    context: ClassAccessorDecoratorContext<This, Value>
  ) {
    const result: ClassAccessorDecoratorResult<This, Value> = {
      get(this: This) {
        logger.log(`[${String(context.name)}] getter called`)
        return target.get.call(this)
      },
      set(this: This, value) {
        logger.log(
          `[${String(context.name)}] setter called with value [${String(value)}]`
        )
        target.set.call(this, value)
      }
    }

    return result
  }
}

(from autoAccessors.ts)

The target provides access to the original get and set methods, and the decorator returns a result object with replacement implementations. The getter wraps the original with logging before calling it, and the setter does the same.

Testing shows both operations getting logged:

it('should log the setters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  new Person('Jed', 'Dough')

  expect(consoleSpy).toHaveBeenCalledWith(
    '[firstName] setter called with value [Jed]'
  )
  expect(consoleSpy).toHaveBeenCalledWith(
    '[lastName] setter called with value [Dough]'
  )
})

it('should log the getters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  const person = new Person('Jed', 'Dough')

  expect(person.getFullName()).toBe('Jed Dough')
  expect(consoleSpy).toHaveBeenCalledWith('[firstName] getter called')
  expect(consoleSpy).toHaveBeenCalledWith('[lastName] getter called')
})

(from autoAccessors.test.ts)

The constructor assignments trigger the setters, which get logged. Later when getFullName() accesses the properties, the getters are logged.

Auto-accessors are actually quite practical compared to the other decorator types. They provide a clean way to add cross-cutting concerns like logging, validation, or change tracking to properties without cluttering the class with boilerplate getter/setter implementations.
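
For instance, a validation decorator in the same shape - my own sketch, not something from the repo - might look like:

function nonEmpty<This>(
  target: ClassAccessorDecoratorTarget<This, string>,
  context: ClassAccessorDecoratorContext<This, string>
) {
  const result: ClassAccessorDecoratorResult<This, string> = {
    set(this: This, value) {
      if (value.trim() === '') {
        throw new Error(`${String(context.name)} cannot be empty`)
      }
      target.set.call(this, value)
    }
  }

  return result
}

Applied as @nonEmpty accessor firstName: string, the setter now rejects empty values while the generated getter is left alone - an accessor decorator's result object only needs to supply the bits it wants to replace.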

What I learned

TypeScript decorators are metaprogramming tools that modify class behavior at design time. They're useful for cross-cutting concerns like logging, validation, or instrumentation - the kinds of things that would otherwise clutter your actual business logic.

The main decorator types are:

  • Getter/setter decorators - wrap property access
  • Method decorators - wrap method calls
  • Class decorators - replace or modify entire classes
  • Auto-accessor decorators - wrap the getter/setter pairs created by the accessor keyword

Decorator factories (functions that return decorators) allow runtime configuration, though "runtime" here means "when the module loads", not "when instances are created". This makes dependency injection awkward - you're stuck with module-level state or global configuration.

The syntax is straightforward once you understand the pattern: decorator receives target and context, returns replacement (or modifies via context), job done. The tricky bit is the type signatures - making sure the decorator's signature lines up with the kind of member it's decorating, so the compiler stays happy.

But fundamentally, these aren't decorators in the design pattern sense. They're annotations that modify declarations. If you're coming from a language with proper decorators (the GoF pattern), you'll need to context-switch your brain because the @ syntax is doing something completely different here.

Worth learning? Yeah, if only because you'll see them in the wild and need to understand what they're doing.

Would I use them in my own code? Probably sparingly. Auto-accessors are legitimately useful. Method decorators for logging or metrics could work if you're comfortable with the module-level configuration limitations. Class decorators that hijack constructors and maintain shared state can absolutely get in the sea.

But to be frank: if I wanted to decorate something - in the accurate sense of that term - I'd do it properly using the design pattern, and DI.


The full code for this investigation is in my learning-typescript repository.

Righto.

--
Adam

Thursday, 2 October 2025

TypeScript mixins: poor person's composition, but with generics

G'day:

I've been working through TypeScript classes, and today I hit mixins. For those unfamiliar, mixins are a pattern for composing behavior from multiple sources - think Ruby's modules or PHP's traits. They're basically "poor person's composition" - a way to share behavior between classes when you can't (or won't) use proper dependency injection.

I think they're a terrible pattern. If I need shared behavior, I'd use actual composition - create a proper class and inject it as a dependency. But I'm not always working with my own code, and mixins do exist in the wild, so here we are.

The TypeScript mixin implementation is interesting though - it's built on generics and functions that return classes, which is quite different from the prototype-mutation approach you see in JavaScript. And despite my reservations about the pattern itself, understanding how it works turned out to be useful for understanding TypeScript's type system better.

The basic pattern

TypeScript mixins aren't about mutating prototypes at runtime (though you can do that in JavaScript). They're functions that take a class and return a new class that extends it.

For this example, I wanted a mixin that would add a flatten() method to any class - something that takes all the object's properties and concatenates their values into a single string. Not particularly useful in real code, but simple enough to demonstrate the mechanics without getting lost in business logic.

type Constructor = new (...args: any[]) => {}

function applyFlattening<TBase extends Constructor>(Base: TBase) {
  return class Flattener extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [_, value]): string => {
          return flattened + String(value)
        },
        ''
      )
    }
  }
}

(from mixins.ts)

That Constructor type is saying "anything that can be called with new and returns an object". The mixin function takes a class that matches this type and returns a new anonymous class that extends the base class with additional behavior.

You can then apply it to any class:

export class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

export const FlattenableName = applyFlattening(Name)

FlattenableName is now a class that has everything Name had plus the flatten() method. TypeScript tracks all of this at compile time, so you get proper type checking and autocomplete for both the base class members and the mixin methods.

The generics bit

The confusing part (at least initially) is this bit:

function applyFlattening<TBase extends Constructor>(Base: TBase)

Without understanding generics, this is completely opaque. The <TBase extends Constructor> is saying "this function is generic over some type TBase, which must be a constructor". The Base: TBase parameter then uses that type.

This lets TypeScript track what specific class you're mixing into. When you call applyFlattening(Name), TypeScript knows that TBase is specifically the Name class, so it can infer that the returned class has both Name's properties and methods plus the flatten() method.

Without generics, TypeScript would only know "some constructor was passed in" and couldn't give you proper type information about what the resulting class actually contains. The generic parameter preserves the type information through the composition.
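
You can see this by writing the same mixin without the generic. This compiles, but the type information stops at the function boundary (applyFlatteningUntyped is a made-up name for illustration):

function applyFlatteningUntyped(Base: Constructor) {
  return class extends Base {
    flatten(): string {
      return 'flattened, but typeless'
    }
  }
}

const UntypedName = applyFlatteningUntyped(Name)
const n = new UntypedName('Zachary', 'Lynch')
n.flatten()   // fine: TypeScript knows about the mixin method
// n.fullName // compile error: all it knows about the base is "some object"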

I hadn't covered generics properly before hitting this (it's still on my todo list), which made the mixin syntax particularly cryptic. But the core concept is straightforward once you understand that generics are about preserving type information as you transform data - in this case, transforming a class into an extended version of itself.

Using the mixed class

Once you've got the mixed class, using it is straightforward:

const flattenableName: InstanceType<typeof FlattenableName> =
  new FlattenableName('Zachary', 'Lynch')
expect(flattenableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = flattenableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

(from mixins.test.ts)

The InstanceType<typeof FlattenableName> bit is necessary because FlattenableName is a value (the constructor function), not a type. typeof FlattenableName gives you the constructor type, and InstanceType<...> extracts the type of instances that constructor creates.
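
Spelled out step by step (the type aliases here are just for illustration):

type FlattenableNameCtor = typeof FlattenableName                 // the constructor's type
type FlattenableNameInstance = InstanceType<FlattenableNameCtor>  // what `new` produces

const sameThing: FlattenableNameInstance = new FlattenableName('Zachary', 'Lynch')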

Once you've got an instance, it has both the original Name functionality (the fullName getter) and the new flatten() method. The mixin has full access to this, so it can see all the object's properties - in this case, firstName and lastName.

Constraining the mixin

The basic Constructor type accepts any class - it doesn't care what properties or methods the class has. But you can constrain mixins to only work with classes that have specific properties:

type NameConstructor = new (
  ...args: any[]
) => {
  firstName: string
  lastName: string
}

function applyNameFlattening<TBase extends NameConstructor>(Base: TBase) {
  return class NameFlattener extends Base {
    flatten(): string {
      return this.firstName + this.lastName
    }
  }
}

(from mixins.ts)

The NameConstructor type specifies that the resulting instance must have firstName and lastName properties. Now the mixin can safely access those properties directly - TypeScript knows they'll exist.

You can't constrain the constructor parameters themselves - that ...args: any[] is mandatory for mixin functions. TypeScript requires this because the mixin doesn't know what arguments the base class constructor needs. You can only constrain the instance type (the return type of the constructor).

This means a class like this won't work with the constrained mixin:

export class ShortName {
  constructor(public firstName: string) {}
}
// This won't compile:
// export const FlattenableShortName = applyNameFlattening(ShortName)
// Argument of type 'typeof ShortName' is not assignable to parameter of type 'NameConstructor'

TypeScript correctly rejects it because ShortName doesn't have a lastName property, and the mixin's flatten() method needs it.

Chaining multiple mixins

You can apply multiple mixins by chaining them - pass the result of one mixin into another:

function applyArrayifier<TBase extends Constructor>(Base: TBase) {
  return class Arrayifier extends Base {
    arrayify(): string[] {
      return Object.entries(this).reduce(
        (arrayified: string[], [_, value]): string[] => {
          return arrayified.concat(String(value).split(''))
        },
        []
      )
    }
  }
}

export const ArrayableFlattenableName = applyArrayifier(FlattenableName)

(from mixins.ts)

Now ArrayableFlattenableName has everything from Name, plus flatten() from the first mixin, plus arrayify() from the second mixin:

const transformableName: InstanceType<typeof ArrayableFlattenableName> =
  new ArrayableFlattenableName('Zachary', 'Lynch')
expect(transformableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = transformableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

const arrayifiedName: string[] = transformableName.arrayify()
expect(arrayifiedName).toEqual('ZacharyLynch'.split(''))

(from mixins.test.ts)

TypeScript correctly infers that all three sets of functionality are available on the final class. The type information flows through each composition step.

Why not just use composition?

Right, so having learned how mixins work in TypeScript, I still think they're a poor choice for most situations. If you need shared behavior, use actual composition:

class Flattener {
  flatten(obj: object): string {
    return Object.entries(obj).reduce(
      (flattened, [_, value]) => flattened + String(value),
      ''
    )
  }
}

class Name {
  constructor(
    public firstName: string,
    public lastName: string,
    private flattener: Flattener
  ) {}
  
  flatten(): string {
    return this.flattener.flatten(this)
  }
}

This is clearer about dependencies, easier to test (inject a mock Flattener), and doesn't require understanding generics or the mixin pattern. The behavior is in a separate class that can be reused anywhere, not just through inheritance chains.

Mixins make sense in languages where you genuinely can't do proper composition easily, or where the inheritance model is the primary abstraction. But TypeScript has first-class support for dependency injection and composition. Use it.

The main legitimate use case I can see for TypeScript mixins is when you're working with existing code that uses them, or when you need to add behavior to classes you don't control. Otherwise, favor composition.

The abstract class limitation

One thing you can't do with mixins is apply them to abstract classes. The pattern requires using new Base(...) to instantiate and extend the base class, but abstract classes can't be instantiated - that's their whole point.

abstract class AbstractBase {
  abstract doSomething(): void
}

// This won't work:
// const Mixed = applyMixin(AbstractBase)
// Cannot create an instance of an abstract class

The workarounds involve either making the base class concrete (which defeats the purpose of having it abstract), or mixing into a concrete subclass instead of the abstract parent. Neither is particularly satisfying.

This is a fundamental incompatibility between "can't instantiate" (abstract classes) and "must instantiate to extend" (the mixin pattern). It's another reason to prefer composition - you can absolutely inject abstract dependencies through constructor parameters without these limitations.

What I learned

TypeScript mixins are functions that take classes and return extended classes. They use generics to preserve type information through the composition, and TypeScript tracks everything at compile time so you get proper type checking.

The syntax is more complicated than it needs to be (that type Constructor = new (...args: any[]) => {} bit), and you need to understand generics before any of it makes sense. The InstanceType<typeof ClassName> dance is necessary because of how TypeScript distinguishes between constructor types and instance types.

You can constrain mixins to only work with classes that have specific properties, and you can chain multiple mixins together. But you can't use them with abstract classes, and they're generally a worse choice than proper composition for most real-world scenarios.

I learned the pattern because I'll encounter it in other people's code, not because I plan to use it myself. If I need shared behavior, I'll use dependency injection and composition like a sensible person. But now at least I understand what's happening when I see const MixedClass = applyMixin(BaseClass) in a codebase.

The full code for this investigation is in my learning-typescript repository. Thanks to Claudia for helping work through the type constraints and the abstract class limitation, and for assistance with this write-up.

Righto.

--
Adam

Tuesday, 30 September 2025

TypeScript constructor overloading: when one implementation has to handle multiple signatures

G'day:

I've been working through TypeScript classes, and today I hit constructor overloading. Coming from PHP where you can't overload constructors at all (you get one constructor, that's it), the TypeScript approach seemed straightforward enough: declare multiple signatures, implement once, job done.

Turns out the "implement once" bit is where things get interesting.

The basic pattern

TypeScript lets you declare multiple constructor signatures followed by a single implementation:

constructor()
constructor(s: string)
constructor(n: number)
constructor(s: string, n: number)
constructor(p1?: string | number, p2?: number) {
  // implementation handles all four cases
}

The first four lines are just declarations - they tell TypeScript "these are the valid ways to call this constructor". The final signature is the actual implementation that has to handle all of them.

Simple enough when you've got a no-arg constructor and a two-arg constructor - those are clearly different. But what happens when you need two different single-argument constructors, one taking a string and one taking a number?

That's where I got stuck.

The implementation signature problem

Here's what I wanted to support:

const empty = new Numeric()                    // both properties null
const justString = new Numeric('forty-two')    // asString set, asNumeric null
const justNumber = new Numeric(42)             // asNumeric set, asString null
const both = new Numeric('forty-two', 42)      // both properties set

(from constructors.test.ts)

My first attempt at the implementation looked like this:

constructor()
constructor(s: string)
constructor(s: string, n: number)
constructor(s?: string, n?: number) {
  this.asString = s ?? null
  this.asNumeric = n ?? null
}

Works fine for the no-arg, single-string, and two-arg cases. But then I needed to add the single-number constructor:

constructor(n: number)

And suddenly the compiler wasn't happy: "This overload signature is not compatible with its implementation signature."

The error pointed at the new overload, but the actual problem was in the implementation. It took me ages (and asking Claudia) to work this out. This is entirely down to me not reading the error properly, just looking at the line it was pointing at. Duh. The first parameter was typed as string (or undefined), but the new overload promised it could also be a number. The implementation couldn't deliver on what the overload signature was promising.

Why neutral parameter names matter

The fix was to change the implementation signature to accept both types:

constructor(p1?: string | number, p2?: number) {
  // ...
}

But here's where the parameter naming became important. My initial instinct was to keep using meaningful names like s and n:

constructor(s?: string | number, n?: number)

This felt wrong. When you're reading the implementation code and you see a parameter called s, you expect it to be a string. But now it might be a number. The name actively misleads you about what the parameter contains.

Switching to neutral names like p1 and p2 made the implementation logic much clearer - these are just "parameter slots" that could contain different types depending on which overload was called. No assumptions about what they contain.

Runtime type checking

Once the implementation signature accepts both types, you need runtime logic to figure out which overload was actually called:

constructor(p1?: string | number, p2?: number) {
  if (typeof p1 === 'number' && p2 === undefined) {
    this.asNumeric = p1
    return
  }
  this.asString = (p1 as string) ?? null
  this.asNumeric = p2 ?? null
}

(from constructors.ts)

The first check handles the single-number case: if the first parameter is a number and there's no second parameter, we're dealing with new Numeric(42). Set asNumeric and bail out.

Everything else falls through to the default logic: treat the first parameter as a string (or absent) and the second parameter as a number (or absent). This covers the no-arg, single-string, and two-arg cases.

The type assertion (p1 as string) is necessary because TypeScript can't prove that p1 is a string at that point - we've only eliminated the case where it's definitely a number. From the compiler's perspective, it could still be string | number | undefined.

The bug I didn't notice

I had the implementation working and all my tests passing. Job done, right? Except when I submitted the PR, GitHub Copilot's review flagged this:

this.asString = (p1 as string) || null
this.asNumeric = p2 || null
The logic for handling empty strings is incorrect. An empty string ('') will be converted to null due to the || operator, but empty strings should be preserved as valid string values. Use nullish coalescing (??) instead or explicit null checks.

Copilot was absolutely right. The || operator treats all falsy values as "use the right-hand side", which includes:

  • '' (empty string)
  • 0 (zero)
  • false
  • null
  • undefined
  • NaN

So new Numeric('') would set asString to null instead of '', and new Numeric('test', 0) would set asNumeric to null instead of 0. Both are perfectly valid values that the constructor should accept.

The ?? (nullish coalescing) operator only treats null and undefined as "use the right-hand side", which is exactly what I needed:

this.asString = (p1 as string) ?? null
this.asNumeric = p2 ?? null

Now empty strings and zeros are preserved as valid values.
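
The difference in isolation, if it helps:

const s = ''
console.log(s || 'fallback')  // 'fallback' - '' is falsy, so || discards it
console.log(s ?? 'fallback')  // ''         - ?? only falls back on null/undefined

const n = 0
console.log(n || -1)          // -1 - same deal with zero
console.log(n ?? -1)          // 0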

Testing the edge cases

The fact that this bug existed meant my initial tests weren't comprehensive enough. I'd tested the basic cases but missed the edge cases where valid values happen to be falsy.

I added tests for empty strings and zeros:

it('accepts an empty string as the only argument', () => {
  const o: Numeric = new Numeric('')

  expect(o.asString).toEqual('')
  expect(o.asNumeric).toBeNull()
})

it('accepts zero as the only argument', () => {
  const o: Numeric = new Numeric(0)

  expect(o.asNumeric).toEqual(0)
  expect(o.asString).toBeNull()
})

it('accepts an empty string as the first argument', () => {
  const o: Numeric = new Numeric('', -1)

  expect(o.asString).toEqual('')
})

it('accepts zero as the second argument', () => {
  const o: Numeric = new Numeric('NOT_TESTED', 0)

  expect(o.asNumeric).toEqual(0)
})

(from constructors.test.ts)

With the original || implementation, all four of these tests failed. After switching to ??, they all passed. That's how testing is supposed to work - the tests catch the bug, you fix it, the tests confirm the fix.

Fair play to Copilot for spotting this in the PR review. It's easy to miss falsy edge cases when you're focused on getting the type signatures right.

Method overloading in general

Worth noting that constructor overloading is just a specific case of method overloading. Any method can use this same pattern of multiple signatures with one implementation:

class Example {
  doThing(): void
  doThing(s: string): void
  doThing(n: number): void
  doThing(p?: string | number): void {
    // implementation handles all cases
  }
}

The same principles apply: the implementation signature needs to be flexible enough to handle all the declared overloads, and you need runtime type checking to figure out which overload was actually called.

Constructors just happen to be where I first encountered this pattern, because that's where you often want multiple ways to initialize an object with different combinations of parameters.

What I learned

Constructor overloading in TypeScript is straightforward once you understand that the implementation signature has to be a superset of all the overload signatures. The tricky bit is when you have overloads that look similar but take different types - that's when you need union types and runtime type checking to make it work.

Using neutral parameter names in the implementation helps avoid confusion about what types you're actually dealing with. And edge case testing matters - falsy values like empty strings and zeros are valid inputs that need explicit test coverage.

The full code is in my learning-typescript repository if you want to see the complete implementation. Thanks to Claudia for helping me understand why that compilation error was pointing at the overload when the problem was in the implementation, and to GitHub Copilot for catching the || vs ?? bug in the PR review.

Righto.

--
Adam

Monday, 29 September 2025

TypeScript late static binding: parameters that aren't actually parameters

G'day:

I've been working through classes in TypeScript as part of my learning project, and today I hit static methods. Coming from PHP, one of the first questions that popped into my head was "how does late static binding work here?"

In PHP, you can do this:

class Base {
    static function create() {
        return new static();  // Creates instance of the actual called class
    }
}

class Child extends Base {}

$instance = Child::create();  // Returns a Child instance, not Base

The static keyword in new static() means "whatever class this method was actually called on", not "the class where this method is defined". It's late binding - the class is resolved at runtime based on how the method was called.

Seemed like a reasonable thing to want in TypeScript. Turns out it's possible, but the syntax is... questionable.

The TypeScript approach

Here's what I ended up with:

export class TranslatedNumber {
  constructor(
    private value: number,
    private en: string,
    private mi: string
  ) {}

  getAll(): { value: number; en: string; mi: string } {
    return {
      value: this.value,
      en: this.en,
      mi: this.mi,
    }
  }

  static fromTuple<T extends typeof TranslatedNumber>(
    this: T,
    values: [value: number, en: string, mi: string]
  ): InstanceType<T> {
    return new this(...values) as InstanceType<T>
  }
}

export class ShoutyTranslatedNumber extends TranslatedNumber {
  constructor(value: number, en: string, mi: string) {
    super(value, en.toUpperCase(), mi.toUpperCase())
  }
}

(from static.ts)

And it works - when you call ShoutyTranslatedNumber.fromTuple(), you get a ShoutyTranslatedNumber instance back, not a TranslatedNumber:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

expect(translated.getAll()).toEqual({
  value: 3,
  en: 'THREE',
  mi: 'TORU',
})

(from static.test.ts)

The late binding works. But look at that fromTuple method signature again. Specifically this bit: this: T.

Parameters that aren't parameters

When I first saw this: T in the parameter list, my immediate reaction was "okay, so I need to pass the class as the first argument?"

But the usage doesn't have any extra parameter:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

No class being passed. Just the tuple. So what the hell is this: T doing in the parameter list?

Turns out it's a TypeScript-specific construct that exists purely for the type system. It's not a runtime parameter at all - it gets completely erased during compilation. It's a type hint that tells TypeScript "remember which class this static method was called on".
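
You can see the erasure in the compiled output. Roughly - the exact shape depends on your compiler target - fromTuple comes out the other side as plain JavaScript with no first parameter at all:

static fromTuple(values) {
  return new this(...values)
}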

When you write ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru']), TypeScript infers:

  • The this inside fromTuple refers to ShoutyTranslatedNumber
  • Therefore T is typeof ShoutyTranslatedNumber
  • Therefore InstanceType<T> is ShoutyTranslatedNumber

It's clever. It works. But it's also completely bizarre if you're coming from any language where parameters are just parameters.

Why this feels wrong

The thing that bothers me about this isn't that it doesn't work - it does work fine. It's that the solution is a hack at the type system level when it should be a language feature.

TypeScript could have introduced syntax like new static() or new this() and compiled it to whatever JavaScript pattern makes it work at runtime. Instead, they've made developers express "the class this method was called on" through a phantom parameter that only exists for the type checker.

Compare this to how other languages handle it:

PHP just gives you static as a keyword. You write new static() and the compiler handles the rest.

Kotlin compiles to JavaScript too, but when you write Kotlin, you write actual Kotlin - proper classes, sealed classes, data classes, all the language features. The compiler figures out how to make it work in JavaScript. You don't write weird pseudo-parameters because "JavaScript doesn't have that feature".

TypeScript has positioned itself as "JavaScript with types" rather than "a language that compiles to JavaScript", which means it's constantly constrained by JavaScript's limitations instead of abstracting them away. When JavaScript doesn't have a concept, TypeScript makes you do the workaround instead of the compiler doing it.

It's functional, but it's not elegant. And it's definitely not intuitive.

Does it matter?

In practice? Not really. Once you know the pattern, it's straightforward enough to use. The this: T parameter becomes just another TypeScript idiom you memorise and move on.

But it does highlight a fundamental tension in TypeScript's design philosophy. The language is scared to be a proper language with its own features and syntax. Everything has to map cleanly back to JavaScript, even when that makes the developer experience worse.

I found this Stack Overflow answer while researching this, which explains the mechanics well enough, but doesn't really acknowledge how weird the solution is. It's all type theory without much "here's why the language works this way".

For now, I've got late static binding working in TypeScript. It required some generics gymnastics and a phantom parameter, but it does what I need. I'll probably dig deeper into generics in a future ticket - there's clearly more to understand there, and I've not worked with generics in any language before, so that'll be interesting.

The code for this is in my learning-typescript repository if you want to see the full implementation. Thanks to Claudia for helping me understand what the hell this: T was actually doing and for assistance with this write-up.

Righto.

--
Adam

Saturday, 27 September 2025

JavaScript Symbols: when learning one thing teaches you fifteen others

G'day:

This is one of those "I thought I was learning one thing but ended up discovering fifteen other weird JavaScript behaviors" situations that seems to happen every time I try to understand a JavaScript feature properly.

I was working through my TypeScript learning project, specifically tackling symbols (TS / JS) as part of understanding primitive types. Seemed straightforward enough - symbols are unique primitive values, used for creating "private" object properties and implementing well-known protocols. Easy, right?

Wrong. What started as "symbols are just unique identifiers" quickly turned into a masterclass in JavaScript's most bizarre type coercion behaviors, ESLint's opinions about legitimate code patterns, and why semicolons sometimes matter more than you think.

The basics (that aren't actually basic)

Symbols are primitive values that are guaranteed to be unique:

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false - always unique

Except when they're not unique, because Symbol.for() maintains a global registry:

const s1 = Symbol.for('my-key');
const s2 = Symbol.for('my-key');
console.log(s1 === s2); // true - same symbol from registry

Fair enough. And you can't call Symbol as a constructor (unlike Boolean, Number, and String, which you can):

const sym = new Symbol(); // TypeError: Symbol is not a constructor

This seemed like a reasonable safety feature until I tried to test it and discovered that TypeScript will happily let you write this nonsense, but ESLint immediately starts complaining about the any casting required to make it "work".
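
A test along these lines shows the shape of the problem (my sketch; the cast is exactly the sort of thing the linter objects to):

it('cannot be constructed', () => {
  // lie to the compiler so the call site compiles at all
  const SymbolAsConstructor = Symbol as unknown as new () => symbol
  expect(() => new SymbolAsConstructor()).toThrow(TypeError)
})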

Where things get properly weird

The real fun starts when you encounter the well-known symbols - particularly Symbol.toPrimitive. This lets you control how objects get converted to primitive values, which sounds useful until you actually try to use it.

Here's a class that implements custom primitive conversion:

export class SomeClass {
  [Symbol.toPrimitive](hint: string) {
    if (hint === 'number') {
      return 42;
    }
    if (hint === 'string') {
      return 'forty-two';
    }
    return 'default';
  }
}

(from symbols.ts)

Now, which conversion do you think obj + '' would trigger? If you guessed "string", because you're concatenating with a string, you'd be wrong. It actually triggers the "default" hint because JavaScript's + operator is fundamentally broken.

The + operator with mixed types calls toPrimitive with hint "default", not "string". JavaScript has to decide whether this is addition or concatenation before converting the operands, so it plays it safe with the default hint. Only explicit string conversion like String(obj) or template literals get the string hint.
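To make the hints concrete, here's that class under each kind of conversion (the comments follow directly from the hint logic above):

const obj = new SomeClass()

console.log(Number(obj))  // 42 - explicit numeric conversion passes the "number" hint
console.log(String(obj))  // 'forty-two' - explicit string conversion passes "string"
console.log(`${obj}`)     // 'forty-two' - template literals pass "string" too
console.log(obj + '')     // 'default' - binary + only ever passes "default"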

This is the kind of language design decision that makes you question whether the people who created JavaScript have ever actually used JavaScript.

ESLint vs. reality

Speaking of questionable decisions, try writing the template literal version:

expect(`${obj}`).toBe('forty-two');

ESLint immediately complains: "Invalid type of template literal expression". It sees a custom class being used in string interpolation and assumes you've made a mistake, despite this being exactly what Symbol.toPrimitive is designed for.

You end up with this choice:

  1. Suppress the ESLint rule for legitimate symbol behavior
  2. Use String(obj) explicitly (which actually works better anyway)
  3. Cast to any and deal with ESLint complaining about that instead

Modern tooling is supposedly designed to help us write better code, but it turns out "better" doesn't include using JavaScript's actual primitive conversion protocols.
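For the record, option 1 is a one-liner. The "Invalid type of template literal expression" message comes from @typescript-eslint/restrict-template-expressions, so the suppression is:

// eslint-disable-next-line @typescript-eslint/restrict-template-expressions
expect(`${obj}`).toBe('forty-two')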

Symbols as "secret" properties

The privacy model for symbols is... interesting. They're hidden from normal enumeration but completely discoverable if you know where to look:

const secret1 = Symbol('secret1');
const secret2 = Symbol('secret2');

const obj = {
  publicProp: 'visible',
  [secret1]: 'hidden',
  [secret2]: 'also hidden'
};

console.log(Object.keys(obj));                    // ['publicProp']
console.log(JSON.stringify(obj));                 // {"publicProp":"visible"}
console.log(Object.getOwnPropertySymbols(obj));   // [Symbol(secret1), Symbol(secret2)]
console.log(Reflect.ownKeys(obj));                // ['publicProp', Symbol(secret1), Symbol(secret2)]

So symbols provide privacy from accidental access, but not from intentional inspection. It's like having a door that's closed but not locked - good enough to prevent accidents, useless against anyone who actually wants to get in.

Semicolons matter (sometimes)

While implementing symbol properties, I discovered this delightful parsing ambiguity:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'
  [Symbol.toStringTag] = this.stringName  // Prettier goes mental
}

Without a semicolon after the first line, Prettier interprets this as:

private stringName: string = ('StringNameOfClass'[Symbol.toStringTag] = this.stringName)

Because you can totally set properties on string literals in JavaScript (even though it's completely pointless), the parser thinks you're doing property access and assignment chaining.

The semicolon makes it unambiguous, and impressively, Prettier is smart enough to recognize that this particular semicolon is semantically significant and doesn't remove it like it normally would.
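So the working version is just the original code plus one load-bearing semicolon:

export class SomeClass {
  private stringName: string = 'StringNameOfClass';
  [Symbol.toStringTag] = this.stringName // now unambiguously its own class field
}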

Testing arrays vs. testing values

Completely unrelated to symbols, but I learned that Vitest's toBe() and toEqual() are different beasts:

expect(Object.keys(obj)).toBe(['publicProp']);     // Fails - different array objects
expect(Object.keys(obj)).toEqual(['publicProp']);  // Passes - same contents

toBe() uses reference equality (like Object.is()), so even arrays with identical contents are different objects. toEqual() does deep equality comparison. This seems obvious in hindsight, but when you're in the middle of testing symbol enumeration behavior, it's easy to forget that arrays are objects too.

The real lesson

I set out to learn about symbols and ended up with a tour of JavaScript's most questionable design decisions:

  • Type coercion that doesn't work the way anyone would expect
  • Operators that behave differently based on hints that don't correspond to actual usage
  • Tooling that warns against legitimate language features
  • Parsing ambiguities that require strategic semicolon placement
  • Privacy models that aren't actually private

This is exactly why "learn by doing" beats "read the documentation" every time. The docs would never tell you about the ESLint conflicts, the semicolon parsing gotcha, or the + operator's bizarre hint behavior. You only discover this stuff when you're actually writing code and things don't work the way they should.

The symbols themselves are fine - they do what they're supposed to do. It's everything else around them that's… erm… laden with interesting design-decision "opportunities". [Cough].

The full code for this investigation is available in my learning-typescript repository if you want to see the gory details. Thanks to Claudia for helping debug the type coercion weirdness and for assistance with this write-up. Also props to GitHub Copilot for pointing out that I had three functions doing the same thing - sometimes the robots are right.

Righto.

--
Adam

Thursday, 25 September 2025

TypeScript namespaces: when the docs say one thing and ESLint says another

G'day:

This is one of those "the documentation says one thing, the tooling says another, what the hell am I actually supposed to do?" situations that seems to crop up constantly in modern JavaScript tooling.

I was working through TypeScript enums as part of my learning project, and I wanted to add methods to an enum - you know, the kind of thing you can do with PHP 8 enums where you can have both the enum values and associated behavior in the same construct. Seemed like a reasonable thing to want to do.

TypeScript enums don't support methods directly, but some digging around Stack Overflow led me to namespace merging as a solution. Fair enough - except as soon as I implemented it, ESLint started having a proper whinge about using namespaces at all.

Cue an hour of trying to figure out whether I was doing something fundamentally wrong, or whether the tooling ecosystem just hasn't caught up with legitimate use cases. Turns out it's a bit of both.

The contradiction

Here's what the official TypeScript documentation says about namespaces:

A note about terminology: It's important to note that in TypeScript 1.5, the nomenclature has changed. "Internal modules" are now "namespaces". "External modules" are now simply "modules", as to align with ECMAScript 2015's terminology, (namely that module X { is equivalent to the now-preferred namespace X {).

Note that "now-preferred" bit. Sounds encouraging, right?

And here's what the ESLint TypeScript rules say:

TypeScript historically allowed a form of code organization called "custom modules" (module Example {}), later renamed to "namespaces" (namespace Example). Namespaces are an outdated way to organize TypeScript code. ES2015 module syntax is now preferred (import/export).

So which is it? Are namespaces preferred, or are they outdated?

The answer, as usual with JavaScript tooling, is "it depends, and the documentation is misleading".

The TypeScript docs were written when they renamed the syntax from module to namespace - the "now-preferred" referred to using the namespace keyword instead of the old module keyword. It wasn't saying namespaces were preferred over ES modules; it was just clarifying the syntax change within the namespace feature itself.

The ESLint docs reflect current best practices: ES2015 modules (import/export) are indeed the standard way to organize code now. Namespaces are generally legacy for most use cases.

But "most use cases" isn't "all use cases". And this is where things get interesting.

The legitimate use case: enum methods

What I wanted to do was add a method to a TypeScript enum, similar to what you can do in PHP:

// What I wanted (conceptually)
enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
  
  // This doesn't work in TypeScript
  static fromValue(value: string): MaoriNumber {
    // ...
  }
}

The namespace merging approach lets you achieve this by declaring an enum and then a namespace with the same name:

// src/lt-15/namespaces.ts

export enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
}

// eslint-disable-next-line @typescript-eslint/no-namespace
export namespace MaoriNumber {
  const enumKeysOnly = Object.keys(MaoriNumber).filter(
    (key) =>
      typeof MaoriNumber[key as keyof typeof MaoriNumber] !== 'function'
  )

  export function fromValue(value: string): MaoriNumber {
    const valueAsMaoriNumber: MaoriNumber = value as MaoriNumber
    const index = Object.values(MaoriNumber).indexOf(valueAsMaoriNumber);
    if (index === -1) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`);
    }
    const elementName: string = enumKeysOnly[index];
    const typedElementName = elementName as keyof typeof MaoriNumber;

    return MaoriNumber[typedElementName] as MaoriNumber;
  }
}

This gives you exactly what you want: MaoriNumber.Tahi for enum access and MaoriNumber.fromValue() for the method, all properly typed.

The // eslint-disable-next-line comment acknowledges that yes, I know namespaces are generally discouraged, but this is a specific case where they're the right tool for the job.

Why the complexity in fromValue?

You might wonder why that fromValue function is doing so much filtering and type casting. It's because of the namespace merging itself.

When you merge an enum with a namespace, TypeScript sees MaoriNumber as containing both the enum values and the functions. So Object.keys(MaoriNumber) returns:

['Tahi', 'Rua', 'Toru', 'Wha', 'fromValue']

And keyof typeof MaoriNumber becomes:

"Tahi" | "Rua" | "Toru" | "Wha" | "fromValue"

The filtering step removes the function keys so we only work with the actual enum values. The type assertions handle the fact that TypeScript can't statically analyze that our runtime filtering has eliminated the function possibility.
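Spelled out as an illustrative fragment (as if written inside the namespace):

const elementName: string = enumKeysOnly[0]
// indexing MaoriNumber with a plain string is a type error, hence:
const typedElementName = elementName as keyof typeof MaoriNumber
// even then, the compiler thinks the result could be the fromValue
// function, hence the final `as MaoriNumber` in the real code above
const value = MaoriNumber[typedElementName]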

Sidebar: that keyof typeof bit took a while for me to work out. Well, I say "work out": I just read this Q&A on Stack Overflow: What does "keyof typeof" mean in TypeScript?. I didn't find anything useful in the actual docs. I looked at it more closely in some other code I wrote today… there might be an article in that too. We'll see (I'll cross-ref it here if I write it).

Testing the approach

The tests prove that both aspects work correctly:

// tests/lt-15/namespaces.test.ts

describe('Emulating enum with method', () => {
  it('has accessible enums', () => {
    expect(MaoriNumber.Tahi).toBe('one')
  })
  
  it('has accessible methods', () => {
    expect(MaoriNumber.fromValue('two')).toEqual(MaoriNumber.Rua)
  })
  
  it("won't fetch the method as an 'enum' entry", () => {
    expect(() => {
      MaoriNumber.fromValue('fromValue')
    }).toThrowError('Value "fromValue" is not a valid MaoriNumber')
  })
  
  it("will error if the string doesn't match a MaoriNumber", () => {
    expect(() => {
      MaoriNumber.fromValue('rima')
    }).toThrowError('Value "rima" is not a valid MaoriNumber')
  })
})

The edge case testing is important here - we want to make sure the function doesn't accidentally treat its own name as a valid enum value, and that it properly handles invalid inputs.

Alternative approaches

You could achieve similar functionality with a class and static methods:

const MaoriNumberValues = {
  Tahi: 'one',
  Rua: 'two', 
  Toru: 'three',
  Wha: 'four'
} as const

type MaoriNumber = typeof MaoriNumberValues[keyof typeof MaoriNumberValues]

class MaoriNumbers {
  static readonly Tahi = MaoriNumberValues.Tahi
  static readonly Rua = MaoriNumberValues.Rua
  static readonly Toru = MaoriNumberValues.Toru
  static readonly Wha = MaoriNumberValues.Wha
  
  static fromValue(value: string): MaoriNumber {
    // implementation
  }
}

But this is more verbose, loses some of the enum benefits (like easy iteration), and doesn't give you the same clean MaoriNumber.Tahi syntax you get with the namespace approach.

So when should you use namespaces?

Based on this experience, I'd say namespace merging with enums is one of the few remaining legitimate use cases for TypeScript namespaces. The modern alternatives don't provide the same ergonomics for this specific pattern.

For everything else - code organisation, avoiding global pollution, grouping related functionality - ES modules are indeed the way forward. But when you need to add methods to enums and you want clean, intuitive syntax, namespace merging is still the right tool.

The key is being intentional about it. Use the ESLint disable comment to acknowledge that you're making a conscious choice, not just ignoring best practices out of laziness.

It's one of those situations where the general advice ("don't use namespaces") doesn't account for specific edge cases where they're still the best solution available. The tooling will complain, but sometimes the tooling is wrong.

I'll probably circle back to write up more about TypeScript enums in general - there's a fair bit more to explore there. But for now, I've got a working solution for enum methods that gives me the PHP-like behavior I was after, even if it did require wading through some contradictory documentation to get there.

Credit where it's due: Claudia (claude.ai) was instrumental in both working through the namespace merging approach and helping me understand the TypeScript type system quirks that made the implementation more complex than expected. The back-and-forth debugging of why MaoriNumber[typedElementName] was causing type errors was particularly useful - sometimes you need another perspective to spot what the compiler is actually complaining about. She also helped draft this article, which saved me a few hours of writing time. GitHub Copilot's code review feature has been surprisingly helpful too - it caught some genuine issues with error handling and performance that I'd missed during the initial implementation.

Righto.

--
Adam

Saturday, 6 September 2025

Setting up a TypeScript learning environment: Docker, TDD, and the inevitable config rabbit hole

G'day:

This is another one of those "I really should learn this properly" situations that's been nagging at me for a while now.

My approach to web development has gotten a bit stale. I'm still very much in the "app server renders markup and sends it to the browser" mindset, whereas the world has moved on to "browser runs the app and talks back to the server for server stuff". I've been dabbling with bits and pieces of modern JavaScript tooling, but it's all been very ad-hoc and surface-level. Time to get serious about it.

TypeScript seems like the sensible entry point into this brave new world. It's not like I'm allergic to types - I've been working with strongly-typed languages for decades. And from what I can see, the TypeScript ecosystem has matured to the point where it's not just hipster nonsense any more; it's become the pragmatic choice for serious JavaScript development.

I've decided it would be expedient to actually learn TypeScript properly, and I want to do it via TDD. That means I need a proper development environment set up: something that lets me write tests, run them quickly, and iterate on the code. Plus all the usual developer quality-of-life stuff like linting and formatting that stops me from having to think about trivial decisions.

The educational focus here is important. I'm not trying to build a production system; I'm trying to build a learning environment. That means optimizing for development speed and feedback loops, not for deployment efficiency or runtime performance. I want to be able to write a test, see it fail, write some code, see it pass, refactor, and repeat. Fast.

I always use Docker for everything these days - I won't install server software directly on my host machine in 2025. That decision alone introduces some complexity, but it's non-negotiable for me. The benefits of containerisation (isolation, reproducibility, easy cleanup) far outweigh the setup overhead.

And yes, I'm enough of a geek that I'm running this as a proper Jira project with tickets and everything. LT-7 was dockerizing the environment, LT-8 was getting TypeScript and Vitest working, LT-9 was ESLint and Prettier setup. It helps me track progress, maintain focus, and prevent rabbit-hole-ing - which, as you'll see, is a constant danger when setting up modern JavaScript tooling.

So this article documents the journey of setting up that environment. Spoiler alert: it took longer than actually learning the first few TypeScript concepts, but now I've got a solid foundation for iterative learning.

I should mention that I'm not tackling this learning project solo. I'm working with Claudia (okok, claude.ai. Fine. Whatever) as my TypeScript tutor. It's been an interesting experiment in AI-assisted learning - she's helping me understand concepts, troubleshoot setup issues, and even draft this article documenting the process. The back-and-forth has been surprisingly effective for working through both the technical challenges and the "why does this work this way" questions that come up constantly in modern JavaScript tooling.

This collaborative approach has turned out to be quite useful. I get to focus on the actual learning and problem-solving, while Claudia handles the research grunt work and helps me avoid some of the more obvious rabbit holes. Plus, having to explain what I'm doing and why I'm doing it (even to an AI) forces me to think more clearly about the decisions I'm making.

The Docker foundation

The first challenge was getting a Node.js environment running in Docker that wouldn't drive me mental. This sounds straightforward, but there are some non-obvious gotchas when you're trying to mount your source code into a container while still having node_modules work properly.

The core problem is this: you want your source code to be editable on the host machine (so your IDE can work with it), but you need node_modules to be installed inside the container (because native modules and platform-specific binaries). If you just mount your entire project directory into the container, you'll either overwrite the container's node_modules with whatever's on your host, or vice versa. Neither option ends well.

The solution is to use a separate named volume for node_modules:

# docker/docker-compose.yml

services:
    node:
        build:
            context: ..
            dockerfile: docker/node/Dockerfile

        volumes:
            - ..:/usr/src/app
            - node_modules:/usr/src/app/node_modules

        ports:
            - "51204:51204"

        stdin_open: true
        tty: true

volumes:
    node_modules:

This mounts the project root to /usr/src/app, but then overlays a separate volume specifically for the node_modules directory. The container gets its own node_modules that persists between container restarts, while the host machine never sees it.

The Dockerfile handles the initial npm install during the build process:

# docker/node/Dockerfile

FROM node:24-bullseye

RUN echo "alias ll='ls -alF'" >> ~/.bashrc
RUN echo "alias cls='clear; printf \"\033[3J\"'" >> ~/.bashrc

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "zip", "unzip", "git", "vim"]
RUN ["apt-get", "install", "xdg-utils", "-y"]

WORKDIR  /usr/src/app
COPY package*.json ./
RUN npm install

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node --version || exit 1

Note the xdg-utils package - this turned out to be essential for getting Vitest's web UI working properly. Without it, the test runner couldn't open browser windows from within the container, which meant the UI server would start but be inaccessible.

This setup means I can edit files on my host machine using any editor, but all the Node.js execution happens inside the container with the correct dependencies. It also means I can blow away the entire environment and rebuild it from scratch without affecting my host machine - very important when you're experimenting with JavaScript tooling that changes its mind about best practices every six months.

TypeScript configuration basics

With the Docker environment sorted, the next step was getting TypeScript itself configured. This is where some fundamental decisions need to be made about how the development workflow will actually work.

The tsconfig.json ended up being fairly straightforward:

{
    "compilerOptions": {
        "target": "ES2020",
        "module": "commonjs",
        "sourceMap": true,
        "skipLibCheck": true,
        "noEmitOnError": false,
        "outDir": "./dist",
        "esModuleInterop": true
    },
    "include": ["src/**/*"],
    "watchOptions": {
        "watchDirectory": "useFsEvents"
    }
}

The key choices here were ES2020 as the target (modern enough to be useful, old enough to be stable) and CommonJS for modules (because that's what Node.js expects by default, and I didn't want to fight that battle yet). Source maps are essential for debugging, and noEmitOnError: false means TypeScript will still generate JavaScript even when there are type errors - useful during development when you want to test partially-working code.

But the more interesting decision was about the testing strategy. Do you test the compiled JavaScript in ./dist, or do you test the TypeScript source files directly?

I spent quite a bit of time going back and forth on this. On one hand, it's the dist code that actually runs in production, so surely that's what's important to test. On the other hand, it's the src code that I'm writing and thinking about, so that's what should be tested during development. I was almost ready to go with the dist approach until I came across a Stack Overflow discussion from 2015 that helped clarify the thinking.

The crux of it comes down to the distinction between developer testing and QA testing responsibilities. As a developer, my job is to ensure my code logic is correct and that my implementation matches my intent. That's fundamentally about the source code I'm writing. QA teams, on the other hand, are responsible for verifying that the entire application works correctly under real-world conditions - which includes testing the compiled/built artifacts.

For a learning environment where I'm focused on understanding TypeScript concepts and language features, testing the source makes perfect sense. I want immediate feedback on whether my understanding of TypeScript's type system is correct, not whether the JavaScript compiler is working properly (that's TypeScript's problem, not mine).

I went with testing the source files directly. This means Vitest needs to understand TypeScript natively, but the payoff is faster iteration. When I change a source file, the test runner can immediately re-run the relevant tests without waiting for TypeScript compilation. For a learning environment where I'm constantly making small changes and want immediate feedback, this speed matters more than having tests that exactly mirror a production deployment.
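Conveniently, Vitest runs TypeScript natively, so there's barely any configuration needed for this. A minimal vitest.config.ts along these lines would do it - a sketch, not necessarily what's in the repo:

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // point the runner at the TypeScript tests directly - no compile step
    include: ['tests/**/*.test.ts'],
  },
})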

The project structure reflects this educational focus:

src/
  lt-8/
    math.ts
    baseline.ts
    slow.ts
tests/
  lt-8/
    math.test.ts
    baseline.test.ts
    slow.test.ts

Each "learning ticket" gets its own subdirectory in both src and tests. This keeps different concepts isolated and makes it easy to look back at what I was working on during any particular phase. It also means I can experiment with one concept without accidentally breaking code from a previous lesson.

The numbered ticket approach might seem a bit over-engineered for a personal learning project, but it's proven useful for maintaining focus. Each directory represents a specific learning goal, and having that structure prevents me from mixing concerns or losing track of what I was supposed to be working on.

Vitest: the testing backbone

With TypeScript configured, the next step was getting a proper test runner in place. I'd heard good things about Vitest - it's designed specifically for modern JavaScript/TypeScript projects and promises to be fast and developer-friendly.

I'd evaluated JS test frameworks a few years ago, and dismissed Jest as being poorly reasoned/implemented (despite being popular), and had run with Mocha/Chai/Sinon/etc, which seemed more sensible. I'd cut my teeth on Jasmine stuff years ago, but that's faded into obscurity these days. As of 2025, Jest seems to have stumbled and Vitest has come through as being faster and better and more focused when it comes to TypeScript. So that seems to be the way forward. Until the JS community does a reversal in probably two weeks' time and decides something else is the new shineyshiney. Anyway, for now it's Vitest. I hope its popularity lasts at least until I finish writing this article. Fuck sake.

The installation was straightforward enough:

npm install --save-dev vitest @vitest/coverage-v8 @vitest/ui

The basic configuration in package.json gives you everything you need:

"scripts": {
    "test": "vitest",
    "test:coverage": "vitest run --coverage",
    "test:ui": "vitest --ui --api.host 0.0.0.0"
}

That --api.host 0.0.0.0 bit is crucial when running inside Docker - without it, the UI server only binds to localhost inside the container, which means you can't access it from your host machine.

But the real magic is in Vitest's intelligent watch mode. This thing is genuinely clever about what it re-runs when files change. I created a deliberately slow test to demonstrate this to myself:

async function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

export async function mySlowFunction(): Promise<void> {
  console.log('Starting slow function...')
  await sleep(2000)
  console.log('Slow function finished.')
}

When I run the tests and then modify unrelated files, Vitest doesn't re-run the slow test. But when I change slow.ts itself, it immediately runs just that test and its dependencies. You can actually see the delay when that specific test runs, but other changes don't trigger it. It's a small thing, but it makes the development feedback loop much more pleasant when you're not waiting for irrelevant tests to complete.
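The test on the other end of that slow function isn't shown above, but it's roughly this (my reconstruction):

// tests/lt-8/slow.test.ts
import { describe, it, expect } from 'vitest'
import { mySlowFunction } from '../../src/lt-8/slow'

describe('deliberately slow test', () => {
  it('takes at least two seconds', async () => {
    const start = Date.now()
    await mySlowFunction()
    expect(Date.now() - start).toBeGreaterThanOrEqual(2000)
  })
})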

The web UI is where things get really interesting though. Running npm run test:ui spins up a browser-based interface that gives you a visual overview of all your tests, coverage reports, and real-time updates as you change code. This is why I needed to expose port 51204 in the Docker configuration and install xdg-utils in the container.

That's really nice, and it's "live": it updates whenever I change any code. Pretty cool.

Without xdg-utils, Vitest can start the UI server but can't open browser windows from within the container. The package provides the utilities that Node.js applications expect to be able to launch external programs - in this case, web browsers. It's one of those dependencies that's not immediately obvious until something doesn't work, and then you spend an hour googling why your UI won't open.

The combination of fast command-line testing for quick feedback and the rich web UI for deeper analysis turned out to be exactly what I wanted for a learning environment. I can run tests continuously in the background while coding, but also dive into the visual interface when I want to understand coverage or debug failing tests.

Code quality tooling: ESLint and Prettier

With the testing foundation in place, the next step was getting proper code quality tooling set up. This is where things got a bit more involved, and where I had to make some decisions about what constitutes "good" TypeScript style.

First up was ESLint. The modern TypeScript approach uses typescript-eslint which provides TypeScript-specific linting rules. But there was an immediate gotcha: the configuration format.

ESLint has moved to a new "flat config" format, and naturally this means configuration files with different extensions. Enter the .mts file. WTF is an .mts file? It's TypeScript's way of saying "this is a TypeScript module that should be treated as an ES module regardless of your package.json settings". It's part of the ongoing CommonJS vs ES modules saga that the JavaScript world still hasn't fully sorted out. The .mts extension forces ES module semantics, while .cts would force CommonJS. Since ESLint's flat config expects ES modules, but my project is using CommonJS for everything else, I needed the .mts extension to make the config file work properly. (NB: Claudia wrote all that. She explained it to me and I got it enough to nod along and go "riiiiiiight…" in a way I thought was convincing at the time, but doesn't seem that way now I write it down).

The resulting eslint.config.mts ended up being reasonably straightforward:

import js from '@eslint/js'
import globals from 'globals'
import tseslint from 'typescript-eslint'
import { defineConfig } from 'eslint/config'
import eslintConfigPrettier from 'eslint-config-prettier/flat'

export default defineConfig([
  {
    files: ['**/*.{js,mjs,cjs,ts,mts,cts}'],
    plugins: { js },
    languageOptions: {
      globals: globals.node,
    },
  },
  {
    files: ['**/*.js'],
    languageOptions: {
      sourceType: 'commonjs',
    },
  },
  {
    rules: {
      'prefer-const': 'error',
      'no-var': 'error',
      'no-undef': 'error',
    },
  },
  tseslint.configs.recommended,
  eslintConfigPrettier,
])

The philosophy here is to split responsibilities: Prettier handles style and formatting, ESLint handles potential bugs and code quality issues. The eslint-config-prettier integration ensures these two don't step on each other's toes by disabling any ESLint rules that conflict with Prettier's formatting decisions.

Speaking of Prettier, this is where I had to do some soul-searching about applying rules I don't necessarily agree with. The .prettierrc configuration reflects a mix of TypeScript community zeitgeist and my own preferences:

{
    "semi": false,
    "trailingComma": "es5",
    "singleQuote": true,
    "printWidth": 80,
    "tabWidth": 2,
    "useTabs": false,
    "endOfLine": "lf"
}

The "semi": false bit aligns with my long-standing view that semicolons are for the computer, not the human - only use them when strictly necessary. Single quotes over double quotes is just personal preference.

But then we get to the contentious bits. Two-space indentation instead of four? I've been a four-space person for decades, but the TypeScript world has largely standardised on two spaces. Trailing commas in ES5 style? Again, this is considered best practice in modern JavaScript because it makes diffs cleaner when you add array or object elements, but it feels wrong to someone coming from more traditional languages.

In the end, I decided to go with the community defaults rather than fight them. When you're learning a new ecosystem, there's value in following the established conventions even when they don't match your personal preferences. It makes it easier to read other people's code, easier to contribute to open source projects, and easier to get help when you're stuck.

The tooling integration worked exactly as advertised. ESLint caught real issues - like when I deliberately used var instead of let or const - while Prettier handled all the formatting concerns automatically. It's a surprisingly pleasant development experience once it's all wired up.

IDE integration: VSCode vs IntelliJ

This is where things got properly annoying, and where I had to make some compromises I wasn't entirely happy with.

My preferred IDE is IntelliJ. I've been using JetBrains products for years, and their TypeScript/Node.js support is generally excellent. The problem isn't with IntelliJ's understanding of TypeScript - it's with IntelliJ's support for dockerised Node.js development.

Here's the issue: IntelliJ can see that Node.js is running in a container. It can connect to it, execute commands against it, and generally work with the containerised environment. But when it comes to the node_modules directory, it absolutely requires those modules to exist on the host machine as well. Even though it knows the actual execution is happening in the container, even though it can see the modules in the container filesystem, it won't provide proper IntelliSense or code completion without a local copy of node_modules.

This is a complete show-stopper for my setup. The whole point of the separate volume for node_modules is that the host machine never sees those files. I'm not going to run npm install on my host just to make IntelliJ happy - that defeats the entire purpose of containerisation.

So: VSCode it is. And to be fair, VSCode's Docker integration is genuinely well done. The Dev Containers extension understands the setup immediately, provides proper IntelliSense for all the containerised dependencies, and generally "just gets it" in a way that IntelliJ doesn't.

There are some annoyances though. VSCode has this file locking behaviour when running against mounted volumes that occasionally interferes with file operations. Nothing catastrophic, but the kind of minor friction that makes you appreciate how smooth things usually are in IntelliJ. Still, it's livable - and the benefits of having an IDE that properly understands your containerised development environment far outweigh the occasional file system hiccup.

Getting Prettier integrated into VSCode required a few configuration tweaks. I had to install the Prettier extension, then configure VSCode to use it as the default formatter and enable format-on-save. The key settings in .vscode/settings.json were:

{
    "editor.defaultFormatter": "esbenp.prettier-vscode",
    "prettier.configPath": ".prettierrc",
    "editor.formatOnPaste": true,
    "editor.formatOnSave": true
}

I got these from How to use Prettier with ESLint and TypeScript in VSCode › Formatting using VSCode on save (recommended).

The end result is a development environment where I can focus on learning TypeScript concepts rather than fighting with tooling. It's not my ideal setup - I'd rather be using IntelliJ - but it works well enough that the IDE choice doesn't get in the way of the actual learning.

Seeing it all work

With all the tooling in place, it was time to put it through its paces with some actual TypeScript code. I started with a baseline test just to prove that Vitest was operational:

// src/lt-8/baseline.ts
import process from 'node:process'

export function getNodeVersion(): string {
  return process.version
}
// tests/lt-8/baseline.test.ts
import { describe, it, expect } from 'vitest'
import { getNodeVersion } from '../../src/lt-8/baseline'

describe('tests vitest is operational and test TS code', () => {
  it('should return the current Node.js version', () => {
    const version = getNodeVersion()
    expect(version).toMatch(/^v24\.\d+\.\d+/)
  })
})

This isn't really testing TypeScript-specific features, but it proves that the basic infrastructure works - we're importing Node.js modules, calling functions, and verifying that we get the expected Node 24 version back. It's a good sanity check that the container environment and test runner are talking to each other properly.

"Interestingly" I ran this pull request through Github Copilot's code review mechanism, and it pulled me up for this test being fragile because I'm verifying the Node version is specifically 24, suggesting "this will break if the version isn't 24 Well… exactly mate. It's a test to verify I am running on the version we're expecting it to be! I guess though my test label 'should return the current Node.js version' is not correct. It should be 'should return the application\'s required Node.js version', or something.

The real TypeScript example was the math function:

// src/lt-8/math.ts
export function add(a: number, b: number): number {
  return a + b
}
// tests/lt-8/math.test.ts
import { describe, it, expect } from 'vitest'
import { add } from '../../src/lt-8/math'

describe('add function', () => {
  it('should return 3 when adding 1 and 2', () => {
    expect(add(1, 2)).toBe(3)
  })
})

Simple, but it demonstrates TypeScript's type annotations working properly. The function expects two numbers and returns a number, and TypeScript will complain if you try to pass strings or other types.
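That is, the compiler rejects a mis-typed call before anything runs:

add(1, 2) // fine
// add('1', 2)
// ^ error TS2345: Argument of type 'string' is not assignable to
//   parameter of type 'number'.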

ESLint caught real issues too. When I deliberately changed the math function to use var instead of const:

export function add(a: number, b: number): number {
  var c = a + b
  return c
}

Running npx eslint src/lt-8/math.ts immediately flagged it:

/usr/src/app/src/lt-8/math.ts
  2:3  error  Unexpected var, use let or const instead  no-var
✖ 1 problem (1 error, 0 warnings)
1 error and 0 warnings potentially fixable with the --fix option.

Perfect - exactly the kind of feedback that helps enforce modern JavaScript practices. ESLint even suggested that it could auto-fix the issue, and running npx eslint src/lt-8/math.ts --fix would change var to const automatically.

I had some personal confusion here. My initial attempt at that var c = a + b thing was to omit the var entirely. But ESLint wasn't doing anything about it. Odd. WTF? It wasn't until Claudia explained to me that c = a + b is not valid TS at all that it made sense. That way of init-ing a variable is fine in JS, but invalid in TS. It needs a qualifier. So ESLint couldn't even parse the code, so didn't bother. Poss it should have gone "ummm… that ain't code…?" though?
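For the record, this is the version that had me confused - tsc rejects it outright:

export function add(a: number, b: number): number {
  c = a + b // error TS2304: Cannot find name 'c'
  return c
}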

The Vitest watch mode proved its worth during development. Running npm test puts it into watch mode, and it sits there monitoring file changes. When I modify math.ts, it immediately re-runs just the math tests. When I modify slow.ts, it runs the slow test and I can see the 2-second delay. But when I modify unrelated files, it doesn't unnecessarily re-run tests that haven't been affected.

The web UI provides a nice visual overview of everything that's happening. You can see which tests are passing, which are failing, coverage reports, and real-time updates as you change code. It's particularly useful when you want to dive into test details or understand why something isn't working as expected.

All of this creates a pretty pleasant development feedback loop. Write a test, see it fail, write some code, see it pass, refactor, repeat. The tooling stays out of the way and just provides the information you need when you need it.

Was it worth the config rabbit hole?

Looking back at this whole exercise, I spent significantly more time setting up the development environment than I did actually learning TypeScript concepts. The irony isn't lost on me - I set out to learn a programming language and ended up writing a blog article about Docker volumes and ESLint configuration.

But honestly? Yes, it was worth it.

The alternative would have been to muddle through with a half-working setup, constantly fighting with tooling issues, or worse - learning TypeScript concepts incorrectly because my development environment wasn't giving me proper feedback. I've been down that road before with other technologies, and it's frustrating as hell.

What I have now is a solid foundation for iterative learning. I can write a test, see it fail, implement some TypeScript code, see it pass, and refactor - all with immediate feedback from the type checker, linter, and test runner. When I inevitably write something that doesn't make sense, the tooling will tell me quickly rather than letting me develop bad habits.

The TDD approach is working exactly as intended. Having tests that run automatically means I can experiment with TypeScript features without worrying about breaking existing code. The fast feedback loop means I can try things, see what happens, and iterate quickly.

Plus, this setup will serve me well beyond just learning the basics. When I'm ready to explore more advanced TypeScript features - generics, decorators, complex type manipulations - I'll have an environment that can handle it without needing another round of configuration hell.

The time investment was front-loaded, but now I can focus on the actual learning rather than fighting with tools. And frankly, understanding how to set up a modern TypeScript development environment is valuable knowledge in itself - it's not like I'm going to be working with TypeScript in isolation forever.

So yes, the config rabbit hole was worth it. Even if it did take longer than actually learning the difference between interface and type.

Righto.

--
Adam