Friday, 10 October 2025

TypeScript: any, unknown, never

G'day:

I've been using any and unknown in TypeScript for a while now - enough to know that ESLint hates one and tolerates the other, and that sometimes you need to do x as unknown as y to make the compiler shut up. But knowing they exist and actually understanding what they're for are different things.

The Jira ticket I set up for this was straightforward enough:

Both unknown and any represent values of uncertain type, but they have different safety guarantees. any opts out of type checking entirely, while unknown is type-safe and requires narrowing before use.

Simple, right? any turns off TypeScript's safety checks, unknown keeps them on. I built some examples, wrote some tests, and thought I was done.

Then Claudia pointed out I'd completely missed the point of type narrowing. I was using type assertions (value as string) instead of type guards (actually checking what the value is at runtime). Assertions just tell TypeScript to trust you. Guards actually verify you're right.

Turns out there's a difference between "making the compiler happy" and "writing safe code".

any - when you genuinely don't know or don't care

I started with a generic key-value object that could hold anything:

export type WritableValueObject = Record<string, any>
export type ValueObject = Readonly<WritableValueObject>

type keyValue = [string, any]

export function toValueObject(...kv: keyValue[]): ValueObject {
  const vo: WritableValueObject = kv.reduce(
    (valueObject: WritableValueObject, kv: keyValue): ValueObject => {
      valueObject[kv[0]] = kv[1]
      return valueObject
    },
    {} as WritableValueObject
  )
  return vo
}

(from any.ts)

ESLint immediately flags every any with warnings about unsafe assignments and lack of type checking. But this is actually a legitimate use case - I'm building a container that genuinely holds arbitrary values. The whole point is that I don't know what's in there and don't need to.

The Readonly<...> wrapper makes it immutable after creation, which is what you want for a value object. Try to modify it and TypeScript complains about the index signature being readonly. The error message says Readonly<WritableValueObject> instead of just ValueObject because TypeScript helpfully expands type aliases in error messages. Sometimes this is useful (showing you what the type actually is), sometimes it's just verbose.
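To make that concrete, here's a minimal sketch of the readonly behaviour (the vo value and its keys are my own illustration, not from the repo):

```typescript
// Same type aliases as above; the data is illustrative
type WritableValueObject = Record<string, any>
type ValueObject = Readonly<WritableValueObject>

const vo: ValueObject = { colour: 'green', count: 42 }

console.log(vo.colour) // reading is fine: "green"
// vo.colour = 'red' // compile error:
// "Index signature in type 'Readonly<WritableValueObject>' only permits reading."
```

The commented-out assignment is the line that triggers the expanded-alias error message mentioned above.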

unknown - the safer alternative that's actually more annoying

The unknown version looks almost identical:

export type WritableValueObject = Record<string, unknown>
export type ValueObject = Readonly<WritableValueObject>

type keyValue = [string, unknown]

export function toValueObject(...kv: keyValue[]): ValueObject {
  const vo: WritableValueObject = kv.reduce(
    (valueObject: WritableValueObject, kv: keyValue): ValueObject => {
      valueObject[kv[0]] = kv[1]
      return valueObject
    },
    {} as WritableValueObject
  )
  return vo
}

(from unknown.ts)

The difference shows up when you try to use the values. With any, you can do whatever you want:

const value = vo.someKey;
const reversed = reverse(value); // Works fine with any

With unknown, TypeScript blocks you:

const value = vo.someKey;
const reversed = reverse(value); // Error: 'value' is of type 'unknown'

My first solution was type assertions:

const reversed = reverse(value as string); // TypeScript: "OK, if you say so"

This compiles. The tests pass. I thought I was done.

Then Claudia pointed out I wasn't actually checking anything - I was just telling TypeScript to trust me. Type assertions are a polite way of saying "shut up, compiler, I know what I'm doing". Which is fine when you genuinely do know, but defeats the point of using unknown in the first place.

Type guards - actually checking instead of just asserting

The proper way to handle unknown is with type guards - runtime checks that prove what type you're dealing with. TypeScript then narrows the type based on those checks.

The simplest is typeof:

const theWhatNow = returnsAsUnknown(input);

if (typeof theWhatNow === 'string') {
  const reversed = reverse(theWhatNow); // TypeScript knows it's a string now
}

(from unknown.test.ts)

Inside the if block, TypeScript knows theWhatNow is a string because the typeof check proved it. Outside that block, it's still unknown.

For objects, use instanceof:

const theWhatNow = returnsAsUnknown(input);

if (theWhatNow instanceof SomeClass) {
  expect(theWhatNow.someMethod('someValue')).toEqual('someValue');
}

And for custom checks, you can write type guard functions with the is predicate:

export class SomeClass {
  someMethod(someValue: unknown): unknown {
    return someValue
  }

  static isValid(value: unknown): value is SomeClass {
    return value instanceof SomeClass
  }
}

(from unknown.ts)

The value is SomeClass return type tells TypeScript that if this function returns true, the value is definitely a SomeClass:

if (SomeClass.isValid(theWhatNow)) {
  expect(theWhatNow.someMethod('someValue')).toEqual('someValue');
}

This is proper type safety - you're checking at runtime, not just asserting at compile time.

Error handling with unknown

The most practical use of unknown is in error handling. Before TypeScript 4.0, everyone wrote:

try {
  throwSomeError('This is an error')
} catch (e) {  // e is implicitly 'any'
  console.log(e.message)  // Hope it's an Error!
}

Now you can (and should) use unknown:

try {
  throwSomeError('This is an error')
} catch (e: unknown) {
  expect(e).toBeInstanceOf(SomeError)
}

(from unknown.test.ts)

A catch block can receive anything - not just Error objects. Someone could throw a string, a number, or literally anything. Using unknown forces you to check what you actually caught before using it.
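Here's a hedged sketch of what that check might look like (getErrorMessage is my own illustrative helper, not something from the repo):

```typescript
// Narrow the caught unknown before touching any properties on it
function getErrorMessage(e: unknown): string {
  if (e instanceof Error) {
    return e.message // narrowed: safe to read .message
  }
  return String(e) // could have been a thrown string, number, anything
}

try {
  throw new Error('This is an error')
} catch (e: unknown) {
  console.log(getErrorMessage(e)) // "This is an error"
}
```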

So which one should you use?

Here's the thing though - for my ValueObject use case, unknown is technically safer but practically more annoying. The whole point of a generic key-value store is that you don't know what's in there. Making users narrow types every time they retrieve a value is tedious:

const value = getValueForKey(vo, 'someKey');
if (typeof value === 'string') {
  doSomething(value);
}

versus just:

const value = getValueForKey(vo, 'someKey');
doSomething(value as string);

For a genuinely generic container where you're accepting "no idea what this is" as part of the design, any is the honest choice. You're not pretending to enforce safety on truly dynamic data.

But for error handling, function parameters that could be anything, or situations where you'll actually check the type before using it, unknown is the better option. It forces you to handle the uncertainty explicitly rather than hoping for the best.

never - the type that can't exist

While any and unknown are about values that could be anything, never is about values that can't exist at all. It's the bottom type - nothing can be assigned to it.

The most obvious use is functions that never return:

export function throwAnError(message: string): never {
  throw new Error(message)
}

(from never.ts)

Functions that throw or loop forever return never because they don't return at all. TypeScript uses this to detect unreachable code:

expect(() => {
  throwAnError('an error')
  // "Unreachable code detected."
  const x: string = ''
  void x
}).toThrow('an error')

(from never.test.ts)

The const x line gets flagged because TypeScript knows the previous line never returns control.

Things get more interesting with conditional never:

export function throwsAnErrorIfItIsBad(message: string): boolean | never {
  if (message.toLowerCase().indexOf('bad') !== -1) {
    throw new Error(message)
  }
  return false
}

The return type says "returns a boolean, or never returns at all". TypeScript doesn't flag unreachable code after calling this function because it might actually return normally.
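One side note, which is my own observation rather than anything from the repo: the checker actually absorbs never in unions, so boolean | never is the same type as plain boolean. The annotation documents intent for humans; it doesn't change what TypeScript sees.

```typescript
// `never` is the empty union member, so it vanishes when unioned:
type MaybeThrows = boolean | never // an IDE hover shows just: boolean
const result: MaybeThrows = false
console.log(result) // false
```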

Exhaustiveness checking

The clever use of never is exhaustiveness checking in type narrowing:

export function returnsStringsOrNumbers(
  value: string | number
): string | number {
  if (typeof value === 'string') {
    const valueToReturn = value + ''
    return valueToReturn
  }
  if (typeof value === 'number') {
    const valueToReturn = value * 1
    return valueToReturn
  }
  const valueToReturn = value // TypeScript hints: const valueToReturn: never
  return valueToReturn
}

(from never.ts)

After checking for string and number, TypeScript knows that value can't be anything else, so it infers the type as never. This is TypeScript's way of saying "we've handled all possible cases".

If you tried to call the function with something that wasn't a string or number (like an array cast to unknown then to string), TypeScript won't catch it at compile time because you've lied to the compiler. But at least the never hint shows you've exhausted the legitimate cases.
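A common way to put this to work - my own sketch, not code from the repo - is an exhaustive switch over a discriminated union. If someone later adds a third Shape variant, the never assignment in the default branch stops compiling, pointing straight at the unhandled case:

```typescript
type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'square'; side: number }

function area(shape: Shape): number {
  switch (shape.kind) {
    case 'circle':
      return Math.PI * shape.radius ** 2
    case 'square':
      return shape.side ** 2
    default: {
      // shape is narrowed to never here; adding a new kind breaks this line
      const unhandled: never = shape
      throw new Error(`Unhandled shape: ${JSON.stringify(unhandled)}`)
    }
  }
}

console.log(area({ kind: 'square', side: 3 })) // 9
```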

The actual lesson

I went into this thinking I understood these types well enough - any opts out, unknown is safer, never is for functions that don't return. All true, but missing the point.

The real distinction is between compile-time assertions and runtime checks. Type assertions (as string) tell TypeScript "trust me", but they don't verify anything. Type guards (typeof, instanceof, custom predicates) actually check at runtime.

For genuinely dynamic data like a generic ValueObject, any is the honest choice - you're accepting the lack of type safety as part of the design. For cases where you'll actually verify the type before using it (like error handling), unknown forces you to be explicit about those checks.

And never is TypeScript's way of tracking control flow and exhaustiveness, which is useful when you actually pay attention to what it's telling you.

The code for all this is in the learning-typescript repository, with test examples showing the differences between assertions and guards. Thanks to Claudia for pointing out I was doing type assertions instead of actual type checking - turns out there's a difference between making the compiler happy and writing safe code.

Righto.

--
Adam

Monday, 6 October 2025

Setting up a Python learning environment: Docker, pytest, and ruff

G'day:

I'm learning Python. Not because I particularly want to, but because my 14-year-old son Zachary has IT homework and I should probably be able to help him with it. I've been a web developer for decades, but Python's never been part of my stack. Time to fix that gap.

This article covers getting a Python learning environment set up from scratch: Docker container with modern tooling, pytest for testing, and ruff for code quality. The goal is to have a proper development environment where I can write code, run tests, and not have things break in stupid ways. Nothing revolutionary here, but documenting it for when I inevitably forget how Python dependency management works six months from now.

The repo's at github.com/adamcameron/learning-python (tag 3.0.2), and I'm tracking this as Jira tickets because that's how my brain works. LP-1 was the Docker setup, LP-2 was the testing and linting toolchain.

Getting Docker sorted

First job was getting a Python container running. I'm not installing Python directly on my Windows machine - everything goes in Docker. This keeps the host clean and makes it easy to blow away and rebuild when something inevitably goes wrong.

I went with uv for dependency management. It's the modern Python tooling that consolidates what used to be pip, virtualenv, and a bunch of other stuff into one fast binary. It's written in Rust, so it's actually quick, and it handles the virtual environment isolation properly.

The docker-compose.yml is straightforward:

services:
    python:
        build:
            context: ..
            dockerfile: docker/python/Dockerfile

        volumes:
            - ..:/usr/src/app
            - venv:/usr/src/app/.venv

        stdin_open: true
        tty: true

volumes:
    venv:

The key bit here is that separate volume for .venv. Without it, you get the same problem as with Node.js - the host's virtual environment conflicts with the container's. Using a named volume keeps the container's dependencies isolated while still letting me edit source files on the host.

The Dockerfile handles the initial setup:

FROM astral/uv:python3.11-bookworm

RUN echo "alias ll='ls -alF'" >> ~/.bashrc
RUN echo "alias cls='clear; printf \"\033[3J\"'" >> ~/.bashrc

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]

WORKDIR  /usr/src/app

ENV UV_LINK_MODE=copy

RUN \
    --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync \
    --no-install-project

ENTRYPOINT ["bash"]

Nothing fancy. The astral/uv base image already has Python and uv installed. I'm using Python 3.11 because it's stable and well-supported. The uv sync at build time installs dependencies from pyproject.toml, and that cache mount makes rebuilds faster.

The ENTRYPOINT ["bash"] keeps the container running so I can exec into it and run commands. I'm used to having PHP-FPM containers that stay up with their own service loop, and this achieves the same thing.

One thing I'm doing differently from usual here is using mount to temporarily expose files to the Docker build process. In the past I would have copied pyproject.toml into the image file system. Why the change? Cos I didn't realise I could do this until I saw it in this article I googled up: "Using uv in Docker › Intermediate layers"! I'm gonna use this strategy from now on, I think…

Project configuration and initial code

Python projects use pyproject.toml for configuration - it's the equivalent of package.json in Node.js or composer.json in PHP. Here's the initial setup:

[project]
name = "learning-python"
version = "0.1"
description = "And now I need to learn Python..."
readme = "README.md"
requires-python = ">=3.11"
dependencies = []

[project.scripts]
howdy = "learningpython.lp2.main:greet"

[build-system]
requires = ["uv_build>=0.8.15,<0.9.0"]
build-backend = "uv_build"

[tool.uv.build-backend]
namespace = true

The project.scripts section defines a howdy command that calls the greet function from learningpython.lp2.main. The syntax is module.path:function. This makes the function callable via uv run howdy from the command line.

The namespace = true bit tells uv to use namespace packages, which means I don't need __init__.py files everywhere. Modern Python packaging is less fussy than it used to be.

The actual code in src/learningpython/lp2/main.py is about as simple as it gets:

def greet():
    print("Hello from learning-python!")

if __name__ == "__main__":
    greet()

Nothing to it. The if __name__ == "__main__" bit means the function runs when you execute the file directly, but not when you import it as a module. Standard Python pattern.

With all this in place, I could build and run the container:

$ docker compose -f docker/docker-compose.yml up --detach
[+] Running 2/2
 ✔ Volume "learning-python_venv"  Created
 ✔ Container learning-python-python-1  Started

$ docker exec learning-python-python-1 uv run howdy
Hello from learning-python!

Right. Basic container works, simple function prints output. Time to sort out testing.

Installing pytest and development dependencies

Python separates runtime dependencies from development dependencies. Runtime deps go in the dependencies array, dev deps go in [dependency-groups]. Things like test frameworks and linters are dev dependencies - you need them for development but not for running the actual application.

To add pytest, I used uv add --dev pytest. This is the Python equivalent of composer require --dev in PHP or npm install --save-dev in Node. The --dev flag tells uv to put it in the dev dependency group rather than treating it as a runtime requirement.

I wanted to pin pytest to major version 8, so I checked PyPI (pypi.org/project/pytest/) to see what was current. As of writing it's 8.4.2. Python uses different version constraint syntax than Composer - instead of ^8.0 you write >=8.0,<9.0. More verbose but explicit.

I also wanted a file watcher like vitest has. There's pytest-watch but it hasn't been maintained since 2020 and doesn't work with modern pyproject.toml files. There's a newer alternative called pytest-watcher that handles the modern Python tooling properly.

After running uv add --dev pytest pytest-watcher, the pyproject.toml updated to include:

[dependency-groups]
dev = [
    "pytest>=8.4.2,<9",
    "pytest-watcher>=0.4.3,<0.5",
]

The uv.lock file pins the exact versions that were installed, giving reproducible builds. It's the Python equivalent of composer.lock or package-lock.json.

Writing the first test

pytest discovers test files automatically. It looks for files named test_*.py or *_test.py and runs functions in them that start with test_. No configuration needed for basic usage.

I created tests/lp2/test_main.py to test the greet() function. The test needed to verify that calling greet() outputs the expected message to stdout. pytest has a built-in fixture called capsys that captures output streams:

from learningpython.lp2.main import greet

def test_greet(capsys):
    greet()
    captured = capsys.readouterr()
    assert captured.out == "Hello from learning-python!\n"

The capsys parameter is a pytest fixture - you just add it as a function parameter and pytest provides it automatically. Calling readouterr() gives you back stdout and stderr as a named tuple. The \n at the end is because Python's print() adds a newline by default.

Running the test:

$ docker exec learning-python-python-1 uv run pytest
======================================= test session starts ========================================
platform linux -- Python 3.11.13, pytest-8.4.2, pluggy-1.6.0
rootdir: /usr/src/app
configfile: pyproject.toml
collected 1 item

tests/lp2/test_main.py .                                                                     [100%]

======================================== 1 passed in 0.01s =========================================

Green. The test found the pyproject.toml config automatically and discovered the test file without needing to tell it where to look.

For continuous testing, pytest-watcher monitors files and re-runs tests on changes:

$ docker exec learning-python-python-1 uv run ptw
[ptw] Watching directories: ['src', 'tests']
[ptw] Running: pytest
======================================= test session starts ========================================
platform linux -- Python 3.11.13, pytest-8.4.2, pluggy-1.6.0
rootdir: /usr/src/app
configfile: pyproject.toml
collected 1 item

tests/lp2/test_main.py .                                                                     [100%]

======================================== 1 passed in 0.01s =========================================

Any time I change a file in src or tests, it automatically re-runs the relevant tests. Much faster feedback loop than running tests manually each time.

Code formatting and linting with ruff

Python has a bunch of tools for code quality - black for formatting, flake8 for linting, isort for import sorting. Or you can just use ruff, which consolidates all of that into one fast tool written in Rust.

Installation was the same pattern: uv add --dev ruff. This added "ruff>=0.8.4,<0.9" to the dev dependencies.

ruff has two main commands:

  • ruff check - linting (finds unused variables, style issues, code problems)
  • ruff format - formatting (fixes indentation, spacing, line length)

Testing it out with some deliberately broken code:

$ docker exec learning-python-python-1 uvx ruff check src/learningpython/lp2/main.py
F841 Local variable `a` is assigned to but never used
 --> src/learningpython/lp2/main.py:2:8
  |
1 | def greet():
2 |        a = "wootywoo"
  |        ^
3 |        print("Hello from learning-python!")
  |
help: Remove assignment to unused variable `a`

It caught the unused variable. It also didn't complain about the 7-space indentation, because ruff check is about code issues, not formatting. That's what ruff format is for:

$ docker exec learning-python-python-1 uvx ruff format src/learningpython/lp2/main.py
1 file reformatted

This fixed the indentation to Python's standard 4 spaces. The check command can also auto-fix some issues with --fix, similar to eslint.

I configured IntelliJ to run ruff format on save. Had to disable a conflicting AMD Adrenaline hotkey first - video driver software stealing IDE shortcuts is always fun to debug. It took about an hour to work out WTF was going on there. I really don't understand why AMD thinks its driver software needs hotkeys. Dorks.

A Python gotcha: hyphens in paths

I reorganised the code by ticket number, so I moved the erstwhile main.py to src/learningpython/lp-2/main.py. Updated the pyproject.toml entry point to match:

[project.scripts]
howdy = "learningpython.lp-2.main:greet"

This did not go well:

$ docker exec learning-python-python-1 uv run howdy
      Built learning-python @ file:///usr/src/app
Uninstalled 1 package in 0.37ms
Installed 1 package in 1ms
  File "/usr/src/app/.venv/bin/howdy", line 4
    from learningpython.lp-2.main import greet
                            ^
SyntaxError: invalid decimal literal

Python's import system doesn't support hyphens in module names. When it sees lp-2, it tries to parse it as "lp minus 2" and chokes. Module names need to be valid Python identifiers, which means letters, numbers, and underscores only.

Renaming to lp2 fixed it. No hyphens in directory names if those directories are part of the import path. You can use hyphens in filenames that you access directly (like python path/to/some-script.py), but not in anything you're importing as a module.
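You can check the rule quickly with str.isidentifier() - this is just my own REPL-style sanity check, not anything from the repo:

```python
# Each dotted segment of a module path must be a valid Python identifier
print("lp2".isidentifier())   # True
print("lp-2".isidentifier())  # False: the hyphen makes it "lp minus 2"
```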

This caught me out because hyphens are fine in most other ecosystems. Coming from PHP and JavaScript where some-module-name is perfectly normal, Python's stricter rules take some adjustment.

Wrapping up

So that's the development environment sorted. Docker container running Python 3.11 with uv for dependency management. pytest for testing with pytest-watcher for continuous test runs. ruff handling both linting and formatting. All the basics for writing Python code without things being annoying.

The final project structure looks like this:

learning-python/
├── docker/
│   ├── docker-compose.yml
│   └── python/
│       └── Dockerfile
├── src/
│   └── learningpython/
│       └── lp2/
│           └── main.py
├── tests/
│   └── lp2/
│       └── test_main.py
├── pyproject.toml
└── uv.lock

Everything's on GitHub at github.com/adamcameron/learning-python (tag 3.0.2).

Now I can actually start learning Python instead of fighting with tooling. Which is the point.

Righto.

--
Adam

Saturday, 4 October 2025

Violating Blogger's community guidelines. Apparently.

G'day:

Earlier this evening I published TypeScript decorators: not actually decorators. And about 5min after it went live, it vanished. Weird. Looking in the back-end of Blogger, I see this warning:

This post was unpublished because it violates Blogger's community guidelines. To republish, please update the content to adhere to the guidelines.

What? Seriously?

Looking through my junk folder in my email client, I had an email thus:

Hello,

As you may know, our Community Guidelines (https://blogger.com/go/contentpolicy) describe the boundaries for what we allow – and don't allow – on Blogger. Your post titled 'TypeScript decorators: not actually decorators' was flagged to us for review. We have determined that it violates our guidelines and have unpublished the URL https://blog.adamcameron.me/2025/10/typescript-decorators-not-actually.html, making it unavailable to blog readers.

If you are interested in republishing the post, please update the content to adhere to Blogger's Community Guidelines. Once the content has been updated, you may republish it at [URL removed]. This will trigger a review of the post.

You may have the option to pursue your claims in court. If you have legal questions or wish to examine legal options that may be available to you, you may want to consult your own legal counsel.

For more information, please review the following resources:

Sincerely,

The Blogger Team

"OK," I thought. "I'll play yer silly game", knowing full-well I had done nothing to violate any sane T&Cs / guidelines. You can review the guidance yerself: obvs there's nothing in the article that comes anywhere near close to butting up against any of those rules.

I did make a coupla edits and resubmitted it:

  • Updated text in the first para to read "what the heck". You can imagine what it said before the edit. Not the only instance of that word in this blog, as one can imagine.
  • I was using my son's name instead of "Jed Dough". I have used Z's name a lot in the past, so can't see it was that.
  • I used a very clichéd common password as sample data in place of tough_to_guess.
  • I removed most of one para. The para starting "Worth learning?" went on to explain how some noted TypeScript frameworks used decorators heavily. Why did I remove this? Well: Claudia wrote it, and this came from her knowledge not my own. I didn't know those frameworks even existed, let alone used decorators. I admonished her for using original "research", but I also went through and verified that she was correct in what she was saying. To me this was harmless and useful info: but it wasn't my own work, so I thought I'd get rid. I had included a note there that it was her and not me. There's nothing in the T&Cs that said one cannot use AI to help writing these articles, but I know people are getting a bit pearl-clutchy about the whole thing ATM, so figured it might be that. Daft though, given it was an admission it was AI-written, rather than trying to shadily pass AI work off as my own. Which, if you read this blog, I don't do. I always say when she's helped me draft things. And I always read what she's done and tweak where necessary anyhow. It's my work.

And that was it. But maybe 30min later I got another email from them:

Hello,

We have re-evaluated the post titled 'TypeScript decorators: not actually decorators' against our Community Guidelines (https://blogger.com/go/contentpolicy). Upon review, the post has been reinstated. You may access the post at https://blog.adamcameron.me/2025/10/typescript-decorators-not-actually.html.

Sincerely,
The Blogger Team

Cool. No harm done, but I'd really like to know what triggered it. Of course they can't tell me as that would be leaking info that bad-actors could then use to circumvent their system. I get that. And it's better to err on the side of caution in these matters I guess.

Anyway, that was a thing.

Righto.

--
Adam (who wrote every word of this one. How bloody tedious)

TypeScript decorators: not actually decorators

G'day:

I've been working through TypeScript classes, and when I got to decorators I hit the @ syntax and thought "hang on, what the heck is all this doing inside the class being decorated? The class shouldn't know it's being decorated. Fundamentally it shouldn't know."

Turns out TypeScript decorators have bugger all to do with the Gang of Four decorator pattern. They're not about wrapping objects at runtime to extend behavior. They're metaprogramming annotations - more like Java's @annotations or C#'s [attributes] - that modify class declarations at design time using the @ syntax.

The terminology collision is unfortunate. Python had the same debate back in PEP 318 - people pointed out that "decorator" was already taken by a well-known design pattern, but they went with it anyway because the syntax visually "decorates" the function definition. TypeScript followed Python's lead: borrowed the @ syntax, borrowed the confusing name, and now we're stuck with it.

So this isn't about the decorator pattern at all. This is about TypeScript's metaprogramming features that happen to be called decorators for historical reasons that made sense to someone, somewhere.

What TypeScript decorators actually do

A decorator in TypeScript is a function that takes a target (the thing being decorated - a class, method, property, whatever) and a context object, and optionally returns a replacement. They execute at class definition time, not at runtime.

The simplest example is a getter decorator:

function obscurer(
  originalMethod: (this: PassPhrase) => string,
  context: ClassGetterDecoratorContext
) {
  void context
  function replacementMethod(this: PassPhrase) {
    const duplicateOfThis: PassPhrase = Object.assign(
      Object.create(Object.getPrototypeOf(this) as PassPhrase),
      this,
      { _text: this._text.replace(/./g, '*') }
    ) as PassPhrase

    return originalMethod.call(duplicateOfThis)
  }

  return replacementMethod
}

export class PassPhrase {
  constructor(protected _text: string) {}

  get plainText(): string {
    return this._text
  }

  @obscurer
  get obscuredText(): string {
    return this._text
  }
}

(from accessor.ts)

The decorator function receives the original getter and returns a replacement that creates a modified copy of this, replaces the _text property with asterisks, then calls the original getter with that modified context. The original instance is untouched - we're not mutating state, we're intercepting the call and providing different data to work with. The @obscurer syntax applies the decorator to the getter.

The test shows this in action:

it('original text remains unchanged', () => {
  const phrase = new PassPhrase('tough_to_guess')
  expect(phrase.obscuredText).toBe('**************')
  expect(phrase.plainText).toBe('tough_to_guess')
})

(from accessor.test.ts)

The obscuredText getter returns asterisks, the plainText getter returns the original value. The decorator wraps one getter without affecting the other or mutating the underlying _text property.

Method decorators and decorator factories

Method decorators work the same way as getter decorators, except they handle methods with actual parameters. More interesting is the decorator factory pattern - a function that returns a decorator, allowing runtime configuration.

Here's an authentication service with logging:

interface Logger {
  log(message: string): void
}

const defaultLogger: Logger = console

export class AuthenticationService {
  constructor(private directoryServiceAdapter: DirectoryServiceAdapter) {}

  @logAuth()
  authenticate(userName: string, password: string): boolean {
    const result: boolean = this.directoryServiceAdapter.authenticate(
      userName,
      password
    )
    if (!result) {
      throw new AuthenticationException(
        `Authentication failed for user ${userName}`
      )
    }
    return result
  }
}

function logAuth(logger: Logger = defaultLogger) {
  return function (
    originalMethod: (
      this: AuthenticationService,
      userName: string,
      password: string
    ) => boolean,
    context: ClassMethodDecoratorContext<
      AuthenticationService,
      (userName: string, password: string) => boolean
    >
  ) {
    void context
    function replacementMethod(
      this: AuthenticationService,
      userName: string,
      password: string
    ) {
      logger.log(`Authenticating user ${userName}`)
      try {
        const result = originalMethod.call(this, userName, password)
        logger.log(`User ${userName} authenticated successfully`)
        return result
      } catch (e) {
        logger.log(`Authentication failed for user ${userName}: ${e}`)
        throw e
      }
    }
    return replacementMethod
  }
}

(from method.ts)

The factory function takes a logger parameter and returns the actual decorator function. The decorator wraps the method with logging: logs before calling, logs on success, logs on failure and re-throws. The @logAuth() syntax calls the factory which returns the decorator.

Worth noting: the logger has to be configured at module level because @logAuth() executes when the class is defined, not when instances are created. This means tests can't easily inject different loggers per instance - you're stuck with whatever was configured when the file loaded. It's a limitation of how decorators work, and honestly it's a bit crap for dependency injection.

Also note I'm just using the console as the logger here. It makes testing easy.
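To make the timing concrete, here's a decorator-free sketch of what `@logAuth()` amounts to, and when it runs. All the names here are illustrative, not from the real method.ts:

```typescript
// The expression after @ is evaluated once, when the class definition runs -
// i.e. at module load - not when instances are constructed.
const events: string[] = []

function logAuth(loggerName = 'defaultLogger') {
  events.push(`factory ran, captured [${loggerName}]`) // module-load time
  return function (originalMethod: (userName: string) => boolean) {
    return function (this: unknown, userName: string): boolean {
      events.push('replacement ran') // call time
      return originalMethod.call(this, userName)
    }
  }
}

class AuthenticationService {
  authenticate(userName: string): boolean {
    return userName === 'admin'
  }
}

// Timing-wise, `@logAuth()` on authenticate amounts to this, done exactly once:
AuthenticationService.prototype.authenticate =
  logAuth()(AuthenticationService.prototype.authenticate)

console.log(events) // ['factory ran, captured [defaultLogger]']
new AuthenticationService().authenticate('admin')
console.log(events) // ['factory ran, captured [defaultLogger]', 'replacement ran']
```

By the time any test constructs an instance, the factory has already run with whatever logger was in scope at module load.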

Class decorators and shared state

Class decorators can replace the entire class, including hijacking the constructor. This example is thoroughly contrived but demonstrates how decorators can inject stateful behavior that persists across all instances:

const maoriNumbers = ['tahi', 'rua', 'toru', 'wha']
let current = 0
function* generator() {
  while (current < maoriNumbers.length) {
    yield maoriNumbers[current++]
  }
  throw new Error('No more Maori numbers')
}

function maoriSequence(
  target: typeof Number,
  context: ClassDecoratorContext
) {
  void context

  return class extends target {
    _value = generator().next().value as string
  }
}

type NullableString = string | null

@maoriSequence
export class Number {
  constructor(protected _value: NullableString = null) {}

  get value(): NullableString {
    return this._value
  }
}

(from class.ts)

The class decorator returns a new class that extends the original, overriding the _value property with the next value from a generator. The generator and its state live at module scope, so they're shared across all instances of the class. Each time you create a new instance, the constructor parameter gets completely ignored and the decorator forces the next Maori number instead:

it('intercepts the constructor', () => {
  expect(new Number().value).toEqual('tahi')
  expect(new Number().value).toEqual('rua')
  expect(new Number().value).toEqual('toru')
  expect(new Number().value).toEqual('wha')
  expect(() => new Number()).toThrowError('No more Maori numbers')
})

(from class.test.ts)

First instance gets 'tahi', second gets 'rua', third gets 'toru', fourth gets 'wha', and the fifth throws: by then current has reached the end of the list, so the freshly created generator's while loop never runs and it goes straight to the throw. The state persists across all instantiations because it lives in module scope, not in any one generator.
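The subtle bit is that `generator().next().value` creates a brand-new iterator on every instantiation; it's the module-level `current` variable, not the iterator, that carries the state. Stripped of the decorator machinery, the mechanics look like this (a trimmed-down sketch):

```typescript
const maoriNumbers = ['tahi', 'rua']
let current = 0

function* generator() {
  while (current < maoriNumbers.length) {
    yield maoriNumbers[current++]
  }
  throw new Error('No more Maori numbers')
}

// Two completely separate iterators, but the sequence carries on,
// because `current` lives in module scope:
const first = generator().next().value // 'tahi'
const second = generator().next().value // 'rua'
console.log(first, second)
```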

This demonstrates that class decorators can completely hijack construction and maintain shared state, which is both powerful and horrifying. You'd never actually do this in real code - it's terrible for testing, debugging, and reasoning about behavior - but it shows the level of control decorators have over class behavior.

GitHub Copilot's code review was appropriately horrified by this. It flagged the module-level state, the generator that never resets, the constructor hijacking, and basically everything else about this approach. Fair cop - the code reviewer was absolutely right to be suspicious. This is demonstration code showing what's possible with decorators, not what you should actually do. In real code, if you find yourself maintaining stateful generators at module scope that exhaust after four calls and hijack constructors to ignore their parameters, you've gone badly wrong somewhere and need to step back and reconsider your life choices.

Auto-accessors and the accessor keyword

Auto-accessors are a newer feature that provides shorthand for creating getter/setter pairs with a private backing field. The accessor keyword does automatically what you'd normally write manually:

export class Person {
  @logCalls(defaultLogger)
  accessor firstName: string

  @logCalls(defaultLogger)
  accessor lastName: string

  constructor(firstName: string, lastName: string) {
    this.firstName = firstName
    this.lastName = lastName
  }

  getFullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

(from autoAccessors.ts)
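Before getting to the decorator: here's roughly what `accessor firstName: string` expands to, written out by hand. This is a sketch of the equivalent, not the compiler's exact emitted code:

```typescript
// A hand-written equivalent of `accessor firstName: string`:
class PersonManual {
  #firstName = '' // private backing field

  get firstName(): string {
    return this.#firstName
  }

  set firstName(value: string) {
    this.#firstName = value
  }

  constructor(firstName: string) {
    this.firstName = firstName // goes through the setter
  }
}

const p = new PersonManual('Jed')
p.firstName = 'Zachary'
console.log(p.firstName) // 'Zachary'
```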

The accessor keyword creates a private backing field plus public getter and setter, similar to C# auto-properties. The decorator can then wrap both operations:

function logCalls(logger: Logger = defaultLogger) {
  return function <This, Value>(
    target: ClassAccessorDecoratorTarget<This, Value>,
    context: ClassAccessorDecoratorContext<This, Value>
  ) {
    const result: ClassAccessorDecoratorResult<This, Value> = {
      get(this: This) {
        logger.log(`[${String(context.name)}] getter called`)
        return target.get.call(this)
      },
      set(this: This, value: Value) {
        logger.log(
          `[${String(context.name)}] setter called with value [${String(value)}]`
        )
        target.set.call(this, value)
      }
    }

    return result
  }
}

(from autoAccessors.ts)

The target provides access to the original get and set methods, and the decorator returns a result object with replacement implementations. The getter wraps the original with logging before calling it, and the setter does the same.

Testing shows both operations getting logged:

it('should log the setters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  new Person('Jed', 'Dough')

  expect(consoleSpy).toHaveBeenCalledWith(
    '[firstName] setter called with value [Jed]'
  )
  expect(consoleSpy).toHaveBeenCalledWith(
    '[lastName] setter called with value [Dough]'
  )
})

it('should log the getters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  const person = new Person('Jed', 'Dough')

  expect(person.getFullName()).toBe('Jed Dough')
  expect(consoleSpy).toHaveBeenCalledWith('[firstName] getter called')
  expect(consoleSpy).toHaveBeenCalledWith('[lastName] getter called')
})

(from autoAccessors.test.ts)

The constructor assignments trigger the setters, which get logged. Later when getFullName() accesses the properties, the getters are logged.

Auto-accessors are actually quite practical compared to the other decorator types. They provide a clean way to add cross-cutting concerns like logging, validation, or change tracking to properties without cluttering the class with boilerplate getter/setter implementations.
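As a taste of the validation use case: here's a sketch of a non-empty check as an accessor decorator. `nonEmpty` and `ValidatedPerson` are made-up names for illustration, and this assumes TypeScript 5+ standard decorators:

```typescript
// A validation decorator: rejects empty strings in the setter.
// ClassAccessorDecoratorResult lets you override only the bits you care
// about - here just `set`; the original getter is left alone.
function nonEmpty<This>(
  target: ClassAccessorDecoratorTarget<This, string>,
  context: ClassAccessorDecoratorContext<This, string>
): ClassAccessorDecoratorResult<This, string> {
  return {
    set(this: This, value: string) {
      if (value.trim() === '') {
        throw new Error(`${String(context.name)} must not be empty`)
      }
      target.set.call(this, value)
    }
  }
}

class ValidatedPerson {
  // NB: an initialiser value does NOT go through the decorated setter
  @nonEmpty accessor firstName = ''

  constructor(firstName: string) {
    this.firstName = firstName // this assignment DOES run the decorated setter
  }
}

const ok = new ValidatedPerson('Jed')
console.log(ok.firstName) // 'Jed'
// new ValidatedPerson('   ') // throws: firstName must not be empty
```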

What I learned

TypeScript decorators are metaprogramming tools that modify class behavior at design time. They're useful for cross-cutting concerns like logging, validation, or instrumentation - the kinds of things that would otherwise clutter your actual business logic.

The main decorator types are:

  • Getter/setter decorators - wrap property access
  • Method decorators - wrap method calls
  • Class decorators - replace or modify entire classes
  • Auto-accessor decorators - wrap the getter/setter pairs created by the accessor keyword

Decorator factories (functions that return decorators) allow runtime configuration, though "runtime" here means "when the module loads", not "when instances are created". This makes dependency injection awkward - you're stuck with module-level state or global configuration.

The syntax is straightforward once you understand the pattern: decorator receives target and context, returns replacement (or modifies via context), job done. The tricky bit is the type signatures and making sure your implementation signature is flexible enough to handle all the overloads you're declaring.

But fundamentally, these aren't decorators in the design pattern sense. They're annotations that modify declarations. If you're thinking of the GoF decorator pattern when you see the @ syntax, you'll need to context-switch your brain, because it's doing something completely different here.

Worth learning? Yeah, if only because you'll see them in the wild and need to understand what they're doing.

Would I use them in my own code? Probably sparingly. Auto-accessors are legitimately useful. Method decorators for logging or metrics could work if you're comfortable with the module-level configuration limitations. Class decorators that hijack constructors and maintain shared state can absolutely get in the sea.

But to be frank: if I wanted to decorate something - in the accurate sense of that term - I'd do it properly using the design pattern, and DI.


The full code for this investigation is in my learning-typescript repository.

Righto.

--
Adam

Thursday, 2 October 2025

TypeScript mixins: poor person's composition, but with generics

G'day:

I've been working through TypeScript classes, and today I hit mixins. For those unfamiliar, mixins are a pattern for composing behavior from multiple sources - think Ruby's modules or PHP's traits. They're basically "poor person's composition" - a way to share behavior between classes when you can't (or won't) use proper dependency injection.

I think they're a terrible pattern. If I need shared behavior, I'd use actual composition - create a proper class and inject it as a dependency. But I'm not always working with my own code, and mixins do exist in the wild, so here we are.

The TypeScript mixin implementation is interesting though - it's built on generics and functions that return classes, which is quite different from the prototype-mutation approach you see in JavaScript. And despite my reservations about the pattern itself, understanding how it works turned out to be useful for understanding TypeScript's type system better.

The basic pattern

TypeScript mixins aren't about mutating prototypes at runtime (though you can do that in JavaScript). They're functions that take a class and return a new class that extends it.

For this example, I wanted a mixin that would add a flatten() method to any class - something that takes all the object's properties and concatenates their values into a single string. Not particularly useful in real code, but simple enough to demonstrate the mechanics without getting lost in business logic.

type Constructor = new (...args: any[]) => {}

function applyFlattening<TBase extends Constructor>(Base: TBase) {
  return class Flattener extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [_, value]): string => {
          return flattened + String(value)
        },
        ''
      )
    }
  }
}

(from mixins.ts)

That Constructor type is saying "anything that can be called with new and returns an object". The mixin function takes a class that matches this type and returns a new anonymous class that extends the base class with additional behavior.

You can then apply it to any class:

export class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

export const FlattenableName = applyFlattening(Name)

FlattenableName is now a class that has everything Name had plus the flatten() method. TypeScript tracks all of this at compile time, so you get proper type checking and autocomplete for both the base class members and the mixin methods.

The generics bit

The confusing part (at least initially) is this bit:

function applyFlattening<TBase extends Constructor>(Base: TBase)

Without understanding generics, this is completely opaque. The <TBase extends Constructor> is saying "this function is generic over some type TBase, which must be a constructor". The Base: TBase parameter then uses that type.

This lets TypeScript track what specific class you're mixing into. When you call applyFlattening(Name), TypeScript knows that TBase is specifically the Name class, so it can infer that the returned class has both Name's properties and methods plus the flatten() method.

Without generics, TypeScript would only know "some constructor was passed in" and couldn't give you proper type information about what the resulting class actually contains. The generic parameter preserves the type information through the composition.

I hadn't covered generics properly before hitting this (it's still on my todo list), which made the mixin syntax particularly cryptic. But the core concept is straightforward once you understand that generics are about preserving type information as you transform data - in this case, transforming a class into an extended version of itself.
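To see what the generic parameter is actually buying you, here's a sketch of the non-generic alternative (names are illustrative):

```typescript
type Constructor = new (...args: any[]) => {}

// Non-generic version: TypeScript only knows "some constructor went in",
// so the result's instance type is {} plus flatten() - Name's members
// are erased from the type, even though they still exist at runtime.
function applyFlatteningUntyped(Base: Constructor) {
  return class extends Base {
    flatten(): string {
      return Object.entries(this)
        .map(([, value]) => String(value))
        .join('')
    }
  }
}

class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}
}

const Untyped = applyFlatteningUntyped(Name)
const n = new Untyped('Zachary', 'Lynch')
console.log(n.flatten()) // 'ZacharyLynch' - this still works
// n.firstName // compile error: the non-generic signature lost Name's members
```

The runtime behaviour is identical either way; the generic version just stops the type information being thrown away.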

Using the mixed class

Once you've got the mixed class, using it is straightforward:

const flattenableName: InstanceType<typeof FlattenableName> =
  new FlattenableName('Zachary', 'Lynch')
expect(flattenableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = flattenableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

(from mixins.test.ts)

The InstanceType<typeof FlattenableName> bit is necessary because FlattenableName is a value (the constructor function), not a type. typeof FlattenableName gives you the constructor type, and InstanceType<...> extracts the type of instances that constructor creates.

Once you've got an instance, it has both the original Name functionality (the fullName getter) and the new flatten() method. The mixin has full access to this, so it can see all the object's properties - in this case, firstName and lastName.
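The constructor-type/instance-type split, spelled out in a trimmed-down sketch:

```typescript
class Name {
  constructor(public firstName: string) {}
}

type NameCtor = typeof Name // the constructor type: new (firstName: string) => Name
type NameInstance = InstanceType<NameCtor> // the instance type: Name

// NameInstance is what `new` produces, so this annotation checks out:
const n: NameInstance = new Name('Zachary')
console.log(n.firstName) // 'Zachary'
```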

Constraining the mixin

The basic Constructor type accepts any class - it doesn't care what properties or methods the class has. But you can constrain mixins to only work with classes that have specific properties:

type NameConstructor = new (
  ...args: any[]
) => {
  firstName: string
  lastName: string
}

function applyNameFlattening<TBase extends NameConstructor>(Base: TBase) {
  return class NameFlattener extends Base {
    flatten(): string {
      return this.firstName + this.lastName
    }
  }
}

(from mixins.ts)

The NameConstructor type specifies that the resulting instance must have firstName and lastName properties. Now the mixin can safely access those properties directly - TypeScript knows they'll exist.

You can't constrain the constructor parameters themselves - that ...args: any[] is mandatory for mixin functions. TypeScript requires this because the mixin doesn't know what arguments the base class constructor needs. You can only constrain the instance type (the return type of the constructor).

This means a class like this won't work with the constrained mixin:

export class ShortName {
  constructor(public firstName: string) {}
}
// This won't compile:
// export const FlattenableShortName = applyNameFlattening(ShortName)
// Argument of type 'typeof ShortName' is not assignable to parameter of type 'NameConstructor'

TypeScript correctly rejects it because ShortName doesn't have a lastName property, and the mixin's flatten() method needs it.

Chaining multiple mixins

You can apply multiple mixins by chaining them - pass the result of one mixin into another:

function applyArrayifier<TBase extends Constructor>(Base: TBase) {
  return class Arrayifier extends Base {
    arrayify(): string[] {
      return Object.entries(this).reduce(
        (arrayified: string[], [_, value]): string[] => {
          return arrayified.concat(String(value).split(''))
        },
        []
      )
    }
  }
}

export const ArrayableFlattenableName = applyArrayifier(FlattenableName)

(from mixins.ts)

Now ArrayableFlattenableName has everything from Name, plus flatten() from the first mixin, plus arrayify() from the second mixin:

const transformableName: InstanceType<typeof ArrayableFlattenableName> =
  new ArrayableFlattenableName('Zachary', 'Lynch')
expect(transformableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = transformableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

const arrayifiedName: string[] = transformableName.arrayify()
expect(arrayifiedName).toEqual('ZacharyLynch'.split(''))

(from mixins.test.ts)

TypeScript correctly infers that all three sets of functionality are available on the final class. The type information flows through each composition step.

Why not just use composition?

Right, so having learned how mixins work in TypeScript, I still think they're a poor choice for most situations. If you need shared behavior, use actual composition:

class Flattener {
  flatten(obj: Record<string, unknown>): string {
    return Object.entries(obj).reduce(
      (flattened, [_, value]) => flattened + String(value),
      ''
    )
  }
}

class Name {
  constructor(
    public firstName: string,
    public lastName: string,
    private flattener: Flattener
  ) {}
  
  flatten(): string {
    return this.flattener.flatten(this)
  }
}

This is clearer about dependencies, easier to test (inject a mock Flattener), and doesn't require understanding generics or the mixin pattern. The behavior is in a separate class that can be reused anywhere, not just through inheritance chains.

Mixins make sense in languages where you genuinely can't do proper composition easily, or where the inheritance model is the primary abstraction. But TypeScript has first-class support for dependency injection and composition. Use it.

The main legitimate use case I can see for TypeScript mixins is when you're working with existing code that uses them, or when you need to add behavior to classes you don't control. Otherwise, favor composition.

The abstract class limitation

One thing you can't do with the basic pattern is apply a mixin to an abstract class. The Constructor type is a non-abstract construct signature - something you can call with new - and an abstract class isn't assignable to it, because not being directly instantiable is its whole point.

abstract class AbstractBase {
  abstract doSomething(): void
}

// This won't compile:
// const Mixed = applyMixin(AbstractBase)
// Cannot assign an abstract constructor type to a non-abstract constructor type

The workarounds involve making the base class concrete (which defeats the purpose of having it abstract), mixing into a concrete subclass instead of the abstract parent, or - since TypeScript 4.2 - declaring the mixin's parameter with an abstract construct signature (abstract new (...args: any[]) => {}) and returning an abstract class from it. None of them is particularly satisfying.

At heart it's a mismatch between "can't be newed directly" (abstract classes) and the pattern's default assumption of a newable base. It's another reason to prefer composition - you can inject abstract dependencies through constructor parameters without any of these contortions.
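For completeness: TypeScript 4.2 added abstract construct signatures, which give the mixin pattern an escape hatch for abstract bases. A sketch, with illustrative names:

```typescript
// An abstract construct signature accepts abstract classes as arguments.
type AbstractConstructor = abstract new (...args: any[]) => {}

function applyFlattening<TBase extends AbstractConstructor>(Base: TBase) {
  // The returned class must itself be abstract, since Base's abstract
  // members may still be unimplemented at this point.
  abstract class Flattener extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [, value]): string => flattened + String(value),
        ''
      )
    }
  }
  return Flattener
}

abstract class AbstractName {
  constructor(public firstName: string) {}
  abstract label(): string
}

// This compiles - no "abstract constructor type" complaint:
const FlattenableAbstractName = applyFlattening(AbstractName)

// A concrete subclass supplies the abstract member and gets flatten() for free:
class FullName extends FlattenableAbstractName {
  constructor(
    firstName: string,
    public lastName: string
  ) {
    super(firstName)
  }

  label(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

console.log(new FullName('Zachary', 'Lynch').flatten()) // 'ZacharyLynch'
```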

What I learned

TypeScript mixins are functions that take classes and return extended classes. They use generics to preserve type information through the composition, and TypeScript tracks everything at compile time so you get proper type checking.

The syntax is more complicated than it needs to be (that type Constructor = new (...args: any[]) => {} bit), and you need to understand generics before any of it makes sense. The InstanceType<typeof ClassName> dance is necessary because of how TypeScript distinguishes between constructor types and instance types.

You can constrain mixins to only work with classes that have specific properties, and you can chain multiple mixins together. But you can't use them with abstract classes, and they're generally a worse choice than proper composition for most real-world scenarios.

I learned the pattern because I'll encounter it in other people's code, not because I plan to use it myself. If I need shared behavior, I'll use dependency injection and composition like a sensible person. But now at least I understand what's happening when I see const MixedClass = applyMixin(BaseClass) in a codebase.

The full code for this investigation is in my learning-typescript repository. Thanks to Claudia for helping work through the type constraints and the abstract class limitation, and for assistance with this write-up.

Righto.

--
Adam