
Friday, 10 October 2025

TypeScript: any, unknown, never

G'day:

I've been using any and unknown in TypeScript for a while now - enough to know that ESLint hates one and tolerates the other, and that sometimes you need to do x as unknown as y to make the compiler shut up. But knowing they exist and actually understanding what they're for are different things.

The Jira ticket I set up for this was straightforward enough:

Both unknown and any represent values of uncertain type, but they have different safety guarantees. any opts out of type checking entirely, while unknown is type-safe and requires narrowing before use.

Simple, right? any turns off TypeScript's safety checks, unknown keeps them on. I built some examples, wrote some tests, and thought I was done.

Then Claudia pointed out I'd completely missed the point of type narrowing. I was using type assertions (value as string) instead of type guards (actually checking what the value is at runtime). Assertions just tell TypeScript to trust you. Guards actually verify you're right.

Turns out there's a difference between "making the compiler happy" and "writing safe code".

any - when you genuinely don't know or don't care

I started with a generic key-value object that could hold anything:

export type WritableValueObject = Record<string, any>
export type ValueObject = Readonly<WritableValueObject>

type keyValue = [string, any]

export function toValueObject(...kv: keyValue[]): ValueObject {
  const vo: WritableValueObject = kv.reduce(
    (valueObject: WritableValueObject, kv: keyValue): ValueObject => {
      valueObject[kv[0]] = kv[1]
      return valueObject
    },
    {} as WritableValueObject
  )
  return vo
}

(from any.ts)

ESLint immediately flags every any with warnings about unsafe assignments and lack of type checking. But this is actually a legitimate use case - I'm building a container that genuinely holds arbitrary values. The whole point is that I don't know what's in there and don't need to.

The Readonly<...> wrapper makes it immutable after creation, which is what you want for a value object. Try to modify it and TypeScript complains about the index signature being readonly. The error message says Readonly<WritableValueObject> instead of just ValueObject because TypeScript helpfully expands type aliases in error messages. Sometimes this is useful (showing you what the type actually is), sometimes it's just verbose.
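To make concrete what Readonly<...> does (and doesn't) buy you - this is my own sketch, not from the repo - the protection is purely compile-time; Object.freeze is what you'd reach for if you needed runtime immutability too:

```typescript
type ValueObject = Readonly<Record<string, any>>

const vo: ValueObject = { name: 'Zachary', age: 42 }

// Rejected by tsc: "Index signature in type ... only permits reading"
// vo.name = 'changed'

// But Readonly<> vanishes at runtime - nothing stops a caller who casts
// it away. Object.freeze gives actual runtime immutability:
const frozen = Object.freeze({ ...vo })
```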

unknown - the safer alternative that's actually more annoying

The unknown version looks almost identical:

export type WritableValueObject = Record<string, unknown>
export type ValueObject = Readonly<WritableValueObject>

type keyValue = [string, unknown]

export function toValueObject(...kv: keyValue[]): ValueObject {
  const vo: WritableValueObject = kv.reduce(
    (valueObject: WritableValueObject, kv: keyValue): ValueObject => {
      valueObject[kv[0]] = kv[1]
      return valueObject
    },
    {} as WritableValueObject
  )
  return vo
}

(from unknown.ts)

The difference shows up when you try to use the values. With any, you can do whatever you want:

const value = vo.someKey;
const reversed = reverse(value); // Works fine with any

With unknown, TypeScript blocks you:

const value = vo.someKey;
const reversed = reverse(value); // Error: 'value' is of type 'unknown'

My first solution was type assertions:

const reversed = reverse(value as string); // TypeScript: "OK, if you say so"

This compiles. The tests pass. I thought I was done.

Then Claudia pointed out I wasn't actually checking anything - I was just telling TypeScript to trust me. Type assertions are a polite way of saying "shut up, compiler, I know what I'm doing". Which is fine when you genuinely do know, but defeats the point of using unknown in the first place.

Type guards - actually checking instead of just asserting

The proper way to handle unknown is with type guards - runtime checks that prove what type you're dealing with. TypeScript then narrows the type based on those checks.

The simplest is typeof:

const theWhatNow = returnsAsUnknown(input);

if (typeof theWhatNow === 'string') {
  const reversed = reverse(theWhatNow); // TypeScript knows it's a string now
}

(from unknown.test.ts)

Inside the if block, TypeScript knows theWhatNow is a string because the typeof check proved it. Outside that block, it's still unknown.
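Putting the two approaches side by side makes the difference obvious (returnsUnknown here is a made-up stand-in, not from the repo): the assertion compiles happily and falls over at runtime, while the guard simply never runs the wrong code.

```typescript
function returnsUnknown(): unknown {
  return 42 // actually a number, whatever the caller assumed
}

const value = returnsUnknown()

// Assertion: compiles fine, crashes at runtime -
// "asserted.toUpperCase is not a function"
const asserted = value as string
let assertionBlewUp = false
try {
  asserted.toUpperCase()
} catch {
  assertionBlewUp = true
}

// Guard: the string-handling code simply doesn't run for a number
let guardRan = false
if (typeof value === 'string') {
  value.toUpperCase()
  guardRan = true
}
```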

For objects, use instanceof:

const theWhatNow = returnsAsUnknown(input);

if (theWhatNow instanceof SomeClass) {
  expect(theWhatNow.someMethod('someValue')).toEqual('someValue');
}

And for custom checks, you can write type guard functions with the is predicate:

export class SomeClass {
  someMethod(someValue: unknown): unknown {
    return someValue
  }

  static isValid(value: unknown): value is SomeClass {
    return value instanceof SomeClass
  }
}

(from unknown.ts)

The value is SomeClass return type tells TypeScript that if this function returns true, the value is definitely a SomeClass:

if (SomeClass.isValid(theWhatNow)) {
  expect(theWhatNow.someMethod('someValue')).toEqual('someValue');
}

This is proper type safety - you're checking at runtime, not just asserting at compile time.
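Custom predicates really earn their keep with plain objects, where instanceof can't help - parsed JSON, for example. Here's a sketch of my own (isUser and the User shape are illustrative, not from the repo):

```typescript
interface User {
  name: string
  age: number
}

// A type guard for a plain-object shape: each property is checked at runtime
function isUser(value: unknown): value is User {
  return (
    typeof value === 'object' &&
    value !== null &&
    'name' in value &&
    typeof (value as { name: unknown }).name === 'string' &&
    'age' in value &&
    typeof (value as { age: unknown }).age === 'number'
  )
}

const parsed: unknown = JSON.parse('{"name": "Zachary", "age": 42}')

let greeting = 'unknown input'
if (isUser(parsed)) {
  greeting = `Hello, ${parsed.name}` // parsed is User in here
}
```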

Error handling with unknown

The most practical use of unknown is in error handling. Before TypeScript 4.0, everyone wrote:

try {
  throwSomeError('This is an error')
} catch (e) {  // e is implicitly 'any'
  console.log(e.message)  // Hope it's an Error!
}

Now you can (and should) use unknown:

try {
  throwSomeError('This is an error')
} catch (e: unknown) {
  expect(e).toBeInstanceOf(SomeError)
}

(from unknown.test.ts)

JavaScript lets you throw anything - not just Error objects. Someone could throw a string, a number, or literally anything, and that's what lands in your catch block. Using unknown forces you to check what you actually caught before using it.
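A sketch of what that checking looks like in practice (describeCaught is my own illustrative helper, not from the repo):

```typescript
function describeCaught(e: unknown): string {
  if (e instanceof Error) {
    return `Error: ${e.message}`
  }
  if (typeof e === 'string') {
    return `String: ${e}`
  }
  return `Something else: ${String(e)}`
}

let described = ''
try {
  // JavaScript is perfectly happy to throw a bare string
  throw 'not an Error at all'
} catch (e: unknown) {
  described = describeCaught(e)
}
```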

So which one should you use?

Here's the thing though - for my ValueObject use case, unknown is technically safer but practically more annoying. The whole point of a generic key-value store is that you don't know what's in there. Making users narrow types every time they retrieve a value is tedious:

const value = getValueForKey(vo, 'someKey');
if (typeof value === 'string') {
  doSomething(value);
}

versus just:

const value = getValueForKey(vo, 'someKey');
doSomething(value as string);

For a genuinely generic container where you're accepting "no idea what this is" as part of the design, any is the honest choice. You're not pretending to enforce safety on truly dynamic data.

But for error handling, function parameters that could be anything, or situations where you'll actually check the type before using it, unknown is the better option. It forces you to handle the uncertainty explicitly rather than hoping for the best.

never - the type that can't exist

While any and unknown are about values that could be anything, never is about values that can't exist at all. It's the bottom type - nothing can be assigned to it.

The most obvious use is functions that never return:

export function throwAnError(message: string): never {
  throw new Error(message)
}

(from never.ts)

Functions that throw or loop forever return never because they don't return at all. TypeScript uses this to detect unreachable code:

expect(() => {
  throwAnError('an error')
  // "Unreachable code detected."
  const x: string = ''
  void x
}).toThrow('an error')

(from never.test.ts)

The const x line gets flagged because TypeScript knows the previous line never returns control.

Things get more interesting with conditional never:

export function throwsAnErrorIfItIsBad(message: string): boolean | never {
  if (message.toLowerCase().indexOf('bad') !== -1) {
    throw new Error(message)
  }
  return false
}

The return type says "returns a boolean, or never returns at all". Strictly speaking, TypeScript collapses boolean | never down to plain boolean - never is absorbed in unions - so the annotation is documentation for the reader rather than extra information for the compiler. Either way, TypeScript doesn't flag unreachable code after calling this function because it might actually return normally.

Exhaustiveness checking

The clever use of never is exhaustiveness checking in type narrowing:

export function returnsStringsOrNumbers(
  value: string | number
): string | number {
  if (typeof value === 'string') {
    const valueToReturn = value + ''
    return valueToReturn
  }
  if (typeof value === 'number') {
    const valueToReturn = value * 1
    return valueToReturn
  }
  const valueToReturn = value // TypeScript hints: const valueToReturn: never
  return valueToReturn
}

(from never.ts)

After checking for string and number, TypeScript knows that value can't be anything else, so it infers the type as never. This is TypeScript's way of saying "we've handled all possible cases".

If you tried to call the function with something that wasn't a string or number (like an array cast to unknown then to string), TypeScript won't catch it at compile time because you've lied to the compiler. But at least the never hint shows you've exhausted the legitimate cases.
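A common way to make that never inference do active duty is the assertNever idiom (my own example here, not from the repo): add a new member to the union without handling it, and the default branch stops compiling.

```typescript
// The assertNever idiom: a function that only accepts never, so it can
// only be called from branches TypeScript has proven unreachable.
function assertNever(value: never): never {
  throw new Error(`Unhandled case: ${JSON.stringify(value)}`)
}

type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'square'; side: number }

function area(shape: Shape): number {
  switch (shape.kind) {
    case 'circle':
      return Math.PI * shape.radius ** 2
    case 'square':
      return shape.side ** 2
    default:
      // Add a new kind to Shape without a case above and this stops
      // compiling, because `shape` is no longer narrowed to never.
      return assertNever(shape)
  }
}
```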

The actual lesson

I went into this thinking I understood these types well enough - any opts out, unknown is safer, never is for functions that don't return. All true, but missing the point.

The real distinction is between compile-time assertions and runtime checks. Type assertions (as string) tell TypeScript "trust me", but they don't verify anything. Type guards (typeof, instanceof, custom predicates) actually check at runtime.

For genuinely dynamic data like a generic ValueObject, any is the honest choice - you're accepting the lack of type safety as part of the design. For cases where you'll actually verify the type before using it (like error handling), unknown forces you to be explicit about those checks.

And never is TypeScript's way of tracking control flow and exhaustiveness, which is useful when you actually pay attention to what it's telling you.

The code for all this is in the learning-typescript repository, with test examples showing the differences between assertions and guards. Thanks to Claudia for pointing out I was doing type assertions instead of actual type checking - turns out there's a difference between making the compiler happy and writing safe code.

Righto.

--
Adam

Saturday, 4 October 2025

TypeScript decorators: not actually decorators

G'day:

I've been working through TypeScript classes, and when I got to decorators I hit the @ syntax and thought "hang on, what the heck is all this doing inside the class being decorated? The class shouldn't know it's being decorated. Fundamentally it shouldn't know."

Turns out TypeScript decorators have bugger all to do with the Gang of Four decorator pattern. They're not about wrapping objects at runtime to extend behavior. They're metaprogramming annotations - more like Java's @annotations or C#'s [attributes] - that modify class declarations at design time using the @ syntax.

The terminology collision is unfortunate. Python had the same debate back in PEP 318 - people pointed out that "decorator" was already taken by a well-known design pattern, but they went with it anyway because the syntax visually "decorates" the function definition. TypeScript followed Python's lead: borrowed the @ syntax, borrowed the confusing name, and now we're stuck with it.

So this isn't about the decorator pattern at all. This is about TypeScript's metaprogramming features that happen to be called decorators for historical reasons that made sense to someone, somewhere.

What TypeScript decorators actually do

A decorator in TypeScript is a function that takes a target (the thing being decorated - a class, method, property, whatever) and a context object, and optionally returns a replacement. They execute at class definition time, not at runtime.

The simplest example is a getter decorator:

function obscurer(
  originalMethod: (this: PassPhrase) => string,
  context: ClassGetterDecoratorContext
) {
  void context
  function replacementMethod(this: PassPhrase) {
    const duplicateOfThis: PassPhrase = Object.assign(
      Object.create(Object.getPrototypeOf(this) as PassPhrase),
      this,
      { _text: this._text.replace(/./g, '*') }
    ) as PassPhrase

    return originalMethod.call(duplicateOfThis)
  }

  return replacementMethod
}

export class PassPhrase {
  constructor(protected _text: string) {}

  get plainText(): string {
    return this._text
  }

  @obscurer
  get obscuredText(): string {
    return this._text
  }
}

(from accessor.ts)

The decorator function receives the original getter and returns a replacement that creates a modified copy of this, replaces the _text property with asterisks, then calls the original getter with that modified context. The original instance is untouched - we're not mutating state, we're intercepting the call and providing different data to work with. The @obscurer syntax applies the decorator to the getter.

The test shows this in action:

it('original text remains unchanged', () => {
  const phrase = new PassPhrase('tough_to_guess')
  expect(phrase.obscuredText).toBe('**************')
  expect(phrase.plainText).toBe('tough_to_guess')
})

(from accessor.test.ts)

The obscuredText getter returns asterisks, the plainText getter returns the original value. The decorator wraps one getter without affecting the other or mutating the underlying _text property.

Method decorators and decorator factories

Method decorators work the same way as getter decorators, except they handle methods with actual parameters. More interesting is the decorator factory pattern - a function that returns a decorator, allowing runtime configuration.

Here's an authentication service with logging:

interface Logger {
  log(message: string): void
}

const defaultLogger: Logger = console

export class AuthenticationService {
  constructor(private directoryServiceAdapter: DirectoryServiceAdapter) {}

  @logAuth()
  authenticate(userName: string, password: string): boolean {
    const result: boolean = this.directoryServiceAdapter.authenticate(
      userName,
      password
    )
    if (!result) {
      throw new AuthenticationException(
        `Authentication failed for user ${userName}`
      )
    }
    return result
  }
}

function logAuth(logger: Logger = defaultLogger) {
  return function (
    originalMethod: (
      this: AuthenticationService,
      userName: string,
      password: string
    ) => boolean,
    context: ClassMethodDecoratorContext<
      AuthenticationService,
      (userName: string, password: string) => boolean
    >
  ) {
    void context
    function replacementMethod(
      this: AuthenticationService,
      userName: string,
      password: string
    ) {
      logger.log(`Authenticating user ${userName}`)
      try {
        const result = originalMethod.call(this, userName, password)
        logger.log(`User ${userName} authenticated successfully`)
        return result
      } catch (e) {
        logger.log(`Authentication failed for user ${userName}: ${e}`)
        throw e
      }
    }
    return replacementMethod
  }
}

(from method.ts)

The factory function takes a logger parameter and returns the actual decorator function. The decorator wraps the method with logging: logs before calling, logs on success, logs on failure and re-throws. The @logAuth() syntax calls the factory which returns the decorator.

Worth noting: the logger has to be configured at module level because @logAuth() executes when the class is defined, not when instances are created. This means tests can't easily inject different loggers per instance - you're stuck with whatever was configured when the file loaded. It's a limitation of how decorators work, and honestly it's a bit crap for dependency injection.

Also note I'm just using the console as the logger here. It makes testing easy.

Class decorators and shared state

Class decorators can replace the entire class, including hijacking the constructor. This example is thoroughly contrived but demonstrates how decorators can inject stateful behavior that persists across all instances:

const maoriNumbers = ['tahi', 'rua', 'toru', 'wha']
let current = 0
function* generator() {
  while (current < maoriNumbers.length) {
    yield maoriNumbers[current++]
  }
  throw new Error('No more Maori numbers')
}

function maoriSequence(
  target: typeof Number,
  context: ClassDecoratorContext
) {
  void context

  return class extends target {
    _value = generator().next().value as string
  }
}

type NullableString = string | null

@maoriSequence
export class Number {
  constructor(protected _value: NullableString = null) {}

  get value(): NullableString {
    return this._value
  }
}

(from class.ts)

The class decorator returns a new class that extends the original, overriding the _value property with the next value from a generator. The generator and its state live at module scope, so they're shared across all instances of the class. Each time you create a new instance, the constructor parameter gets completely ignored and the decorator forces the next Maori number instead:

it('intercepts the constructor', () => {
  expect(new Number().value).toEqual('tahi')
  expect(new Number().value).toEqual('rua')
  expect(new Number().value).toEqual('toru')
  expect(new Number().value).toEqual('wha')
  expect(() => new Number()).toThrowError('No more Maori numbers')
})

(from class.test.ts)

First instance gets 'tahi', second gets 'rua', third gets 'toru', fourth gets 'wha', and the fifth throws an error because the generator is exhausted. The state persists across all instantiations because it's in the decorator's closure at module level.

This demonstrates that class decorators can completely hijack construction and maintain shared state, which is both powerful and horrifying. You'd never actually do this in real code - it's terrible for testing, debugging, and reasoning about behavior - but it shows the level of control decorators have over class behavior.

GitHub Copilot's code review was appropriately horrified by this. It flagged the module-level state, the generator that never resets, the constructor hijacking, and basically everything else about this approach. Fair cop - the code reviewer was absolutely right to be suspicious. This is demonstration code showing what's possible with decorators, not what you should actually do. In real code, if you find yourself maintaining stateful generators at module scope that exhaust after four calls and hijack constructors to ignore their parameters, you've gone badly wrong somewhere and need to step back and reconsider your life choices.

Auto-accessors and the accessor keyword

Auto-accessors are a newer feature that provide shorthand for creating a getter/setter pair with a private backing field. One accessor declaration replaces the field-plus-getter-plus-setter boilerplate you'd otherwise write by hand:

export class Person {
  @logCalls(defaultLogger)
  accessor firstName: string

  @logCalls(defaultLogger)
  accessor lastName: string

  constructor(firstName: string, lastName: string) {
    this.firstName = firstName
    this.lastName = lastName
  }

  getFullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

(from autoAccessors.ts)
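For reference, accessor firstName: string is roughly shorthand for this hand-written version (my sketch of the expansion, with my own name for the backing field):

```typescript
class PersonManual {
  #firstName: string // the private backing field `accessor` generates for you

  constructor(firstName: string) {
    this.#firstName = firstName
  }

  get firstName(): string {
    return this.#firstName
  }

  set firstName(value: string) {
    this.#firstName = value
  }
}
```

It's that generated getter/setter pair that the decorator gets to wrap.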

The accessor keyword creates a private backing field plus public getter and setter, similar to C# auto-properties. The decorator can then wrap both operations:

function logCalls(logger: Logger = defaultLogger) {
  return function (
    target: ClassAccessorDecoratorTarget,
    context: ClassAccessorDecoratorContext
  ) {
    const result: ClassAccessorDecoratorResult = {
      get(this: This) {
        logger.log(`[${String(context.name)}] getter called`)
        return target.get.call(this)
      },
      set(this: This, value) {
        logger.log(
          `[${String(context.name)}] setter called with value [${String(value)}]`
        )
        target.set.call(this, value)
      }
    }

    return result
  }
}

(from autoAccessors.ts)

The target provides access to the original get and set methods, and the decorator returns a result object with replacement implementations. The getter wraps the original with logging before calling it, and the setter does the same. (The This type in those signatures is an alias presumably defined elsewhere in autoAccessors.ts; it isn't shown in this excerpt.)

Testing shows both operations getting logged:

it('should log the setters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  new Person('Jed', 'Dough')

  expect(consoleSpy).toHaveBeenCalledWith(
    '[firstName] setter called with value [Jed]'
  )
  expect(consoleSpy).toHaveBeenCalledWith(
    '[lastName] setter called with value [Dough]'
  )
})

it('should log the getters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  const person = new Person('Jed', 'Dough')

  expect(person.getFullName()).toBe('Jed Dough')
  expect(consoleSpy).toHaveBeenCalledWith('[firstName] getter called')
  expect(consoleSpy).toHaveBeenCalledWith('[lastName] getter called')
})

(from autoAccessors.test.ts)

The constructor assignments trigger the setters, which get logged. Later when getFullName() accesses the properties, the getters are logged.

Auto-accessors are actually quite practical compared to the other decorator types. They provide a clean way to add cross-cutting concerns like logging, validation, or change tracking to properties without cluttering the class with boilerplate getter/setter implementations.

What I learned

TypeScript decorators are metaprogramming tools that modify class behavior at design time. They're useful for cross-cutting concerns like logging, validation, or instrumentation - the kinds of things that would otherwise clutter your actual business logic.

The main decorator types are:

  • Getter/setter decorators - wrap property access
  • Method decorators - wrap method calls
  • Class decorators - replace or modify entire classes
  • Auto-accessor decorators - wrap the getter/setter pairs created by the accessor keyword

Decorator factories (functions that return decorators) allow runtime configuration, though "runtime" here means "when the module loads", not "when instances are created". This makes dependency injection awkward - you're stuck with module-level state or global configuration.

The syntax is straightforward once you understand the pattern: decorator receives target and context, returns replacement (or modifies via context), job done. The tricky bit is the type signatures and making sure your implementation signature is flexible enough to handle all the overloads you're declaring.

But fundamentally, these aren't decorators in the design pattern sense. They're annotations that modify declarations. If you're used to "decorator" meaning the GoF pattern, you'll need to context-switch your brain, because the @ syntax here is doing something completely different.

Worth learning? Yeah, if only because you'll see them in the wild and need to understand what they're doing.

Would I use them in my own code? Probably sparingly. Auto-accessors are legitimately useful. Method decorators for logging or metrics could work if you're comfortable with the module-level configuration limitations. Class decorators that hijack constructors and maintain shared state can absolutely get in the sea.

But to be frank: if I wanted to decorate something - in the accurate sense of that term - I'd do it properly using the design pattern, and DI.


The full code for this investigation is in my learning-typescript repository.

Righto.

--
Adam

Thursday, 2 October 2025

TypeScript mixins: poor person's composition, but with generics

G'day:

I've been working through TypeScript classes, and today I hit mixins. For those unfamiliar, mixins are a pattern for composing behavior from multiple sources - think Ruby's modules or PHP's traits. They're basically "poor person's composition" - a way to share behavior between classes when you can't (or won't) use proper dependency injection.

I think they're a terrible pattern. If I need shared behavior, I'd use actual composition - create a proper class and inject it as a dependency. But I'm not always working with my own code, and mixins do exist in the wild, so here we are.

The TypeScript mixin implementation is interesting though - it's built on generics and functions that return classes, which is quite different from the prototype-mutation approach you see in JavaScript. And despite my reservations about the pattern itself, understanding how it works turned out to be useful for understanding TypeScript's type system better.

The basic pattern

TypeScript mixins aren't about mutating prototypes at runtime (though you can do that in JavaScript). They're functions that take a class and return a new class that extends it.

For this example, I wanted a mixin that would add a flatten() method to any class - something that takes all the object's properties and concatenates their values into a single string. Not particularly useful in real code, but simple enough to demonstrate the mechanics without getting lost in business logic.

type Constructor = new (...args: any[]) => {}

function applyFlattening<TBase extends Constructor>(Base: TBase) {
  return class Flattener extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [_, value]): string => {
          return flattened + String(value)
        },
        ''
      )
    }
  }
}

(from mixins.ts)

That Constructor type is saying "anything that can be called with new and returns an object". The mixin function takes a class that matches this type and returns a new anonymous class that extends the base class with additional behavior.

You can then apply it to any class:

export class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

export const FlattenableName = applyFlattening(Name)

FlattenableName is now a class that has everything Name had plus the flatten() method. TypeScript tracks all of this at compile time, so you get proper type checking and autocomplete for both the base class members and the mixin methods.

The generics bit

The confusing part (at least initially) is this bit:

function applyFlattening<TBase extends Constructor>(Base: TBase)

Without understanding generics, this is completely opaque. The <TBase extends Constructor> is saying "this function is generic over some type TBase, which must be a constructor". The Base: TBase parameter then uses that type.

This lets TypeScript track what specific class you're mixing into. When you call applyFlattening(Name), TypeScript knows that TBase is specifically the Name class, so it can infer that the returned class has both Name's properties and methods plus the flatten() method.

Without generics, TypeScript would only know "some constructor was passed in" and couldn't give you proper type information about what the resulting class actually contains. The generic parameter preserves the type information through the composition.

I hadn't covered generics properly before hitting this (it's still on my todo list), which made the mixin syntax particularly cryptic. But the core concept is straightforward once you understand that generics are about preserving type information as you transform data - in this case, transforming a class into an extended version of itself.
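Here's a sketch of what you lose by dropping the generic (my own illustration, not from the repo). The class still works at runtime; the compiler just forgets everything specific about it:

```typescript
type Constructor = new (...args: any[]) => {}

class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

// Non-generic version: TypeScript only knows "some constructor came in",
// so the returned class's instance type is {} plus flatten() - as far as
// the compiler is concerned, fullName and firstName no longer exist.
function applyFlatteningUntyped(Base: Constructor) {
  return class extends Base {
    flatten(): string {
      return Object.values(this).map(String).join('')
    }
  }
}

const Untyped = applyFlatteningUntyped(Name)
const n = new Untyped('Zachary', 'Lynch')

// n.fullName   <- compile error: Property 'fullName' does not exist
// The property is still there at runtime, hence the double cast to reach it:
const runtimeFullName = (n as unknown as InstanceType<typeof Name>).fullName
```

With the generic version, TypeScript infers the intersection of Name's members and flatten(), and no cast is needed.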

Using the mixed class

Once you've got the mixed class, using it is straightforward:

const flattenableName: InstanceType<typeof FlattenableName> =
  new FlattenableName('Zachary', 'Lynch')
expect(flattenableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = flattenableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

(from mixins.test.ts)

The InstanceType<typeof FlattenableName> bit is necessary because FlattenableName is a value (the constructor function), not a type. typeof FlattenableName gives you the constructor type, and InstanceType<...> extracts the type of instances that constructor creates.

Once you've got an instance, it has both the original Name functionality (the fullName getter) and the new flatten() method. The mixin has full access to this, so it can see all the object's properties - in this case, firstName and lastName.

Constraining the mixin

The basic Constructor type accepts any class - it doesn't care what properties or methods the class has. But you can constrain mixins to only work with classes that have specific properties:

type NameConstructor = new (
  ...args: any[]
) => {
  firstName: string
  lastName: string
}

function applyNameFlattening<TBase extends NameConstructor>(Base: TBase) {
  return class NameFlattener extends Base {
    flatten(): string {
      return this.firstName + this.lastName
    }
  }
}

(from mixins.ts)

The NameConstructor type specifies that the resulting instance must have firstName and lastName properties. Now the mixin can safely access those properties directly - TypeScript knows they'll exist.

You can't constrain the constructor parameters themselves - that ...args: any[] is mandatory for mixin functions. TypeScript requires this because the mixin doesn't know what arguments the base class constructor needs. You can only constrain the instance type (the return type of the constructor).

This means a class like this won't work with the constrained mixin:

export class ShortName {
  constructor(public firstName: string) {}
}
// This won't compile:
// export const FlattenableShortName = applyNameFlattening(ShortName)
// Argument of type 'typeof ShortName' is not assignable to parameter of type 'NameConstructor'

TypeScript correctly rejects it because ShortName doesn't have a lastName property, and the mixin's flatten() method needs it.

Chaining multiple mixins

You can apply multiple mixins by chaining them - pass the result of one mixin into another:

function applyArrayifier<TBase extends Constructor>(Base: TBase) {
  return class Arrayifier extends Base {
    arrayify(): string[] {
      return Object.entries(this).reduce(
        (arrayified: string[], [_, value]): string[] => {
          return arrayified.concat(String(value).split(''))
        },
        []
      )
    }
  }
}

export const ArrayableFlattenableName = applyArrayifier(FlattenableName)

(from mixins.ts)

Now ArrayableFlattenableName has everything from Name, plus flatten() from the first mixin, plus arrayify() from the second mixin:

const transformableName: InstanceType<typeof ArrayableFlattenableName> =
  new ArrayableFlattenableName('Zachary', 'Lynch')
expect(transformableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = transformableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

const arrayifiedName: string[] = transformableName.arrayify()
expect(arrayifiedName).toEqual('ZacharyLynch'.split(''))

(from mixins.test.ts)

TypeScript correctly infers that all three sets of functionality are available on the final class. The type information flows through each composition step.

Why not just use composition?

Right, so having learned how mixins work in TypeScript, I still think they're a poor choice for most situations. If you need shared behavior, use actual composition:

class Flattener {
  flatten(obj: Record<string, unknown>): string {
    return Object.entries(obj).reduce(
      (flattened, [_, value]) => flattened + String(value),
      ''
    )
  }
}

class Name {
  constructor(
    public firstName: string,
    public lastName: string,
    private flattener: Flattener
  ) {}
  
  flatten(): string {
    // pass just the data - `this` would also include the injected flattener itself
    return this.flattener.flatten({
      firstName: this.firstName,
      lastName: this.lastName
    })
  }
}

This is clearer about dependencies, easier to test (inject a mock Flattener), and doesn't require understanding generics or the mixin pattern. The behavior is in a separate class that can be reused anywhere, not just through inheritance chains.

Mixins make sense in languages where you genuinely can't do proper composition easily, or where the inheritance model is the primary abstraction. But TypeScript has first-class support for dependency injection and composition. Use it.

The main legitimate use case I can see for TypeScript mixins is when you're working with existing code that uses them, or when you need to add behavior to classes you don't control. Otherwise, favor composition.

The abstract class limitation

One thing you can't do with mixins is apply them to abstract classes. The pattern requires using new Base(...) to instantiate and extend the base class, but abstract classes can't be instantiated - that's their whole point.

abstract class AbstractBase {
  abstract doSomething(): void
}

// This won't work:
// const Mixed = applyMixin(AbstractBase)
// Cannot create an instance of an abstract class

The workarounds involve either making the base class concrete (which defeats the purpose of having it abstract), or mixing into a concrete subclass instead of the abstract parent. Neither is particularly satisfying.

This is a fundamental incompatibility between "can't instantiate" (abstract classes) and "must instantiate to extend" (the mixin pattern). It's another reason to prefer composition - you can absolutely inject abstract dependencies through constructor parameters without these limitations.
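To make that concrete, here's a hedged sketch of the composition alternative - the class names (AbstractFormatter, UpperCaseFormatter, Greeter) are mine for illustration, not from the repo:

```typescript
// the abstract dependency - can't be instantiated, but can be injected
abstract class AbstractFormatter {
  abstract format(value: string): string
}

class UpperCaseFormatter extends AbstractFormatter {
  format(value: string): string {
    return value.toUpperCase()
  }
}

class Greeter {
  // inject the abstraction; any concrete subclass will do
  constructor(private formatter: AbstractFormatter) {}

  greet(name: string): string {
    return this.formatter.format(`hello ${name}`)
  }
}

const greeter = new Greeter(new UpperCaseFormatter())
const greeting = greeter.greet('Zachary') // 'HELLO ZACHARY'
```

The abstract class stays abstract, and swapping in a different formatter (or a mock in tests) is just a different constructor argument.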

What I learned

TypeScript mixins are functions that take classes and return extended classes. They use generics to preserve type information through the composition, and TypeScript tracks everything at compile time so you get proper type checking.

The syntax is more complicated than it needs to be (that type Constructor = new (...args: any[]) => {} bit), and you need to understand generics before any of it makes sense. The InstanceType<typeof ClassName> dance is necessary because of how TypeScript distinguishes between constructor types and instance types.

You can constrain mixins to only work with classes that have specific properties, and you can chain multiple mixins together. But you can't use them with abstract classes, and they're generally a worse choice than proper composition for most real-world scenarios.

I learned the pattern because I'll encounter it in other people's code, not because I plan to use it myself. If I need shared behavior, I'll use dependency injection and composition like a sensible person. But now at least I understand what's happening when I see const MixedClass = applyMixin(BaseClass) in a codebase.

The full code for this investigation is in my learning-typescript repository. Thanks to Claudia for helping work through the type constraints and the abstract class limitation, and for assistance with this write-up.

Righto.

--
Adam

Tuesday, 30 September 2025

TypeScript constructor overloading: when one implementation has to handle multiple signatures

G'day:

I've been working through TypeScript classes, and today I hit constructor overloading. Coming from PHP where you can't overload constructors at all (you get one constructor, that's it), the TypeScript approach seemed straightforward enough: declare multiple signatures, implement once, job done.

Turns out the "implement once" bit is where things get interesting.

The basic pattern

TypeScript lets you declare multiple constructor signatures followed by a single implementation:

constructor()
constructor(s: string)
constructor(n: number)
constructor(s: string, n: number)
constructor(p1?: string | number, p2?: number) {
  // implementation handles all four cases
}

The first four lines are just declarations - they tell TypeScript "these are the valid ways to call this constructor". The final signature is the actual implementation that has to handle all of them.

Simple enough when you've got a no-arg constructor and a two-arg constructor - those are clearly different. But what happens when you need two different single-argument constructors, one taking a string and one taking a number?

That's where I got stuck.

The implementation signature problem

Here's what I wanted to support:

const empty = new Numeric()                    // both properties null
const justString = new Numeric('forty-two')    // asString set, asNumeric null
const justNumber = new Numeric(42)             // asNumeric set, asString null
const both = new Numeric('forty-two', 42)      // both properties set

(from constructors.test.ts)

My first attempt at the implementation looked like this:

constructor()
constructor(s: string)
constructor(s: string, n: number)
constructor(s?: string, n?: number) {
  this.asString = s ?? null
  this.asNumeric = n ?? null
}

Works fine for the no-arg, single-string, and two-arg cases. But then I needed to add the single-number constructor:

constructor(n: number)

And suddenly the compiler wasn't happy: "This overload signature is not compatible with its implementation signature."

The error pointed at the new overload, but the actual problem was in the implementation. It took me ages (and asking Claudia) to work this out. This is entirely down to me not reading the message properly, just looking at which line it was pointing to. Duh. The first parameter was typed as string (or undefined), but the new overload promised it could also be a number. The implementation couldn't deliver on what the overload signature was promising.

Why neutral parameter names matter

The fix was to change the implementation signature to accept both types:

constructor(p1?: string | number, p2?: number) {
  // ...
}

But here's where the parameter naming became important. My initial instinct was to keep using meaningful names like s and n:

constructor(s?: string | number, n?: number)

This felt wrong. When you're reading the implementation code and you see a parameter called s, you expect it to be a string. But now it might be a number. The name actively misleads you about what the parameter contains.

Switching to neutral names like p1 and p2 made the implementation logic much clearer - these are just "parameter slots" that could contain different types depending on which overload was called. No assumptions about what they contain.

Runtime type checking

Once the implementation signature accepts both types, you need runtime logic to figure out which overload was actually called:

constructor(p1?: string | number, p2?: number) {
  if (typeof p1 === 'number' && p2 === undefined) {
    this.asNumeric = p1
    return
  }
  this.asString = (p1 as string) ?? null
  this.asNumeric = p2 ?? null
}

(from constructors.ts)

The first check handles the single-number case: if the first parameter is a number and there's no second parameter, we're dealing with new Numeric(42). Set asNumeric and bail out.

Everything else falls through to the default logic: treat the first parameter as a string (or absent) and the second parameter as a number (or absent). This covers the no-arg, single-string, and two-arg cases.

The type assertion (p1 as string) is necessary because TypeScript can't prove that p1 is a string at that point - we've only eliminated the case where it's definitely a number. From the compiler's perspective, it could still be string | number | undefined.
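For comparison, here's a sketch (overload declarations omitted for brevity; NumericAlt is a made-up name) that narrows with a typeof type guard instead of asserting, so the compiler can verify each branch itself:

```typescript
class NumericAlt {
  asString: string | null = null
  asNumeric: number | null = null

  constructor(p1?: string | number, p2?: number) {
    if (typeof p1 === 'number' && p2 === undefined) {
      this.asNumeric = p1
      return
    }
    // the guard narrows p1 to string inside this block - no assertion needed
    if (typeof p1 === 'string') {
      this.asString = p1
    }
    this.asNumeric = p2 ?? null
  }
}
```

The behaviour is the same for the four declared call shapes; the difference is that TypeScript proves the narrowing rather than taking my word for it.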

The bug I didn't notice

I had the implementation working and all my tests passing. Job done, right? Except when I submitted the PR, GitHub Copilot's review flagged this:

this.asString = (p1 as string) || null
this.asNumeric = p2 || null
The logic for handling empty strings is incorrect. An empty string ('') will be converted to null due to the || operator, but empty strings should be preserved as valid string values. Use nullish coalescing (??) instead or explicit null checks.

Copilot was absolutely right. The || operator treats all falsy values as "use the right-hand side", which includes:

  • '' (empty string)
  • 0 (zero)
  • false
  • null
  • undefined
  • NaN

So new Numeric('') would set asString to null instead of '', and new Numeric('test', 0) would set asNumeric to null instead of 0. Both are perfectly valid values that the constructor should accept.

The ?? (nullish coalescing) operator only treats null and undefined as "use the right-hand side", which is exactly what I needed:

this.asString = (p1 as string) ?? null
this.asNumeric = p2 ?? null

Now empty strings and zeros are preserved as valid values.
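A quick sketch of the difference (the explicit string | null and number | null annotations are just to keep the compiler from flagging the comparisons as always-truthy):

```typescript
const emptyString: string | null = ''
const zero: number | null = 0

const viaOr = emptyString || null      // null - || treats '' as falsy and discards it
const viaNullish = emptyString ?? null // '' - ?? only falls through on null/undefined

const zeroOr = zero || null            // null - 0 is falsy too
const zeroNullish = zero ?? null       // 0 - preserved
```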

Testing the edge cases

The fact that this bug existed meant my initial tests weren't comprehensive enough. I'd tested the basic cases but missed the edge cases where valid values happen to be falsy.

I added tests for empty strings and zeros:

it('accepts an empty string as the only argument', () => {
  const o: Numeric = new Numeric('')

  expect(o.asString).toEqual('')
  expect(o.asNumeric).toBeNull()
})

it('accepts zero as the only argument', () => {
  const o: Numeric = new Numeric(0)

  expect(o.asNumeric).toEqual(0)
  expect(o.asString).toBeNull()
})

it('accepts an empty string as the first argument', () => {
  const o: Numeric = new Numeric('', -1)

  expect(o.asString).toEqual('')
})

it('accepts zero as the second argument', () => {
  const o: Numeric = new Numeric('NOT_TESTED', 0)

  expect(o.asNumeric).toEqual(0)
})

(from constructors.test.ts)

With the original || implementation, all four of these tests failed. After switching to ??, they all passed. That's how testing is supposed to work - the tests catch the bug, you fix it, the tests confirm the fix.

Fair play to Copilot for spotting this in the PR review. It's easy to miss falsy edge cases when you're focused on getting the type signatures right.

Method overloading in general

Worth noting that constructor overloading is just a specific case of method overloading. Any method can use this same pattern of multiple signatures with one implementation:

class Example {
  doThing(): void
  doThing(s: string): void
  doThing(n: number): void
  doThing(p?: string | number): void {
    // implementation handles all cases
  }
}

The same principles apply: the implementation signature needs to be flexible enough to handle all the declared overloads, and you need runtime type checking to figure out which overload was actually called.
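Here's a hedged, runnable version of that pattern on an ordinary method - describe and its messages are invented for illustration:

```typescript
class Example {
  describe(): string
  describe(s: string): string
  describe(n: number): string
  describe(p?: string | number): string {
    // runtime checks work out which overload was actually called
    if (typeof p === 'number') {
      return `number: ${p}`
    }
    if (typeof p === 'string') {
      return `string: ${p}`
    }
    return 'no argument'
  }
}
```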

Constructors just happen to be where I first encountered this pattern, because that's where you often want multiple ways to initialize an object with different combinations of parameters.

What I learned

Constructor overloading in TypeScript is straightforward once you understand that the implementation signature has to be a superset of all the overload signatures. The tricky bit is when you have overloads that look similar but take different types - that's when you need union types and runtime type checking to make it work.

Using neutral parameter names in the implementation helps avoid confusion about what types you're actually dealing with. And edge case testing matters - falsy values like empty strings and zeros are valid inputs that need explicit test coverage.

The full code is in my learning-typescript repository if you want to see the complete implementation. Thanks to Claudia for helping me understand why that compilation error was pointing at the overload when the problem was in the implementation, and to GitHub Copilot for catching the || vs ?? bug in the PR review.

Righto.

--
Adam

Monday, 29 September 2025

TypeScript late static binding: parameters that aren't actually parameters

G'day:

I've been working through classes in TypeScript as part of my learning project, and today I hit static methods. Coming from PHP, one of the first questions that popped into my head was "how does late static binding work here?"

In PHP, you can do this:

class Base {
    static function create() {
        return new static();  // Creates instance of the actual called class
    }
}

class Child extends Base {}

$instance = Child::create();  // Returns a Child instance, not Base

The static keyword in new static() means "whatever class this method was actually called on", not "the class where this method is defined". It's late binding - the class is resolved at runtime based on how the method was called.

Seemed like a reasonable thing to want in TypeScript. Turns out it's possible, but the syntax is... questionable.

The TypeScript approach

Here's what I ended up with:

export class TranslatedNumber {
  constructor(
    private value: number,
    private en: string,
    private mi: string
  ) {}

  getAll(): { value: number; en: string; mi: string } {
    return {
      value: this.value,
      en: this.en,
      mi: this.mi,
    }
  }

  static fromTuple<T extends typeof TranslatedNumber>(
    this: T,
    values: [value: number, en: string, mi: string]
  ): InstanceType<T> {
    return new this(...values) as InstanceType<T>
  }
}

export class ShoutyTranslatedNumber extends TranslatedNumber {
  constructor(value: number, en: string, mi: string) {
    super(value, en.toUpperCase(), mi.toUpperCase())
  }
}

(from static.ts)

And it works - when you call ShoutyTranslatedNumber.fromTuple(), you get a ShoutyTranslatedNumber instance back, not a TranslatedNumber:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

expect(translated.getAll()).toEqual({
  value: 3,
  en: 'THREE',
  mi: 'TORU',
})

(from static.test.ts)

The late binding works. But look at that fromTuple method signature again. Specifically this bit: this: T.

Parameters that aren't parameters

When I first saw this: T in the parameter list, my immediate reaction was "okay, so I need to pass the class as the first argument?"

But the usage doesn't have any extra parameter:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

No class being passed. Just the tuple. So what the hell is this: T doing in the parameter list?

Turns out it's a TypeScript-specific construct that exists purely for the type system. It's not a runtime parameter at all - it gets completely erased during compilation. It's a type hint that tells TypeScript "remember which class this static method was called on".

When you write ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru']), TypeScript infers:

  • The this inside fromTuple refers to ShoutyTranslatedNumber
  • Therefore T is typeof ShoutyTranslatedNumber
  • Therefore InstanceType<T> is ShoutyTranslatedNumber

It's clever. It works. But it's also completely bizarre if you're coming from any language where parameters are just parameters.

Why this feels wrong

The thing that bothers me about this isn't that it doesn't work - it does work fine. It's that the solution is a hack at the type system level when it should be a language feature.

TypeScript could have introduced syntax like new static() or new this() and compiled it to whatever JavaScript pattern makes it work at runtime. Instead, they've made developers express "the class this method was called on" through a phantom parameter that only exists for the type checker.

Compare this to how other languages handle it:

PHP just gives you static as a keyword. You write new static() and the compiler handles the rest.

Kotlin compiles to JavaScript too, but when you write Kotlin, you write actual Kotlin - proper classes, sealed classes, data classes, all the language features. The compiler figures out how to make it work in JavaScript. You don't write weird pseudo-parameters because "JavaScript doesn't have that feature".

TypeScript has positioned itself as "JavaScript with types" rather than "a language that compiles to JavaScript", which means it's constantly constrained by JavaScript's limitations instead of abstracting them away. When JavaScript doesn't have a concept, TypeScript makes you do the workaround instead of the compiler doing it.

It's functional, but it's not elegant. And it's definitely not intuitive.

Does it matter?

In practice? Not really. Once you know the pattern, it's straightforward enough to use. The this: T parameter becomes just another TypeScript idiom you memorise and move on.

But it does highlight a fundamental tension in TypeScript's design philosophy. The language is scared to be a proper language with its own features and syntax. Everything has to map cleanly back to JavaScript, even when that makes the developer experience worse.

I found this Stack Overflow answer while researching this, which explains the mechanics well enough, but doesn't really acknowledge how weird the solution is. It's all type theory without much "here's why the language works this way".

For now, I've got late static binding working in TypeScript. It required some generics gymnastics and a phantom parameter, but it does what I need. I'll probably dig deeper into generics in a future ticket - there's clearly more to understand there, and I've not worked with generics in any language before, so that'll be interesting.

The code for this is in my learning-typescript repository if you want to see the full implementation. Thanks to Claudia for helping me understand what the hell this: T was actually doing and for assistance with this write-up.

Righto.

--
Adam

Saturday, 27 September 2025

JavaScript Symbols: when learning one thing teaches you fifteen others

G'day:

This is one of those "I thought I was learning one thing but ended up discovering fifteen other weird JavaScript behaviors" situations that seems to happen every time I try to understand a JavaScript feature properly.

I was working through my TypeScript learning project, specifically tackling symbols (TS / JS) as part of understanding primitive types. Seemed straightforward enough - symbols are unique primitive values, used for creating "private" object properties and implementing well-known protocols. Easy, right?

Wrong. What started as "symbols are just unique identifiers" quickly turned into a masterclass in JavaScript's most bizarre type coercion behaviors, ESLint's opinions about legitimate code patterns, and why semicolons sometimes matter more than you think.

The basics (that aren't actually basic)

Symbols are primitive values that are guaranteed to be unique:

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false - always unique

Except when they're not unique, because Symbol.for() maintains a global registry:

const s1 = Symbol.for('my-key');
const s2 = Symbol.for('my-key');
console.log(s1 === s2); // true - same symbol from registry

Fair enough. And you can't call Symbol as a constructor (unlike literally every other primitive wrapper):

const sym = new Symbol(); // TypeError: Symbol is not a constructor

This seemed like a reasonable safety feature until I tried to test it and discovered that TypeScript will happily let you write this nonsense, but ESLint immediately starts complaining about the any casting required to make it "work".

Where things get properly weird

The real fun starts when you encounter the well-known symbols - particularly Symbol.toPrimitive. This lets you control how objects get converted to primitive values, which sounds useful until you actually try to use it.

Here's a class that implements custom primitive conversion:

export class SomeClass {
  [Symbol.toPrimitive](hint: string) {
    if (hint === 'number') {
      return 42;
    }
    if (hint === 'string') {
      return 'forty-two';
    }
    return 'default';
  }
}

(from symbols.ts)

Now, which conversion do you think obj + '' would trigger? If you guessed "string", because you're concatenating with a string, you'd be wrong. It actually triggers the "default" hint because JavaScript's + operator is fundamentally broken.

The + operator with mixed types calls toPrimitive with hint "default", not "string". JavaScript has to decide whether this is addition or concatenation before converting the operands, so it plays it safe with the default hint. Only explicit string conversion like String(obj) or template literals get the string hint.
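A sketch of all three hints in action - Hinted is a made-up class, and the as any cast is only there to quiet the compiler about using a class instance in arithmetic positions:

```typescript
class Hinted {
  [Symbol.toPrimitive](hint: string): string | number {
    if (hint === 'number') return 42
    if (hint === 'string') return 'forty-two'
    return 'default'
  }
}

// the cast quiets the compiler; ESLint would complain regardless
const obj = new Hinted() as any

const concatenated = obj + '' // 'default' - + uses the "default" hint
const explicit = String(obj)  // 'forty-two' - String() uses the "string" hint
const numeric = +obj          // 42 - unary + uses the "number" hint
```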

This is the kind of language design decision that makes you question whether the people who created JavaScript have ever actually used JavaScript.

ESLint vs. reality

Speaking of questionable decisions, try writing the template literal version:

expect(`${obj}`).toBe('forty-two');

ESLint immediately complains: "Invalid type of template literal expression". It sees a custom class being used in string interpolation and assumes you've made a mistake, despite this being exactly what Symbol.toPrimitive is designed for.

You end up with this choice:

  1. Suppress the ESLint rule for legitimate symbol behavior
  2. Use String(obj) explicitly (which actually works better anyway)
  3. Cast to any and deal with ESLint complaining about that instead

Modern tooling is supposedly designed to help us write better code, but it turns out "better" doesn't include using JavaScript's actual primitive conversion protocols.

Symbols as "secret" properties

The privacy model for symbols is... interesting. They're hidden from normal enumeration but completely discoverable if you know where to look:

const secret1 = Symbol('secret1');
const secret2 = Symbol('secret2');

const obj = {
  publicProp: 'visible',
  [secret1]: 'hidden',
  [secret2]: 'also hidden'
};

console.log(Object.keys(obj));                    // ['publicProp']
console.log(JSON.stringify(obj));                 // {"publicProp":"visible"}
console.log(Object.getOwnPropertySymbols(obj));   // [Symbol(secret1), Symbol(secret2)]
console.log(Reflect.ownKeys(obj));                // ['publicProp', Symbol(secret1), Symbol(secret2)]

So symbols provide privacy from accidental access, but not from intentional inspection. It's like having a door that's closed but not locked - good enough to prevent accidents, useless against anyone who actually wants to get in.

Semicolons matter (sometimes)

While implementing symbol properties, I discovered this delightful parsing ambiguity:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'
  [Symbol.toStringTag] = this.stringName  // Prettier goes mental
}

Without a semicolon after the first line, Prettier interprets this as:

private stringName: string = ('StringNameOfClass'[Symbol.toStringTag] = this.stringName)

Because you can totally set properties on string literals in JavaScript (even though it's completely pointless), the parser thinks you're doing property access and assignment chaining.

The semicolon makes it unambiguous, and impressively, Prettier is smart enough to recognize that this particular semicolon is semantically significant and doesn't remove it like it normally would.

Testing arrays vs. testing values

Completely unrelated to symbols, but I learned that Vitest's toBe() and toEqual() are different beasts:

expect(Object.keys(obj)).toBe(['publicProp']);     // Fails - different array objects
expect(Object.keys(obj)).toEqual(['publicProp']);  // Passes - same contents

toBe() uses reference equality (like Object.is()), so even arrays with identical contents are different objects. toEqual() does deep equality comparison. This seems obvious in hindsight, but when you're in the middle of testing symbol enumeration behavior, it's easy to forget that arrays are objects too.
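The same distinction in plain JavaScript terms, since toBe() is documented as using Object.is():

```typescript
const keys1 = ['publicProp']
const keys2 = ['publicProp']

// reference equality: two distinct array objects, even with identical contents
const sameReference = Object.is(keys1, keys2) // false

// deep-ish comparison via serialisation, just for illustration
const sameContents = JSON.stringify(keys1) === JSON.stringify(keys2) // true
```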

The real lesson

I set out to learn about symbols and ended up with a tour of JavaScript's most questionable design decisions:

  • Type coercion that doesn't work the way anyone would expect
  • Operators that behave differently based on hints that don't correspond to actual usage
  • Tooling that warns against legitimate language features
  • Parsing ambiguities that require strategic semicolon placement
  • Privacy models that aren't actually private

This is exactly why "learn by doing" beats "read the documentation" every time. The docs would never tell you about the ESLint conflicts, the semicolon parsing gotcha, or the + operator's bizarre hint behavior. You only discover this stuff when you're actually writing code and things don't work the way they should.

The symbols themselves are fine - they do what they're supposed to do. It's everything else around them that's… erm… laden with interesting design decision "opportunities". [Cough].


The full code for this investigation is available in my learning-typescript repository if you want to see the gory details. Thanks to Claudia for helping debug the type coercion weirdness and for assistance with this write-up. Also props to GitHub Copilot for pointing out that I had three functions doing the same thing - sometimes the robots are right.

Righto.

--
Adam

Thursday, 25 September 2025

TypeScript namespaces: when the docs say one thing and ESLint says another

G'day:

This is one of those "the documentation says one thing, the tooling says another, what the hell am I actually supposed to do?" situations that seems to crop up constantly in modern JavaScript tooling.

I was working through TypeScript enums as part of my learning project, and I wanted to add methods to an enum - you know, the kind of thing you can do with PHP 8 enums where you can have both the enum values and associated behavior in the same construct. Seemed like a reasonable thing to want to do.

TypeScript enums don't support methods directly, but some digging around Stack Overflow led me to namespace merging as a solution. Fair enough - except as soon as I implemented it, ESLint started having a proper whinge about using namespaces at all.

Cue an hour of trying to figure out whether I was doing something fundamentally wrong, or whether the tooling ecosystem just hasn't caught up with legitimate use cases. Turns out it's a bit of both.

The contradiction

Here's what the official TypeScript documentation says about namespaces:

A note about terminology: It's important to note that in TypeScript 1.5, the nomenclature has changed. "Internal modules" are now "namespaces". "External modules" are now simply "modules", as to align with ECMAScript 2015's terminology, (namely that module X { is equivalent to the now-preferred namespace X {).

Note that "now-preferred" bit. Sounds encouraging, right?

And here's what the ESLint TypeScript rules say:

TypeScript historically allowed a form of code organization called "custom modules" (module Example {}), later renamed to "namespaces" (namespace Example). Namespaces are an outdated way to organize TypeScript code. ES2015 module syntax is now preferred (import/export).

So which is it? Are namespaces preferred, or are they outdated?

The answer, as usual with JavaScript tooling, is "it depends, and the documentation is misleading".

The TypeScript docs were written when they renamed the syntax from module to namespace - the "now-preferred" referred to using the namespace keyword instead of the old module keyword. It wasn't saying namespaces were preferred over ES modules; it was just clarifying the syntax change within the namespace feature itself.

The ESLint docs reflect current best practices: ES2015 modules (import/export) are indeed the standard way to organize code now. Namespaces are generally legacy for most use cases.

But "most use cases" isn't "all use cases". And this is where things get interesting.

The legitimate use case: enum methods

What I wanted to do was add a method to a TypeScript enum, similar to what you can do in PHP:

// What I wanted (conceptually)
enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
  
  // This doesn't work in TypeScript
  static fromValue(value: string): MaoriNumber {
    // ...
  }
}

The namespace merging approach lets you achieve this by declaring an enum and then a namespace with the same name:

// src/lt-15/namespaces.ts

export enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
}

// eslint-disable-next-line @typescript-eslint/no-namespace
export namespace MaoriNumber {
  const enumKeysOnly = Object.keys(MaoriNumber).filter(
    (key) =>
      typeof MaoriNumber[key as keyof typeof MaoriNumber] !== 'function'
  )

  export function fromValue(value: string): MaoriNumber {
    const valueAsMaoriNumber: MaoriNumber = value as MaoriNumber
    const index = Object.values(MaoriNumber).indexOf(valueAsMaoriNumber)
    if (index === -1) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`)
    }
    const elementName: string = enumKeysOnly[index]
    const typedElementName = elementName as keyof typeof MaoriNumber

    return MaoriNumber[typedElementName] as MaoriNumber
  }
}

This gives you exactly what you want: MaoriNumber.Tahi for enum access and MaoriNumber.fromValue() for the method, all properly typed.

The // eslint-disable-next-line comment acknowledges that yes, I know namespaces are generally discouraged, but this is a specific case where they're the right tool for the job.

Why the complexity in fromValue?

You might wonder why that fromValue function is doing so much filtering and type casting. It's because of the namespace merging itself.

When you merge an enum with a namespace, TypeScript sees MaoriNumber as containing both the enum values and the functions. So Object.keys(MaoriNumber) returns:

['Tahi', 'Rua', 'Toru', 'Wha', 'fromValue']

And keyof typeof MaoriNumber becomes:

"Tahi" | "Rua" | "Toru" | "Wha" | "fromValue"

The filtering step removes the function keys so we only work with the actual enum values. The type assertions handle the fact that TypeScript can't statically analyze that our runtime filtering has eliminated the function possibility.
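A self-contained mini-example (Colour is invented, not from the repo) showing how the merged function leaks into the key list and how the filter removes it:

```typescript
enum Colour {
  Red = 'red',
  Blue = 'blue',
}

// eslint-disable-next-line @typescript-eslint/no-namespace
namespace Colour {
  export function first(): Colour {
    return Colour.Red
  }
}

const allKeys = Object.keys(Colour) // ['Red', 'Blue', 'first']

const enumKeysOnly = allKeys.filter(
  (key) => typeof Colour[key as keyof typeof Colour] !== 'function'
) // ['Red', 'Blue']
```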

Sidebar: that keyof typeof bit took a while for me to work out. Well I say "work out": I just read this Q&A on Stack Overflow: What does "keyof typeof" mean in TypeScript?. I didn't find anything useful in the actual docs. I looked at it more closely in some other code I wrote today… there might be an article in that too. We'll see (I'll cross-ref it here if I write it).

Testing the approach

The tests prove that both aspects work correctly:

// tests/lt-15/namespaces.test.ts

describe('Emulating enum with method', () => {
  it('has accessible enums', () => {
    expect(MaoriNumber.Tahi).toBe('one')
  })
  
  it('has accessible methods', () => {
    expect(MaoriNumber.fromValue('two')).toEqual(MaoriNumber.Rua)
  })
  
  it("won't fetch the method as an 'enum' entry", () => {
    expect(() => {
      MaoriNumber.fromValue('fromValue')
    }).toThrowError('Value "fromValue" is not a valid MaoriNumber')
  })
  
  it("will error if the string doesn't match a MaoriNumber", () => {
    expect(() => {
      MaoriNumber.fromValue('rima')
    }).toThrowError('Value "rima" is not a valid MaoriNumber')
  })
})

The edge case testing is important here - we want to make sure the function doesn't accidentally treat its own name as a valid enum value, and that it properly handles invalid inputs.

Alternative approaches

You could achieve similar functionality with a class and static methods:

const MaoriNumberValues = {
  Tahi: 'one',
  Rua: 'two', 
  Toru: 'three',
  Wha: 'four'
} as const

type MaoriNumber = typeof MaoriNumberValues[keyof typeof MaoriNumberValues]

class MaoriNumbers {
  static readonly Tahi = MaoriNumberValues.Tahi
  static readonly Rua = MaoriNumberValues.Rua
  static readonly Toru = MaoriNumberValues.Toru
  static readonly Wha = MaoriNumberValues.Wha
  
  static fromValue(value: string): MaoriNumber {
    if (!(Object.values(MaoriNumberValues) as string[]).includes(value)) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`)
    }
    return value as MaoriNumber
  }
}

But this is more verbose, loses some of the enum benefits (like easy iteration), and doesn't give you the same clean MaoriNumber.Tahi syntax you get with the namespace approach.

So when should you use namespaces?

Based on this experience, I'd say namespace merging with enums is one of the few remaining legitimate use cases for TypeScript namespaces. The modern alternatives don't provide the same ergonomics for this specific pattern.

For everything else - code organisation, avoiding global pollution, grouping related functionality - ES modules are indeed the way forward. But when you need to add methods to enums and you want clean, intuitive syntax, namespace merging is still the right tool.

The key is being intentional about it. Use the ESLint disable comment to acknowledge that you're making a conscious choice, not just ignoring best practices out of laziness.

It's one of those situations where the general advice ("don't use namespaces") doesn't account for specific edge cases where they're still the best solution available. The tooling will complain, but sometimes the tooling is wrong.

I'll probably circle back to write up more about TypeScript enums in general - there's a fair bit more to explore there. But for now, I've got a working solution for enum methods that gives me the PHP-like behavior I was after, even if it did require wading through some contradictory documentation to get there.

Credit where it's due: Claudia (claude.ai) was instrumental in both working through the namespace merging approach and helping me understand the TypeScript type system quirks that made the implementation more complex than expected. The back-and-forth debugging of why MaoriNumber[typedElementName] was causing type errors was particularly useful - sometimes you need another perspective to spot what the compiler is actually complaining about. She also helped draft this article, which saved me a few hours of writing time. GitHub Copilot's code review feature has been surprisingly helpful too - it caught some genuine issues with error handling and performance that I'd missed during the initial implementation.

Righto.

--
Adam