Showing posts with label Javascript. Show all posts

Saturday, 4 October 2025

TypeScript decorators: not actually decorators

G'day:

I've been working through TypeScript classes, and when I got to decorators I hit the @ syntax and thought "hang on, what the heck is all this doing inside the class being decorated? The class shouldn't know it's being decorated. Fundamentally it shouldn't know."

Turns out TypeScript decorators have bugger all to do with the Gang of Four decorator pattern. They're not about wrapping objects at runtime to extend behavior. They're metaprogramming annotations - more like Java's @annotations or C#'s [attributes] - that modify class declarations at design time using the @ syntax.

The terminology collision is unfortunate. Python had the same debate back in PEP 318 - people pointed out that "decorator" was already taken by a well-known design pattern, but they went with it anyway because the syntax visually "decorates" the function definition. TypeScript followed Python's lead: borrowed the @ syntax, borrowed the confusing name, and now we're stuck with it.

So this isn't about the decorator pattern at all. This is about TypeScript's metaprogramming features that happen to be called decorators for historical reasons that made sense to someone, somewhere.


What TypeScript decorators actually do

A decorator in TypeScript is a function that takes a target (the thing being decorated - a class, method, property, whatever) and a context object, and optionally returns a replacement. They execute at class definition time, not at runtime.
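
To make that timing concrete, here's a minimal sketch (hypothetical names) that records when a decorator runs:

```typescript
const events: string[] = []

// A do-nothing class decorator that just records that it executed
function traced(target: unknown, context: ClassDecoratorContext): void {
  void target
  events.push(`decorating ${String(context.name)}`)
}

@traced
class Widget {}

// The decorator has already run at this point - no instance exists yet
console.log(events) // ['decorating Widget']
```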

The simplest example is a getter decorator:

function obscurer(
  originalMethod: (this: PassPhrase) => string,
  context: ClassGetterDecoratorContext
) {
  void context
  function replacementMethod(this: PassPhrase) {
    const duplicateOfThis: PassPhrase = Object.assign(
      Object.create(Object.getPrototypeOf(this) as PassPhrase),
      this,
      { _text: this._text.replace(/./g, '*') }
    ) as PassPhrase

    return originalMethod.call(duplicateOfThis)
  }

  return replacementMethod
}

export class PassPhrase {
  constructor(protected _text: string) {}

  get plainText(): string {
    return this._text
  }

  @obscurer
  get obscuredText(): string {
    return this._text
  }
}

(from accessor.ts)

The decorator function receives the original getter and returns a replacement that creates a modified copy of this, replaces the _text property with asterisks, then calls the original getter with that modified context. The original instance is untouched - we're not mutating state, we're intercepting the call and providing different data to work with. The @obscurer syntax applies the decorator to the getter.

The test shows this in action:

it('original text remains unchanged', () => {
  const phrase = new PassPhrase('tough_to_guess')
  expect(phrase.obscuredText).toBe('**************')
  expect(phrase.plainText).toBe('tough_to_guess')
})

(from accessor.test.ts)

The obscuredText getter returns asterisks, the plainText getter returns the original value. The decorator wraps one getter without affecting the other or mutating the underlying _text property.

Method decorators and decorator factories

Method decorators work the same way as getter decorators, except they handle methods with actual parameters. More interesting is the decorator factory pattern - a function that returns a decorator, allowing runtime configuration.

Here's an authentication service with logging:

interface Logger {
  log(message: string): void
}

const defaultLogger: Logger = console

export class AuthenticationService {
  constructor(private directoryServiceAdapter: DirectoryServiceAdapter) {}

  @logAuth()
  authenticate(userName: string, password: string): boolean {
    const result: boolean = this.directoryServiceAdapter.authenticate(
      userName,
      password
    )
    if (!result) {
      throw new AuthenticationException(
        `Authentication failed for user ${userName}`
      )
    }
    return result
  }
}

function logAuth(logger: Logger = defaultLogger) {
  return function (
    originalMethod: (
      this: AuthenticationService,
      userName: string,
      password: string
    ) => boolean,
    context: ClassMethodDecoratorContext<
      AuthenticationService,
      (userName: string, password: string) => boolean
    >
  ) {
    void context
    function replacementMethod(
      this: AuthenticationService,
      userName: string,
      password: string
    ) {
      logger.log(`Authenticating user ${userName}`)
      try {
        const result = originalMethod.call(this, userName, password)
        logger.log(`User ${userName} authenticated successfully`)
        return result
      } catch (e) {
        logger.log(`Authentication failed for user ${userName}: ${e}`)
        throw e
      }
    }
    return replacementMethod
  }
}

(from method.ts)

The factory function takes a logger parameter and returns the actual decorator function. The decorator wraps the method with logging: logs before calling, logs on success, logs on failure and re-throws. The @logAuth() syntax calls the factory which returns the decorator.

Worth noting: the logger has to be configured at module level because @logAuth() executes when the class is defined, not when instances are created. This means tests can't easily inject different loggers per instance - you're stuck with whatever was configured when the file loaded. It's a limitation of how decorators work, and honestly it's a bit crap for dependency injection.

Also note I'm just using the console as the logger here. It makes testing easy.
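
If you do need swappable loggers, one hedged workaround (hypothetical names, not from the repo) is to close over a mutable holder rather than the logger itself, so a test can swap what's inside the holder even after the decorator has been applied:

```typescript
interface Logger {
  log(message: string): void
}

// The decorator captures the holder at class-definition time, but the
// logger inside it can still be replaced later
const loggerHolder = { logger: console as Logger }

function logged() {
  return function <This, Args extends unknown[], Return>(
    originalMethod: (this: This, ...args: Args) => Return,
    context: ClassMethodDecoratorContext<
      This,
      (this: This, ...args: Args) => Return
    >
  ) {
    return function (this: This, ...args: Args): Return {
      loggerHolder.logger.log(`calling ${String(context.name)}`)
      return originalMethod.call(this, ...args)
    }
  }
}

class Greeter {
  @logged()
  greet(name: string): string {
    return `hello ${name}`
  }
}

// A test can now inject its own logger per test run:
const captured: string[] = []
loggerHolder.logger = { log: (m: string) => captured.push(m) }
const greeting = new Greeter().greet('Adam')
console.log(greeting, captured) // hello Adam [ 'calling greet' ]
```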

Class decorators and shared state

Class decorators can replace the entire class, including hijacking the constructor. This example is thoroughly contrived but demonstrates how decorators can inject stateful behavior that persists across all instances:

const maoriNumbers = ['tahi', 'rua', 'toru', 'wha']
let current = 0
function* generator() {
  while (current < maoriNumbers.length) {
    yield maoriNumbers[current++]
  }
  throw new Error('No more Maori numbers')
}

function maoriSequence(
  target: typeof Number,
  context: ClassDecoratorContext
) {
  void context

  return class extends target {
    _value = generator().next().value as string
  }
}

type NullableString = string | null

@maoriSequence
export class Number {
  constructor(protected _value: NullableString = null) {}

  get value(): NullableString {
    return this._value
  }
}

(from class.ts)

The class decorator returns a new class that extends the original, overriding the _value property with the next value from a generator. The generator and its state live at module scope, so they're shared across all instances of the class. Each time you create a new instance, the constructor parameter gets completely ignored and the decorator forces the next Maori number instead:

it('intercepts the constructor', () => {
  expect(new Number().value).toEqual('tahi')
  expect(new Number().value).toEqual('rua')
  expect(new Number().value).toEqual('toru')
  expect(new Number().value).toEqual('wha')
  expect(() => new Number()).toThrowError('No more Maori numbers')
})

(from class.test.ts)

First instance gets 'tahi', second gets 'rua', third gets 'toru', fourth gets 'wha', and the fifth throws an error because the generator is exhausted. The state persists across all instantiations because it's in the decorator's closure at module level.

This demonstrates that class decorators can completely hijack construction and maintain shared state, which is both powerful and horrifying. You'd never actually do this in real code - it's terrible for testing, debugging, and reasoning about behavior - but it shows the level of control decorators have over class behavior.

GitHub Copilot's code review was appropriately horrified by this. It flagged the module-level state, the generator that never resets, the constructor hijacking, and basically everything else about this approach. Fair cop - the code reviewer was absolutely right to be suspicious. This is demonstration code showing what's possible with decorators, not what you should actually do. In real code, if you find yourself maintaining stateful generators at module scope that exhaust after four calls and hijack constructors to ignore their parameters, you've gone badly wrong somewhere and need to step back and reconsider your life choices.

Auto-accessors and the accessor keyword

Auto-accessors are a newer feature that provides shorthand for creating getter/setter pairs with a private backing field. The accessor keyword does automatically what you'd normally write manually:

export class Person {
  @logCalls(defaultLogger)
  accessor firstName: string

  @logCalls(defaultLogger)
  accessor lastName: string

  constructor(firstName: string, lastName: string) {
    this.firstName = firstName
    this.lastName = lastName
  }

  getFullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

(from autoAccessors.ts)
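
For reference, this is roughly what a single accessor declaration expands to if you write it by hand (a sketch; the real emitted code routes through decorator-aware plumbing):

```typescript
// Hand-written equivalent of `accessor firstName: string` (sketch)
class PersonManual {
  #firstName: string

  constructor(firstName: string) {
    this.#firstName = firstName
  }

  get firstName(): string {
    return this.#firstName
  }

  set firstName(value: string) {
    this.#firstName = value
  }
}

const p = new PersonManual('Jed')
p.firstName = 'Zachary'
console.log(p.firstName) // Zachary
```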

The accessor keyword creates a private backing field plus public getter and setter, similar to C# auto-properties. The decorator can then wrap both operations:

function logCalls(logger: Logger = defaultLogger) {
  return function <This, Value>(
    target: ClassAccessorDecoratorTarget<This, Value>,
    context: ClassAccessorDecoratorContext<This, Value>
  ) {
    const result: ClassAccessorDecoratorResult<This, Value> = {
      get(this: This) {
        logger.log(`[${String(context.name)}] getter called`)
        return target.get.call(this)
      },
      set(this: This, value) {
        logger.log(
          `[${String(context.name)}] setter called with value [${String(value)}]`
        )
        target.set.call(this, value)
      }
    }

    return result
  }
}

(from autoAccessors.ts)

The target provides access to the original get and set methods, and the decorator returns a result object with replacement implementations. The getter wraps the original with logging before calling it, and the setter does the same.

Testing shows both operations getting logged:

it('should log the setters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  new Person('Jed', 'Dough')

  expect(consoleSpy).toHaveBeenCalledWith(
    '[firstName] setter called with value [Jed]'
  )
  expect(consoleSpy).toHaveBeenCalledWith(
    '[lastName] setter called with value [Dough]'
  )
})

it('should log the getters being called', () => {
  const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {})
  const person = new Person('Jed', 'Dough')

  expect(person.getFullName()).toBe('Jed Dough')
  expect(consoleSpy).toHaveBeenCalledWith('[firstName] getter called')
  expect(consoleSpy).toHaveBeenCalledWith('[lastName] getter called')
})

(from autoAccessors.test.ts)

The constructor assignments trigger the setters, which get logged. Later when getFullName() accesses the properties, the getters are logged.

Auto-accessors are actually quite practical compared to the other decorator types. They provide a clean way to add cross-cutting concerns like logging, validation, or change tracking to properties without cluttering the class with boilerplate getter/setter implementations.
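
As a sketch of that validation use case - nonEmpty is a hypothetical decorator, not something from the repo:

```typescript
// A validating accessor decorator: rejects empty strings on set
function nonEmpty<This>(
  target: ClassAccessorDecoratorTarget<This, string>,
  context: ClassAccessorDecoratorContext<This, string>
): ClassAccessorDecoratorResult<This, string> {
  return {
    set(this: This, value: string): void {
      if (value.trim() === '') {
        throw new Error(`${String(context.name)} must not be empty`)
      }
      target.set.call(this, value)
    }
  }
}

class User {
  @nonEmpty
  accessor name: string = 'anonymous'
}

const user = new User()
user.name = 'Adam' // fine
// user.name = ''  // would throw: name must not be empty
```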

What I learned

TypeScript decorators are metaprogramming tools that modify class behavior at design time. They're useful for cross-cutting concerns like logging, validation, or instrumentation - the kinds of things that would otherwise clutter your actual business logic.

The main decorator types are:

  • Getter/setter decorators - wrap property access
  • Method decorators - wrap method calls
  • Class decorators - replace or modify entire classes
  • Auto-accessor decorators - wrap the getter/setter pairs created by the accessor keyword

Decorator factories (functions that return decorators) allow runtime configuration, though "runtime" here means "when the module loads", not "when instances are created". This makes dependency injection awkward - you're stuck with module-level state or global configuration.

The syntax is straightforward once you understand the pattern: decorator receives target and context, returns replacement (or modifies via context), job done. The tricky bit is the type signatures and making sure your implementation signature is flexible enough to handle all the overloads you're declaring.

But fundamentally, these aren't decorators in the design pattern sense. They're annotations that modify declarations. If you're coming from a language with proper decorators (the GoF pattern), you'll need to context-switch your brain because the @ syntax is doing something completely different here.

Worth learning? Yeah, if only because you'll see them in the wild and need to understand what they're doing.

Would I use them in my own code? Probably sparingly. Auto-accessors are legitimately useful. Method decorators for logging or metrics could work if you're comfortable with the module-level configuration limitations. Class decorators that hijack constructors and maintain shared state can absolutely get in the sea.

But to be frank: if I wanted to decorate something - in the accurate sense of that term - I'd do it properly using the design pattern, and DI.


The full code for this investigation is in my learning-typescript repository.

Righto.

--
Adam

Thursday, 2 October 2025

TypeScript mixins: poor person's composition, but with generics

G'day:

I've been working through TypeScript classes, and today I hit mixins. For those unfamiliar, mixins are a pattern for composing behavior from multiple sources - think Ruby's modules or PHP's traits. They're basically "poor person's composition" - a way to share behavior between classes when you can't (or won't) use proper dependency injection.

I think they're a terrible pattern. If I need shared behavior, I'd use actual composition - create a proper class and inject it as a dependency. But I'm not always working with my own code, and mixins do exist in the wild, so here we are.

The TypeScript mixin implementation is interesting though - it's built on generics and functions that return classes, which is quite different from the prototype-mutation approach you see in JavaScript. And despite my reservations about the pattern itself, understanding how it works turned out to be useful for understanding TypeScript's type system better.

The basic pattern

TypeScript mixins aren't about mutating prototypes at runtime (though you can do that in JavaScript). They're functions that take a class and return a new class that extends it.

For this example, I wanted a mixin that would add a flatten() method to any class - something that takes all the object's properties and concatenates their values into a single string. Not particularly useful in real code, but simple enough to demonstrate the mechanics without getting lost in business logic.

type Constructor = new (...args: any[]) => {}

function applyFlattening<TBase extends Constructor>(Base: TBase) {
  return class Flattener extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [_, value]): string => {
          return flattened + String(value)
        },
        ''
      )
    }
  }
}

(from mixins.ts)

That Constructor type is saying "anything that can be called with new and returns an object". The mixin function takes a class that matches this type and returns a new anonymous class that extends the base class with additional behavior.

You can then apply it to any class:

export class Name {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`
  }
}

export const FlattenableName = applyFlattening(Name)

FlattenableName is now a class that has everything Name had plus the flatten() method. TypeScript tracks all of this at compile time, so you get proper type checking and autocomplete for both the base class members and the mixin methods.
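
To underline the "any class" point, here's the same mixin (redeclared so the snippet stands alone) applied to an unrelated Point class - a hypothetical example, not from the repo:

```typescript
type Constructor = new (...args: any[]) => {}

function applyFlattening<TBase extends Constructor>(Base: TBase) {
  return class extends Base {
    flatten(): string {
      return Object.entries(this).reduce(
        (flattened: string, [, value]): string => flattened + String(value),
        ''
      )
    }
  }
}

class Point {
  constructor(public x: number, public y: number) {}
}

const FlattenablePoint = applyFlattening(Point)
const flattened = new FlattenablePoint(3, 4).flatten()
console.log(flattened) // '34'
```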

The generics bit

The confusing part (at least initially) is this bit:

function applyFlattening<TBase extends Constructor>(Base: TBase)

Without understanding generics, this is completely opaque. The <TBase extends Constructor> is saying "this function is generic over some type TBase, which must be a constructor". The Base: TBase parameter then uses that type.

This lets TypeScript track what specific class you're mixing into. When you call applyFlattening(Name), TypeScript knows that TBase is specifically the Name class, so it can infer that the returned class has both Name's properties and methods plus the flatten() method.

Without generics, TypeScript would only know "some constructor was passed in" and couldn't give you proper type information about what the resulting class actually contains. The generic parameter preserves the type information through the composition.
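
Here's a sketch of what you lose without the generic; Name2 and applyFlatteningUntyped are hypothetical stand-ins:

```typescript
type Constructor = new (...args: any[]) => {}

class Name2 {
  constructor(
    public firstName: string,
    public lastName: string
  ) {}
}

// Non-generic version: the parameter type erases which class was passed in
function applyFlatteningUntyped(Base: Constructor) {
  return class extends Base {
    flatten(): string {
      return Object.values(this).map(String).join('')
    }
  }
}

const Untyped = applyFlatteningUntyped(Name2)
const n = new Untyped('Zachary', 'Lynch')
const flattened = n.flatten() // fine: the mixin method is still typed
console.log(flattened)        // 'ZacharyLynch'
// n.firstName                // compile error: the Name2 members are lost to the type system
```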

I hadn't covered generics properly before hitting this (it's still on my todo list), which made the mixin syntax particularly cryptic. But the core concept is straightforward once you understand that generics are about preserving type information as you transform data - in this case, transforming a class into an extended version of itself.

Using the mixed class

Once you've got the mixed class, using it is straightforward:

const flattenableName: InstanceType<typeof FlattenableName> =
  new FlattenableName('Zachary', 'Lynch')
expect(flattenableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = flattenableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

(from mixins.test.ts)

The InstanceType<typeof FlattenableName> bit is necessary because FlattenableName is a value (the constructor function), not a type. typeof FlattenableName gives you the constructor type, and InstanceType<...> extracts the type of instances that constructor creates.

Once you've got an instance, it has both the original Name functionality (the fullName getter) and the new flatten() method. The mixin has full access to this, so it can see all the object's properties - in this case, firstName and lastName.

Constraining the mixin

The basic Constructor type accepts any class - it doesn't care what properties or methods the class has. But you can constrain mixins to only work with classes that have specific properties:

type NameConstructor = new (
  ...args: any[]
) => {
  firstName: string
  lastName: string
}

function applyNameFlattening<TBase extends NameConstructor>(Base: TBase) {
  return class NameFlattener extends Base {
    flatten(): string {
      return this.firstName + this.lastName
    }
  }
}

(from mixins.ts)

The NameConstructor type specifies that the resulting instance must have firstName and lastName properties. Now the mixin can safely access those properties directly - TypeScript knows they'll exist.

You can't constrain the constructor parameters themselves - that ...args: any[] is mandatory for mixin functions. TypeScript requires this because the mixin doesn't know what arguments the base class constructor needs. You can only constrain the instance type (the return type of the constructor).

This means a class like this won't work with the constrained mixin:

export class ShortName {
  constructor(public firstName: string) {}
}
// This won't compile:
// export const FlattenableShortName = applyNameFlattening(ShortName)
// Argument of type 'typeof ShortName' is not assignable to parameter of type 'NameConstructor'

TypeScript correctly rejects it because ShortName doesn't have a lastName property, and the mixin's flatten() method needs it.

Chaining multiple mixins

You can apply multiple mixins by chaining them - pass the result of one mixin into another:

function applyArrayifier<TBase extends Constructor>(Base: TBase) {
  return class Arrayifier extends Base {
    arrayify(): string[] {
      return Object.entries(this).reduce(
        (arrayified: string[], [_, value]): string[] => {
          return arrayified.concat(String(value).split(''))
        },
        []
      )
    }
  }
}

export const ArrayableFlattenableName = applyArrayifier(FlattenableName)

(from mixins.ts)

Now ArrayableFlattenableName has everything from Name, plus flatten() from the first mixin, plus arrayify() from the second mixin:

const transformableName: InstanceType<typeof ArrayableFlattenableName> =
  new ArrayableFlattenableName('Zachary', 'Lynch')
expect(transformableName.fullName).toEqual('Zachary Lynch')

const flattenedName: string = transformableName.flatten()
expect(flattenedName).toEqual('ZacharyLynch')

const arrayifiedName: string[] = transformableName.arrayify()
expect(arrayifiedName).toEqual('ZacharyLynch'.split(''))

(from mixins.test.ts)

TypeScript correctly infers that all three sets of functionality are available on the final class. The type information flows through each composition step.

Why not just use composition?

Right, so having learned how mixins work in TypeScript, I still think they're a poor choice for most situations. If you need shared behavior, use actual composition:

class Flattener {
  flatten(obj: Record<string, unknown>): string {
    return Object.entries(obj).reduce(
      (flattened, [_, value]) => flattened + String(value),
      ''
    )
  }
}

class Name {
  constructor(
    public firstName: string,
    public lastName: string,
    private flattener: Flattener
  ) {}
  
  flatten(): string {
    return this.flattener.flatten(this)
  }
}

This is clearer about dependencies, easier to test (inject a mock Flattener), and doesn't require understanding generics or the mixin pattern. The behavior is in a separate class that can be reused anywhere, not just through inheritance chains.

Mixins make sense in languages where you genuinely can't do proper composition easily, or where the inheritance model is the primary abstraction. But TypeScript has first-class support for dependency injection and composition. Use it.

The main legitimate use case I can see for TypeScript mixins is when you're working with existing code that uses them, or when you need to add behavior to classes you don't control. Otherwise, favor composition.

The abstract class limitation

One thing you can't do with these mixins is apply them to abstract classes. The Constructor type promises something that can be called with new, and TypeScript won't assign an abstract class's constructor to a non-abstract constructor type - not being directly newable is the whole point of abstract.

abstract class AbstractBase {
  abstract doSomething(): void
}

// This won't work:
// const Mixed = applyMixin(AbstractBase)
// Cannot create an instance of an abstract class

The workarounds involve either making the base class concrete (which defeats the purpose of having it abstract), or mixing into a concrete subclass instead of the abstract parent. Neither is particularly satisfying.

This is a fundamental incompatibility between "can't instantiate" (abstract classes) and "must instantiate to extend" (the mixin pattern). It's another reason to prefer composition - you can absolutely inject abstract dependencies through constructor parameters without these limitations.
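
For contrast, a quick sketch (hypothetical classes) of injecting an abstract dependency - exactly the thing the mixin pattern can't accommodate:

```typescript
abstract class Formatter {
  abstract format(value: string): string
}

class UpperCaseFormatter extends Formatter {
  format(value: string): string {
    return value.toUpperCase()
  }
}

class Greeter {
  // The abstract type is fine as a dependency - only concrete
  // subclasses ever get instantiated
  constructor(private formatter: Formatter) {}

  greet(name: string): string {
    return this.formatter.format(`hello, ${name}`)
  }
}

const greeting = new Greeter(new UpperCaseFormatter()).greet('Adam')
console.log(greeting) // HELLO, ADAM
```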

What I learned

TypeScript mixins are functions that take classes and return extended classes. They use generics to preserve type information through the composition, and TypeScript tracks everything at compile time so you get proper type checking.

The syntax is more complicated than it needs to be (that type Constructor = new (...args: any[]) => {} bit), and you need to understand generics before any of it makes sense. The InstanceType<typeof ClassName> dance is necessary because of how TypeScript distinguishes between constructor types and instance types.

You can constrain mixins to only work with classes that have specific properties, and you can chain multiple mixins together. But you can't use them with abstract classes, and they're generally a worse choice than proper composition for most real-world scenarios.

I learned the pattern because I'll encounter it in other people's code, not because I plan to use it myself. If I need shared behavior, I'll use dependency injection and composition like a sensible person. But now at least I understand what's happening when I see const MixedClass = applyMixin(BaseClass) in a codebase.

The full code for this investigation is in my learning-typescript repository. Thanks to Claudia for helping work through the type constraints and the abstract class limitation, and for assistance with this write-up.

Righto.

--
Adam

Tuesday, 30 September 2025

TypeScript constructor overloading: when one implementation has to handle multiple signatures

G'day:

I've been working through TypeScript classes, and today I hit constructor overloading. Coming from PHP where you can't overload constructors at all (you get one constructor, that's it), the TypeScript approach seemed straightforward enough: declare multiple signatures, implement once, job done.

Turns out the "implement once" bit is where things get interesting.

The basic pattern

TypeScript lets you declare multiple constructor signatures followed by a single implementation:

constructor()
constructor(s: string)
constructor(n: number)
constructor(s: string, n: number)
constructor(p1?: string | number, p2?: number) {
  // implementation handles all four cases
}

The first four lines are just declarations - they tell TypeScript "these are the valid ways to call this constructor". The final signature is the actual implementation that has to handle all of them.

Simple enough when you've got a no-arg constructor and a two-arg constructor - those are clearly different. But what happens when you need two different single-argument constructors, one taking a string and one taking a number?

That's where I got stuck.

The implementation signature problem

Here's what I wanted to support:

const empty = new Numeric()                    // both properties null
const justString = new Numeric('forty-two')    // asString set, asNumeric null
const justNumber = new Numeric(42)             // asNumeric set, asString null
const both = new Numeric('forty-two', 42)      // both properties set

(from constructors.test.ts)

My first attempt at the implementation looked like this:

constructor()
constructor(s: string)
constructor(s: string, n: number)
constructor(s?: string, n?: number) {
  this.asString = s ?? null
  this.asNumeric = n ?? null
}

Works fine for the no-arg, single-string, and two-arg cases. But then I needed to add the single-number constructor:

constructor(n: number)

And suddenly the compiler wasn't happy: "This overload signature is not compatible with its implementation signature."

The error pointed at the new overload, but the actual problem was in the implementation. It took me ages (and asking Claudia) to work this out. This is entirely down to me not reading the message properly, just looking at the line it was pointing to. Duh. The first parameter was typed as string (or undefined), but the new overload promised it could also be a number. The implementation couldn't deliver on what the overload signature was promising.

Why neutral parameter names matter

The fix was to change the implementation signature to accept both types:

constructor(p1?: string | number, p2?: number) {
  // ...
}

But here's where the parameter naming became important. My initial instinct was to keep using meaningful names like s and n:

constructor(s?: string | number, n?: number)

This felt wrong. When you're reading the implementation code and you see a parameter called s, you expect it to be a string. But now it might be a number. The name actively misleads you about what the parameter contains.

Switching to neutral names like p1 and p2 made the implementation logic much clearer - these are just "parameter slots" that could contain different types depending on which overload was called. No assumptions about what they contain.

Runtime type checking

Once the implementation signature accepts both types, you need runtime logic to figure out which overload was actually called:

constructor(p1?: string | number, p2?: number) {
  if (typeof p1 === 'number' && p2 === undefined) {
    this.asNumeric = p1
    return
  }
  this.asString = (p1 as string) ?? null
  this.asNumeric = p2 ?? null
}

(from constructors.ts)

The first check handles the single-number case: if the first parameter is a number and there's no second parameter, we're dealing with new Numeric(42). Set asNumeric and bail out.

Everything else falls through to the default logic: treat the first parameter as a string (or absent) and the second parameter as a number (or absent). This covers the no-arg, single-string, and two-arg cases.

The type assertion (p1 as string) is necessary because TypeScript can't prove that p1 is a string at that point - we've only eliminated the case where it's definitely a number. From the compiler's perspective, it could still be string | number | undefined.
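
If the assertion bothers you, an alternative is to narrow explicitly instead; Numeric2 here is a hypothetical stand-in for the article's class, showing just the implementation body:

```typescript
class Numeric2 {
  asString: string | null = null
  asNumeric: number | null = null

  constructor(p1?: string | number, p2?: number) {
    if (typeof p1 === 'number' && p2 === undefined) {
      this.asNumeric = p1
      return
    }
    // explicit narrowing: no `as string` needed
    this.asString = typeof p1 === 'string' ? p1 : null
    this.asNumeric = p2 ?? null
  }
}

console.log(new Numeric2(42).asNumeric) // 42
console.log(new Numeric2('').asString)  // ''
```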

The bug I didn't notice

I had the implementation working and all my tests passing. Job done, right? Except when I submitted the PR, GitHub Copilot's review flagged this:

this.asString = (p1 as string) || null
this.asNumeric = p2 || null
The logic for handling empty strings is incorrect. An empty string ('') will be converted to null due to the || operator, but empty strings should be preserved as valid string values. Use nullish coalescing (??) instead or explicit null checks.

Copilot was absolutely right. The || operator treats all falsy values as "use the right-hand side", which includes:

  • '' (empty string)
  • 0 (zero)
  • false
  • null
  • undefined
  • NaN

So new Numeric('') would set asString to null instead of '', and new Numeric('test', 0) would set asNumeric to null instead of 0. Both are perfectly valid values that the constructor should accept.

The ?? (nullish coalescing) operator only treats null and undefined as "use the right-hand side", which is exactly what I needed:

this.asString = (p1 as string) ?? null
this.asNumeric = p2 ?? null

Now empty strings and zeros are preserved as valid values.

Testing the edge cases

The fact that this bug existed meant my initial tests weren't comprehensive enough. I'd tested the basic cases but missed the edge cases where valid values happen to be falsy.

I added tests for empty strings and zeros:

it('accepts an empty string as the only argument', () => {
  const o: Numeric = new Numeric('')

  expect(o.asString).toEqual('')
  expect(o.asNumeric).toBeNull()
})

it('accepts zero as the only argument', () => {
  const o: Numeric = new Numeric(0)

  expect(o.asNumeric).toEqual(0)
  expect(o.asString).toBeNull()
})

it('accepts an empty string as the first argument', () => {
  const o: Numeric = new Numeric('', -1)

  expect(o.asString).toEqual('')
})

it('accepts zero as the second argument', () => {
  const o: Numeric = new Numeric('NOT_TESTED', 0)

  expect(o.asNumeric).toEqual(0)
})

(from constructors.test.ts)

With the original || implementation, all four of these tests failed. After switching to ??, they all passed. That's how testing is supposed to work - the tests catch the bug, you fix it, the tests confirm the fix.

Fair play to Copilot for spotting this in the PR review. It's easy to miss falsy edge cases when you're focused on getting the type signatures right.

Method overloading in general

Worth noting that constructor overloading is just a specific case of method overloading. Any method can use this same pattern of multiple signatures with one implementation:

class Example {
  doThing(): void
  doThing(s: string): void
  doThing(n: number): void
  doThing(p?: string | number): void {
    // implementation handles all cases
  }
}

The same principles apply: the implementation signature needs to be flexible enough to handle all the declared overloads, and you need runtime type checking to figure out which overload was actually called.

Constructors just happen to be where I first encountered this pattern, because that's where you often want multiple ways to initialize an object with different combinations of parameters.

What I learned

Constructor overloading in TypeScript is straightforward once you understand that the implementation signature has to be a superset of all the overload signatures. The tricky bit is when you have overloads that look similar but take different types - that's when you need union types and runtime type checking to make it work.

Using neutral parameter names in the implementation helps avoid confusion about what types you're actually dealing with. And edge case testing matters - falsy values like empty strings and zeros are valid inputs that need explicit test coverage.

The full code is in my learning-typescript repository if you want to see the complete implementation. Thanks to Claudia for helping me understand why that compilation error was pointing at the overload when the problem was in the implementation, and to GitHub Copilot for catching the || vs ?? bug in the PR review.

Righto.

--
Adam

Monday, 29 September 2025

TypeScript late static binding: parameters that aren't actually parameters

G'day:

I've been working through classes in TypeScript as part of my learning project, and today I hit static methods. Coming from PHP, one of the first questions that popped into my head was "how does late static binding work here?"

In PHP, you can do this:

class Base {
    static function create() {
        return new static();  // Creates instance of the actual called class
    }
}

class Child extends Base {}

$instance = Child::create();  // Returns a Child instance, not Base

The static keyword in new static() means "whatever class this method was actually called on", not "the class where this method is defined". It's late binding - the class is resolved at runtime based on how the method was called.

Seemed like a reasonable thing to want in TypeScript. Turns out it's possible, but the syntax is... questionable.

The TypeScript approach

Here's what I ended up with:

export class TranslatedNumber {
  constructor(
    private value: number,
    private en: string,
    private mi: string
  ) {}

  getAll(): { value: number; en: string; mi: string } {
    return {
      value: this.value,
      en: this.en,
      mi: this.mi,
    }
  }

  static fromTuple<T extends typeof TranslatedNumber>(
    this: T,
    values: [value: number, en: string, mi: string]
  ): InstanceType<T> {
    return new this(...values) as InstanceType<T>
  }
}

export class ShoutyTranslatedNumber extends TranslatedNumber {
  constructor(value: number, en: string, mi: string) {
    super(value, en.toUpperCase(), mi.toUpperCase())
  }
}

(from static.ts)

And it works - when you call ShoutyTranslatedNumber.fromTuple(), you get a ShoutyTranslatedNumber instance back, not a TranslatedNumber:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

expect(translated.getAll()).toEqual({
  value: 3,
  en: 'THREE',
  mi: 'TORU',
})

(from static.test.ts)

The late binding works. But look at that fromTuple method signature again. Specifically this bit: this: T.

Parameters that aren't parameters

When I first saw this: T in the parameter list, my immediate reaction was "okay, so I need to pass the class as the first argument?"

But the usage doesn't have any extra parameter:

const translated = ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru'])

No class being passed. Just the tuple. So what the hell is this: T doing in the parameter list?

Turns out it's a TypeScript-specific construct that exists purely for the type system. It's not a runtime parameter at all - it gets completely erased during compilation. It's a type hint that tells TypeScript "remember which class this static method was called on".

When you write ShoutyTranslatedNumber.fromTuple([3, 'three', 'toru']), TypeScript infers:

  • The this inside fromTuple refers to ShoutyTranslatedNumber
  • Therefore T is typeof ShoutyTranslatedNumber
  • Therefore InstanceType<T> is ShoutyTranslatedNumber

It's clever. It works. But it's also completely bizarre if you're coming from any language where parameters are just parameters.
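
One way to convince yourself it's purely a type-level construct: the this "parameter" is erased entirely from the compiled JavaScript, so the function's runtime arity is zero. A minimal sketch - Base and Kid are my own example, not the repo's classes:

```typescript
class Base {
  // `this: T` exists only for the type checker; it is erased on compilation
  static make<T extends typeof Base>(this: T): InstanceType<T> {
    return new this() as InstanceType<T>
  }
}

class Kid extends Base {}

const k = Kid.make()
console.log(k instanceof Kid)  // true - late binding resolved to the subclass
console.log(Base.make.length)  // 0 - the phantom parameter doesn't exist at runtime
```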

Why this feels wrong

The thing that bothers me about this isn't that it doesn't work - it does work fine. It's that the solution is a hack at the type system level when it should be a language feature.

TypeScript could have introduced syntax like new static() or new this() and compiled it to whatever JavaScript pattern makes it work at runtime. Instead, they've made developers express "the class this method was called on" through a phantom parameter that only exists for the type checker.

Compare this to how other languages handle it:

PHP just gives you static as a keyword. You write new static() and the compiler handles the rest.

Kotlin compiles to JavaScript too, but when you write Kotlin, you write actual Kotlin - proper classes, sealed classes, data classes, all the language features. The compiler figures out how to make it work in JavaScript. You don't write weird pseudo-parameters because "JavaScript doesn't have that feature".

TypeScript has positioned itself as "JavaScript with types" rather than "a language that compiles to JavaScript", which means it's constantly constrained by JavaScript's limitations instead of abstracting them away. When JavaScript doesn't have a concept, TypeScript makes you do the workaround instead of the compiler doing it.

It's functional, but it's not elegant. And it's definitely not intuitive.

Does it matter?

In practice? Not really. Once you know the pattern, it's straightforward enough to use. The this: T parameter becomes just another TypeScript idiom you memorise and move on.

But it does highlight a fundamental tension in TypeScript's design philosophy. The language is scared to be a proper language with its own features and syntax. Everything has to map cleanly back to JavaScript, even when that makes the developer experience worse.

I found this Stack Overflow answer while researching this, which explains the mechanics well enough, but doesn't really acknowledge how weird the solution is. It's all type theory without much "here's why the language works this way".

For now, I've got late static binding working in TypeScript. It required some generics gymnastics and a phantom parameter, but it does what I need. I'll probably dig deeper into generics in a future ticket - there's clearly more to understand there, and I've not worked with generics in any language before, so that'll be interesting.

The code for this is in my learning-typescript repository if you want to see the full implementation. Thanks to Claudia for helping me understand what the hell this: T was actually doing and for assistance with this write-up.

Righto.

--
Adam

Saturday, 27 September 2025

JavaScript Symbols: when learning one thing teaches you fifteen others

G'day:

This is one of those "I thought I was learning one thing but ended up discovering fifteen other weird JavaScript behaviors" situations that seems to happen every time I try to understand a JavaScript feature properly.

I was working through my TypeScript learning project, specifically tackling symbols (TS / JS) as part of understanding primitive types. Seemed straightforward enough - symbols are unique primitive values, used for creating "private" object properties and implementing well-known protocols. Easy, right?

Wrong. What started as "symbols are just unique identifiers" quickly turned into a masterclass in JavaScript's most bizarre type coercion behaviors, ESLint's opinions about legitimate code patterns, and why semicolons sometimes matter more than you think.

The basics (that aren't actually basic)

Symbols are primitive values that are guaranteed to be unique:

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false - always unique

Except when they're not unique, because Symbol.for() maintains a global registry:

const s1 = Symbol.for('my-key');
const s2 = Symbol.for('my-key');
console.log(s1 === s2); // true - same symbol from registry

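The registry lookup works in reverse too: Symbol.keyFor() tells you the key a symbol was registered under, and returns undefined for symbols that were never in the registry:

```typescript
const registered = Symbol.for('my-key')   // goes into the global registry
const unregistered = Symbol('my-key')     // same description, but not registered

console.log(Symbol.keyFor(registered))    // 'my-key'
console.log(Symbol.keyFor(unregistered))  // undefined - not in the global registry
```
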
Fair enough. And you can't call Symbol as a constructor (unlike literally every other primitive wrapper):

const sym = new Symbol(); // TypeError: Symbol is not a constructor

This seemed like a reasonable safety feature until I tried to test it and discovered that TypeScript will happily let you write this nonsense, but ESLint immediately starts complaining about the any casting required to make it "work".

Where things get properly weird

The real fun starts when you encounter the well-known symbols - particularly Symbol.toPrimitive. This lets you control how objects get converted to primitive values, which sounds useful until you actually try to use it.

Here's a class that implements custom primitive conversion:

export class SomeClass {
  [Symbol.toPrimitive](hint: string) {
    if (hint === 'number') {
      return 42;
    }
    if (hint === 'string') {
      return 'forty-two';
    }
    return 'default';
  }
}

(from symbols.ts)

Now, which conversion do you think obj + '' would trigger? If you guessed "string", because you're concatenating with a string, you'd be wrong. It actually triggers the "default" hint because JavaScript's + operator is fundamentally broken.

The + operator with mixed types calls toPrimitive with hint "default", not "string". JavaScript has to decide whether this is addition or concatenation before converting the operands, so it plays it safe with the default hint. Only explicit string conversion like String(obj) or template literals get the string hint.
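
Putting all three hints side by side makes the behaviour clearer. This is a cut-down version of the class above (the any cast is only there because TypeScript won't let you apply + to a class instance directly):

```typescript
class Hinted {
  [Symbol.toPrimitive](hint: string): string | number {
    if (hint === 'number') return 42
    if (hint === 'string') return 'forty-two'
    return 'default'
  }
}

const obj = new Hinted()
const loose = obj as any // appease the compiler for the arithmetic operators

console.log(+loose)      // 42 - unary plus uses the 'number' hint
console.log(String(obj)) // 'forty-two' - explicit conversion uses 'string'
console.log(`${obj}`)    // 'forty-two' - template literals use 'string' too
console.log(loose + '')  // 'default' - binary + only ever gets the 'default' hint
```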

This is the kind of language design decision that makes you question whether the people who created JavaScript have ever actually used JavaScript.

ESLint vs. reality

Speaking of questionable decisions, try writing the template literal version:

expect(`${obj}`).toBe('forty-two');

ESLint immediately complains: "Invalid type of template literal expression". It sees a custom class being used in string interpolation and assumes you've made a mistake, despite this being exactly what Symbol.toPrimitive is designed for.

You end up with this choice:

  1. Suppress the ESLint rule for legitimate symbol behavior
  2. Use String(obj) explicitly (which actually works better anyway)
  3. Cast to any and deal with ESLint complaining about that instead

Modern tooling is supposedly designed to help us write better code, but it turns out "better" doesn't include using JavaScript's actual primitive conversion protocols.

Symbols as "secret" properties

The privacy model for symbols is... interesting. They're hidden from normal enumeration but completely discoverable if you know where to look:

const secret1 = Symbol('secret1');
const secret2 = Symbol('secret2');

const obj = {
  publicProp: 'visible',
  [secret1]: 'hidden',
  [secret2]: 'also hidden'
};

console.log(Object.keys(obj));                    // ['publicProp']
console.log(JSON.stringify(obj));                 // {"publicProp":"visible"}
console.log(Object.getOwnPropertySymbols(obj));   // [Symbol(secret1), Symbol(secret2)]
console.log(Reflect.ownKeys(obj));                // ['publicProp', Symbol(secret1), Symbol(secret2)]

So symbols provide privacy from accidental access, but not from intentional inspection. It's like having a door that's closed but not locked - good enough to prevent accidents, useless against anyone who actually wants to get in.

Semicolons matter (sometimes)

While implementing symbol properties, I discovered this delightful parsing ambiguity:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'
  [Symbol.toStringTag] = this.stringName  // Prettier goes mental
}

Without a semicolon after the first line, Prettier interprets this as:

private stringName: string = ('StringNameOfClass'[Symbol.toStringTag] = this.stringName)

Because you can totally set properties on string literals in JavaScript (even though it's completely pointless), the parser thinks you're doing property access and assignment chaining.

The semicolon makes it unambiguous, and impressively, Prettier is smart enough to recognize that this particular semicolon is semantically significant and doesn't remove it like it normally would.

Testing arrays vs. testing values

Completely unrelated to symbols, but I learned that Vitest's toBe() and toEqual() are different beasts:

expect(Object.keys(obj)).toBe(['publicProp']);     // Fails - different array objects
expect(Object.keys(obj)).toEqual(['publicProp']);  // Passes - same contents

toBe() uses reference equality (like Object.is()), so even arrays with identical contents are different objects. toEqual() does deep equality comparison. This seems obvious in hindsight, but when you're in the middle of testing symbol enumeration behavior, it's easy to forget that arrays are objects too.

The real lesson

I set out to learn about symbols and ended up with a tour of JavaScript's most questionable design decisions:

  • Type coercion that doesn't work the way anyone would expect
  • Operators that behave differently based on hints that don't correspond to actual usage
  • Tooling that warns against legitimate language features
  • Parsing ambiguities that require strategic semicolon placement
  • Privacy models that aren't actually private

This is exactly why "learn by doing" beats "read the documentation" every time. The docs would never tell you about the ESLint conflicts, the semicolon parsing gotcha, or the + operator's bizarre hint behavior. You only discover this stuff when you're actually writing code and things don't work the way they should.

The symbols themselves are fine - they do what they're supposed to do. It's everything else around them that's… erm… laden with interesting design-decision "opportunities". [Cough].


The full code for this investigation is available in my learning-typescript repository if you want to see the gory details. Thanks to Claudia for helping debug the type coercion weirdness and for assistance with this write-up. Also props to GitHub Copilot for pointing out that I had three functions doing the same thing - sometimes the robots are right.

Righto.

--
Adam

Thursday, 25 September 2025

TypeScript namespaces: when the docs say one thing and ESLint says another

G'day:

This is one of those "the documentation says one thing, the tooling says another, what the hell am I actually supposed to do?" situations that seems to crop up constantly in modern JavaScript tooling.

I was working through TypeScript enums as part of my learning project, and I wanted to add methods to an enum - you know, the kind of thing you can do with PHP 8 enums where you can have both the enum values and associated behavior in the same construct. Seemed like a reasonable thing to want to do.

TypeScript enums don't support methods directly, but some digging around Stack Overflow led me to namespace merging as a solution. Fair enough - except as soon as I implemented it, ESLint started having a proper whinge about using namespaces at all.

Cue an hour of trying to figure out whether I was doing something fundamentally wrong, or whether the tooling ecosystem just hasn't caught up with legitimate use cases. Turns out it's a bit of both.

The contradiction

Here's what the official TypeScript documentation says about namespaces:

A note about terminology: It's important to note that in TypeScript 1.5, the nomenclature has changed. "Internal modules" are now "namespaces". "External modules" are now simply "modules", as to align with ECMAScript 2015's terminology, (namely that module X { is equivalent to the now-preferred namespace X {).

Note that "now-preferred" bit. Sounds encouraging, right?

And here's what the ESLint TypeScript rules say:

TypeScript historically allowed a form of code organization called "custom modules" (module Example {}), later renamed to "namespaces" (namespace Example). Namespaces are an outdated way to organize TypeScript code. ES2015 module syntax is now preferred (import/export).

So which is it? Are namespaces preferred, or are they outdated?

The answer, as usual with JavaScript tooling, is "it depends, and the documentation is misleading".

The TypeScript docs were written when they renamed the syntax from module to namespace - the "now-preferred" referred to using the namespace keyword instead of the old module keyword. It wasn't saying namespaces were preferred over ES modules; it was just clarifying the syntax change within the namespace feature itself.

The ESLint docs reflect current best practices: ES2015 modules (import/export) are indeed the standard way to organize code now. Namespaces are generally legacy for most use cases.

But "most use cases" isn't "all use cases". And this is where things get interesting.

The legitimate use case: enum methods

What I wanted to do was add a method to a TypeScript enum, similar to what you can do in PHP:

// What I wanted (conceptually)
enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
  
  // This doesn't work in TypeScript
  static fromValue(value: string): MaoriNumber {
    // ...
  }
}

The namespace merging approach lets you achieve this by declaring an enum and then a namespace with the same name:

// src/lt-15/namespaces.ts

export enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
}

// eslint-disable-next-line @typescript-eslint/no-namespace
export namespace MaoriNumber {
  const enumKeysOnly = Object.keys(MaoriNumber).filter(
    (key) =>
      typeof MaoriNumber[key as keyof typeof MaoriNumber] !== 'function'
  )

  export function fromValue(value: string): MaoriNumber {
    const valueAsMaoriNumber: MaoriNumber = value as MaoriNumber
    const index = Object.values(MaoriNumber).indexOf(valueAsMaoriNumber);
    if (index === -1) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`);
    }
    const elementName: string = enumKeysOnly[index];
    const typedElementName = elementName as keyof typeof MaoriNumber;

    return MaoriNumber[typedElementName] as MaoriNumber;
  }
}

This gives you exactly what you want: MaoriNumber.Tahi for enum access and MaoriNumber.fromValue() for the method, all properly typed.

The // eslint-disable-next-line comment acknowledges that yes, I know namespaces are generally discouraged, but this is a specific case where they're the right tool for the job.

Why the complexity in fromValue?

You might wonder why that fromValue function is doing so much filtering and type casting. It's because of the namespace merging itself.

When you merge an enum with a namespace, TypeScript sees MaoriNumber as containing both the enum values and the functions. So Object.keys(MaoriNumber) returns:

['Tahi', 'Rua', 'Toru', 'Wha', 'fromValue']

And keyof typeof MaoriNumber becomes:

"Tahi" | "Rua" | "Toru" | "Wha" | "fromValue"

The filtering step removes the function keys so we only work with the actual enum values. The type assertions handle the fact that TypeScript can't statically analyze that our runtime filtering has eliminated the function possibility.

Sidebar: that keyof typeof bit took a while for me to work out. Well I say "work out": I just read this Q&A on Stack Overflow: What does "keyof typeof" mean in TypeScript?. I didn't find anything useful in the actual docs. I looked at it more closely in some other code I wrote today… there might be an article in that too. We'll see (I'll cross-ref it here if I write it).
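
For anyone else puzzling over it, the short version: typeof X gives you the type of the value X, and keyof then gives you the union of that type's property names. A stripped-down illustration (my own example, not the enum code above):

```typescript
const sizes = {
  small: 10,
  large: 99,
} as const

// typeof sizes is { readonly small: 10; readonly large: 99 }
// keyof typeof sizes is therefore the union 'small' | 'large'
type SizeName = keyof typeof sizes

function lookup(name: SizeName): number {
  return sizes[name]
}

console.log(lookup('small')) // 10
// lookup('medium') would be a compile-time error - not in the union
```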

Testing the approach

The tests prove that both aspects work correctly:

// tests/lt-15/namespaces.test.ts

describe('Emulating enum with method', () => {
  it('has accessible enums', () => {
    expect(MaoriNumber.Tahi).toBe('one')
  })
  
  it('has accessible methods', () => {
    expect(MaoriNumber.fromValue('two')).toEqual(MaoriNumber.Rua)
  })
  
  it("won't fetch the method as an 'enum' entry", () => {
    expect(() => {
      MaoriNumber.fromValue('fromValue')
    }).toThrowError('Value "fromValue" is not a valid MaoriNumber')
  })
  
  it("will error if the string doesn't match a MaoriNumber", () => {
    expect(() => {
      MaoriNumber.fromValue('rima')
    }).toThrowError('Value "rima" is not a valid MaoriNumber')
  })
})

The edge case testing is important here - we want to make sure the function doesn't accidentally treat its own name as a valid enum value, and that it properly handles invalid inputs.

Alternative approaches

You could achieve similar functionality with a class and static methods:

const MaoriNumberValues = {
  Tahi: 'one',
  Rua: 'two', 
  Toru: 'three',
  Wha: 'four'
} as const

type MaoriNumber = typeof MaoriNumberValues[keyof typeof MaoriNumberValues]

class MaoriNumbers {
  static readonly Tahi = MaoriNumberValues.Tahi
  static readonly Rua = MaoriNumberValues.Rua
  static readonly Toru = MaoriNumberValues.Toru
  static readonly Wha = MaoriNumberValues.Wha
  
  static fromValue(value: string): MaoriNumber {
    // implementation
  }
}

But this is more verbose, loses some of the enum benefits (like easy iteration), and doesn't give you the same clean MaoriNumber.Tahi syntax you get with the namespace approach.

So when should you use namespaces?

Based on this experience, I'd say namespace merging with enums is one of the few remaining legitimate use cases for TypeScript namespaces. The modern alternatives don't provide the same ergonomics for this specific pattern.

For everything else - code organisation, avoiding global pollution, grouping related functionality - ES modules are indeed the way forward. But when you need to add methods to enums and you want clean, intuitive syntax, namespace merging is still the right tool.

The key is being intentional about it. Use the ESLint disable comment to acknowledge that you're making a conscious choice, not just ignoring best practices out of laziness.

It's one of those situations where the general advice ("don't use namespaces") doesn't account for specific edge cases where they're still the best solution available. The tooling will complain, but sometimes the tooling is wrong.

I'll probably circle back to write up more about TypeScript enums in general - there's a fair bit more to explore there. But for now, I've got a working solution for enum methods that gives me the PHP-like behavior I was after, even if it did require wading through some contradictory documentation to get there.

Credit where it's due: Claudia (claude.ai) was instrumental in both working through the namespace merging approach and helping me understand the TypeScript type system quirks that made the implementation more complex than expected. The back-and-forth debugging of why MaoriNumber[typedElementName] was causing type errors was particularly useful - sometimes you need another perspective to spot what the compiler is actually complaining about. She also helped draft this article, which saved me a few hours of writing time. GitHub Copilot's code review feature has been surprisingly helpful too - it caught some genuine issues with error handling and performance that I'd missed during the initial implementation.

Righto.

--
Adam

Sunday, 14 August 2022

JS: Server-sent events

G'day:

Yes, it's very odd for me to have something to say about JS stuff. I don't imagine there's anything new here for people that actually do JS development, but I don't, so this feature is new to me. Maybe it'll be new to some of my readers too.

I was looking at a question on Stack Overflow tagged with "ColdFusion": Server Side Events and Polling Database for new records with Lucee/Coldfusion. There was no answer, and I didn't know what the questioner was on about, so I decided to have a look.

Firstly, I RTFMed the JS words I didn't recognise:

Then, from ferreting around in the docs further, I found a PHP example of the server-side part: MDN › References › Web APIs › Server-sent events › Using server-sent events

From that lot it was easy to knock-together a CFML example.

<!-- test.html -->

<div id="result"></div>

<script>
var source = new EventSource('testInLoop.cfm');

source.addEventListener('message', function(e){
    document.body.innerHTML += `${e.data}<br>`
});
</script>

// testInLoop.cfm

header name="Content-Type" value="text/event-stream";

requestStartedAt = now()
for (i=1; i <= 10; i++) {
    data = {
        "uuid" = createUuid(),
        "now" = now().timeFormat("HH:mm:ss"),
        "requestStartedAt" = requestStartedAt.timeFormat("HH:mm:ss")
    }
    writeOutput("event: message" & chr(10))
    writeOutput('data: #serializeJson(data)#' & chr(10) & chr(10))
    flush;
    sleep(1000)
}
writeOutput("event: message" & chr(10))
writeOutput('data: #serializeJson({"complete": true})#' & chr(10) & chr(10))
flush;

Miraculously, this works:

{"uuid":"4E1A4D17-6E4A-410B-B9C26B420A0DD5B4","now":"11:15:40","requestStartedAt":"11:15:40"}
{"uuid":"65C3F605-8823-4BF5-B32550FEABD0CECC","now":"11:15:41","requestStartedAt":"11:15:40"}
{"uuid":"D22E86D0-872F-4E5F-A89B29F44314BE05","now":"11:15:42","requestStartedAt":"11:15:40"}
{"uuid":"C4B6C1A6-0EF6-4CFD-AC9D4791F14A74AA","now":"11:15:43","requestStartedAt":"11:15:40"}
{"uuid":"FE0DB9CA-3535-416C-B63066EC4913E87D","now":"11:15:44","requestStartedAt":"11:15:40"}
{"uuid":"3E4E2DED-5B57-40AB-B8F5886DC7983053","now":"11:15:45","requestStartedAt":"11:15:40"}
{"uuid":"19611965-34D0-4ED4-9B9260B7EE3D8941","now":"11:15:46","requestStartedAt":"11:15:40"}
{"uuid":"81C7066F-55AD-47F1-BBBEC6EDAAA7A7C4","now":"11:15:47","requestStartedAt":"11:15:40"}
{"uuid":"2477899D-50EB-4CB0-AAF8E681DF168087","now":"11:15:48","requestStartedAt":"11:15:40"}
{"uuid":"7ABF54BD-1618-458D-A6BBF5D07812564D","now":"11:15:49","requestStartedAt":"11:15:40"}
{"complete":true}
{"uuid":"44AA5416-8EA9-451B-95C60E2BC1F4CEF8","now":"11:15:53","requestStartedAt":"11:15:53"}
{"uuid":"8829A145-93D5-4175-9305414B32CBE83E","now":"11:15:54","requestStartedAt":"11:15:53"}
{"uuid":"15F96712-263C-4798-ABCDBDC0C0AA68F7","now":"11:15:55","requestStartedAt":"11:15:53"}
{"uuid":"AF3A1831-8817-400E-B1292B4FCBD8BDDC","now":"11:15:56","requestStartedAt":"11:15:53"}
{"uuid":"C3D9D100-C8A9-4A92-87F6E62138DB5C6F","now":"11:15:57","requestStartedAt":"11:15:53"}
{"uuid":"1E60C5D6-E16A-429C-804D32B278272872","now":"11:15:58","requestStartedAt":"11:15:53"}
{"uuid":"DDB2DA3C-D064-4F6E-80227E337B41D657","now":"11:15:59","requestStartedAt":"11:15:53"}
{"uuid":"E7034B05-7912-4971-9ECD76E07403AD49","now":"11:16:00","requestStartedAt":"11:15:53"}
{"uuid":"D0D17279-6531-42B9-AB21FFB76DA5D939","now":"11:16:01","requestStartedAt":"11:15:53"}
{"uuid":"D804394F-DCB7-4FE3-BB6AAD86A4865D7D","now":"11:16:02","requestStartedAt":"11:15:53"}
{"complete":true}
{"uuid":"C25CF442-18FB-4BDD-8B55811E6DDCCD13","now":"11:16:06","requestStartedAt":"11:16:06"}

One thing to note is that once the server-side request completes, the browser (Chrome for me) re-polls the server after seemingly 4sec. I could not find anything in the docs about this (or how to set the interval). I've just tested in Edge and Firefox too, and their behaviour is the same as Chrome, except Firefox's interval seemed to be 6sec not 4sec.
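
As it happens, the event-stream format itself lets the server suggest that interval: a retry: line whose value is a number of milliseconds sets the browser's reconnection delay. A sketch of building such a message - in TypeScript rather than CFML, and buildSseMessage is my own hypothetical helper, not part of any library:

```typescript
// Hypothetical helper: formats one server-sent event, optionally including
// a retry field that sets the browser's reconnection delay in milliseconds.
function buildSseMessage(data: object, retryMs?: number): string {
  const lines: string[] = []
  if (retryMs !== undefined) {
    lines.push(`retry: ${retryMs}`)
  }
  lines.push('event: message')
  lines.push(`data: ${JSON.stringify(data)}`)
  return lines.join('\n') + '\n\n' // a blank line terminates the event
}

console.log(buildSseMessage({ complete: true }, 10000))
// retry: 10000
// event: message
// data: {"complete":true}
```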

That's all I have to say on this. Pleased to know about it, and pleased to be able to answer that Stack Overflow question now.

Righto.

--
Adam

Thursday, 21 January 2021

Listening to the console log of a page loaded with Puppeteer

G'day:

This is a follow-up from something I touched on yesterday ("Polishing my Vue / Puppeteer / Mocha / Chai testing some more"). In that exercise I was using Puppeteer to load a web page I was testing, and then pulling some DOM element values out and checking they matched expectations. The relevant bits of code are thus:

describe.only("Tests of githubProfiles page using github data", function () {
    let browser;
    let page;
    let expectedUserData;

    before("Load the page and test data", async function () {
        await loadTestPage();
        expectedUserData = await loadTestUserFromGithub();
    });

    let loadTestPage = async function () {
        browser = await puppeteer.launch( {args: ["--no-sandbox"]});
        page = await browser.newPage();

        await Promise.all([
            page.goto("http://webserver.backend/githubProfiles.html"),
            page.waitForNavigation()
        ]);
    }

    it("should have the expected person's name", async function () {
        let name = await page.$eval("#app>.card>.content>a.header", headerElement => headerElement.innerText);
        name.should.equal(expectedUserData.name);
    });

  • Load the page with Puppeteer
  • Example test checking the page's DOM

This code seemed to be running fine, and the tests were passing. As I was adding more code to my Vue component on the client end, I suddenly found the tests started to fail. Sometimes. If I ran them ten times, they'd fail maybe three times. At the same time, if I was just hitting the page in the browser, it was working 100% of the time. Odd. I mean clearly I was doing something wrong, and I'm new to all this async code I'm using, so figured I was using values before they were available or something. But it seemed odd that this was only manifesting sometimes. The way the tests were failing was telling though:

1) Tests of githubProfiles page using github data
       should have the expected person's name:

      AssertionError: expected '' to equal 'Alex Kyriakidis'
      + expected - actual

      +Alex Kyriakidis

The values coming from the DOM were blank. And note that it's not a case of the DOM being wrong, because if that was the case, the tests would barf all the time, with something like this:

Error: Error: failed to find element matching selector "#app>.card>.content>a.header"

The relevant mark-up here is:

<a class="header" :href="pageUrl">{{name}}</a>

So {{name}} isn't getting its value sometimes.

I faffed around for a bit reading up on Vue components, and their lifecycle handlers in case created was not the right place to load the data or something like that, but the code seemed legit.

My JS debugging is not very sophisticated - it's basically a matter of console.logging stuff and seeing what happens. I chucked a bunch of log calls in:

created () {
    console.debug(`before get call [${this.username}]`);
    axios.get(
        `https://api.github.com/users/${this.username}`,
        {
            auth: {
                username: this.$route.query.GITHUB_PERSONAL_ACCESS_TOKEN
            }
        }
    )
    .then(response => {
        console.debug(`beginning of then [${response.data.name}]`);
        this.name = response.data.name;
		// [etc...]
        console.debug("end of then");
    });
    console.debug("after get call");
}

Along with some other ones around the place, these all did what I expected when I hit the page in the browser:

beginning of js
githubProfiles.js:46 before VueRouter
githubProfiles.js:52 before Vue
githubProfiles.js:23 before get call [hootlex]
githubProfiles.js:43 after get call
githubProfiles.js:63 end of js
githubProfiles.js:33 beginning of then [Alex Kyriakidis]
githubProfiles.js:41 end of then

I noted that the then call was being fulfilled after the mainline code had finished, but in my test I was waiting for the page to fully load, so I'd catered for this. Repeated from above:

await Promise.all([
    page.goto("http://webserver.backend/githubProfiles.html"),
    page.waitForNavigation()
]);

I ran my tests, and was not seeing anything in the console, which momentarily bemused me. But then I was just "errr… duh, Cameron. That stuff is logging in the web page's console. Not Node's console from the test run". I'm really thick sometimes.

This flummoxed me for a bit as I wondered how the hell I was going to get telemetry out of the page that I was calling in the Puppeteer headless browser. Then it occurred to me that I would not be the first person to wonder this, so I just RTFMed.

It's really easy! The Puppeteer Page object exposes event listeners one can hook into, and one of the events is console. Perfect. All I needed to do is put this into my test code:

page = await browser.newPage();

page.on("console", (log) => console.debug(`Log from client: [${log.text()}] `));

await Promise.all([
    page.goto("http://webserver.backend/githubProfiles.html"),
    page.waitForNavigation()
]);

Then when I ran my tests, I was console-logging the log entries made in the headless browser as they occurred. What I saw was:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
    1) should have the expected person's name
    2) should have the expected person's github page URL
    3) should have the expected person's avatar
    4) should have the expected person's joining year
    5) should have the expected person's description
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
    6) should have the expected person's number of friends
    ✓ should have the expected person's friends URL


  1 passing (4s)
  6 failing

Note how the tests get underway before the then callback runs. And shortly after that, the tests start passing, because by then the dynamic values have actually been loaded into the DOM. This is my problem! That page.waitForNavigation() is not waiting long enough! My first reaction was to blame Puppeteer, but I quickly realised that's daft and defensive of me: given this is the first time I've messed with this stuff, almost certainly I'm doing something wrong. Then it occurred to me that a page is navigable once the various asset files are loaded, but not necessarily once any code in them has run. Duh. I figured Puppeteer would have thought of this, so there'd be something else I could make it wait for. I googled around and found the docs for page.waitForNavigation, and indeed I needed to be doing this:

page.waitForNavigation({waitUntil: "networkidle0"})

After I did that, I found the tests still failing sometimes, but now due to a time out:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
    1) "before all" hook: Load the page for "should have the expected person's name"


  0 passing (4s)
  1 failing

  1) Tests of githubProfiles page using github data
       "before all" hook: Load the page for "should have the expected person's name":
     Error: Timeout of 5000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/share/fullstackExercise/tests/functional/public/GithubProfilesTest.js)

I had the timeout set for five seconds, but now that the tests were waiting for the client to finish its async call as well, I was just edging over that five-second mark sometimes. So I bumped it to 10 seconds, and thenceforth the tests all passed all the time. I've left the telemetry in for one last successful run here:

  Tests of githubProfiles page using github data
Log from client: [beginning of js]
Log from client: [before VueRouter]
Log from client: [before Vue]
Log from client: [before get call [hootlex]]
Log from client: [after get call]
Log from client: [end of js]
Log from client: [beginning of then [Alex Kyriakidis]]
Log from client: [end of then]
    ✓ should have the expected person's name
    ✓ should have the expected person's github page URL
    ✓ should have the expected person's avatar
    ✓ should have the expected person's joining year
    ✓ should have the expected person's description
    ✓ should have the expected person's number of friends
    ✓ should have the expected person's friends URL


  7 passing (5s)
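For the record, that timeout bump is a one-liner on the Mocha suite. A sketch of where it lives (assuming Mocha's BDD interface; note this.timeout needs a regular function, not an arrow function, or this won't be the suite):

```javascript
// GithubProfilesTest.js (sketch, not the full suite)
describe("Tests of githubProfiles page using github data", function () {
    this.timeout(10000); // up from 5000ms, to allow for the page's async GitHub call

    // before() hook loads the page; the it() tests then inspect the DOM,
    // as per the rest of this article
});
```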

OK, so that was a bit of a newbie exercise, but I'm a noob so yer gonna get that. It was actually pretty fun working through it though. I'm really liking all this tooling I'm checking out ATM, so yer likely to get a few more of these basic articles from me.

Righto.

--
Adam

Tuesday, 28 November 2017

That array_map quandary implemented in other languages

G'day:
A coupla days ago I bleated about array_map [having] a dumb implementation. I had what I thought was an obvious application for array_map in PHP, but it couldn't really accommodate me due to array_map not exposing the array's keys to the callback, and then messing up the keys in the mapped array if one passes array_map more than one array to process.

I needed to remap this:

[
    "2008-11-08" => "Jacinda",
    "1990-10-27" => "Bill",
    "2014-09-20" => "James",
    "1979-05-24" => "Winston"
]

To this:

array(4) {
  '2008-11-08' =>
  class IndexedPerson#3 (2) {
    public $date =>
    string(10) "2008-11-08"
    public $name =>
    string(7) "Jacinda"
  }
  '1990-10-27' =>
  class IndexedPerson#4 (2) {
    public $date =>
    string(10) "1990-10-27"
    public $name =>
    string(4) "Bill"
  }
  '2014-09-20' =>
  class IndexedPerson#5 (2) {
    public $date =>
    string(10) "2014-09-20"
    public $name =>
    string(5) "James"
  }
  '1979-05-24' =>
  class IndexedPerson#6 (2) {
    public $date =>
    string(10) "1979-05-24"
    public $name =>
    string(7) "Winston"
  }
}

Note how the remapped object also contains the original key value. That was the sticking point. Go read the article for more detail and more whining.

OK, so my expectations of PHP's array higher order functions are based on my experience with JS's and CFML's equivalents, both of which receive the key as well as the value in all callbacks. I decided to see how other languages achieve the same end, and I'll pop the code in here for shits 'n' giggles.


CFML

Given most of my history is as a CFML dev, that one was easy.

peopleData = ["2008-11-08" = "Jacinda", "1990-10-27" = "Bill", "2014-09-20" = "James", "1979-05-24" = "Winston"]

people = peopleData.map((date, name) => new IndexedPerson(date, name))

people.each((date, person) => echo("#date# => #person#<br>"))

Oh, this presupposes the IndexedPerson component. Due to a shortcoming of how CFML works, components must be declared in a file of their own:

component {

    function init(date, name) {
        this.date = date
        this.name = name
    }

    string function _toString() {
        return "{date:#this.date#; name: #this.name#}"
    }
}


But the key bit is the mapping operation:

people = peopleData.map((date, name) => new IndexedPerson(date, name))

Couldn't be simpler (NB: this is Lucee's CFML implementation, not ColdFusion's which does not yet support arrow functions).

The output is:


2008-11-08 => {date:2008-11-08; name: Jacinda}
1990-10-27 => {date:1990-10-27; name: Bill}
2014-09-20 => {date:2014-09-20; name: James}
1979-05-24 => {date:1979-05-24; name: Winston}

Also note that CFML doesn't have associative arrays, it has structs, so the keys are not ordered. This does not matter here. (Thanks to Zac for correcting me here: CFML does have ordered structs these days).


JS

The next language I turned to was JS, as that's the one I'm next most familiar with. One thing that hadn't occurred to me is that whilst JS's Array implementation has a map method, we need to use an object here, as the keys are values, not indexes. And whilst I knew Objects didn't have a map method, I didn't know what the equivalent might be.

Well it turns out that there's no real option to use a map here, so I needed to do a reduce on the object's entries. Still: it's pretty terse and obvious:

class IndexedPerson {
    constructor(date, name) {
        this.date = date
        this.name = name
    }
}

let peopleData = {"2008-11-08": "Jacinda", "1990-10-27": "Bill", "2014-09-20": "James", "1979-05-24": "Winston"}

let people = Object.entries(peopleData).reduce(function (people, personData) {
    people.set(personData[0], new IndexedPerson(personData[0], personData[1]))
    return people
}, new Map())

console.log(people)

This returns what we want:

Map {
  '2008-11-08' => IndexedPerson { date: '2008-11-08', name: 'Jacinda' },
  '1990-10-27' => IndexedPerson { date: '1990-10-27', name: 'Bill' },
  '2014-09-20' => IndexedPerson { date: '2014-09-20', name: 'James' },
  '1979-05-24' => IndexedPerson { date: '1979-05-24', name: 'Winston' } }

TBH I think this is a misuse of an object to contain basically an associative array / struct, but so be it. It's the closest analogy to the PHP requirement. I was able to at least return it as a Map, which I think is better. I tried to have the incoming personData as a map, but the Map prototype's equivalent of entries() used above is unhelpful in that it returns an Iterator, and the prototype for Iterator is a bit spartan.

I think it's slightly clumsy I need to access the entries value via array notation instead of some sort of name, but this is minor.
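Actually, parameter destructuring sorts that niggle out: the entry pair can be unpacked into named variables right in the callback's signature. A variant of the reduce above, under the same data:

```javascript
class IndexedPerson {
    constructor(date, name) {
        this.date = date
        this.name = name
    }
}

let peopleData = {"2008-11-08": "Jacinda", "1990-10-27": "Bill", "2014-09-20": "James", "1979-05-24": "Winston"}

// destructuring [date, name] in the parameter list replaces personData[0] / personData[1];
// Map.prototype.set returns the map itself, so the arrow body needs no explicit return
let people = Object.entries(peopleData).reduce(
    (people, [date, name]) => people.set(date, new IndexedPerson(date, name)),
    new Map()
)

console.log(people)
```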

As with all my code, I welcome people showing me how I should actually be doing this. Post a comment. I'm looking at you Ryan Guill ;-)

Java

Next up was Java. Holy fuck, what a morass of boilerplate nonsense I needed to perform this simple operation in Java. Deep breath...

import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

class IndexedPerson {
    String date;
    String name;
    
    public IndexedPerson(String date, String name) {
        this.date = date;
        this.name = name;
    }
    
    public String toString(){
        return String.format("{date: %s, name: %s}", this.date, this.name);
    }
}

class Collect {

    public static void main(String[] args) {

        HashMap<String,String> peopleData = loadData();

        HashMap<String, IndexedPerson> people = mapToPeople(peopleData);
            
        dumpIdents(people);
    }
    
    private static HashMap<String,String> loadData(){
        HashMap<String,String> peopleData = new HashMap<String,String>();
        
        peopleData.put("2008-11-08", "Jacinda");
        peopleData.put("1990-10-27", "Bill");
        peopleData.put("2014-09-20", "James");
        peopleData.put("1979-05-24", "Winston");
        
        return peopleData;
    }
    
    private static HashMap<String,IndexedPerson> mapToPeople(HashMap<String,String> peopleData) {
        HashMap<String, IndexedPerson> people = (HashMap<String, IndexedPerson>) peopleData.entrySet().stream()
            .collect(Collectors.toMap(
                e -> e.getKey(),
                e -> new IndexedPerson(e.getKey(), e.getValue())
            ));
            
        return people;
    }
    
    private static void dumpIdents(HashMap<String,IndexedPerson> people) {
        for (Map.Entry<String, IndexedPerson> entry : people.entrySet()) {
            System.out.println(String.format("%s => %s", entry.getKey(), entry.getValue()));
        }
    }
    
}

Result:
1979-05-24 => {date: 1979-05-24, name: Winston}
2014-09-20 => {date: 2014-09-20, name: James}
1990-10-27 => {date: 1990-10-27, name: Bill}
2008-11-08 => {date: 2008-11-08, name: Jacinda}

Most of that lot seems to be just messing around telling Java what types everything are. Bleah.

The interesting bit - my grasp of which is tenuous - is the Collectors.toMap. I have to admit I derived that from reading various Stack Overflow articles. But I got it working, and I know the general approach now, so that's good.

Too much code for such a simple thing though, eh?


Groovy

Groovy is my antidote to Java. Groovy makes this shit easy:

class IndexedPerson {
    String date
    String name

    IndexedPerson(String date, String name) {
        this.date = date;
        this.name = name;
    }

    String toString(){
        String.format("date: %s, name: %s", this.date, this.name)
    }
}

peopleData = ["2008-11-08": "Jacinda", "1990-10-27": "Bill", "2014-09-20": "James", "1979-05-24": "Winston"]

people = peopleData.collectEntries {date, name -> [date, new IndexedPerson(date, name)]}

people.each {date, person -> println String.format("%s => {%s}", date, person)}

Bear in mind that most of that is getting the class defined, and the output. The bit that does the mapping is just the one line in the middle. That's more like it.

Again, I don't know much about Groovy… I had to RTFM to find out how to do the collectEntries bit, but it was easy to find and easy to understand.

I really wish I had a job doing Groovy.

Oh yeah, for the sake of completeness, the output was thus:

2008-11-08 => {date: 2008-11-08, name: Jacinda}
1990-10-27 => {date: 1990-10-27, name: Bill}
2014-09-20 => {date: 2014-09-20, name: James}
1979-05-24 => {date: 1979-05-24, name: Winston}


Ruby

Ruby's version was pretty simple too as it turns out. No surprise there as Ruby's all about higher order functions and applying blocks to collections and stuff like that.

class IndexedPerson

    def initialize(date, name)
        @date = date
        @name = name
    end

    def inspect
        "{date:#{@date}; name: #{@name}}\n"
    end
end

peopleData = {"2008-11-08" => "Jacinda", "1990-10-27" => "Bill", "2014-09-20" => "James", "1979-05-24" => "Winston"}

people = peopleData.merge(peopleData) do |date, name|
    IndexedPerson.new(date, name)
end

puts people

Predictable output:

{"2008-11-08"=>{date:2008-11-08; name: Jacinda}
, "1990-10-27"=>{date:1990-10-27; name: Bill}
, "2014-09-20"=>{date:2014-09-20; name: James}
, "1979-05-24"=>{date:1979-05-24; name: Winston}
}

I wasn't too sure about all that block nonsense when I first started looking at Ruby, but I quite like it now. It's easy to read.


Python

My Python skills don't extend much beyond printing G'day World on the screen, but it was surprisingly easy to google-up how to do this. And I finally got to see what Python folk are on about with this "comprehensions" stuff, which I think is quite cool.

class IndexedPerson:
    def __init__(self, date, name):
        self.date = date
        self.name = name

    def __repr__(self):
        return "{{date: {date}, name: {name}}}".format(date=self.date, name=self.name)

people_data = {"2008-11-08": "Jacinda", "1990-10-27": "Bill", "2014-09-20": "James", "1979-05-24": "Winston"}

people = {date: IndexedPerson(date, name) for (date, name) in people_data.items()}

print("\n".join(['%s => %s' % (date, person) for (date, person) in people.items()]))


And now that I am all about Clean Code, I kinda get the "whitespace as indentation" thing too. It's clear enough if yer code is clean in the first place.

The output of this is identical to the Groovy one.

Only one more then I'll stop.

Clojure

I can only barely do G'day World in Clojure, so this took me a while to work out. I also find the Clojure docs to be pretty impenetrable. I'm sure they're great if one already knows what one is doing, but I found them pretty inaccessible from the perspective of a n00b. It's like if the PHP docs were solely the user-added stuff at the bottom of each docs page. Most blog articles I saw about Clojure were pretty much just direct regurgitation of the docs, without much value-add, if I'm to be honest.

(defrecord IndexedPerson [date name])

(def people-data (array-map "2008-11-08" "Jacinda" "1990-10-27" "Bill" "2014-09-20" "James" "1979-05-24" "Winston"))

(def people
  (reduce-kv
    (fn [people date name] (conj people (array-map date (IndexedPerson. date name))))
    (array-map)
    people-data))

(print people)

The other thing with Clojure for me is that the code is so alien-looking that I can't work out how to indent stuff to make it clearer. All the examples I've seen don't seem very clear, and the indentation doesn't help either, I think. I guess with more practice it would come to me.

It seems pretty powerful though, cos there's not much code there to achieve the desired end-goal.

Output for this one:

{2008-11-08 #user.IndexedPerson{:date 2008-11-08, :name Jacinda},
1990-10-27 #user.IndexedPerson{:date 1990-10-27, :name Bill},
2014-09-20 #user.IndexedPerson{:date 2014-09-20, :name James},
1979-05-24 #user.IndexedPerson{:date 1979-05-24, :name Winston}}


Summary

This was actually a very interesting exercise for me, and I learned stuff about all the languages concerned. Even PHP and CFML.

I twitterised a comment regarding how pleasing I found each solution:


This was before I did the Clojure one, and I'd slot that in after CFML and before JS, making the list:
  1. Python
  2. Ruby
  3. Groovy
  4. CFML
  5. Clojure
  6. JS
  7. PHP
  8. Java

Python's code looks nice and it was easy to find out what to do. Same with Ruby, just not quite so much. And, really same with Groovy. I could order those three any way. I think Python tips the scales slightly with the comprehensions.

CFML came out surprisingly well in this, as it's a bloody easy exercise to achieve with it.

Clojure's fine, just a pain in the arse to understand what's going on, and the code looks a mess to me. But it does a lot in little space.

JS was disappointing because it wasn't nearly so easy as I expected it to be.

PHP is a mess.

And - fuck me - Java. Jesus.

My occasional reader Barry O'Sullivan volunteered some input the other day:


Hopefully he's still up for this, and I'll add it to the list so we can have a look at that code too.

Like I said before, if you know a better or more interesting way to do this in any of the languages above, or any other languages, make a comment and post a link to a Gist (just don't put the code inline in the comment please; it will not render at all well).

I might have another one of these exercises to do soon with another puzzle a friend of mine had to recently endure in a job-interview-related coding test. We'll see.

Righto.

--
Adam