Saturday, 27 September 2025

JavaScript Symbols: when learning one thing teaches you fifteen others

G'day:

This is one of those "I thought I was learning one thing but ended up discovering fifteen other weird JavaScript behaviors" situations that seems to happen every time I try to understand a JavaScript feature properly.

I was working through my TypeScript learning project, specifically tackling symbols (TS / JS) as part of understanding primitive types. Seemed straightforward enough - symbols are unique primitive values, used for creating "private" object properties and implementing well-known protocols. Easy, right?

Wrong. What started as "symbols are just unique identifiers" quickly turned into a masterclass in JavaScript's most bizarre type coercion behaviors, ESLint's opinions about legitimate code patterns, and why semicolons sometimes matter more than you think.

The basics (that aren't actually basic)

Symbols are primitive values that are guaranteed to be unique:

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false - always unique

Except when they're not unique, because Symbol.for() maintains a global registry:

const s1 = Symbol.for('my-key');
const s2 = Symbol.for('my-key');
console.log(s1 === s2); // true - same symbol from registry
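
And Symbol.keyFor() does the reverse lookup against that same registry, which is handy for checking whether a given symbol is registered at all:

console.log(Symbol.keyFor(s1));          // 'my-key' - registered
console.log(Symbol.keyFor(Symbol('x'))); // undefined - not in the registry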

Fair enough. And you can't call Symbol as a constructor (unlike the old-school primitive wrappers - new String(), new Number(), and new Boolean() all work fine):

const sym = new Symbol(); // TypeError: Symbol is not a constructor

This seemed like a reasonable safety feature until I tried to test it and discovered that TypeScript will happily let you write this nonsense, but ESLint immediately starts complaining about the any casting required to make it "work".
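
For the record, the test for that ends up looking something like this (a sketch; note the any cast TypeScript demands, which is exactly what ESLint then objects to):

import { it, expect } from 'vitest';

it('cannot be called as a constructor', () => {
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  expect(() => new (Symbol as any)()).toThrow(TypeError);
});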

Where things get properly weird

The real fun starts when you encounter the well-known symbols - particularly Symbol.toPrimitive. This lets you control how objects get converted to primitive values, which sounds useful until you actually try to use it.

Here's a class that implements custom primitive conversion:

export class SomeClass {
  [Symbol.toPrimitive](hint: string) {
    if (hint === 'number') {
      return 42;
    }
    if (hint === 'string') {
      return 'forty-two';
    }
    return 'default';
  }
}

(from symbols.ts)

Now, which conversion do you think obj + '' would trigger? If you guessed "string", because you're concatenating with a string, you'd be wrong. It actually triggers the "default" hint because JavaScript's + operator is fundamentally broken.

The + operator with mixed types calls toPrimitive with hint "default", not "string". JavaScript can't know whether + means addition or concatenation until it has converted the operands, so it converts them with the non-committal "default" hint and decides afterwards based on what comes back. Only explicit string conversion like String(obj) or template literals get the string hint.
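
You can see all the hints in play with the class above (a quick sketch; as discussed next, ESLint has opinions about a couple of these lines):

const obj = new SomeClass();

console.log(+obj);        // 42 - unary plus passes the "number" hint
console.log(`${obj}`);    // 'forty-two' - template literals pass the "string" hint
console.log(String(obj)); // 'forty-two' - explicit conversion, also "string"
console.log(obj + '');    // 'default' - mixed + passes the "default" hint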

This is the kind of language design decision that makes you question whether the people who created JavaScript have ever actually used JavaScript.

ESLint vs. reality

Speaking of questionable decisions, try writing the template literal version:

expect(`${obj}`).toBe('forty-two');

ESLint immediately complains: "Invalid type of template literal expression". It sees a custom class being used in string interpolation and assumes you've made a mistake, despite this being exactly what Symbol.toPrimitive is designed for.

You end up with this choice:

  1. Suppress the ESLint rule for legitimate symbol behavior
  2. Use String(obj) explicitly (which actually works better anyway)
  3. Cast to any and deal with ESLint complaining about that instead

Modern tooling is supposedly designed to help us write better code, but it turns out "better" doesn't include using JavaScript's actual primitive conversion protocols.

Symbols as "secret" properties

The privacy model for symbols is... interesting. They're hidden from normal enumeration but completely discoverable if you know where to look:

const secret1 = Symbol('secret1');
const secret2 = Symbol('secret2');

const obj = {
  publicProp: 'visible',
  [secret1]: 'hidden',
  [secret2]: 'also hidden'
};

console.log(Object.keys(obj));                    // ['publicProp']
console.log(JSON.stringify(obj));                 // {"publicProp":"visible"}
console.log(Object.getOwnPropertySymbols(obj));   // [Symbol(secret1), Symbol(secret2)]
console.log(Reflect.ownKeys(obj));                // ['publicProp', Symbol(secret1), Symbol(secret2)]

So symbols provide privacy from accidental access, but not from intentional inspection. It's like having a door that's closed but not locked - good enough to prevent accidents, useless against anyone who actually wants to get in.
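
Which means anyone can deliberately fish a "hidden" value out without ever having been handed the symbol (a sketch; the cast is just to placate TypeScript's indexing rules):

const [foundKey] = Object.getOwnPropertySymbols(obj);
console.log((obj as Record<symbol, unknown>)[foundKey]); // 'hidden'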

Semicolons matter (sometimes)

While implementing symbol properties, I discovered this delightful parsing ambiguity:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'
  [Symbol.toStringTag] = this.stringName  // Prettier goes mental
}

Without a semicolon after the first line, the parser - and therefore Prettier - interprets this as:

private stringName: string = ('StringNameOfClass'[Symbol.toStringTag] = this.stringName)

Because you can totally set properties on string literals in JavaScript (even though it's completely pointless), the parser thinks you're doing property access and assignment chaining.

The semicolon makes it unambiguous, and impressively, Prettier is smart enough to recognize that this particular semicolon is semantically significant and doesn't remove it like it normally would.
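
So the working version is simply:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'; // this semicolon is load-bearing
  [Symbol.toStringTag] = this.stringName
}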

Testing arrays vs. testing values

Completely unrelated to symbols, but I learned that Vitest's toBe() and toEqual() are different beasts:

expect(Object.keys(obj)).toBe(['publicProp']);     // Fails - different array objects
expect(Object.keys(obj)).toEqual(['publicProp']);  // Passes - same contents

toBe() uses reference equality (like Object.is()), so even arrays with identical contents are different objects. toEqual() does deep equality comparison. This seems obvious in hindsight, but when you're in the middle of testing symbol enumeration behavior, it's easy to forget that arrays are objects too.

The real lesson

I set out to learn about symbols and ended up with a tour of JavaScript's most questionable design decisions:

  • Type coercion that doesn't work the way anyone would expect
  • Operators that behave differently based on hints that don't correspond to actual usage
  • Tooling that warns against legitimate language features
  • Parsing ambiguities that require strategic semicolon placement
  • Privacy models that aren't actually private

This is exactly why "learn by doing" beats "read the documentation" every time. The docs would never tell you about the ESLint conflicts, the semicolon parsing gotcha, or the + operator's bizarre hint behavior. You only discover this stuff when you're actually writing code and things don't work the way they should.

The symbols themselves are fine - they do what they're supposed to do. It's everything else around them that's… erm… laden with interesting design decision "opportunities". [Cough].


The full code for this investigation is available in my learning-typescript repository if you want to see the gory details. Thanks to Claudia for helping debug the type coercion weirdness and for assistance with this write-up. Also props to GitHub Copilot for pointing out that I had three functions doing the same thing - sometimes the robots are right.

Righto.

--
Adam

Thursday, 25 September 2025

TypeScript namespaces: when the docs say one thing and ESLint says another

G'day:

This is one of those "the documentation says one thing, the tooling says another, what the hell am I actually supposed to do?" situations that seems to crop up constantly in modern JavaScript tooling.

I was working through TypeScript enums as part of my learning project, and I wanted to add methods to an enum - you know, the kind of thing you can do with PHP 8 enums where you can have both the enum values and associated behavior in the same construct. Seemed like a reasonable thing to want to do.

TypeScript enums don't support methods directly, but some digging around Stack Overflow led me to namespace merging as a solution. Fair enough - except as soon as I implemented it, ESLint started having a proper whinge about using namespaces at all.

Cue an hour of trying to figure out whether I was doing something fundamentally wrong, or whether the tooling ecosystem just hasn't caught up with legitimate use cases. Turns out it's a bit of both.

The contradiction

Here's what the official TypeScript documentation says about namespaces:

A note about terminology: It's important to note that in TypeScript 1.5, the nomenclature has changed. "Internal modules" are now "namespaces". "External modules" are now simply "modules", as to align with ECMAScript 2015's terminology, (namely that module X { is equivalent to the now-preferred namespace X {).

Note that "now-preferred" bit. Sounds encouraging, right?

And here's what the ESLint TypeScript rules say:

TypeScript historically allowed a form of code organization called "custom modules" (module Example {}), later renamed to "namespaces" (namespace Example). Namespaces are an outdated way to organize TypeScript code. ES2015 module syntax is now preferred (import/export).

So which is it? Are namespaces preferred, or are they outdated?

The answer, as usual with JavaScript tooling, is "it depends, and the documentation is misleading".

The TypeScript docs were written when they renamed the syntax from module to namespace - the "now-preferred" referred to using the namespace keyword instead of the old module keyword. It wasn't saying namespaces were preferred over ES modules; it was just clarifying the syntax change within the namespace feature itself.

The ESLint docs reflect current best practices: ES2015 modules (import/export) are indeed the standard way to organize code now. Namespaces are generally legacy for most use cases.

But "most use cases" isn't "all use cases". And this is where things get interesting.

The legitimate use case: enum methods

What I wanted to do was add a method to a TypeScript enum, similar to what you can do in PHP:

// What I wanted (conceptually)
enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
  
  // This doesn't work in TypeScript
  static fromValue(value: string): MaoriNumber {
    // ...
  }
}

The namespace merging approach lets you achieve this by declaring an enum and then a namespace with the same name:

// src/lt-15/namespaces.ts

export enum MaoriNumber {
  Tahi = 'one',
  Rua = 'two',
  Toru = 'three',
  Wha = 'four',
}

// eslint-disable-next-line @typescript-eslint/no-namespace
export namespace MaoriNumber {
  const enumKeysOnly = Object.keys(MaoriNumber).filter(
    (key) =>
      typeof MaoriNumber[key as keyof typeof MaoriNumber] !== 'function'
  )

  export function fromValue(value: string): MaoriNumber {
    const valueAsMaoriNumber: MaoriNumber = value as MaoriNumber
    const index = Object.values(MaoriNumber).indexOf(valueAsMaoriNumber);
    if (index === -1) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`);
    }
    const elementName: string = enumKeysOnly[index];
    const typedElementName = elementName as keyof typeof MaoriNumber;

    return MaoriNumber[typedElementName] as MaoriNumber;
  }
}

This gives you exactly what you want: MaoriNumber.Tahi for enum access and MaoriNumber.fromValue() for the method, all properly typed.

The // eslint-disable-next-line comment acknowledges that yes, I know namespaces are generally discouraged, but this is a specific case where they're the right tool for the job.

Why the complexity in fromValue?

You might wonder why that fromValue function is doing so much filtering and type casting. It's because of the namespace merging itself.

When you merge an enum with a namespace, TypeScript sees MaoriNumber as containing both the enum values and the functions. So Object.keys(MaoriNumber) returns:

['Tahi', 'Rua', 'Toru', 'Wha', 'fromValue']

And keyof typeof MaoriNumber becomes:

"Tahi" | "Rua" | "Toru" | "Wha" | "fromValue"

The filtering step removes the function keys so we only work with the actual enum values. The type assertions handle the fact that TypeScript can't statically analyze that our runtime filtering has eliminated the function possibility.

Sidebar: that keyof typeof bit took a while for me to work out. Well, I say "work out": I just read this Q&A on Stack Overflow: What does "keyof typeof" mean in TypeScript?. I didn't find anything useful in the actual docs. I looked at it more closely in some other code I wrote today… there might be an article in that too. We'll see (I'll cross-ref it here if I write it).
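
If you're in the same boat, here's the gist in miniature (my own toy example, nothing to do with the project code):

const sizes = { small: 1, large: 2 } as const

type SizeKey = keyof typeof sizes // "small" | "large"

function lookup(key: SizeKey): number {
  return sizes[key]
}

lookup('small') // fine
// lookup('medium') // compile error: not assignable to parameter of type '"small" | "large"'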

Testing the approach

The tests prove that both aspects work correctly:

// tests/lt-15/namespaces.test.ts

describe('Emulating enum with method', () => {
  it('has accessible enums', () => {
    expect(MaoriNumber.Tahi).toBe('one')
  })
  
  it('has accessible methods', () => {
    expect(MaoriNumber.fromValue('two')).toEqual(MaoriNumber.Rua)
  })
  
  it("won't fetch the method as an 'enum' entry", () => {
    expect(() => {
      MaoriNumber.fromValue('fromValue')
    }).toThrowError('Value "fromValue" is not a valid MaoriNumber')
  })
  
  it("will error if the string doesn't match a MaoriNumber", () => {
    expect(() => {
      MaoriNumber.fromValue('rima')
    }).toThrowError('Value "rima" is not a valid MaoriNumber')
  })
})

The edge case testing is important here - we want to make sure the function doesn't accidentally treat its own name as a valid enum value, and that it properly handles invalid inputs.

Alternative approaches

You could achieve similar functionality with a class and static methods:

const MaoriNumberValues = {
  Tahi: 'one',
  Rua: 'two', 
  Toru: 'three',
  Wha: 'four'
} as const

type MaoriNumber = typeof MaoriNumberValues[keyof typeof MaoriNumberValues]

class MaoriNumbers {
  static readonly Tahi = MaoriNumberValues.Tahi
  static readonly Rua = MaoriNumberValues.Rua
  static readonly Toru = MaoriNumberValues.Toru
  static readonly Wha = MaoriNumberValues.Wha
  
  static fromValue(value: string): MaoriNumber {
    // a sketch: reverse lookup against the values object
    const match = Object.values(MaoriNumberValues).find((v) => v === value)
    if (match === undefined) {
      throw new Error(`Value "${value}" is not a valid MaoriNumber`)
    }
    return match
  }
}

But this is more verbose, loses some of the enum benefits (like easy iteration), and doesn't give you the same clean MaoriNumber.Tahi syntax you get with the namespace approach.

So when should you use namespaces?

Based on this experience, I'd say namespace merging with enums is one of the few remaining legitimate use cases for TypeScript namespaces. The modern alternatives don't provide the same ergonomics for this specific pattern.

For everything else - code organisation, avoiding global pollution, grouping related functionality - ES modules are indeed the way forward. But when you need to add methods to enums and you want clean, intuitive syntax, namespace merging is still the right tool.

The key is being intentional about it. Use the ESLint disable comment to acknowledge that you're making a conscious choice, not just ignoring best practices out of laziness.

It's one of those situations where the general advice ("don't use namespaces") doesn't account for specific edge cases where they're still the best solution available. The tooling will complain, but sometimes the tooling is wrong.

I'll probably circle back to write up more about TypeScript enums in general - there's a fair bit more to explore there. But for now, I've got a working solution for enum methods that gives me the PHP-like behavior I was after, even if it did require wading through some contradictory documentation to get there.

Credit where it's due: Claudia (claude.ai) was instrumental in both working through the namespace merging approach and helping me understand the TypeScript type system quirks that made the implementation more complex than expected. The back-and-forth debugging of why MaoriNumber[typedElementName] was causing type errors was particularly useful - sometimes you need another perspective to spot what the compiler is actually complaining about. She also helped draft this article, which saved me a few hours of writing time. GitHub Copilot's code review feature has been surprisingly helpful too - it caught some genuine issues with error handling and performance that I'd missed during the initial implementation.

Righto.

--
Adam

Saturday, 6 September 2025

Setting up a TypeScript learning environment: Docker, TDD, and the inevitable config rabbit hole

G'day:

This is another one of those "I really should learn this properly" situations that's been nagging at me for a while now.

My approach to web development has gotten a bit stale. I'm still very much in the "app server renders markup and sends it to the browser" mindset, whereas the world has moved on to "browser runs the app and talks back to the server for server stuff". I've been dabbling with bits and pieces of modern JavaScript tooling, but it's all been very ad-hoc and surface-level. Time to get serious about it.

TypeScript seems like the sensible entry point into this brave new world. It's not like I'm allergic to types - I've been working with strongly-typed languages for decades. And from what I can see, the TypeScript ecosystem has matured to the point where it's not just hipster nonsense any more; it's become the pragmatic choice for serious JavaScript development.

I've decided it would be expedient to actually learn TypeScript properly, and I want to do it via TDD. That means I need a proper development environment set up: something that lets me write tests, run them quickly, and iterate on the code. Plus all the usual developer quality-of-life stuff like linting and formatting that stops me from having to think about trivial decisions.

The educational focus here is important. I'm not trying to build a production system; I'm trying to build a learning environment. That means optimizing for development speed and feedback loops, not for deployment efficiency or runtime performance. I want to be able to write a test, see it fail, write some code, see it pass, refactor, and repeat. Fast.

I always use Docker for everything these days - I won't install server software directly on my host machine in 2025. That decision alone introduces some complexity, but it's non-negotiable for me. The benefits of containerisation (isolation, reproducibility, easy cleanup) far outweigh the setup overhead.

And yes, I'm enough of a geek that I'm running this as a proper Jira project with tickets and everything. LT-7 was dockerizing the environment, LT-8 was getting TypeScript and Vitest working, LT-9 was ESLint and Prettier setup. It helps me track progress, maintain focus, and prevent rabbit-hole-ing - which, as you'll see, is a constant danger when setting up modern JavaScript tooling.

So this article documents the journey of setting up that environment. Spoiler alert: it took longer than actually learning the first few TypeScript concepts, but now I've got a solid foundation for iterative learning.

I should mention that I'm not tackling this learning project solo. I'm working with Claudia (okok, claude.ai. Fine. Whatever) as my TypeScript tutor. It's been an interesting experiment in AI-assisted learning - she's helping me understand concepts, troubleshoot setup issues, and even draft this article documenting the process. The back-and-forth has been surprisingly effective for working through both the technical challenges and the "why does this work this way" questions that come up constantly in modern JavaScript tooling.

This collaborative approach has turned out to be quite useful. I get to focus on the actual learning and problem-solving, while Claudia handles the research grunt work and helps me avoid some of the more obvious rabbit holes. Plus, having to explain what I'm doing and why I'm doing it (even to an AI) forces me to think more clearly about the decisions I'm making.

The Docker foundation

The first challenge was getting a Node.js environment running in Docker that wouldn't drive me mental. This sounds straightforward, but there are some non-obvious gotchas when you're trying to mount your source code into a container while still having node_modules work properly.

The core problem is this: you want your source code to be editable on the host machine (so your IDE can work with it), but you need node_modules to be installed inside the container (because native modules and platform-specific binaries). If you just mount your entire project directory into the container, you'll either overwrite the container's node_modules with whatever's on your host, or vice versa. Neither option ends well.

The solution is to use a separate named volume for node_modules:

# docker/docker-compose.yml

services:
    node:
        build:
            context: ..
            dockerfile: docker/node/Dockerfile

        volumes:
            - ..:/usr/src/app
            - node_modules:/usr/src/app/node_modules

        ports:
            - "51204:51204"

        stdin_open: true
        tty: true

volumes:
    node_modules:

This mounts the project root to /usr/src/app, but then overlays a separate volume specifically for the node_modules directory. The container gets its own node_modules that persists between container restarts, while the host machine never sees it.

The Dockerfile handles the initial npm install during the build process:

# docker/node/Dockerfile

FROM node:24-bullseye

RUN echo "alias ll='ls -alF'" >> ~/.bashrc
RUN echo "alias cls='clear; printf \"\033[3J\"'" >> ~/.bashrc

RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "zip", "unzip", "git", "vim"]
RUN ["apt-get", "install", "xdg-utils", "-y"]

WORKDIR  /usr/src/app
COPY package*.json ./
RUN npm install

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node --version || exit 1

Note the xdg-utils package - this turned out to be essential for getting Vitest's web UI working properly. Without it, the test runner couldn't open browser windows from within the container, which meant the UI server would start but be inaccessible.

This setup means I can edit files on my host machine using any editor, but all the Node.js execution happens inside the container with the correct dependencies. It also means I can blow away the entire environment and rebuild it from scratch without affecting my host machine - very important when you're experimenting with JavaScript tooling that changes its mind about best practices every six months.
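
In practice the blow-away-and-rebuild cycle is just a couple of commands (run from the project root; the -v is what drops the node_modules volume):

docker compose -f docker/docker-compose.yml down -v
docker compose -f docker/docker-compose.yml up --build -d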

TypeScript configuration basics

With the Docker environment sorted, the next step was getting TypeScript itself configured. This is where some fundamental decisions need to be made about how the development workflow will actually work.

The tsconfig.json ended up being fairly straightforward:

{
    "compilerOptions": {
        "target": "ES2020",
        "module": "commonjs",
        "sourceMap": true,
        "skipLibCheck": true,
        "noEmitOnError": false,
        "outDir": "./dist",
        "esModuleInterop": true
    },
    "include": ["src/**/*"],
    "watchOptions": {
        "watchDirectory": "useFsEvents"
    }
}

The key choices here were ES2020 as the target (modern enough to be useful, old enough to be stable) and CommonJS for modules (because that's what Node.js expects by default, and I didn't want to fight that battle yet). Source maps are essential for debugging, and noEmitOnError: false means TypeScript will still generate JavaScript even when there are type errors - useful during development when you want to test partially-working code.

But the more interesting decision was about the testing strategy. Do you test the compiled JavaScript in ./dist, or do you test the TypeScript source files directly?

I spent quite a bit of time going back and forth on this. On one hand, it's the dist code that actually runs in production, so surely that's what's important to test. On the other hand, it's the src code that I'm writing and thinking about, so that's what should be tested during development. I was almost ready to go with the dist approach until I came across a Stack Overflow discussion from 2015 that helped clarify the thinking.

The crux of it comes down to the distinction between developer testing and QA testing responsibilities. As a developer, my job is to ensure my code logic is correct and that my implementation matches my intent. That's fundamentally about the source code I'm writing. QA teams, on the other hand, are responsible for verifying that the entire application works correctly under real-world conditions - which includes testing the compiled/built artifacts.

For a learning environment where I'm focused on understanding TypeScript concepts and language features, testing the source makes perfect sense. I want immediate feedback on whether my understanding of TypeScript's type system is correct, not whether the JavaScript compiler is working properly (that's TypeScript's problem, not mine).

I went with testing the source files directly. This means Vitest needs to understand TypeScript natively, but the payoff is faster iteration. When I change a source file, the test runner can immediately re-run the relevant tests without waiting for TypeScript compilation. For a learning environment where I'm constantly making small changes and want immediate feedback, this speed matters more than having tests that exactly mirror a production deployment.
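
For what it's worth, the Vitest side of "test the source directly" needs almost no configuration. A minimal vitest.config.ts along these lines would do it (illustrative only - the options here are a sketch, not lifted verbatim from the project):

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    include: ['tests/**/*.test.ts'], // run the TypeScript tests directly; no compile step
    coverage: {
      provider: 'v8',
    },
  },
})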

The project structure reflects this educational focus:

src/
  lt-8/
    math.ts
    baseline.ts
    slow.ts
tests/
  lt-8/
    math.test.ts
    baseline.test.ts
    slow.test.ts

Each "learning ticket" gets its own subdirectory in both src and tests. This keeps different concepts isolated and makes it easy to look back at what I was working on during any particular phase. It also means I can experiment with one concept without accidentally breaking code from a previous lesson.

The numbered ticket approach might seem a bit over-engineered for a personal learning project, but it's proven useful for maintaining focus. Each directory represents a specific learning goal, and having that structure prevents me from mixing concerns or losing track of what I was supposed to be working on.

Vitest: the testing backbone

With TypeScript configured, the next step was getting a proper test runner in place. I'd heard good things about Vitest - it's designed specifically for modern JavaScript/TypeScript projects and promises to be fast and developer-friendly.

I'd evaluated JS test frameworks a few years ago, dismissed Jest as being poorly reasoned/implemented (despite being popular), and had run with Mocha/Chai/Sinon/etc, which seemed more sensible. I'd cut my teeth on Jasmine stuff years ago, but that's faded into obscurity these days. As of 2025, Jest seems to have stumbled and Vitest has come through as being faster, better, and more focused when it comes to TypeScript. So Vitest seems to be the way forward. Until the JS community does a reversal in probably two weeks' time and decides something else is the new shineyshiney. Anyway, for now it's Vitest. I hope its popularity lasts at least until I finish writing this article. Fuck sake.

The installation was straightforward enough:

npm install --save-dev vitest @vitest/coverage-v8 @vitest/ui

The basic configuration in package.json gives you everything you need:

"scripts": {
    "test": "vitest",
    "test:coverage": "vitest run --coverage",
    "test:ui": "vitest --ui --api.host 0.0.0.0"
}

That --api.host 0.0.0.0 bit is crucial when running inside Docker - without it, the UI server only binds to localhost inside the container, which means you can't access it from your host machine.

But the real magic is in Vitest's intelligent watch mode. This thing is genuinely clever about what it re-runs when files change. I created a deliberately slow test to demonstrate this to myself:

async function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

export async function mySlowFunction(): Promise<void> {
  console.log('Starting slow function...')
  await sleep(2000)
  console.log('Slow function finished.')
}

When I run the tests and then modify unrelated files, Vitest doesn't re-run the slow test. But when I change slow.ts itself, it immediately runs just that test and its dependencies. You can actually see the delay when that specific test runs, but other changes don't trigger it. It's a small thing, but it makes the development feedback loop much more pleasant when you're not waiting for irrelevant tests to complete.
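
For completeness, the test exercising it is nothing fancy - something along these lines (a sketch; the real one lives at tests/lt-8/slow.test.ts):

import { describe, it, expect } from 'vitest'
import { mySlowFunction } from '../../src/lt-8/slow'

describe('mySlowFunction', () => {
  it('takes at least two seconds', async () => {
    const started = Date.now()
    await mySlowFunction()
    expect(Date.now() - started).toBeGreaterThanOrEqual(2000)
  })
})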

The web UI is where things get really interesting though. Running npm run test:ui spins up a browser-based interface that gives you a visual overview of all your tests, coverage reports, and real-time updates as you change code. This is why I needed to expose port 51204 in the Docker configuration and install xdg-utils in the container.

The UI is really nice, and it's "live": it updates whenever I change any code. Pretty cool.

Without xdg-utils, Vitest can start the UI server but can't open browser windows from within the container. The package provides the utilities that Node.js applications expect to be able to launch external programs - in this case, web browsers. It's one of those dependencies that's not immediately obvious until something doesn't work, and then you spend an hour googling why your UI won't open.

The combination of fast command-line testing for quick feedback and the rich web UI for deeper analysis turned out to be exactly what I wanted for a learning environment. I can run tests continuously in the background while coding, but also dive into the visual interface when I want to understand coverage or debug failing tests.

Code quality tooling: ESLint and Prettier

With the testing foundation in place, the next step was getting proper code quality tooling set up. This is where things got a bit more involved, and where I had to make some decisions about what constitutes "good" TypeScript style.

First up was ESLint. The modern TypeScript approach uses typescript-eslint which provides TypeScript-specific linting rules. But there was an immediate gotcha: the configuration format.

ESLint has moved to a new "flat config" format, and naturally this means configuration files with different extensions. Enter the .mts file. WTF is an .mts file? It's TypeScript's way of saying "this is a TypeScript module that should be treated as an ES module regardless of your package.json settings". It's part of the ongoing CommonJS vs ES modules saga that the JavaScript world still hasn't fully sorted out. The .mts extension forces ES module semantics, while .cts would force CommonJS. Since ESLint's flat config expects ES modules, but my project is using CommonJS for everything else, I needed the .mts extension to make the config file work properly. (NB: Claudia wrote all that. She explained it to me and I got it enough to nod along and go "riiiiiiight…" in a way I thought was convincing at the time, but doesn't seem that way now I write it down).

The resulting eslint.config.mts ended up being reasonably straightforward:

import js from '@eslint/js'
import globals from 'globals'
import tseslint from 'typescript-eslint'
import { defineConfig } from 'eslint/config'
import eslintConfigPrettier from 'eslint-config-prettier/flat'

export default defineConfig([
  {
    files: ['**/*.{js,mjs,cjs,ts,mts,cts}'],
    plugins: { js },
    languageOptions: {
      globals: globals.node,
    },
  },
  {
    files: ['**/*.js'],
    languageOptions: {
      sourceType: 'commonjs',
    },
  },
  {
    rules: {
      'prefer-const': 'error',
      'no-var': 'error',
      'no-undef': 'error',
    },
  },
  tseslint.configs.recommended,
  eslintConfigPrettier,
])

The philosophy here is to split responsibilities: Prettier handles style and formatting, ESLint handles potential bugs and code quality issues. The eslint-config-prettier integration ensures these two don't step on each other's toes by disabling any ESLint rules that conflict with Prettier's formatting decisions.

Speaking of Prettier, this is where I had to do some soul-searching about applying rules I don't necessarily agree with. The .prettierrc configuration reflects a mix of TypeScript community zeitgeist and my own preferences:

{
    "semi": false,
    "trailingComma": "es5",
    "singleQuote": true,
    "printWidth": 80,
    "tabWidth": 2,
    "useTabs": false,
    "endOfLine": "lf"
}

The "semi": false bit aligns with my long-standing view that semicolons are for the computer, not the human - only use them when strictly necessary. Single quotes over double quotes is just personal preference.

But then we get to the contentious bits. Two-space indentation instead of four? I've been a four-space person for decades, but the TypeScript world has largely standardised on two spaces. Trailing commas in ES5 style? Again, this is considered best practice in modern JavaScript because it makes diffs cleaner when you add array or object elements, but it feels wrong to someone coming from more traditional languages.

In the end, I decided to go with the community defaults rather than fight them. When you're learning a new ecosystem, there's value in following the established conventions even when they don't match your personal preferences. It makes it easier to read other people's code, easier to contribute to open source projects, and easier to get help when you're stuck.

The tooling integration worked exactly as advertised. ESLint caught real issues - like when I deliberately used var instead of let or const - while Prettier handled all the formatting concerns automatically. It's a surprisingly pleasant development experience once it's all wired up.

IDE integration: VSCode vs IntelliJ

This is where things got properly annoying, and where I had to make some compromises I wasn't entirely happy with.

My preferred IDE is IntelliJ. I've been using JetBrains products for years, and their TypeScript/Node.js support is generally excellent. The problem isn't with IntelliJ's understanding of TypeScript - it's with IntelliJ's support for dockerised Node.js development.

Here's the issue: IntelliJ can see that Node.js is running in a container. It can connect to it, execute commands against it, and generally work with the containerised environment. But when it comes to the node_modules directory, it absolutely requires those modules to exist on the host machine as well. Even though it knows the actual execution is happening in the container, even though it can see the modules in the container filesystem, it won't provide proper IntelliSense or code completion without a local copy of node_modules.

This is a complete show-stopper for my setup. The whole point of the separate volume for node_modules is that the host machine never sees those files. I'm not going to run npm install on my host just to make IntelliJ happy - that defeats the entire purpose of containerisation.

So: VSCode it is. And to be fair, VSCode's Docker integration is genuinely well done. The Dev Containers extension understands the setup immediately, provides proper IntelliSense for all the containerised dependencies, and generally "just gets it" in a way that IntelliJ doesn't.

There are some annoyances though. VSCode has this file locking behaviour when running against mounted volumes that occasionally interferes with file operations. Nothing catastrophic, but the kind of minor friction that makes you appreciate how smooth things usually are in IntelliJ. Still, it's livable - and the benefits of having an IDE that properly understands your containerised development environment far outweigh the occasional file system hiccup.

Getting Prettier integrated into VSCode required a few configuration tweaks. I had to install the Prettier extension, then configure VSCode to use it as the default formatter and enable format-on-save. The key settings in .vscode/settings.json were:

{
    "editor.defaultFormatter": "esbenp.prettier-vscode",
    "prettier.configPath": ".prettierrc",
    "editor.formatOnPaste": true,
    "editor.formatOnSave": true
}

I got these from How to use Prettier with ESLint and TypeScript in VSCode › Formatting using VSCode on save (recommended).

The end result is a development environment where I can focus on learning TypeScript concepts rather than fighting with tooling. It's not my ideal setup - I'd rather be using IntelliJ - but it works well enough that the IDE choice doesn't get in the way of the actual learning.

Seeing it all work

With all the tooling in place, it was time to put it through its paces with some actual TypeScript code. I started with a baseline test just to prove that Vitest was operational:

// src/lt-8/baseline.ts
import process from 'node:process'

export function getNodeVersion(): string {
  return process.version
}

// tests/lt-8/baseline.test.ts
import { describe, it, expect } from 'vitest'
import { getNodeVersion } from '../../src/lt-8/baseline'

describe('tests vitest is operational and test TS code', () => {
  it('should return the current Node.js version', () => {
    const version = getNodeVersion()
    expect(version).toMatch(/^v24\.\d+\.\d+/)
  })
})

This isn't really testing TypeScript-specific features, but it proves that the basic infrastructure works - we're importing Node.js modules, calling functions, and verifying that we get the expected Node 24 version back. It's a good sanity check that the container environment and test runner are talking to each other properly.

"Interestingly" I ran this pull request through Github Copilot's code review mechanism, and it pulled me up for this test being fragile because I'm verifying the Node version is specifically 24, suggesting "this will break if the version isn't 24 Well… exactly mate. It's a test to verify I am running on the version we're expecting it to be! I guess though my test label 'should return the current Node.js version' is not correct. It should be 'should return the application\'s required Node.js version', or something.

The real TypeScript example was the math function:

// src/lt-8/math.ts
export function add(a: number, b: number): number {
  return a + b
}

// tests/lt-8/math.test.ts
import { describe, it, expect } from 'vitest'
import { add } from '../../src/lt-8/math'

describe('add function', () => {
  it('should return 3 when adding 1 and 2', () => {
    expect(add(1, 2)).toBe(3)
  })
})

Simple, but it demonstrates TypeScript's type annotations working properly. The function expects two numbers and returns a number, and TypeScript will complain if you try to pass strings or other types.
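
For instance (the second call is shown commented out because it doesn't compile):

add(1, 2)     // fine: 3
// add('1', 2) // error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.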

ESLint caught real issues too. When I deliberately changed the math function to use var instead of const:

export function add(a: number, b: number): number {
  var c = a + b
  return c
}

Running npx eslint src/lt-8/math.ts immediately flagged it:

/usr/src/app/src/lt-8/math.ts
  2:3  error  Unexpected var, use let or const instead  no-var
✖ 1 problem (1 error, 0 warnings)
1 error and 0 warnings potentially fixable with the --fix option.

Perfect - exactly the kind of feedback that helps enforce modern JavaScript practices. ESLint even suggested that it could auto-fix the issue, and running npx eslint src/lt-8/math.ts --fix would change var to const automatically.

I had some personal confusion here. My initial attempt at that var c = a + b thing was to omit the var entirely. But ESLint wasn't doing anything about it. Odd. WTF? It wasn't until Claudia explained to me that c = a + b doesn't compile in TS at all that it made sense. That way of init-ing a variable is fine in (sloppy-mode) JS - it just quietly creates a global - but TypeScript rejects it with a "cannot find name" error. It needs a proper declaration. And with no var in sight, the no-var rule had nothing to latch on to, so ESLint stayed silent. Poss something should have gone "ummm… that ain't code…?" though?

The Vitest watch mode proved its worth during development. Running npm test puts it into watch mode, and it sits there monitoring file changes. When I modify math.ts, it immediately re-runs just the math tests. When I modify slow.ts, it runs the slow test and I can see the 2-second delay. But when I modify unrelated files, it doesn't unnecessarily re-run tests that haven't been affected.

The web UI provides a nice visual overview of everything that's happening. You can see which tests are passing, which are failing, coverage reports, and real-time updates as you change code. It's particularly useful when you want to dive into test details or understand why something isn't working as expected.

All of this creates a pretty pleasant development feedback loop. Write a test, see it fail, write some code, see it pass, refactor, repeat. The tooling stays out of the way and just provides the information you need when you need it.

Was it worth the config rabbit hole?

Looking back at this whole exercise, I spent significantly more time setting up the development environment than I did actually learning TypeScript concepts. The irony isn't lost on me - I set out to learn a programming language and ended up writing a blog article about Docker volumes and ESLint configuration.

But honestly? Yes, it was worth it.

The alternative would have been to muddle through with a half-working setup, constantly fighting with tooling issues, or worse - learning TypeScript concepts incorrectly because my development environment wasn't giving me proper feedback. I've been down that road before with other technologies, and it's frustrating as hell.

What I have now is a solid foundation for iterative learning. I can write a test, see it fail, implement some TypeScript code, see it pass, and refactor - all with immediate feedback from the type checker, linter, and test runner. When I inevitably write something that doesn't make sense, the tooling will tell me quickly rather than letting me develop bad habits.

The TDD approach is working exactly as intended. Having tests that run automatically means I can experiment with TypeScript features without worrying about breaking existing code. The fast feedback loop means I can try things, see what happens, and iterate quickly.

Plus, this setup will serve me well beyond just learning the basics. When I'm ready to explore more advanced TypeScript features - generics, decorators, complex type manipulations - I'll have an environment that can handle it without needing another round of configuration hell.

The time investment was front-loaded, but now I can focus on the actual learning rather than fighting with tools. And frankly, understanding how to set up a modern TypeScript development environment is valuable knowledge in itself - it's not like I'm going to be working with TypeScript in isolation forever.

So yes, the config rabbit hole was worth it. Even if it did take longer than actually learning the difference between interface and type.

Righto.

--
Adam