Saturday, 27 September 2025

JavaScript Symbols: when learning one thing teaches you fifteen others

G'day:

This is one of those "I thought I was learning one thing but ended up discovering fifteen other weird JavaScript behaviors" situations that seems to happen every time I try to understand a JavaScript feature properly.

I was working through my TypeScript learning project, specifically tackling symbols (TS / JS) as part of understanding primitive types. Seemed straightforward enough - symbols are unique primitive values, used for creating "private" object properties and implementing well-known protocols. Easy, right?

Wrong. What started as "symbols are just unique identifiers" quickly turned into a masterclass in JavaScript's most bizarre type coercion behaviors, ESLint's opinions about legitimate code patterns, and why semicolons sometimes matter more than you think.

The basics (that aren't actually basic)

Symbols are primitive values that are guaranteed to be unique:

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false - always unique

Except when they're not unique, because Symbol.for() maintains a global registry:

const s1 = Symbol.for('my-key');
const s2 = Symbol.for('my-key');
console.log(s1 === s2); // true - same symbol from registry

Fair enough. And you can't call Symbol as a constructor (unlike literally every other primitive wrapper):

const sym = new Symbol(); // TypeError: Symbol is not a constructor

This seemed like a reasonable safety feature until I tried to test it, and discovered that TypeScript will only let you write this nonsense if you cast Symbol to any first - at which point ESLint immediately starts complaining about the any cast required to make it "work".
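
For the record, here's roughly the sort of test I mean (a sketch, assuming Vitest and a fairly standard typescript-eslint setup); the cast to any is the bit the linter takes exception to:

import { describe, expect, it } from 'vitest';

describe('Symbol', () => {
  it('cannot be called as a constructor', () => {
    // the cast gets past the compiler; it's also exactly what ESLint objects to
    const BrokenSymbol = Symbol as any;
    expect(() => new BrokenSymbol()).toThrow(TypeError);
  });
});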

Where things get properly weird

The real fun starts when you encounter the well-known symbols - particularly Symbol.toPrimitive. This lets you control how objects get converted to primitive values, which sounds useful until you actually try to use it.

Here's a class that implements custom primitive conversion:

export class SomeClass {
  [Symbol.toPrimitive](hint: string) {
    if (hint === 'number') {
      return 42;
    }
    if (hint === 'string') {
      return 'forty-two';
    }
    return 'default';
  }
}

(from symbols.ts)

Now, which conversion do you think obj + '' would trigger? If you guessed "string", because you're concatenating with a string, you'd be wrong. It actually triggers the "default" hint because JavaScript's + operator is fundamentally broken.

The + operator with mixed types calls toPrimitive with hint "default", not "string". JavaScript has to decide whether this is addition or concatenation before converting the operands, so it plays it safe with the default hint. Only explicit string conversion like String(obj) or template literals get the string hint.
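
To make the hints concrete, here's a quick sketch of how the conversions shake out with the class above (the comments show what each one produces):

const obj = new SomeClass();

console.log(Number(obj));   // 42 - explicit numeric conversion uses the "number" hint
console.log(String(obj));   // 'forty-two' - explicit string conversion uses the "string" hint
console.log(obj + '');      // 'default' - + can't decide what it is yet, so it uses the "default" hint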

This is the kind of language design decision that makes you question whether the people who created JavaScript have ever actually used JavaScript.

ESLint vs. reality

Speaking of questionable decisions, try writing the template literal version:

expect(`${obj}`).toBe('forty-two');

ESLint immediately complains: "Invalid type of template literal expression". It sees a custom class being used in string interpolation and assumes you've made a mistake, despite this being exactly what Symbol.toPrimitive is designed for.

You end up with this choice:

  1. Suppress the ESLint rule for legitimate symbol behavior
  2. Use String(obj) explicitly (which actually works better anyway)
  3. Cast to any and deal with ESLint complaining about that instead

Modern tooling is supposedly designed to help us write better code, but it turns out "better" doesn't include using JavaScript's actual primitive conversion protocols.
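
For what it's worth, option 2 ends up being the path of least resistance. Something along these lines (again assuming Vitest) keeps both the compiler and ESLint quiet:

const obj = new SomeClass();

// explicit conversion: gets the "string" hint, and ESLint has nothing to moan about
expect(String(obj)).toBe('forty-two');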

Symbols as "secret" properties

The privacy model for symbols is... interesting. They're hidden from normal enumeration but completely discoverable if you know where to look:

const secret1 = Symbol('secret1');
const secret2 = Symbol('secret2');

const obj = {
  publicProp: 'visible',
  [secret1]: 'hidden',
  [secret2]: 'also hidden'
};

console.log(Object.keys(obj));                    // ['publicProp']
console.log(JSON.stringify(obj));                 // {"publicProp":"visible"}
console.log(Object.getOwnPropertySymbols(obj));   // [Symbol(secret1), Symbol(secret2)]
console.log(Reflect.ownKeys(obj));                // ['publicProp', Symbol(secret1), Symbol(secret2)]

So symbols provide privacy from accidental access, but not from intentional inspection. It's like having a door that's closed but not locked - good enough to prevent accidents, useless against anyone who actually wants to get in.
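
To labour the point: anyone with a reference to the object can fish the symbols back out and use them directly:

const [firstSymbol] = Object.getOwnPropertySymbols(obj);
console.log(firstSymbol === secret1);  // true - it's the very symbol we "hid" the property with
console.log(obj[secret1]);             // 'hidden' - so the value is right there for the taking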

Semicolons matter (sometimes)

While implementing symbol properties, I discovered this delightful parsing ambiguity:

export class SomeClass {
  private stringName: string = 'StringNameOfClass'
  [Symbol.toStringTag] = this.stringName  // Prettier goes mental
}

Without a semicolon after the first line, the parser (and therefore Prettier) interprets this as:

private stringName: string = ('StringNameOfClass'[Symbol.toStringTag] = this.stringName)

Because you can totally set properties on string literals in JavaScript (even though it's completely pointless), the parser thinks you're doing property access and assignment chaining.

The semicolon makes it unambiguous, and impressively, Prettier is smart enough to recognize that this particular semicolon is semantically significant and doesn't remove it like it normally would.
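
For completeness, here's the version that parses the way I actually meant, semicolon and all:

export class SomeClass {
  private stringName: string = 'StringNameOfClass';  // this semicolon is load-bearing
  [Symbol.toStringTag] = this.stringName;
}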

Testing arrays vs. testing values

Completely unrelated to symbols, but I learned that Vitest's toBe() and toEqual() are different beasts:

expect(Object.keys(obj)).toBe(['publicProp']);     // Fails - different array objects
expect(Object.keys(obj)).toEqual(['publicProp']);  // Passes - same contents

toBe() uses reference equality (like Object.is()), so even arrays with identical contents are different objects. toEqual() does deep equality comparison. This seems obvious in hindsight, but when you're in the middle of testing symbol enumeration behavior, it's easy to forget that arrays are objects too.
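
If you want to see the distinction without Vitest in the mix, it's just reference equality at work:

const a = ['publicProp'];
const b = ['publicProp'];

console.log(Object.is(a, b));  // false - same contents, different array objects
console.log(Object.is(a, a));  // true - literally the same object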

The real lesson

I set out to learn about symbols and ended up with a tour of JavaScript's most questionable design decisions:

  • Type coercion that doesn't work the way anyone would expect
  • Operators that behave differently based on hints that don't correspond to actual usage
  • Tooling that warns against legitimate language features
  • Parsing ambiguities that require strategic semicolon placement
  • Privacy models that aren't actually private

This is exactly why "learn by doing" beats "read the documentation" every time. The docs would never tell you about the ESLint conflicts, the semicolon parsing gotcha, or the + operator's bizarre hint behavior. You only discover this stuff when you're actually writing code and things don't work the way they should.

The symbols themselves are fine - they do what they're supposed to do. It's everything else around them that's… erm… laden with interesting design decision "opportunities". [Cough].


The full code for this investigation is available in my learning-typescript repository if you want to see the gory details. Thanks to Claudia for helping debug the type coercion weirdness and for assistance with this write-up. Also props to GitHub Copilot for pointing out that I had three functions doing the same thing - sometimes the robots are right.

Righto.

--
Adam