G'day:
This is another one of those "I really should learn this properly" situations that's been nagging at me for a while now.
My approach to web development has gotten a bit stale. I'm still very much in the "app server renders markup and sends it to the browser" mindset, whereas the world has moved on to "browser runs the app and talks back to the server for server stuff". I've been dabbling with bits and pieces of modern JavaScript tooling, but it's all been very ad-hoc and surface-level. Time to get serious about it.
TypeScript seems like the sensible entry point into this brave new world. It's not like I'm allergic to types - I've been working with strongly-typed languages for decades. And from what I can see, the TypeScript ecosystem has matured to the point where it's not just hipster nonsense any more; it's become the pragmatic choice for serious JavaScript development.
I've decided it would be expedient to actually learn TypeScript properly, and I want to do it via TDD. That means I need a proper development environment set up: something that lets me write tests, run them quickly, and iterate on the code. Plus all the usual developer quality-of-life stuff like linting and formatting that stops me from having to think about trivial decisions.
The educational focus here is important. I'm not trying to build a production system; I'm trying to build a learning environment. That means optimizing for development speed and feedback loops, not for deployment efficiency or runtime performance. I want to be able to write a test, see it fail, write some code, see it pass, refactor, and repeat. Fast.
I always use Docker for everything these days - I won't install server software directly on my host machine in 2025. That decision alone introduces some complexity, but it's non-negotiable for me. The benefits of containerisation (isolation, reproducibility, easy cleanup) far outweigh the setup overhead.
And yes, I'm enough of a geek that I'm running this as a proper Jira project with tickets and everything. LT-7 was dockerizing the environment, LT-8 was getting TypeScript and Vitest working, LT-9 was ESLint and Prettier setup. It helps me track progress, maintain focus, and prevent rabbit-hole-ing - which, as you'll see, is a constant danger when setting up modern JavaScript tooling.
So this article documents the journey of setting up that environment. Spoiler alert: it took longer than actually learning the first few TypeScript concepts, but now I've got a solid foundation for iterative learning.
I should mention that I'm not tackling this learning project solo. I'm working with Claudia (okok, claude.ai. Fine. Whatever) as my TypeScript tutor. It's been an interesting experiment in AI-assisted learning - she's helping me understand concepts, troubleshoot setup issues, and even draft this article documenting the process. The back-and-forth has been surprisingly effective for working through both the technical challenges and the "why does this work this way" questions that come up constantly in modern JavaScript tooling.
This collaborative approach has turned out to be quite useful. I get to focus on the actual learning and problem-solving, while Claudia handles the research grunt work and helps me avoid some of the more obvious rabbit holes. Plus, having to explain what I'm doing and why I'm doing it (even to an AI) forces me to think more clearly about the decisions I'm making.
The Docker foundation
The first challenge was getting a Node.js environment running in Docker that wouldn't drive me mental. This sounds straightforward, but there are some non-obvious gotchas when you're trying to mount your source code into a container while still having node_modules work properly.
The core problem is this: you want your source code to be editable on the host machine (so your IDE can work with it), but you need node_modules to be installed inside the container (because native modules and platform-specific binaries). If you just mount your entire project directory into the container, you'll either overwrite the container's node_modules with whatever's on your host, or vice versa. Neither option ends well.
The solution is to use a separate named volume for node_modules:
# docker/docker-compose.yml
services:
  node:
    build:
      context: ..
      dockerfile: docker/node/Dockerfile
    volumes:
      - ..:/usr/src/app
      - node_modules:/usr/src/app/node_modules
    ports:
      - "51204:51204"
    stdin_open: true
    tty: true

volumes:
  node_modules:
This mounts the project root to /usr/src/app, but then overlays a separate volume specifically for the node_modules directory. The container gets its own node_modules that persists between container restarts, while the host machine never sees it.
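In practice the day-to-day lifecycle boils down to a handful of commands. This is a sketch, assuming the compose file lives in docker/ as above and the service is called node:

```shell
# Run from the docker/ directory (assumed location of docker-compose.yml)
docker compose up -d --build      # build the image and start the container
docker compose exec node bash     # get a shell inside it to run npm / vitest
docker compose down               # stop; the node_modules volume survives
docker compose down --volumes     # stop AND delete volumes: full clean rebuild
```

The last command is the "blow it all away" option: because node_modules lives in a named volume rather than on the host, deleting the volumes and rebuilding gives you a genuinely fresh environment.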
The Dockerfile handles the initial npm install during the build process:
# docker/node/Dockerfile
FROM node:24-bullseye
RUN echo "alias ll='ls -alF'" >> ~/.bashrc
RUN echo "alias cls='clear; printf \"\033[3J\"'" >> ~/.bashrc
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "zip", "unzip", "git", "vim"]
RUN ["apt-get", "install", "xdg-utils", "-y"]
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node --version || exit 1
Note the xdg-utils package - this turned out to be essential for getting Vitest's web UI working properly. Without it, the test runner couldn't open browser windows from within the container, which meant the UI server would start but be inaccessible.
This setup means I can edit files on my host machine using any editor, but all the Node.js execution happens inside the container with the correct dependencies. It also means I can blow away the entire environment and rebuild it from scratch without affecting my host machine - very important when you're experimenting with JavaScript tooling that changes its mind about best practices every six months.
TypeScript configuration basics
With the Docker environment sorted, the next step was getting TypeScript itself configured. This is where some fundamental decisions need to be made about how the development workflow will actually work.
The tsconfig.json ended up being fairly straightforward:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "sourceMap": true,
    "skipLibCheck": true,
    "noEmitOnError": false,
    "outDir": "./dist",
    "esModuleInterop": true
  },
  "include": ["src/**/*"],
  "watchOptions": {
    "watchDirectory": "useFsEvents"
  }
}
The key choices here were ES2020 as the target (modern enough to be useful, old enough to be stable) and CommonJS for modules (because that's what Node.js expects by default, and I didn't want to fight that battle yet). Source maps are essential for debugging, and noEmitOnError: false means TypeScript will still generate JavaScript even when there are type errors - useful during development when you want to test partially-working code.
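One of those flags is worth a concrete illustration. With esModuleInterop enabled, CommonJS modules can be imported with default-import syntax - a small sketch, nothing from the project itself:

```typescript
// With "esModuleInterop": true, a CommonJS module like node:path
// can be default-imported:
import path from 'node:path'
// Without the flag, you'd need the namespace form instead:
// import * as path from 'node:path'

console.log(path.join('src', 'lt-8', 'math.ts')) // src/lt-8/math.ts on POSIX
```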
But the more interesting decision was about the testing strategy. Do you test the compiled JavaScript in ./dist, or do you test the TypeScript source files directly?
I spent quite a bit of time going back and forth on this. On one hand, it's the dist code that actually runs in production, so surely that's what's important to test. On the other hand, it's the src code that I'm writing and thinking about, so that's what should be tested during development. I was almost ready to go with the dist approach until I came across a Stack Overflow discussion from 2015 that helped clarify the thinking.
The crux of it comes down to the distinction between developer testing and QA testing responsibilities. As a developer, my job is to ensure my code logic is correct and that my implementation matches my intent. That's fundamentally about the source code I'm writing. QA teams, on the other hand, are responsible for verifying that the entire application works correctly under real-world conditions - which includes testing the compiled/built artifacts.
For a learning environment where I'm focused on understanding TypeScript concepts and language features, testing the source makes perfect sense. I want immediate feedback on whether my understanding of TypeScript's type system is correct, not whether the JavaScript compiler is working properly (that's TypeScript's problem, not mine).
I went with testing the source files directly. This means Vitest needs to understand TypeScript natively, but the payoff is faster iteration. When I change a source file, the test runner can immediately re-run the relevant tests without waiting for TypeScript compilation. For a learning environment where I'm constantly making small changes and want immediate feedback, this speed matters more than having tests that exactly mirror a production deployment.
The project structure reflects this educational focus:
src/
  lt-8/
    math.ts
    baseline.ts
    slow.ts
tests/
  lt-8/
    math.test.ts
    baseline.test.ts
    slow.test.ts
Each "learning ticket" gets its own subdirectory in both src and tests. This keeps different concepts isolated and makes it easy to look back at what I was working on during any particular phase. It also means I can experiment with one concept without accidentally breaking code from a previous lesson.
The numbered ticket approach might seem a bit over-engineered for a personal learning project, but it's proven useful for maintaining focus. Each directory represents a specific learning goal, and having that structure prevents me from mixing concerns or losing track of what I was supposed to be working on.
Vitest: the testing backbone
With TypeScript configured, the next step was getting a proper test runner in place. I'd heard good things about Vitest - it's designed specifically for modern JavaScript/TypeScript projects and promises to be fast and developer-friendly.
I'd evaluated JS test frameworks a few years ago, and dismissed Jest as being poorly reasoned/implemented (despite being popular), and had run with Mocha/Chai/Sinon/etc, which seemed more sensible. I'd cut my teeth on Jasmine years ago, but that's faded into obscurity these days. As of 2025, Jest seems to have stumbled, and Vitest has come through as being faster, better, and more focused when it comes to TypeScript. So that seems to be the way forward. Until the JS community does a reversal in probably two weeks' time and decides something else is the new shineyshiney. Anyway, for now it's Vitest. I hope its popularity lasts at least until I finish writing this article. Fuck sake.
The installation was straightforward enough:
npm install --save-dev vitest @vitest/coverage-v8 @vitest/ui
The basic configuration in package.json gives you everything you need:
"scripts": {
  "test": "vitest",
  "test:coverage": "vitest run --coverage",
  "test:ui": "vitest --ui --api.host 0.0.0.0"
}
That --api.host 0.0.0.0 bit is crucial when running inside Docker - without it, the UI server only binds to localhost inside the container, which means you can't access it from your host machine.
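The same settings could live in a vitest.config.ts instead of CLI flags. The project drives it all from the npm scripts, so this file is hypothetical - a minimal sketch of the equivalent config:

```typescript
// vitest.config.ts — hypothetical alternative to the CLI flags above
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    include: ['tests/**/*.test.ts'],        // where the test files live
    api: { host: '0.0.0.0', port: 51204 },  // bind the UI server so Docker can expose it
  },
})
```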
But the real magic is in Vitest's intelligent watch mode. This thing is genuinely clever about what it re-runs when files change. I created a deliberately slow test to demonstrate this to myself:
// src/lt-8/slow.ts
async function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

export async function mySlowFunction(): Promise<void> {
  console.log('Starting slow function...')
  await sleep(2000)
  console.log('Slow function finished.')
}
When I run the tests and then modify unrelated files, Vitest doesn't re-run the slow test. But when I change slow.ts itself, it immediately runs just that test and its dependencies. You can actually see the delay when that specific test runs, but other changes don't trigger it. It's a small thing, but it makes the development feedback loop much more pleasant when you're not waiting for irrelevant tests to complete.
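The matching test file isn't shown in the article; it would be something along these lines (a sketch - the timing assertion is my guess at what's being exercised):

```typescript
// tests/lt-8/slow.test.ts — hypothetical, pairing with slow.ts above
import { describe, it, expect } from 'vitest'
import { mySlowFunction } from '../../src/lt-8/slow'

describe('mySlowFunction', () => {
  it('resolves after roughly two seconds', async () => {
    const start = Date.now()
    await mySlowFunction()
    // The function sleeps for 2000ms, so at least that much time must pass
    expect(Date.now() - start).toBeGreaterThanOrEqual(2000)
  })
})
```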
The web UI is where things get really interesting though. Running npm run test:ui spins up a browser-based interface that gives you a visual overview of all your tests, coverage reports, and real-time updates as you change code. This is why I needed to expose port 51204 in the Docker configuration and install xdg-utils in the container.
That's really nice, and it's "live": it updates whenever I change any code. Pretty cool.
Without xdg-utils, Vitest can start the UI server but can't open browser windows from within the container. The package provides the utilities that Node.js applications expect to be able to launch external programs - in this case, web browsers. It's one of those dependencies that's not immediately obvious until something doesn't work, and then you spend an hour googling why your UI won't open.
The combination of fast command-line testing for quick feedback and the rich web UI for deeper analysis turned out to be exactly what I wanted for a learning environment. I can run tests continuously in the background while coding, but also dive into the visual interface when I want to understand coverage or debug failing tests.
Code quality tooling: ESLint and Prettier
With the testing foundation in place, the next step was getting proper code quality tooling set up. This is where things got a bit more involved, and where I had to make some decisions about what constitutes "good" TypeScript style.
First up was ESLint. The modern TypeScript approach uses typescript-eslint which provides TypeScript-specific linting rules. But there was an immediate gotcha: the configuration format.
ESLint has moved to a new "flat config" format, and naturally this means configuration files with different extensions. Enter the .mts file. WTF is an .mts file? It's TypeScript's way of saying "this is a TypeScript module that should be treated as an ES module regardless of your package.json settings". It's part of the ongoing CommonJS vs ES modules saga that the JavaScript world still hasn't fully sorted out. The .mts extension forces ES module semantics, while .cts would force CommonJS. Since ESLint's flat config expects ES modules, but my project is using CommonJS for everything else, I needed the .mts extension to make the config file work properly. (NB: Claudia wrote all that. She explained it to me and I got it enough to nod along and go "riiiiiiight…" in a way I thought was convincing at the time, but doesn't seem that way now I write it down).
The resulting eslint.config.mts ended up being reasonably straightforward:
import js from '@eslint/js'
import globals from 'globals'
import tseslint from 'typescript-eslint'
import { defineConfig } from 'eslint/config'
import eslintConfigPrettier from 'eslint-config-prettier/flat'

export default defineConfig([
  {
    files: ['**/*.{js,mjs,cjs,ts,mts,cts}'],
    plugins: { js },
    languageOptions: {
      globals: globals.node,
    },
  },
  {
    files: ['**/*.js'],
    languageOptions: {
      sourceType: 'commonjs',
    },
  },
  {
    rules: {
      'prefer-const': 'error',
      'no-var': 'error',
      'no-undef': 'error',
    },
  },
  tseslint.configs.recommended,
  eslintConfigPrettier,
])
The philosophy here is to split responsibilities: Prettier handles style and formatting, ESLint handles potential bugs and code quality issues. The eslint-config-prettier integration ensures these two don't step on each other's toes by disabling any ESLint rules that conflict with Prettier's formatting decisions.
Speaking of Prettier, this is where I had to do some soul-searching about applying rules I don't necessarily agree with. The .prettierrc configuration reflects a mix of TypeScript community zeitgeist and my own preferences:
{
  "semi": false,
  "trailingComma": "es5",
  "singleQuote": true,
  "printWidth": 80,
  "tabWidth": 2,
  "useTabs": false,
  "endOfLine": "lf"
}
The "semi": false bit aligns with my long-standing view that semicolons are for the computer, not the human - only use them when strictly necessary. Single quotes over double quotes is just personal preference.
But then we get to the contentious bits. Two-space indentation instead of four? I've been a four-space person for decades, but the TypeScript world has largely standardised on two spaces. Trailing commas in ES5 style? Again, this is considered best practice in modern JavaScript because it makes diffs cleaner when you add array or object elements, but it feels wrong to someone coming from more traditional languages.
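The trailing-comma argument is easier to see in a diff. With "trailingComma": "es5", adding an element only touches the new line, because the previous line already ends with a comma - a contrived sketch:

```typescript
// Adding 'slow.ts' here is a one-line diff: 'baseline.ts' already
// has its trailing comma, so that line doesn't change.
const lessons = [
  'math.ts',
  'baseline.ts',
  'slow.ts',
]

console.log(lessons.length) // 3
```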
In the end, I decided to go with the community defaults rather than fight them. When you're learning a new ecosystem, there's value in following the established conventions even when they don't match your personal preferences. It makes it easier to read other people's code, easier to contribute to open source projects, and easier to get help when you're stuck.
The tooling integration worked exactly as advertised. ESLint caught real issues - like when I deliberately used var instead of let or const - while Prettier handled all the formatting concerns automatically. It's a surprisingly pleasant development experience once it's all wired up.
IDE integration: VSCode vs IntelliJ
This is where things got properly annoying, and where I had to make some compromises I wasn't entirely happy with.
My preferred IDE is IntelliJ. I've been using JetBrains products for years, and their TypeScript/Node.js support is generally excellent. The problem isn't with IntelliJ's understanding of TypeScript - it's with IntelliJ's support for dockerised Node.js development.
Here's the issue: IntelliJ can see that Node.js is running in a container. It can connect to it, execute commands against it, and generally work with the containerised environment. But when it comes to the node_modules directory, it absolutely requires those modules to exist on the host machine as well. Even though it knows the actual execution is happening in the container, even though it can see the modules in the container filesystem, it won't provide proper IntelliSense or code completion without a local copy of node_modules.
This is a complete show-stopper for my setup. The whole point of the separate volume for node_modules is that the host machine never sees those files. I'm not going to run npm install on my host just to make IntelliJ happy - that defeats the entire purpose of containerisation.
So: VSCode it is. And to be fair, VSCode's Docker integration is genuinely well done. The Dev Containers extension understands the setup immediately, provides proper IntelliSense for all the containerised dependencies, and generally "just gets it" in a way that IntelliJ doesn't.
There are some annoyances though. VSCode has this file locking behaviour when running against mounted volumes that occasionally interferes with file operations. Nothing catastrophic, but the kind of minor friction that makes you appreciate how smooth things usually are in IntelliJ. Still, it's livable - and the benefits of having an IDE that properly understands your containerised development environment far outweigh the occasional file system hiccup.
Getting Prettier integrated into VSCode required a few configuration tweaks. I had to install the Prettier extension, then configure VSCode to use it as the default formatter and enable format-on-save. The key settings in .vscode/settings.json were:
{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "prettier.configPath": ".prettierrc",
  "editor.formatOnPaste": true,
  "editor.formatOnSave": true
}
I got these from "How to use Prettier with ESLint and TypeScript in VSCode › Formatting using VSCode on save (recommended)".
The end result is a development environment where I can focus on learning TypeScript concepts rather than fighting with tooling. It's not my ideal setup - I'd rather be using IntelliJ - but it works well enough that the IDE choice doesn't get in the way of the actual learning.
Seeing it all work
With all the tooling in place, it was time to put it through its paces with some actual TypeScript code. I started with a baseline test just to prove that Vitest was operational:
// src/lt-8/baseline.ts
import process from 'node:process'

export function getNodeVersion(): string {
  return process.version
}

// tests/lt-8/baseline.test.ts
import { describe, it, expect } from 'vitest'
import { getNodeVersion } from '../../src/lt-8/baseline'

describe('tests vitest is operational and test TS code', () => {
  it('should return the current Node.js version', () => {
    const version = getNodeVersion()
    expect(version).toMatch(/^v24\.\d+\.\d+/)
  })
})
This isn't really testing TypeScript-specific features, but it proves that the basic infrastructure works - we're importing Node.js modules, calling functions, and verifying that we get the expected Node 24 version back. It's a good sanity check that the container environment and test runner are talking to each other properly.
"Interestingly" I ran this pull request through GitHub Copilot's code review mechanism, and it pulled me up for this test being fragile because I'm verifying the Node version is specifically 24, suggesting "this will break if the version isn't 24". Well… exactly, mate. It's a test to verify I am running on the version we're expecting it to be! I guess, though, that my test label 'should return the current Node.js version' is not correct. It should be 'should return the application's required Node.js version', or something.
The real TypeScript example was the math function:
// src/lt-8/math.ts
export function add(a: number, b: number): number {
  return a + b
}

// tests/lt-8/math.test.ts
import { describe, it, expect } from 'vitest'
import { add } from '../../src/lt-8/math'

describe('add function', () => {
  it('should return 3 when adding 1 and 2', () => {
    expect(add(1, 2)).toBe(3)
  })
})
Simple, but it demonstrates TypeScript's type annotations working properly. The function expects two numbers and returns a number, and TypeScript will complain if you try to pass strings or other types.
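To see that complaint in action - a self-contained sketch (the @ts-expect-error line is mine, marking what tsc rejects):

```typescript
// Same signature as src/lt-8/math.ts
function add(a: number, b: number): number {
  return a + b
}

// Fine: both arguments are numbers
const ok = add(1, 2)

// tsc rejects a string where a number is expected:
// @ts-expect-error — "Argument of type 'string' is not assignable to parameter of type 'number'."
const notOk = add('1', 2)

console.log(ok) // 3
```

Note that at runtime the bad call would still "work" (JavaScript happily concatenates '1' + 2 into '12') - the whole point is that TypeScript catches it before it ever runs.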
ESLint caught real issues too. When I deliberately changed the math function to use var instead of const:
export function add(a: number, b: number): number {
  var c = a + b
  return c
}
Running npx eslint src/lt-8/math.ts immediately flagged it:
/usr/src/app/src/lt-8/math.ts
  2:3  error  Unexpected var, use let or const instead  no-var

✖ 1 problem (1 error, 0 warnings)
  1 error and 0 warnings potentially fixable with the --fix option.
Perfect - exactly the kind of feedback that helps enforce modern JavaScript practices. ESLint even suggested that it could auto-fix the issue, and running npx eslint src/lt-8/math.ts --fix would change var to const automatically.
I had some personal confusion here. My initial attempt at that var c = a + b thing was to omit the var entirely. But ESLint wasn't doing anything about it. Odd. WTF? It wasn't until Claudia explained what was going on that it made sense: assigning to an undeclared variable like c = a + b is fine in loose-mode JS (it silently creates a global), but TypeScript rejects it as a compile error - "Cannot find name 'c'". The variable needs a declaration. And because tsc already catches undeclared names, typescript-eslint's recommended config switches off ESLint's own no-undef rule for TypeScript files - so ESLint was deliberately leaving that complaint to the compiler, rather than failing to notice it. Poss it should still have gone "ummm… that ain't valid code…?" though?
The Vitest watch mode proved its worth during development. Running npm test puts it into watch mode, and it sits there monitoring file changes. When I modify math.ts, it immediately re-runs just the math tests. When I modify slow.ts, it runs the slow test and I can see the 2-second delay. But when I modify unrelated files, it doesn't unnecessarily re-run tests that haven't been affected.
The web UI provides a nice visual overview of everything that's happening. You can see which tests are passing, which are failing, coverage reports, and real-time updates as you change code. It's particularly useful when you want to dive into test details or understand why something isn't working as expected.
All of this creates a pretty pleasant development feedback loop. Write a test, see it fail, write some code, see it pass, refactor, repeat. The tooling stays out of the way and just provides the information you need when you need it.
Was it worth the config rabbit hole?
Looking back at this whole exercise, I spent significantly more time setting up the development environment than I did actually learning TypeScript concepts. The irony isn't lost on me - I set out to learn a programming language and ended up writing a blog article about Docker volumes and ESLint configuration.
But honestly? Yes, it was worth it.
The alternative would have been to muddle through with a half-working setup, constantly fighting with tooling issues, or worse - learning TypeScript concepts incorrectly because my development environment wasn't giving me proper feedback. I've been down that road before with other technologies, and it's frustrating as hell.
What I have now is a solid foundation for iterative learning. I can write a test, see it fail, implement some TypeScript code, see it pass, and refactor - all with immediate feedback from the type checker, linter, and test runner. When I inevitably write something that doesn't make sense, the tooling will tell me quickly rather than letting me develop bad habits.
The TDD approach is working exactly as intended. Having tests that run automatically means I can experiment with TypeScript features without worrying about breaking existing code. The fast feedback loop means I can try things, see what happens, and iterate quickly.
Plus, this setup will serve me well beyond just learning the basics. When I'm ready to explore more advanced TypeScript features - generics, decorators, complex type manipulations - I'll have an environment that can handle it without needing another round of configuration hell.
The time investment was front-loaded, but now I can focus on the actual learning rather than fighting with tools. And frankly, understanding how to set up a modern TypeScript development environment is valuable knowledge in itself - it's not like I'm going to be working with TypeScript in isolation forever.
So yes, the config rabbit hole was worth it. Even if it did take longer than actually learning the difference between interface and type.
Righto.
--
Adam