The Golden Age of Server Side JavaScript


Two decades ago I first wrote and fell in love with PHP (4.3). It allowed me to build web applications that I could share with my friends. It even helped me pay for college. At first I wrote CRUD apps, and later games. The frontends I wrote were mostly static, but eventually I learned enough JavaScript and CSS to make things interesting. There was a big distinction between what ran on the server and what ran in the browser, as the programming languages were different. PHP does a very good job with applications tied to a single request, and JavaScript does a very good job with event-driven applications.

A decade after that I discovered Node.js (0.4). Nearly overnight I stopped writing PHP and went all in on Node.js. Sure, I no longer had a syntactical way to differentiate what code ran on the server and what ran in the browser, but it wasn't all that bad. There would be some differences, notably that the version of JavaScript shipping with Node.js prior to version 1.0 was outdated, but I made do. We all did.

Using one language for code running in the browser and on the server was neat, but it was never why I liked Node.js. In my mind, much like a decade before when I wrote PHP and JS, the code I was writing now was still distinct. There was the server JavaScript, and then there was the browser JavaScript. The patterns I implemented in each were different. Error handling, performance, and security were much higher priorities on the server than in the browser.

Watching the runtime evolve over the next several years was quite a journey. There was the drama, the io.js fork, and later the great merger. The major version numbers continued to tick upward, V8 got faster, and JavaScript improvements were roped in as well. Native modules from the community were replaced with pure JavaScript modules and deployments got easier. npm finally introduced a lock file and I no longer had to share node_modules tarballs with colleagues.

Node.js reached an amazing place, somewhere around the v12 era, where it was one hell of a platform for building servers.

Development doesn't happen in isolation. Node.js gained popularity due in large part to the ecosystem of npm packages. This one-language-everywhere pattern really helped Node.js gain developers as well. Overnight, companies with a dozen "frontend engineers" could switch to Node.js and then have a dozen "fullstack engineers".

Build tooling written in JavaScript, running on Node.js, was constructed to help evolve the language. Features that were still in the planning phase and not yet implemented in JavaScript engines could be written by developers and then transpiled into a version of JavaScript that could run anywhere. The community could then experiment with these features, suss out issues, and the changes would eventually be implemented upstream.
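
As a rough illustration, here's the shape of what a transpiler such as Babel does; config is a hypothetical object here, and the lowered output is approximate:

// A hypothetical config object:
var config = { server: {} };

// What the developer writes, using then-unsupported syntax:
const port = config?.server?.port ?? 3000;

// Roughly what the transpiler emits for engines without ?. and ??:
var portLegacy =
  config != null && config.server != null && config.server.port != null
    ? config.server.port
    : 3000;

console.log(port, portLegacy); // => 3000 3000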

The upstreaming of functionality is a very powerful thing indeed. Implementing features in userspace first proves the importance of said features. CoffeeScript led the way for arrow functions. The left-pad debacle inspired String#padStart(). The dozens of highly downloaded UUID generation packages led the way for crypto.randomUUID(). Truly the best fate of userspace functionality is to be made obsolete after getting upstreamed.
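
Both of those replacements are visible in a couple lines of modern Node.js (crypto.randomUUID() landed in v14.17):

// left-pad's job, now built into the language:
'5'.padStart(3, '0'); // => '005'

// The common case for the uuid packages, now built into Node.js:
const { randomUUID } = require('crypto');
randomUUID(); // => e.g. '89b6e8b0-17c5-4f3f-b397-3b1bd21e1b52'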

Of course, other build tooling came about as well. Linters and test runners are obvious ones. Some became more ingrained, such as JSX and React, and of course TypeScript. Building this tooling with Node.js makes a lot of sense: the developers who use it are also able to contribute to it.

This is about when the term Isomorphic JavaScript, and later Universal JavaScript, was coined. All of this build tooling, and the ubiquity of the npm registry, made it so that developers could write JavaScript that runs both in the browser and on the server. A lot of this was an obvious win, like validating that a string meets certain requirements. Other Universal JavaScript was a bit more complex; for example, code relying on a browser-only API or a Node.js-only API would need some sort of polyfill or other form of transpilation to make it work.
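
The easy case is worth a quick sketch; isValidUsername() is a hypothetical helper, but it shows the idea: pure JavaScript with no platform-specific APIs runs unchanged in both environments.

// No browser-only or Node.js-only APIs, so the exact same code
// runs on the server and in the browser.
function isValidUsername(str) {
  return typeof str === 'string' && /^[a-z0-9_]{3,16}$/.test(str);
}

isValidUsername('thomas_hunter'); // => true
isValidUsername('no spaces!');    // => false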

This is about when JavaScript burnout happened as well. The typical Node.js project, especially those of moderate complexity, now had so much build tooling involved that maintenance became difficult. Bit rot, a term referring to old projects that become difficult to run, certainly appeared to be accelerated by code related to build tooling. For the most part, a non-universal Node.js application, one with few npm packages, continues to run over time. However, those with complex build tooling and universal JavaScript end up being the most fragile.

Of course, complaints in issue trackers, pull requests, and community interactions start to follow a trend. Maintainers of Node.js saw that a lot of these polyfill and transpilation issues were frustrating the community, and so the platform evolved. Node.js, which started off as a way to run servers, began pulling in more and more APIs from the browser.

Some of these borrowed APIs are a very obvious win for Node.js. Before Typed Arrays existed, Node.js needed a way to represent binary data and so invented the Buffer. With the modern JavaScript provided by V8 we now have Typed Arrays. So, it makes sense to update the Node.js APIs to accept Typed Arrays wherever Buffers are accepted. Similarly beneficial APIs include Web Streams, Web Crypto, URL and its friend URLSearchParams, Intl, and tons of things we take for granted like setTimeout.
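
In fact, Buffer is a subclass of Uint8Array in modern Node.js, which is what makes accepting either type so natural:

const fs = require('fs');

const buf = Buffer.from('hello');
console.log(buf instanceof Uint8Array); // => true

// APIs that historically took a Buffer now accept any Typed Array:
fs.writeFileSync('greeting.txt', new TextEncoder().encode('hello'));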

Some of these borrowed APIs come with caveats. The fetch() API, which offers much better developer ergonomics than the built-in http module, has some security caveats when translating the API, intended for a single user in the browser, to the server, where requests can represent many different users. Overall the API makes for a nice replacement for the once-ubiquitous request library, and it's convenient that the "docs are already written".
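
The ergonomics difference shows up in even a simple GET request; example.com stands in for a real endpoint, and global fetch() has shipped enabled by default since Node.js v18:

// The classic approach: callback- and stream-based.
const http = require('http');
http.get('http://example.com/', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(body));
});

// The borrowed browser API: promise-based, no require needed.
fetch('http://example.com/')
  .then((res) => res.text())
  .then((body) => console.log(body));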

Finally, there are APIs that offer questionable benefit to a server environment, and that really only appease the crowd looking to offload work from the transpilers, or who otherwise wish to write universal JavaScript. Think of globalThis. Or the btoa and atob functions, which have direct Buffer equivalents:

// Equivalent of the browser's btoa(): binary string to base64.
const btoa = (str) => Buffer.from(str, 'binary').toString('base64');
// Equivalent of the browser's atob(): base64 to binary string.
const atob = (enc) => Buffer.from(enc, 'base64').toString('binary');

Regardless of how helpful these APIs are, the way they're exposed is usually also chosen for universal JavaScript purposes. For example, URL is available as global.URL, meaning it's always in scope. Even if your program doesn't touch HTTP, that global is still there in memory. Traditionally, Node.js solved the problem of keeping different APIs locked up until needed by hiding them behind require() statements. For example, the standard way of making an HTTP request requires you to require it:

const http = require('http');
// Nothing network-related is in scope until explicitly required:
http.request('http://example.com/', (res) => res.resume()).end();

In the case of fetch, while I find it useful in my applications, I would have personally preferred to see it hidden behind a require. Imagine one of the following:

// Option one: exported from the existing http module.
const { fetch } = require('http');
// Option two: a dedicated core module of its own.
const fetch = require('node:fetch');

Requiring a require() before accessing such APIs makes it easier to audit application code and find out which modules are talking to the network. It would also allow such APIs to be lazily loaded, which in theory means faster process starts.

Is this a wrong way to look at Node.js? Is it silly to think that Node.js should not embrace universal JavaScript?

Node.js was fun when it was new. Frontend devs killed it. The complexity of modern apps now makes it an unenjoyable experience. It's impossible to find real Node.js jobs, as they're pigeonholed into roles that just use it to build frontend tooling.

There are of course alternative JavaScript platforms out there, the most popular of them being Deno and Bun, and esoteric ones like workerd. Deno's goal is to embrace web APIs even more. It supports things with no server equivalent at all, like window.close() and alert().

What would a clean-room implementation of a JavaScript runtime look like? One that isn't beholden to APIs that only make sense in the browser? It would probably hide everything behind import or require statements, even things that we take for granted. Instead of a global setTimeout, it could be available as require('timers').setTimeout (even in Node.js, setTimeout === require('timers').setTimeout). This cleans up the global namespace and makes code audits easier.
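
That pattern already works in today's Node.js, and it gives a taste of what such a runtime might feel like:

// No globals assumed; the capability is explicitly imported.
const { setTimeout } = require('timers');

setTimeout(() => {
  console.log('one second later');
}, 1000);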

The other big change would be to not pull in browser APIs.

Thomas Hunter II

Thomas has contributed to dozens of enterprise Node.js services and has worked for a company dedicated to securing Node.js. He has spoken at several conferences on Node.js and JavaScript and is an O'Reilly published author.