In my career so far (which isn’t that long… but also not that short 😉), I’ve been lucky enough to not be subject to strict deadlines. That’s why I’m always in awe every time I talk to fellow long-time Microsoft employees. A thread! 🧵

You see, my previous work has focused on two distinct areas: development of open source projects and development of infrastructure software at Google. I suspect that the practices behind these two dominate many software shops these days.

In the case of open source projects, time is usually of the volunteer kind. With that in mind, setting strict deadlines rarely works. For example, in NetBSD we did set schedules to prepare major releases, but there were inevitable slips.

That may explain why many open source projects have adopted a “rolling release” model: it’s easier to keep the source tree at HEAD clean enough to ship at any point, and then ship at predictable and frequent times to avoid “last minute check-ins”.

In the Google case, where I primarily worked on software for our production environment, things were similar: projects would align at quarter boundaries if you squinted enough, but even those were quite lax. (This is not to say all of Google is free of deadlines though!)

Google’s production and corporate environments are also extremely homogeneous and have a great deal of introspection features, so shipping software for either case is easy. E.g. when we prepared Bazel releases, we could trivially verify that it’d work for almost everyone.

In both of these cases, having direct access to the target machines and having the Internet around lowered the risk of bad releases. Messed up an open source release? Ship a new micro version. Messed up a roll out of an infrastructure piece? Make a new release and deploy it.

But the world of consumer software was (is?) quite different, especially when the only possible distribution mechanism was physical media. And that’s where the insight I get from talking to long-time engineers of, say, Windows, is pretty enlightening.

Just consider this: imagine having to prepare the golden CD for the Windows 95 release, a piece of software that was intended to run on millions of computers world-wide at a time when the Internet was in its infancy (no telemetry, no remote updates, no nothing!).

Imagine the high stakes involved in shipping a software product under these conditions, especially when the product is no less than an operating system. A mistake can literally brick thousands of machines and, with no Internet, there is no “just ship a patch”.

Imagine also having to plan releases with a 3-year time horizon, where the date to cut the golden CD or DVD is pretty firm, and once it is burned, it is done. You’d better have a solid plan to ship on time, or a strict process to strip features that aren’t “going to make it”.

Imagine… wait, you don’t have to imagine. Lots of engineers have been subject to these conditions and they still are today in some industries (video games?). That said, ubiquitous online distribution and automatic updates have made this much less painful.

So where am I going with this thread? Nowhere really. I was just going to tweet this thought that keeps coming to mind as I have conversations around Microsoft 😄 and it turned into a longer story.

If there is one thing you take away from this thread, however, let it be a recommendation to subscribe to The Old New Thing by Raymond Chen and to skim through old articles from the early 2000s.

Following that blog back in the day made me stop “making fun” of Windows and better understand the difficulties in testing and shipping software at a large scale.