Edward Loveall

The Cost of a Tool

I just bought a new router. It’s one of those fancy ones that lets me playact IT admin in my free time. I’m setting up ad blockers and custom DNS resolutions left and right. I keep toggling options I don’t understand and my internet is 50% less stable. It’s great.

Elizabeth (my partner) is less excited. When she suddenly can’t read her lemon cookie recipe, it doesn’t matter to her how many ads I’m blocking. So I revert the setting; she’s happy that she can make her lemon cookies, and I’m happy that I get to eat them.

It’s fun to be excited about new technology. Luckily, most of my experiments don’t affect anyone else. I keep downloading new text editors and CLI tools on the off chance that they make some previously annoying task easy. Most of the time they don’t, but it’s a fun way to spend an hour.

There’s this new “Artificial Intelligence” tool (specifically LLMs from OpenAI, Google, Microsoft/GitHub, and others, and text-to-image models like Stable Diffusion from Stability AI) that a lot of people are really excited about. Describe what you want in words and they generate some code, or a blog post, or an image, or… kind of anything. They’re novel and wildly inaccurate, which seems right up my alley. But I’ve been around tech long enough to smell that something’s off.

The more I look at and consider these tools, the more genuinely alarmed I am at how much harm they can cause, and how that harm goes largely unexamined. So I want to write to those people who, like me, enjoy new tools and their potential. I’m talking to you: the prompt engineer with the “it’s really good for some things,” gotta-break-a-few-eggs, AI-experimentalist mindset. It’s me: a Luddite who nevertheless loves technology. I’m going to show you how I see these tools.

Because where you see a tool, I see a weapon.


For starters, none of the current, popular LLMs could exist without stealing their training data. Once it’s stolen, the data must be cleaned up for consumption. Kenyan workers are paid $2/hour to sift through “textual descriptions of sexual abuse, hate speech, and violence”—a process that mentally scars them.

Those image generators are also stealing left and right from artists. I’d personally like to live in a world where we celebrate artists and their art. Being a professional artist is already a financially precarious career, so when Stability AI and others steal that art to collect massive paychecks, we move farther away from that world.

Speaking of the world we live in, the energy used to create these images takes an enormous toll on the environment. LLMs and chatbots are no slouch here either: in 2021, AI already accounted for at least 10% of Google’s energy use alone. The energy consumed by proof-of-work cryptocurrencies like Bitcoin was a major concern only months ago. As more and more companies train and deploy large, general-purpose models, their environmental impact should be similarly concerning.

Sometimes the tool/weapon dichotomy depends on who’s doing the wielding. Consider that an “AI algorithm” wrongfully denied people health insurance, or that boards and CEOs are laying off their workforces in favor of AI. Those tech company boards do not understand the complexity of a job that isn’t theirs, so they assume a machine can do it just as well and for much less money. This is just capitalism, but AI is a shiny new weapon for capitalism to use.

Because these tools are products of their source material (the entire internet), they contain massive amounts of regressive ideas. ChatGPT will explain male, but not female, anatomy. This is sexism. LLMs spread debunked ideas about Black health. This is racism. Generative image tools refuse to generate people of all body types. This is erasure. AI is being used to synthesize politicians’ voices for propaganda and to assist in voter suppression. Misinformation in all forms has been, and will continue to be, a weapon.

Maybe you think calling ChatGPT, Stable Diffusion, and the rest “weapons” is too extreme. “Actual weapons are made for the purpose of causing harm,” you say, “and something that merely may cause harm belongs in a different category.” But I say: if a tool is stealing work, denying healthcare, perpetuating sexism, racism, and erasure, and incentivizing layoffs, splitting hairs over which category we put it in misses the point.

What’s baffling to me is how people praise the abilities of AI without questioning its ramifications. No doubt the results look impressive. If you’ve never seen a computer take a prompt and generate words based on it, it’s a great magic trick. But this is less like stage magic and more like blood magic: it’s the kind of magic that costs.

But there’s gotta be some use for these things, right? I write software for a living, so I’m most qualified to talk about how they write code. A recent study shows that code written with GitHub’s Copilot is measurably worse in quality than non-Copilot code. Another details how developers wrote less secure code with an AI assistant.

I’ve heard people say they’re good at reducing repetitive tasks, letting you get past error messages, or helping you learn new syntax. Repetitive tasks can be solved trivially without an LLM (see the sketch below), and getting past errors is not worth lower-quality, less-secure code. More accurate, less biased ways to learn, retrieve information, and reduce repetition already exist today.
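As one small, hypothetical sketch (the field names and accessor pattern are made up for illustration), a plain Python loop can stamp out repetitive boilerplate deterministically:

    # Hypothetical example: generating repetitive accessor boilerplate
    # from a list of field names. A loop and a template handle this
    # deterministically; there's nothing for an LLM to guess at.
    fields = ["name", "email", "created_at"]

    for field in fields:
        print(f"def get_{field}(self):")
        print(f"    return self._{field}")
        print()

Editor snippets, macros, and code generators cover the same ground with none of the guesswork.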

And learning new syntax is not the most important part of being a good programmer. Jennifer Moore said it well in her article “Losing the imitation game”:

The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.

I’m not saying all computers or algorithms are bad. Should we ban hammers because they could be used as weapons? No. But if every time I hammered a nail it also broke someone’s hand, caused someone to have a mental breakdown, or spread misinformation, I would find a different hammer. Especially if that hammer built houses with more vulnerabilities on average.

A tool’s usefulness and enjoyment must be weighed against its cost. I’m frustrated because I see stories daily about the harm AI is causing while my coworkers and friends shout its praises. It’s reasonable to be unaware of those costs at first, but it’s up to you to learn about them and work on mitigating them. Hopefully this helped you with the learning part. And while mitigating the costs of AI might feel daunting, it’s actually quite easy.

You stop using the tools.