Technology and (in)humanity

How often do we consider the impact of the big tech companies we work with? How often do we consider our responsibility, our complicity, in the harms that they inflict on society?

It’s been more than 100 days of genocide in Palestine, a genocide aided by AI-based systems from Amazon and Google.

And to some extent, this is no great surprise: big technology companies have a long history of providing systems to those hurting humanity, whether it be IBM during World War II, Microsoft and ICE, or Microsoft’s, Amazon’s, and Google’s deals with fossil fuel companies.

AI is just following that trend - it’s almost like it’s using history as its training data. Beyond the hype, it’s just another piece of technology, and like any other tech, it can perpetuate injustice and be responsible for death and destruction - especially when in service to capitalism. It has the added bonus of using significant amounts of energy, stealing massive amounts of personal and proprietary content, and happily being racist and transphobic. Delightful.

This whole “it would be impossible to train LLMs without using copyrighted works” thing might be more amusing if it weren’t so close to the “it would be impossible to run modern globalised capitalism without accelerating the planet towards inevitable climate disaster” that we’re also living through. (Kerry Buckley)

But we build things on ChatGPT anyway - hell, we’ve been building things on top of Amazon for over a decade now, even with their longstanding anti-competitive behaviour and inhumane treatment of their own workers. I’m not sure why I should be surprised at the current state of things.

Yet I must ask: is there a line in the sand for what we will not accept while we build shiny new tech that might be useful? Do we bear any responsibility if we enable others to integrate with these services and companies?

Do the sunk costs outweigh our consciences?

If they don’t, then: what are you going to do about it?