After many years of barely using Facebook, I finally got around to pulling the plug on it. If I had done this a few years ago, I probably would have posted some lame “I don’t use this anymore!” message and that would be it. But now I actually have good reason to delete it, so I wanted my leaving to be a political act (or as political as my minuscule influence affords). This is what I posted on Facebook, along with the follow-up discussion. I didn’t want this to be lost once they got around to erasing my existence.

I know I’m really late on this, but in a few weeks I’m going to delete my Facebook account.

The concept seems harmless. People share whatever they want, some algorithm slurps it all up and presents the stuff it thinks you will like. We’ve seen from American politics how badly it can go, but we just shake our heads at the people who rely on Facebook as their primary news source, because really, it can’t be that many people, and isn’t Facebook just a silly time sink?

But this shit in Myanmar… it’s indefensible.

Why Facebook is losing the war on hate speech in Myanmar

That seemingly harmless sharing model can be so easily turned into a weapon. And when you introduce that weapon into a country with existing ethnic conflict and little to no alternative news sources, you’ve stepped way beyond the point of pleading ignorance. This is criminal negligence.

Facebook says they’re working on it. Facebook says they will make it better. We could all band together and pressure Facebook to change!

Or not. Facebook isn’t the fucking government. We aren’t stuck with Facebook. We don’t need to work within the system to slowly push them in the right direction. They’re a company. They only exist because we stick around and let them push ads in our faces. So please just leave.

It’s just a silly time sink to me. And it’s way worse to others. So I’m leaving.

If you want to quit too, you can download all of your Facebook data first if you’re worried about losing anything: How do I download a copy of my information on Facebook?

A couple of my friends chimed in with one-off comments:

I thought Buddhists were peaceful…

I completely agree with your motivation. The only way toward a decentralized Internet is through exiting the big central servers.

But then it got interesting:

Hey, sorry to hear. I’ve some opinions on all of this, which are quite biased due to the fact that I work there, but I feel it’s important you leave with a less-sour taste in your mouth.

First, I implore you to read what we are actually doing: Update on Myanmar

Those are real changes; not sure I can convince you otherwise.

Secondly, fighting hate speech is not a solved problem, and never will be. You know that no system is 100% accurate or fail-proof. I’m not saying our systems were adequate, but cherry-picking type-1 errors is disingenuous when the underlying data set is millions upon millions of points.

Finally, let’s consider both sides of the coin. Yes, people will post horrible shit on the internet. They will try to do that on Facebook, and they will do that without Facebook. However, by opening FB to developing countries, we are also able to expose users to the good things associated with connection: building empathy. It’s not fair to consider only the negative effects of something without considering the positive effects.

For example: did FB fuck up when launching in Myanmar, and should we have had more native-language speakers reviewing content, etc.? Yes, of course, we fucked up, and that was shitty and is being rectified.

Was launching FB in Myanmar a net negative to the existing horrific situation? That’s a stretch, and not one you can claim by showing instances of non-deleted hate speech. I’m not saying it was a net good either, but you need to also consider the ability to build empathy in a community by connecting them online. Lastly, what would the situation have looked like had FB not (shittily) entered the Myanmar market? Harder access to news of all forms, heavier reliance on (racist and horrific) local news sources that have no outside validation or fact checking, etc. It’s really, really hard to say it would have been better.

AGAIN, I am not justifying the shitty decisions; I’m just trying to give you some extra context, with the added benefit of: company leaders recognize we fucked this up – apologies are real, change is happening.

I don’t want to make you feel bad about your job. But if I were a Facebook employee, this is what I would be thinking about right now:

Does the world actually “need” Facebook? Certainly Facebook has created an incredible platform for individuals and organizations to connect and even make a living, but are any of those functions actually unique? Do they need to be centralized in the manner that Facebook is pushing for? Does society benefit from all of these services being tied together by amoral machine learning algorithms? Would society benefit from breaking up the Facebook “monopoly”? etc.

One thing I’m interested in hearing your thoughts on… if you’re concerned about things like hate speech or nefarious uses of platforms, what do you think is the best way to solve such problems without a robust and centralized platform?

I don’t understand your perspective. In my mind the problem is the interaction between social media and personal prejudice (hate speech being a very specific expression of prejudice). Calling it “nefarious use” or suggesting that Facebook can “solve” hate speech completely mischaracterizes the problem.

That was the end of that (I wasn’t going to let the debate delay my departure). But I had another thread of debate going at the same time with a different friend:

I’ve been thinking about this issue as well. Two questions have been rolling around in my head that I can’t decide how to answer, so I’ll pose them to you.

All of technology and scientific advancement seem to have a potential for harm. Whether it does harm or not really just depends on who is using it and what their intentions are. Essentially, a tool can be used for good or bad. Is Facebook uniquely different in this respect? If so, how?

If people were to stop using Facebook, would these problems be mitigated? Or would people find another avenue to create the same problems?

Maybe not the best analogy, but a gun is also something that can be used for good or bad. Facebook seems to be following US gun control policy with their algorithm: make it super easy for everyone to get access, then go around after the fact trying to prevent anything bad from happening as a result.

But unlike guns, Facebook is a centralized service provided by a company (and isn’t enshrined in the constitution). So unlike guns it’s possible to get rid of Facebook’s harmful algorithm and replace it with something that cannot be so easily used for evil. And we can very directly pressure Facebook to do that by leaving the platform.

I think it would help to mitigate the problem, because I don’t think people use Facebook because they want an echo chamber. They want the news and they want to keep up with their friends; the bad parts of Facebook are all side effects of how Facebook chooses to deliver that.

I guess, since I’m not familiar with programming or coding outside of statistics software, I’m missing just how Facebook’s algorithm is the problem. I see the analogy you’re making with guns, and I’m afraid I’m making the equivalent “guns don’t kill people, people kill people” argument. But I do feel I need to understand better exactly how Facebook is responsible in this context, and how, if Facebook didn’t exist, things like this wouldn’t happen through some other platform, such as Instagram, Twitter, Snapchat, or even e-mail chains. I guess what I’m asking is: if we pressure Facebook to change its algorithm, or if Facebook were to no longer exist, would that not be the equivalent of making one particular brand of gun illegal while allowing all other similar guns to be sold to the public?

I don’t know the details of their algorithm, but it seems to be essentially a recommendation algorithm, just like on Netflix or YouTube: Facebook uses feedback (likes, comments, shares, clicks) to “score” content and then uses that score to determine who else to show that content to. It’s a pretty simple, intuitive idea (even though the implementation is very complex), and there’s nothing inherently evil or problematic about it. I think the particular problem with Facebook’s use of such an algorithm is that they want to monopolize online social interaction (see their forays into India and Myanmar), and they seem to believe that their algorithm, simply by not being inherently biased and evil, will automatically provide a social benefit to society if they apply it to the massive collection of social data they have access to.
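If it helps to see the shape of the thing, here’s a toy sketch in Python of what an engagement-driven feed ranker boils down to. Everything here is invented for illustration (the weights, the signals, the example posts); it has nothing to do with Facebook’s actual code, which isn’t public:

```python
# Toy engagement-based feed ranking. All weights and numbers are made up
# for illustration; a real system is vastly more complex.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int = 0
    comments: int = 0
    shares: int = 0
    clicks: int = 0

def engagement_score(post: Post) -> float:
    """Score a post purely by how much interaction it generates."""
    return (1.0 * post.likes
            + 2.0 * post.comments   # comments signal stronger engagement
            + 3.0 * post.shares     # shares spread content the furthest
            + 0.5 * post.clicks)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Show the most-engaged-with content first, whatever it is."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("Cute cat photo", likes=120, comments=4, shares=2, clicks=300),
    Post("Inflammatory rumor", likes=80, comments=90, shares=60, clicks=500),
    Post("Local news story", likes=30, comments=10, shares=5, clicks=90),
]

for p in rank_feed(posts):
    print(f"{engagement_score(p):7.1f}  {p.text}")
```

Notice that nothing in that scorer looks at what the content actually says: the inflammatory rumor ranks first simply because outrage generates comments, shares, and clicks. That, in miniature, is the worry.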

The evidence of this is their responses to the Myanmar crisis, and the previous election, and Russian troll farms: they repeatedly treat these negative uses of their platform like some kind of software bug. “We didn’t intend the platform for this! It’s not designed to spread evil! We’re fixing the problem now!” But I completely disagree with that viewpoint. I think it’s entirely possible that Unbiased Algorithm + Monopolized Social Media = Inherently Dangerous Platform. Gun control in America is only a difficult issue because there are fundamental systems (constitutional protections) that we refuse to question. Similarly, misuse of Facebook is only a difficult issue because they refuse to question their own fundamental systems: the almighty machine learning algorithm.

This podcast closely reflects my views on this issue: The Age of the Algorithm

To be clear, I’m pro-2nd-Amendment. My point is that misuse of Facebook is an easier problem to solve than gun control!