Look, I made you some content

What are the lessons we should be taking from the Facebook Papers reports, and what does content moderation look like in 2021?

A few days ago, I saw a post from Substack co-founders Chris Best and Hamish McKenzie about the lessons of the Facebook Papers reporting. I think it’s worth a quick read:

Substack
The internet needs better rules, not stricter referees
The recent Facebook leaks have prompted a torrent of proposals for fixing social media’s harmful effects on society, including demands for more oversight by company executives, boards, or regulators. None of these addresses the core problem of the attention economy, which no amount of top-down control can fix…

In it, Best and McKenzie argue that there aren’t any top-down solutions to the problems illustrated in the leaks (which prove once and for all that Facebook gives right-wing media outlets preferential treatment for fear of upsetting conservatives). They’re probably right on that point.

There will be no perfect internet, but there can be a better one. A healthier internet requires overhauling incentives by putting people back in charge. While tech companies can’t filter only the good parts of human nature, they can stop amplifying the bad and the ugly. Through business models that let people choose to pay with money instead of attention, we can foster discourse that is more thoughtful, civil, and intellectually diverse. This kind of model respects people’s agency and judgment by inviting them to mindfully seek out valuable ideas rather than mindlessly scroll through a feed of promoted content. 

People will hate-read and doom-scroll, but they won’t hate-pay or doom-subscribe. While people pay attention to content that makes them agitated, they’ll only pay money for content they trust and value. With this kind of model, free content can still exist, but it will be truly free — not masquerading as such while quietly extracting costs in the form of personal data or manipulated behavior. The profit driver is no longer exploiting emotions, but fostering trusting relationships between the people who create content and the people who consume it.

I completely agree that the internet’s incentive structure, which prioritizes rage above just about anything else, needs an overhaul. And I completely agree that “while tech companies can’t filter only the good parts of human nature, they can stop amplifying the bad and the ugly.” What this boils down to is basically a call for an end to (or at least an extreme overhaul of) the way content on social media platforms is ranked and the way internal algorithms decide what’s relevant. Great. Good luck getting Facebook on board with that, but great. Totally agreed.




But the second half of that excerpt essentially argues that Substack’s model, which doesn’t rely on discovery tools or built-in reading suggestions, is the answer. It may be an answer, but I’m just not sure it’s the best answer (though it might be). Even without algorithms geared toward encouraging people to embrace rage, I wouldn’t go so far as to suggest that Substack’s model doesn’t encourage the exploitation of audience emotions.

Our content-consuming decisions, generally speaking, will always tie into emotion. People may not “hate-pay” for subscriptions, but there are certainly people who subscribe to certain publications specifically because those publications hate the same people they do. Add to that the fact that many people only consume content from sources they already agree with, and we still end up with bubbles and alternate realities that only become more difficult to escape over time.

But this is not a Substack-specific problem. Or a Facebook-specific problem. Or even an internet-specific problem. This is a human problem that may not actually have an answer.

That said, it got me thinking about the various platforms and their guidelines for acceptable content.

There would seem to be two separate ways for platforms to set their content rules:

  • Create a set of rules so permissive that advertisers (or investors) would (rightly) worry about what, exactly, their ads are displayed alongside.

  • Create a set of rules restrictive enough to assuage advertisers’ concerns, but then be forced to grapple with how to actually enforce those rules.

Both come with their own unique challenges. In the first case, there’s the worry of losing both advertisers and users. As much as some people claim to want a total free-for-all, setting the bar for allowed content at “Well, if it’s legal, it should be allowed” will inevitably drive some users away. And on platforms that rely on advertising (Facebook, Twitter, YouTube, and, well, most places), that’s obviously something that stays front of mind from a business standpoint. In 2020, more than 80% of Google parent company Alphabet’s $183 billion in revenue came from its ad business. And understandably, advertisers don’t want their ads running alongside controversial content that could hurt their brand image.

In 2017, in response to brand concerns about where their ads were showing up, Google announced plans to improve its topic- and site-level exclusion processes:

We know advertisers don't want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.

We’ll also tighten safeguards to ensure that ads show up only against legitimate creators in our YouTube Partner Program—as opposed to those who impersonate other channels or violate our community guidelines. Finally, we won’t stop at taking down ads. The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform—not just what content can be monetized.

And in the second case, in which companies do create rules that make their sites both a safe bet for advertisers and something less than a cesspool for users, those platforms are then tasked with actually enforcing those rules. This, as these companies have learned, is easier said than done. Moderating gigantic platforms like Facebook, for instance, might actually be impossible, as Lance Ulanoff argued at OneZero earlier this year.

(On this topic, I highly recommend checking out Casey Newton’s excellent 2019 articles about the lives of Facebook moderators, which often involve hours of watching videos of everything ranging from graphic pornography to actual beheadings.)

When it comes to creating and enforcing policies, platforms are damned if they do and damned if they don’t — so most simply say they do… and then… don’t.

Social media platforms tend to create big, sweeping policies aimed at making them attractive to users and advertisers alike. For instance, here’s a portion of what Twitter’s “hateful conduct policy” states:

We prohibit targeting others with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals. We also prohibit the dehumanization of a group of people based on their religion, caste, age, disability, serious disease, national origin, race, or ethnicity. In some cases, such as (but not limited to) severe, repetitive usage of slurs, epithets, or racist/sexist tropes where the primary intent is to harass or intimidate others, we may require Tweet removal. In other cases, such as (but not limited to) moderate, isolated usage where the primary intent is to harass or intimidate others, we may limit Tweet visibility as further described below.

And whether or not you believe those to be good policies, it doesn’t really matter, since Twitter isn’t exactly enforcing them anyway. As a trans person who spends a lot of time on Twitter, whew, I can say with some certainty that the company really doesn’t seem to actually care about “targeted misgendering” unless it’s persistent to an absolutely absurd point.

And it’s not just Twitter. YouTube, for instance, had a harassment policy stating that “content that makes hurtful and negative personal comments/videos about another person” would be removed from the platform. So when Carlos Maza, who is gay, called out right-wing YouTuber Steven Crowder after years of Crowder calling Maza things like “a lispy sprite,” “a little queer,” “Mr. Lispy queer from Vox,” “an angry little queer,” “gay Mexican,” and more, it should have been a fairly open-and-shut case, a clear breach of YouTube’s rules as they were written. Instead, YouTube twisted itself in knots to try to argue that Crowder hadn’t actually violated its policies.

Maza really went out on a limb to call that out, and in the end, all it resulted in was Vox letting him go. (We discussed this a bit when I had Maza on my podcast a few months back:)

The Present Age
Carlos Maza at the end of the world (podcast + transcript)
Listen now (41 min) | Welcome to the Present Age Podcast. I’m your host Parker Molloy. On today’s show, I speak with my friend Carlos Maza. As the host of Vox’s “Strikethrough,” Carlos helped shine a light on the way the choices made by the media helped raise Donald Trump and Republicans to power…

Facebook, for its part, has its own set of rules in place but, as I mentioned earlier, doesn’t seem to have the ability or the will to actually enforce them. In September, The Wall Street Journal reported on XCheck, a program that essentially held high-profile Facebook users, including a number of prominent right-wing figures, to more relaxed standards when it came to enforcing policy violations.

All of this shows just how pointless these policies are. Scattershot, inconsistent enforcement with exceptions carved out for high-profile users makes these documents worthless. And that’s why I think companies should start from scratch.

Platforms should start fresh with policies they can actually enforce.

In 2018, I wrote an article for The Verge about Twitter and Alex Jones — sort of. The truth is that I’d had that article bouncing around in my head for some time, and Twitter’s indecisiveness around Jones and whether he had a place on the platform (despite regularly breaking its rules) made for a great jumping off point.

I wrote about Twitter’s identity crisis and its tendency to create advertiser-friendly standards of conduct for the platform while not actually enforcing them.

The company’s “hateful conduct policy” prohibits users from engaging in targeted harassment, making unwanted sexual advances, or harassing others on the basis of “race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” But take a quick glance at Twitter, and you’ll find no shortage of accounts promoting racist, homophobic, transphobic, or Islamophobic content, which are all seemingly clear violations of the site’s own policies. But should you actually report the tweets and accounts that are promoting those views, you’ll no doubt receive your fair share of notices informing you that, actually, none of the company’s policies were violated.

It’s no wonder that there’s so much confusion about whether Alex Jones and Infowars belong on the platform given how opaque the company is about its own policies and haphazard enforcement of them. Reading through the rules as they’re laid out on the company’s website, it feels as though those documents are less actual guidelines of governance and more just a collection of vague ideas held together with masking tape and chewing gum. Perhaps it’s time for the company to call something of a corporate constitutional convention, delete the entire document, and start fresh with a clear purpose — and the will to enforce it.

As it stands, Twitter acts the way you might expect an external litigator in search of a legal loophole would, despite having written the laws itself. Dorsey tweets that he’s simply holding Jones to the same standards as every other user, but the company very clearly doesn’t apply its rules evenly. When people pointed to the way Trump would flout the platform’s rules with personal attacks, the company issued a statement explaining that the rules the rest of us are expected to follow don’t concern him because of his status as a world leader. While many companies took the opportunity to ban Jones, using each other for cover — perhaps giving way to a bit of insight into what they’d have done if not for fear of political fallout — Twitter’s inclination seems to lean in the opposite direction: away from a good-faith reading of its own rules.

In the years since, Trump and Jones have both been booted from the platform, but it took repeated, flagrant disregard of Twitter’s rules for that to happen. If that’s the standard, then it’s likely that these policies aren’t so great to begin with. Twitter, like Facebook and YouTube and even Substack1, should only set rules it can and will actually enforce. If that means paring back some of the more advertiser-friendly terms of service, then that’s what needs to happen.

These platforms — Twitter, Facebook, YouTube, etc. — should each convene something of a constitutional convention for themselves, ditching the old documents around rules and regulations and replacing them with clear, easy-to-understand guidelines they can actually enforce.

What are your thoughts?

1. Currently, Substack’s own terms of service include this line: “Substack cannot be used to publish content or fund initiatives that call for violence, exclusion, or segregation based on protected classes. Offending behavior includes serious attacks on people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability or medical condition.”

There are certainly a handful of accounts that violate these rules regularly, and the company is almost certainly aware of them. As is my view with Facebook, Twitter, etc., I think Substack should pare back its rules to whatever it feels comfortable actually enforcing. That way, people know exactly what they’re in for when they sign up.