Reddit and the Robots

A.I. is learning about us from our meme-iest hangout. I don’t feel great about it.

A robot hand hovers over a laptop, from which images of a Karen, a cat, a neckbeard, and a smug cartoon man with glasses emerge.
Photo illustration by Slate. Photos by Getty Images Plus and knowyourmeme.com.

This is part of r/Farhad, in which Slate contributor Farhad Manjoo delves into the Reddit communities that bring him peculiar joy.

Think of the internet as a colossal assembly line that sucks in human behavior on one end and stamps out memes on the other. Now and then, somewhere out in the real world, an out-of-touch senior citizen is complaining about kids these days, a gamer is sending creepy messages to a colleague, and patrons in a coffee shop are aghast at the middle-aged white woman yelling at the barista. Online, such scenes get pressed into universal shorthand, a kaleidoscope of human behavior flattened into neat, digital-era archetypes: Boomer. Neckbeard. Karen.

Every social platform plays a part in this process. But let’s focus on Reddit, as this is another edition of r/Farhad, where I, Farhad, try to put into productive use the otherwise indefensible amount of time I spend scrolling through the internet’s most capacious message board. Lately I’ve been thinking about the singular role Reddit plays in how our memes are made—and how, in the age of artificial intelligence, what’s posted on Reddit might become alarmingly consequential for how machines understand our species.

Reddit excels at categorizing human behavior: If there’s a thing for people to talk about, someone will create a subreddit for it. If enough others take to the subreddit with other examples of that thing, Reddit inevitably turns the thing into a Thing.

The thing could be anything: Sometimes when women get together to have a good time, drunken pratfalls ensue. Put those in r/HoldMyCosmo!

People are prone to making a lot of exaggerated threats when they get mad on the internet. More fodder for r/iamverybadass!

Before they take a leap, cats perform a charming target assessment involving a lot of calculated head bobbing. r/Catculations!

The upshot of all this is that often on Reddit, inchoate concepts floating around the digital ether pick up the distinct, catchy labels that push them into the mainstream. Consider Karen. Throughout the 2010s, several slang terms emerged online to describe entitled white women causing a scene—“BBQ Becky,” “Permit Patty,” “Golf Cart Gail.” In 2017 a high school kid in California created r/FuckYouKaren, a subreddit to document the over-the-top behavior of a real woman named Karen. (She was the ex-wife of another Redditor.) The subreddit quickly outgrew its namesake Karen, becoming a hub for memes and anecdotes about a particular kind of woman, and cementing the name as the go-to descriptor for the stereotype.

Boomer, Chad, neckbeard, nice guy, pick me, NLOGs (“I’m not like other girls!”), incels—these weren’t born on Reddit, but they crystallized there, becoming the more-or-less official terms for this or that kind of person seen doing this or that type of thing.

In March, Reddit listed its shares on the New York Stock Exchange. The company recently signed deals with Google and OpenAI to license its content for training their A.I. systems.

There are already some signs this won’t end well: Google’s A.I., Gemini, learned that glue is a good topping for pizza from an old Reddit post. Then there are the questions of equity—shouldn’t the people who post to, comment on, and moderate Reddit get some share from their contributions to this suddenly valuable corpus of human interaction?

But it’s more than unpaid Redditors and sticky pizza I worry about. Does it make sense to teach an A.I. about humanity by feeding it our meme-iest hangout? Isn’t that like learning to paint fine portraits by looking at a lot of political cartoons?

Caricature is the constant danger in Reddit’s drive to categorize. Follow a subreddit for a while and you’re bound to notice category creep; it is the rare subreddit that doesn’t inevitably stray beyond its original boundaries as it grows, eventually applying its moniker to instances so far afield that the label begins to lose all meaning.

As I write this, the top post on r/karensoftiktok is a news clip of a knife-wielding South Asian man in a road rage incident. On r/ChoosingBeggars, a subreddit to show examples of people “being way too picky when begging for things,” a top post maligns a woman for simply asking for a lower price on a used van.

By building A.I. upon this imprecise categorization engine, there’s a danger that we’re creating robots that only amplify our human habit of slicing up society into hashtag collections.

Of course, my fear, like all those regarding A.I., is necessarily speculative; we don’t know how thoroughly A.I. will shape our world, so worrying about Reddit’s influence on that process could sound a little abstruse. It’s also true that we don’t need Reddit for meme creep—in the digital era, all of our neologisms tend to be repeated into oblivion. Now anyone who disagrees with you is a Karen, everyone over 30 is a boomer, and an American who isn’t woke is MAGA. If we’re already putting ourselves into all these buckets, is it so bad that the computers will too?

Who knows, but I doubt it’s good. We are all more than our subreddits; the robots, I think, might treat us better if they knew that.