Slop Is Slop

The discourse around AI has recently surfaced a new fear: that apps like Meta’s Vibes and OpenAI’s Sora will flood the internet with AI-generated slop. This anxiety mirrors the concerns that emerged after ChatGPT’s popularization, centered on a future saturated with low-quality, AI-generated content.

It’s important to draw a line here. By slop, I mean low-quality, high-volume filler content. I am not talking about deliberate, malicious disinformation. Automated fake news is a separate and serious problem, but it’s not this one.

My observation is about the quality of our media environment. And on that front, I’m struggling to see the new problem. My core point is this: I often can’t tell the difference between AI-generated slop and human-generated slop. Slop is slop.

Think about the internet before the current AI boom. Browse almost any content portal and you find material optimized for algorithms, not for readers: clickbait headlines, thin articles, and content made to sell a product are the norm. This ecosystem of human-generated slop, designed to please algorithms and turn a profit, was already here.

The same applies to short-form video. TikTok and Instagram Reels are already saturated with content that could fairly be called brain rot, and to date it is overwhelmingly human-generated. The fear is that AI will make more of it, but it’s not clear whether that is a new kind of problem or just… more of the same.

There is, however, a valid concern. AI, with its potential for hyper-personalization, could create content loops even more addictive than what human-driven algorithms currently produce. This is a risk worth monitoring.

This pattern isn’t even native to the internet. I recall broadcast television from a decade ago: flipping through channels of low-quality programming is conceptually similar to scrolling an endless feed. Today’s platforms are arguably more potent and better at attention theft, but the core behavior of passive, low-engagement consumption is not new.

This leads me to question the precise nature of the fear. Is the concern that corporations have now learned to automate slop? If so, the primary threat isn’t to the audience, who were already consuming slop, but to the human creators who were previously paid to make it.

Perhaps the real problem isn’t the AI at all. The real problem is that we’ve built an internet that rewards slop, no matter who or what creates it. The AI isn’t the cause; it’s just the most efficient tool yet for feeding the system we already built.