
AI Slop: The Junk Flooding Our Feeds

Image: a robot face made of words like "content sludge" on a blue tech background, with the text "AI Slop" above.

Last year, Merriam-Webster named "slop" its Word of the Year to describe the AI-generated digital junk swamping social media. As commonly defined, AI slop is a pejorative term for "digital content made with generative artificial intelligence that is perceived as lacking in effort, quality, or deeper meaning, and produced at an overwhelming volume."

AI slop is content churned out in bulk, often mindlessly, to fill feeds and farm clicks.


How Slop Gets Made

AI slop shows up whenever people use generative tools to pump out content with minimal care. A few common ways it happens:

Content Farming and Clickbait

A lot of opportunists use AI to flood the web with cheap content designed to grab attention. One Washington Post investigation found a TikTok creator who used AI video tools to pump out low-effort videos at scale.

Political Propaganda and Misinformation


AI slop is also created to advance political agendas, spread conspiracy theories, or save face. Party operatives and troll farms use AI to generate memes, posters, or video clips. For example, AI was used to create hyper-stylized campaign images like the “Trump As Pope” or “Swifties for Trump” memes that circulated on social media.

Even after disasters, activists have circulated AI images of a “rescued child” in hurricane floods to score political points. One activist even shared the content while acknowledging it wasn’t genuine.


Governments have also reportedly run "spamouflage" campaigns, pumping out divisive AI videos to stir unrest. PBS NewsHour noted that slop has even appeared on the White House's social feeds. An AI-generated Lara Trump video declared a fake new healthcare plan, fooling many viewers.


In these cases, slop isn’t made for genuine expression. It’s propaganda and virality bait, built to grab attention and push narratives.

Casual Experimentation and Trends

Sometimes AI slop is just the byproduct of people playing with new AI tools. Most of us have tried it at least once. Early adopters on Reddit, TikTok, Facebook, and Instagram experimented by asking AI to create bizarre art or memes “just for fun.”


Now our social media feeds are flooded with “whimsical” images like cartoon celebrities, fantasy landscapes, and animals doing random human things. These oddities rack up millions of views despite being low-quality and totally made up.

NPR’s Max Read describes AI slop best:

“It's the stuff that you see in your feed that you didn't necessarily ask for that looks a little bit off, that was clearly generated quite quickly and quite cheaply, and is usually designed to be scrolled through for a small amount of engagement and then moved past.”


When people “mess around” with AI prompts instead of building their own creative skills, the slop pile grows.

Virtue Signaling and “Performative” Posts

Some AI slop is crafted to look like commentary, support for causes, or a narrative divorced from reality. For example, AI-generated infographics are sometimes shared to make a political or social point even if the content itself is nonsensical.

One Guardian writer observed that major platforms are awash with “AI Slop,” which is exactly what these companies want: endless content, endless engagement, sensationalist material.


Often this generated content carries a veneer of sincerity or "virtue signaling," but a closer look shows the text or the "data" is basically gibberish. This slop can also come from people trying to appear savvy or better informed than they are, which leads to what I'd call second-hand exposure to AI slop. Sharing slop without taking a moment to process or validate it creates its own problems.


Each one of these scenarios produces a flood of new content that crowds out genuine posts and gets rewarded mainly for being exaggerated or clickbaity. AI slop creators trade on shock and novelty to go viral because platforms reward whatever keeps people glued to the screen.

Beware of Second-Hand AI Content and Slop

Even if we’re not generating AI slop ourselves, we can still end up sharing it (or at least believing it). This is especially true when it comes packaged as something socially conscious or emotionally in-tune. Infographics, charts, and “awareness” videos can look thoughtful, but closer inspection often reveals generic phrasing, shallow metaphors, or conveniently vague framing.


This type of content isn't always designed to inform. Sometimes it's built to redirect. When language that has historically been used to marginalize suddenly reappears rebranded as empathy, it's worth asking who the message really serves. The tone may have softened, but the underlying associations remain. Slop, in this case, isn't just low-quality output; it's tactical misdirection.


Before reposting, ask whether the content:

  • Frames the issue with substance, or relies on emotional tone without depth

  • Acknowledges root causes, or avoids structural responsibility

  • Clarifies past usage of terms, or quietly rewrites them

  • Cites credible sources, or leans on sources that are missing or misleading

  • Is even worth posting at all (most AI memes aren't funny; they're just noise)


Some AI tools leave watermarks (such as Google's NotebookLM) or exhibit telltale patterns, but often the giveaway is the framing. If you notice a shift in voice or values from the same source, it may not be an update; it may be a cover-up.

Not all slop is obvious, so don’t feel bad if you’ve shared it before.

The Hidden Cost of AI Slop

AI slop isn’t just a visual or informational problem. It has real-world physical costs. Generating mass amounts of low-quality content requires enormous computing power, fueling a rapid expansion of energy-intensive data centers.


In 2022 alone, global data centers consumed roughly 460 terawatt-hours of electricity, more than many entire countries use in a year, and that figure is expected to more than double by the end of 2026, largely due to AI workloads.


In the U.S., demand is climbing fast, and Scioto County is currently expected to get a data center in Franklin Furnace. The North American data center footprint doubled between 2022 and 2023, and much of the new capacity is being built specifically to serve generative AI tools. These facilities consume huge amounts of power and water, with one estimate putting AI’s projected carbon emissions at 24–44 million metric tons per year by 2030, the equivalent of adding 5–10 million cars to U.S. roads.


Much of that energy is spent on throwaway content: auto-generated articles, synthetic images, spammy video channels. Slop may be fast and cheap to produce, but it’s resource-intensive to store, deliver, and scale.


If AI is used intentionally for automating dull tasks, not replacing meaningful work, we can slow this curve. Less slop means less strain on infrastructure, on attention, and on the environment.


What We Feed the Feed Matters

AI slop didn’t come out of nowhere. It’s a product of a technological race with tools optimized for speed, and platforms optimized for clicks. The cost of letting it run unchecked is high: polluted feeds, blurred truths, wasted energy, and a growing gap between what looks real and what is.


Used well, these tools can be helpful for drafting, summarizing, or automating work we don't want to do, or for making the work we're already doing more efficient. The problem is when we let AI replace what we should be doing ourselves: thinking critically, creating thoughtfully, and learning new skills.


What we choose to share (or ignore) shapes what shows up next. If we want feeds that inform instead of overwhelm, we need to resist the temptation of sloppy content and reward what’s actually worth our time.


Slop spreads because it’s easy. We need to choose what’s better, not just what’s faster.

Scioto County Democratic Party
PO Box 492
Portsmouth, Ohio 45662