I Love Slop*
Published on: Dec 03, 2024
Filed under: Media
I think I first encountered the term ‘slop’ in or around the spring of 2024.
It was lexicological love at first sight. Slop succinctly summed up both the flood of production and the lack of effort and value in the AI content permeating the social internet.
Equating folks pumping out ‘meaningless posts to trigger equally meaningless engagement’ with a farmer tossing wet kitchen scraps into a pig sty, all in a single syllable, is an enviable linguistic invention.
The term is near-perfect.
The sole flaw, and I have spent way too much time thinking about this, is that the term is restricted to AI. That restriction seems to be a case of mistaking correlation for causation.
AI - no matter the model - is always an output prediction machine.
Given some inputs and a model, predict an output.
Your spam filter, your autocorrect, and ChatGPT are all different flavors of the same basic formula.
That formula starts by breaking down the real world into numbers - so they can be compared against each other and against the machine’s self-taught rules - in order to predict an output.
An easy way to understand this is to picture a house price predictor. Turn your house or apartment into numbers (or, in AI parlance, ‘features’). Find your square footage, count the bedrooms and bathrooms, note the zip code, and note whether it’s a house or an apartment. With thousands of examples of other homes with similar features, both you and your model would develop a pretty intuitive understanding of the patterns of value. More square footage typically correlates with a higher price. Some zip codes are just more expensive than others.
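To make that concrete, here’s a minimal sketch of the kind of predictor I mean. The homes, prices, and features are invented for illustration (and a real model would train on thousands of sales, not five):

```python
from sklearn.linear_model import LinearRegression

# Each home reduced to numbers ("features"):
# [square footage, bedrooms, bathrooms, is_house (1) or apartment (0)]
# Zip code is skipped here; categorical features would need encoding first.
homes = [
    [850,  2, 1, 0],
    [1200, 3, 2, 1],
    [1500, 3, 2, 1],
    [2200, 4, 3, 1],
    [600,  1, 1, 0],
]
prices = [310_000, 450_000, 520_000, 700_000, 250_000]  # made-up sale prices

# Fit a line through the examples: the model learns which features
# correlate with price, and by how much.
model = LinearRegression().fit(homes, prices)

# Predict the price of an unseen 1,400 sq ft, 3-bed, 2-bath house.
print(model.predict([[1400, 3, 2, 1]]))
```

The model never sees the house itself, only the numbers. Whatever the numbers don’t capture, the prediction can’t account for.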
But that model, that intuition, will fail every time if the interior of the house looks like this:
Therein lies the one flaw I have with using slop to refer only to low-quality AI outputs. When people use AI they frequently mistake correlative values for causal ones, but that’s because AI is a correlative and probabilistic mechanism. Mistaking correlation for causation isn’t limited to AI - humans have been doing it for as long as we’ve existed.
And creating low-effort, low-value outputs by mistaking numbers for the things the numbers represent is not limited to AI.
Take the recent film ‘Red One’[1], for example.
The film stars two of Hollywood’s biggest actors in Dwayne Johnson and Chris Evans, backed up by a bunch of other big names. It’s directed by Jake Kasdan - a respectable director. The film involves family-friendly action, Marvel-esque quips, and a high-concept premise.
On paper, that’s a set of features that should add up to a blockbuster.
So much so that a reported $250M was pumped into production, only for the film to fail to impress audiences (recouping roughly 25% of what was required to break even) and critics (the AV Club said it felt ‘like a fake movie’ and ‘made by a committee’[2]).
Mistaking metrics for meaning isn’t anything new. Though I would argue it has gotten worse: once we stopped calling art by the medium it resides in (a painting, a book, a film) and started referring to it as a fungible item designed to achieve a metric - that is, ‘content’[3] - the volume of this output seems to have increased.
But now we have a word for it - for outputs driven by metrics and features divorced from the reality they are meant to capture - slop.
AI images of kids making Jesus statues out of water bottles? Slop.
Creating a paint-by-metrics billion dollar streaming series or movie? Also slop.
Maybe if we call out slop when we see it, folks might start making better art.
*Note - passion might cure slop, but it doesn’t guarantee good art. I’ve seen Megalopolis.