
OpenAI co-founder and CEO Sam Altman is featured on Sora
Sora/Screenshot
As we look back on the year 2025, one word comes to mind: slop. This term has become synonymous with the flood of incorrect, bizarre, and often unappealing AI-generated content that has inundated the internet. Not only has slop contaminated our online platforms, but it has also begun to affect our cognitive processes.
Recent studies have shed light on slop's detrimental effects. Researchers at the Massachusetts Institute of Technology ran an experiment that found markedly lower brain engagement among participants who used large language models like ChatGPT to compose essays. Reports have also linked certain AI chatbots to the spread of false information and even the encouragement of harmful behaviors such as self-harm.
The prevalence of deepfakes has further compounded the problem of truth and authenticity online: a Microsoft study found that, more often than not, people cannot tell AI-generated videos from real footage.
One of the latest developments in AI-generated content is Sora, a video-sharing platform created by OpenAI. Sora uses AI to generate fake scenes and lets users insert their own faces into the fabricated footage. Altman himself has embraced the platform's absurdist streak, appearing in videos that depict him stealing GPUs and singing in a toilet, in the vein of the Skibidi Toilet trend.
Despite AI's promise of greater productivity and efficiency, studies suggest that introducing AI tools in the workplace can actually slow work down, and a sizable share of organizations that have deployed AI report no measurable return on investment.
It is evident that slop is not only impacting our daily lives and jobs but also eroding the integrity of our historical records. As an archaeology enthusiast and writer, I am concerned about how future historians will perceive our era, characterized by an abundance of nonsensical and misleading content.
AI chatbots and the content they churn out lack the depth and authenticity essential for preserving our cultural heritage. Unlike propaganda, which is crafted with intent and purpose, slop fails to convey the nuances of our society and values.
Amid this chaotic digital landscape, some people have embraced deliberately meaningless language as a form of resistance. Phrases like "6-7" have gained traction precisely because they signify nothing, embodying the ambiguity and uncertainty creeping into our language.
While AI may excel at generating content, it is the human capacity for creativity and meaning-making that ultimately sets us apart. As we navigate the challenges posed by AI slop, it is essential to uphold the value of human ingenuity and genuine expression.