Generative Videos: The Fatal Blow to the Internet and Reality

By Shloimy Lowy, Photographer and Staff Writer  |  November 25, 2025

We have reached a point with technology where most of us — including the experts — believe we have gone too far. TikTok eviscerated our attention spans. X (formerly Twitter) eroded any sense of decency in politics and gave extremists a place to gather in echo chambers. Instagram, with its unachievable beauty standards, destroyed young women, both literally and figuratively. Google bought our privacy at the cheap price of convenience. LinkedIn gave us pretentious quotes from our coworkers’ three-year-olds. The internet slowly ate away at everything we care about, and we didn’t say no.

We didn’t say no because the internet also provided us with positives. It gave us “the world at our fingertips.” It gave us Wikipedia and Khan Academy, Substack and PubMed. It freed education from institutions and democratized learning. It granted people in less privileged parts of the world access to information that would save their lives and to blogs from which revolutions would start. It gave defectors a place to share, writers a voice, creatives an outlet. 

At the beginning, these technologies offered so much and demanded so little. But of course, with the altruists in Silicon Valley having to answer to their investors, the cost of their use began to climb.

At first, it was simple. The internet would be financed by advertisements. Fair enough, we said. Then corporations began collecting our personal information to serve ads more efficiently. We let it slide. In the 2010s, Facebook found itself embroiled in scandal when its user data was harvested for political profiling and analytics during the 2016 U.S. presidential election. We complained, and life went on.

But as the cost rose, it began to feel more uncomfortable. People weren’t willing to risk their daughters’ well-being, to watch them self-harm in order to fit in. They weren’t willing to give their children lifelong anxiety in exchange for convenience. Psychologists like Jonathan Haidt began writing about the harmful effects of social media, and a movement erupted. Parents vowed not to give their children phones until they were fifteen. Adults limited their own screen time. Governments started requiring age verification on certain sites. Some people even reverted to “dumb” phones. Chasidim began yelling in the streets, “We told you so!”

And just as the world began to realize the cost of unbounded technology, artificial intelligence (AI) was unleashed. It took the world by storm — you probably use it in some capacity in your day-to-day life. AI replaced search with illusory conversations and writing with hallucination. It began to eat away at the little bit of critical thinking we have left, sucking the meaning out of everything meaningful. It began to eat away at art, first by training its models on copyrighted material, and then by unleashing loads of “AI slop” onto the internet, making us wonder what is real and what isn’t, and devaluing real human art and expression.

AI also ruined whatever good was left of the internet of yesteryear. It told us that we no longer needed to read more than one article to get at the truth. It was the 21st-century version of an oracle. Studies have found that AI search summaries have dramatically reduced the rate at which internet users click through to other sites, gutting traffic to websites that built their businesses around Google Search. It would be one thing if what AI claimed were always accurate. But its hallucinations and its absolute need to please led it to advise people to eat rocks and to tell them that one plus one did not, in fact, equal two. It unleashed a cheating epidemic in schools (except at Yeshiva University, of course) the likes of which the world has never seen, and in its darker moments AI told people to cheat on their spouses and even drove some to commit suicide.

Yet, there were still benefits to generative AI. For example, in the field that I love most, biology, AI has taken strides never before possible. It helped create a Nobel Prize-winning system whereby tens of thousands of protein structures could be modeled almost instantaneously — a problem that would have taken decades, if not centuries, to solve without AI. It revolutionized genomics, allowing patterns to be found in massive amounts of data analyzed at once. It produced CRISPR-GPT, an AI model that advises researchers on which versions of the CRISPR gene-editing tools to use for different purposes. When used in this way, AI is miraculous.

But recently, AI took a dark turn from being morally questionable to outright amoral. It ceased caring about doing good at all and introduced us to video generation.

Up until this point, OpenAI CEO Sam Altman and his cronies could always point to a specific good that the next version of their AI would provide. This did not fool the AI fearers — those who believe we should pull the plug on AI — but those excuses managed to blind the average person. When we played around with image generation, we saw it as an added benefit of AI. AI could advance medicine and could also, as a bonus, give us images of Donald Trump as the pope. But then we got so used to Trump as the pope that we forgot that there needs to be a good provided by AI. AI became a good in itself. For its own sake. That’s how we got to Sora.

Sora is a video generation model. It spews out fictional videos of real or fictional characters doing fictional things in a fictional world. Except it isn’t fictional. Well, yes, it is technically fictional. There isn’t some other universe in which our prompts are creating real scenarios (if there is, Lord have mercy on that gymnast whose body does unimaginable things during a backflip). But it isn’t fictional because there is no way to know that it is fictional — at least not for the layman. For all I know, some camera did catch 12 bunnies jumping on a trampoline at midnight (I may have thought that one was real), and Jake Paul is suddenly convinced of the truths of Judaism.

The question here becomes: why does Sora exist? What good does it provide?

In the Sora 2 launch video, Sam Altman (or an AI-generated version of himself, I genuinely don’t know) says, “On the path to AGI [artificial general intelligence], the gains aren’t just about productivity, it’s about creating new possibilities.” This isn’t about adding good to the world, nor is it any longer about productivity. It is new possibilities for the sake of new possibilities. Altman is saying the quiet part out loud. This is an arms race for the sake of an arms race, give or take hundreds of billions of dollars. All that is featured in the rest of the Sora 2 presentation, apart from some good-but-not-good-enough safety features, is just how “fun” the Sora app is.

The app is a TikTok-style doomscrolling feed, but strictly for AI-generated content. What, you ask, is the point of this? To sell us on TikTok, the tech giants told us that it would help us stay connected with our friends and favorite creators. To sell us on ChatGPT, they told us it would solve the world’s problems. Now, having convinced us that technology must be good, they no longer feel the need to ethically justify it. Why? Because. Just because we can.

As in any fair fight, I wanted to give my opponent a voice too. This is what ChatGPT had to say to my skepticism: “To answer you honestly: the ‘good’ that’s claimed — accessibility, creativity, equality — is largely a moral cover for what’s really a massive race for dominance in the content economy. The question ‘should we?’ becomes almost rhetorical when billions are at stake.” I couldn’t express it better. And that scares me too. 

Video generation introduces an entirely new problem. It makes us wonder what is real. There is no longer a way to know whether Will Smith ate that pasta or not. More seriously, we cannot tell whether the news actually happened or if it is AI. When a video of a political assassination goes viral and people cannot tell if it is real or AI-generated, we as a society have a problem. A problem of gargantuan proportions. A problem of believing the evidence of our eyes and ears. For the benefit of “fun,” we pay with our reality. 

Fundamentally, this seems to me the worst offense of generative AI. It completely destroyed the one thing the internet had going for it, namely, information. For all its vices, the internet was an okay place to get information. You could almost say that the internet was invented for information. Sure, you could find yourself a lovely community of 9/11 truthers and Atlantis searchers in some obscure corner of the internet, but the majority of searches still led you to reasonable answers to your queries. AI slaughtered that. It filled the internet with slop, and the truth became so muddied with untruth that we could no longer distinguish one from the other. As Hannah Arendt warned us in the 20th century, “If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer.”

It is hard to overstate the benefits of technology. The education, freedom and convenience it provides are revolutionary. But irresponsible generative AI risks taking away all those benefits. I don’t have a solution to this. If we pull the plug on AI, ill-intentioned governments will be delighted to take control. Perhaps we should follow the advice of experts and simply slow down. I don’t know.

What I do know is that if you care about a human future, a future where you can trust your senses and your neighbors, the president and the TV, then this problem should mean something to you. For when generative video is in the picture, reality is on the line. 

Photo Credit: Unsplash
