screen time

Uncanny AI Videos Are About to Flood the Internet

Photo-Illustration: Intelligencer; Photo: Runway

Two years ago, the general public gained access to powerful new software that could generate images from simple text prompts. At first, tools like Midjourney and DALL-E were glitch-prone and difficult to wrangle for specific purposes. Soon, they got a little better at doing what was asked of them. They could spit out plausible illustrations, render fairly realistic synthetic photos, and crudely simulate various artistic styles. Google, Microsoft, and Meta bolted image generators onto popular software; last week, Apple announced it would join them.

Next comes video. Start-ups and tech giants have been racing to build video generators for a while, and since 2023 there have been a handful of publicly available products with limited but growing capabilities. Early this year, though, OpenAI again proved itself a pacesetter when it teased Sora, a video generator that could, among other things, create fairly realistic filmlike clips. Sora is still just a demo with no release date, but it strongly suggested that AI video quality would follow the same accelerated timeline as AI images.

Now we have confirmation. Last week, a company called Luma AI released Dream Machine, a publicly available AI video generator. It’s glitchy and difficult to wrangle in that familiar way, but it’s also indicative of a pretty clear step forward for synthetic media generation. It’ll generate video from text:

It’ll also try to animate still photos:

I’ve run a bunch of prompts and images through it, and it reminds me of Midjourney or DALL-E circa early 2023. You end up with a lot of surreal stuff like this:

Then on Monday, Runway, one of the start-ups that have been building AI video tools in public, announced a new model called Gen-3, which will be widely available this week:

We can expect more tools like these to follow, and AI video generation to become available to anyone who wants it.

Back in 2022, it wasn’t totally clear how people and companies would actually use AI image generation. It still isn’t, but we have a better idea. First, the mundane stuff: It’s replacing some stock photography, driving down freelance illustration rates, and filling in digital spaces formerly occupied by free or low-cost media. You might encounter AI-generated images in presentations at work, for example, or in news stories, sometimes without disclosure, as in the case of this New York Post article from last week. You’ve probably seen this sort of stuff around, creeping in from the edges of your daily online experience. Otherwise, for many users who’ve tried it, it’s more of a novel trick or toy. (This appears to be Apple’s belief too: Its forthcoming image-generation tools are limited and oriented toward texting and lighthearted communication.)

Then, of course, there’s the spam, the garbage, the porn, and the rising tide of slop. Low-effort AI generations tend to congregate around a few distinctive aesthetics — hyperreal staged portraiture; elaborately airbrushed illustration; figures that look like they’re made out of fondant — which now feature regularly in spam ads, social-media engagement bait, and most other types of content that a well-conditioned internet user will instinctively identify as something to be ignored.

When image generators first broke through, there was a lot of sensible speculation that such technology might be used by sophisticated actors for high-level manipulation or insidious propaganda. So far, though, the surge is mostly seeping up from below, conforming to the shape of various neglected online spaces: the Facebook newsfeed, where AI-generated shrimp Jesus is Lord; chumboxes, where weird AI generations are replacing mysterious body-horror thumbnails; YouTube thumbnails, where creators are automating Mr. Beast–style promo images; and X’s ailing advertising platform, where truly anything goes. Less visibly — but only slightly — it has enabled a flood of nonconsensual generated porn and exploitative material. Just this week, to choose one example of many, 404 Media reported on the post-AI state of Google image search:

Google image search is serving users AI-generated images of celebrities in swimsuits and not indicating that the images are AI-generated. In a few instances, even when the search terms do not explicitly ask for it, Google image search is serving AI-generated images of celebrities in swimsuits, but the celebrities are made to look like underage children.


In a less dire sense, the spread of AI imagery has contributed to the general shabbiness afflicting many mature online spaces, which have been overwhelmed by low-level bad actors who suddenly have a bit more firepower.

AI image generation remains a solution in search of worthy problems — or, rather, problems worth solving. Two years in, AI-generated images are associated not with futuristic technology or empowerment but with fraud, laziness, spam, and inauthenticity. For internet users minding their own business, the first wave of AI deployment has felt less like revolution than pollution.

It’s not clear that the arrival of AI-generated video will be much different. Filmmakers and motion-graphics artists are rightly worried about where such technology might be headed, and the wide availability of AI video generation and editing is once again raising concerns about political disinformation and propaganda. If recent history is a guide, though, we know what to expect in the meantime: a whole lot of kinetic slop.
