Does Big Tech actually care about fighting AI slop?


TL;DR

Big Tech's slow progress on deepfake labeling contradicts claims of fighting AI-generated content, as authenticity becomes easily replicable with AI tools. Instagram's head suggests cryptographic signing of images as a solution, but implementation remains sluggish.

Progress towards reliable deepfake labelling tech is sluggish, despite all the “help” from AI providers. | Image: Cath Virginia / The Verge, Getty Images

As 2025 drew to a close, Instagram head Adam Mosseri ended the year by doom-posting about AI. "Authenticity is becoming infinitely reproducible," Mosseri lamented. "Everything that made creators matter - the ability to be real, to connect, to have a voice that couldn't be faked - is now accessible to anyone with the right tools." But people, Mosseri insisted, still wanted "content that feels real." His proposed solution was finding a way to label real media. "Camera manufacturers will cryptographically sign images at capture, creating a chain of custody," he said. The result would be a trustworthy system for determining what's not AI.
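Mosseri's "sign at capture, verify later" idea can be illustrated with a minimal sketch. This is not The Verge's or Meta's implementation; real provenance schemes (such as C2PA) use asymmetric signatures and certificate chains, while this toy uses a symmetric HMAC and a made-up device key purely to show the chain-of-custody concept: any edit to the image bytes breaks verification.

```python
import hashlib
import hmac

# Hypothetical symmetric device key for illustration only; a real camera
# would hold an asymmetric private key and publish a verifiable certificate.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_at_capture(image_bytes: bytes) -> dict:
    """Attach a provenance manifest to an image at capture time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image still matches its signed manifest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

photo = b"\x89PNG...raw sensor data..."
manifest = sign_at_capture(photo)
print(verify(photo, manifest))            # True: untouched image
print(verify(photo + b"edit", manifest))  # False: chain of custody broken
```

The hard parts Mosseri glosses over live outside the code: protecting the key inside the camera, and preserving the manifest as images are resized and re-encoded across platforms.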

The g …

Read the full story at The Verge.
