I Am Time Magazine’s Person of the Year

So are you. Congrats!
Illustration by The Atlantic. Source: Alexander Spatari / Getty.
It’s rude to boast, but here in 2025, you’ve got to take the wins where you can get them. This morning, Time magazine announced its Person of the Year, and it’s me. It’s you too.

If you want to get all technical about it, Time’s Person of the Year is actually not a person at all but a collection of people: the architects of AI. One of the two covers Time released is a re-creation of the “Lunch Atop a Skyscraper” photograph from 1932, which depicted blue-collar ironworkers suspended hundreds of feet in the air during the construction of 30 Rockefeller Plaza. In its image, Time replaces these laborers with tech personalities such as Mark Zuckerberg, Elon Musk, Sam Altman, and Jensen Huang. That editorial decision alone is, shall we say, a rich text.

Perhaps you are wondering: Where do you, Charlie, fit in? And what of myself? I’m glad you asked. Odds are, you have not personally developed a large language model at a large technology company. (If you have, my Signal handle is @cwarzel.92, and I would like to talk.) And yet, the odds are also decent that morsels from your life have been used to train chatbots.

For the past two years, my colleague Alex Reisner has investigated precisely how tech companies use massive data sets to train their LLMs. He has repeatedly found that the so-called architects of AI have relied heavily on enormous databases of copyrighted work to create chatbots and other programs, and that this work is generally taken without the consent or awareness of its creators: musicians, filmmakers, YouTubers, podcasters, illustrators, writers—anyone who has ever posted online, or had anything about them posted by someone else, really. Relatively early in the generative-AI boom, Reisner uncovered that AI companies had used Books3, a data set of nearly 200,000 books, and since then, he’s revealed much more: a far larger pirated-book collection, as well as a data set of writing from movies and TV shows, plus millions of hoovered-up YouTube videos. Much of the crawlable information on webpages indexed by Google has been siphoned by these companies. And God only knows what kinds of data the social platforms are using to train their systems. Well, God and Mark Zuckerberg, anyway.

That generative-AI models are trained on the creative (and even mundane) output of much of humankind is extremely consequential: Plenty of tech companies have been sued for their training practices, and it remains an open question whether they will be able to continue in this way. Notably, Time’s Person of the Year story does not use the word copyright a single time. (The Atlantic is involved in at least one such lawsuit, against the AI firm Cohere.)

But there’s an existential quality to the debate over AI and copyright that goes well beyond legal liability. In a little over three years, generative AI has already reshaped culture, the internet, and the economy. One study from April suggested that Google’s AI Overviews feature has, in addition to annoying some users, reduced traffic to outside websites by more than 34 percent. Corporate leaders of organizations such as The Atlantic have struck queasy-feeling partnership deals with OpenAI, in what looks quite obviously to many observers like a hostage situation. No matter the industry, the proposition is similar: Tools trained on people’s work, in many cases without compensation or permission, threaten to undermine, replace, or make irrelevant many occupations.

The fights over training, copyright, attribution, money, ethics, and what it means to make art in an automated future are just beginning. For many of us, this is an unfair and difficult-to-win fight. The very least we can do is take a page from the “techno-optimist” playbook and pilfer something that isn’t ours to take. So congratulations on being Time’s Person of the Year in 2025! If you count 2006’s Person of the Year selection (“You”), that means you’ve got two under your belt—and hey, that’s not nothing.
