What Do the People Building AI Believe?


TL;DR

The article explores San Francisco's AI subculture, characterized by massive salaries, ideological factions like 'doomers' and 'accelerationists,' and a quasi-religious devotion to building AI. It discusses the jagged reality of current AI models and Silicon Valley's political shifts.

Key Takeaways

  • San Francisco's AI scene is marked by exuberant 'gold-rush vibes,' with high salaries and a pride in embracing tech's strangeness.
  • Two major factions shape the industry: AI 'doomers' who fear existential risks and 'accelerationists' who advocate for rapid, unregulated progress.
  • Current AI models exhibit a 'jagged frontier'—excelling at some tasks while struggling with others, challenging simplistic narratives of AGI.
  • Silicon Valley's political landscape has shifted rightward, driven by backlash against regulation and 'wokeness,' though some now seek a nonpartisan, progress-focused identity.
  • The future of writing faces challenges from AI, but text retains unique properties for idea dissemination and critical thinking.
Inside San Francisco’s AI subculture

Silicon Valley runs on hype cycles, and the AI boom is generating a new one—part gold rush, part ideology, and part quasi-religious devotion to building an alien intelligence.

On this week’s “Galaxy Brain,” Charlie Warzel explores the culture of this boom with the writer Jasmine Sun, who’s been chronicling San Francisco’s AI scene. Sun describes what this moment feels like on the ground, including a subculture of massive salaries and a weird pride in leaning into tech’s strangeness. Together, Warzel and Sun unpack two major factions shaping the industry: the AI “doomers” and the accelerationists. The conversation also traces Silicon Valley’s rightward drift—the “founder mode” backlash against regulation and employee activism, and the rise of Trump-style, provocation-first tech marketing. Finally, Sun and Warzel address the jagged reality of today’s models, which are brilliant at some tasks and weak at others.

The following is a transcript of the episode:

Jasmine Sun: The way that AI progresses is in these fits and starts, and it’s going to diffuse into our society quickly, but also incrementally. And I don’t really want to wait around until that moment that AGI shows up and we can all agree on it before we start to think about what that actually means for us.

[Music]

Charlie Warzel: Every tech revolution produces its own distinct culture. The vibe of Silicon Valley’s early computing years was, in part, countercultural. There were government contracts, yes, but the builders of that moment were also influenced by DIY publications like Stewart Brand’s Whole Earth Catalog, which sought to “change the world by establishing new, exemplary communities from which a corrupt mainstream might draw inspiration.”

The dot-com boom of the late ’90s and early 2000s was fueled by an optimism that was much more profit driven, but also buoyed by the novelty of the commercial internet.

“Carpet the world with cheap technology, and clever hands will put it to work in a thousand ways never before imagined,” Wired wrote, describing the moment. “Moore’s law boiled down to one word: more. The more you have, the more you use. While traditional economics are driven by scarcity, the world created by the microchip is one of abundance.”

People saw the internet and felt certain it would change everything. Money flowed in aggressively. The big bets were directionally correct, but it was too much, too fast, too greedy. Think about the canonical example of the dot-com bubble—Pets.com. It goes public in 2000, raising more than $80 million, only to liquidate nine months later. It wasn’t a bad idea, but it was arguably just way ahead of its time.

The social-media era was defined, at first, by a blinding optimism. Nerds in hoodies building billion-dollar companies in dorm rooms. The iPhone birthed the App Store, from which 10,000 start-ups bloomed. Zero-interest rates meant easy venture capital, which underwrote gig-economy apps like Uber and Lyft and helped companies like Meta become titans. It was, at first, a bro-y, nerdy, at times earnest culture—lots of ping-pong tables, keg parties, and bean-bag chairs—that ultimately minted billionaires and remade the city of San Francisco and succeeded in rewiring the planet.

Today, the AI revolution has its own flavor. One that is defined by the tech industry’s feeling that they are building something extremely powerful: a kind of alien superintelligence that will—depending on who you ask—solve humanity’s greatest problems and usher in an era of extreme prosperity, or destroy our economy by eliminating the need for most jobs and potentially kill us all. There’s an unbelievable amount of hype that can feel delusional. But also a very real, almost religious devotion to the technology by people who feel as if they are building God. All of this is complicated by the fact that billions of dollars of investment are pouring into the industry every year, creating a tech-hiring arms race and a strange new culture of its own.

One of the many AI manifestos of the last few years—written in 2024 by a former OpenAI employee named Leopold Aschenbrenner—starts with this line: “You can see the future first in San Francisco.” In many ways, he’s expressing a feeling that tech workers have had since the 1960s. But if AI is going to change everything, it’s worth trying to understand the culture that the people building the technology are living in every day.

Jasmine Sun just so happens to be chronicling that culture. She’s a writer living in the Bay Area who describes herself as an anthropologist of disruption. She interviews AI researchers and tech-industry gadflies, and has an incredible knack for seeing and describing trends before everyone else. She’s also worked in the tech industry, as an employee at Substack. Reading her over the last year has helped me understand not just what technologists are building, but why they’re building it. She joins me now.

[Music]

Warzel: Jasmine Sun, welcome to Galaxy Brain.

Sun: Thanks for having me. I’m excited.

Warzel: This is great. I’ve been reading your writing for the past year, and it’s like dispatches from a foreign planet that I also happen to live on and write about. So it’s been wonderful. And that’s what I want to start with. You have described San Francisco as a place where the future comes first. I’ve heard you talk with other people about attending, like, underground robot fights, things like that. And there’s a great line from one of your newsletters that said, “The other night, a friend and I are at a meetup in the Russian sauna, dissecting the city’s frenetic ‘gold-rush vibes.’” Gold-rush vibes being sort of the operative word. And so I wanted to ask, just first off, what is the vibe like in San Francisco right now? Paint me a picture of what it’s like to live there, because I feel like I get a caricature of it, right? Like it’s either just all hacker houses, or people getting together injecting black-market Chinese peptides that help them make better eye contact. Stuff like that. But what’s it really like out there? What is the vibe?

Sun: Yeah; San Francisco is an interesting place. But I do think the mood right now is very … exuberant is how I describe it. There’s a lot of money, as everyone knows, flowing around the AI scene.

Your friend is 25, and they might be making $10 million a year. You don’t know. People are raising the craziest seed and Series A rounds I’ve ever seen in my life. And I think people are really—people feel like the city is back. In the sense that, during COVID, there was a downswing, where a lot of people moved out of the city and there was a lot of urban crime and disorder. People were unhappy with city governance. There were a lot of tech layoffs.

But because the AI boom has sort of resuscitated the city, and we have this new mayor—and people are feeling like, “Okay, we’re back; we’re going to be doing our stuff again”—there’s increasing pride. Around being like: “Yeah, the rest of the country is falling apart in many ways. The rest of the world might be falling apart, but here we’re still excited about the future. We’re going to experiment with things. We’re going to lean into the weird parts of SF tech’s personality and really indulge in all of these like strange-looking things and take a lot of pride in that.”

Warzel: But one of the things you do so well in your newsletter, and in covering this, is sort of building out a taxonomy of the culture there. Talk to me about, you know, the group of—let’s start with “doomers and decels.” Like, describe that for me. What is that group? How are they important to the culture?

Sun: So, one of the largest sort of subcultures or factions of AI are what people call the AI doomers: the people who think that AI is going to kill us all. Eliezer Yudkowsky—the founder, more or less, of the online rationalist subculture—is probably the best known. He just wrote this best-selling book, If Anyone Builds It, Everyone Dies—i.e., if anyone builds a superhuman intelligence, we will all die. Because it will inevitably acquire strange goals that we don’t understand, and then the AI, because it’s so smart, is probably gonna hack into all our computers, steal our resources. And we will be ants to the superhuman intelligence. And so there are a lot of people in AI—including many of the first researchers who joined the field—who were, for a long time, more worried than they were excited about building superhuman intelligence.

And the reason that they got into the field at all was to understand the superhuman intelligence so that they might stop it from killing us all. Of course, this has gotten quite messy, because now there’s a sort of divide within the AI doomers about whether it’s okay to work at an AI lab or not. Like, do you want to be the people building the less-bad superhuman intelligence, or is working on it at all an immoral thing to do? But basically it is this very vibrant, you know, world of people who think that rogue AI poses the greatest existential threat to humanity we have ever had. And so this divide shows up because the doomers, who are really worried about risks, acquire quite a bit of both cultural and financial power from working at these companies. Increasingly, political power too. A lot of Biden’s AI folks sort of came from the safety-oriented, doomer-adjacent camp.

And then of course the venture capitalists were like, “Hey; whoa, whoa, whoa—we’re building this incredible technology. Why are you guys so worried about it killing everyone? Why are you trying to regulate it?” And so this sort of gave rise to the “e/acc” [effective accelerationist] front, who put the label “doomers,” as a sort of pejorative, on all of these people who wanted to pause building AI or impose onerous regulations, or things like that.

Warzel: Right, and the other side of it, the accelerationists: “Let it rip.” Right? And just, like, “Embrace the future of it. We will figure it out as it comes. And the most important thing is to beat China, to push things forward, and usher in a world of abundant intelligence and possibly, like, economic progress.”

Sun: Yeah. I mean, to the accelerationists, the doomers are basically the same as being woke or a social-justice activist. Which is like, in this world, that’s a very bad thing to be—because it’s the same deal where you are so worried about these nebulous impacts on society that you are willing to place restrictions on how fast technological progress can go.

Warzel: Do you feel that the doomerism has turned down a notch or changed in character? Or is it still, in the Bay Area, kind of booming?

Sun: I think it has changed in character in a lot of ways. I agree overall with the assessment that classic doomerism has waned. And also economic concerns, whether it’s the bubble, whether it’s job loss, have sort of come to the forefront. It’s really interesting, actually, because I remember at the beginning of the year, whenever I talked to AI-safety people, people who come from this more doomer-adjacent camp, they’d ask, “What risks are you concerned about?” And if I said something like, “I’m pretty worried about job loss or labor issues,” people would kind of roll their eyes. Because this was seen as a short-term harm—it’s not, you know, extinction. It’s just people losing their jobs.

Warzel: Right. Yeah, happens all the time.


Sun: People would really disregard anyone who was worried about something as small and petty as mental health, or job loss, or whatever. But there’s more of a coalition, I think, between these camps now, because they share the desire to make AI slow down. I think the doomers identified these alignment risks, but the model of how they predicted it would play out has been challenged a little bit just by literally what we’re seeing from AI progress. So for example, in the doomer—like, Eliezer Yudkowsky—view of the world, we only have one shot at building superintelligence. It’s like this threshold. Once we’ve built superintelligence, it’s going to self-improve recursively. It’s going to immediately—tomorrow—every day, GDP will double; every day the machine will replicate itself … and then take over the city and have robots go everywhere. And we’re not gonna be able to stop it, because it’s just gonna happen so fast, right? And so there’s a sort of binary apocalyptic scenario that they imagine.

Whereas when you really look at AI in the world—looking at these LLMs we have, looking at AI integrating into society—I think all of us can see that on one hand, yes, these tools can be pretty powerful. There have been things like AI-enabled cyberattacks. But it also happens incrementally, right? Like, ChatGPT is not escaping its cage. It is not that just because we have, you know, ChatGPT today, we’re going to have robot arms tomorrow. Actually, there’s a lot of steps in between that. Or like, just because some people are using Claude Code and having a great time doesn’t mean that people in other industries are necessarily handing over all of their infrastructure to AI as well. So a lot of just, like, the models of how fast this would happen—and as a result, how risky it would be—have been a bit challenged. Because it just turns out that progress is not as fast as some people expected. And diffusion into the world, baking AI into all of our society’s infrastructure, is especially slow.

Warzel: One thing I’ve noticed, covering this stuff for too long now, is: The visions of the future are always way too sexy and way too logical compared with what actually happens. So many people were predicting this postapocalyptic information environment that is just like “Nobody knows what’s real,” or anything like that. This was 10 years ago. And that’s basically come true. We live in that sort of “Don’t believe your lying eyes” world right now, with all kinds of generated videos and slop and et cetera. And yet it doesn’t feel like we are living in that future, right? It doesn’t feel like that insane thing.

Sun: Yeah, it’s much more of a “boiling the frog”–type situation. Like, the risks are real, but they take a while. It reminds me of being in Brooklyn in 2021 during crypto summer, and the way that everyone talked about Web3. Of like, “One day, Web2 is all going to go away, and the entire financial ecosystem is going to collapse and be replaced with these cryptocurrencies.” As if it were a sort of before-and-after-type moment. And it turns out that, like, crypto is a part of the economy. I think that more things will integrate cryptography into them. But it’s just like—it doesn’t happen as fast or as definitively as I think a lot of these tech folks sort of expect it to.

Warzel: You have written before about AGI—and I think your writing on this has been really clear. You said, “I wouldn’t call myself a believer yet, though I’ve updated in the direction that yes, AI really matters.” We kind of touched on that. But this line that you had, I thought, really described where it felt like we were at—or where I often feel like this, you know, this kind of alien computer intelligence is at. Which is: “AI discovered wholly new proteins before it could count the Rs in the word strawberry, which makes it neither vaporware nor a demigod, but a secret third thing.” Where are you on, you know, artificial general intelligence? Where are we on that timeline? Do the timelines even matter? Will we know it when we see it? Or like—how do you think about it, as someone who’s covering this?

Sun: Yeah, I wrote that piece in maybe April. I think I was thinking about it a lot in February and March, at the beginning of the year, when I was starting to cover AI much more seriously. I think it’s aged reasonably well. I remember at the time, a lot of my friends who are AI researchers were sort of on the two-years-to-AGI timeline: “In 2027, we’re gonna have AGI.” Like, “This stuff is gonna take over very fast.” And, at the time—I think Ethan Mollick is the professor who came up with the “jagged frontier” concept—that, when you look at AI, models can be superhuman at some things like protein folding, generating high-school essays, certain types of coding tasks, while being quite weak at other tasks entirely. Arithmetic; I remember the first few versions of ChatGPT couldn’t even do simple math. Or counting the Rs in strawberry—we finally figured that out, but it took a few years to get there. I still think that jaggedness is underappreciated, and it is why you can have such drastically different experiences of “This thing doesn’t work at all” and, like, “My god; this is doing my entire job.” But I do think that understanding AI as a sort of jagged superintelligence right now, rather than AGI, is a reasonable way to understand it. It is possible for it to be amazing at some things and weak at others. As for the generality part, I think that most folks in the field would say that we are, you know, now maybe five to 10 years or something like that away from AGI.

And in this case, “generality” means that the AI is able to learn new tasks on its own that it wasn’t explicitly trained for. But honestly, I don’t personally spend a ton of time thinking about when exactly that’s going to hit. I just feel like: The way that AI progresses is in these fits and starts, and it’s going to diffuse into our society quickly, but also incrementally. And I don’t really want to wait around until that moment that AGI shows up and we can all agree on it before we start to think about what that actually means for us. I find it a little bit of a distraction to try to pin specific timelines on AGI. And it’s not something I spend a ton of time thinking about.

Warzel: I think, to some degree too, it’s just—a little bit buying into the, I don’t know if it’s explicitly the marketing, right? But it’s buying into the narrative coming out of these companies. Right? We have a mutual friend, Robin Sloan. He’s a technologist and a jack-of-all-trades Renaissance human who’s dealt with machine learning and can code and do all that stuff. And he wrote a piece very recently talking about AGI and just saying, like: It’s here in the sense that there is a type of intelligence that these models can produce. It is artificial, and it is very general in that it can do lots and lots of things with reasonable competency, and also mistakes and failures. It’s not a replacement for an autonomous human being that can go out and learn things that, you know, you didn’t tell it to learn, and infer things about the world and grow and learn, right?

But at the same time, he argues that critics should adopt that mantle, because it allows you to take this thing seriously. To talk about all of its general-use cases, and why it’s important. Do you agree with that framework? It sounds like it’s sort of similar to the way you’re thinking about it.

Sun: Yeah. I think that it is already general in many ways—that you can type all sorts of things into ChatGPT and it’s going to be able to figure them out. Researchers think of it as emergent capabilities.

The reason that I find AGI hard to define is because, one, if you just look at all of the definitions, none of them agree with one another. And so you realize that it isn’t really anything that people in the field have even agreed on. But two, what’s so interesting about AI, and why I like thinking about it as a humanities-ish person also, is that every time you think you’ve reached AGI, what you really end up doing is moving the goalpost. Because you discover these new dimensions to human intelligence, right?

So we have the Turing test. Where, at one point—you know, 75 years ago—Alan Turing thought that if a machine could talk like a human, in a way where you wouldn’t be able to tell there was no human on the other side of it, then it must be as smart as a human. We thought the ability to talk as well as a person was what really revealed intelligence. Well, we’ve passed the Turing test now, and it’s certainly a very powerful thing. People are falling in love with these chatbots. But we’ve also uncovered all these dimensions to human intelligence that are more than just next-word prediction. And so, every time AI sort of passes a threshold of intelligence, I think what we really end up doing is: Hmm, it’s not able to do everything we can do yet. There must be something else special going on in our brains—whether it is creativity, whether it is generality, whether it is social intelligence—that is a little bit different. And so that’s kind of the process that I enjoy—sort of revealing these new dimensions to, I guess, human intelligence as a result of the moving benchmark of AGI.

Warzel: I want to pivot here, because I want to get a little bit into the politics of Silicon Valley, which you have written about quite a lot and quite well. Talking about, especially, the rise of the tech right: In 2023 and ’24, you start to see this ideological change, but it’s really nuanced. And I was wondering if you could kind of walk me through, like, in your mind: How did Silicon Valley end up aligning, at least from the boss perspective, with Donald Trump? And adopting this sort of anti-woke, rightward ideology?

Sun: There was a really illuminating conversation that Marc Andreessen—one of the co-founders of Andreessen Horowitz and a prominent Trump supporter in the last election—had with Ross Douthat at the Times, where he sort of walks through his journey. And I think he was actually being largely quite honest there.

During the Biden administration, there were two sort of dominant forces in the Democratic Party. One was taking a pretty, like, corporate accountability–type approach: whether that was Lina Khan leading the FTC and pursuing antitrust action against a lot of big tech companies; or having, quote-unquote, “AI doomers” regulating AI and introducing things like the executive order to enforce more civil-rights and transparency requirements on these companies; or crypto regulation—looking at all of these crypto frauds going on. FTX had just collapsed; we should have a lot more scrutiny of what’s going on in crypto. So on one hand, I think the Biden admin pursued a lot more aggressive regulation of tech companies.

And then on the other hand, the cultural force of the Democratic Party became, quote-unquote, much more “woke,” right? And so there was a lot more interest in affirmative action, in sort of activism—both at the grassroots level and also within companies, at the employee level. And I think the combination of “wokeness” and regulation just really pushed against some core Silicon Valley values that these people held. Because Silicon Valley is generally very happy to be “live and let live” social liberals. They’re really libertarians in a way. They’re even okay with being taxed, for the most part, right?

Silicon Valley actually has, historically, had a pretty high willingness to pay income taxes and to redistribute wealth that way. What they do not like is other people telling them what to do, how to live, how to run their companies. And so levels of support for, you know, regulation or for labor unions are incredibly low, even among Silicon Valley Democrats. And so as soon as you had employee-activism movements or antitrust—those were the things that, when the Democratic Party shifted toward them, really pushed Silicon Valley leaders like Andreessen away. And then, of course, I think part of it is just some people making a rational calculation that “We all know that Trump is a very personalistic president, who will care a lot whether you sit with him at the dinner table, and you give him a call, and you’re nice to him.”

And so I think for other CEOs, they were just making a logical transactional decision to support him for those reasons. But I think for a lot of folks, it was this sense that they felt that they had been abandoned by a Democratic Party that was no longer the party of “Live and let live; you have freedom; you do you,” and much more of a “We are going to police you if you have wealth; we are going to enforce DEI requirements; we are going to support the employees who want to put restrictions on all the amazing technological things you’ve created.”

Warzel: You’ve written that, in terms of the ideology of a lot of these people, it’s less left versus right than it is acceleration versus deceleration. That people are perhaps really just up for whoever’s going to let them let it rip, and live and let live, as you said. And build and prioritize innovation, whatever that means. Do you feel like that’s still true now? Do you feel like that’s the right way to think about this divide? Or do you think that now—this far into Trump two—I don’t know. Does the left or right of it play more of a role as the politics of the Trump administration are becoming harder, I think, probably for anyone to ignore? Right? All the stuff happening with ICE, all the geopolitical considerations now with Venezuela, et cetera. Or do you feel like it’s still true that talking about politics in the Bay Area is kind of cringe?

Sun: Toward the end of 2024 and the beginning of 2025, the tech right was on top, right? People were really excited about the Trump admin. They thought that maybe Trump was gonna invest big in AI. He had promised at one point to, like, give a visa to any college graduate or something like that. He was seemingly pro high-skilled immigration. Elon [Musk] was helping out with DOGE. People were very excited. And then, as you know, the tariffs rolled in; that was very unpopular with business leaders. There’s been a big crackdown on H-1B visas and high-skilled immigration. That’s very unpopular with tech leaders, because they rely on those immigrant workforces. And all of these other things, like [the federal government] taking a 10 percent stake in Intel.

That is not the free-market politics that a lot of these people were hoping for. And so I actually think a lot of the tech right feels a little bit embarrassed, or in retreat. But rather than becoming Democrats—because the Democrats have given them no reason to have much faith either—I think Silicon Valley is, more than ever, doubling down on its identity as a nonpartisan center of progress. SF, too, sometimes feels like almost the only place in the world insulated from crisis. People talk very little about ICE. People talk very little about the Gaza conflict. People don’t want to talk about politics, because it feels messy and out of control, and they would rather focus on the things that they can control. And sort of like, We are going to shake off the excesses of both the left and the right and just do our own thing. And so my sense is actually that people in tech have become more enthusiastic about adopting this nonpartisan, progress-accelerationist orientation the more that the Trump admin has sort of spooked them a bit with its excesses.

Warzel: I wonder how this pairs at all, if it does, with another thing you’ve written about. You’ve called it “the Donald Trump school of marketing”—this kind of vice signaling, right? This very, very provocative, very in-your-face kind of way of talking about the products that people are building. How do you think about all of that? Because it does feel interesting if there’s this isolationism, right? This accelerationist isolationism and “We’re just not gonna do that.” But also it does feel like there is a little bit of this middle-finger ethos as well. I don’t know; how do you hold all that in your head? Is there a vice-signaling problem in Silicon Valley?

Sun: There’s definitely a vice-signaling problem. And I do think a lot of that comes from the Donald Trump School of Tech Marketing. I think one interesting thing—so I spent some time trying to talk to folks who considered themselves part of the tech right. Some of them were younger, Gen Z, male founders. Some of them a little bit older in the ecosystem. Just because I’m a liberal, this is a foreign world to me. So I’m just trying to figure out what was going on here.

And one thing that really interested me was the folks who supported Trump in the 2024 election. Some of them supported his policies. But a lot of them did so not out of respect for his policies, but out of respect for who he was as a sort of founder and operator. People saw Donald Trump as a guy who was like them: who could remake the Republican Party in his image, who could command immense loyalty, who had this sort of delusional self-confidence, who could disrupt an establishment party. And just do things like capture the leaders of other sovereign nations and bring them to the U.S. and arrest them, right? Like, it is this very high-agency, God-complex type of figure.

And so I talked to some founders whose identification was with Trump—with who he was and how he did things—not necessarily with his specific political views. Which is, I think, where vice signaling and marketing come in. A lot of folks—whether explicitly, or implicitly and subconsciously—do take inspiration from him in the way that they conduct their businesses. Realizing: Yeah, attention is everything in this world. That’s what we’ve learned from Trump’s ascendance. Can we borrow some of those same tactics that worked so well for this guy, Trump, and use them to help our businesses succeed as well?

Warzel: You recently wrote about your first year being a full-time writer, covering a lot of this stuff. And this question actually comes from the aforementioned Robin Sloan. I reached out and I said I’m talking to you today. And I wanted to know, “What do you want to hear from her?” And he writes: “She’s obviously someone who takes writing and reading seriously and is a careful, rigorous observer of the weird present. Like very clearly interested in digging into the anthropological, emotional truth of it all. Someone who is not interested in comforting platitudes. So I want to know whether she believes, in her heart of hearts—is the future of writing, the future of the book, is Jasmine Sun part of the last generation of writers? And if not, why not?”

Sun: Whoa, that is such a Robin question. That’s so existential.

Warzel: It is, it is. But let me put that in context for people who may not be in Robin’s brain or your brain or my brain. There is obviously, right now, a lot of concern about reading and writing and text-based anything, right? Attention spans not being able to sort of hold. Just, you know, the sort of ChatGPT-ification of education, making it so that you don’t necessarily need to go through that same exercise of the five-paragraph essay. And people reading less, and that being less important, is sort of like the vibe—the reading crisis there. But that’s for other people. I am curious: Are you the last generation of writers? If not, why not?

Sun: I don’t think so. I have gone back and forth on this question a lot this year. I have experimented with my share of video podcasting, and a couple of forays into short-form video. The literacy stats scare me. It terrifies me that the kids don’t read and can’t read and all of that. But one of the first books I read this year was Walter Ong’s Orality and Literacy, which is fantastic.

Warzel: Same.

Sun: Has so much longevity to it. And I think I just really believe in the connection between literate cultures, the specific form of text, and being an independent thinker. The fact that you have to strive for precision. The fact that you contemplate text alone. It’s not something that you hear once and then it disappears. You can really critically examine it, reread, annotate, take notes. And so I don’t think so. I think it is very possible that the number of writers will decrease in the future. I think it’s very possible that the number of readers will decrease. And that makes me super sad. I hate that, but I think it’s probably true.

But one of the amazing things about writing is: The idea can live separately from the person who says it. And that’s why you can have new ideas come from all sorts of places, from people who don’t have authority and credibility, or might not be really charismatic or have that sort of presence—and the idea can still change the world, just because it’s a really good idea. And that idea can spread from person to person, morphing within each person’s mind. Because, again, it is able to be detached from the host. So I do think that there are these special properties to writing.

Maybe the last thing I’ll say on this is: I had this conversation with a college sophomore at Berkeley, who I met at that AI conference in December. And he came up to me and he was like, “Jasmine, I love your writing. It’s so awesome. Blah, blah, blah.” And I was like, “Thank you so much.” And I asked him, “Are you a writer as well? Do you also write?” And he says, “You are so lucky you went to college before ChatGPT, because you can write. And I’m just screwed.” I was like, “What?” And he was like, “Well, because I’m never gonna learn.” And then he said, “You know, I do have a blog; I have a Substack for my friends. But I wrote one post with ChatGPT, and it mogged all the others.” Which, in Gen Z speak, means it got more likes than all of his human-written posts.

And this made me really sad. I thought about it for the next few weeks. And I told him, “Well, I think you can keep writing yourself.” Da, da, da, da. I do think it is terrifying to be a young person who is still learning what their voice is, and to have that experience of AI being quote-unquote better than you. To not be motivated to close that gap, to not be motivated to find your voice anymore. But I will say: The day after Christmas, or something like that, he sent me another message on Substack. And he was like, “Hey, I just wrote another post; you should read it.” Totally human-written; it was great. He’s actually a great writer, and it made me very happy.

Warzel: There’s no better way to close out a video podcast than to say that the writers will live on, and we hope they do. I know your writing will live on, and I will be reading it. We will be linking to it here. Jasmine, thank you for coming on and talking about the culture of Silicon Valley with me.

Sun: Thanks so much for having me. This was super fun.

[Music]

Warzel: That’s it for us here. Thank you again to my guest, Jasmine Sun. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe on The Atlantic’s YouTube channel, or on Apple or Spotify or wherever it is that you get your podcasts.

And if you want to support this work and the work of my fellow journalists at The Atlantic, you can subscribe to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thanks so much, and I’ll see you on the internet.

This episode of Galaxy Brain was produced by Renee Klahr and engineered by Dave Grein. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.
