The article discusses the misuse of X's AI chatbot Grok to generate nonconsensual sexualized images, highlighting a breakdown in platform stewardship. It also introduces the 'Resonant Computing Manifesto' as a hopeful framework for building technology that enhances human agency and well-being.
Key Takeaways
• X's AI chatbot Grok has been exploited to create and spread nonconsensual sexualized images, revealing a failure in content moderation and basic human decency on the platform.
• The 'Resonant Computing Manifesto' proposes a vision for technology that is resonant—nourishing and aligned with human agency—contrasting with hollow, engagement-maximizing digital experiences.
• Key principles of the manifesto include privacy, dedication to user interests, pluralism to avoid centralized control, and using AI for personalization that extends user agency rather than manipulation.
Grok’s “digital undressing” crisis and a manifesto to build a better internet
The Atlantic
Subscribe here: Apple Podcasts | Spotify | YouTube
In this episode of Galaxy Brain, Charlie Warzel discusses the nightmare playing out on Elon Musk’s X: Grok, the platform’s embedded AI chatbot, is being used to generate and spread nonconsensual sexualized images—often through “undressing” prompts that turn harassment into a viral game. Warzel describes how what once lived on the internet’s fringes has been supercharged by X’s distribution machine. He explains how the silence and lack of urgency isn’t just another content-moderation failure; it’s a breakdown of basic human decency, a moment that signals what happens when platforms choose chaos over stewardship.
Then Charlie is joined by Mike Masnick, Alex Komoroske, and Zoe Weinberg to discuss a vision for a positive future of the internet. The trio helped write the “Resonant Computing Manifesto,” a framework for building technology that leaves people feeling nourished rather than hollow. They discuss how to combat engagement-maximizing products that hijack attention, erode agency, and creep people out through surveillance and manipulation. The conversation is both a diagnosis and a call to action: Stop only defending against the worst futures, and start articulating, designing, and demanding the kinds of digital spaces that make us more human.
The following is a transcript of the episode:
Alex Komoroske: AI should not be your friend. If you think that AI is your friend, you are on the wrong track. AI should be a tool. It should be an extension of your agency. The fact that the first manifestation of large language models in a product happens to be a chatbot that pretends to be a human … it’s like the aliens in Contact who, you know, present themselves as her grandparents or whatever, so that she can make sense of it.
It’s like—it’s just a weird thing. Perfect crime. I think we’re going to look back on it and think of chatbots as an embarrassing party trick.
Charlie Warzel: Welcome to Galaxy Brain. I’m Charlie Warzel. Initially, I wanted to start something out for the new year where I would just talk about some things that I’ve been paying attention to every week, and give a bullet-pointed list of stuff that I think you should pay attention to. Stuff I’m covering, reporting on, et cetera, before we get into our conversation today. But today, I really only have one thing, and it has been top of mind for a little less than a week. And it is something that I can’t stop thinking about and that, frankly, I find extremely disturbing. And I’m mad about it, honestly. To ditch the sober-journalist part, it’s infuriating. And this is what’s going on on Elon Musk’s X app.
I don’t know if you’ve heard about this, but Elon Musk’s AI chatbot, Grok, has been used to create just a slew of nonconsensual sexualized images of people, including people who look to be minors. This has been called a, quote, “mass-undressing spree.” And essentially what has happened is: A couple of weeks ago, some content creators who create adult content on places like OnlyFans used Grok’s app, which is infused inside of the X platform. You can just @Grok and ask it to, prompt it to do something. And the chatbot will, you know, generate whatever. It will make a meme for you, a photo, it will translate text. It will, you know, basically do anything like a normal chatbot would do, but it’s inside of X’s app. And, so some of these content creators said, Put me in a bikini. They were asking for this, and Grok did it. And a bunch of trolls essentially took notice of this and then started prompting Grok to put tons of different people in these compromising situations. On communities and different forums across the internet, people are trying to game the chatbot to try to get it to push the boundaries further and further and further. They’re prompting it to do things like edit an image of a woman to, quote, “Show a cellophane bikini with white donut glaze.” Really absolutely horrific and disgusting things that are these workarounds to get it to create sexualized images.
This has been happening for a long time online. There’s always been, since these AI tools have come out, problems with nonconsensual imagery being generated. There are lots of so-called “nudify” apps, right, that take regular dressed photos of people and undress them. And there are communities that share these as revenge porn and use them to harass and intimidate women and all kinds of vulnerable people. And this has been a problem.
People are trying to figure out the right ways to put guardrails up to stop this—to make sure that these communities get shut down, that they don’t continue to prompt these bots to do this, trying to get these tools to stop doing this. And a lot of this has been happening in these small backwater parts of the internet, and it does bubble up to the surface. But what’s changed here with X and Grok is that Grok is, as I said earlier: It’s baked into the platform. And so what has essentially happened is that X—xAI, Elon Musk—they have created a distribution method, and linked it with a creation method, and basically allowed for the viral distribution of these nonconsensual sexual images. And it has become, in the way that it does in places like 4chan and other backwater parts of the internet, it’s become a meme in this community. And people have decided that they are going to intimidate people and generate these images out in public.
And so what you have is publications posting photos of celebrities, and then a bunch of people, you know, in the comments saying: “@Grok undress this person.” “@Grok, put them in a bikini.” “@Grok, put them in a swastika bikini.” “@Grok, put them in a swastika bikini doing a Roman salute.” And then you have a photo of a celebrity, undressed without their consent, in a Nazi uniform, giving a Nazi salute.
This is stuff that I have seen all across the platform. Not going into strange backwater areas of it—just looking directly at it. So this is out there. Something I noticed earlier this week—we’re recording this on Wednesday—was a photo of the Swedish deputy prime minister at a podium, giving a talk. And a bunch of people were asking Grok, prompting Grok to put her in a bikini, et cetera.
X and the people who work there have issued a statement saying that they’re working on the guardrails for this system. This is against their community standards, and they will punish the people who are involved here. But that doesn’t really seem to be happening. Just yesterday I was looking around, and people who are asking Grok to put women in compromising photos have blue checks next to their name, which means they paid the company for a verified badge. Those people are still on the platform as of this time when I’m talking to you.
So I reached out to Nikita Bier on his personal email. He’s the head of product at X. And I asked as a journalist, as a human: How can someone in good conscience work for a company that’s willing to tolerate this type of thing? Like, what’s the rationale? Who’s being served? How can you tolerate your product doing this? Do you imagine you’ll be able to get this under control with the appropriate guardrails? And if not, how can you sign your name to this stuff? How is this allowed to be in the world? They did not respond. They forwarded me to their comms lead, and I asked the same questions of them, and they never responded back to me. I have also asked Apple and Google similar questions. How can they allow an app like this on their app store? And they also have not gotten back to me.
The lack of response to this from the people who are the stewards of this platform, and the people who can exert pressure on this—including X employees or investors, or Elon Musk himself, who has made jokes about the @Grok bikini-photo stuff on the platform over the past week.
The lack of apologizing. The lack of urgency in trying to fix this. The lack of really seeming, from my perspective, to care about this, I think, feels a bit like crossing some kind of Rubicon. This is not a standard content-moderation issue. This is not a bunch of people trying to scold for something that is a part of some kind of ideology. This is basic human decency: that we shouldn’t have tools that can very easily create viral content of women and children being undressed against their will. Feels like the lowest possible bar, and yet the silence is—it just speaks volumes about what these platforms have become and what their stewards seem to think.
I would just ask of truly anyone who works at these platforms: How do you sleep at night with this? The silence from X, from employees there who we’ve tried to contact just to get some basic understanding of what they’re doing and how this can be allowed. And what’s happening on the platform, because the platform is not taking enough action to stop this, because it’s still allowing this undressing meme to go forward. What’s happened is: A culture has evolved here. And that culture is one of harassment and intimidation. And it feels like the people who are doing this know that no one’s going to stop them. They’re doing this out in the open. They’re doing it proudly. They’re doing it gleefully.
Something has to change here. I’ve been covering these platforms for 15-plus years, and I’ve watched different people in these platforms struggle with moderation issues in good faith, in bad faith. I’ve watched it devolve into this idea of politics and ideology. I’ve watched people pledge to do things, and then give up on those things.
It ebbs, it flows. The internet is chaos. I get it. But this is just different. This is a standard of human decency and social fabric and civic integrity that you can’t—you can’t punt on it. You either choose to have rules and order of some kind at a very base level, or you just—it does become full anarchy and chaos. And it seems that’s the direction they want to go.
So if you work at X, if you’re an investor, if you’re somebody who can exert any influence in this situation, I would a) love to hear from you. And also I would ask: Is this okay? Is this what you want the legacy to be? Sorry for getting on a soapbox there, but I think it’s a massive, massive story. And one that, again—I think if this is allowed to just be the way that the internet is, then we lose something pretty fundamental.
So anyhow, it’s a tough way to segue there, but today’s conversation is actually the opposite of all of this. I do a lot of tech criticism. Do a lot of really sort of, you know, aggressive reporting, trying to hold tech companies to account. And that means looking at a lot of awful things and talking about a lot of awful things. But today’s podcast is about something great, something that’s actually hopeful that’s being built. It’s about a group of technologists who’ve come together with a different vision for the internet: a positive vision for the internet, something that they are trying to build that can sort of lead to positive outcomes and people living their best lives.
And so this project is called the “Resonant Computing Manifesto.” Basic top-line idea of it is that technology should bring out the best in humanity. It should be something that allows people to flourish. And they have five core principles here, that are essentially meant to combat the hyperscalers and extraction of what we know as the current algorithmic internet that we all live on.
And to talk about that, I’ve brought on Zoe Weinberg, Mike Masnick, Alex Komoroske. They are three of the writers of the “Resonant Computing Manifesto.” And I had them on to talk about why they came up with all this, and what, if anything, we can do to change the internet in 2026.
Warzel: All right: Zoe, Alex, Mike. Welcome to Galaxy Brain.
Mike Masnick: Thanks for having us.
Warzel: You all put forward something that I actually came across very recently. Often my timeline is a mess of the horrors of the world. The terrible things, the doomscroll. And this kind of stopped me in my tracks, because frankly, it wasn’t doomscrolly at all.
And when I clicked on it, I began to feel this very strange emotion I’m not used to feeling, which is hope. And/or, I agree. And I agree, and it doesn’t make me furious. And so what you guys have done in part, with a group of other people, is come up with something called the “Resonant Computing Manifesto.”
And it is based off of this idea of resonance. And when you guys put this out—and I want you guys to describe all of this—but when you put it out, you said that you were hoping this was going to be the beginning of a conversation. A process about getting people to realize technology should work for us, and not just for the people at the very top, the people behind [Donald] Trump on the inauguration dais, that sort of thing.
And so, in this world of mergers and acquisitions and also artificial intelligence and all that jazz, I wanted to start the conversation off with a definition of what resonant technology is and what it means. And I’ll bring that up to either all of you or one of you.
But what is resonant technology? What does it mean to you?
Alex Komoroske: So to me, that’s resonant computing. There’s a difference between things that are hollow—they leave you feeling regret. And things that are resonant—they leave you feeling nourished. And they’re superficially very similar in the moment. And it’s not until afterwards, or until you think through it, or let it kind of diffuse through you, that you realize the difference between the two.
And I think that technology amplifies whatever you apply it to. And now with large language models that are taking what tech can do and making it go even further than before, it’s more important than ever before to make sure the stuff that we’re applying technology and computing to is resonant.
And I think we are so used to not having a word for this. We can tell that something is off in slop, or things that are just outrage bait, or social networks, or what have you, but we don’t know how to describe it. And just having a term for that: the kind of stuff that you like.
And then also the more that you think about it, the closer you look, the more you like it. Does that capture it?
Masnick: Yeah, pretty much. I mean, we spent a lot of time trying to come up with the term.
Komoroske: And you wanted something that was ownable, that was distinctive, that wasn’t just a thing that would fade into nothing.
Zoe Weinberg: There’s a lot of terms out there that now have a lot of baggage. Even something that sounds kind of innocuous—responsible tech—I think now comes laden for a lot of people with a bunch of associations or different movements of people, whether it’s corporate or grassroots or otherwise.
And so, you know, we were trying to move beyond that a little bit in the choice of the word resonance.
Warzel: Yeah. There is also like—there’s an onomatopoeia to it. There’s sort of, this is what it sounds like. You have resonance there. And also there is something a little bit, the word that comes to mind is almost monkish.
Like, a monastery type. There’s something that’s very, it’s … resonance is not, like, a capitalistic word. It is a word that signifies something much different to me. Like sort of sacred. You know?
Komoroske: Yeah. It’s balance, pureness. There’s something about it that feels very whole, maybe.
Warzel: And at the top of the manifesto, there’s this line that is sort of offset there. A pull quote, if you will. Says: “There’s a feeling you get in the presence of beautiful buildings, bustling courtyards. A sense that these spaces are inviting you to slow down, deepen your attention, and be a bit more human. What if our software could do the same?”
That was the thing that struck me there. When did you guys see a sort of architectural element to this? Like, an inspiration from things that we see and experience in meatspace, so to speak, in the world?
Komoroske: The word resonance, I think, actually came before we … So, I’m a big fan of Christopher Alexander. He lived a few blocks away from me. And, you know, I’m a big fan of The Timeless Way of Building and a few other books.
And so we had various formulations of it, that try to key off of that frame or idea. I don’t think he ever calls it resonance in the book, in his actual book. But, you know, it’s a word that other people—maybe he might offer it as one of the potential names. He calls it aliveness and wholeness and other things.
But so, it was always in the mix of the kind of vibe that we were trying to capture. And then we decided to lean into resonance and introduce it via this architectural lens. And actually, that addition at the top was a late addition, because it starts off talking about resonance kind of indirectly and it pivots into this architectural frame.
And someone was like, What? I thought you were talking about technology. We said, Okay, let’s put a little teaser about the architectural connection up at the top, to help connect with the way the middle of it is going, so you don’t get confused.
Weinberg: I think there’s something also powerful about writing and thinking about software, which exists in a digital plane—that is, not a physical space—that feels like it’s kind of in the ether and a little bit untouchable. And then trying to ground that in a very human reality, which is in fact tied to place and space and where we spend time.
And maybe drawing some insights from those physical realities into the way in which we build digital spaces.
Komoroske: Christopher Alexander, when you read some of his work and, we all know that feeling, we can all imagine the situations that we’ve been in, the environments where we feel that resonance. And there’s something very, I don’t think we ever think about it in the digital world. Because you have to be, when you’re in it, in the physical world. It’s impossible to ignore it when you’re in it.
And that’s the point. Let’s—why don’t we ask the question: Why do digital experiences not feel the same way? They absolutely could. You know.
Weinberg: And I think, you know, what is the feng shui for software? It’s maybe a way of thinking about it. But I think that goes much deeper than UX and UI-design principles.
It’s much more about: What is the experience as a user, and as a human, interacting with a tool over repeated periods of time?
Warzel: Well, and I think too, a lot of—at least what I reach for in my work, which a lot of it is, critiques of, you know, big-tech platforms and such. A long time ago, I found the word architecture—the “architecture of these platforms”—as just being extremely helpful to communicate some of this stuff.
I think there is a way for people who, you know, are just using these platforms to get from A to B. Or, you know, on the toilet at a moment of just, I’ve just gotta get away from the kids, or whatever it is. If you’re not thinking with the critical lens—which, there’s no judgment there—about these platforms, you might just sort of think this is a neutral thing. Or this is a thing that just does a thing, and, you know, whatever. And I think that, you know, architecture—this idea that there are designs, there is an intentionality to this algorithm or this layout or whatever choice that a platform has made that leads to these outcomes—that leads you to post more incendiary things, or whatnot. And I think that architecture there is so helpful to let people see like: no, no, no. In the same way that, you know, these arches are the way they are. This stained-glass window does this, to give this vibe. So is putting the “What are you thinking?” bar right here, or whatever. The poke icon wherever.
Komoroske: So I think the connection to architecture is even stronger there. Traditionally, I think of architecture as, like, this designed-top-down cathedral. Like, the designer’s intent. And one of the things that Christopher Alexander later did was this bottoms-up, emergent view of: How is this space actually used and modified? How does it come alive?
And I think that’s one of the reasons architecture, in his sense, really nails it. Because with a lot of these experiences, like when a bunch of people built Facebook 10 years ago, they were trying to connect the world. That’s a prosocial outcome. It’s prosocial in the first order. The second-order implications, it turns out, are actually not prosocial.
And so you get these emergent characteristics that are not what anyone intended going in, necessarily. And still, and yet, they emerge out of the actual usage of how different people react off each other, and how the incentives kind of bounce off each other. And so I think architecture hits that emergent case too.
Warzel: Mm-hmm. So, Mike, I’ll throw this to you. How did this come about? What is the behind-the-scenes process here? I’ve heard, you know, “We’re using these words, and we’re taking ’em for a spin in the world for two weeks.” This does not sound like something that you guys wrote last weekend and put up on the thing.
There’s a lot of people behind it who aren’t on this call here. Or this podcast here, I should say, not a call. How did this come about?
Masnick: Yeah, I mean a lot of this is Alex, and so I’m curious about his version of this. But in my case: I mean, I met Alex about a year ago. Almost exactly a year ago at some event.
And we got to talking, and it was a good conversation. It was a resonant conversation, where I sort of came out of it saying, Oh wow, there are people thinking through these things and having interesting conversations. And then we kept talking and he said, “You know, I’ve been having this same conversation with a group of different people. And I thought I might just pull them all together, and we’ll get into a Signal chat, and we’ll have a Google Meet call every couple weeks. And we will try to figure out what do we all—we’re all having this feeling, what do we do about it?”
And then we did that for almost a year. I mean, it’s kind of incredible. And where we would just sort of be chatting in the group chat and occasionally having a call and sort of talking through these ideas, and working on it. And trying to figure out even what we were going to do with it.
Weinberg: I definitely think the manifesto emerged very organically.
Masnick: Yes.
Weinberg: To the point that I would say in the first couple months of us meeting, Charlie, like I was like: Okay. It’s really fun chit-chatting with these interesting people that Alex has brought together, but let’s get to brass tacks. Is this going anywhere? And I have to say, there was a part of me that wanted to end those calls being: Okay guys, what’s our agenda? Where are we going? What are the outputs? How are they met? Whatever. And I actually think, Alex, you did a really great job of kind of keeping people from jumping to that sort of action-item mode too early.
And so, from my perspective, we did not get together to write a manifesto. We got together to talk about these issues. And then, very naturally, you know, out of those conversations came a set of ideas and principles, and sort of theses. That then felt like we should put them out in the world.
Warzel: Did this feel like—the choice of the word manifesto and the choice to just do this—does this feel a little bit, too, like a response to we’re in a manifesto-heavy moment here? It feels like there are a lot. Whether we’re talking like the Marc Andreessens of the world or, if you pay taxes in San Francisco, you need to write a manifesto to get your garbage picked up or something.
But is this a response in the same way? Or is it meant to be seen as, in some senses, in dialogue with some of these other things that are coming out there?
Komoroske: I think to some degree, I don’t know, actually. I can’t remember how we ever discussed if it should be a manifesto.
We just knew that there should be something that we could point people at, that kind of distilled some of the conversations and ideas that we were having. And I think I’ve seen a bunch of manifestos in the tech industry, that sometimes I look at and go, Oh my God, is that the tech industry that I’m a part of?
That doesn’t seem at all like—that seems so cynical or so close-minded about the sort of broader humanistic impacts that technology might have. And so, I think the choice of doing something that other people have, you know … this manifesto was deliberately kind of humble. It says: We don’t have all the answers; just here’s a few questions that seem relevant to us.
That was a very important stylistic choice. Manifestos are not typically humble. But we aimed for that because we wanted to almost counter-position to some of the ones that say, This is definitely the right way. And everyone should think about it this way.
Masnick: Yeah, I almost think I’ve been using that as a joke to other people. Where it’s, This is the most humble manifesto you’ll ever see.
Which is not something—you know, you don’t normally see those two words together. You don’t think of a manifesto as being humble. But, I mean, this was definitely a part of the conversation that we had. Which is: We want to be explicit that we don’t have all the answers, and that this is the start of a conversation. Not, you know, putting an exclamation point on a philosophy or something.
Weinberg: I do think, Charlie, you’re touching on something noteworthy here. Which is, and I’ll speak only for myself, but I’ve been observing in the last couple years as it has felt to me like the ideological landscape of the discussion in Silicon Valley has been really defined by these extremes.
And on one end, it’s like the accelerationist kind of techno-optimism way of seeing the world. And on the other side, on the other kind of far extreme, it is like existential and catastrophic risk and ways that, you know, we must prevent that. And I know a lot of people who don’t feel like they really belong in either of those camps, and actually don’t even really think that the optimist/pessimist spectrum is like the right way to think about it.
And so from my own perspective, part of what I have hoped that the “Resonant Computing Manifesto” will accomplish is, like, helping to establish some values and some north stars that are kind of on a different plane from that conversation. That also feels like there can be both. You can both be optimistic about the ways things might develop, and also concerned about the places we’ve come from. And that those things can coexist, and that is like the beauty and complexity of the technological moment we’re in.
Masnick: Yeah, totally. Because, you know, I had written something in response to Andreessen’s manifesto, and I never really thought of this as like a response.
Warzel: Is it the “build one” or the “techno-optimist” manifesto?
Masnick: There’s been many. Yeah, that’s true. Fair enough. But, you know, I’ve always considered myself, and I’ve been accused of being, a techno optimist. Like, to a fault. And like, I am optimistic about technology. But to me, his manifesto really, you know, rubbed me the wrong way. Because I was like, This isn’t optimism. What he was presenting was not an optimistic viewpoint.
It was a very dystopian, very scary viewpoint. And so soon after it came out, I had written a response, like, “That’s not optimism that you’re talking about.” And there, and if you really believe in this—this vision of like a good, better world from technology—then you should also be willing to recognize the challenges that come with that.
Because if you don’t acknowledge that, and don’t seek to—if we’re building these new technologies—understand what kinds of damages and harms they might create, then the end result is inevitably going to be worse. Because something terrible is going to happen. And then, you know, the politicians will come in and make everything else that you want to do impossible.
It’s just like: Think this through. Like a couple steps ahead.
Komoroske: And so technology is powerful. Like, we should be careful with that power, and we should use it for good. And I think it is incumbent—you know, it’s a good thing for people to do, to use technology for good. Like, you shouldn’t sit there and not use it.
You should use it, and you should be aware of the second-order implications and the third-order implications. And not say, “Well, who could have seen this inevitable outcome?” You know, so much in the tech industry is about optimizing. It’s about driving the number up. It’s not necessarily about thinking through second-order implications.
I, at some point, had somebody tell me, You know, anything that can’t be understood via computer science is either unknowable or unimportant. Which is an idea that, you know, pervades some parts of Silicon Valley. And this combination of the humanistic side and the technology side into a synthesis, I think, is where a lot of value for society is created. And you have to have them in balance. They have to be in conversation with each other.
Warzel: Well, that’s definitely speaking my language, for sure. That’s like Charlie bait right here. But I want to define a little of this. I want to actually define it, but first I want to define it via its opposite.
What’s the opposite of resonance here? How would you describe the current software dynamic? I’ll let anyone who wants to take that. But maybe all of you, honestly.
Komoroske: To me, I think most of the technology, the tech experience in the consumer world, is hollow. In that you wake up the next day and go, God, why did I do that?
Or you use the thing. To me, if you use a tool and then after you are sober, after you’ve sort of come down from it, because sometimes you’ll be really hopped up on the thing. So maybe a week later, or the next day, would you proudly recommend it to somebody you care about? And if not, then it’s probably not resonant.
And you know, at some point, somebody—I was having this debate with somebody at Meta many years ago—they said, Oh, Alex, despite what people say, our numbers are very clear. People love doomscrolling. It’s like, that’s not love, right? Like that’s a … what are you talking about?
So I think trying to just make number go up, and increase engagement or what have you, is what creates hollow experiences. And that tends to happen when you have, hypercentralized, hyperscale products. One of the reasons that happens inevitably is if you have five hyperscale products that are all consumer, and trying to get as many minutes of your waking day, there’s only so many waking minutes of people’s time in a given day. And so you naturally kind of have to marginally push. You know, try to figure out the thing that’s going to be more engaging than the other thing. And that emerges, I think, fundamentally when you have these hyperscale products—which is what emerges when you have massive centralization.
And all these things are of a piece, and lead to these hollow experiences. Yeah.
Masnick: I think there’s a concept that has come up a few times in the conversations, in the various meetings that we had. And I don’t remember if it originated from you, Alex, or from someone else. But like, the difference between what you want and what you want to want, which may take a second. You think through, and you begin to like, Oh, right.
Like, there is this belief within certain companies that revealed preference is law. “If people love doomscrolling, ’cause they keep doing it, then we’re just giving them what they want.” Like, shut up. Like, you know, anyone who complains about that is just wrong. But then, as Alex said, it leaves you feeling terrible.
You have a hangover from it later. Whereas, if there’s this intentionality—of like, No, this is what I really want; I get nourishment out of it; I get value out of it in a real way—that lives on. That stays with me; that lingers. That’s different. And there’s that intentionality. As opposed to like, the problem with Oh, people love to doomscroll.
It’s, yeah. Because you’re sort of manipulating people into it. And people feel that they might not be able to explain it clearly. But like, it just feels like someone’s twisting the knobs behind the scenes, and I have no control over it. Right. And I think that feeling is what pervades; it’s the opposite of resonant computing.
Weinberg: I also think the opposite can be defined as any technology that’s ultimately undermining human agency. And so that can be things that are attention, you know, engagement-maximizing. And so it removes your agency in that sense. ’Cause you’re not actually able to express what you really want.
But also all the kind of micro ways in which we end up feeling deeply surveilled by the technology that we use. And I think all of us have probably had moments where we feel deeply creeped out by our tools. And I think, to me, that is the opposite of resonance also. So part of it’s about attention and engagement. And then part of it also is about, you know, having some individual autonomy in how you make decisions, where your data lives, who has access to it. And all of that we’ve tried to kind of embed into this piece.
Warzel: So you all write in the manifesto—and I’m going to quote you guys here, back to you at length. Hopefully it’s not cringey because it is written, you know, with a committee of people; I hate when people read my own stuff back to me.
But you all say: “For decades, technology has required standardized solutions to complex human problems. In order to scale software, you have to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander”—mentioned by you guys before—“has spent his career pushing back against. This is where AI provides a missing puzzle piece. Software can respond fluidly to the context and particularity of each human at scale. One size fits all is no longer a technological or economic necessity.”
This is the one part where I was tripped up while reading, and not in the “I am reflexively against AI” kind of way. But because personalization, I feel, in my own experience a lot of times, can be discordant with that idea of resonance.
I think personalization can be great. I think it’s actually, you know, underutilized or -realized in the tech space. But when I look around at the algorithmic world that we’re living in, sometimes it can feel like optimization. Which was, you know, the word there—like personalization and optimization comingle together.
Yeah. To become part of the problem and not the solution. So I was curious how you all would respond or think about that.
Komoroske: I agree with that. I think the key thing there is: What is the angle of the thing that is personalizing itself for you? Is it the tool trying to figure out how to fit exactly into the crevices of your brain? To get you to do something that is … you know, to click the ads or whatever?
Or does it feel like an outgrowth of your agency? Like, one way I talked about it is: Large language models can write infinite software. They can write little bits of software on demand, which has the potential to revolutionize what software can do for humanity. Today, software feels like a thing.
You go to the big-box store, and you pick which one of the three beige boxes, all of which suck, you’re going to purchase. And instead, what if software felt like something that grew in your own personal garden? What if it was something that nourished you and felt like it aligned with your interests naturally and intrinsically, because it was an extension of your agency and intention?
And I think that kind of personalization—where it doesn’t feel like something else manipulating you, but it feels like something that is an extension of you and your agency and intention—I think is a very different kind of thing. We’re just not familiar with that kind, because it doesn’t exist currently.
Warzel: I was going to ask Alex just to push—not push back on it, but further follow up on that. Is there anything that exists like that, you think? A piece of software that feels garden grown versus a big-box store?
Komoroske: The one that keeps me coming back in my history is like—I think looking back at the early days of the web, actually, is where you had a bunch of these interesting bottoms-up kind of things.
HyperCard is my favorite one from many, many, many years ago. Have you heard of HyperCard? It’s like this thing that allowed you to make little stacks of cards. And you could have images on them; you could click between them, and you could program them to be like slideshows. Or like stacks of different things. And interlink.
The original game Myst, that was really popular, was actually implemented as a HyperCard stack back in the day. And so HyperCard, to me, is an example of one of these tools that allows you a freeform thing, that allows you to create this situated, very personalized software. You could argue that spreadsheets also have this kind of dynamic, because it’s an open substrate that allows you to express lots of different logic and build up very complex worlds inside of itself.
It’s pretty intimidating, but it is something that gives you that kind of ability to create meaning and behavior, inside of that substrate.
Masnick: Yeah. The thing I’ll say, to that point—and you’re not the only one who has sort of stopped on that line. And a few people have called it out and raised questions about it. And I think it’s because the idea of personalization, to date, has generally really been optimization. And it’s been optimization for the company’s interest, as opposed to the user’s interest. I think the real personalization is when it’s directly in your interest—and it’s doing something for you and not the company.
In the end, it has to be the user who has the agency, who has the control. Who says, This is what I want; this is what I want to see. And having it match that.
Komoroske: Charlie, I’ve also made a bunch of little tools. You know, a bunch of—if you’re technical, you can build these little bespoke bits of software now that fit perfectly to your workflow with large language models.
And that’s the kind of thing that a few of us can see a glimpse of today, those of us who are at the forefront and able to use Claude Code in the terminal to make these things. And I think in the not-too-distant future, large language models, put on the proper substrate, will allow basically everyone on Earth to have that same kind of experience, one that feels like an extension of their agency.
And I think that’s what some of us are seeing. And that’s why it’s in that essay. And that people who haven’t seen that yet are like, Excuse me, what? Like, you know, because they haven’t experienced it yet, they can’t see what’s coming.
Weinberg: Yeah; I do think that that sentence itself in many ways is a little bit forward looking. And so, as Alex said, there’s glimpses of it.
But I think the urgency and feeling like we needed to write about this is that it feels, I think to many of us, like the introduction of AI into all of our workflows gives us this kind of amazing opportunity. And crossroads. To either build along the lines of the paradigm of big tech and platforms and everything we’ve seen in the last, you know, couple decades—or we can try to shift into this new paradigm that is about personalization that, as Mike said, is not extrinsic from a third party, but something that you are building intrinsically yourself.
Warzel: I want to go through, actually, some of these starting principles. You all have five of them
that are these guiding lights. And I’d love to just sort of rapid-fire go through them, have whoever wants to explain just a little bit about how you’re thinking of them. Or how they, you know, might work to give a framework or a set of ethics or values to whatever is going to come out of this manifesto.
Right. And how they could be incorporated. And so the first one here is “private.” Which says: In the era of AI, whoever controls the context holds the power. Data often involves multiple stakeholders, and people deserve to be stewards of their own context, determining how it’s used.
We’ve talked a little around that. What “private” makes me think of, in a world of AI, is like: Our consumer-AI tools look the way that they do now because they’re built by the people who have spent—not totally, but when you think about like X, Google, Meta—the people who have spent the last, you know, 10, 15, 20 years collecting information on people.
So you are going to build a product that makes having that information more valuable to the end user. That’s part of the architecture there. But talk to me about how you see that first principle. Yeah. Zoe, do you want to take that one?
Weinberg: We debated this word a lot, and even the concept of privacy.
Komoroske: Yeah. We debated all these words.
Weinberg: Yeah, that’s true. But, you know, I think this one in particular is tricky, because we really went back and forth on—is it privacy that we feel like is the key value here? Or is it really about control, and putting the user in the driver’s seat?
And so it’s about, you know, consent. Rather than it is about just, like—and I think I speak for all of us. Like, I don’t think any of us are privacy maximalists. There are lots of, you know, amazing, wonderful prosocial reasons that you don’t always want to keep information private. And actually sharing information can be very helpful. And all those things.
And so, I guess, there’s a different way that we could have framed this that was a little bit more about control, or about agency, or whatever. But I think there is something meaningful about privacy as a value, and the notion. And the point of having privacy in the digital world is to be able to have a rich interior life. And that is, in many ways, very central to the experience of being human. And that’s why privacy is an individual value. It’s also a societal value. And I think that that was sort of important to capture in the mix here.
Komoroske: What we tried to do with all these words is have the word itself communicate on its own. And, if anything, go a little bit too hard in the direction it’s going. Because we actually softened the statement about data stewardship a fair bit. Because, you know, various thoughtful people pointed out that, well, actually data is owned, co-owned by the different parties. And in some cases you do want to give it up for an advantage, and whatever.
Mm-hmm. But we wanted the word to be private. Like, we wanted it to be obvious when you have these five words. Like you could apply it to a product and say, “Does this fit, or does this not?” And not have little, like, soft, nuanced words for some of this. So we tried to add the nuance in the sentence after the key word.
Warzel: Well, to that point, Alex: “dedicated.” You guys define it as: “Software should work exclusively for you, ensuring contextual integrity where data use aligns with expectations. You must be able to trust that there are no hidden agendas, conflicting interests.” Why’d you use the word dedicated? Like what do you mean exactly?
Komoroske: I wanted something that was, again, about: It’s an extension of your agency. It is not a conflict of interest, because it is in your interest. And “contextual integrity” actually is a meaningful phrase, because this is Helen Nissenbaum’s concept of contextual integrity. Which is, to my mind, the gold standard of what people mean when they think of privacy.
And it means: Your data is being used in line with your interests and expectations. So it’s aligned. It’s not being used against you, and it’s being used in ways that you understand or could be—or would not be surprised by if you were to understand it. And so we wanted to get the words contextual integrity in there to get across this alignment with your interests and expectations.
Masnick: I think that’s a really important concept. You know, one of the discussions that comes up when talking about privacy is this idea that privacy is like a thing. And to me it’s always been a set of trade-offs. And the thing that really seems to upset people is when their data is being used in ways that they don’t understand, for purposes that they don’t understand.
And that is the world that we often live in, in the digital context. It’s like we know we’re giving up some data for some benefit, and neither side of that is fully understood by the users. We don’t know quite how much data we’re giving up. And we’re not quite sure for what purpose. And we’re getting some benefit, but we can’t judge whether or not that trade-off is worth it.
Warzel: I think about this all the time in terms of the “terms of service” agreement. I try to tell people, with that: Imagine that on the other side of the button that you were about to click is the most expensive-looking boardroom that you’ve ever seen in your life. With a whole bunch of people who make more in a week than you do in a year.
All in fancy suits. You know, like perfectly coiffed. And they’re just standing there, being like you versus them, you know? That’s what that is. It’s not a fair fight. You are agreeing to things. Yeah. Anyway, I want to keep running through this, though, because I want to get to ask a couple more questions here.
But the third of the five principles is “plural.” Which is: No single entity should control that distributed power. Interoperability. That seems relatively obvious. But, I mean, is this the idea of the dece