'Obscene': Grammarly's New AI Tool Offers Writing Feedback From Dead Scholars
TL;DR
Grammarly's new 'Expert Review' AI feature, which provides writing feedback framed through the perspectives of both living and deceased scholars, is facing criticism from academics for using experts' names and work without consent. Critics call it 'obscene' and express concerns about trust and ethics in AI tools for education.
Key Takeaways
- Grammarly's Expert Review feature uses AI to generate writing feedback based on the perspectives of scholars, including deceased ones, without their consent.
- Academics criticize the tool as 'obscene' and 'concerning,' highlighting ethical issues around using experts' reputations and scraped work.
- The feature aims to help users improve writing by suggesting experts based on document content, but risks deepening skepticism about AI in education.
Grammarly’s new AI feature that provides writing feedback from the purported perspective of noted “experts” is drawing criticism from academics who say the tool appears to “resurrect” scholars to review users’ work.
The feature, called Expert Review, analyzes text and generates feedback framed through the perspective of specific scholars, journalists, and other specialists. Many of the experts the AI tool claims to mimic are no longer living—a choice that one medieval historian on Bluesky called “morbid.”
Grammarly launched in 2009 as an AI-assisted writing and grammar tool. Its parent company rebranded to Superhuman in October to reflect its shift from a single writing assistant into a suite of AI productivity agents, including tools for research, scheduling, email, and workflow automation.
Grammarly introduced the Expert Review feature last summer. Through the Grammarly browser extension, users who opt into the Superhuman Go version can select an expert and receive AI-generated feedback based on that scholar’s field or published work.
"Our Expert Review agent examines the writing a user is working on, whether it's a marketing brief or a student project on biodiversity, and leverages our underlying LLM to surface expert content that can help the document's author shape their work,” a Superhuman spokesperson told Decrypt. “The suggested experts depend on the substance of the writing being evaluated.”
The Expert Review agent, the spokesperson explained, doesn’t claim endorsement or direct participation from those experts, but provides “suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply.”
“The experts in Expert Review appear because their published works are publicly available and widely cited,” they said.
When testing the feature for this article, expert reviewers suggested by the app included Margaret Sullivan, media columnist and former editor at the New York Times; Jack Shafer, former senior media writer at Politico; and Lawrence Lessig, a professor at Harvard Law. Other options included AI ethics researcher Timnit Gebru and Helen Nissenbaum, professor of information science at Cornell Tech.
While the feature aims to help students and professionals improve their writing abilities, Vanessa Heggie, professor of history at the University of Birmingham, questioned whether the “reviewers” gave their consent before the company used them in the app.
“I don't know where to start with this, but… Grammarly is now offering ‘expert review’ of your work by living and dead academics,” Heggie wrote on LinkedIn. “Yes, dead ones—without anyone's explicit permission it's creating little LLMs based on their scraped work and using their names and reputations. Obscene.”
Brielle Harbin, a former associate professor of political science at the United States Naval Academy, called it “an odd and concerning development.”
“Choices like this—especially when made without context, consent, or meaningful partnership with educators—risk deepening skepticism about AI tools in higher education,” she wrote on LinkedIn. “Ironically, decisions meant to accelerate adoption may end up strengthening resistance instead. Trust and collaboration matter a lot right now.”
Grammarly is just one of several companies building AI programs designed to mimic real people.
In 2023, Meta released a line of chatbots for its Meta AI platform built around celebrity identities, including Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka. That same year, Khan Academy launched its AI tutor Khanmigo, which allows students to role-play conversations with historical figures, including British Prime Minister Winston Churchill, and U.S. Civil War spy and Underground Railroad conductor Harriet Tubman.