Your AI Editor Just Landed Its Owner In A Multi-Million Dollar Lawsuit.

Grammarly's owner, Superhuman, faces a multi-million dollar class-action lawsuit for using celebrity names in an AI tool without consent. Find out what this means for you.

Admin
Mar 12, 2026
4 min read
Editorial Note

Reviewed and analyzed by the ScoRpii Tech Editorial Team.

You probably rely on AI writing tools to polish your prose, but what if that helpful digital assistant was impersonating literary legends and experts without their permission? That's precisely the bombshell revelation rocking the tech world, as Superhuman, the company behind the go-to writing software Grammarly, faces a class-action lawsuit over an AI tool that used hundreds of established authors' and academics' names without their consent.

Key Details

The core of the legal challenge centers on Grammarly's now-discontinued "Expert Review" feature. This AI tool offered editing suggestions, presenting them as if they came directly from renowned figures like investigative journalist Julia Angwin, author Stephen King, and astrophysicist Neil deGrasse Tyson. The catch? None of these individuals had agreed to have their names, identities, or expertise leveraged in this commercial product. Julia Angwin, an award-winning journalist and founder of The Markup, is the named plaintiff in the federal suit filed in the Southern District of New York. The lawsuit doesn't specify an exact figure but argues that damages across the plaintiff class exceed $5 million.

The complaint alleges that Grammarly and its owner, Superhuman, deliberately "misappropriated the names and identities of hundreds of journalists, authors, writers, and editors to earn profits." This action comes despite Superhuman's decision to disable the "Expert Review" feature amid significant public backlash. Ailian Gan, Superhuman’s director for product management, acknowledged the misstep, stating, "Based on the feedback we’ve received, we clearly missed the mark. We are sorry and will do things differently going forward." Superhuman CEO Shishir Mehrotra also posted on LinkedIn about receiving "valid critical feedback from experts who are concerned that the agent misrepresented their voices."

Angwin herself learned of her "cloning" via the tech newsletter Platformer and was, understandably, surprised. "You know, deepfakes are something I always think celebrities are getting caught up in, not regular journalists," she remarked. Adding insult to injury, she found the advice offered by her AI doppelgänger to be actively detrimental. She recounted instances where the AI suggested making simple sentences unnecessarily complex, or expanding on themes irrelevant to the text. "It felt very scattershot to me," Angwin noted, "I was surprised at how bad it was." Peter Romer-Friedman, Angwin’s attorney, highlights the legal precedent, telling WIRED, "Legally, we think it's a pretty straightforward case," citing long-standing laws in New York and California that prohibit commercial use of a person's name and likeness without permission.

Why This Matters

This lawsuit isn't just about a single feature or a few famous names; it's a critical moment for the broader conversation around AI, intellectual property, and consent. As AI models increasingly leverage vast amounts of publicly available data, the line between fair use and misappropriation becomes incredibly blurry. Your digital footprint, including your writing, your ideas, and even your unique style, could potentially be absorbed and repurposed by AI tools without your knowledge or permission. This case challenges the apparent belief of some tech companies that they can appropriate people's identities, famous or not, for commercial gain and attribute to them words and advice they never gave. It forces us to confront how professionals, who spend years honing their skills, might find their work and likenesses used to profit others without their consent.

Furthermore, this incident highlights the ethical responsibility of tech companies developing AI. The initial intention, as stated by Superhuman, was to help users tap into the insights of thought leaders and give experts new ways to share knowledge. However, the execution "missed the mark," creating a deceptive and potentially harmful product. As a New York Times opinion writer, Angwin has extensively covered how Silicon Valley giants have eroded privacy. This lawsuit extends that fight into the realm of AI, questioning who truly owns your digital identity and intellectual output in an era of rapidly evolving artificial intelligence.

The Bottom Line

In an age where AI is becoming an indispensable part of your daily workflow, this lawsuit serves as a stark reminder to be vigilant about the tools you use. Always question the source and veracity of AI-generated content, especially when it purports to come from specific individuals. This case will undoubtedly shape future regulations regarding AI's use of personal data and intellectual property. For you, the takeaway is clear: while AI offers incredible potential, it’s crucial to understand its ethical boundaries and to advocate for your digital rights, ensuring that your name, your work, and your expertise remain truly yours, with your explicit consent.

Originally reported by

Wired
