Grammarly Removes AI Expert-Review Feature After Writer Backlash


Grammarly disabled its controversial Expert Review AI feature after facing backlash for using famous writers’ identities without consent. A new class-action lawsuit now questions how AI should reuse real voices and what this means for the ethics of AI-assisted writing in both professional and academic settings.

What happened with Expert Review

The tool aimed to offer feedback in the style of real people, producing guidance that sounded like Stephen King, Neil deGrasse Tyson, and even the late Carl Sagan. This set off immediate concerns about consent, attribution, and the monetization of creators’ identities.

Grammarly’s parent company, Superhuman, now faces a class-action suit in the Southern District of New York. The suit accuses the company of profiting from those identities without permission, with damages claimed at over $5 million.

Investigative journalist Julia Angwin leads the suit, arguing that the tool effectively appropriated core elements of living writers’ professional craft.

From promise to backlash

Grammarly had pitched Expert Review as a way to provide expert, tailored feedback for serious writing. Critics quickly pointed out that the feature relied on harvesting and mimicking real people’s voices, not just generating neutral, synthetic advice.

In response, CEO Shishir Mehrotra apologized and said the feature would be redesigned. Company statements also downplayed how widely the feature had been used before it was pulled.

The public dispute raised bigger questions about consent, rights, and the ethics of voice replication in AI. Is it really okay to let a machine borrow someone’s voice without asking?

Legal and ethical implications

The SDNY lawsuit highlights a real tension in AI development: Can an AI system profit from voices and styles tied to real people without their say-so? And where does that intersect with copyright, publicity rights, and professional reputation?

The suit claims that letting models “hallucinate” advice in someone’s voice misappropriates their reputation and expertise. Over 40 writers reached out to the plaintiffs’ legal team right after the filing, showing just how much this case has struck a nerve in the writing world.

What the suit alleges

The complaint alleges that Expert Review used real identities for commercial gain without consent, effectively turning living or recently active authors into AI assets without their approval.

The claims stress the risk of confusion, misattribution, and unfair market leverage. Can a product really imitate a real author’s craft responsibly without some kind of licensing or opt-out?

The defense calls the claims meritless. Still, the litigation makes clear that users and creators expect AI tools to respect the rights and autonomy of real individuals.

Industry response and next steps

Grammarly’s leadership says they’ll rethink how similar AI features get built and released. Mehrotra’s apology and the pause on the feature reflect a bigger trend in tech: companies are realizing they need better frameworks for consent, data provenance, and ethical use of voice and style.

The industry faces ongoing scrutiny over how to balance innovation with transparency and accountability, especially for tools used in formal writing and professional communications. Superhuman says it’ll fight the lawsuit but is also reexamining its AI roadmap.

Practical takeaways for developers and users

  • Get explicit consent before using any real person’s voice or identity in AI outputs.
  • Disclose AI authorship clearly when models imitate living authors or public figures.
  • Offer opt-out options for people who don’t want their voice or style represented.
  • Make sure training data and outputs are properly licensed to avoid misappropriation.
  • Develop governance frameworks that address ethics, transparency, and accountability in AI features used for writing and critique.

Conclusion

Grammarly’s Expert Review episode highlights the legal and ethical messiness of voice replication in AI. As AI writing tools proliferate, the industry faces a tricky balancing act.

We need to make sure creators stay protected while still allowing for genuinely helpful and responsible AI assistance. That means clear consent, more transparency, and actual governance—otherwise, what’s stopping things from getting out of hand?

Here is the source article for this story: Grammarly removes AI Expert Review feature mimicking writers after backlash
