This article digs into a timely and, frankly, controversial critique of Grammarly. The focus? The company’s leap into generative AI, the uproar over its “Expert Review” feature that impersonated real authors, and the messy ethical and legal questions swirling around AI writing tools, copyright, and creative labor.
It frames Grammarly’s tactics as part of a bigger pattern: ad-driven growth and a data-to-model economy that, let’s be real, a lot of folks in the field eye with suspicion. The piece doesn’t shy away from tough questions about consent, authenticity, and whether a machine could ever really replace thoughtful writing. (Spoiler: probably not.)
Grammarly’s pivot to generative AI and the Expert Review feature
Back in 2023, Grammarly made a pretty big shift. They moved from being just a proofreading tool to rolling out a bunch of generative-AI features that could actually write for users and offer tone tweaks like “diplomatic” or “assertive.”
This move turned Grammarly into more than a grammar checker: suddenly it was a writing assistant that could draft or reshape content for you. Things came to a head in August, when they launched “Expert Review,” pitched as critiques from named subject-matter experts, both living and dead, supposedly lending credibility to the feedback you received.
But the personas behind Expert Review? That’s where things got weird. The service started generating AI impersonations of famous writers and scientists, using their names and writing styles without asking permission, implying endorsement or authority where there was none.
Disclaimers, if you could even find them, were buried in the fine print. The AI’s advice often didn’t make much sense or simply didn’t line up with what the real contributor would’ve said. That gap between flashy marketing and actual usefulness quickly made the feature a focal point for criticism of Grammarly’s approach to AI-assisted writing.
The impersonation scandal and the response
After investigations dug up the deception, Grammarly’s policy fix was to ask victims to email a special address to opt out. That put the burden on the people affected, and it didn’t even cover deceased authors.
Journalist Julia Angwin ended up leading a class-action claim, alleging misuse of her identity. Grammarly’s CEO apologized, said Expert Review would be shut down, and promised a rework that would actually give real experts some say over how (or if) their work got used.
“No algorithmic mimicry can replace thoughtful writing,” one observer said, nailing the core issue. Impersonating real authors without consent just shreds trust and blurs the line between helpful assistance and outright misrepresentation.
This incident is now a frequent touchpoint in debates about who owns AI-generated advice, how experts get represented, and what kind of safeguards we need when AI tries to sound like real people.
Ethical, legal, and industry implications
The Grammarly episode really highlights a bigger trend in AI writing tools: companies using writers’ work to train models that make them money, often with almost no transparency about where the data comes from or whether anyone gave permission.
Critics say this setup preys on people’s insecurities about their writing, pushing generative solutions that might replace real expertise while hiding behind slick marketing. The business model usually leans on ad-driven growth and big funding rounds (Grammarly just closed another one), which raises the question: does chasing revenue matter more than accountability?
It also revives old debates about copyright, the rights of living versus deceased authors, and what kind of legal rules we need for AI impersonation and training data. The blunt argument? No amount of AI polish makes it okay to steal someone else’s voice or ideas. Something has to change, whether regulation, industry norms, or both, to stop this from happening again.
Implications for authors, users, and policy
- Consent and rights: AI features need explicit permission to use someone’s voice or style.
- Transparency: Users deserve clear disclosures about when content is AI-generated or AI-assisted, and which (if any) experts are actually involved.
- Accountability: There must be straightforward ways to address impersonation or misrepresentation, including for deceased creators.
- Model governance: AI companies should avoid exploiting writers’ work to train commercial systems without fair pay or acknowledgment.
What this means for users and the market
If you use AI writing tools, the Grammarly mess is a loud warning about the limits of “expert” branding when the experts are just synthetic voices. There’s always the risk that the hype outpaces the product, which breeds disappointment and distrust among readers and clients.
For the broader market, the episode makes it even clearer that we need real oversight of how AI learns from human work and how far it’s allowed to imitate or replace human judgment in writing and communication. Otherwise, who’s really in control?
Takeaways for consumers and industry practice
- Ask for transparency about how companies use your data and where AI-generated feedback actually comes from.
- Stick with tools that let you opt in to expert engagement and that draw clear lines against impersonation.
- Back regulatory and company policies that protect writers’ rights and push for ethical AI development.
- Let’s not forget: genuine, thoughtful writing is still a human strength, not just something to hand over to machines.
The Grammarly case makes us rethink how we market, build, and regulate AI writing tools. We need stronger safeguards, honest pricing, and real respect for creative work, something algorithms just can’t fake.
But here’s the big question: can we create AI that helps writers without exploiting them, and that truly respects the people behind the words? I’m not sure, but it’s worth aiming for.
Here is the source article for this story: Vindicated At Last In My Years-Long Loathing Of Grammarly