This blog post looks at how the New York Times cut ties with freelance reviewer Alex Preston after he submitted an AI-assisted draft that echoed language from a Guardian review. It covers the events, how the editors reacted, and the bigger questions swirling around AI, plagiarism, and freelance journalism these days.
What happened and why it matters for journalism ethics
In January, a NYT book review by Alex Preston—covering Watching Over Her by Jean-Baptiste Andrea—caught readers’ attention: it sounded a lot like an earlier Guardian review by Christobel Kent.
The Times investigated and found that Preston had used an AI tool to draft his review. He’d missed or ignored passages that the AI had lifted from the Guardian article.
The Times responded by adding an editor’s note to the review. They acknowledged the use of AI, linked to the Guardian article, and said the overlap broke their standards.
Afterward, Preston’s relationship with the Times ended. He’d written six reviews for them between 2021 and 2026.
Preston insists he didn’t use AI for his other work at the Times. He expressed regret, calling himself “hugely embarrassed,” and apologized to the Times, Christobel Kent, and the Guardian.
He’s an experienced journalist and author, with bylines at the Observer, FT, Guardian, and Economist. He also holds a senior role at Man Group.
Earlier this year, Preston wrote about AI risks and opportunities for Man Group. The timing’s a bit awkward, if you ask me.
Editorial response and the boundaries of AI in freelance journalism
The Times’ move to cut ties with Preston highlights a bigger issue: how should AI-assisted writing be handled when freelancers are involved? Their editor’s note and link to the Guardian piece show an attempt at transparency. Still, it makes you wonder if pre-publication checks are enough, especially with AI now in the mix.
Editorial integrity sits at the heart of all this—for readers, editors, and publishers.
Key facts from the case include:
- Preston used AI-assisted drafting on the January review, producing language that overlapped with a Guardian piece.
- The Times added an editor’s note and linked to the Guardian article, signaling a standards violation.
- Preston was terminated as a Times contributor after six reviews over a five-year span.
- He claims not to have used AI on other Times work, though the incident has colored perceptions of his broader reporting.
- Preston’s public apology emphasized accountability to the Times, the Guardian, and readers.
- Preston’s background includes work for major outlets and a leadership role at Man Group, where he recently discussed AI risks and opportunities.
Broader implications for freelancers, editors, and newsroom policies
This incident really shines a light on how the ethics of freelance journalism keep shifting, especially now that AI tools are everywhere. Newsrooms are juggling the speed and convenience of AI with the need to keep things original and make sure everyone gets proper credit.
They’re also trying to draw clear lines about what counts as acceptable use of synthetic help. The situation points to a few areas where newsroom policies could use some tightening up:
- Clear guidelines on when and how freelancers can use AI for drafting, editing, or fact-checking.
- Stronger pre-publication checks that look specifically for AI-generated content and any overlaps with what’s already out there.
- Being upfront with readers about editorial processes when AI gets involved.
- Training freelancers and staff on using AI ethically, respecting copyright, and spotting plagiarism.
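To make the "overlap check" idea concrete, here's a minimal sketch of what an automated pre-publication screen might look like. This is purely illustrative—the function names and the n-gram heuristic are my own assumptions, not anything the Times or Guardian actually uses, and real plagiarism detection is far more sophisticated:

```python
# Illustrative sketch of a pre-publication overlap check: compares a draft
# against a previously published piece using shared word n-grams.
# A simple heuristic for demonstration, not a production plagiarism detector.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft, published, n=5):
    """Fraction of the draft's n-grams that also appear in the published text."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(published, n)) / len(draft_grams)

# Hypothetical example texts: a high ratio would flag the draft for editor review.
draft = "the novel unfolds with a quiet tenderness that rewards patient reading"
published = "this novel unfolds with a quiet tenderness that rewards patient reading throughout"
print(f"overlap: {overlap_ratio(draft, published):.2f}")
```

Even a crude screen like this would surface near-verbatim passages before publication; the harder policy questions—what threshold triggers review, and which corpus to compare against—are editorial calls, not technical ones.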
On the risk side, it’s honestly striking how fast one mistake can put someone’s credibility on the line—especially if they’re a senior journalist at a big-name outlet or corporation. As AI keeps weaving into how we write, editorial standards have to keep up if we want to hang on to trust, originality, and accountability in journalism.
Here is the source article for this story: The New York Times drops freelance journalist who used AI to write book review