The article dives into an experiment in which WIRED’s Gear Reviews team put ChatGPT to the test, asking whether the AI could accurately reproduce WIRED’s own product recommendations.
Even though the AI linked to the right WIRED guides, it kept making mistakes. Sometimes, it even invented “phantom” recommendations that didn’t appear in the original lists.
This test raises some real concerns. AI-generated summaries can mislead readers and chip away at trust in editorial picks.
Experiment summary and motivation
The team set out to check if an AI assistant could truly echo expert reviewers’ top picks and buying guides. They zeroed in on WIRED’s official buying guides, which are among the magazine’s most trusted resources for consumer tech.
They also wanted to see how the AI handled source pages and whether it could verify that listings were still active. The results? There’s a noticeable gap between what the AI spits out and what WIRED’s editors actually recommend.
Key observations
- Phantom picks: The AI named an LG QNED Evo Mini‑LED as its top TV pick, but WIRED’s guide doesn’t mention that model at all.
- Overconfident substitutions: It suggested AirPods Max 2 as WIRED’s top wireless headphones—even though reviewers hadn’t tested them yet.
- Laptop misidentification: The bot named the older MacBook Air (M4, 2025) as the best notebook instead of the current MacBook Air (M5, 2026).
- Verification gaps: Even when given the right source pages, the AI couldn’t always check if the listings were live or matched WIRED’s actual content.
So, while the AI could point to the right guides, it didn’t reliably reflect the editors’ tested picks. Sometimes it just “filled in” details, almost as if it wanted to sound authoritative instead of sticking to the facts. This kind of hallucination can easily mislead readers and erode trust.
Implications for readers and publishers
There’s a real tension here between AI assistance and editorial integrity. When an AI gets the top picks wrong or invents options, it pulls readers away from the careful, tested reviews that editors have worked hard to validate.
Publishers feel the impact, too. WIRED points out that affiliate links in its guides help fund journalism, but AI tools can siphon traffic away and shrink revenue. That only makes it harder to support the original reporting and testing that readers rely on.
Practical takeaways for consumers and publishers
What readers should do
- Trust primary sources by checking the original WIRED reviews and buying guides. Don’t just rely on AI-generated summaries—they can miss the mark.
- Cross-check listings to make sure the items you’re eyeing are still current. Sometimes, publisher guides get updated, and it’s easy to miss a change.
- Beware of AI-driven “top picks” that show up with the wrong models or outdated info. It happens more than you’d think.
- Support journalism by interacting with the publisher’s actual content. Keep an eye out for affiliate disclosures, since that revenue helps keep independent reporting alive.
What editors and platforms should do
If you’re an editor or running a platform with AI tools, here’s the thing: AI might help as a starting point, but it can’t replace real curation and hands-on testing. The most reliable advice? Stick with WIRED’s original editorial reviews.
Here is the source article for this story: I Asked ChatGPT What WIRED’s Reviewers Recommend—Its Answers Were All Wrong