KNBR Host Blasts 95.7 Over AI Misuse and On-Air Gaffe

The following piece digs into a recent on-air spat between two Bay Area sports radio stations. It all started with a miscaptioned post about a San Jose Sharks comeback.

We’ll look at what actually happened, how AI got dragged into the argument, and what the whole mess says about accuracy, competition, and the role of automated tools in sports media coverage.

Incident Overview and Context

This whole thing blew up across radio, social media, and even short-form video. KNBR host Adam Copeland called out rival station 95.7 The Game after their Facebook caption claimed the Sharks had tied their game “down by 2 goals in the fourth period.”

That’s a pretty glaring mistake: NHL games have only three regulation periods, and San Jose was never actually trailing by two goals at any point. Copeland posted a screenshot of the caption on X, roasted the rival station’s sports knowledge, and then claimed, citing anonymous “sources from inside The Game,” that AI had generated the caption.

This accusation turned the error into a bigger swipe at the other station’s editorial standards.

The argument spilled over onto Copeland’s show, “Dirty Work.” Co-host Derek Papa and former Sharks broadcaster Brodie Brazil jumped in too.

Brazil poked fun at the idea of a “fourth period” and questioned whether AI had anything to do with the caption. Later in the week, The Game took down the original Facebook post, but a clip from the show still floated around on TikTok with a corrected caption.

The whole sequence really put a spotlight on the tension between local sports media outlets. It also raised some good questions about how AI is used in captioning and whether it could mess with credibility or audience trust.

Timeline of Events

Here’s how it all went down, step by step:

  • 95.7 The Game posts a Facebook caption about the Sharks’ 4-3 comeback, but says “fourth period,” a period that doesn’t exist in regulation hockey.
  • Adam Copeland shares the post on X, calls out the rival station, and teases more coverage on KNBR.
  • Copeland claims, citing “inside sources,” that AI wrote the caption, which raises the stakes considerably.
  • The conversation moves to Copeland’s show, with guests getting sarcastic and skeptical about the whole AI angle.
  • By Thursday afternoon, The Game deletes the Facebook post. The TikTok video stays up, just with a fixed caption.

The AI Claim and Its Implications

The big question here: Was the caption a human mistake, or did AI generate it? Copeland’s claim about AI, backed only by those mysterious “inside sources,” seemed designed to make people doubt The Game’s editorial process.

This taps into a bigger conversation about how reliable AI-generated captions really are, especially during fast-paced sports games. Sure, AI can speed things up, but if it spits out bad info, that can really damage a station’s credibility. Fans want accurate updates, not weird errors that leave them confused.

Public Response and Platform Developments

People definitely noticed the mistake, especially once it got shared and reshared across platforms. The Game took down the Facebook post, maybe hoping to stop the spread of the error, but the TikTok clip kept the conversation alive.

It’s a good example of the weird tug-of-war between old-school radio hosts and the new world of social media, where errors can go viral and get picked apart almost instantly.

What People Said and What Changed

  • Fans and industry folks debated whether those “inside sources” were real and if AI really played a role. Some questioned the ethics of making AI claims without proof.
  • People wondered how fast media outlets should fix mistakes, and how they should let everyone know across different platforms.
  • Transparent captioning practices suddenly seemed a lot more important, especially when videos are being shared everywhere.

Implications for AI-Enabled Captioning in Sports Media

This whole incident sits right at the intersection of media rivalry and the growing influence of AI in journalism. AI tools are popping up everywhere for captions and content, and people expect them to get things right.

When they don’t, trust takes a hit—even if it was just a tech hiccup and not someone trying to mislead anyone.

Key Concerns for Accuracy and Trust

Here are some of the big takeaways for media outlets dealing with AI:

  • Double-check AI-generated captions and content before posting or airing them.
  • Be clear when AI is used, and fix errors quickly and openly.
  • Keep training staff so they can spot and fix AI mistakes as they happen.

Takeaways for Industry and Fans

Sports media professionals really need to double-check facts, handle AI-generated content with care, and think before posting on social platforms. Fans should look for primary sources and try to verify claims, especially since digital reporting moves so quickly now.

  • Audiences benefit from transparency about whether tools like AI were used in captioning or posting processes.
  • Editorial standards keep changing with technology, and institutions have to keep up if they want to stay credible on radio, social, and video.
  • Clear corrections help sustain trust when errors happen, which feels extra important in the fast-paced world of sports news.

Here is the source article for this story: KNBR host goes scorched-earth on 95.7 gaffe, AI usage
