AI Writing: Beyond Tools, Focus on Creativity and Accuracy

This post contains affiliate links, and I will be compensated if you make a purchase after clicking on my links, at no cost to you.

This article examines how a distinctive AI-writing marker, the sentence construction “It’s not just this — it’s that,” has become nearly ubiquitous in corporate communications, signaling the growing influence of generative AI on professional writing.

Drawing on Amanda Silberling’s report and a Barron’s analysis of AlphaSense’s database, the piece traces how a single stylistic quirk is spreading through news releases, earnings reports, and government filings. It also raises questions about training data, ethics, and corporate policy. Honestly, who’d have guessed a simple phrase could stir up so much?

Marker phrases and the rise of AI-assisted corporate writing

The phrase “It’s not just this — it’s that” has shifted from a stylistic option to a recognizable marker of AI-influenced business prose. Silberling points out that this construction, once rare, now pops up with unsettling regularity in corporate materials as companies describe AI-driven shifts or strategic pivots.

This trend fits a broader pattern, where generative AI models train on massive existing corpora and end up recycling familiar phrasings in new texts. Em-dashes—another favorite in AI-assisted writing—get flagged too, both by readers and by those hunting for machine-generated passages.

The “It’s not just this — it’s that” construction

Silberling highlights real-world examples from 2025, drawn from firms such as Cisco, Accenture, Workday, and McKinsey, along with several Microsoft posts, all of which use the construction to frame AI initiatives and other strategic shifts.

These examples show how a narrow linguistic device can become a standard tool in corporate storytelling, especially when teams need to explain complex, tech-driven changes in a punchy, policy-friendly way. The pattern’s everywhere now, reflecting how much the industry leans on generative models to produce or polish formal communications, sometimes at the cost of a distinctive authorial voice.

Quantifying the trend across corporate documents

Barron’s looked at AlphaSense data and found a sharp jump in this construction across all sorts of documents, from news releases to earnings reports and government filings. The frequency climbed from roughly 50 occurrences in 2023 to more than 200 in 2025.

This surge shows just how widespread the pattern’s become in corporate communications, where everything’s supposed to sound clear, structured, and future-focused. It’s a little uncanny, honestly.
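
For readers curious how such a tally might be produced in practice, here is a minimal sketch in Python. The regular expression, the toy corpus, and the count_construction helper are all hypothetical illustrations; the article doesn’t describe AlphaSense’s actual methodology.

```python
import re
from collections import Counter

# Hypothetical pattern for the "not just X, it's Y" construction:
# "not just", up to 80 non-sentence-ending characters, then a dash,
# comma, or semicolon, then "it's". A real study would need far more
# careful matching than this.
CONSTRUCTION = re.compile(
    r"\bnot\s+just\b[^.?!]{0,80}?[\u2014\u2013,;-]+\s*it['\u2019]?s\b",
    re.IGNORECASE,
)

def count_construction(documents):
    """Tally hits of the construction per year across (year, text) pairs."""
    counts = Counter()
    for year, text in documents:
        counts[year] += len(CONSTRUCTION.findall(text))
    return counts

# Toy corpus standing in for news releases, earnings reports, and filings.
corpus = [
    (2023, "Our platform is reliable and fast."),
    (2025, "It's not just a tool — it's a strategy."),
    (2025, "This is not just automation, it's transformation."),
]

print(count_construction(corpus))  # Counter({2025: 2, 2023: 0})
```

Even a toy counter like this makes the scale of the reported jump easy to picture: the same pattern, run over years of filings, turns a stylistic hunch into a trend line.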

What the data suggests about AI reliance

Researchers and observers warn that spotting the phrase doesn’t prove a text is AI-written. Max Spero, CEO of Pangram, cautions that even when a marker appears at a high base rate, no one should conclude a text is machine-generated from a single stylistic feature.

Corporate documents, shaped by policy requirements and a formal tone, show a high rate of AI-assisted phrasing. That reflects a growing reliance on generative models for routine communications, where efficiency matters more than personal voice or creativity.

Ethical and cultural implications

Silberling also raises ethical worries about models training on writers’ work without clear permission. If corporate and public texts shape future AI outputs, then consent and fair use become crucial debates for researchers, companies, and policymakers.

Pangram’s perspective, added in the article’s update, suggests this isn’t just a linguistic fad. It’s a sign of deeper shifts in how writing gets made, regulated, and valued in a data-driven world. There’s a lot left to untangle here, isn’t there?

Implications for policy, practice, and perception

As AI-assisted writing gets more common, organizations run into all kinds of practical and strategic challenges.

Here are a few implications for practice and governance:

  • Quality control: Using AI for drafting can speed things up, but it can also chip away at a team’s distinctive voice or create stylistic inconsistencies between departments.
  • Transparency: Companies may need to disclose when AI helps craft their communications, especially in materials aimed at regulators or investors.
  • Ethical standards: Organizations need clear rules about training data, including whether it’s acceptable to train models on writers’ work without permission.
  • Detection and accountability: Auditors can use linguistic markers as clues, though they’re not foolproof; a minimal sketch of such a screen follows this list. Governance should focus on outcomes and whether teams follow the rules.
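
To make the “clues, not proof” point concrete, here is a hedged sketch of what a naive marker-based screen might look like. The marker list, weights, and scoring are invented for illustration and bear no relation to how production detectors such as Pangram’s actually work.

```python
import re

# Hypothetical stylistic markers with made-up weights.
# These are clues, not proof: plenty of human writers use both.
MARKERS = {
    "not_just_its": (re.compile(r"\bnot\s+just\b.{0,80}?\bit['\u2019]s\b",
                                re.IGNORECASE | re.DOTALL), 2.0),
    "em_dash": (re.compile(r"[\u2014\u2013]"), 0.5),
}

def marker_score(text: str) -> float:
    """Weighted marker hits per 100 words; higher means 'worth a closer look'."""
    words = max(len(text.split()), 1)
    raw = sum(weight * len(pattern.findall(text))
              for pattern, weight in MARKERS.values())
    return raw * 100 / words

sample = "It's not just a product — it's a platform."
print(f"marker score: {marker_score(sample):.1f}")
# A high score only flags the text for human review; it never settles authorship.
```

Even a high score here only tells an auditor where to look. Confirming how a document was actually produced takes process-level evidence, which is why the governance bullet above emphasizes outcomes and policy compliance rather than stylistic tells.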

Honestly, when you see phrases like “It’s not just this — it’s that” and the heavy use of em-dashes, it’s more than just a style thing. It hints at a bigger shift: generative AI-enabled writing is changing how organizations communicate, and it’s shaking up ethics and policy in ways we’re still figuring out.

 
Here is the source article for this story: “It’s not just one thing — it’s another thing”
