Overview: This blog post distills a recent article that pokes at Ed Zitron’s ongoing AI skepticism. It traces his shift from economic doubt to fraud allegations, weighing those claims against signals of AI adoption, value creation, and changing business models in 2026.
From economic doubt to fraud allegations: Zitron’s evolving stance
Early on, Zitron argued that AI offered limited utility and came with high costs. That was a fair concern in 2024, when models were still maturing and inference costs hadn’t dropped yet.
The article points out that since then, model capabilities and cost efficiency have improved dramatically. By 2026, Zitron’s focus shifted to accusing firms like OpenAI and Anthropic of fraud—a move the author sees as a sign that the old economic arguments just don’t stick anymore.
Shifting from economic skepticism to conspiracy theorizing says more about Zitron’s narrative than about the market itself. The piece pushes back on using anecdotes about unprofitable heavy users or executive missteps as evidence of systemic fraud, since subsidizing heavy users is standard business practice and doesn’t mean the whole sector is doomed.
What the data say about AI adoption and value
The article highlights some clear signals of mainstream adoption and value:
- The cost of GPT-4-level capability has dropped roughly 1,000x since GPT-4’s launch, making practical use far more feasible.
- Roughly 30% of Fortune 500 companies now have some kind of enterprise AI deal, which shows real corporate uptake.
- Over half of Americans use chatbots every week—so consumer demand and familiarity are definitely there.
Economic risk, sustainable models, and the fraud narrative
The article argues that Zitron’s turn to conspiracy overlooks real, well-known economic risks in AI: winner-take-all dynamics, high capital requirements, and fierce competition for compute and talent. Analyses such as those from Epoch AI do raise concerns about recouping model-training costs before newer models arrive, but these are risks, not proof of fraud.
The author argues that skeptics should focus on real financial risks rather than sweeping accusations. A rigorous critique would test business models and capital plans against actual data instead of declaring fraud without solid evidence.
Where AI adds value today: concrete gains and prudent skepticism
Despite all the back-and-forth, the article points out that AI delivers real benefits today, especially in coding and software development. Paying users often see clear advantages, which undercuts the claim that AI has no utility at scale.
Still, the article pushes for ongoing, thoughtful scrutiny of business models, capital spending, and competition to make sure AI’s growth is sustainable.
- Coding productivity boosts save developers time and cut down on errors.
- Enterprise AI adoption keeps growing among big firms, leading to new workflows and clearer ROI.
- Consumer engagement with chatbots stays strong, showing that people want interactive, accessible AI tools.
A principled path forward: balanced critique in a fast-moving field
Let’s be honest: skepticism only works if it’s rigorous and grounded in real financials and competitive data. It’s not enough simply to question AI; critics should examine whether revenue, margins, and capital planning can withstand tough competition and rapid technological change.
Progress in this space is real, but it deserves ongoing scrutiny. Any critique ought to stick to evidence, not just speculation.
Here is the source article for this story: AI’s biggest critic has lost the plot