China’s AI Surge Means U.S. Struggles to Compete in Chips

This article looks at the US export controls imposed in 2022 on advanced semiconductors used for AI. The goal was to slow China’s progress in AI, but so far the results have been underwhelming.

The strategic race has shifted. It’s less about building the biggest models and more about actually putting AI to work in the real world. Some experts are now suggesting a global AI-safety pact with China, seeing it as a more practical way to curb dangerous capabilities.

With three decades in AI policy and tech deployment, the author leans toward governance and collaboration. These might just offer more lasting protection than simply trying to wall off technology.

Background: the export-control approach and the China challenge

The Biden administration tried to slow China’s AI growth by blocking exports of advanced semiconductors. These chips are essential for training and running large models.

But progress in AI isn’t just about having the best chips. Chinese developers got creative. They used foreign data centers, hid the origins and ownership of their systems, and sidestepped some of the intended bottlenecks.

On top of that, shifting policies from different US administrations made the strategy even less clear. No one seemed sure what would come after hardware restrictions.

Why export controls failed to stop progress

Export restrictions alone can’t hold back an entire ecosystem. Even with limited access to top chips, Chinese teams kept moving forward.

They leaned on foreign data centers, split up workloads, and used clever replication tricks. Distillation, in which a smaller model is trained to imitate the outputs of a more capable one, let them approach frontier performance using clusters of less-powerful chips.

Every time a leading model dropped, Chinese teams often managed to reverse-engineer it and put out solid copycats. The old idea of one country making a sudden, runaway leap in AI? That’s looking pretty shaky now, with fast followers closing the gap.

  • Training and running models on foreign infrastructure to dodge local bottlenecks
  • Hiding where models come from to cover their tracks
  • Using lots of less-powerful chips plus distillation to get close to top-tier performance
  • Quickly reverse-engineering new public models
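
To make the distillation point above concrete: the core idea is training a student model to match a teacher’s softened output distribution. The sketch below is a generic, minimal illustration of the standard distillation loss, not a description of any specific lab’s pipeline; the function names and toy logits are made up for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the softened teacher distribution to the student's.

    Minimizing this trains the student to reproduce the teacher's output
    probabilities, which is the essence of knowledge distillation.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return float(np.sum(t * (np.log(t) - np.log(s))))

# Toy logits: a student whose outputs track the teacher's incurs a
# lower loss than one that disagrees with it.
teacher = np.array([[4.0, 1.0, 0.5]])
close_student = np.array([[3.5, 1.2, 0.6]])
far_student = np.array([[0.5, 3.0, 2.0]])
```

In practice this loss would be backpropagated through the student during training; the teacher’s weights stay frozen, which is why distillation lets cheaper hardware approximate an expensive model’s behavior.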

Beyond the model: deployment as the new battleground

The real edge now isn’t about building the biggest model first. It’s about weaving AI into business processes and military systems in ways that actually matter.

Deployment shapes how well AI performs in the real world, how much money it saves, and who gets a strategic advantage. When AI becomes part of day-to-day workflows or defense systems, the speed and reliability of those AI-driven decisions start to matter more than model size.

Deployment touches everything—operational efficiency, making tough decisions under uncertainty, and even autonomous military capabilities. The focus is shifting. It’s less about flashy model breakthroughs and more about getting robust, safe, and scalable AI into the places that count.

Implications for policy and practice

Policy now has to tackle safety, governance, and interoperability as AI gets embedded deeper into daily life and defense. Without real safeguards, fast deployment could make risks worse—think misinformation, manipulation, or autonomous weapons.

The big question isn’t just, “Who can train models fastest?” It’s, “Who can build trustworthy, accountable AI systems that actually work at scale?”

A proposal on the table: a global AI-safety pact with China

Since export controls can’t really stop progress, the author argues for a global AI-safety pact with China. The idea is to set universal limits on dangerous capabilities, accepting that supply-side restrictions only go so far.

This kind of agreement would focus on shared safety principles, real commitments, and ways to prevent escalation—both in business and the military.

What would a safety pact look like?

A workable AI-safety pact could include things like:

  • Agreement on key safety standards for development and deployment
  • Mutual promises of transparency and verification mechanisms for critical systems
  • Rules to limit certain dangerous capabilities, with benchmarks for autonomous weapons and deceptive AI
  • Joint governance involving industry, academia, and the military
  • Dispute resolution and enforcement to keep cooperation on track

The pact would admit that export controls matter, but they aren’t enough. International collaboration on safety could cut risk without shutting down innovation. It’s a shift from pure competition to a more negotiated, hopefully stable, approach to global AI development.

What this means for the future of AI policy

For the US and the world, the way forward blends strategic diplomacy, safety-focused governance, and responsible innovation. Focusing on deployment safety, shared standards, and real diplomacy with China could lower the risk of dangerous AI while keeping the door open for progress.

This approach accepts that we’re living in a multipolar AI world. It tries to line up incentives for safer, more accountable AI systems, even if it’s a bit messy along the way.

Key next steps

  • Start bilateral talks with China that really focus on AI risk governance and safety commitments.
  • Work on multilateral frameworks and make sure to bring in other major AI developers.
  • Match up export-control policies with binding safety commitments, plus real verification mechanisms.
  • Put more resources into safety research and set deployment standards for both industry and government.

Here is the source article for this story: Opinion | I Went to China to See Their Progress on A.I. We Can’t Beat Them.
