Anthropic’s analysis puts frontier AI right at the heart of a global political technology race. The next few years will show whether democracies or authoritarian regimes—especially the Chinese Communist Party (CCP)—end up setting the rules for transformative AI.
The main point? Compute power—the chips, data centers, and the whole infrastructure behind cutting-edge models—looks like the deciding factor in who sets safety norms, shapes governance, and gains the upper hand. China does have world-class talent, cheap energy, and huge datasets. But for now, it's still mostly held back by limited access to top-tier compute and advanced semiconductor supply chains.
The piece sketches out two possible scenarios for 2028. It argues that the policy choices leaders make right now could tip the scales toward democracies or the CCP when it comes to setting the future rules for AI governance.
Compute as the decisive input for frontier AI
Anthropic makes it clear: frontier models aren’t just technical marvels. They could transform a range of sectors—semiconductors, biotech, materials, you name it—and accelerate the rollout of military and cyber capabilities.
The real edge comes from access to compute and the infrastructure that supports it. Democracies, led by the U.S. and partners, have built a serious advantage with their chipmaking ecosystems and tightly woven supply chains. Current export controls have helped keep that lead intact.
Two 2028 scenarios: democracy-led safety norms vs CCP-led AI norms
- Scenario A – Democratic lead solidifies safety governance: Democracies tighten controls, disrupt distillation attacks, and keep a 12–24 month edge in frontier AI. With open collaboration and strong international norms, they shape global safety standards and block techno-authoritarian dominance.
- Scenario B – Inaction or limited progress: CCP-linked labs start catching up, using their strengths in talent and data, but still run into compute bottlenecks. If democracies don’t act decisively, authoritarian-led AI norms could gain ground, pushing censorship, surveillance, and military modernization even further.
Anthropic notes that these scenarios aren’t set in stone; they’re real possibilities whose likelihood depends on policy choices, export-control enforcement, and how open research and safety cooperation play out. The CCP already uses AI for censorship, surveillance, and cyber operations. If governance falls behind, a more capable AI frontier could make those capabilities even more powerful.
Risks and leverage points in a tense strategic landscape
The report highlights a few big risk channels. First, distillation attacks—techniques for replicating a frontier model’s capabilities by training a cheaper model on its outputs—become a threat if safeguards are weak or misaligned.
Second, export-control loopholes could let Chinese labs absorb U.S. innovations, eroding the technological lead democracies have worked to maintain. Third, open-weight releases of AI models put powerful tools in the hands of more actors than ever, raising the stakes for dual-use threats.
Anthropic also points out the ongoing dominance of U.S. and allied chipmakers—NVIDIA, TSMC, ASML, and others—as a key advantage. Tightening export controls and boosting allied coordination could widen the compute gap for democracies and cut the risk of dual-use advances getting into adversarial hands.
Policy implications and governance strategies
Strategic emphasis needs to stay on maintaining a capability lead to shape global AI governance. Democracies should pursue a coordinated approach that mixes strong safety research, smarter export controls, and international norms-setting to prevent techno-authoritarian entrenchment.
Engaging with Chinese AI experts on safety is valuable. But that engagement has to go hand in hand with steps to preserve a democratic edge in compute, chip manufacturing, and infrastructure security.
So, how do we move from ideas to action? Policymakers should consider a few key moves:
- Strengthen and align export controls with allies to protect high-end compute access, but avoid needless friction that slows legitimate research.
- Invest in domestic and allied semiconductor capability and data-center infrastructure to keep the compute lead strong.
- Expand international safety collaboration and governance to shape norms around model safety, risk assessment, and managing dual-use issues.
- Promote responsible open research, but also reduce distillation risks by improving governance of model weights, training data, and evaluation standards.
Key takeaway: Frontier AI is set to change the landscape of strategic advantage and risk. Democracies need to act decisively on compute access, export controls, and global governance so that safety norms actually keep up with rapid capability growth.
Here is the source article for this story: 2028: Two scenarios for global AI leadership