Context
- The United States has withdrawn its AI export control plan, the Framework for AI Diffusion. This is a welcome move: the framework was widely regarded as harmful both to AI growth and to global relations.
- However, recent developments indicate that U.S. controls on AI will continue, only in different forms.
AI Diffusion Framework
- In its final days, the Biden administration introduced the AI Diffusion Framework, treating AI like nuclear technology.
- It imposed export controls and licenses, favoring allies and restricting adversaries like China and Russia.
- Rationale Behind the Framework
  - The U.S. believed that computational power is the key driver of AI capability.
  - To maintain its edge, it aimed to limit adversaries’ access to powerful compute while centralizing AI development within allied nations.
- Counterproductive Impact of the Framework
  - While trying to tighten controls, the Framework unintentionally harmed global cooperation, even among allies.
  - It prompted partners to seek independence from the U.S. tech ecosystem, undermining trust and collaboration.
- Mischaracterization of AI as Military Tech
  - Unlike nuclear technology, AI is primarily a civilian technology with military applications.
  - Treating it as a defense tool restricted innovation, which is international and collaborative by nature.
- Innovation Driven by Restriction
  - Tight controls pushed restricted countries to innovate around them.
  - China’s DeepSeek R1 is a product of such restrictions: it achieved top-tier AI performance with far less compute, undercutting the effectiveness of export controls.
- Revocation and the Road Ahead
  - The Trump administration revoked the framework because of these flaws.
  - This benefits countries like India, which the framework had disadvantaged.
  - However, the U.S. approach to controlling AI diffusion, especially towards China, is expected to continue in new forms.
The Possible Replacement
- Continued Efforts Despite Framework Withdrawal
  - Although the AI Diffusion Framework has been rescinded, the U.S. continues to tighten controls on Chinese access to AI chips.
- Expansion of Export Controls
  - In March 2025, the U.S. broadened export restrictions and added more companies to its blacklist, reinforcing efforts to limit AI chip access.
- Hardware-Based Monitoring Measures
  - The administration is considering new on-chip features to monitor and restrict AI chip usage, targeting functionality and specific applications.
  - New U.S. legislation proposes built-in location tracking in AI chips to prevent their diversion to countries like China and Russia.
- Shift from Trade to Technological Enforcement
  - Rather than relying solely on trade restrictions, the U.S. now aims to achieve the framework’s goals through technological controls embedded in AI hardware.
Emerging Concerns with New Control Measures
- Technologically enforced AI controls raise issues of ownership, privacy, and surveillance.
- These measures may deter legitimate users while failing to stop malicious actors.
- Undermining Trust and Autonomy
  - Such controls reduce user autonomy and create trust deficits.
  - Nations, even allies, may fear losing strategic autonomy and seek alternatives to U.S.-based AI systems.
- Tactical Shift, Not Strategic Change
  - The withdrawal of the AI Diffusion Framework reflects a tactical adjustment, not a change in the core U.S. strategy of controlling AI proliferation.
- Risk of Repeating Past Mistakes
  - If technologically driven controls are fully adopted, they could reproduce the same harmful outcomes as the rescinded framework, weakening global trust in U.S. leadership.
Conclusion: A Missed Opportunity for Strategic Reflection
- Persisting with control-based policies signals that the U.S. has not fully absorbed the lessons of the Framework’s failure, potentially undermining its global AI leadership.