The article "The AI Sovereignty Paradox at Home and Abroad" by Michael Froman examines the emerging AI sovereignty paradox and its implications for global governance and security.
At its core, the AI sovereignty paradox reflects a fundamental tension: whether governments can maintain control over artificial intelligence while relying on private firms that develop and manage these technologies.
The AI sovereignty paradox in the United States
In the United States, the AI sovereignty paradox is evident in the conflict between the Pentagon and private AI firms such as Anthropic. On the one hand, the government seeks unrestricted access to advanced AI systems for national security purposes. On the other hand, companies impose safeguards to prevent misuse, including restrictions on mass surveillance and autonomous weapons.
As a result, a key question emerges: can a state be fully sovereign if private firms limit how critical technologies are used? At the same time, removing these constraints raises concerns about excessive government power.
Therefore, the AI sovereignty paradox highlights a dual risk—either diminished state authority or unchecked technological power in the hands of governments.
Global inequality and technological dependence
Beyond the United States, the AI sovereignty paradox takes a different form. Many countries are not debating control over domestic AI champions but rather struggling to access advanced technologies.
For example, countries such as India are prioritizing equitable access, inclusive growth, and climate applications of AI. However, most nations depend heavily on U.S. and Chinese firms across the AI stack, including chips, cloud infrastructure, and foundational models.
Moreover, technological capacity is highly concentrated. The United States accounts for roughly 75 percent of global AI computing power, while China represents about 15 percent. Consequently, other countries face structural dependence that limits their ability to achieve true technological sovereignty.
Fragmented governance and policy challenges
The AI sovereignty paradox is further complicated by the absence of unified global governance. In the United States, a laissez-faire approach prioritizes innovation, while other regions pursue regulatory frameworks focused on safety and accountability.
Instead of convergence, however, a fragmented system is emerging. Different countries and regions, including the European Union and India, are developing distinct regulatory models. As a result, global AI governance is becoming increasingly complex and inconsistent.
This fragmentation may create operational challenges for companies, which must adapt to multiple regulatory environments. At the same time, it introduces friction that could slow the global expansion of AI technologies.
Conclusion
Overall, the AI sovereignty paradox underscores a central dilemma in the age of artificial intelligence.
Governments must balance national security, technological control, and democratic accountability, while also navigating global inequalities and fragmented governance systems. Ultimately, resolving this paradox will require new frameworks that reconcile innovation with sovereignty at both national and international levels.
Reference
Froman, M. (2026). The AI sovereignty paradox at home and abroad. Council on Foreign Relations. https://www.cfr.org/articles/the-ai-sovereignty-paradox-at-home-and-abroad
