The following piece is part of the U.S.-India AI Fellowship Program’s short-form series.
By: Raj Shekhar
The pursuit of Artificial Intelligence (AI) has rapidly emerged as a top strategic priority for leaders in business and politics globally. Both public and private-sector investments in AI are at an all-time high. While enterprises and startups are exploring opportunities to commercialize AI for growth and profitability, governments are focused on developing national capabilities to enhance or retain global competitiveness and safeguard national security and interests. Only time will tell whether such enthusiasm surrounding AI is truly warranted. For now, AI’s contribution to the global economy is estimated to reach trillions of dollars by the end of this decade, with its creators often likening its development to the Manhattan Project — making fears over its control and diffusion all but inevitable. This article explores how the recent U.S. AI export control rules could impede international cooperation on the responsible governance of AI, with far-reaching implications for the United States, the Majority World, and the global economic and security landscape.
The control conundrum
Currently, firms in the United States control the largest share of the global market for several key components of the AI technology stack — primarily AI model weights (parameters that determine model performance) and advanced computation chips (specialized graphics processing hardware that enables models to run complex, parallel computations). The United States, therefore, remains in a privileged yet potentially precarious position to determine who has access to the core building blocks of AI within its control, and under what terms and conditions. Any such determinations are bound to have serious economic and security implications, both direct and indirect, not only for people, businesses, and institutions within its borders but also for populations and industries worldwide. This, of course, assumes that AI’s economic and humanitarian potential and the security threats from its unrestricted proliferation are not overstated. The U.S. leadership should therefore thoroughly evaluate and re-evaluate these determinations before embedding them into its foreign policy.
The series of AI export controls introduced by the United States between October 2022 and December 2024 — which primarily targeted countries like China that do not seem to align with the American commitment to protecting democracy, human rights, and a rules-based international system — was understandable. Moving forward, it would also be plausible for the United States to extend the controls to countries acting as intermediaries that help China, Russia, or other countries under the U.S. arms embargo bypass them — so long as the identification of such countries is based on evidence rather than speculation. However, the much more recent AI export control rules introduced in the final days of the Biden administration reflect an excess of fear and unchecked paranoia.
This new set of rules imposes a byzantine set of restrictions on access to U.S.-produced AI chips and model weights for approximately 150 countries — covering almost the entire Majority World. One could think of these countries as having “weaker alignments with U.S. interests, stronger ties to U.S. adversaries, and more transactional foreign policies,” as some pundits have suggested. This characterization is overbroad and cannot be taken at face value, especially considering that India, too, has been subject to these restrictions, despite being a member of the Quad and having cultivated a strong strategic partnership with the United States in recent years. The decision to restrict nearly three-quarters of the world’s population from accessing critical U.S. AI infrastructure warrants further justification. Without it, the measure appears disproportionate and ill-advised, with potentially serious consequences for the global economic and security landscape.
The new AI export control rules have already begun to fuel a climate of fear and distrust within the global AI ecosystem. Not only do the rules threaten to stifle the U.S. AI chip market, but they also run counter to the positive role the Trump administration could play in addressing the profound inequities within global AI supply chains and advancing global harmonization of AI trust and safety standards for a safer, more secure, and more equitable world. As a result, Washington could be viewed as abdicating this crucial role and disregarding the G7 consensus and the United Nations General Assembly resolutions of March 2024 and June 2024, which aim to bridge the AI and digital divides pervading our world today. Through its new AI export control policy, the United States risks weakening its geopolitical position, damaging its reputation as a reliable and trusted technology partner for the Majority World, and forfeiting key levers that could help reinforce its AI industry’s global competitiveness and shield its interests from both short-term and long-term economic and security vulnerabilities.
The diffusion dividend
By extending critical infrastructure and application development support aimed at bridging the global AI divide through strategic cooperation with countries like India and Brazil, the United States could strengthen its geopolitical position and enhance trust in its technological leadership — while expanding its network of strong allies and mitigating the risk of rival coalitions, such as those led by China, emerging in the global AI race.
Further, this support from the United States to various countries or regions could be conditioned on their cooperation with Washington on key AI safety and security initiatives, solidifying U.S. leadership in shaping global regulations and standards for AI. This could reduce the risk of theft or diversion of critical U.S. AI infrastructure and help create a more favorable and predictable global regulatory environment for American AI companies. The United States could also condition its support on agreements for cross-border sharing of data to support its AI research and development activities — suggestions are already circulating in the press for India to “marshal its continental size datasets and leverage them to negotiate better access to Graphics Processing Units (GPUs)…[from the United States].” Additionally, the United States could secure import-duty-free market access for its AI products and services in foreign markets, along with opportunities for American entities to establish research or manufacturing facilities in regions with cost advantages in AI talent, such as India — home to a rapidly growing pool of skilled professionals in data science and machine learning. By leveraging these strategies, the United States could make its firms more competitive globally, accelerating the growth and expansion of its AI industry.
The United States could even link its support to ‘more expansive [non-AI] purposes.’ For instance, in exchange for advanced computing, Washington could seek cooperation from countries on key U.S. foreign policy priorities, such as reducing dependency on Chinese imports or lowering tariffs on U.S. exports. Finally, U.S. support, by allowing more AI-lagging countries to make strides toward AI-readiness, could unlock new markets for U.S. AI hardware, software, and solutions. It could also enrich the global pipelines of AI talent and high-value datasets that the U.S. AI industry could leverage to pursue new AI frontiers.
The ultimate fallout
If Washington proceeds to implement its new AI export control policy, responsible governance of AI could devolve into an idealistic aspiration. Without a swift and credible alternative to the United States leading efforts to advance AI equity and safety, countries around the world are likely to face economic setbacks and be pushed to reconsider or realign their priorities and partnerships to meet their own fiscal and developmental objectives. This could mean the marginalization of ethics and safety considerations in AI development and the derailment of meaningful international cooperation on the responsible governance of AI, exposing the world to far-reaching economic and security risks. The Trump administration should, therefore, revisit the policy before the situation escalates.
Raj Shekhar is part of the U.S.-India AI Fellowship Program at ORF America. He is currently the Responsible AI Lead at the National Association of Software and Services Companies.