The Case for Deregulating AI – and Its Inherent Risks

The following piece is part of the U.S.-India AI Fellowship Program’s short-form series.

By: Honson Tran

In his remarks at the AI Action Summit in Paris, U.S. Vice President J.D. Vance urged other world leaders to embrace a more ambitious vision for artificial intelligence based on deregulation, market agility, and strong collaboration with allies rather than risk aversion in developing AI systems. He called for reducing hurdles that could slow the pace of innovation – such as stringent bureaucratic oversight – so that cutting-edge products can enter markets more quickly. This ‘build-first’ approach would also give U.S. AI companies more influence in setting global standards. In addition, Vance questioned whether those pushing for AI safety and advocating stronger guardrails were driven by competitors seeking market dominance rather than by the public interest. Vance also conveyed urgency about maintaining American AI leadership against adversaries, who could be far more accepting of a “do now, worry later” approach than the United States.

To some degree, Vance’s message resonated with others, including his host. French president Emmanuel Macron stated that it should be a wake-up call for Europe to consider regulatory reform or risk being left behind. Macron raised the fear that there might soon be nothing left for Europe to regulate, emphasizing the need for Europe to gain a stronger footing in AI before turning to regulation. Deregulation and shared standards among allies have the potential to open up talent and research flows between nations. Already, France and the EU have committed to cutting red tape and reducing regulation.

But there are reasons to remain concerned about a “do now, worry later” perspective when it comes to emerging technologies. For example, nuclear fission was discovered inadvertently, not developed intentionally for an atomic bomb. Similarly, the steam engine was originally invented to pump water out of coal mines, but became the primary locomotive engine of the 19th and early 20th centuries. A given technology serves as a stepping stone to many others, some that benefit society and some that can harm it. In other words, knowledge is neutral; it is the application that carries the ethical weight. It is possible that the transformer model from “Attention Is All You Need” will be seen as AI’s “nuclear fission” moment, hyper-accelerating the passage from science fiction to reality. This knowledge unlocked a new era of AI and a tsunami of applications, severely complicating the task of regulating and restricting artificial intelligence technologies.

In essence, the concern about AI safety is not about preventing discoveries but about managing the rate of discovery and development, and proactively addressing the potential for misuse. A global agreement on that pace is needed. To address concerns about adversaries gaining the AI advantage, companies and nations must work harder to ensure that safety is not seen as a bottleneck in the AI race.

Vance ended his speech by invoking the opportunity of a lifetime to capture “lightning in a bottle,” calling on the United States and its allies to seize the moment and join forces in the AI revolution. By championing deregulation and questioning the established safety-first narrative, Vance laid out a vision that empowers innovators and challenges them to balance rapid growth with ethical responsibility. As private firms continue to drive technological progress, the ability of the United States and its allies to navigate the delicate balance between progress and principles is critical to ensuring that AI augments humanity without harming it.

Honson Tran is part of the U.S.-India AI Fellowship Program at ORF America. He is currently a Developer Experience Lead at Latent AI.