
The future of AI policy in the United States under Trump 2.0


With the 2024 election cycle wrapped up, the United States is poised for a substantial policy shift: President-elect Donald Trump is set to become the 47th president, and Republicans have secured control of the Senate and potentially the House of Representatives as well. This shift in power positions Trump and his allies in Congress to redefine AI’s regulatory landscape. Advocating minimal government interference, Republicans are expected to push for substantial changes to Biden’s AI policy framework, which could radically alter the trajectory of AI governance in the U.S. The change comes at a critical time for an AI industry facing complex challenges that transcend partisan divides. Here is a look at the potential implications of a Trump administration for U.S. AI policy.
Biden’s AI framework, solidified through an executive order (AI EO) in October 2023, focused on voluntary guidelines rather than strict mandates. This policy tackled issues from advancing AI in healthcare to addressing intellectual property risks. It also sought to bolster transparency and security by requiring companies to disclose model training and testing processes. The establishment of the U.S. AI Safety Institute (AISI) within the Department of Commerce’s National Institute of Standards and Technology (NIST) marked a significant step toward prioritizing AI safety and mitigating potential societal impacts.
However, these regulations have not gone unchallenged. Critics from Trump’s camp argue that the Biden administration’s requirements, including reporting obligations, could stymie innovation and put proprietary information at risk. During a House hearing, Representative Nancy Mace (R-SC) expressed concerns that Biden’s mandates could “scare away would-be innovators,” a sentiment shared by JD Vance, Trump’s vice president-elect. Vance has been a vocal proponent of loosening federal control on AI to prevent existing tech giants from monopolizing the industry under the guise of safety and security.

Trump’s AI Vision: Light Touch and Deregulation

If history is any guide, Trump’s approach to AI is likely to prioritize innovation over regulation. His previous administration laid the groundwork for federal investment in AI research while emphasizing “trustworthy” AI applications that align with American values. Yet, Trump’s rhetoric during the recent campaign suggests he may scrap Biden’s executive order altogether, potentially eliminating the AISI and rolling back oversight structures aimed at addressing ethical concerns.
Trump’s allies are particularly skeptical of NIST’s AI safety standards, which some Republicans have described as politically biased. Senator Ted Cruz (R-TX) has openly criticized the standards as “woke” and likened them to attempts at censoring conservative viewpoints under the guise of disinformation prevention. This ideological clash highlights a significant challenge for AI regulation: balancing free expression with the need to address AI-driven misinformation and social harms.
While Trump’s AI policy specifics remain sparse, he has emphasized support for “AI development rooted in free speech and human flourishing,” suggesting an inclination toward less restrictive oversight and minimal federal intervention. Some Republicans, however, are open to safety-focused guidance, particularly in cases where AI could pose physical risks, such as in military or bioweapons applications.

States Take the Lead: Localized AI Governance

With the federal government potentially scaling back on AI oversight, state governments could step in to address gaps in regulation. This trend is already visible, with Democratic-led states implementing their own AI policies. California, for instance, has enacted numerous AI safety bills requiring companies to disclose their training data and methodologies, and Colorado has taken a tiered, risk-based approach to AI deployment. Tennessee has also led efforts to protect voice artists from unauthorized AI cloning.
A Trump-led federal rollback could intensify these state-led initiatives, potentially creating a fragmented regulatory environment where companies must navigate varied state laws. Dean Ball, a research fellow at George Mason University, expects states like California and New York to advance their own AI standards, likely focusing on transparency and accountability.
While state policies may partially offset federal inaction, this decentralized approach could complicate compliance for AI developers, particularly those operating across multiple jurisdictions. The Biden administration’s focus on cohesive national standards for AI safety may be difficult to maintain if Trump dismantles Biden’s framework, leaving states to fill the regulatory void.

Trade and Tariffs: Economic Implications for AI

AI’s evolution under a Trump administration may also be shaped by international trade policies, particularly toward China. Trump has expressed concerns over China’s AI ambitions, and his protectionist stance suggests that tighter export controls could be on the horizon. AI research relies heavily on advanced hardware, much of which is manufactured abroad. Import restrictions or tariffs could disrupt supply chains and increase development costs, impacting the AI sector’s economic trajectory.
While the Biden administration introduced export controls on AI technologies to limit China’s access, Chinese companies reportedly circumvent these measures by using U.S.-based cloud services. Trump’s proposed tariffs on Chinese tech products could further strain these channels, pushing AI firms to seek costly alternatives.
Hamid Ekbia, a public affairs professor at Syracuse University, warns that such restrictions could hinder international cooperation on AI, which is increasingly necessary to manage AI’s global impact. Trade barriers, he suggests, may not only affect U.S.-China relations but also slow AI progress worldwide by limiting access to crucial resources and impeding cross-border research.

The Need for Bipartisan AI Strategy

Despite the political divide, experts caution against making AI policy a partisan battleground. Sandra Wachter, a data ethics professor at Oxford, underscores that AI’s risks are universal and require cooperation beyond party lines. Issues such as misinformation, privacy concerns, and potential economic displacement transcend political ideologies, affecting people globally.
Yet, given Trump’s inclination to repeal Biden’s executive order and replace it with a light-touch regulatory approach, the immediate future of U.S. AI policy appears uncertain. While the AI industry may welcome reduced federal intervention, the lack of a clear, cohesive strategy may lead to fragmented policies across states and missed opportunities for addressing pressing ethical and safety concerns.
As AI continues to reshape industries and societies, the U.S. government’s approach to regulation — or lack thereof — will likely influence the trajectory of AI innovation, both domestically and globally. Whether Trump’s administration can balance industry growth with responsible oversight remains to be seen, but the stakes are high.

The author is a Research Fellow at India Foundation.

