With the U.S. presidential election imminent, the impact on AI regulation is being closely scrutinized by industry leaders.
Candidates Vice President Kamala Harris and former President Donald Trump each vow to foster AI innovation, yet the broader political landscape and inconsistent state regulations may hamper the nation’s global influence in AI governance.
Technology executives are navigating a complex regulatory landscape as they work to comply with emerging policies; according to recent PwC findings, a majority say that keeping pace with evolving legislation is a substantial hurdle.
The U.S., home to many of the world’s premier AI companies, is in a unique position to shape global standards in AI. However, experts suggest that partisanship and the fragmented nature of state-level rules currently diminish the country’s potential for leadership in this domain.
The next administration will inherit strong demand for regulatory clarity that manages AI’s inherent risks without hindering innovation. Both Harris and Trump have signaled support for policies that would drive domestic AI development, yet their approaches differ significantly.
The Harris-Walz campaign is building upon the existing AI executive order issued by President Joe Biden in October 2023, while the Trump-Vance agenda advocates for repealing key aspects of that framework.
Whatever the election’s outcome, analysts suggest that bipartisan agreement on AI regulation is unlikely. A lighter regulatory stance might accelerate AI growth, but concerns remain about adequate safeguards against misuse.
Internationally, countries are moving forward with stringent AI policies. The European Union’s AI Act, which entered into force in 2024 with obligations phasing in over the following years, imposes significant requirements on companies operating within its borders, and countries like China, Canada, and Brazil are progressing on their own regulatory frameworks for high-risk AI systems.
While the U.S. is expected to play a role in AI regulation through industry standards, state laws, and federal proposals, analysts suggest that it is unlikely to achieve the coherence seen in the EU’s regulatory structure.
Currently, AI does not take center stage in U.S. campaign platforms or policy discussions, though it remains relevant in the broader context of future manufacturing and workforce dynamics. Experts anticipate AI’s prominence in political discourse will grow in the coming years as sector-specific initiatives influence regulatory considerations.
Recent efforts by the Department of Labor, tasked by the Biden administration, underscore the government’s commitment to worker well-being, as outlined in new guidance for AI developers. The Harris-Walz campaign has indicated plans to further these commitments while also pledging investments in advanced technologies to bolster American manufacturing. In contrast, the Trump-Vance platform emphasizes the use of sophisticated technology for border security and military objectives, illustrating the candidates’ divergent views on AI’s role in U.S. policy.
Historical trends suggest comprehensive federal AI legislation could take years to materialize. The EU’s General Data Protection Regulation, effective since 2018, has influenced U.S. state privacy laws, yet a federal privacy law remains absent. For AI, similar delays may emerge, and experts caution that partisanship could continue to obstruct the development of cohesive national AI policies.
As regulatory efforts continue both in the U.S. and abroad, technology leaders are advised to monitor evolving legislative landscapes closely. Regardless of the election’s outcome, businesses will need to stay agile, with particular attention to state-level regulations and international compliance standards.