Experts Warn of AI Risks: Global Efforts to Regulate Advancements and Ensure Safety

According to a panel of leading experts, including two pioneers of AI often referred to as the “godfathers” of the field, the world is not adequately prepared for advancements in artificial intelligence. They criticize the lack of substantial regulatory progress by governments. 

The experts argue that the move towards autonomous systems by technology companies could significantly enhance the influence of AI, necessitating regulatory frameworks that activate when AI systems achieve specific capabilities.

This call to action is endorsed by 25 specialists, among them Geoffrey Hinton and Yoshua Bengio, who have each received the ACM Turing Award, often described as the Nobel Prize of computing, for their contributions to AI.

This alert is being raised as political figures, industry specialists, and tech leaders prepare to gather for a two-day summit in Seoul.

The academic paper, titled “Managing extreme AI risks amid rapid progress”, advocates for government safety frameworks that impose stricter requirements as AI systems grow more capable.

Published in the journal Science, the paper argues that society’s response, despite promising initial steps, falls short of the rapid, transformative progress many experts anticipate. It notes that AI safety research is lagging and that current governance initiatives lack the mechanisms and institutions needed to prevent misuse and recklessness, barely addressing autonomous systems.

At last year’s global AI safety summit at Bletchley Park in the UK, major tech companies including Google, Microsoft, and Mark Zuckerberg’s Meta agreed to voluntary testing of their AI models. Meanwhile, the EU has passed its AI Act, and the US has introduced new AI safety requirements through a White House executive order.

The paper highlights that advanced AI systems, capable of performing tasks typically associated with intelligent beings, have the potential to cure diseases and improve living standards. However, it also cautions that these technologies could undermine social stability and facilitate automated warfare. The paper further emphasizes that the industry’s shift towards developing autonomous systems presents an even more significant threat.

Additionally, the paper states that companies are increasingly focusing on creating generalist AI systems capable of autonomous actions and goal pursuit. The authors caution that advancements in these systems’ capabilities and autonomy could significantly amplify AI’s impact, potentially leading to widespread social harm, malicious uses, and an irreversible loss of human control over autonomous AI systems. They warn that if AI development continues unchecked, it could result in the marginalization or even extinction of humanity.

Two tech companies recently offered a preview of what is coming. OpenAI’s GPT-4o demonstrated its ability to hold real-time voice conversations, while Google’s Project Astra showcased features such as identifying locations through a smartphone camera, reading and explaining computer code, and generating alliterative sentences.

Among the co-authors of the proposals are Yuval Noah Harari, bestselling author of Sapiens; the late Daniel Kahneman, Nobel laureate in economics; Sheila McIlraith, an AI professor at the University of Toronto; and Dawn Song, a professor at the University of California, Berkeley. The paper is a peer-reviewed revision of the initial proposals presented before the Bletchley meeting.

A UK government spokesperson responded, disagreeing with the assessment. The spokesperson highlighted that the upcoming AI Seoul summit would be crucial in advancing the initiatives set forth at the Bletchley Park summit. 

During the summit, several companies are expected to update world leaders on their progress in fulfilling the commitments made at Bletchley to ensure the safety of their AI models.
