Top tech executives and scientists warn of AI’s potential to cause extinction

By Mayank Chhaya

In a remarkable turn of events, hundreds of technology experts, academics and others have warned that Artificial Intelligence (AI) poses an extinction-level threat to human society on a par with pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the Center for AI Safety (CAIS) said in a statement signed by perhaps the widest range of noteworthy names yet, including many directly involved in AI research. These names include Geoffrey Hinton, often described as the godfather of AI, who left Google citing the existential dangers of the rapidly spreading technology; Demis Hassabis, CEO of Google DeepMind; Sam Altman, CEO of OpenAI; and Dario Amodei, CEO of Anthropic.

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” the Center said.

The CAIS says on its website, “AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.”

In listing eight major AI risks, the center says, “AI systems are rapidly becoming more capable. AI models can generate text, images, and video that are difficult to distinguish from human-created content. While AI has many beneficial applications, it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks. Even as AI systems are used with human involvement, AI agents are increasingly able to act autonomously to cause harm. When AI becomes more advanced, it could eventually pose catastrophic or existential risks.”

The eight risks are weaponization, misinformation, proxy gaming, enfeeblement, value lock-in, emergent goals, deception and power-seeking behavior.

Under weaponization, the CAIS says, “Malicious actors could repurpose AI to be highly destructive, presenting an existential risk in and of itself and increasing the probability of political destabilization. For example, deep reinforcement learning methods have been applied to aerial combat, and machine learning drug-discovery tools could be used to build chemical weapons.”

“A deluge of AI-generated misinformation and persuasive content could make society less equipped to handle important challenges of our time,” it says about misinformation.

On proxy gaming, it says, “Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.”

“Enfeeblement can occur if important tasks are increasingly delegated to machines; in this situation, humanity loses the ability to self-govern and becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E,” it says.

Under value lock-in, the center thinks, “Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.”

Under emergent goals, it says, “Models demonstrate unexpected, qualitatively different behavior as they become more competent. The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.”

Deception is explained thus: “We want to understand what powerful AI systems are doing and why they are doing what they are doing. One way to accomplish this is to have the systems themselves accurately report this information. This may be non-trivial however since being deceptive is useful for accomplishing a variety of goals.”

“Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control,” the center says under power-seeking behavior.

In the last couple of years, AI has grown dramatically in its various generative forms and now appears poised to cross over into Artificial General Intelligence, or AGI. There is already intense debate about whether a time may come when it can replicate itself and even begin to acquire sentience.
