
According to the BBC*, experts – including the heads of OpenAI and Google DeepMind – have warned that artificial intelligence could lead to the extinction of humanity.
Dozens have supported a statement published on the webpage of the Centre for AI Safety. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it reads. But others say the fears are overblown. Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the statement. The Centre for AI Safety website suggests a number of possible disaster scenarios:
- AIs could be weaponised – for example, drug-discovery tools could be used to build chemical weapons
- AI-generated misinformation could destabilise society and “undermine collective decision-making”
- The power of AI could become increasingly concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”
- Enfeeblement, where humans become dependent on AI “similar to the scenario portrayed in the film Wall-E”
Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety’s call. Yoshua Bengio, professor of computer science at the University of Montreal, also signed. Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the “godfathers of AI” for their groundbreaking work in the field – for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science. But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that “the most common reaction by AI researchers to these prophecies of doom is face palming”.
Although extinction is currently a relatively remote threat, the statement underlines that the full scope of what AI will be able to do is still largely unknown. With bad actors increasingly using AI to facilitate their attacks – and speculation that AI could one day act maliciously on its own – there is a risk that attacks will keep outpacing standard defences and breaking through existing protections. Human supervision remains essential to stop individual false decisions from multiplying into an avalanche of them. At the same time, fighting fire with fire – deploying AI in defensive countermeasures – is a vital way to keep those risks in check and restore the balance.
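As a minimal sketch of what that human supervision might look like in practice, the hypothetical triage step below only lets an AI detection model act automatically on high-confidence verdicts and routes everything else to a human analyst. All names, the `Verdict` structure and the threshold are illustrative assumptions for this post, not any specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """A hypothetical verdict emitted by an AI detection model."""
    alert_id: str
    is_malicious: bool
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def triage(verdicts, auto_threshold=0.95):
    """Act automatically only on high-confidence verdicts; queue the rest for a human."""
    auto_actions, human_review = [], []
    for v in verdicts:
        if v.confidence >= auto_threshold:
            auto_actions.append(v)       # safe to act on without review
        else:
            human_review.append(v)       # an analyst confirms before anything is blocked
    return auto_actions, human_review

# Example with three made-up model verdicts
verdicts = [
    Verdict("a1", True, 0.99),
    Verdict("a2", True, 0.62),
    Verdict("a3", False, 0.97),
]
auto, review = triage(verdicts)
print(f"{len(auto)} acted on automatically, {len(review)} sent to an analyst")
```

The design point is simply that the automation boundary is explicit: low-confidence decisions are escalated rather than acted on, so a single model error cannot silently cascade.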
Governments were very slow to regulate social media and cryptocurrencies, so it is positive that the AI discussion is now being both held and heard. Although impressive, ChatGPT and other large language models are still largely in their infancy. Regulation at this stage is a vital part of the process and can help guide safer use of the technology for future generations.
*ESET does not bear any responsibility for the accuracy of this information.
