High-Profile AI Leaders Warn of “Risk of Extinction” from AI
Leading AI scientists and industry leaders warn of AI as a 'societal-level' risk, calling for global action alongside other existential threats like pandemics and nuclear war.
OpenAI CEO Sam Altman testifies before the US Senate. Photo credit: Getty
Top AI scientists and industry leaders have issued a joint statement, warning of AI as a 'societal-level' risk akin to pandemics and nuclear war.
This united front, involving influential figures from OpenAI, Google DeepMind, and others, highlights the significance and urgency of addressing potential AI-related threats to humanity.
The call for global cooperation and regulation raises unresolved questions about the future direction of AI technology, including complex issues of governance and ethics.
May 30, 2023
Prominent AI scientists and other industry leaders have released a statement warning the world that AI is a “societal-level” risk that poses a “risk of extinction.”
The 22-word statement, available in full on the Center for AI Safety’s website, states:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
While past statements, including an open letter calling for a six-month pause on the development of more advanced AI systems, have attracted notable voices such as Elon Musk, this statement stands out for the breadth of support it gathered prior to its official release.
Its notable AI industry signatories include Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google’s DeepMind unit; Dario Amodei, CEO of Anthropic; and Emad Mostaque, CEO of Stability AI.
A number of prominent AI scientists have also signed the statement, most notably Geoffrey Hinton and Yoshua Bengio, two researchers who won the Turing Award for their work on neural networks. Hinton recently left Google and warned in an interview that the progress of AI systems was “scary” and that the public would “not be able to know what is true anymore” as AI-created content rapidly flooded society.
Other prominent signatories include senior researchers from OpenAI and DeepMind, as well as academics from leading research universities including Stanford, Princeton, UC Berkeley, and the University of Cambridge.
The statement comes as AI leaders increasingly call for regulation, highlighting the threat posed by unrestricted AI systems amid rapid progress. In his testimony before the US Senate, OpenAI CEO Sam Altman notably called for a US regulatory body to license AI models, as well as global cooperation on the dangers posed by AI.
Last week, OpenAI followed up with a blog post outlining its proposal for the governance of superintelligence, calling for an international body to ensure “the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.” OpenAI suggested that an organization like the IAEA (International Atomic Energy Agency) could eventually be needed to ensure global cooperation.
As AI leaders unite in calling for action on the risks posed by AI systems, questions remain about how, and whether, regulation is possible. Open-source models, which are rapidly advancing in capability, may be challenging if not impossible to regulate as training costs decrease and models become more powerful. And some nations may refuse to sign on to any international effort, leaving gaps in governance and cooperation.
Regardless, the signatories of the statement seem to feel that they are taking an important step forward. “We got (almost) the whole AI crew together,” exclaimed Stability AI CEO Emad Mostaque. Explaining the purpose behind the statement, he added, “our focus is on inputs & open-based resilience - healthier free range, organic models.”