AI poses ‘risk of extinction’, tech CEOs warn
Artificial intelligence poses a “risk of extinction” that calls for global action, leading computer scientists and technologists have warned.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” a group of AI experts and other high-profile figures said in a brief statement released on Tuesday by the Center for AI Safety, a San Francisco-based research and advocacy group.

The signatories include technology experts such as Sam Altman, chief executive of OpenAI, Geoffrey Hinton, known as the “godfather of AI”, and Audrey Tang, Taiwan’s digital minister, as well as other notable figures including the neuroscientist Sam Harris and the musician Grimes.
The warning follows an open letter signed by Elon Musk and other high-profile figures in March that called for a six-month pause on the development of AI more advanced than OpenAI’s GPT-4.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The rapid advancement of AI has raised concerns about potential negative consequences for society ranging from mass job losses and copyright infringement to the spread of misinformation and political instability. Some experts have raised fears that humanity could one day lose control of the technology.
The European Union has said it hopes to pass legislation by the end of the year that would classify AI systems into four risk-based categories.
China has also taken steps to regulate AI, passing legislation governing deepfakes and requiring companies to register their algorithms with regulators.

Beijing has further proposed strict rules to restrict politically sensitive content and to require developers to obtain approval before releasing generative AI-based technology.