Warning from tech leaders including ChatGPT creator
More than 350 of the world’s most distinguished experts in artificial intelligence, including the creator of ChatGPT, have warned of the possibility that the technology could lead to the extinction of humanity.
In a joint statement, backed by the chief executives of the leading AI companies, they said that mitigating this risk “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
Rishi Sunak has declared that the UK will be at the forefront of efforts to develop AI responsibly, promising that the technology would be implemented “safely and securely with guard rails”.
Many experts have expressed concern about the risk of models such as ChatGPT being used to spread misinformation and enable cybercrime, as well as causing society-wide disruption to jobs.
The statement, co-ordinated by the US Center for AI Safety (CAIS), acknowledged these worries, but said that we should also discuss more serious but less likely threats, including the potential that accelerating AI could lead to the collapse of civilisation.
Some computer scientists fear that a superintelligent AI with interests misaligned to those of humans could supplant, or unwittingly destroy, us. Others worry that overreliance on systems we do not understand leaves us in catastrophic danger if they go wrong.
Dan Hendrycks, director of the CAIS, said that the statement was a way of “coming out” for many researchers. “People were much too afraid to speak up earlier,” he said. “This establishes it as an intellectually credible concern.”
Among the signatories are Sam Altman, chief executive of OpenAI, which made ChatGPT; Geoffrey Hinton, the University of Toronto academic often described as the godfather of AI; and Demis Hassabis, chief executive of Google DeepMind, which designed programs to defeat the best players at Go and chess. It was also signed by the heads of Anthropic, Inflection AI and Stability AI, and computer science professors from Cambridge, Oxford, Harvard, Yale and Stanford.
Other academics dismissed the statement as unhelpful. Dr Mhairi Aitken, ethics research fellow at the Alan Turing Institute, called it a “distraction” from more pressing threats from AI. “The narrative of super-intelligent AI is a familiar plotline from countless Hollywood blockbuster movies, and that familiarity makes it compelling, but it is nonetheless false,” she said.
Dr Carissa Véliz, from the Institute for Ethics in AI at Oxford University, was suspicious of the motives of some signatories. “I worry that the emphasis on existential threat is distracting away from more pressing issues, like the erosion or demise of democracy, that CEOs of certain companies do not want to face,” she said. “AI can create huge destruction short of existential risk.”
Hendrycks said that while those backing the statement differed in their view of the severity of the risk, all believed that there was a potential for catastrophe if AI was handled badly.
“Things are moving extremely quickly,” he said. “These programs are continuously violating our expectations. We’re currently in an AI arms race in industry, where companies have concerns about safety but they’re forced to prioritise making them more powerful more quickly.”
Concerns about the most severe threats from AI range from the possibility of it being used by humans to design bioweapons, to AI itself engineering the collapse of civilisation.
Hendrycks, who has a PhD from the University of California, Berkeley, said his worry was that humans might gradually lose control until our values were no longer aligned with those of a vastly superior intelligence.
“We’re going to be rapidly automating more and more, giving more and more decision-making control to systems. If corporations don’t do that, they get outcompeted. What happens when you have AI competing with each other in a very intense timescale is that you end up getting selection of the fittest.
“Evolution doesn’t select for things that have the nicest characteristics.” His fear was that we “become a second-class species”.
Lord Rees of Ludlow, the astronomer royal and co-founder of Cambridge University’s Centre for the Study of Existential Risk, also signed the statement.
“I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems. Large-scale failures of power grids, internet and so forth can cascade into catastrophic societal breakdown,” he said.
“These potentially globe-spanning networks need regulation, just as new drugs must be rigorously tested. And regulation is a special challenge for systems developed by multinational companies, which can bypass regulations just as they can evade a fair level of taxation.”
Hassabis has previously said that he would be more worried about a world without the prospect of AI. Yesterday DeepMind said that its goal was to secure the technology’s benefits while mitigating its risks. “Artificial intelligence will have a transformative impact on society and it’s essential that we recognise the potential opportunities, risks and downsides,” a spokesman said.
Karine Jean-Pierre, the White House press secretary, said: “The president and the vice-president have been very clear on this. It is one of the most powerful technologies that we see currently in our time, but in order to seize the opportunities it presents we must first mitigate its risks, and that’s what we’re focusing on here.”
Source: The Times