A group of leading AI researchers, engineers and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.
The 22-word statement, kept brief to make it as broadly acceptable as possible, reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement, published by the Center for AI Safety, a San Francisco-based nonprofit, is co-signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio, two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI. At the time of writing, the third winner, Yann LeCun, now chief AI scientist at Facebook parent company Meta, has not signed.
The statement is the latest high-profile intervention in the complicated and controversial debate over AI safety. Earlier this year, an open letter signed by some of the same people backing the 22-word warning called for a six-month “pause” in AI development. That letter was criticized on multiple levels: some experts thought it overstated the risk posed by AI, while others agreed the risk was real but objected to the letter’s proposed remedy.
Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times that the brevity of today’s statement, which suggests no specific ways to mitigate the threat posed by AI, was intended to avoid such disagreement. “We didn’t want to push for a very large menu of 30 potential interventions,” said Hendrycks. “When that happens, it dilutes the message.”
Hendrycks described the message as a “coming out” for industry figures worried about AI risk. “There’s a common misconception, even in the AI community, that there are only a handful of doomers,” Hendrycks told The Times. “But, in fact, many people privately would express concerns about these things.”
The broad contours of this debate are familiar, but the details are often interminable, based on hypothetical scenarios in which AI systems rapidly increase in capability and no longer function safely. Many experts point to swift improvements in systems such as large language models as evidence of projected future gains in intelligence. Once AI systems reach a certain level of sophistication, they say, it may become impossible to control their actions.
Others doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks such as driving a car. Despite years of effort and billions of dollars of investment in this area of research, fully self-driving cars are still far from a reality. If AI can’t even meet this one challenge, skeptics ask, what chance does the technology have of matching every other human accomplishment in the years to come?
Meanwhile, both AI risk advocates and skeptics agree that, even without any improvement in their capabilities, AI systems pose a number of threats today, from enabling mass surveillance to powering faulty “predictive policing” algorithms and easing the creation of misinformation and disinformation.