A number of well-known AI researchers – and Elon Musk – have signed an open letter calling on AI labs around the world to pause the development of large-scale AI systems, citing fears of the “profound risks to society and humanity” they say this software poses.
The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently engaged in an “out of control race” to develop and deploy machine learning systems “that no one – not even their makers – can understand, predict, or reliably control.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter reads. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be seen here, although new names should be treated with caution, as there have been reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, a person partly responsible for the current race dynamics in AI).
The letter is unlikely to have any effect on the current climate in AI research, in which tech companies like Google and Microsoft scramble to deploy new products, often sidelining previously raised concerns about safety and ethics. But it is a sign of growing opposition to this “ship now, fix later” approach – opposition that could find its way into the political realm for consideration by actual legislators.
As noted in the letter, even OpenAI itself has acknowledged the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories say that this time has now come.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
You can read the letter in full here.