OpenAI says it could ‘stop working’ in the EU if it can’t meet future regulations

by Janice Allen

Sam Altman, CEO of OpenAI, has warned that the company could pull its services from the European market in response to AI regulations being developed by the EU.

Speaking to reporters after a talk in London, Altman said he had “a lot of concerns” about the EU AI Act, which lawmakers are currently finalizing. The scope of the law has expanded in recent months to include new obligations for makers of so-called “foundation models” — large-scale AI systems that power services like OpenAI’s ChatGPT and DALL-E.

“The details really matter,” Altman said, as reported by the Financial Times. “We will try to comply, but if we cannot comply, we will cease operating.”

In comments reported by Time, Altman said the concern was that systems like ChatGPT would be designated “high risk” under the EU legislation. That designation would mean OpenAI has to meet a number of safety and transparency requirements. “Either we’ll be able to solve those requirements or not,” Altman said. “[T]here are technical limits to what’s possible.”

Beyond the technical challenges, disclosures required under the EU AI Act would also pose potential business threats to OpenAI. A provision in the current draft requires makers of foundation models to disclose details of their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”) and to provide “summaries of copyrighted data used for training.”

OpenAI used to share this kind of information but has stopped as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

In addition to the business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on large volumes of data scraped from the web, much of it copyrighted. When companies disclose these data sources, they open themselves to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock photo maker Getty Images for using copyrighted data to train its AI image generator.

Altman’s recent comments help paint a more nuanced picture of the company’s appetite for regulation. Altman has told US politicians that regulation should mostly apply to future, more powerful AI systems. By contrast, the EU AI Act is focused much more on the current capabilities of AI software.
