AI is entering an era of corporate control

by Janice Allen

An annual report on the progress of AI highlights the increasing dominance of industry over academia and government in developing, deploying, and safeguarding AI applications.

The 2023 AI Index — compiled by researchers from Stanford University and AI companies including Google, Anthropic, and Hugging Face — suggests that the world of AI is entering a new phase of development. Over the past year, a host of AI tools have gone mainstream, from chatbots like ChatGPT to image-generating software like Midjourney. But decisions about how to deploy this technology and how to balance risk and opportunity now rest firmly in the hands of corporate players.

Advanced AI requires huge resources, putting research beyond the reach of academia

The AI Index states that for years academia led the way in developing advanced AI systems, but industry has now firmly taken over. “In 2022, there were 32 major machine learning models produced by industry, compared to only three produced by academia,” it says. This is mainly due to the growing resources – data, personnel, and computational power – required to create such applications.

For example, in 2019, OpenAI created GPT-2, an early large language model, or LLM – the same class of application that powers ChatGPT and Microsoft’s Bing chatbot. GPT-2 cost about $50,000 to train and contains 1.5 billion parameters (a metric that tracks a model’s size and relative sophistication). Jump forward to 2022, and Google created its own state-of-the-art LLM, called PaLM. It is estimated to have cost $8 million to train and contains 540 billion parameters, making it 360 times larger than GPT-2 and 160 times more expensive to train.
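
For readers who want to check the comparison, here is a minimal Python sketch that reproduces those ratios from the figures quoted above; the parameter counts and dollar amounts are the report’s estimates, not exact values:

```python
# Figures quoted in the AI Index report (estimates, not exact values).
gpt2_params = 1.5e9        # GPT-2 (2019): 1.5 billion parameters
gpt2_cost_usd = 50_000     # roughly $50,000 to train

palm_params = 540e9        # PaLM (2022): 540 billion parameters
palm_cost_usd = 8_000_000  # estimated $8 million to train

print(f"PaLM is {palm_params / gpt2_params:.0f}x larger than GPT-2")          # -> 360x
print(f"PaLM cost {palm_cost_usd / gpt2_cost_usd:.0f}x more to train")        # -> 160x
```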

The growing resource requirements of AI development are shifting the balance of power decisively towards corporate players. Many experts in the AI world worry that corporate incentives will also lead to dangerous outcomes, as companies rush to release products and sideline safety concerns in an effort to outpace rivals. This is one of the reasons many experts are currently calling for a slowdown or even a pause in AI development, as in the open letter signed last week by Elon Musk, among others.

As AI systems become more widely deployed, the number of abuses has skyrocketed

As AI tools become more widespread, it is no surprise that the number of errors and malicious use cases has also increased; by itself, this is not an indication of inadequate safety work. But other evidence does suggest a connection, such as the trend of companies like Microsoft and Google cutting their AI safety and ethics teams.

The report does note that interest in AI regulation from legislators and policymakers is growing. An analysis of legislative records in 127 countries found that the number of bills containing the phrase “artificial intelligence” that were passed into law grew from just one in 2016 to 37 in 2022. In the US, scrutiny is also increasing at the state level: five AI-related bills were proposed in 2015, compared with 60 in 2022. Such increased interest could act as a counterbalance to corporate self-regulation.

The AI Index report covers much more than this, however. You can read it in full here.
