In a discussion of the threats posed by AI systems, Sam Altman, CEO and co-founder of OpenAI, confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, released in March.
Speaking at an event at MIT, Altman was asked about a recent open letter circulating in the tech world that asked labs like OpenAI to pause development of AI systems “more powerful than GPT-4.” The letter highlighted concerns about the safety of future systems but has been criticized by many in the industry, including a number of its signatories. Experts disagree about the nature of the threat posed by AI (is it existential or more mundane?) and about how the industry could “pause” development in the first place.
At MIT, Altman said the letter “missed most of the technical nuance about where we need the pause” and noted that an earlier version claimed OpenAI was currently training GPT-5. “We’re not and won’t be for a while,” Altman said. “So in that sense it was kind of silly.”
However, the fact that OpenAI is not working on GPT-5 doesn’t mean it isn’t extending GPT-4’s capabilities, or, as Altman was keen to stress, considering the safety implications of such work. “We’re doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address that have been completely left out of the letter,” he said.
GPT hype and the misconception of version numbers
Altman’s comments are interesting, but not necessarily because of what they reveal about OpenAI’s future plans. Instead, they point to a key challenge in the AI safety debate: the difficulty of measuring and tracking progress. Altman may say that OpenAI is not currently training GPT-5, but that’s not a particularly meaningful statement.
Part of the confusion can be attributed to what I call the version number misconception: the idea that numbered technical updates reflect clear, linear improvements in capability. It’s a misconception nurtured for years in the consumer tech world, where the numbers assigned to new phones or operating systems aspire to the rigor of strict version control but are really just marketing tools. “Of course the iPhone 35 is better than the iPhone 34,” goes the logic of this system. “The number is bigger, ipso facto the phone is better.”
Because of the overlap between the worlds of consumer technology and artificial intelligence, this same logic is now often applied to systems like OpenAI’s language models. And it isn’t limited to the kind of hucksters who post hyperbolic 🤯 Twitter threads 🤯 predicting that super-intelligent AI will be here in a few years because the numbers keep getting bigger; it also comes from more informed and sophisticated commentators. Since many claims about AI superintelligence are essentially unfalsifiable, these individuals rely on similar rhetoric to make their point. They draw fuzzy graphs with axes labeled “progress” and “time,” draw a line going up and to the right, and present it uncritically as evidence.
This is not to dismiss fears about AI safety or to ignore that these systems are improving rapidly and are not fully under our control. But it is to say that there are good arguments and bad arguments, and just because we’ve given a number to something, be it a new phone or the concept of intelligence, doesn’t mean we have the full measure of it.
Instead, I think the focus in these discussions should be on capabilities: on demonstrations of what these systems can and cannot do and predictions of how this may change over time.
Altman’s confirmation that OpenAI is not currently developing GPT-5, then, will be of little comfort to those concerned about AI safety. The company is still expanding GPT-4’s potential (by connecting it to the internet, for example), and others in the industry are building similarly ambitious tools that let AI systems act on behalf of users. There’s no doubt plenty of work is being done to optimize GPT-4 as well, and OpenAI may release GPT-4.5 first (as it did GPT-3.5), another way version numbers can mislead.
Even if the world’s governments could somehow enforce a ban on new AI development, it’s clear that society has its hands full with the systems currently available. Sure, GPT-5 isn’t coming yet, but does that matter when GPT-4 is still not fully understood?