
OpenAI's Next AI Model Could Launch This Summer, Adding New Capabilities

OpenAI's CEO Sam Altman is over GPT-4, which he says 'kind of sucks.' Good thing his company's next AI is supposed to launch soon.


BY KIT EATON @KITEATON

When you see "AI" or "chatbot" in a news article, it's likely that ChatGPT comes to mind right away. OpenAI's chatbot is perhaps the best-known AI at the moment, which made it all the more surprising when OpenAI's controversial CEO, Sam Altman, suggested that GPT-4, its latest AI model, "kind of sucks."

Altman, who made the remark on a podcast earlier this week, tempered the criticism by comparing it to looking back on early iPhone models: technologically weak next to the latest offerings. He said the next version of OpenAI's model would be a significant leap forward. Pressed on whether GPT-5 would come out this year, he said he didn't know. However, Business Insider just published a story suggesting GPT-5 is on track for a mid-2024 release. According to people who've tested it, it may indeed be much, much better, and come with enterprise-centric skills.

Anonymous CEOs who spoke to Business Insider said they'd seen demos of GPT-5 and that it is "really good" and "materially better" than the company's earlier offerings. That chimes with Altman's comments, of course. The GPT-5 system is said to be still in training, which would also track with a summer release window. OpenAI will likely fold as much up-to-date training data as possible into the model before release, along with lessons learned from GPT-4 and earlier models, while fixing any lingering issues.

The people who saw the GPT-5 demonstration also said it was soon to be "red teamed" -- a fairly standard pre-launch process in which a product is handed to a group of people who deliberately try to mess with it, and even break it, to root out any final vulnerabilities. Given how AI mistakes recently landed products like Google's Gemini and Microsoft's Copilot in hot water, this sounds like a smart move by OpenAI.

Interestingly, the new report also suggested that OpenAI may include an enterprise offering for GPT-5 that would involve AI "agents." We're all familiar with AI chatbots and even generative AI imagery, and it's clear that advice and new material dreamed up in text or image form can be incredibly useful to business users and creative types alike.

But AI agents are much more exciting. Current-gen AIs are arguably passive, output-only systems, spitting out text, imagery, and even video when you prompt them, usually with text-based commands. Agents, by contrast, are smart tools that use AI powers to actually go and do real-world tasks. Bill Gates explained last year how agents could change the world, and recent rumors have hinted that OpenAI has been working on the tech.

The business uses of an AI agent system like this will be obvious to anyone who's been tasked with filling in 100 nearly identical Excel spreadsheets at work, or any similarly mundane office task. With AI agents, you should be able to show an AI tool what you need done and let it take over, like a smart AI "ghost" that briefly takes control of your computer. In theory, AI agents could even handle more complex jobs, like booking flights for upcoming off-site meetings.
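For the technically curious, the core pattern behind an agent is simple: loop over a task list, decide what to do next, then run that action with a tool. Here is a toy Python sketch of that loop under loose assumptions; the task names and the rule-based "decide" step are hypothetical stand-ins for what would really be a call to a model like GPT-5.

    # Toy agent loop: observe a task, decide on an action, act with a tool.
    # "decide" is a rule-based stand-in for what would really be a model call.

    from dataclasses import dataclass

    @dataclass
    class Task:
        description: str
        done: bool = False

    def decide(task: Task) -> str:
        # A real agent would ask an LLM to pick the next action here.
        return "fill_spreadsheet" if "spreadsheet" in task.description else "escalate"

    def act(action: str, task: Task) -> None:
        # Each action maps to a concrete tool the agent is allowed to use.
        if action == "fill_spreadsheet":
            print(f"Filling in: {task.description}")
        else:
            print(f"Escalating to a person: {task.description}")
        task.done = True

    def run_agent(tasks: list[Task]) -> None:
        # The core loop: keep deciding and acting until every task is done.
        for task in tasks:
            while not task.done:
                act(decide(task), task)

    run_agent([Task("fill Q3 expenses spreadsheet"), Task("book off-site flights")])

The interesting design question is how much of that loop the model controls: the narrower the set of allowed actions, the safer the "ghost" at your keyboard.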

Popping the hype bubble briefly, it's also worth remembering that AI tech isn't perfect, so even though GPT-5 may be materially better than GPT-4, it will probably still face some of the legal, ethical, and misinformation challenges that earlier large language model AIs have faced.

But at least one classic AI problem may not be around for long, according to Nvidia boss Jensen Huang. Speaking to journalists after the launch of the company's super-powerful new AI processing chips, Huang said he thought AI hallucinations -- when an AI offers up false, imaginary, or bizarre information a user didn't prompt for -- are a solvable problem, perhaps by training AIs to properly do their research before answering a query, TechCrunch reports.

Hallucinations are baked into the way current AI tech works: models extrapolate from the training data they've seen, and the answer an AI mathematically guesses at may not be rooted in reality at all. The notion that hallucinations are fixable is a reassuring thought, and one that may quell even some of Altman's worries -- he has been stridently calling for global regulation of AI systems for some time.
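Huang's "do your research first" idea roughly means grounding answers in looked-up facts rather than free extrapolation. Here is a toy Python sketch of that pattern; the fact store and keyword matching are hypothetical stand-ins for a real retrieval system and model.

    # Toy "retrieve before you answer" sketch: only answer from stored facts,
    # and decline rather than guess when nothing matches. The fact store and
    # keyword matching below are hypothetical stand-ins for real retrieval.

    FACTS = {
        "openai ceo": "Sam Altman is the CEO of OpenAI.",
        "gpt-4 release": "GPT-4 was released in March 2023.",
    }

    def grounded_answer(query: str) -> str:
        q = query.lower()
        for keywords, fact in FACTS.items():
            # Answer only when every keyword for a fact appears in the query.
            if all(word in q for word in keywords.split()):
                return fact
        # No supporting fact found: refuse instead of hallucinating.
        return "I don't have a source for that, so I won't guess."

    print(grounded_answer("Who is the OpenAI CEO?"))   # grounded answer
    print(grounded_answer("When will GPT-6 launch?"))  # safe refusal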

Photo Credit: Getty Images.
