OpenAI Is Working on New Frontier Model to Succeed GPT-4

OpenAI has begun training a new flagship artificial intelligence model to succeed GPT-4, the technology currently associated with ChatGPT. The new model — which some are already calling GPT-5, although OpenAI hasn’t yet shared its name — is expected to bring the next level of capabilities as the company works toward artificial general intelligence (AGI), intelligence equal to or surpassing human cognitive abilities. The company also announced it has formed a new Safety and Security Committee, two weeks after dissolving the old one upon the departure of OpenAI co-founder and Chief Scientist Ilya Sutskever.

“OpenAI is aiming to move AI technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity,” writes The New York Times.

Although experts disagree on when, or if, AGI will be achievable, major players such as Google, IBM, Anthropic, Microsoft and Meta have said they are pursuing that goal. This past week, Elon Musk’s xAI announced it had raised $6 billion to build advanced AI.

Among companies that have been building AI for more than a decade, NYT detects “a noticeable leap roughly every two to three years.” A multimodal version called GPT-4o (the “o” stands for “omni”), which can generate and respond to sound and images — communicating in what NYT calls “a highly conversational voice” — was unveiled this month.

Training multimodal AI models can take “months or even years,” according to NYT, which surmises “that OpenAI’s next model will not arrive for another nine months to a year or more.”

Ars Technica says “critics, like Meta chief AI scientist Yann LeCun, think [OpenAI] is nowhere near developing AGI,” so the dust-up over the dissolution of the previous committee and the need for a new one may be overblown.

AGI or not, the new model will likely be a “frontier model,” the AI industry’s term for a new system that significantly pushes current boundaries, writes Ars Technica.

To address global concerns about the effect such platforms may have on humanity, OpenAI has handed oversight of safety to a new body.

The new Safety and Security Committee is led by “OpenAI directors Bret Taylor (chair), Adam D’Angelo, Nicole Seligman and Sam Altman (CEO),” who “will be responsible for making recommendations about AI safety to the full company board of directors,” explains Ars Technica.

The committee’s formation follows a May 21 “safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures,” Ars Technica reports.
