
OpenAI to drop confusing model naming with release of GPT-5

OpenAI will begin phasing out its current system of naming foundation models, replacing the existing “GPT” numerical branding with a unified identity under the forthcoming GPT-5 release.

The shift, announced during a recent Reddit AMA with core Codex and research team members, reflects OpenAI’s intention to simplify product interactions and reduce ambiguity between model capabilities and usage surfaces.

Codex, the company’s AI-powered coding assistant, currently functions via two primary deployment paths: the ChatGPT interface and the Codex CLI. Models including codex-1 and codex-mini underpin these offerings.

According to OpenAI’s VP of Research Jerry Tworek, GPT-5 aims to consolidate such variations, allowing access to capabilities without switching between model versions or interfaces. Tworek stated,

“GPT-5 is our next foundational model that is meant to just make everything our models can currently do better and with less model switching.”

New OpenAI tools for coding, memory, and system operation

The announcement coincides with a broader convergence of OpenAI's tools, including Codex, Operator, memory systems, and deep research functionality, into a unified agentic framework. This architecture is designed to allow models to generate code, execute it, and validate it in remote cloud sandboxes.
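The generate-execute-validate loop described above can be sketched in miniature. This is an illustrative stand-in only: the model call is stubbed, and nothing here reflects OpenAI's actual sandbox internals.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def generate_code(task: str) -> str:
    # Stub standing in for a model call; a real agent would query the model here.
    return "print(sum(range(10)))\n"

def execute_in_sandbox(code: str, timeout: int = 10) -> subprocess.CompletedProcess:
    # Run the generated code in a separate interpreter process with a time cap,
    # loosely mirroring execution in an isolated environment.
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        return subprocess.run(
            [sys.executable, str(script)],
            capture_output=True, text=True, timeout=timeout, cwd=workdir,
        )

def validate(result: subprocess.CompletedProcess, expected: str) -> bool:
    # Validation here is a simple output check; real agents run tests instead.
    return result.returncode == 0 and result.stdout.strip() == expected

result = execute_in_sandbox(generate_code("sum the first ten integers"))
print(validate(result, "45"))  # True
```

The separation into three functions mirrors the pipeline the article describes: code generation, isolated execution, and validation are distinct steps that an agent can retry independently.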

Multiple OpenAI researchers emphasized that model differentiation through numeric suffixes no longer reflects how users interact with capabilities, especially with ChatGPT agents executing multi-step coding tasks asynchronously.

The retirement of model suffixes is set against the backdrop of OpenAI's increasing focus on agent behavior over static model inference. Instead of branding releases with identifiers like GPT-4 or GPT-4o-mini, the system will increasingly identify models by function, such as Codex for developer agents or Operator for local system interactions.

According to Andrey Mishchenko, this transition is also practical: codex-1 has been optimized for ChatGPT’s execution environment, making it unsuitable for broader API use in its current form, though the company is working toward standardizing agents for API deployment.

While GPT-4o was publicly released with limited variants, internal benchmarks suggest the next generation will prioritize breadth and longevity over incremental numerical improvements. Several researchers noted that Codex’s real-world performance has already approached or exceeded expectations on benchmarks like SWE-bench, even as updates like codex-1-pro remain unreleased.

The underlying model convergence is meant to address fragmentation across developer-facing interfaces, which has generated confusion around which version is most appropriate in various contexts.

This simplification comes as OpenAI expands its integration strategy across development environments. Future support is expected for Git providers beyond GitHub Cloud and compatibility with project management systems and communication tools.

Codex team member Hanson Wang confirmed that deployment through CI pipelines and local infrastructure is already feasible using the CLI. Codex agents now operate in isolated containers with defined lifespans, allowing for task execution lasting up to an hour per job, according to Joshua Ma.
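A per-job time cap of the kind Ma describes can be approximated locally with a bounded subprocess. The sketch below is generic and hypothetical, not OpenAI's container setup; the one-hour constant simply echoes the figure quoted above.

```python
import subprocess
import sys

JOB_TIMEOUT_SECONDS = 3600  # roughly the one-hour per-job cap mentioned above

def run_job(code: str, timeout: int = JOB_TIMEOUT_SECONDS) -> str:
    # Execute the job in a child interpreter; terminate it if the cap is exceeded.
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout.strip()
    except subprocess.TimeoutExpired:
        return "job exceeded its lifespan and was terminated"

print(run_job("print('done')"))                           # done
print(run_job("import time; time.sleep(5)", timeout=1))   # terminated
```

Enforcing the lifespan at the process boundary, rather than inside the job's own code, is what makes the cap reliable even when the job hangs.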

OpenAI model expansion

OpenAI’s language models have historically been labeled based on size or chronological development, such as GPT-3, GPT-3.5, GPT-4, and GPT-4o. However, GPT-4.1 and GPT-4.5 sit in some respects ahead of, and in others confusingly behind, the latest model, GPT-4o.

As the underlying models begin executing more tasks directly, including reading repositories, running tests, and formatting commits, the importance of versioning has diminished in favor of capability-based access. This shift mirrors internal usage patterns, where developers rely more on task delegation than model version selection.
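Capability-based access, as opposed to choosing among model versions, can be illustrated with a small dispatch layer. All names and structure here are hypothetical, invented purely to show the idea of callers naming a capability while the underlying model stays opaque.

```python
from typing import Callable

# Hypothetical registry mapping capabilities, not model versions, to handlers.
CAPABILITIES: dict[str, Callable[[str], str]] = {}

def capability(name: str):
    # Decorator that registers a handler under a capability name.
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        CAPABILITIES[name] = fn
        return fn
    return register

@capability("coding")
def coding_agent(task: str) -> str:
    return f"coding agent handles: {task}"

@capability("system")
def system_agent(task: str) -> str:
    return f"system agent handles: {task}"

def delegate(cap: str, task: str) -> str:
    # Callers name the capability they need; which model serves it is opaque.
    return CAPABILITIES[cap](task)

print(delegate("coding", "run tests"))  # coding agent handles: run tests
```

This mirrors the delegation pattern the article attributes to internal usage: the caller asks for "coding" or "system" work rather than selecting GPT-4o versus GPT-4.1 by name.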

Tworek, responding to a query about whether Codex and Operator would eventually merge to handle tasks including frontend UI validation and system actions, replied,

“We already have a product surface that can do things on your computer—it’s called Operator… eventually we want those tools to feel like one thing.”

Codex itself was described as a project born from internal frustration at under-utilizing OpenAI’s own models in daily development, a sentiment echoed by several team members during the session.

The decision to sunset model versioning also reflects a push toward modularity in OpenAI’s deployment stack. Team and Enterprise users will retain strict data controls, with Codex content excluded from model training. Meanwhile, Pro and Plus users are given clear opt-in pathways. As Codex agents expand beyond the ChatGPT UI, OpenAI is working toward new usage tiers and a more flexible pricing model that may allow consumption-based plans outside API integrations.

OpenAI did not provide a definitive timeline for when GPT-5 or the complete deprecation of existing model names will occur, though internal messaging and interface design changes are expected to accompany the release. For now, users interacting with Codex through ChatGPT or CLI can expect performance enhancements as model capabilities evolve under the streamlined identity of GPT-5.

The post OpenAI to drop confusing model naming with release of GPT-5 appeared first on CryptoSlate.

