OpenAI to drop confusing model naming with release of GPT-5

OpenAI will begin phasing out its current system of naming foundation models, replacing the existing “GPT” numerical branding with a unified identity under the forthcoming GPT-5 release.

The shift, announced during a recent Reddit AMA with core Codex and research team members, reflects OpenAI’s intention to simplify product interactions and reduce ambiguity between model capabilities and usage surfaces.

Codex, the company’s AI-powered coding assistant, currently functions via two primary deployment paths: the ChatGPT interface and the Codex CLI. Models including codex-1 and codex-mini underpin these offerings.
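
For developers experimenting outside ChatGPT, the smaller Codex variant is the one most likely to be reachable over the API. The sketch below is a minimal illustration using OpenAI’s Python SDK; the model identifier codex-mini-latest is an assumption used for illustration, and the names actually enabled on a given account may differ.

# Minimal sketch of calling a Codex-family model through OpenAI's Python SDK.
# The model name "codex-mini-latest" is an assumption for illustration;
# check the models list for the identifiers enabled on your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="codex-mini-latest",
    input="Write a Python function that checks whether a string is a palindrome.",
)
print(response.output_text)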

According to OpenAI’s VP of Research Jerry Tworek, GPT-5 aims to consolidate such variations, allowing access to capabilities without switching between model versions or interfaces. Tworek stated,

“GPT-5 is our next foundational model that is meant to just make everything our models can currently do better and with less model switching.”

New OpenAI tools for coding, memory, and system operation

The announcement coincides with a broader convergence of OpenAI’s tools, including Codex, Operator, memory systems, and deep research functionality, into a unified agentic framework. This architecture is designed to allow models to generate code, execute it, and validate it in remote cloud sandboxes.
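
OpenAI has not published the internals of those sandboxes, but the generate, execute, validate loop itself is easy to picture. The sketch below is a simplified local stand-in rather than OpenAI’s infrastructure: it assumes hypothetical generated_code and test_code strings, drops them into a throwaway directory, and treats a passing pytest run as validation.

# Simplified local stand-in for a generate -> execute -> validate loop.
# OpenAI's actual sandboxes are remote and containerized; this only mimics
# the isolation with a temporary directory and a pytest pass/fail check.
import subprocess
import tempfile
from pathlib import Path

def validate_generated_code(generated_code: str, test_code: str, timeout_s: int = 120) -> bool:
    """Return True if the model-generated code passes its tests."""
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "solution.py").write_text(generated_code)
        Path(workdir, "test_solution.py").write_text(test_code)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q"],  # requires pytest to be installed
            cwd=workdir,
            capture_output=True,
            timeout=timeout_s,
        )
        return result.returncode == 0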

Multiple OpenAI researchers emphasized that model differentiation through numeric suffixes no longer reflects how users interact with capabilities, especially with ChatGPT agents executing multi-step coding tasks asynchronously.

The retirement of model suffixes is set against the backdrop of OpenAI’s increasing focus on agent behavior over static model inference. Instead of branding releases with identifiers like GPT-4 or GPT-4o-mini, models will increasingly be identified by function, such as Codex for developer agents or Operator for local system interactions.

According to Andrey Mishchenko, this transition is also practical: codex-1 has been optimized for ChatGPT’s execution environment, making it unsuitable for broader API use in its current form, though the company is working toward standardizing agents for API deployment.

While GPT-4o was publicly released with limited variants, internal benchmarks suggest the next generation will prioritize breadth and longevity over incremental numerical improvements. Several researchers noted that Codex’s real-world performance has already approached or exceeded expectations on benchmarks like SWE-bench, even as updates like codex-1-pro remain unreleased.

The underlying model convergence is meant to address fragmentation across developer-facing interfaces, which has generated confusion around which version is most appropriate in various contexts.

This simplification comes as OpenAI expands its integration strategy across development environments. Future support is expected for Git providers beyond GitHub Cloud, along with compatibility with project management systems and communication tools.

Codex team member Hanson Wang confirmed that deployment through CI pipelines and local infrastructure is already feasible using the CLI. Codex agents now operate in isolated containers with defined lifespans, allowing for task execution lasting up to an hour per job, according to Joshua Ma.
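
As a rough picture of what a bounded, containerized job can look like, the sketch below runs a command in a disposable Docker container and enforces an hour-long cap. It is a hypothetical illustration in the spirit of the lifespans Ma describes, not a view into OpenAI’s actual setup, and the image and command arguments are placeholders.

# Hypothetical illustration of a containerized task with a fixed lifespan;
# not OpenAI's infrastructure. Requires a local Docker daemon.
import subprocess

ONE_HOUR = 3600  # seconds, mirroring the roughly hour-long window cited above

def run_bounded_job(image: str, command: list[str]) -> int:
    """Run a command in a throwaway container and enforce a one-hour cap."""
    result = subprocess.run(
        ["docker", "run", "--rm", image, *command],
        capture_output=True,
        timeout=ONE_HOUR,  # raises subprocess.TimeoutExpired if exceeded
    )
    return result.returncode

# Example: run a trivial check inside a Python container.
# run_bounded_job("python:3.12-slim", ["python", "-c", "print('ok')"])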

OpenAI model expansion

OpenAI’s language models have historically been labeled based on size or chronological development, such as GPT-3, GPT-3.5, GPT-4, and GPT-4o. However, GPT-4.1 and GPT-4.5 are in some ways ahead of, and in other ways confusingly behind, the latest model, GPT-4o.

As the underlying models begin executing more tasks directly, including reading repositories, running tests, and formatting commits, the importance of versioning has diminished in favor of capability-based access. This shift mirrors internal usage patterns, where developers rely more on task delegation than model version selection.

Tworek, responding to a query about whether Codex and Operator would eventually merge to handle tasks including frontend UI validation and system actions, replied,

“We already have a product surface that can do things on your computer—it’s called Operator… eventually we want those tools to feel like one thing.”

Codex itself was described as a project born from internal frustration at under-utilizing OpenAI’s own models in daily development, a sentiment echoed by several team members during the session.

The decision to sunset model versioning also reflects a push toward modularity in OpenAI’s deployment stack. Team and Enterprise users will retain strict data controls, with Codex content excluded from model training. Meanwhile, Pro and Plus users are given clear opt-in pathways. As Codex agents expand beyond the ChatGPT UI, OpenAI is working toward new usage tiers and a more flexible pricing model that may allow consumption-based plans outside API integrations.

OpenAI did not provide a definitive timeline for when GPT-5 or the complete deprecation of existing model names will occur, though internal messaging and interface design changes are expected to accompany the release. For now, users interacting with Codex through ChatGPT or CLI can expect performance enhancements as model capabilities evolve under the streamlined identity of GPT-5.

