OpenAI has announced new 'thinking' AI models: o3
At the end of its 12-day event, OpenAI made its most significant announcement: o3, a new model focused on reasoning.
The neural network is the successor to o1 and, according to OpenAI, sets “new standards of possibilities in the field of programming, mathematics, and scientific thinking.”
The new model is a breakthrough and shows improvements on the most challenging benchmarks, OpenAI co-founder Greg Brockman emphasized.
OpenAI also presented o3-mini, a faster, streamlined version of o3. It will be the first model in the line to become available to the general public, in early 2025.
Reasoning-oriented models spend more time on a response, double-checking information along the way, which should make their answers more truthful and accurate.
After the release of o1, ‘thinking’ neural networks proliferated. According to media reports in October, Google began developing a similar solution. In November, the Chinese lab DeepSeek introduced a ‘competitor to OpenAI’s o1’, the ‘super-powerful’ reasoning model DeepSeek-R1-Lite-Preview. The same month, Alibaba showcased a similar tool.
The o3 model can plan and perform a series of actions before responding, a process OpenAI describes as ‘building a chain of thoughts’. One innovation is the ability to adjust the reasoning time: the model can be configured for low, medium, or high compute. The higher the setting, the better the response, but the longer it takes.
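In OpenAI's API this low/medium/high setting is exposed as a reasoning-effort parameter on the o-series models. The sketch below only constructs a request payload without sending it; the parameter name `reasoning_effort` and the model id `o3-mini` are assumptions based on OpenAI's published API at the time, so check the current documentation before relying on them.

```python
# Sketch: building a chat-completion request that selects the model's
# reasoning effort. No network call is made; `reasoning_effort` and the
# "o3-mini" model id are assumptions based on OpenAI's API docs.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a request payload with a low/medium/high reasoning setting."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "model": "o3-mini",           # assumed model id
        "reasoning_effort": effort,   # higher effort: better answers, slower
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that sqrt(2) is irrational.", effort="high")
print(payload["reasoning_effort"])
```

A payload like this would then be posted to the chat-completions endpoint with an official client; the key point is that reasoning depth is a per-request knob, not a fixed property of the model.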
Security
In December, security researchers found that o1 is more prone to deceiving people than the standard GPT-4o and AI models from other companies.
In a newly published study, OpenAI outlined a method for ensuring that its neural networks uphold the company’s values. The startup applied this approach to train o1 and o3 to ‘think’ about its safety policy while responding.
According to the company, the approach has improved o1’s overall adherence to its principles.
Compared to GPT-4o and other modern large language models, o1 pushes out the Pareto frontier, refusing malicious requests without rejecting benign ones (data: OpenAI). To construct a “chain of thoughts,” o1 and o3 take anywhere from several seconds to several minutes after receiving a request to break the problem into components. To enhance safety, OpenAI trained the models to double-check themselves against its policy.
GPT-5 from OpenAI did not live up to expectations
Meanwhile, The Wall Street Journal reported that the next flagship model, GPT-5, is behind schedule and that its performance gains do not justify the enormous costs.
The new neural network, code-named Orion, has been trained on a huge amount of data. Reportedly, OpenAI did not rely only on publicly available information and licensing agreements in training it; synthetic data generated by o1 was also used.
As a reminder, the AI video generator Sora was also presented during the 12-day OpenAI event.