OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously
Alongside the models, OpenAI released Codex CLI, an open-source coding agent that runs in the terminal and connects the new models to users’ local code. “It’s fully open source, and we expect it to rapidly improve,” said Sam Altman, OpenAI’s CEO, in a tweet.
Software engineers can use Codex CLI to navigate codebases, write and modify code, and generate documentation from the command line, with the models’ reasoning capabilities applied directly to everyday development tasks.
Codex CLI also supports multimodal input: users can pass screenshots or low-fidelity sketches to the models through the command line, and the tool combines that visual context with access to local code.
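Developers can reach the same multimodal reasoning directly through the API. Below is a minimal sketch, assuming the official OpenAI Python SDK and its Responses API; the file path and prompt are illustrative:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local sketch or screenshot as a data URL (path is illustrative)
with open("sketch.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.responses.create(
    model="o4-mini",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text",
             "text": "Turn this UI sketch into a rough HTML layout."},
            {"type": "input_image",
             "image_url": f"data:image/png;base64,{image_b64}"},
        ],
    }],
)
print(response.output_text)
```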
To encourage adoption, OpenAI has launched a $1 million initiative to support projects built with Codex CLI and OpenAI models, with grants available in increments of $25,000 in API credits.
One of OpenAI’s key focuses is ensuring the safe and ethical use of their AI models. The new models underwent extensive safety testing, with a particular emphasis on their ability to refuse harmful requests. The company has also implemented system-level mitigations that flag dangerous prompts and has rebuilt their safety training data.
o3 and o4-mini are available immediately to ChatGPT Plus, Pro, and Team users, with Enterprise and Education customers gaining access next week. Free users can sample o4-mini by selecting “Think” in the composer before submitting a query. Developers can access both models through OpenAI’s Chat Completions API and Responses API, though some organizations must complete a verification process first.
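For developers, adopting the new models is largely a matter of swapping in the new model names. A brief sketch, again assuming the official OpenAI Python SDK, with illustrative prompts:

```python
from openai import OpenAI

client = OpenAI()

# Chat Completions API: o-series models accept a reasoning_effort setting
chat = client.chat.completions.create(
    model="o4-mini",
    reasoning_effort="medium",  # low, medium, or high
    messages=[{"role": "user",
               "content": "Explain why the sum of two even numbers is even."}],
)
print(chat.choices[0].message.content)

# Responses API: the same models behind the newer interface
resp = client.responses.create(
    model="o3",
    input="List two trade-offs between a large and a small reasoning model.",
)
print(resp.output_text)
```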
The release of o3 and o4-mini also represents a significant commercial opportunity for OpenAI. The models are more capable and cost-efficient than their predecessors, making them an attractive option for organizations looking to leverage advanced AI. With them, OpenAI is bridging the gap between reasoning and conversation, combining specialized reasoning with natural dialogue and autonomous tool use.
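That tool use is exposed to developers through function calling. The following is a rough sketch, assuming the OpenAI Python SDK; the get_weather tool and its schema are hypothetical stand-ins for whatever tools an application actually provides:

```python
import json
from openai import OpenAI

client = OpenAI()

# A hypothetical tool the model can decide to call on its own
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user",
               "content": "Should I bring an umbrella in Paris today?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect the structured arguments
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```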
As competition in the AI space continues to intensify, OpenAI’s focus on pairing reasoning capabilities with practical tool use sets these models apart. o3 and o4-mini can also “think with images,” cropping, zooming, and otherwise manipulating visuals as part of their reasoning process rather than simply describing them, in a way that mimics how people work through visual problems.
Overall, OpenAI’s latest releases demonstrate their commitment to advancing AI technology while prioritizing safety and ethical considerations, with models like o3 and o4-mini pointing the way toward more intelligent and capable systems. For more information on OpenAI’s latest models and tools, visit the company’s website.