OpenAI reveals new artificial intelligence tool it claims can think like a human before answering
New series of AI models are designed to help with complex tasks and harder problems, the company said
ChatGPT creator OpenAI has revealed new tools that it claims can consider their answers before responding.
The systems are designed to reason more deeply so that they can better help with more complex problems, the company said.
The tools – named the o1 series, and codenamed Strawberry – are limited in various ways, since they cannot process images or browse the web.
But they should be able to do deeper work, the company claimed. When posed a question, they are able to think about their response “like a person would”, it said, allowing them to “refine their thinking process, try different strategies and recognise their mistakes”.
In a blog post detailing the new models, which remain early preview versions, OpenAI said the o1 series works best when dealing with mathematics and coding tasks.
“These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields,” the company said.
“For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.”
Alongside the main version, OpenAI said it was also rolling out a “faster, cheaper” version called o1-mini which it said was “particularly effective” at coding.
“As a smaller model, o1-mini is 80% cheaper than o1-preview, making it a powerful, cost-effective model for applications that require reasoning but not broad world knowledge,” OpenAI said.
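For developers, a rough sketch of what querying the preview models might look like through OpenAI's Python client is shown below; the model names are as announced, but the request itself is illustrative rather than a confirmed detail of the release.

    # Minimal sketch: asking the o1-preview model to reason through a multi-step task.
    # Assumes the official `openai` Python package and an OPENAI_API_KEY in the environment;
    # everything beyond the announced model names is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="o1-preview",  # or "o1-mini" for the cheaper, coding-focused variant
        messages=[
            {"role": "user",
             "content": "Plan the steps to parse a CSV of cell-sequencing results and flag outliers."}
        ],
    )

    print(response.choices[0].message.content)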
Sam Altman, the OpenAI chief executive, appeared to snap at one user who asked about the company’s long-promised new voice features. OpenAI’s voice tools have proven controversial since it emerged that one sounded remarkably similar to Scarlett Johansson, who said she had not given permission for her likeness to be used.
“When are we getting the new voice features??” one user replied to a tweet in which Mr Altman had announced the availability of the new Strawberry tool. OpenAI had suggested that the tools would be available to paying customers months ago.
“how about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?” wrote Mr Altman.
The announcement highlights how the AI firm is looking to diversify beyond just ChatGPT and its accompanying consumer tools, as it looks to capitalise on the AI frenzy.
Earlier this week, it was reported that the company was in talks with investors to raise 6.5 billion dollars (£5 billion) at a valuation of 150 billion dollars (£115 billion), making it one of the most valuable start-ups in the world.
In addition to the announcements around its new models, OpenAI also revealed that it had “formalised agreements” with AI safety institutes in the UK and US, and confirmed it had granted both institutes “early access to a research version” of the new models.
“This was an important first step in our partnership, helping to establish a process for research, evaluation, and testing of future models prior to and following their public release,” OpenAI said.
Additional reporting by agencies