ChatGPT rival passes university exam
Professor says Claude AI answers questions in law and economics test ‘better than many humans’
An artificial intelligence bot similar to OpenAI’s ChatGPT has achieved a passing grade in a university exam, an economics professor has revealed.
The Claude AI, developed by the research firm Anthropic, earned a “marginal pass” on a blind-graded law and economics test, with answers the examiner described as “better than many human” candidates.
Professor Alex Tabarrok from George Mason University said he believed the Claude AI was an improvement on the artificial intelligence built by OpenAI, though it still had notable flaws compared to the best human students.
The professor cited an example of an answer given to a question about how the law surrounding intellectual property could be improved.
The Claude AI offered a 400-word answer containing five main points detailing potential improvements to IP laws; however, it reportedly lacked clear reasoning.
“The weakness of the answer was that this was mostly opinion with just a touch of support,” Professor Tabarrok said.
“A better answer would have tied the opinion more clearly to economic reasoning. Still a credible response and better than many human responses.”
The latest AI achievement comes amid a wave of hype surrounding natural language models following the public release of ChatGPT last year.
OpenAI’s technology made headlines for its ability to offer human-like responses to a vast range of queries, from explaining complex scientific concepts in simple terms, to coming up with new ideas for a business proposal.
Its apparent abilities have led some schools and universities to ban it from computers and devices in an attempt to prevent cheating, while some fear it contains inherent biases, which are common for AI algorithms trained on human-generated data.
OpenAI CEO and co-founder Sam Altman claims the technology is currently “incredibly limited” but good enough to “create a misleading impression of greatness”.
Mr Altman said it would be “a mistake to be relying on it for anything important right now”, but warned that future iterations would soon make ChatGPT “look like a boring toy”.
Anthropic is one of a number of competitors in a field that includes Google’s unreleased LaMDA and Sparrow chatbots.
The startup was co-founded by former employees of OpenAI, with head-to-head comparisons between ChatGPT and Claude finding that they both have strengths in different areas.
“Overall, Claude is a serious competitor to ChatGPT, with improvements in many areas,” researchers noted in a recent test of the two AI bots.
“Claude’s writing is more verbose, but also more naturalistic. Its ability to write coherently about itself, its limitations, and its goals seem to also allow it to more naturally answer questions on other subjects. For other tasks, like code generation or reasoning about code, Claude appears to be worse.”