Concerns mount as ChatGPT passes MBA exam given by Wharton professor
AI scores somewhere between a B- and B on the exam
OpenAI’s artificial intelligence chatbot has passed the final exam of an MBA programme at the University of Pennsylvania’s Wharton School, according to a new study.
Professor Christian Terwiesch, who authored the study, noted that educators should be concerned that their students might be cheating on homework assignments and final exams using such AI chatbots.
The research, which has yet to be peer reviewed, found that the AI chatbot GPT-3 did an “amazing job at basic operations management and process analysis questions including those that are based on case studies”.
GPT-3 – an older version of the ChatGPT bot that has gained prominence – scored somewhere between a B- and B on the exam, according to Dr Terwiesch.
The study noted that the AI displayed a “remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers and consultants”.
While the chatbot demonstrated these abilities in the study, Dr Terwiesch said at times it also “makes surprising mistakes in relatively simple calculations at the level of 6th grade Math” and was not able to crack more advanced questions related to process analysis.
The Wharton School researcher behind the new study believes the findings have important implications for business school education, pointing to the need for new exam policies and for curriculum design focused on collaboration between humans and AI.
He believes the research also highlights the potential of AI to simulate real-world decision-making processes and to support “creative problem solving, improved teaching productivity and more”.
In December last year, ChatGPT gained prominence for its ability to respond to a range of queries with human-like text output, with some experts speculating that it may revolutionise industries and could even replace tools like Google’s search engine.
Another recent study also revealed that ChatGPT could pass the US medical licensing exam USMLE – a three-part exam that usually takes students about four years of med school and about two years of clinical rotations to pass.
Researchers, including those from Harvard Medical School in the US, found that the chatbot “performed at or near the passing threshold for all three exams without any specialised training”.
In this research, which also has yet to be peer reviewed, scientists found that ChatGPT “demonstrated a high level of concordance and insight in its explanations”.
The findings, the Harvard scientists said, highlight the potential of chatbots to assist with medical education and “potentially, clinical decision-making”.
The new research fuels concerns raised by AI experts and academics following OpenAI’s release of the popular ChatGPT.
Several scholars have raised concerns that the AI’s use by students may disrupt academia.
The New York City education department said earlier that it was worried about the negative impacts of the chatbot on student learning, citing “concerns regarding the safety and accuracy of content”.
“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” OpenAI chief Sam Altman wrote in a Twitter thread.
Mr Altman was referring to artificial general intelligence, meaning an AI capable of learning any intellectual task that humans can perform.
“There are going to be significant problems with the use of OpenAI tech over time; we will do our best but will not successfully anticipate every issue,” he said.