According to a test conducted by researchers at Columbia University’s Tow Center for Digital Journalism, OpenAI’s ChatGPT search tool has trouble providing accurate answers, frequently misattributing quotes to the wrong sources.
OpenAI launched the tool to subscribers in October, claiming it could provide “fast, timely answers with links to relevant web sources.” But as Futurism points out, the researchers found that ChatGPT search failed to correctly identify the sources of quotes from articles, even when they came from publishers that had agreed to share data with OpenAI.
The authors asked ChatGPT to identify the sources of “two hundred citations from twenty publications.” Forty of these quotes were taken from publishers that had blocked OpenAI’s search crawler from their websites. Even so, the chatbot confidently responded with false information, rarely admitting that it was unsure of the details it was providing:
In total, ChatGPT gave partially or completely incorrect answers in one hundred and fifty-three cases, although it admitted only seven times that it was unable to accurately answer the query. Only in these seven responses did the chatbot use qualifying words and phrases such as “seems,” “maybe,” “possibly,” or “may,” as well as statements such as “I couldn’t find the exact article.”
The Tow Center researchers documented ChatGPT search results that incorrectly attributed a quote from a letter to the editor published in the Orlando Sentinel to an article in Time. In another example, when asked to identify the source of a quote from a New York Times article about endangered whales, the tool provided a link to a different website that had plagiarized the article wholesale.
“Misattribution is difficult to resolve without the data and methodology that the Tow Center withheld,” OpenAI told the Columbia Journalism Review, “and this study is an atypical test of our product.” The company went on to promise to “continue to improve search results.”