AI chatbots are distorting news stories, BBC finds

Artificial intelligence chatbots struggle with factual inaccuracies and distortions when summarizing news, according to a study conducted by the BBC. The study, which examined whether OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Perplexity could accurately summarize news, found that more than half of the AI-generated answers had “significant issues of some form.”

As part of the study, the BBC asked ChatGPT, Copilot, Gemini, and Perplexity to provide summaries of 100 BBC news articles, and journalists analyzed their responses. In addition to identifying significant issues in 51 percent of the responses, the BBC found that 19 percent of the responses that cited BBC content introduced factual errors, including incorrect statements, figures, and dates. Meanwhile, 13 percent of the quotes sourced from BBC articles were “either altered from the original source or not present in the article cited.”

The study cited several examples, including Gemini falsely claiming that the UK’s National Health Service (NHS) “advises people not to start vaping and recommends that smokers who want to quit use other methods.” In fact, the NHS does recommend vaping as a method to help people quit smoking. In another example, ChatGPT said in December 2024 that Ismail Haniyeh was part of Hamas’s leadership, even though he had been assassinated in July 2024.

Overall, the study found that Gemini’s responses were the most troubling, with 46 percent “flagged as having significant accuracy issues.” The Verge reached out to OpenAI, Google, Microsoft, and Perplexity for comment but did not immediately receive a response.

Last year, the BBC criticized Apple’s AI-powered notification summaries for inaccurately rewriting one of its headlines. In response, Apple paused the summaries for news and entertainment apps and made AI-generated summaries more visually distinct from standard notifications.

In response to the study, Deborah Turness, CEO of BBC News and Current Affairs, called on tech companies to address the inaccuracies. “We live in turbulent times, and how long will it be before an AI-distorted headline causes significant harm in the real world?” Turness wrote. “We’d like other tech companies to hear our concerns, just as Apple did. It’s time for us to work together: the news industry, the tech companies, and of course government also has an important role to play here.”
