Yesterday morning, billionaire Los Angeles Times owner Patrick Soon-Shiong published a letter to readers announcing that the paper is now using artificial intelligence to add a “Voices” label to articles that take a “particular position” or are “written from a personal perspective.” According to him, these articles may also receive a set of AI-generated “insights” that appear as bullet points at the bottom, including some labeled “Different views on the topic.”
“Voices is not limited to the content of the ‘Opinion’ section,” writes Soon-Shiong. “It also includes news commentary, criticism, reviews, and more. If a piece takes a position or is written from a personal perspective, it may be labeled ‘Voices.’” He also said: “I believe providing a variety of viewpoints supports our journalistic mission and helps readers navigate the issues facing this country.”
LA Times union members did not take the news well. In a statement published by The Hollywood Reporter, LA Times Guild vice chair Matt Hamilton said the union supports some initiatives that help readers separate news from opinion pieces: “But we don’t think this approach, analysis generated by artificial intelligence and not vetted by editorial staff, will do much to increase trust in the media.”
It’s only been a day, and the changes have already produced some questionable results. The Guardian points to a March 1 LA Times article about the dangers of using unregulated artificial intelligence to create content for historical documentaries. At the bottom of that article, the paper’s new AI tool claims the piece “generally favors a center-left viewpoint” and suggests that “artificial intelligence is democratizing historical storytelling.”
AI-generated notes were apparently also added to a February 25 Los Angeles Times article about California cities that elected Ku Klux Klan members to their city councils in the 1920s. One of the now-deleted AI-generated insights stated that local historical accounts sometimes portrayed the Klan as “a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.” That is accurate, as the article’s author notes on X, but it reads as a clumsy counterpoint to the story’s premise: that the faded legacy of the Klan in Anaheim, California lives on in school segregation, anti-immigration legislation, and local neo-Nazi groups.
Ideally, AI tools like these should be used with real editorial oversight to prevent mistakes like the ones the LA Times just made. Sloppy or absent oversight tends to produce problems like news aggregator MSN recommending the Ottawa Food Bank as a lunch spot for tourists, or Gizmodo’s clumsily non-chronological “chronological” list of Star Wars movies. And recently, Apple tweaked the appearance of its Apple Intelligence notification summaries after one misrepresented a BBC headline, falsely suggesting that Luigi Mangione, the suspect in the killing of UnitedHealthcare’s CEO, had shot himself.