AI Article Errors: Bloomberg Removes Summaries, LA Times Pulls Tool
Bloomberg has removed an AI-generated summary from a news article, and the Los Angeles Times has removed an AI tool from an opinion article.
Bloomberg announced on Jan. 15 that it would add AI-generated summaries to news articles. One such summary was removed from a March 6 article because it incorrectly stated that President Trump had imposed tariffs on Canadian goods last year rather than this year.
A Bloomberg spokesperson said on Jan. 29 that the AI article summaries are “not meant to replace our journalism, but to complement it,” and that “we are transparent when stories are updated or revised, and when AI is used.”
The spokesperson continued, “Reporters have full control over whether their summaries appear before and after publication, and can remove summaries that do not meet our standards,” adding, “We publish thousands of articles every day, and 99% of our AI summaries currently meet our editorial standards.”
Bloomberg announced on the 26th that the AI-generated summary of its article on the Trump auto tariff announcement, “Trump’s tariffs escalate trade wars with allies,” had been removed “because it incorrectly stated when the broader tariffs would take place.”
Bloomberg’s revised article lacked “attribution on tariff timing,” The New York Times noted.
The New York Times reported on the 29th that “Bloomberg had to correct at least 30 AI-generated summaries of articles published this year,” and that Bloomberg’s article on the 29th accurately reported that Trump would announce the tariffs that day, while the AI-generated bulleted summary of the article incorrectly stated when the broader tariffs would take place.
The Times said that “in early March, the Los Angeles Times removed an AI tool from an opinion article after it described the Ku Klux Klan as something other than a racist organization.”
Bloomberg editor-in-chief John Micklethwait offered a largely positive assessment in his January 10 essay, “How AI Will Help Journalism,” in which he wrote, “Consumers like the fact that they can quickly find out what a story is about. Journalists are more skeptical.” He added, “AI summaries are only as good as the stories they are based on. And getting the story is still a human-driven part.”
On readers’ preference for AI-generated article summaries, he acknowledged that “journalists worry that people will read the summary instead of the article.”
The essay, which laid out the editorial rationale for bringing AI-generated summaries into the publication, was based on a lecture he gave at City St George’s, University of London, on the role of AI in journalism.
As more American media outlets search for the best way to use AI in reporting and editing, problems are surfacing.
The newspaper chain Gannett uses similar AI-generated summaries on its articles.
The Washington Post has a tool called “Ask the Post” that generates answers to questions from published Post articles.
As AI expands into the realm of reporting and storytelling, the problems are cropping up in the media’s own core functions.
Bloomberg, among the first to introduce AI directly into the realm of reporting, shifted its message in two months: from the editor-in-chief’s January claim that “AI summaries are only as good as the stories they’re based on” and that humans remain essential to getting the story, to a spokesperson’s assurance that the summaries are “not meant to replace our journalism, but to complement it,” even as erroneous summaries were being removed.