New York
The press freedom group Reporters Without Borders is urging Apple to remove its newly introduced artificial intelligence feature that summarizes news stories after it produced a false headline from the BBC.
The backlash comes after a push notification created by Apple Intelligence and sent to users last week falsely summarized a BBC report that Luigi Mangione, the suspect behind the killing of the UnitedHealthcare chief executive, had shot himself.
The BBC reported it had contacted Apple about the feature “to raise this concern and fix the problem,” but it could not confirm if the iPhone maker had responded to its complaint.
On Wednesday, Reporters Without Borders technology and journalism desk chief Vincent Berthier called on Apple “to act responsibly by removing this feature.”
“A.I.s are probability machines, and facts can’t be decided by a roll of the dice,” Berthier said in a statement. “The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”
More broadly, the press freedom group said it is “very concerned about the risks posed to media outlets by new A.I. tools,” noting that the incident emphasizes how A.I. remains “too immature to produce reliable information for the public, and should not be allowed on the market for such uses.”
“The probabilistic way in which A.I. systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public,” RSF said in a statement.
In response to the concerns, the BBC said in a statement: “It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”
Apple did not respond to a request for comment.
Apple introduced its generative-AI tool in the US in June, touting the feature’s ability to summarize specific content “in the form of a digestible paragraph, bulleted key points, a table, or a list.” To streamline news media diets, Apple allows users across its iPhone, iPad, and Mac devices to group notifications, producing a list of news items in a single push alert.
Since the AI feature was launched to the public in late October, users have shared that it also erroneously summarized a New York Times story, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested. In reality, the International Criminal Court had issued a warrant for Netanyahu’s arrest, but readers scrolling their home screens saw only two words: “Netanyahu arrested.”
The challenge with the Apple Intelligence incident stems from news outlets’ lack of agency. While some publishers have opted to use AI to assist in authoring articles, that decision is theirs. But Apple Intelligence’s summaries, which users opt into, still present the synopses under the publisher’s banner. In addition to circulating potentially dangerous misinformation, the errors also risk damaging outlets’ credibility.
Apple’s AI troubles are only the latest as news publishers struggle to navigate seismic changes wrought by the budding technology. Since ChatGPT’s launch just over two years ago, several tech giants have launched their own large language models, many of which have been accused of training their chatbots on copyrighted content, including news reports. While some outlets, including The New York Times, have filed lawsuits over the technology’s alleged scraping of content, others, like Axel Springer, whose news brands include Politico, Business Insider, Bild and Welt, have inked licensing agreements with the developers.