talking point

Dodgy with facts, but it’ll get better

Paris branch takes a critical look at AI. Sylvia Edwards Davis reports
The Paris branch of the NUJ hosted a panel on how AI is affecting journalism. This was part of a seminar held
jointly with the Anglo-American Press Association, which I moderated as the continental Europe representative on the New Media Industrial Council. Regulation emerged as a critical concern.

Eric Scherer, director of News MediaLab and international affairs at France Télévisions, addressed the nature of this revolution. “If the last great wave of technology was about disseminating information, the new wave is about producing it,” he noted. While it would be shortsighted to limit AI’s potential to help society in fields such as health or the environment, Scherer made the case for treating news differently. “The new tools can be used to twist reality, to manipulate facts or events. AI does not do any research, does not question sources, does not give its own sources,” he said.

Scherer added that a handful of giant
private platforms are engaged in a “very secret arms race”. With AI becoming more powerful and radically cheaper by the month, “we just don’t know anything about what is under the hood. We don’t understand how it works and sometimes [the platforms] don’t know either.” Scherer underlined that high-ranking scientists and developers are urging regulators to slow down the roll-out to gain visibility. “Some say that artificial general
intelligence, where AI will be smarter than humans, is still far away – there is a big debate raging – but I would be very cautious.”

Scherer spoke from the front row as a member of the commission led by Nobel Peace Prize laureate Maria Ressa, which crafted the Paris Charter on AI and Journalism under the umbrella of Reporters Without Borders. He sees the charter as an initiative to establish an ethical framework to protect the integrity of information and the profession. “The core principle is that ethics must govern technological choice within the media; human agency must remain central in editorial decisions in a human-machine-human model. The charter calls for the media to help society distinguish with confidence between authentic and synthetic content and, of course, for the media to participate in global AI governance and defend the veracity of journalists when negotiating with these platforms.”

The commission comprises 32 members from 20 countries, with support from 16 partner organisations including the Canadian Journalism Foundation, DW Akademie, the European Journalism Centre, the Pulitzer Center, the Thomson Foundation, the European Federation of Journalists and the Asia-Pacific Broadcasting Union.

Given the pace of AI development, both panellists agreed that continuing education and adaptation are essential if journalists are to maintain a competitive edge. Chris O’Brien, founder and editor of The French Tech Journal and its weekly newsletter La Machine: France AI Radar, warned: “While AI productivity tools are everywhere all of a sudden, the output is still pretty recognisable in the repetitive writing or cartoonish images but, in the very near future, we won’t be able to tell.”

O’Brien, who is involved in the implementation of AI, said it was key that a journalist engaging with the tools understood their limitations. “You could be unknowingly plagiarising, because AI has gone out and taken a line from someone else’s article. If you have to spend all this time fact-checking and making sure you haven’t plagiarised, the question is: did you really save yourself time?”

AI’s propensity for sloppiness and creating illusions can lead a journalist down a rabbit hole of inaccuracy. “Everything sounds reasonable and then you go: ‘Wait a minute! I’m pretty sure Hitler didn’t win World War II’,” says O’Brien.

08 | theJournalist
His concerns are in line with data. In October, OpenAI released the SimpleQA benchmark, which measures the ability of large language models (LLMs) to answer factual questions. Its own o1-preview model was correct only 42.7 per cent of the time, while ChatGPT 4.0 came in with 32.8 per cent and Anthropic’s Claude 3.5 Sonnet with a dismal 28.9 per cent.

Scherer recounted a recent experience in the newsroom at France Télévisions, a counterpart of the BBC, where AI is generally given a wide berth. A blurring tool was applied to protect female Iranian protesters; it later became known that the face blurring could be undone by another AI tool, “so this is a big risk on the network’s archives in a very sensitive case for people who trusted us to protect their identity”.

Scherer believes that LLMs are not going to replace journalists in the short term. However, he added that someone “who knows how to use it will far outstrip all others. Journalists will distinguish themselves by the nature of the relationship they have with AI.”

Talking ethics and governance: Eric Scherer, Sylvia Edwards Davis and Chris O’Brien
PAUL GRAYSON: PHOTEINOS.COM