The assistant you should never trust
AI can rehash press releases and do research but can get things very wrong. Neil Merrick looks at Impress advice
As anyone who’s been stalked by their AI assistant will say, artificial intelligence can be a pain in the neck.
Copilot, the AI assistant created by Microsoft, will happily offer to write the next paragraph of your story – providing you give it a precis of what you require that does not exceed 1,000 words.

Why are so many media organisations getting excited about AI? Is it just a way of saving money, with AI tools carrying out more tiresome research – or are journalistic standards at stake?

Among the sceptics is Peter Jukes, co-founder and executive editor of Byline Times. It is better, he says, to regard AI assistants as over-enthusiastic trainees who are keen to help out but whose work needs careful looking over.

“Never trust that mendacious, eager intern,” he told a webinar organised in April by media regulator Impress to launch guidance on using AI in newsrooms.

According to Jukes, AI is potentially a better research tool than Google, with less need to scroll through links that are out of date or promoted for commercial reasons. But that is not to say you can trust AI.
“It does dive a bit deeper but, like anything, you [need to] verify, verify, verify,” he says.
It is hard to gauge exactly how much journalists use AI or the precise role it plays in creating news.

In March, The Independent launched Bulletin, a platform ‘for seriously busy people’ that uses Google Gemini, an AI chatbot, to create bullet point briefings. These are checked by journalists before publication.

Regional titles owned by Newsquest have been using AI to gather stories from council reports and other documents since 2023. It is up to reporters to input ideas and check the quality of information provided.

Editorial development director Toby Granville told a Society of Editors conference in April that Newsquest employs 36 ‘AI-assisted’ reporters to turn press releases into stories, helping free up other journalists for meatier tasks.

A consultation by Impress at the start of this year showed 84 per cent of members want strong guidance on AI. Half had a negative view of AI, with just 19 per cent seeing it as positive.

Andrea Wills, chair of Impress’s code committee, says the regulator is most concerned about unethical uses of generative AI, for example when using tools such as ChatGPT. The committee is keen that publishers employ AI in an ethical and responsible way.

Impress’s guidance says publishers should:
• Always provide editorial oversight for AI-produced content
• Be transparent with audiences about AI use, including clear labelling
• Fact check information produced by AI
• Take care over data put into AI tools and avoid infringing copyright
• Not use generative AI to depict real events
• Not share personally identifiable information when using AI
AI makes mistakes. Research by chatbots and other AI tools therefore needs corroboration in the same way as other sources. In addition, it is biased, with no recognition of equality and diversity.

Matthew Eltringham, a senior editorial adviser at the BBC, told the Impress webinar that, before using AI, BBC journalists are instructed to prompt Copilot and other tools by asking them to create an image of a sailor on a bike.

“Almost 98 times out of 100, you’ll get a white male wearing a Jean Paul Gaultier style sailor’s outfit on a bike that looks roughly like a Harley Davidson,” he says. “Almost certainly, he will have a bit of designer stubble on his chiselled jaws.”

An AI-generated ‘interview’ with Michael Schumacher, published by German celebrity magazine Die Aktuelle two years ago, flagged up the dangers of using generative AI and misleading audiences. Although the editor was sacked, it took another year for Schumacher’s family to win compensation.

The BBC is looking into the use of AI to anonymise people’s voices while maintaining authenticity. “We must never use AI in a way that’s materially misleading the audience but we also want to use AI creatively,” says Eltringham.

Tech giants such as Microsoft want to be seen to be working alongside journalists, not threatening jobs. Krishna Sood, who leads on AI at Microsoft, says the firm is keen to collaborate “on solutions that make newsrooms stronger” while creating efficiencies.

Perhaps, therefore, AI should be given the occasional opportunity to prove its worth but approached with extreme scepticism and under no circumstances allowed to set the news agenda or make editorial decisions.

“It doesn’t make choices,” warns Peter Jukes. “It has no skin in the game.”