readers. It’s a description of the image. But it wasn’t always getting done. We introduced AI to that task, and now it is.” The FT Group, which includes specialist magazines as well as the newspaper, is actively considering several potential AI projects, including helping journalists get through big documents for investigations and analysis, voicing up articles, and bullet pointing stories. Like the BBC, Steve says the FT has strong safeguards around content. “Our policy says that people are at the centre of our


journalism and everything must be created by humans. If AI is used, it should be declared. When you start to automate without safeguards, that’s when you can lose your credibility.” Bird says the technology isn’t the problem. The problem comes when companies are aggressively focused on commercial return. “Developments need to be driven by journalists, but too


often, commercial managers see this as a quick fix. And it isn’t.” Not all employers in our industry are so conscientious. I sent an email to a contact who works for Reach. They say


that while they don’t want to be thought of as Luddites, many journalists are worried about jobs going and standards dropping. There is talk of a news ‘hub’ being created, from which AI will rewrite stories for each title – which could include The Mirror, The Express, the Daily Record, the Daily Star and local papers such as the Manchester Evening News and the Liverpool Echo. Reach already uses something they call Gutenbot, which


re-versions stories by Reach journalists so they can be shared across titles. Is this the same idea that the BBC has, but seen through a profit-only lens? My source tells me Reach does not care about standards and is all about cutting costs and getting online clicks rather than concentrating on journalistic skill, contacts and hard work. It does seem like the main national media outlets are all on the same track in terms of the ways technology can be used, but, as Bird says, that’s not the problem – it’s how you use it. The Guardian newspaper came under fire at the beginning of the year after bosses used AI to help write headlines during


I asked Perplexity AI to explain what a large language model is


Could you explain in 150 words what a large language model is? For an audience of unionised journalists

A large language model


(LLM) is a sophisticated type of artificial intelligence designed to understand, interpret, and generate human language. At its core, an LLM is powered by artificial neural networks – a computational approach inspired by how the human brain works. These models are ‘large’ because they are


trained on vast amounts of text data, such as books, news articles, websites, and social media, which enables them to predict and construct language in a contextually appropriate way. For journalists, LLMs can


quickly produce text summaries, suggest headlines, and even draft articles, influencing how


newsrooms operate. They can also assist with fact-checking and multilingual translation. However, concerns remain: LLMs can reproduce biases present in their training data and may accidentally generate inaccuracies. As professionals dedicated to truth and employment rights, unionised journalists should scrutinise both the benefits and challenges these tools introduce to the craft and workplace.


Below: Readers prefer news with human oversight, according to Reuters Institute Digital News Report


Proportion that say they are very or somewhat comfortable with news being produced in the following ways – USA and Europe


[Bar chart: comfort with news produced ‘mainly by AI with some human oversight’ versus ‘mainly by humans with some help from AI’, for USA and Europe, with responses grouped as comfortable, neither/nor, don’t know and uncomfortable]


Q2_AIComfortlevel_2024_1. In general, how comfortable or uncomfortable are you with using news produced in each of the following ways? Base: total sample in USA = 2053, Europe = 50,532.


18 | theJournalist


a dispute over the sale of The Observer. While staff recognised that AI was being used in the industry, they say they had been told that it would be used at The Guardian only after consultation and with readers being fully informed. The union is in discussion with the paper about the use of AI.

Over the past decade, the Reuters Institute Digital News Report has surveyed thousands of people across the world to find out about their relationship with the news, and it has started to ask respondents about the use of AI. This year’s report showed that AI chatbots and interfaces are emerging as search engines and other platforms integrate real-time news.

The report says that both the FT and The Washington Post are trialling chatbots on their own content. This may be similar to a project Time Magazine has undertaken with a company called Eleven Labs, where you can ask Time’s conversational AI (or chatbot) questions about its person of the year – almost as if you had access to your own personal journalist. You can be more confident of the accuracy because the large language model is trained only on verified Time material.

So what do audiences think? The majority of respondents


to Reuters’ survey are still uncomfortable with its use, which will be a relief to NUJ members, at least for now. There is some enthusiasm for making the news more accessible or relevant, including summarisation (27 per cent), translation (24 per cent), better story recommendations (21 per cent) and using chatbots to ask questions about news (18 per cent). Respondents were asked what they thought would be good


about AI. They said it would make news cheaper and more up to date, but they also said it might make it less transparent, less accurate and less trustworthy. It would be interesting to be in the shoes of whoever gets


that gig as director of AI and machine learning at The Washington Post. Tasked with ensuring both operational efficiency and innovative journalism, they may manage to deliver the two – but at what cost to us humans?

