Big data is watching you


AI poses huge risks to staff, such as physical and political surveillance, warns Steve Bird


In the spring, about 30 NUJ reps from media companies around the UK met in Birmingham to discuss the state of journalism. During the lunch break, strikers from the nearby Amazon warehouse were invited in to build solidarity.


As they described their work conditions, they conjured up a dystopian scene. They talked about low pay and punishing levels of supervision. The most moving account – and the most inspiring – was of the first GMB strikers emerging through the fog at midnight, walking past ranks of surveillance cameras and security fencing.

To the journalists, this experience of being managed by computer metrics, having personal space stripped away and being deskilled and micromanaged was alien and shocking. Few foresaw that, just a few months later, this picture of a shifting balance of power between human and machine would be discussed as part of the very near future – not just for Amazon warehouse workers but for everyone.

At the heart of the matter is AI – the revolutionary technology that seems capable of performing almost any intellectual task a human can. Its advent has been described by Microsoft founder Bill Gates as “fundamental as the creation of the microprocessor, the personal computer, the internet and the mobile phone”, and by Sam Altman, CEO of OpenAI, the company behind ChatGPT, as a technology with prospects that are “unbelievably good”. But, hinting at the capacity for such vast processing power to learn independently outside human control or to be used to develop dangerous technology, Altman also told StrictlyVC, ominously, that the new tech could mean “lights out for all of us”.

Massive investment and the steep growth of computer processing power mean these scenarios are no longer fantastical; many are fast becoming a reality. AI is already having an impact in workplaces.

Mary Towers is a leading member of the TUC’s working group on AI. “Since Covid, there has already been a significant amount of AI technology rolled out”, with workers being used as a “live experiment”, she says. “This is not restricted to the gig economy,” she adds.

Giving evidence at a committee on the Data Protection and Digital Information Bill in May, Towers said: “Twenty per cent of German people and 35 per cent of Spanish people are subject to algorithmic management systems at the moment.” And the ability to delegate communications and admin to AI is about to become widespread. Companies including Microsoft and Google are already testing versions of generative AI to add to the software they provide, including Gmail and Google Docs.

16 | theJournalist


So it is no surprise that admin and white-collar workers are seen as being at particular risk. In June, multinational HR company Challenger, Gray and Christmas reported that 4.9 per cent of monthly job losses were attributable to the effects of AI. Newsquest has advertised for its first ‘AI-powered journalist’, a role that requires the holder to “efficiently upload and manage a high volume of stories, using time-saving AI tools and techniques”.

Fears about the future of journalism were outlined in stark terms in a motion passed at the NUJ Delegate Meeting (DM) in April: “This DM is concerned that such is the pace at which artificial intelligence (AI) produces apparent news content, our entire information ecosystem could become unbalanced.

“AI has the capacity to economically disadvantage individual creators – particularly visual creators such as photographers, videographers, illustrators and cartoonists. Their work is potentially scraped, without their permission, credit or payment, and used to create content that competes directly with their work.”

“This technology, which ought to be wonderful, is being developed in a society that is not designed to use it for everybody’s good”

In political and cultural terms, the prospect of technology that can exactly mimic the tone, appearance and context of content from reputable news outlets, for example, could take the generation of fake news and false accounts to a new level. It also raises the question of how to protect unique content, as well as individual style and ‘voice’, in media work. This is about not just plagiarism but also individual ownership rights in a world where media production is created at scale based on past work.

Towers is concerned that the UK has no AI-specific laws and no equivalent of the EU’s AI Act. “This is a huge worry for us,” she says. While the EU, Canada and the US are developing legislative controls (Canada has mandatory algorithm assessments), the UK’s white paper has no statutory footing.

