AI
It can be very difficult to tell whether an image is genuine, says Andrew Wiard


Is seeing still believing?


This fake ‘photograph’ won first prize in the DigiDirect photography competition (https://tinyurl.com/4za3b6px)


The picture opposite above is a fake. I don’t mean it’s photoshopped or edited. It was never a photograph in the first place. But it still won first prize in a photographic competition. Its creators, Absolutely AI, confessed: “The surfers in our image never existed. Neither does that particular beach or stretch of ocean. It’s made up of an infinite amount of pixels taken from infinite photographs that have been uploaded online over the years by anyone and everyone.” Fakes like these are now ubiquitous, and ever more realistic and convincing.

The other ‘photograph’ here was not just a fake picture but also fake news, made with the artificial intelligence (AI) text-to-image generator Midjourney. It was used to front a Turkey/Syria disaster appeal, which has since disappeared along with any money raised. This is frightening. It strikes at the heart of everything we do, and it goes way beyond the doctoring of authentic photographs. Professional photographers can guarantee their work with film or raw file originals. But with AI there are no originals, just word prompts entered into a text-to-image generator. So, what to do when we can’t trust the evidence of our own eyes?

One answer is to identify fakes. Both the UK and the EU, with its forthcoming Digital Services Act, are proposing the compulsory labelling of AI pictures, which would make a dramatic difference. But this is not foolproof. We also need to guarantee the reality and authenticity of the pictures we create. The solution here lies in digitally identifying genuine photographs at the moment of creation and establishing their provenance: digital truth fighting digital lies.

This can be done today, with no need for legislation. Here is how it works. A digital code inserts provenance information into every photograph as it is taken, even before it leaves the camera. This code then tracks and traces every change and alteration on its travels through computer software, and then on to agencies, archives, websites and social media. At any point, anyone can check for the digital signature, which, if present, will reveal the picture’s origins, history, authenticity, authorship and ownership.

That, in a nutshell, is the scheme launched in 2019 by the Content Authenticity Initiative (CAI), founded by Adobe and joined initially by the New York Times and then Twitter. This led in 2021 to the Coalition for Content Provenance and Authenticity (C2PA), run by Adobe, Arm, the BBC, Intel, Microsoft and Truepic. It is described as a ‘mutually governed consortium created to accelerate the pursuit of pragmatic, adoptable standards for digital provenance, serving creators, editors, publishers, media platforms, and consumers’. As the aim is universal adoption by the worlds of publishing and digital photography, the digital code on which it depends is open source and available to all. Here’s an introduction – an interview with Andy Parsons, senior director of CAI at Adobe: http://bit.ly/3lRZM1r. Parsons says he “cannot guarantee the veracity of the thing that is depicted, but we can guarantee the details are on how it was made, where it was made, who made it [and] what equipment was used”.
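The signing-and-checking idea described above can be sketched in a few lines of code. This is only a toy illustration of the principle – not the real C2PA standard, which uses public-key certificates and a far richer manifest format. The key, field names and functions here are all hypothetical, standing in for what the camera and verification tools would actually do.

```python
# Toy sketch of the provenance idea behind CAI/C2PA (NOT the real standard):
# sign a photo's bytes and authorship record at capture time, then let anyone
# with the key verify later that neither has been altered.
# C2PA uses public-key signatures; HMAC stands in here for simplicity.
import hashlib
import hmac
import json

CAMERA_KEY = b"hypothetical-camera-signing-key"  # placeholder for the camera's key


def sign_at_capture(image_bytes: bytes, author: str) -> dict:
    """Build a provenance record for an image as it is created."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # fingerprint of pixels
        "author": author,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(image_bytes: bytes, record: dict) -> bool:
    """Check that both the image bytes and the record are untampered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

Editing the image, or rewriting the authorship field, breaks the check: the stored signature no longer matches what is recomputed, which is exactly the property that makes provenance records tamper-evident.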


Fake ‘photograph’ generated by AI text-to-image generator Midjourney used for Turkey/Syria earthquake appeal. Close inspection reveals an almost hidden sixth finger on the firefighter’s hand





Fake photographic news is the one AI problem with a clear, practical solution. The NUJ should give this scheme its full support


This technology works. Nikon and Leica are already inserting the code into some of their latest cameras. Within a few years, it could be available in all of them, for amateurs as well as professionals. So, why hasn’t it been adopted by Apple for the iPhone?

Fake photographic news is the one AI problem with a clear, practical solution now. However, universal acceptance and critical mass are the necessary preconditions for this scheme to work, and, in my view, the NUJ should give it its full support. So we can once more trust our own eyes, and seeing will still be believing.


theJournalist | 18

