Fraud prevention & security
AI in disguise
Last May, a new ruse intended to defraud banking customers emerged in the form of a ‘deep fake’ video – clips that appear to show a well-known personality, but are in fact elaborate fakes. The con used an AI-generated clone of Matt Comyn, the CEO of the Commonwealth Bank of Australia (CBA), purporting to promote a passive income product – ironically, itself supposedly an AI tool developed by CBA. In reality, this product wouldn’t have provided punters with a passive income boost, but would rather have fleeced them of what funds they already had. As the Australian Financial Review reported in June, the video, posted on Facebook and Instagram, showed a very convincing and human-like Matt speaking to viewers about this supposed new financial product. He (or perhaps ‘it’?) even tried to sell the fake product to the bank’s clients. “We launched an automatic platform for passive income,” the fake Comyn says. “You invest in gold, banks, stocks, funds and other profitable instruments. The platform will automatically generate profits.”
But watch and listen closely, and the tell-tale signs of something not-quite-human become noticeable. Comyn’s accent, for instance, isn’t quite right, and the movement of his lips isn’t properly synchronised with his words. These inconsistencies gave the scam away, and CBA warned its customers about the deep fake video later in May. “We urge everyone, not just our customers, to be extremely sceptical of any such ‘opportunity’,” said James Roberts, CBA’s general manager of fraud, echoing the sentiment that if something sounds too good to be true – it usually is.
These scams run the full gamut, from impersonating CEOs, as in the CBA example above, to chatbots posing as legitimate customer service representatives and holding eerily human-like conversations with customers, all to extract information from them. AI can also greatly improve the efficacy of so-called ‘credential stuffing’ attacks, in which stolen username and password combinations are purchased on the dark web and an AI then tries them across a range of sites in the hope of striking lucky. In this way, an AI may gain access to one email account, work out that the address is linked to another website that has, say, credit card information stored, and then use the hijacked email account to work its way around two-factor authentication.
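To illustrate the defensive side of this, the sketch below shows one common way a site might flag credential-stuffing traffic: a single source address attempting logins for many distinct usernames in a short window. This is a minimal illustration only – the threshold, event fields and function name are assumptions for the example, not any real system’s design.

```python
from collections import defaultdict

# Minimal sketch: flag source IPs that attempt logins for many *distinct*
# usernames within one monitoring window -- the signature of credential
# stuffing, where leaked username/password pairs are replayed at scale.
# The threshold and the event format are illustrative assumptions.
STUFFING_THRESHOLD = 20  # distinct usernames per IP per window

def flag_credential_stuffing(login_events, threshold=STUFFING_THRESHOLD):
    """login_events: iterable of (source_ip, username, success) tuples.
    Returns the set of source IPs that look like stuffing attempts."""
    usernames_by_ip = defaultdict(set)
    for ip, user, success in login_events:
        if not success:  # stuffing produces mostly failed logins
            usernames_by_ip[ip].add(user)
    return {ip for ip, users in usernames_by_ip.items() if len(users) >= threshold}

# Example: one IP cycling through a leaked credential list,
# alongside a normal successful login from another address.
events = [("203.0.113.9", f"user{i}", False) for i in range(25)]
events += [("198.51.100.4", "alice", True)]
print(flag_credential_stuffing(events))  # {'203.0.113.9'}
```

Real defences layer this kind of velocity check with rate limiting, device fingerprinting and checks against known-breached password lists.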
Future Banking / www.nsbanking.com

Taking out human error
With such a dizzying array of AI dangers, it’s surprising that businesses are able to fend off the daily attacks against their customers. This, of course, used to be the sole domain of ‘white hat’ human operatives working in in-house security operations centres, who would sift through abnormal activity on their company’s networks, servers and data centres to try to catch attacks before they develop. With the advent of advanced AI technology, however – limited only by computational power, and capable of executing millions of attacks within a short timeframe – humans alone clearly can’t hope to face the onslaught without help.
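The sifting that those human analysts do can be partly automated. As a toy illustration of the kind of triage a security operations centre runs, the sketch below flags accounts whose activity deviates sharply from their own historical baseline. The 3-standard-deviation threshold, the data shapes and the function name are assumptions for the example; production systems use far richer features.

```python
import statistics

# Toy anomaly triage: flag accounts whose event count this hour is a
# statistical outlier against that account's own hourly baseline.
# Threshold and data layout are illustrative assumptions.
def flag_anomalous_accounts(history, current, z_threshold=3.0):
    """history: dict account -> list of past hourly event counts.
    current: dict account -> event count for the current hour.
    Returns the accounts whose current activity looks anomalous."""
    flagged = []
    for account, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts) or 1.0  # avoid divide-by-zero
        z_score = (current.get(account, 0) - mean) / stdev
        if z_score > z_threshold:
            flagged.append(account)
    return flagged

history = {"alice": [10, 12, 9, 11, 10], "bob": [5, 4, 6, 5, 5]}
current = {"alice": 11, "bob": 60}  # bob's activity suddenly spikes
print(flag_anomalous_accounts(history, current))  # ['bob']
```

A human analyst then only reviews the flagged accounts, rather than every log line – which is precisely the division of labour the article goes on to discuss.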
Cybercriminals have been able to exploit generative AI and powerful digital multimedia tools to fool banking customers into believing they are talking to legitimate representatives.
“There is no point in wasting time lamenting the use of powerful AI tools by cybercriminals. Rather, the correct approach is to ‘get with the times’ and realise that the technological world...waits for no one.”
Quite aside from the victims of fraud themselves, these cybersecurity professionals are often susceptible to fraudsters or, indeed, their own foibles. We often like to talk about how all people make mistakes, and that’s a simple truism – nobody is perfect. Yet the cost of a human operator getting it wrong can be astronomical, not least for maintaining the relationship of trust between a financial institution and its clients, who may ultimately find themselves being relieved of their money. Nor is this just a theoretical danger. In 2022, IBM’s Cyber Security Intelligence Index, a comprehensive study drawing on data from the company’s commercial clients in 130 countries, found that human error poses a significant challenge for financial institutions: simple mistakes were a major contributing cause in 95% of all breaches. These errors can occur in various ways, from data analysis mistakes to accidental clicks on phishing emails.
$3.5trn – the amount lost to fraud annually. Source: Association of Certified Fraud Examiners