Industry Opinion


ChatGPT: Threat or opportunity?


Ian Hirst, partner, Cyber Threat Services at Gemserv, discusses the security pros and cons of ChatGPT.


ChatGPT has been the subject of many a heated debate since its launch at the end of last year. Developed by OpenAI, a research and development company which also created generative artificial intelligence (AI) tools like DALL-E, ChatGPT uses an extensive GPT-3 language model based on billions of data points gathered from across the internet to respond to questions and instructions in a way that mimics human conversation. People interacting with ChatGPT have used it to compose poetry, write academic essays, explain scientific concepts and produce code. As with all new technology, there are some fantastic possibilities; however, there is also serious potential for exploitation by bad actors.


The security challenges of ChatGPT

As an open, readily accessible tool, the billions of data points ChatGPT is trained on are also accessible to malicious actors, who could use this information to carry out any number of targeted attacks. One of the most concerning capabilities of ChatGPT is that it is designed to imitate realistic conversations, which threat actors could leverage for phishing attacks. By posing as ChatGPT, threat actors could deceive users into thinking that they are interacting with a legitimate and trustworthy source, when in reality they are being exposed to harmful and malicious content. Users who fall victim to targeted phishing campaigns could suffer financial losses, compromise their personal information or even download and install malware, causing significant harm. The tool also opens up opportunities for more sophisticated impersonation attempts, in which the AI is instructed to imitate a victim’s friend, colleague or family member in order to gain trust before launching an attack. Whether it’s through a more general phishing attempt or an act of impersonation, cybercriminals could use the AI technology to generate convincing campaigns at scale, which is highly concerning.


Leveraging ChatGPT as a force for good

While there are a multitude of ways threat actors could use ChatGPT for criminal activity, there are also a number of ways that the AI technology can be used as a force for good in cybersecurity. Experts can actually leverage ChatGPT to help detect and prevent cybercrime by monitoring conversations for suspicious activity. If any part of a conversation is considered unusual or suspicious, it can then be flagged to security experts for further investigation. Taking a more preventative approach to security by flagging potential threats early means that incidents can be tackled in a timely and effective manner, before escalating into a harmful attack.

ChatGPT could also aid cybersecurity teams with research and development, allowing experts to study the ways in which cybercriminals operate and to develop new strategies and technologies to defend against them. In an always evolving space, it’s important that security teams stay one step ahead of threat actors at all times, which means staying across new and developing technologies.

Realistic chat conversations can also be used to educate the public about cybersecurity best practices and help them to develop skills to protect themselves online. For example, ChatGPT could be used to simulate phishing attacks and teach people how to identify and avoid them.

It’s important to carefully consider the potential risks and benefits of any technology, and to take appropriate precautions to protect against misuse and abuse. ChatGPT has the potential to be a powerful tool in the fight against cybercrime and could help to improve cybersecurity in a variety of ways. It should also be noted, however, that ChatGPT is not currently connected to the internet and the current model is based upon data up to and including Q4 2021. If ChatGPT were connected to the internet, it would have access to the vast amount of information and data available on the web. This would allow the algorithm to provide more comprehensive and accurate responses to user queries and questions, becoming more powerful still.

There are certain technical details that have to be figured out before ChatGPT is widely used, to prevent negative outcomes such as cyberattacks and the spread of misinformation. In general, AI and machine learning (ML) models rely on lots of training and fine-tuning to reach a level of ideal performance. We need to tread carefully in the development of ChatGPT to ensure it remains an opportunity, and not a threat.


April 2023 | 15

