Concern has been growing that bad actors could use ChatGPT as a phishing tool to trick individuals into giving away personal information, such as passwords and credit card numbers. The model's ability to generate convincingly human text makes it an ideal instrument for impersonating legitimate companies and organizations in order to steal information from unsuspecting victims.
Consider that, from a simple text prompt, ChatGPT could generate an email or message that appears to come from a well-known financial institution such as a bank. As is typical of phishing techniques, the prompt can specify that the text contain a fake request for the recipient to provide details such as their account number and password, or to reveal further personal and financial information such as the answers to their security questions. ChatGPT is sophisticated enough that the resulting text can appear authentic and trustworthy, making it easier for the perpetrator to convince the recipient to give away their information.
Even more harmful is the use of ChatGPT in conjunction with OpenAI Codex, another language model, trained on publicly available code from millions of GitHub repositories and most capable in Python, that translates natural language into working programs. From a simple text prompt, bad actors could abuse these tools to produce malware to be injected into a phishing email or message, or other harmful code designed to steal your login information.
Further, there is as yet no definitive way to determine which text was generated by AI tools and which was produced by a human. Unlike deepfakes and other AI-generated images, text does not contain as many identifiable artifacts. AI-powered image generators, for example, can struggle to draw hands properly, so you may notice an extra finger in some images; they can also struggle to render text, or detailed items such as jewellery. AI-generated text has no equivalent tells. Emails such as those described above can be almost indistinguishable from genuine emails from trusted companies, apart from the fact that they ask you to reveal sensitive information.
Phishing scams are a growing concern in today's digital world, with millions of individuals and organizations falling victim to these attacks each year. According to the Anti-Phishing Working Group's quarterly reports on phishing trends, Q1 2022 was "the worst quarter for phishing that APWG has ever observed", surpassing 1,000,000 recorded phishing attacks in a single quarter for the first time. Staggeringly, this record was broken again in Q3 2022, with over 1,270,000 attacks recorded over the three-month period.
Additionally, Verizon's 2021 Data Breach Investigations Report found that around 36% of data breaches involved phishing, though this figure varies greatly between sectors and industries: their 2022 Data Breach Investigations Report highlights that in the mining industry, phishing accounts for over 60% of all data breaches. This illustrates how effective phishing is as a tool for hackers to steal personal information.
While individuals are often the targets of phishing attacks, companies tend to be targeted more frequently because they hold far larger amounts of valuable data. Cisco's 2021 data suggests that financial services firms are the most targeted, receiving 60% more phishing attempts than the next-highest sector.
The advent of AI-powered tools like ChatGPT means the success rate of phishing attempts is likely to increase, and AI-generated phishing messages will become more frequent as the technology advances. According to a study published by MIT Technology Review Insights, 96% of IT and security leaders anticipate that future cyberattacks will be AI-assisted, and in response are considering using AI defensively themselves.
In the words of Elon Musk, ChatGPT is already "scary good". Good enough that, given sufficient training data, tools like ChatGPT could already imitate a specific individual for the purpose of socially engineering their victims. The data needed to train such a model could be taken from social media platforms or other writing attributed to that person. In theory, the model could recognize and recreate the idiosyncrasies we consciously and subconsciously use to identify one another, and bad actors could exploit this to subvert the trust we place in people we know and so improve their phishing strategies. Indeed, in response to Musk's comment, OpenAI's own CEO Sam Altman tweeted: “I agree on being close to dangerously strong AI in the sense of an AI that poses, e.g., a huge cybersecurity risk.”
The risks associated with the abuse of AI-assisted tools are already great enough that some are calling on governments to regulate their use. So far, the technology has advanced far more quickly than the laws that govern it, and there is consensus among professionals that the risk to companies and individuals will only continue to grow as the technology improves, which it is currently doing in leaps and bounds.
ChatGPT was built on OpenAI's GPT-3.5 language model, itself an improvement on their GPT-3 model. GPT-4 is slated for release in the first quarter of 2023. During January, rumours circulated online that this next iteration would be trained on over 5,000 times the data of its predecessor; although Altman has dismissed them, the rumours raise the concern that another substantial improvement to the tool could massively increase the risk to companies and individuals, perhaps beyond what they are currently capable of defending against.
To protect themselves from this type of phishing attack, individuals should be cautious of unsolicited emails or messages that request personal information. Before responding to any such request, they should verify the authenticity of the sender and ensure that they are communicating with a legitimate source.
Also, recognise that these emails typically try to invoke a sense of urgency, for example by threatening that your account will be locked. Urgency is a tell-tale sign of a phishing attempt, though not a guarantee, so carefully review the email's contents and its sender, and treat any embedded links and attachments with caution. Some of these checks can even be automated, as the sketch below illustrates.
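As a minimal, purely illustrative Python sketch of such checks (the urgency phrase list and the simple domain comparison are assumptions chosen for demonstration, not a production-grade filter), the following script parses a raw email and flags missing SPF/DKIM authentication results, urgency language, and links whose hosts do not match the sender's domain:

```python
# Illustrative phishing-indicator scan -- a minimal sketch, not a real mail filter.
# The urgency phrases and the naive domain comparison below are assumptions made
# for demonstration; real providers use far more sophisticated signals.
import email
import re
from email import policy

URGENCY_PHRASES = [
    "account will be locked", "verify your account", "immediate action",
    "act now", "password expires", "suspended",
]

# Captures the host portion of an http(s) URL.
URL_PATTERN = re.compile(r"https?://([^/\s\"'<>]+)", re.IGNORECASE)

def scan_email(raw_bytes: bytes) -> list[str]:
    """Return human-readable warnings for a raw RFC 5322 message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    warnings = []

    # 1. Sender authentication: receiving servers often record SPF/DKIM
    #    verdicts in an Authentication-Results header on delivery.
    auth = (msg["Authentication-Results"] or "").lower()
    if "spf=pass" not in auth or "dkim=pass" not in auth:
        warnings.append("Sender authentication (SPF/DKIM) did not clearly pass.")

    # 2. Urgency language in the body text.
    body_part = msg.get_body(preferencelist=("plain", "html"))
    body = body_part.get_content().lower() if body_part else ""
    for phrase in URGENCY_PHRASES:
        if phrase in body:
            warnings.append(f"Urgency phrase found: '{phrase}'")

    # 3. Embedded links pointing somewhere other than the sender's domain.
    sender_domain = (msg["From"] or "").rsplit("@", 1)[-1].rstrip(">").lower()
    for host in URL_PATTERN.findall(body):
        if sender_domain and not host.endswith(sender_domain):
            warnings.append(f"Link host '{host}' differs from sender domain "
                            f"'{sender_domain}'")
    return warnings

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], "rb") as f:  # path to a saved .eml file
        for warning in scan_email(f.read()):
            print("WARNING:", warning)
```

A clean result from a script like this proves nothing on its own; it simply surfaces the same red flags, authentication, urgency, and mismatched links, that a careful human reviewer should be looking for.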
It is also important for companies and organizations to be aware of this potential threat and to educate their employees on how to spot phishing scams. This can help reduce the risk of sensitive information falling into the hands of hackers who are using tools like ChatGPT for malicious purposes.
In conclusion, ChatGPT is a powerful tool that can be used for a variety of purposes, but it also presents potential dangers if used by individuals with malicious intent. By exercising vigilance and taking steps to protect sensitive information, individuals and organizations can reduce the risk of falling victim to phishing attempts.
Phishing Activity Trends Report – 1Q 2022, Anti-Phishing Working Group
Phishing Activity Trends Report – 3Q 2022, Anti-Phishing Working Group
2021 Data Breach Investigations Report, Verizon
2022 Data Breach Investigations Report, Verizon
Cybersecurity threat trends: phishing, crypto top the list, Cisco
Preparing for AI-enabled cyberattacks, MIT Technology Review Insights