
Since OpenAI launched ChatGPT at the end of November 2022, commentators on all sides have debated the impact AI-driven content creation will have, particularly in the realm of cybersecurity. Many researchers worry that generative AI solutions will democratise cybercrime.
With ChatGPT, any user can enter a prompt and generate malicious code or a convincing phishing email without technical expertise or coding knowledge.
While security teams can also leverage ChatGPT for defensive purposes, such as testing code, by lowering the barrier to entry for cyberattacks the tool has significantly complicated the threat landscape.
Although AI can be used in cyberattacks, enabling criminals to act more quickly and efficiently, it can also underpin antivirus and security solutions, which in turn learn attackers' methods in order to counter them. ChatGPT in particular can enable malicious actors to improve phishing and business email compromise (BEC) attacks, and to generate more credible-looking content for fake news or for persuading an insider to cooperate.
An AI cat-and-mouse battle is already under way, but while defensive tools race to keep up with these powerful new campaigns, awareness and education still serve as the best form of vigilance for staff.