Watch out - ChatGPT is being used to create malware
The infamous AI is making malware easy
The world’s most popular chatbot, ChatGPT, is having its powers harnessed by threat actors to create new strains of malware.
Cybersecurity firm WithSecure has confirmed that it found examples of malware created by the notorious AI writer in the wild. What makes ChatGPT particularly dangerous is that it can generate countless variations of malware, which makes them difficult to detect.
Bad actors can simply give ChatGPT examples of existing malware code, and instruct it to make new strains based on them, making it possible to perpetuate malware without requiring nearly the same level of time, effort and expertise as before.
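The detection problem here is easiest to see with signature-based scanning, which flags files by matching known hashes. A minimal sketch, using harmless stand-in code rather than real malware, of why even a trivial machine-generated mutation produces a file that hash signatures treat as brand new:

```python
import hashlib

# Two functionally identical snippets that differ only in a variable name --
# the kind of trivial mutation an LLM can churn out endlessly on request.
variant_a = b"total = 0\nfor n in range(10):\n    total += n\nprint(total)\n"
variant_b = b"acc = 0\nfor n in range(10):\n    acc += n\nprint(acc)\n"

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

# A scanner matching on file hashes sees two unrelated files,
# even though the programs behave identically.
print(hash_a == hash_b)  # False
```

This is why defenders increasingly rely on behavioral analysis rather than static signatures: the behavior of a variant family stays the same even as its fingerprints change.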
For good and for evil
The news comes as talk of regulating AI abounds, to prevent it from being used for malicious purposes. There was essentially no regulation governing ChatGPT’s use when it launched to a frenzy in November last year, and within a month, it was already hijacked to write malicious emails and files.
There are certain safeguards in place internally within the model that are meant to stop nefarious prompts from being carried out, but there are ways threat actors can bypass these.
Juhani Hintikka, CEO at WithSecure, told Infosecurity that AI has usually been used by cybersecurity defenders to find and weed out malware created manually by threat actors.
Now, however, with powerful AI tools like ChatGPT freely available, the tables are turning. Just as remote access tools have been repurposed for illicit ends, so too now is AI.
Tim West, head of threat intelligence at WithSecure, added that “ChatGPT will support software engineering for good and bad and it is an enabler and lowers the barrier for entry for the threat actors to develop malware.”
And while the phishing emails ChatGPT can pen are usually spotted by humans, as LLMs become more advanced it may become harder to avoid falling for such scams in the near future, according to Hintikka.
What’s more, with the success of ransomware attacks increasing at a worrying rate, threat actors are reinvesting and becoming more organized, expanding operations by outsourcing and further developing their understanding of AI to launch more successful attacks.
Hintikka concluded that, looking at the cybersecurity landscape ahead, “This will be a game of good AI versus bad AI.”
Lewis Maddison is a Reviews Writer for TechRadar. He previously worked as a Staff Writer for our business section, TechRadar Pro, where he had experience with productivity-enhancing hardware, ranging from keyboards to standing desks. His area of expertise lies in computer peripherals and audio hardware, having spent over a decade exploring the murky depths of both PC building and music production. He also revels in picking up on the finest details and niggles that ultimately make a big difference to the user experience.