A hacker has developed WormGPT, a malicious chatbot designed to empower cybercriminals with nefarious intentions. Unlike its counterparts, ChatGPT and Google's Bard, WormGPT lacks any safeguards that prevent it from responding to malicious requests.
Email security provider SlashNext evaluated the chatbot and found that the hacker introduced it in March and officially launched it last month. The developer of WormGPT is selling access to the program on a popular hacking forum, giving cybercriminals a ready-made tool for illegal activities and making malicious services easy to sell online.
The developer built WormGPT on GPT-J, an older open-source language model from 2021, and trained it on malware-related data. Screenshots shared by the developer demonstrate the chatbot's capabilities, including generating Python-based malware and providing guidance on crafting malicious attacks.
In the wrong hands, tools like WormGPT become powerful weapons, particularly as OpenAI's ChatGPT and Google's Bard take steps to curb the abuse of large language models (LLMs) for crafting persuasive phishing emails and generating malicious code.
WormGPT AI Interface
SlashNext put WormGPT to the test by examining its ability to craft a convincing email for a business email compromise (BEC) scheme, a form of phishing attack. The results were deeply unsettling: WormGPT generated an email that was not only remarkably persuasive but also strategically cunning, highlighting its potential for sophisticated phishing and BEC attacks.
WormGPT's craftsmanship was evident in its sophisticated language; as the screenshot above shows, the message effectively urges the targeted individual to transfer funds. Notably, the chatbot displayed impeccable spelling and grammar, avoiding the typical red flags associated with phishing emails. This underscores the urgent need for heightened awareness and robust security measures to combat increasingly sophisticated cyber threats.
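To illustrate why such clean, well-written emails are dangerous, consider a minimal, hypothetical defensive heuristic of the kind many basic filters use: scoring a message by counting suspicious urgency and payment phrases. The function name and pattern list below are illustrative assumptions, not from any real product; an LLM-generated BEC email with polished wording can be rephrased to sidestep exactly this sort of check, which is why layered defenses matter.

```python
import re

# Hypothetical, minimal list of common BEC/phishing cues
# (urgency plus payment requests). Purely illustrative.
SUSPICIOUS_PATTERNS = [
    r"\burgent(ly)?\b",
    r"\bwire transfer\b",
    r"\btransfer (the )?funds\b",
    r"\bupdated bank(ing)? details\b",
]

def bec_risk_score(email_body: str) -> int:
    """Count how many suspicious patterns appear in the email body."""
    body = email_body.lower()
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, body))

sample = ("Hi, please process an urgent wire transfer today. "
          "Use the updated bank details attached.")
print(bec_risk_score(sample))  # 3
```

A determined attacker can simply ask the model to rewrite the message without these keywords, so heuristics like this are a baseline at best, not a defense against LLM-assisted phishing.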
The absence of ethical boundaries in WormGPT underscores the threat posed by generative AI. Even novice cybercriminals can launch swift and scalable attacks without requiring extensive technical expertise.
Furthermore, threat actors are promoting "jailbreaks" for ChatGPT: specially engineered prompts and inputs designed to manipulate the tool into disclosing sensitive information, producing inappropriate content, or generating harmful code.
The software's author boasts it as the “arch-nemesis” of the well-known ChatGPT and claims it enables various illegal activities.
The emergence of WormGPT is a stark reminder of the evolving landscape of cybercrime, with malicious actors now leveraging AI technology to enhance their attacks. As cybersecurity professionals strive to stay ahead of these threats, it becomes crucial to fortify defenses, promote responsible AI usage, and collaborate to ensure a safer digital future for all. Vigilance, innovation, and collective effort are essential in combating the ever-evolving challenges posed by such malicious advancements in the cyber realm.