Every breakthrough brings a shadow. The printing press gave us literature and propaganda. The internet ushered in connectivity, but also introduced cybercrime. Now, artificial intelligence has gifted us ChatGPT—and its malicious counterpart, WormGPT.
While millions have marvelled at ChatGPT’s ability to write poetry and solve problems, cybercriminals were developing their own AI assistant. WormGPT is a chilling example of what happens when artificial intelligence meets criminal intent. Promoted on Telegram channels and dark web forums as a “jailbreak-free, unfiltered” alternative to mainstream AI, WormGPT has become the tool of choice for AI-powered cybercrime.
The Cybercriminal’s AI Assistant
Unlike ChatGPT, which refuses requests to write malicious code or provide dangerous instructions, WormGPT has no such constraints. For a monthly fee of USD 60–120—roughly the price of a gym membership—cybercriminals can access an AI model that offers capabilities including the creation of advanced malware such as keyloggers, ransomware, and Trojans; the generation of compelling phishing emails designed to deceive even cautious users; and the development of system exploits and reverse shells for infiltration.
It can also analyse code in real time to spot vulnerabilities, scrape websites to extract data or deface pages, and help attackers bypass antivirus detection using advanced obfuscation techniques. In effect, it is a tireless criminal accomplice: available 24/7, requiring no share of the profits, and accepting payment in cryptocurrency.
Built for Disruption
WormGPT is built on GPT-J, a six-billion-parameter language model that lacks the ethical guardrails embedded in commercial platforms. Its developers fine-tuned it with a customised training dataset composed of malware samples, jailbreak prompts, red teaming manuals, and social engineering scripts. The result: a model with ChatGPT-like fluency, but none of the moral filters.
The service is hosted anonymously on low-cost VPS servers and GPU rental platforms, with payments processed in cryptocurrencies such as Bitcoin and Monero. Like legitimate SaaS products, WormGPT is released in successive versions, each promising better features and performance. Unlike enterprise software, however, it is not built to improve customer service; it is engineered to scale criminal operations.
India in the Crosshairs
India, with its fast-growing digital economy and expanding online population, has become a prime target for AI-powered cybercrime. The scale of attacks is staggering. In the first half of 2024, India witnessed 135,173 financial phishing attacks—a 175% increase over the previous year.
The complexity of these attacks is even more concerning. In early 2024, a major Indian bank was compromised through an AI-assisted phishing campaign that leveraged natural language processing to mimic internal communications. Attackers studied social media posts, LinkedIn profiles, and historical emails to craft messages that were so authentic that several executives inadvertently disclosed their credentials, compromising sensitive databases and transaction logs.
New data reveals that 80% of phishing attacks in India now use AI-generated content. Between January and May 2025, cybercriminals stole the equivalent of USD 112 million in a single state. These attacks demonstrate a deep understanding of local languages, cultural contexts, and ongoing events, indicating that AI is now being utilised not only to generate text but also to conduct research at scale.
QR code phishing is also on the rise. In India’s mobile-first payment ecosystem, attackers have been distributing fake posters and WhatsApp messages that redirect users to fraudulent UPI portals, making scams even more effective.
The Persuasion Engine
Security researchers who studied WormGPT describe its outputs as “remarkably persuasive”—especially in business email compromise (BEC) attacks. The AI can generate emails that mirror corporate tone, structure, and urgency with uncanny precision. What once required years of social engineering expertise can now be executed by a novice, empowered by an algorithm. It is like handing someone who can barely compose a text message the ability to write a deceptive email that could fool a Fortune 500 executive. Cybercrime has never been so accessible—or scalable.
The Pandora’s Box Problem
The FBI and global cybersecurity bodies have raised repeated alarms about the risks posed by AI tools like WormGPT. The bigger issue, however, lies in what the WormGPT saga reveals: once malicious AI proves viable, its replication becomes inevitable.
After mainstream media attention forced the original developer to shut down the project, the vacuum was swiftly filled by others. The model endures—rebranded, redistributed, and continually improved. Variants like FraudGPT and DarkBERT have emerged, creating an entire ecosystem of AI-enabled attack tools. The genie is not just out of the bottle—it is learning, iterating, and teaching others to do the same.
The Evolving Cat-and-Mouse Game
Cybersecurity now demands a new mindset. Traditional defences—designed to counter predictable, human-generated threats—are struggling against AI-driven attacks that can generate thousands of variants in minutes. Signature-based detection and static code analysis tools are becoming obsolete in the face of adversarial AI.
To counter this, cybersecurity firms are deploying AI-powered defences. Machine learning algorithms are now being used to detect unusual behaviour, predict vulnerabilities, and counter evolving threats. In this digital arms race, humans are increasingly relegated to the role of strategists, coaching AI agents on both sides.
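To make the defensive idea concrete, the sketch below shows one common pattern: training an unsupervised anomaly detector (scikit-learn's IsolationForest) on simple per-account email-activity features and flagging accounts whose behaviour deviates sharply from the norm. The feature names, synthetic data, and thresholds are illustrative assumptions only, not a description of any particular vendor's product.

```python
# Hypothetical sketch: flagging unusual email-sending behaviour with an
# unsupervised anomaly detector. Feature choices are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" accounts: [messages_per_hour, link_ratio, new_recipients]
normal = np.column_stack([
    rng.normal(8, 2, 500),       # typical sending rate
    rng.normal(0.2, 0.05, 500),  # share of messages containing links
    rng.normal(1, 0.5, 500),     # new recipients contacted per hour
])

# A few compromised-looking accounts: bursts of link-heavy mail to new contacts
suspicious = np.array([
    [60, 0.9, 25],
    [45, 0.8, 40],
])

X = np.vstack([normal, suspicious])

# Train the detector; "contamination" is the assumed share of anomalies
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

labels = model.predict(X)            # +1 = normal, -1 = anomalous
flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} accounts for review: indices {flagged.tolist()}")
```

In practice, such detectors do not act alone; they feed alert queues that human analysts triage and tune, which is exactly the strategist role described above.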
The Uncomfortable Truth
The ongoing evolution of WormGPT reveals an uncomfortable truth about artificial intelligence—the same capabilities that make AI revolutionary also render it potentially dangerous. The technology does not differentiate between assisting a student in writing an essay and aiding a criminal in crafting malware. It lacks judgment and does not moralise; it simply optimises for the task it is assigned.
This incident also demonstrates how swiftly criminal markets adapt to new technologies. Within months of ChatGPT’s public release, underground communities had developed, marketed, and monetised their versions optimised for illegal activities. As legitimate AI models advance, these criminal variants continue to evolve in tandem, often incorporating new techniques and capabilities shortly after they emerge in mainstream AI research.
As AI capabilities continue to grow, the WormGPT phenomenon serves as a stark reminder that regulation and innovation must move in tandem. Criminal AI tools will become more powerful in the coming months—but so will defence models and frameworks.
The critical question is no longer whether AI will be weaponised—it already is. The challenge now is whether defenders can keep pace in a landscape where the threat surface is growing, and the speed of adaptation is accelerating.
In this high-stakes game of digital chess, WormGPT was not the final move. It was merely the opening gambit.
The author is the CEO and Managing Director of eScan.