AI now behind majority of spam emails, new research finds

By April 2025, 51% of spam emails were generated using AI rather than written by humans, indicating a significant shift towards automation in low-effort cyberattacks.

Voice&Data Bureau
[Threat Spotlight chart: AI-generated email attacks over time]

Email scammers are increasingly turning to artificial intelligence (AI) tools to launch large-scale spam campaigns rather than sophisticated, targeted attacks, according to new research conducted by Columbia University and the University of Chicago. The study, which leverages threat detection data from cybersecurity company Barracuda, found that 51% of spam emails are now AI-generated, compared with just 14% of business email compromise (BEC) messages. Nonetheless, the use of AI in both categories is rising steadily.


The researchers analysed a vast dataset of unsolicited and malicious emails provided by Barracuda, spanning from February 2022 to April 2025. 

Key findings

Their findings highlight several key trends:

- By April 2025, 51% of spam emails were generated using AI rather than written by humans, indicating a significant shift towards automation in low-effort cyberattacks.
- During the same period, 14% of business email compromise (BEC) attacks were also AI-generated.
- A steady increase in AI-generated email content has been observed since the public release of ChatGPT in November 2022.
- AI-written emails are typically more formal and polished, often employing sophisticated language and exhibiting fewer grammatical errors than those written by humans.
- Cybercriminals appear to be leveraging AI to experiment with different wording and phrasing in order to evade security filters and increase the likelihood that recipients will click on malicious links.
- However, while AI is improving the quality and effectiveness of the content, attackers have yet to significantly change their overall tactics.


“Determining whether or how AI has been used in cyberattacks is a difficult challenge, since we can only see the attack, but not how it was generated,” explained Asaf Cidon, Associate Professor of Electrical Engineering and Computer Science at Columbia University. “Our analysis suggests that, by April 2025, the majority of spam emails were not written by humans but by AI. In the case of more sophisticated attacks like BEC, which require content tailored to the specific context of a victim, most emails are still written by humans. However, the proportion generated by AI is steadily increasing.”


The researchers identified AI-generated content by establishing a baseline: emails sent before the release of ChatGPT in November 2022 were assumed to be human-written. This allowed them to train detection models capable of identifying whether subsequent emails were likely created using AI.
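The baseline idea can be illustrated with a toy sketch. This is not the researchers' actual model; the corpus, the formality heuristic, and the threshold margin below are all hypothetical, purely to show how a pre-ChatGPT human baseline might anchor a detector.

```python
# Minimal illustrative sketch (NOT Barracuda's or the researchers' model):
# treat emails dated before ChatGPT's public release (Nov 2022) as a
# human-written baseline, then flag later emails whose style diverges.
# All data, features, and thresholds here are hypothetical.
from datetime import date

CHATGPT_RELEASE = date(2022, 11, 30)

# Toy corpus of (date, text) pairs; pre-release items form the baseline.
emails = [
    (date(2022, 5, 1), "hey click here fast!! free money no joke"),
    (date(2022, 8, 3), "u won a prize claim now b4 it expires"),
    (date(2023, 6, 9), "Dear valued customer, we are pleased to inform "
                       "you that your account has been selected."),
]

def formality_score(text: str) -> float:
    """Crude stylistic proxy: share of non-slang tokens, weighted by
    average word length (AI-written spam tends to read more polished)."""
    slang = {"u", "b4", "lol"}
    tokens = text.split()
    if not tokens:
        return 0.0
    clean = sum(1 for t in tokens if t.lower().strip(".,") not in slang)
    avg_len = sum(len(t) for t in tokens) / len(tokens)
    return (clean / len(tokens)) * min(avg_len / 6.0, 1.0)

# Baseline: average formality of pre-release (assumed human) emails.
baseline = [formality_score(t) for d, t in emails if d < CHATGPT_RELEASE]
threshold = sum(baseline) / len(baseline) + 0.2  # hypothetical margin

def likely_ai(text: str) -> bool:
    """Flag a post-cutoff email as likely AI-generated if its style
    sits well above the human baseline."""
    return formality_score(text) > threshold
```

In practice the study trained detection models on far richer features than this single heuristic, but the structure is the same: a corpus that predates generative AI supplies the "definitely human" labels against which later emails are scored.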

Commenting on the implications for India, Parag Khurana, Country Manager for India at Barracuda Networks, said, "Cybercriminals are already using AI to automate and scale email attacks, which makes it critical for Indian organisations to gain deeper visibility into evolving threats and adopt a platform-based approach to defend against them. At Barracuda, we're seeing growing demand for solutions that offer multi-layered protection alongside continuous threat detection and response. By integrating threat intelligence across email, data, and network security, businesses can respond to AI-generated cyberattacks more quickly and with greater precision."

To stay ahead of these evolving email threats, Barracuda recommends the deployment of advanced, AI-powered, multi-layered email protection. In addition, organisations should invest in cybersecurity awareness training to ensure employees can recognise the latest tactics and threats used by cybercriminals.


The research, published as part of Barracuda’s Threat Spotlight series, was authored by Wei Heo, with contributions from Van Tran, Vincent Rideout, Zixi Wang, Anmei Dasbach-Prisk, M. H. Afifi, Junfeng Yang, and Professors Ethan Katz-Bassett, Grant Ho, and Asaf Cidon.