The number of reported harmful incidents involving artificial intelligence rose sharply in 2025, underscoring growing concerns about the risks accompanying rapid AI adoption. Data from the AI Incident Database shows that 346 AI-related incidents were recorded globally last year, spanning deepfake fraud, impersonation scams, and the generation of unsafe or violent content.
The figures suggest that while AI tools are becoming more accessible and widely used, safeguards have struggled to keep pace with their real-world misuse. Researchers at Cybernews, who analysed the database, categorised each incident by type and, where possible, identified the AI tools referenced in reported cases.
Deepfakes dominate AI-related fraud
Deepfake technology emerged as the most common factor behind AI incidents in 2025. Of the 346 recorded cases, 179 involved synthetic voice, video or image impersonation. Victims included politicians, business leaders and public figures, as well as private individuals targeted through highly personalised scams.
Fraud linked to deepfakes accounted for the bulk of financial losses. Of the 132 AI-related fraud cases reported, 107 (around 81%) were driven by deepfake impersonation. Many scams succeeded because the fabricated audio or video closely mimicked trusted individuals, convincing victims that the interaction was genuine.
Several cases highlighted the scale of the problem. In the United States, a woman in Florida reportedly lost USD 15,000 after scammers used a deepfake version of her daughter’s voice to request money. In another incident, a Florida couple lost USD 45,000 after criminals posed as Elon Musk, promoting a fake investment scheme tied to a supposed car giveaway.
Similar patterns were seen in the UK, where a British widow lost approximately GBP 500,000 in a romance scam in which fraudsters impersonated actor Jason Momoa. The case demonstrates how deepfake technology can exploit emotional trust as readily as it exploits financial credibility.
Unsafe and violent AI content raises serious concerns
Although less frequent than fraud, incidents involving violent or unsafe AI-generated content had some of the most severe consequences. The AI Incident Database recorded 37 such cases in 2025, including instances linked to self-harm and violent crime.
As AI chatbots are increasingly used for emotional support or personal advice, several incidents raised questions about their ability to respond safely in high-risk situations. In one widely reported case, 16-year-old Adam Raine died by suicide after interactions with ChatGPT were alleged to have reinforced his distress rather than directing him towards help. OpenAI has denied that its chatbot contributed to the death.
Independent research by Cybernews has shown that several large language models can still generate self-harm-related advice when prompted in specific ways, suggesting that existing content controls and safety guardrails are not always effective.
Beyond self-harm, some AI systems have also been shown to produce violent guidance. In one test, an IT professional reported that a chatbot called Nomi, when manipulated through targeted prompts, generated detailed guidance encouraging murder. While such cases remain relatively rare, they highlight the risks posed when AI systems are deliberately misused.
Popular tools are not immune
Among incidents that named specific AI platforms, ChatGPT appeared most frequently, cited in 35 cases. These ranged from copyright disputes in German courts to concerns around mental health impacts linked to prolonged chatbot use.
Other widely used tools, including Grok, Claude and Gemini, were each referenced in 11 incidents. However, the majority of cases in the database did not specify a particular AI system, suggesting that the true scale of incidents involving major platforms is likely higher.
Cybernews researchers noted that an AI tool’s popularity or reputation does not guarantee safety. Even well-established platforms can be manipulated when users intentionally attempt to bypass safeguards, a finding consistent with broader academic research into large language model vulnerabilities.
What the data indicates
The 2025 incident data points to a central theme: trust. Many of the most damaging cases relied on people trusting what they saw, heard or read, whether through convincing deepfake impersonations or authoritative-sounding chatbot responses.
Deepfake-driven fraud illustrates how quickly AI can erode traditional indicators of authenticity, while incidents involving violent or unsafe content show that failures in AI systems can have consequences far beyond financial loss.
The findings suggest a growing need for stronger technical safeguards, clearer accountability frameworks and greater public awareness as AI systems continue to be integrated into everyday life. Without improved oversight and risk mitigation, experts warn that the number and severity of AI-related incidents could continue to rise in 2026 and beyond.
Growing business risks in an AI-driven environment
As AI becomes more deeply embedded in business operations, the rise in AI-related incidents is likely to increase operational, financial and reputational risks for organisations. Deepfake-enabled fraud can undermine internal controls, exposing companies to payment fraud, executive impersonation and data breaches, while unsafe or manipulated AI outputs can create legal and compliance challenges. Businesses that rely on AI-driven customer interactions also face heightened scrutiny around safety, accuracy and accountability, particularly where automated systems influence decision-making or provide advice. As trust becomes harder to establish in an AI-rich environment, companies may need to invest more heavily in verification mechanisms, employee training and governance frameworks to protect customers, partners and brand credibility.