Ever heard of Mithridates of Pontus? The ruler was so scared of palace plots and assassination attempts that he began dabbling in toxicology. As the myth goes, he took small doses of lethal poisons in a clever concoction to make himself immune to any poison slipped into his food.
If he could do that, why can we not? Especially when the arsenic of AI deep-fakes, the snakebites of audio and video cloning and the venom of AI frauds can lurk anywhere today. Brad Pitt look-alikes, Sunil Mittal’s voice-cloned finance instructions, a CFO on a video call asking an employee for a big money transfer, a family member begging for help in an emergency, or a cop executing a digital arrest—the colours may vary, but the poison stays lethal and grows more dangerous by the day. And right now, it is someone else’s big bowl of pudding.
AI is helping criminals get information—images, whereabouts, identities, and voice samples—and making it super-easy to manipulate and exploit victims.
The Snake Pit of AI Threats
2024 was full of vipers coming out of all kinds of sleeves. We all know how sophisticated, smart, sly, brazen and well-equipped AI-enabled scammers and fraudsters have become. They can clone voices. They can fake faces. They can push the right emotional buttons. All made possible with AI on steroids.
Digital arrest victims could easily have lost more than Rs 2,100 crore in India in the last few months. Almost 64% of respondents admitted to increased fraud losses over the past year, according to a February 2024 study by Forrester Consulting and Experian. These fangs will only grow longer and deadlier in the near future. According to reports, authorities warn that Indians could lose more than Rs 1.2 lakh crore to cyber fraud over the next year. These losses could run as high as 0.7% of GDP.
Deloitte’s Center for Financial Services has also predicted that generative AI could push fraud losses in the United States to USD 40 billion by 2027, up from USD 12.3 billion in 2023. The ready availability of new generative AI tools makes deep-fake videos, fictitious voices, and forged documents cheap and easy for bad actors to produce. By the same estimates, generative AI email fraud losses alone could rise to about USD 11.5 billion by 2027 in an “aggressive” adoption scenario.
“Smartphone OEMs have always adapted to shifting user contexts, and the transition to AI-centric experiences will be no exception.”- PRABHU RAM, VP – Industry Research Group, CyberMedia Research
It is not hard to understand what is happening. AI is helping criminals not just to gather information—images, whereabouts, identities, and voice samples—but also to exploit victims with that material, at a speed and scale these attempts have never enjoyed before.
So, what if we injected some of this potion into our defences, too? Especially as the AI smartphone emerges as the next big shiny thing in mobility. Speed of action matters a lot in fighting cyber-crime. In July 2024, it was shared that the Citizen Financial Cyber Fraud Reporting and Management System under I4C, launched for the immediate reporting of financial frauds and to stop fraudsters from siphoning off funds, had helped save more than Rs 2,400 crore across more than 7.6 lakh complaints. AI can also help with faster alerts, swifter information-sharing, and better spotting of scamsters. If only this bodyguard were built into the devices!
Can the new breed of AI smartphones not have built-in security against AI-generated frauds, scams, cloning and other threats?
Is it Achievable?
As of today, AI smartphones are designed to do some basic stuff like managing calendars, observes Ashish Karnad, Executive Vice President, Hansa Research. “The phones currently have Small Language Models, built to manage these basic tasks, but managing security threats would require Large Language Models (LLMs). This is because many security threats exist outside of the phone’s AI, and unless the AI learns continuously, it cannot manage them.”
Small doses can begin, though.
AI smartphones hold the potential to become the first line of defence against the growing challenges of AI-generated frauds, deep-fakes, scams, and spam, weighs in Prabhu Ram, Vice President – Industry Research Group, CyberMedia Research. “Equipped with real-time threat detection, deep-fake analysis, advanced spam filters, and secure biometric verification, future AI phones will proactively safeguard users while continuously learning from emerging threats.”
“Smartphones today have Small Language Models built to manage basic tasks. Managing security threats would require Large Language Models.”- ASHISH KARNAD, Executive VP, Hansa Research
AI smartphones increasingly incorporate advanced security features to counter growing threats like AI-generated frauds, scams, and deep-fake manipulation, concurs Biswajeet Mahapatra, Principal Analyst at Forrester, as he sketches an optimistic picture. “Key measures include on-device AI detection for phishing and scam prevention, behavioural analysis to flag anomalous activities, and enhanced encryption for secure communication. Countermeasures against AI cloning, such as voiceprint recognition and deep-fake detection, are becoming standard.”
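What the “behavioural analysis to flag anomalous activities” that Mahapatra mentions means in practice can be pictured in a few lines: compare each new action against the user’s own baseline and flag the outliers. Below is a minimal, illustrative sketch in Python; the choice of feature (daily transfer amounts) and the three-sigma threshold are assumptions for illustration, not any vendor’s actual implementation.

```python
# Illustrative sketch of on-device behavioural anomaly flagging:
# an action is suspicious when it sits far outside the user's own history.
# Feature choice and threshold are hypothetical, for illustration only.
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float, sigmas: float = 3.0) -> bool:
    """Return True when new_value deviates more than `sigmas` standard
    deviations from the user's own baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return new_value != mu
    return abs(new_value - mu) > sigmas * sd

# Typical daily transfer amounts (in rupees) vs a sudden large request,
# the kind of "CFO on a video call" transfer the article opens with.
daily_transfers = [1200.0, 900.0, 1500.0, 1100.0, 1300.0]
print(is_anomalous(daily_transfers, 250000.0))  # True: flag for verification
```

The point of the sketch is that the check stays entirely on the device: no raw behavioural data has to leave the phone for this class of defence to work.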
AI can, for now, start by defending the red dots that AI snipers are already marking, such as the One-Time Password (OTP). Karnad explains that the well-known OTP is one of the biggest security threats today. “The OTP is very insecure and vulnerable to SIM-swap frauds, where fraudsters can read your OTP. This can be handled only if one moves to e-SIMs, which are way more secure than a physical SIM card. Some phone and service providers have started trials with e-SIMs, but it will take a couple of years before they gain acceptance on a large scale. The government may also need to mandate and drive this.”
“The cost of advanced features creates accessibility gaps, while malicious actors exploit adversarial AI and zero-day vulnerabilities to outpace defences.”- BISWAJEET MAHAPATRA, Principal Analyst, Forrester
According to Neehar Pathare, Managing Director and CEO of 63SATS, the USD 500-billion smartphone industry is now an AI battleground, with manufacturers racing to offer enterprise-grade features that enhance productivity and security. “Samsung’s Galaxy S24 Ultra integrates AI via its custom Exynos chipset, optimising performance and extending battery life. Meanwhile, Google’s Pixel lineup leverages AI in photography, showcasing the potential of custom Tensor chips,” he explains.
It looks like AI can finally help those fighting crime. But whether it can do so ‘right away’ is a different question. Achieving scale, commercial incentive and impact are different chemical formulas altogether as we look for an AI antidote inside our phones.
Is it Feasible?
A lot of ifs and buts can dilute this promise.
Let us start with the integration of AI-driven security in smartphones. As Mahapatra argues, this part faces challenges such as rapidly evolving threats like deep-fakes and voice cloning, limited processing power in budget devices, and balancing robust security with user privacy and usability. There are also intrinsic issues like fragmented industry standards, vulnerabilities in third-party apps and supply chains, and varying global regulations.
“They add complexity, while the lack of public awareness about AI-generated threats weakens adoption. Economic pressures, such as the cost of advanced features, create accessibility gaps, and malicious actors exploit adversarial AI and zero-day vulnerabilities to outpace defences. Addressing these challenges requires collaboration among stakeholders, continuous innovation, ethical considerations, and public education to build secure and equitable AI smartphone ecosystems,” Mahapatra says.
Pratik Shah, Managing Director – India and SAARC at F5, contends that while built-in security mechanisms are plausible, they face constraints such as limited computational power, battery efficiency, and the constantly changing nature of AI-powered attacks.
Balancing the two sides of the pendulum will be tricky, too, as Pathare illustrates. “Apple stands out with what it calls a privacy-first approach, processing many AI functions on-device to limit cloud data transmission. Yet, privacy concerns linger. Apple recently agreed to pay USD 95 million to settle a proposed class action lawsuit claiming that its voice-activated Siri assistant violated users’ privacy. Mobile device owners complained that Apple routinely recorded their private conversations after unintentionally activating Siri.”
Privacy is clearly the artery that should never be touched while injecting any antidote against AI.
Theriac, not Snake Oil
Despite practical pushback and some valid questions, hope is still around the corner. As Karnad augurs, this capability can be made possible if the phones pair with a cloud that offers LLM processing. “The learning of the AI model happens on the cloud, and the implementation of the learning happens locally on the phone,” he says, adding, “The LLMs on the cloud can be trained to manage frauds related to text, voice, images, and video, while the phones can sense light and RF signals, and hence, they could potentially be used effectively. For instance, if there is a phishing email, the AI, which is connected to the cloud through the phone, can read the email, connect with the cloud to check for fraud, process the information within the phone and inform the user that it is a fraud. And all this could happen in real-time.”
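Karnad’s split, learning in the cloud and enforcement on the phone, can be sketched in a few lines. The Python below is only an illustration of the flow he describes: the endpoint URL, the response field, the feature summary and the risk threshold are all hypothetical placeholders, not any vendor’s actual API.

```python
# Minimal sketch of a hybrid phishing check: the phone does lightweight
# preprocessing locally and defers the heavy LLM scoring to the cloud.
# All names below (endpoint, response schema, threshold) are hypothetical.
import json
import urllib.request

CLOUD_FRAUD_API = "https://fraud-check.example.com/v1/score"  # placeholder
RISK_THRESHOLD = 0.8  # assumed cut-off for warning the user

def extract_features(email_text: str, sender: str) -> dict:
    """On-device preprocessing: keep the raw mail local, send only a summary."""
    suspicious_words = ["otp", "urgent", "verify", "account blocked"]
    return {
        "sender_domain": sender.split("@")[-1].lower(),
        "length": len(email_text),
        "flagged_terms": [w for w in suspicious_words if w in email_text.lower()],
    }

def cloud_llm_risk_score(features: dict) -> float:
    """Ask the cloud-hosted model (trained on fraud patterns) for a risk score."""
    payload = json.dumps({"features": features}).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_FRAUD_API, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:  # near-real-time budget
        return json.load(resp)["risk_score"]  # assumed response field

def check_email(email_text: str, sender: str) -> str:
    features = extract_features(email_text, sender)
    try:
        score = cloud_llm_risk_score(features)
    except OSError:
        # Offline fallback: crude on-device heuristic only.
        score = min(1.0, 0.3 * len(features["flagged_terms"]))
    return "WARN: likely phishing" if score >= RISK_THRESHOLD else "OK"

if __name__ == "__main__":
    print(check_email("URGENT: share your OTP to verify your account", "x@scam.biz"))
```

Note the design choice the sketch encodes: only a feature summary leaves the device, the verdict is rendered locally, and the phone degrades gracefully to an on-device heuristic when the cloud is unreachable.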
Cloud can be a good addition to this concoction. Shah seconds that idea: “We believe in a multi-layered approach combining device-level and cloud-based security. Devices often rely on cloud platforms for intensive threat detection, where scalable AI and machine learning models analyse and mitigate risks in real-time.”
“The USD 500-billion smartphone industry is now an AI battleground, with manufacturers offering enterprise-grade features to enhance productivity and security.”- NEEHAR PATHARE, Managing Director & CEO, 63SATS
Counterpoint Research also predicts that a wide range of GenAI use cases will be available on flagship and mid-range smartphones, facilitated by OS upgrades and new model launches. What is interesting is that ecosystem collaboration, fuelled by partnerships between smartphone brands, global LLM providers and SoC vendors, is expected to accelerate the realisation and adoption of GenAI in smartphones. As 5G networks facilitate cloud-based AI, flagship smartphones are also seeing a surge in on-device AI computing power, with current SoCs boasting capabilities exceeding 40 TOPS (trillions of operations per second). Remember that this local processing power allows for faster and more efficient execution of AI tasks.
As AI evolves, integrated security features will become more robust, Pathare echoes. Mahapatra highlights general security improvements, including hardware-level AI chips for local data processing, robust biometric authentication, and AI-powered app scrutiny. “Companies like Apple, Google, and Samsung lead the way with features like on-device processing, spam detection, and anomaly detection, while ongoing innovations such as real-time fraud detection and deep-fake warnings aim to enhance smartphone security further,” he says.
However, Shah reminds us that built-in security alone is not a standalone solution. “We are advocates for zero-trust architecture, ensuring security is seamlessly integrated into every layer of your digital infrastructure, from applications to networks, to protect against evolving threats. Collaboration among device manufacturers, security providers, and developers is key to addressing evolving AI-generated threats proactively.”
Ram jumps in, sprinkling reason on all the hope: “Smartphone OEMs have always adapted to shifting user contexts, and the transition to AI-centric experiences will be no exception. As ubiquitous smartphones evolve into AI-enabled devices, we will witness a shift in user behaviour from passive searching to active seeking, driven by AI’s role in redefining user experiences.”
Or as some theories around Mithridates suggest, he never drank any toxin. He just faked it to ward off his enemies.
The Last Straw
No matter how we sip it, the paradox of immunity cannot be overlooked. Several accounts exist of how Mithridates died, but most dovetail on the surmise that when he realised the enemy was near and defeat unavoidable, he tried to poison himself, his wife, and his daughters. Everyone died except Mithridates; the poison did not work on him. Eventually, he died by the sword, and whether it belonged to a trusted soldier or a dreaded foe remains a debate. But he seemed to have nailed the bitter truth (Appian, The Mithridatic Wars, XVI.111) when he complained:
“Although I have kept watch and ward against all the poisons that a man takes with his food, I have not provided against that most deadly of all poisons, which is to be found in every king’s house, the faithlessness of army, children, and friends.”
AI smartphones may or may not be able to grant the peace of mind that people desire. But the biggest poison remains the same to this day—trusting those who should not be trusted. There is no antidote for recklessness. For now, as people drink more and more AI, remembering that should help.
By Pratima Harigunani
pratimah@cybermedia.co.in