India’s deepfake dilemma

India faces an unprecedented challenge as morphed videos of public figures blur the truth, potentially swaying the fate of the world’s largest democracy.

VoicenData Bureau


On 18 April, a First Information Report (FIR) was filed over a video purportedly featuring Bollywood actor Aamir Khan. Within a week, a second FIR was lodged over a different video attributed to fellow Bollywood actor Ranveer Singh. Of the many things wrong with this situation, the biggest is that neither Khan nor Singh had ever said what they were seen saying in these alleged videos.

Following up on these FIRs, on 30 April, the ruling Bharatiya Janata Party (BJP) filed a complaint with the Election Commission of India about AI-morphed videos of its leaders. These videos, referred to today as 'deepfakes', could have a massive consequence: influencing the choice of the next central government and prime minister of the world's largest democracy and the fifth-largest global economy.

With voice cloning on the rise, there are concerns about multimodal deepfakes—ones that use more than one type of data to create convincing fake content.


The Deepfake Debacle

Deepfakes are not a new phenomenon, but their presence in the Indian political landscape is a recent development. Over the past five years, the growing sophistication of artificial intelligence, coupled with the increasing availability of AI-powered tools, has raised concerns about the potential misuse of the technology. As with any new technology, there is potential for both good and bad. With deepfakes, the line between the two is blurred.

These videos come at a time when India is voting to decide its next central government. The outcome carries major consequences, for it will decide who gets to lead the world's largest democracy and its economy for the next five years. In such times, opinions against the incumbent government apparently voiced by two prominent public figures, Khan and Singh, could swing thousands of opinions, if not more.


While public figures' involvement in political campaigning is common, the nature of the reported videos is the concern. They were not originals but AI-manipulated versions stitched together from disjointed footage. This is the power of deepfakes: a ruling party can use them to create morphed and edited content that helps it stay in power, and an opposition can use them to challenge the established order. The potential to sway public opinion is immense.

As many commentators have underlined, what is alarming today is how affordable and accessible deepfakes have become, thanks to the abundance of personal and identifiable data such as voices and photographs available online, inexpensive internet connectivity, and the force-multiplier effect that social media can have on 'viral' content. Even without technical know-how, anyone can hire one of many 'consultants' who will help create content that is, at its core, illicit.



Why Does it Matter Today?

At the heart of it all is data accessibility: lots of data, which is just what AI needs. In oversimplified terms, an AI algorithm is a mindless executioner, capable of producing precisely what it is asked to, sans conscience or ethics, as long as there is enough data to produce what is asked of it.

As the seven-phase Lok Sabha elections progress, there is no dearth of publicly available data on politicians and other notable public figures. Since 2014, every general election has involved increasing volumes of cutting-edge technology, leaving behind the days of television, door-to-door, and newspaper-reliant campaigning.


Yet, this year marks India's first election of the AI era, in which capable machines play a far bigger role than one may imagine in shaping the public narrative. With AI, the hundreds of hours of publicly available video campaigning, speeches, and photo opportunities could swing either way, and be used by any political party to influence any opinion.

The availability of platforms such as X (formerly Twitter) and Meta's WhatsApp and Instagram makes things worse: cumulatively, nearly 1 billion users access these platforms. Given the erratic nature of social media, manipulated content on such platforms can easily fly under the fact-checking radar.

The availability of tools such as X, WhatsApp and Instagram makes the issue of deepfakes worse—cumulatively, nearly 1 billion users access these platforms.


Are there Answers?

What complicates the matter is that there are no easy answers. Unlike cryptocurrencies, AI cannot simply be 'banned'. Today, AI powers some of India's most vital banking operations, cyber security standards, secure file storage and communications, and a vast share of the nation's economy.

The use of deepfakes is further complicated by the fact that they do not depend on any one type or subset of technology. They can be created using tools that are rarely publicised as well as ones that are simple and public-facing.


The next complication lies in identifying who is to blame. The creator of a deepfake is often not the person who posts the content. The origins of such content are sometimes obscure, and it circulates through group messaging channels on WhatsApp or its rival, Telegram. Moreover, companies enjoy safe harbour protection as social media intermediaries: in simpler terms, social platforms, when pulled up, state that they are not the creators of the content but mere middlemen in a distribution network.

Since intermediaries have a legitimate defence here, the onus lies on identifying individuals who could, under Sections 416, 503, and 504 of the Indian Penal Code, be punished for intimidation, harassment, and impersonation with fraudulent intent. However, technical complications often make it challenging to trace problematic content to its source.

What is unfortunate, however, is that each of these 'answers' to the issue of deepfakes manipulating our elections comes with caveats. With voice cloning scams on the rise, there are concerns about multimodal deepfakes: ones that use more than one type of data to create convincing fake content and can circulate across both telephony and the internet.

The AI algorithm is a mindless executioner—capable of producing precisely what it is asked to, as long as there is enough data to get what is needed.

India has already tried to rein in AI through a central government advisory that asked companies to seek approval from the Centre for every AI model in operation and to disclose fortnightly reports on how such models are moderated and controlled. The advisory was later toned down, dropping the proposed consent mechanism, on the belief that so drastic a step would be bad for the overall economy. In its absence, however, there is no clear route to containing the threat of deepfakes. This leaves us with a unique election year in which the veracity of content shared in the name of popular public figures will be challenging to establish.

In the future, this election could play a significant role in helping policy veterans understand how these issues can be mitigated in the long run.

By Vernika Awal