Will India’s data protection rules bridge gaps?

India’s Data Protection Rules aim to balance privacy, AI, and innovation by addressing algorithmic accountability and fostering trust in the digital ecosystem.

Voice&Data Bureau

By Dr Jaijit Bhattacharya

Implementing the fundamental Right to Privacy in the digital age has been a long-awaited legislative milestone. The recently announced draft Digital Personal Data Protection Rules, framed under India’s Digital Personal Data Protection (DPDP) Act, represent a crucial step forward.

Released by the Ministry of Electronics and Information Technology for stakeholder feedback, the draft data privacy rules aim to anchor trust and transparency in India’s growing digital ecosystem. However, striking a balance between safeguarding privacy and fostering innovation remains a critical challenge as businesses navigate compliance responsibilities and new opportunities to create a secure and innovative digital environment.

A Step Towards a Balanced Approach

The draft Rules establish a robust framework for safeguarding personal data while promoting trust in digital systems. The framework aligns with global standards by emphasising informed consent and individual rights, ensuring individuals have greater control over their data. A notable component of the draft Rules is the set of compliance requirements imposed on Significant Data Fiduciaries (SDFs).

As defined under the DPDP Act, SDFs are entities that handle high volumes of sensitive data, the processing of which poses heightened risks to individual rights, national security, or public order. The classification of SDFs acknowledges their pivotal role in critical sectors such as AI, telecommunications, and healthcare, where vast volumes of sensitive personal data are processed. While these measures provide robust safeguards for data privacy, they introduce significant compliance challenges, particularly for businesses reliant on cutting-edge technologies.

Understanding Key Compliance Obligations

SDFs are subject to rigorous compliance requirements, including annual Data Protection Impact Assessments, regular audits, and mandatory reporting to the Data Protection Board. These measures aim to ensure accountability and transparency in handling personal data. 

Moreover, SDFs must ensure their algorithmic software does not pose a risk to data principals’ rights. This is particularly challenging for industries relying on AI systems, which often involve complex, self-learning algorithms. While these requirements underscore the importance of securing digital ecosystems, they also raise concerns about operational feasibility.

Navigating Challenges of Algorithmic Accountability

Algorithmic accountability is one of the most nuanced aspects of the draft Rules. However, terms such as “not likely to pose a risk” and “due diligence” leave significant room for interpretation, creating compliance uncertainty for businesses. AI systems are inherently complex, and their outputs may not always be fully predictable. Mandating SDFs to guarantee the absence of risk could discourage innovation or lead to excessive caution in deploying AI-driven solutions.

Given these challenges, an alternative approach to fostering accountability without stifling innovation is empowering data principals through informed consent. Policymakers could take inspiration from public initiatives like the government’s Digital Fraud campaign, which raises awareness about online fraud prevention. Similar efforts to educate individuals about their data rights and informed consent mechanisms could enhance transparency and accountability while allowing businesses the flexibility to innovate responsibly.

Balancing Data Protection Principles with AI Functionality

Data protection principles such as purpose limitation and data minimisation often conflict with AI systems' operational needs. AI thrives on large datasets and iterative learning processes, creating tensions with privacy-centric rules.

The draft Rules empower individuals to withdraw consent at any time, making it mandatory for data fiduciaries to delete personal data upon request. While this ensures privacy rights, it poses a challenge for AI systems: once a model has been trained, the contribution of any individual’s data cannot easily be removed from its learned parameters. If businesses are required to erase or retrain models following consent withdrawal, it could disrupt the iterative learning processes vital to AI development.

To address this dilemma, policymakers could consider exploring safe harbour provisions. Such provisions would permit data fiduciaries to retain trained models provided the original data is anonymised and rendered non-identifiable. This approach would respect individual privacy rights while accommodating the operational realities of AI-driven businesses, ensuring that innovation and privacy can coexist.

The journey toward a secure and innovative digital future is complex, but the draft Rules offer a promising foundation for trust and transparency in India’s digital ecosystem. However, challenges such as the ambiguities surrounding algorithmic accountability and the feasibility of compliance requirements must be addressed to ensure that innovation is not inadvertently stifled.

A regulatory framework that balances privacy and innovation is essential to realising the draft Rules’ full potential. With thoughtful recalibration and active stakeholder engagement, India can establish a benchmark regulatory framework that upholds individual rights, fosters innovation, and propels the nation toward leadership in the global digital economy.

The author is the President of the Centre for Digital Economy Policy (C-DEP).