Author: Akash Avthale

Balancing Artificial Intelligence and Privacy: Legal Challenges of AI in the Data Protection Era in India
Artificial Intelligence (AI) has ushered society into a digital era, revolutionizing industries and enhancing decision-making processes across numerous fields. However, the rapid advancement and widespread adoption of AI technologies have given rise to significant privacy and data protection concerns. AI has brought major gains in creativity, efficiency, and economic growth, but it has also raised substantial questions about the protection of personal data, ethics, and regulation.
“Data is the new science. Big Data holds the answers.” – Pat Gelsinger. In other words, protecting data and balancing privacy with governance are indispensable in this new era. In today's digital world, people routinely store their data on electronic devices, and the volume of personal data held online is growing exponentially.
The challenge facing governments and policymakers is striking a delicate balance between harnessing the power of AI for societal benefit and safeguarding the fundamental right to privacy.
Privacy Concerns in AI
AI is growing fast, and with that, privacy concerns are growing bigger.
Large amounts of personal information are often collected, utilized, and stored within an AI system. Such a system might even create new, or exacerbate existing, privacy problems.
Data breach
The more data a company collects for artificial intelligence, the more tempting a target that company becomes for hackers. In a data breach, people’s private information can be accessed and used for identity theft or fraud. With cyber-attacks becoming more sophisticated, and with much AI data stored in a single location in the cloud, the risk is even higher. Data is the very foundation upon which the functionality and advancement of AI rest: AI systems demand access to rich personal data in order to learn, adapt, and make forecasts. This interdependence creates significant privacy issues, because the collection and processing of personal data often occur without the formal consent or understanding of the individuals involved.
Case study: In October 2020, BigBasket, one of India’s largest online grocery platforms, suffered a massive data breach affecting approximately 20 million users. The breach compromised sensitive information including, but not limited to, email addresses, hashed passwords, phone numbers, addresses, birthdates, and even users’ purchase histories. The stolen data was later found for sale on the dark web. The breach occurred due to vulnerabilities in BigBasket’s systems, reflecting poor database security and encryption practices. It sparked serious concerns over the level of security that e-commerce companies provide for consumer data and intensified calls for stricter cybersecurity standards in the country.
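As a purely illustrative sketch (unrelated to BigBasket’s actual systems, and using Python’s standard library only), the snippet below shows one widely recommended safeguard against the kind of weak password handling described above: storing passwords only as salted, slow hashes (here PBKDF2-HMAC-SHA256) rather than as plain text or fast, unsalted hashes.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique salt defeats precomputed (rainbow-table) attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the digest for a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored_digest)

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False
```

The design point is that even if such a database is stolen, slow, salted hashes make recovering the original passwords far more expensive than cracking plain or weakly hashed values.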
Data de-anonymization
Many organizations try to protect privacy by anonymizing data before analysis. However, modern AI techniques can often re-identify individuals by linking anonymized data with other available information. This process, called data de-anonymization, undermines traditional privacy protection methods and raises doubts about how effective current anonymization techniques really are.
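As a simplified, hypothetical illustration (the names and records below are invented), the following Python sketch shows how such a linkage attack can work: two datasets that separately look harmless are joined on shared quasi-identifiers such as postcode, birth year, and gender, re-identifying the people behind the “anonymized” records.

```python
# "Anonymized" records: names removed, but quasi-identifiers retained.
anonymized_health_records = [
    {"postcode": "400001", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "110011", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]

# A separate, publicly available dataset (e.g., a voter or loyalty-card list).
public_records = [
    {"name": "A. Sharma", "postcode": "400001", "birth_year": 1985, "gender": "F"},
    {"name": "R. Verma", "postcode": "110011", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def reidentify(anon_rows, public_rows):
    """Join the two datasets on quasi-identifiers to recover likely identities."""
    matches = []
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        for pub in public_rows:
            if tuple(pub[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": pub["name"], "diagnosis": anon["diagnosis"]})
    return matches

print(reidentify(anonymized_health_records, public_records))
# [{'name': 'A. Sharma', 'diagnosis': 'diabetes'}, {'name': 'R. Verma', 'diagnosis': 'asthma'}]
```

Even a handful of coarse attributes can uniquely identify many individuals in a population, which is why anonymization on its own is rarely treated as a sufficient safeguard.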
Ethical Concerns in AI
The use of personal data in AI raises not only ethical issues but also questions of legal compliance.
Informed consent
Many of us consent to AI systems using our data, such as our search history, in exchange for better services, and we often do not read the consent declarations, which has a significant impact on data privacy.
It is hard for people to give genuinely informed consent when using AI systems, because these technologies are complex and most users do not fully understand how their data will be used or reused. AI can take the same data and analyze it in many different ways, which makes it even harder for users to know what they are agreeing to. As data becomes ever more valuable, an argument is emerging about who owns personal data and who should benefit from it. That argument, in turn, reflects the notion that personal data is not just a commodity but is closely connected with a person’s identity and autonomy. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court of India recognized this when it held that the right to privacy, including control over one’s personal information, is a fundamental right.
The Digital Personal Data Protection Act, 2023: An Overview
The Digital Personal Data Protection Act, 2023 was introduced as a bill on August 3, 2023, and was formally enacted on August 11, 2023. Its major points are as follows –
(a) Protection of Digital Personal Data: To secure the personal information of individuals that is digitally collected, stored, processed, or shared.
(b) Data Principal Rights (Empowerment): To provide rights of consent, correction, erasure, and grievance redressal, giving individuals greater control over their personal data.
(c) Accountability of Data Fiduciaries: To ensure that the collection or processing of personal data by an organization, the data fiduciary, is done lawfully, fairly, and in a transparent manner.
(d) Legal Usage of Information: To set clear principles for lawful processing of digital personal data, emphasizing consent and legitimate use.
(e) Data Security and Minimization: To ensure that data is collected only for specified purposes, retained only as long as necessary, and protected against breach or misuse.
(f) Establishment of a Regulatory Authority: To establish the Data Protection Board of India, for the purpose of enforcement, grievance redressal, and ensuring compliance with the Act.
(g) Promotion of Innovation and Ease of Doing Business: To balance data protection with the need for innovation, economic growth, and efficient digital governance.
(h) Cross-Border Data Flow Framework: To regulate the transfer of personal data outside India, permitting such transfers except to countries restricted by notification of the Central Government.
(i) Penalties and Enforcement: To provide for significant penalties in cases of data breaches, non-compliance, or failure to implement appropriate security measures.
Data Protection and Privacy in India
Until 2023, India did not have a separate law or framework governing data protection. The Information Technology Act, 2000 (IT Act), along with the rules notified thereunder, formed the basis around which the data protection framework revolved. This framework drew criticism for leaving millions vulnerable to data breaches, identity theft, and surveillance.
The Right to Privacy was first recognized as a fundamental right in India in the landmark case of Justice K.S. Puttaswamy (Retd.) and Anr. v. Union of India and Ors. (2017), where the Hon’ble Supreme Court, through a nine-judge bench headed by the then Chief Justice of India, Justice J.S. Khehar, held the Right to Privacy to be a fundamental right under Article 21 of the Constitution of India, as part of the Right to Life and Personal Liberty. The earlier judgments in Kharak Singh v. State of U.P. and M.P. Sharma v. Satish Chandra, which had not recognized the Right to Privacy as a fundamental right, were overruled. The Court also clarified that this right could be infringed only where there is a compelling state interest for doing so. This landmark judgment paved the way for the development of a personal data protection law.
FAQ
What is the main framework of law concerning data protection in India?
The main law related to data protection in India is the Digital Personal Data Protection Act, 2023 (DPDPA 2023). It details how personal data is collected, processed, stored, and shared by organizations, including AI systems.
What are the most significant privacy challenges posed by AI?
AI raises serious privacy concerns because of the large volumes of personal data it requires and the sensitive information it can infer. Its decision-making processes are often opaque, making it difficult for individuals to discern how their data is being used. In addition, AI-driven profiling can lead to discrimination, a loss of autonomy, and other adverse outcomes, while security gaps in AI systems increase the risk of data disclosure and misuse.
Does the DPDPA restrict transfers of data across borders for AI training?
The Act permits cross-border data transfers except to countries restricted by notification of the Central Government. Training AI models on personal data is permitted, provided it complies with the Act’s consent, security, and purpose-limitation requirements.


