Deepfakes, AI Manipulation & the Law: Is India Prepared for the Next Cyber Threat?

This article is written by Ammara Mehvish Shaikh, Government Law College, Mumbai.


In the last few years, especially from 2023 onwards, India has seen a sudden burst of artificial intelligence tools, some good, some weird and some honestly very scary. One of the scariest problems right now is deepfakes. These are basically AI-generated videos, images or audio clips that look so real that even your own family sometimes cannot tell whether they are fake. Earlier, it took a high-tech lab to make such things, but today literally anyone with a phone and a free app can create deepfake content in minutes.

This is becoming a big cyber threat because people are using deepfakes to destroy reputations, create fake political speeches, blackmail women, steal money through voice clones and so much more. Even the deepfake case involving actress Rashmika Mandanna created huge panic, because the fake video went viral before the authorities could even react.

So the real question now is: is India legally prepared or not? Do we have strong laws? Do courts even understand how deepfakes work? And what can be done so that the technology does not run faster than justice? This article tries to talk about these points in a simple and honestly human way, because most legal writing becomes so technical that nobody wants to read it.

Case Laws 

Even though deepfakes are new, Indian courts have already faced similar issues involving cybercrime, morphing, privacy and electronic evidence. Below are some important decisions:

Shreya Singhal v. Union of India, (2015) 5 SCC 1

Facts: This case wasn't about deepfakes directly, but it dealt with online speech, the misuse of laws, and how the government cannot punish people just because content is online.

Held: Section 66A of the IT Act was struck down. The court said restrictions on online speech must be reasonable.

Relevance: Today, when deepfakes go viral, people immediately want arrests or blocking, but Shreya Singhal reminds us that content moderation must be balanced and not arbitrary or over-criminalising.

Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1

Facts: The landmark privacy judgment recognised privacy as a fundamental right.

Held: Personal data, bodily integrity and informational privacy all fall under Article 21.

Relevance: Deepfakes directly violate privacy, especially women's bodily autonomy when their faces are pasted on obscene content. So Puttaswamy forms the backbone for arguing consent-based rights against AI manipulation.

State of West Bengal v. Animesh Boxi, 2018 Cri LJ 4727

Facts: A college girl's morphed pictures were circulated by the accused in a humiliating act of revenge porn.

Held: The court convicted him under the IT Act and IPC, calling such acts a "serious digital assault".

Relevance: This case almost mirrors the emotional impact that deepfake victims face and gives courts a foundation to deal with non-consensual AI-generated sexual content.

Tata Sons v. Greenpeace, 2011 SCC OnLine Del 466

Facts: Greenpeace made an animated parody video targeting Tata, and Tata sued over trademark issues.

Held: The court protected satire and allowed room for digital expression.

Relevance: Courts must differentiate between harmful deepfakes (fraud, sexual or political manipulation) and satire or harmless parody. Not all manipulated content is illegal.

Faheema Shirin v. State of Kerala, (2019) 2 KLT 97

Facts: A case about digital rights and access to technology.

Held: The right to access the internet is part of the right to life.

Relevance: Regulation of deepfakes should not break or over-regulate internet freedoms, otherwise innovation will also suffer.

Kishore v. State of Maharashtra, 2020 SCC OnLine Bom 1114

Facts: The case dealt with the reliability of WhatsApp chats and other digital evidence.

Held: Digital evidence must satisfy the authenticity requirements of Section 65B of the Evidence Act.

Relevance: Deepfakes challenge evidence law heavily, because how do you "prove" in court that an AI-manipulated video is fake or real?

Sabu Mathew George v. Union of India, (2016) 7 SCC 221

Facts: The Court ordered Google and other intermediaries to remove certain content relating to gender selection.

Relevance: The Court held that intermediaries can be directed to remove harmful content. The same logic applies to deepfake takedown orders.

X v. State (NCT of Delhi), 2022 SCC OnLine Del 1383

Facts: Revenge porn and manipulated intimate content.

Held: Court strongly condemned misuse of women’s photos and granted immediate relief.

Relevance: Deepfake porn is the fastest-growing category of online abuse, and this case supports a swift, compassionate court response.

WhatsApp LLC v. Union of India, 2021 (Traceability Case)

Facts: The government demanded traceability of messages for crime investigation.

Status: The matter is still ongoing, but it raises big concerns regarding end-to-end encryption.

Relevance: If deepfakes are spread anonymously, traceability becomes a major investigation challenge.

Rashmika Mandanna Deepfake Incident (2023–2024 actions)

Facts: Viral Instagram reel showing a fake obscene clip.

Outcome: The Delhi Police Cyber Cell launched a suo motu inquiry, and the IT Ministry demanded stronger takedown protocols.

Relevance: Showed how fast deepfakes can harm someone’s dignity in seconds.

Understanding Why Deepfakes Are Hard to Control 

Deepfakes feel scary because:
• they look real,
• they spread faster than facts,
• victims face shame even when they did nothing,
• laws are slow but technology runs like a bullet train,
• and honestly most police officers still don’t have proper AI training.

India’s IT Act was written in 2000, a time when nobody even imagined AI could clone a face or create fake speeches of politicians. So there is a huge mismatch between the law and the digital reality we live in today.

Also, deepfakes create emotional trauma. People lose jobs, marriages break, elections get influenced, women get blackmailed, and sometimes victims stop going out of their homes. Legal books don’t always capture this human pain, but it is real.

Is India Prepared? (Short answer: Not fully, but trying)

What India currently has:

  • IT Act, 2000 (Sections 66E, 67, 67A, 67B)
  • IPC Sections—cheating, defamation, identity theft
  • Intermediary Guidelines, 2021
  • Online takedown mechanisms
  • New proposals in Digital India Act (still upcoming)

But none of these laws explicitly use the word deepfake. And because deepfakes evolve every month, the law keeps feeling a bit outdated.

What India Still Needs Badly

  1. A clear deepfake-specific legal definition
  2. A punishment structure based on severity (sexual, political, financial fraud)
  3. Mandatory watermarking of AI-generated media (a rough sketch appears just below this list)
  4. A national deepfake reporting portal
  5. Fast-track takedown system within 1 hour
  6. Police training programmes on AI forensics
  7. Digital evidence authenticity guidelines

Without these, most cases will end in confusion, delays, acquittals or wrong arrests.
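
Just to make point 3 above a little more concrete, here is a rough sketch, in Python using the Pillow imaging library, of what a basic machine-readable label on AI-generated media could look like. This is only an illustration under my own assumptions: the tag names "ai_generated" and "generator" and the file names are made up, they are not part of any Indian rule or existing standard, and plain metadata like this can be stripped easily.

    # A minimal sketch of metadata-based labelling for AI-generated images.
    # Assumes the Pillow library (pip install pillow) and PNG files. The tag
    # names "ai_generated" and "generator" are hypothetical, not a standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
        """Copy an image and embed a plain-text provenance tag in its PNG metadata."""
        img = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("ai_generated", "true")   # machine-readable flag
        meta.add_text("generator", generator)   # which tool produced the image
        img.save(dst_path, pnginfo=meta)

    def read_label(path: str) -> dict:
        """Return any provenance tags stored in the image's PNG text chunks."""
        img = Image.open(path)
        return {k: v for k, v in getattr(img, "text", {}).items()
                if k in ("ai_generated", "generator")}

    if __name__ == "__main__":
        label_as_ai_generated("face_swap.png", "face_swap_labelled.png", "example-model-v1")
        print(read_label("face_swap_labelled.png"))

The point of the sketch is only that labelling is technically cheap; the hard part is making it mandatory, standardised and difficult to remove, which is exactly where a law is needed.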

Conclusion

Deepfakes are not just another “internet problem,” they are literally the next big threat to privacy, reputation, democracy and dignity in India. The technology is advancing too fast, and sometimes law feels slow, confused, or just reactive. But India is not completely unprepared—courts, ministries, cyber cells, and experts are all trying to catch up.

What we need now is a proper, dedicated legal framework that recognises AI manipulation and deepfake harms, instead of forcing everything to fit into old provisions of the IT Act. The law must become faster, more human-centered and more technologically aware. Because at the end, it’s not only about cyber law—it’s about people, trust and safety in a digital world where seeing is no longer believing.

FAQs

1. What exactly is a deepfake?

A deepfake is an AI-generated fake video, photo, or audio that looks real but is completely manipulated.

2. Is making a deepfake illegal in India?

If it is harmful, sexual, defamatory, or used for cheating or blackmail, then yes, it becomes illegal under the IT Act and IPC.

3. Can victims of deepfake porn file a complaint?

Yes. They can file an FIR under Sections 66E, 67 and 67A of the IT Act and also invoke IPC offences like outraging the modesty of a woman.

4. How do I report a deepfake?

You can report it to:

  • Cyber Crime Portal (cybercrime.gov.in)
  • Local police station
  • Social media platform directly (Instagram, YouTube etc.)

5. Can deepfake evidence be used in court?

Yes, but only after proving authenticity under Section 65B of the Evidence Act. AI forensics experts may be required.
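
To show what the authenticity question looks like at the most basic technical level, here is a small Python sketch (the file name "seized_clip.mp4" is just a made-up example) that computes the SHA-256 hash of a file. Investigators typically record such a hash when evidence is seized so that any later change to the file can be detected; this is only an integrity check, not a substitute for the Section 65B certificate and not proof of whether the content itself is AI-generated.

    # A minimal sketch of a file-integrity check for digital evidence.
    # A matching hash shows the file has not been altered since it was hashed;
    # it says nothing about whether the content itself is genuine or AI-made.
    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 hex digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # "seized_clip.mp4" is a hypothetical file name used only for illustration.
        print(sha256_of_file("seized_clip.mp4"))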

6. Are intermediaries responsible for deepfakes?

They must remove harmful content quickly once notified; otherwise, they lose safe-harbour protection.
