This article is written by Aditi Dubey of SOA National Institute of Law.

India's digital ecosystem is booming, projected to exceed 900 million internet users by 2026, and buzzing with offerings such as Facebook, Instagram and X, alongside streaming powerhouses like Netflix and Hotstar. In the eyes of the law, these platforms are intermediaries under Section 2(1)(w) of the Information Technology Act, 2000 (IT Act), since they merely relay third-party content rather than produce it. Section 79 grants them so-called safe harbour immunity, but only on the condition that they observe due diligence and have no actual knowledge of the prohibited content.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 tighten the screws on significant social media intermediaries (SSMIs), those with more than 50 lakh users, requiring them to appoint compliance officers and deploy technology to trace the first originators of messages linked to serious crimes in India. OTT platforms, in their turn, must grapple with self-classification, content management and complaint systems. With its takedown procedures refined by the 2025 updates, this framework attempts to balance the free-speech protections of Article 19(1)(a) with the needs of public safety, but it has become the focal point of heated debate over government overreach.
Legal Framework and Key Obligations
Under Rule 3(1)(b), intermediaries have a clear directive to remove 22 categories of illegal content promptly: defamation, obscenity, threats to the sovereignty of India, child sexual abuse material (CSAM), and so on. The timeframes are unforgiving: 36 hours from a court or government order, or 72 hours for certain user complaints. In a world where fake news can go viral within seconds, such urgency is understandable.
For the big players, the SSMIs, the pressure mounts considerably. They must appoint a chief compliance officer resident in India, maintain a nodal contact person available to law enforcement around the clock, and deploy technical tools to trace the first originator of messages relating to horrific offences such as rape or terrorism (yes, even at the cost of weakening end-to-end encryption). On top of that come proactive duties: AI-based scanning for CSAM, monthly reporting to MeitY, and retention of user data for 180 days.
Shifting gears to OTT platforms under Part III, the emphasis is on self-policing. They must classify shows and films by age group, enable parental controls, and adhere to a strict Code of Ethics that draws the line at vulgarity or anything that fuels communal division. Complaints ascend a three-tier ladder: they begin with in-house teams, proceed to self-regulatory bodies, and finally reach inter-ministerial oversight reminiscent of old-school broadcast regulation. Defy the rules and a platform may face fines of up to ₹50 crore or even an outright ban.
Next comes the Digital Personal Data Protection Act, 2023 (DPDP Act), which adds a layer of privacy on top, requiring consent for data processing and comprehensive impact assessments. The 2025 Sahyog Portal put this into practice by allowing officials to issue one-click takedowns of CSAM; MeitY updates I have followed boast of response times cut from days to hours. That said, not everyone is coping smoothly. A 2026 IAMAI survey highlighted that 40% of mid-tier apps are struggling with these compliance costs, which underscores the need for some form of tiered regulation to level the playing field.
On the whole, what I like about this arrangement is that it stops platforms from remaining passive hosts and turns them into active gatekeepers. Deepfakes are an example: during the 2024 elections, more than 15,000 AI-tampered videos were removed, according to official MeitY figures. Effective?
Sure, but it still falls short of the EU's Digital Services Act, which compels deep risk audits of systemic issues such as algorithmic bias. That is a gap India may wish to fill sooner rather than later.
Challenges in Implementation
The law has a fine ring to it, but practice stings. Traceability collides directly with the right to privacy under Article 21, as WhatsApp's 2021 court challenge put in the spotlight. End-to-end encryption helps protect dissidents, but it could also facilitate terror plots. Meanwhile, sweeping prohibitions on communal disharmony drive platforms into self-censorship overkill: they retaliate too hard and, in the process, stifle free speech.
The resource strain is felt most acutely. SSMIs such as Meta pour their funds into AI moderators, while startups teeter on the verge of collapse. Deepfake deluges exceeded 200,000 items per month in 2025, according to a CERT-In report, demanding 24/7 monitoring; yet MeitY audits reported only 60% compliance even among leading platforms in 2026.
Privacy concerns deepen under the DPDP Act, with platforms retaining user information for up to 180 days, and ugly security breaches such as the 2025 Flipkart hack, which leaked 10 million profiles, prompting calls for shorter retention. Reading through MeitY filings, the imbalance shines through: Big Tech manages to comply on the strength of its size, while independents go under. Reforms? Raise the SSMI threshold to 1 crore users, or subsidise AI tools for SMEs.
Judicial Interpretations: Guardrails for Balance
The courts have stepped in as the real guardrails, pulling back some of the government's more aggressive pushes. Take Shreya Singhal v. Union of India (2015): the Supreme Court struck down Section 66A as unconstitutionally broad and vague. It further clarified that the "actual knowledge" standard of Section 79(3)(b) is triggered only by court orders or government notices, not by random private complaints. The decision spared intermediaries their panic attacks and put a welcome check on mob-driven moderation.
Then there is Kunal Bahl v. State of Karnataka (2022), in which Snapdeal's founders were offered a reprieve. The court held that crimes committed by third-party sellers cannot strip the platform of immunity unless it actively assists them. It's a key nudge: remaining passive keeps you safe. On the other side stands Christian Louboutin SAS v. Nakul Bajaj (2018), where the Delhi High Court found the defendant's marketplace to be more than an observer: the platform was promoting imitations through advertisements and handling delivery. This poses awkward questions for the OTT crowd. If Netflix promotes its daring originals, does that push it into "active" territory?
Recent cases keep things changing. The 2025 Uday Mahurkar PIL on obscenity and CSAM put Netflix and Meta under scrutiny and prompted MeitY to tighten its stance. The Bombay HC's 2024 POCSO case went further, requiring proactive CSAM detection that trumps safe harbour where children are concerned. The Kerala HC's partial stay of the IT Rules for news aggregators did not alter due-diligence duties, but it warded off excess against practising publishers. These aren't dry precedents; they are real-life dialogues that define the game. According to ASCI statistics, OTT self-classification rose by 30% after Mahurkar, enhancing responsibility without outright bans.
Case Laws at a Glance
Shreya Singhal v. Union of India (2015)
Quashed Section 66A on grounds of vagueness. Limited takedowns to court or government orders. Boosted safe harbour cover tremendously.
Kunal Bahl v. State of Karnataka (2022)
Held immunity intact for passive platforms; quashed the FIRs against Snapdeal executives. Passivity pays off.
Christian Louboutin SAS v. Nakul Bajaj (2018)
Protection is lost through an active role, such as advertisements or logistics. An important test for OTT content promotion.
Conclusion
In 2025, India updated the IT Act and the 2021 Rules. The framework strives to make the online world safer by targeting deepfakes and CSAM, all while letting 900 million people continue to exercise their freedom of expression. Judicial decisions matter as well: Shreya Singhal eliminated vague laws in its day, and Mahurkar has recently demanded stricter scrutiny of platforms. Taken together, these cases help strike a reasonable balance. The issues have not disappeared entirely: privacy often conflicts with the need to trace harmful messages, and small firms struggle with high costs.
New data-protection laws will take time to fold into the mix. All the same, there are bright spots ahead: introduce unambiguous regulations around AI, give smaller businesses some buffer with fewer steps, hold open discussions with ordinary people to gather opinions, and experiment with alliances such as Meta's 2026 trials for identifying toxic material fast. As a law student who has personally traced this entire change, I feel good about the direction it is going: it points toward a freer internet, especially here in India.
Frequently Asked Questions
Who qualifies as a Significant Social Media Intermediary (SSMI)?
Social media platforms with over 50 lakh registered users in India must appoint India-based chief compliance officers, 24/7 nodal contacts, and enable traceability of first originators for serious crimes like rape or terrorism.
Do OTT platforms enjoy the same safe harbour as social media?
Yes, under Section 79 of the IT Act, but Part III of the 2021 Rules adds publisher-like obligations (content self-classification, parental controls, and Code of Ethics compliance) which may limit immunity if curation enables violations.
What triggers loss of safe harbour protection?
Failure of due diligence, such as not removing prohibited content within 36 hours of a court or government order (or 72 hours for certain user complaints), or conspiring in illegal activity.
How have courts defined ‘actual knowledge’ for takedowns?
As per Shreya Singhal v. Union of India (2015), only court or government directives constitute "actual knowledge"; private complaints do not trigger intermediary liability.
What are the penalties for non-compliance?
₹50 crore fines for OTT violations, loss of safe harbour exposing platforms to IPC prosecution (Sections 153A/295A), mandatory MeitY audits for SSMIs, and potential operational bans.
References
IT Rules 2025 Amendment – Rule 3(1)(d) on Takedown Procedures, available at https://neetiniyaman.com/it-rules-2025-amendment-intermediary-liability-rule-3-1-d/
MeitY Official Notification – IT Rules Implementation, available at https://www.pib.gov.in/PressReleasePage.aspx?PRID=2205140
Uday Mahurkar v. Union of India, Bombay HC (2024), available at https://www.casemine.com/judgement/in/68ad65bbf9ea075c9ab008a1
Compliance of POCSO Necessary for Safe Harbour Protection for Intermediaries, IPR MentLaw, available at https://iprmentlaw.com/2024/10/20/compliance-of-pocso-necessary-for-safe-harbour-protection-for-intermediaries/


