Social media firms must remove ‘unlawful, hurtful’ content within three hours, says Centre

New Delhi: Social media companies must now remove or disable access to certain unlawful or harmful content within three hours of receiving a valid government direction, a formal grievance, or becoming aware of a clear violation, under amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on Tuesday.

The new deadline — a sharp reduction from the previous 36-hour window under Rule 3(1)(d) — was not part of the draft amendments released by the Ministry of Electronics and Information Technology (MeitY) in October 2025 and has been introduced only in the final notification. The amendments take effect on February 20, 2026.

The tighter timeline, requiring near-instant action from platforms such as Instagram, Facebook and YouTube, is expected to face pushback from industry. Legal experts warned the shift leaves little room for careful review, especially in cases involving subjective violations such as copyright disputes or fair use claims, and could force platforms to remove reported content first and assess it later, increasing the risk of over-takedown.


Google and X did not respond to HT’s queries by the time of publication, while Meta Platforms said it is reviewing the amendments internally.

A senior MeitY official, requesting anonymity, defended the compressed deadline. “Experience has shown us that intermediaries are capable of actually acting fairly fast. There have been cases when they have been able to act within minutes. So clearly they have the technical capacity to act fast,” the official said.

The government has also reworked how platforms identify AI-generated or “synthetically generated” content, dropping rigid technical requirements proposed in October’s draft.

The draft had mandated visible watermarks covering at least 10% of a screen or audio tags during the first 10% of a clip—a fixed-size requirement that has now been removed.

Instead of prescribing exact dimensions, the final rules require platforms to use “reasonable and proportionate” technical measures to ensure AI content is “clearly and prominently displayed with an appropriate label or notice, indicating that the content is synthetically generated.”

“If I see a video, I should know that something is AI generated,” another MeitY official said.

The official warned that intermediaries might lose safe harbour protection under Section 79 of the IT Act, 2000, if they fail to follow due diligence obligations. Failures that could jeopardise this protection include ignoring lawful takedown orders, missing mandated deadlines such as the three-hour window, or failing to label or act against unlawful synthetic content.


