India Orders Social Media Platforms to Remove Deepfakes Within Three Hours

India has introduced new rules requiring social media companies to remove deepfakes and other illegal AI-generated content within three hours of receiving a takedown order — a major shift in how platforms must operate in one of the world’s largest online markets.

On Tuesday, India announced mandates that require social media platforms to remove illegal AI-generated content much faster and ensure that all synthetic content is clearly labeled. According to a report by TechCrunch, these requirements become legally binding on February 20.

The legislation could significantly affect how tech companies moderate content in India, which has roughly 1.02 billion internet users and about 500 million unique social media users. Social media platforms will be expected to deploy technical tools to detect and label deepfakes, verify user disclosures, and prevent the creation or distribution of banned synthetic content.

TechCrunch reports that the new mandate is part of several changes to India's 2021 Information Technology Rules. The amendments bring deepfakes under a formal regulatory framework and require labeling and traceability for synthetic audio and visual content. They also sharply reduce the time platforms have to comply with takedown orders.

Under the updated rules, social media companies must comply with official takedown orders within three hours. Certain urgent user complaints must be addressed within two hours. This replaces the previous 36-hour deadline for removing unlawful material, according to The Verge. The shorter timeline applies to deepfakes and other harmful AI-generated content.
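To make the timelines concrete, here is a minimal sketch of how a platform's moderation queue might track the new statutory windows. The category names and their mapping to deadlines are assumptions invented for this illustration; the rules themselves define which orders and complaints fall under each window.

```python
# Illustrative sketch only: category names are assumed for this example,
# not taken from the text of the amended IT Rules.
from datetime import datetime, timedelta, timezone

# Compliance windows under the amended rules (previously 36 hours).
DEADLINES = {
    "official_takedown_order": timedelta(hours=3),
    "urgent_user_complaint": timedelta(hours=2),
}

def compliance_deadline(received_at: datetime, order_type: str) -> datetime:
    """Return the time by which the platform must act on a given order."""
    return received_at + DEADLINES[order_type]

# Example: an official order received now must be actioned within three hours.
received = datetime.now(timezone.utc)
print(compliance_deadline(received, "official_takedown_order"))
```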

India’s amended Information Technology Rules require digital platforms to deploy “reasonable and appropriate technical measures” to prevent users from creating or sharing illegal synthetically generated audio and visual content, commonly known as deepfakes. If such content is not blocked, it must include “permanent metadata or other appropriate technical provenance mechanisms.”
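The rules do not prescribe a particular provenance format. As a rough illustration of the idea, the sketch below embeds a simple machine-readable record in a PNG's text chunks using Pillow. A production system would use a tamper-resistant standard such as C2PA, since plain text chunks can be stripped when a file is re-encoded; the `ai_provenance` key and the record fields are invented for this example.

```python
# Toy illustration of attaching a provenance record to an image.
# The key name and record schema are hypothetical, not from the rules.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_synthetic(in_path: str, out_path: str, generator: str) -> None:
    """Attach a minimal 'synthetically generated' record as PNG metadata."""
    record = {
        "synthetic": True,
        "generator": generator,
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))  # hypothetical key
    # Note: PNG text chunks are easily stripped, so this alone would not
    # satisfy a "permanent metadata" requirement.
    img.save(out_path, pnginfo=meta)

# Usage: tag_synthetic("generated.png", "generated_tagged.png", "example-model-v1")
```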

The rules also set out specific obligations for social media companies. Users must disclose when content has been generated or edited using AI. Platforms are required to use tools to verify those disclosures and clearly label AI-generated material so that users can immediately recognize it as synthetic. For example, AI-generated images may need overlaid text identifying them as fake.
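As a toy example of what such labeling could look like at the pixel level, the sketch below stamps a visible banner onto an image with Pillow. A real platform implementation would need robust placement, localization, and tamper resistance; nothing in this sketch is drawn from the rules themselves.

```python
# Illustrative sketch only: draws a simple visible "AI-GENERATED" banner.
from PIL import Image, ImageDraw

def overlay_label(in_path: str, out_path: str, text: str = "AI-GENERATED") -> None:
    """Stamp a contrasting label banner along the bottom edge of an image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    banner_h = max(24, h // 12)
    # Black banner across the bottom, then the label in white (default font).
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4), text, fill=(255, 255, 255))
    img.save(out_path)

# Usage: overlay_label("generated.png", "generated_labeled.png")
```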

Certain types of synthetic content are prohibited outright, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes. Companies that fail to comply — particularly when content has been flagged by authorities or users — risk losing safe-harbor protections under Indian law, which could increase their legal liability.


Image credits: Header photo licensed via Depositphotos.
