Deepfakes and AI-Generated Content: Legal Framework Needed in India

In the age of artificial intelligence, deepfakes and AI-generated content have emerged as a double-edged sword: they enable creative innovation on one hand while posing serious threats to privacy, democracy, and digital safety on the other. In India, where internet penetration is soaring and social media is a key political and cultural tool, the absence of a dedicated legal framework to regulate deepfake content raises urgent concerns.

While AI can generate hyper-realistic visuals and audio, the misuse of such tools, especially for misinformation, defamation, identity theft, and harassment, demands swift legal intervention. Jurisdictions such as the US, China, and the EU have begun debating and legislating on AI-generated content, but India lags behind.

Legal Vacuum in India

India currently does not have any dedicated law specifically addressing deepfakes or AI-generated media. However, a few provisions in existing laws can be invoked depending on the context of misuse:

Relevant Constitutional and Legal Provisions

  • Article 21 of the Constitution: Guarantees the right to life and personal liberty, under which the right to privacy was recognised in Justice K.S. Puttaswamy v. Union of India (2017).
  • Sections 66E and 67A of the IT Act, 2000: Penalise violation of privacy and the electronic publication of sexually explicit material, respectively.
  • Section 469 of the IPC: Punishes forgery committed with intent to harm a person's reputation.
  • Section 500 of the IPC: Punishes criminal defamation, applicable when deepfakes are used to malign individuals.
  • Indecent Representation of Women (Prohibition) Act, 1986: Can be invoked when AI-generated content indecently represents or targets women.

Use Cases and Threats

AI-generated content is being misused in several ways:

  • Political Manipulation: Deepfakes mimicking politicians can sway voter perception.
  • Revenge Porn: Morphed or fabricated intimate videos, especially of women, circulate on private and public platforms.
  • Corporate Espionage: Fake voices or visuals of executives may lead to fraud or manipulation.
  • Social Engineering: Deepfake video and audio are used in scams that impersonate CEOs, relatives, or officials.

Recent Cases and Public Outcry

In 2023–24, several Indian celebrities and influencers became victims of deepfake videos, leading to widespread public outrage but little legal remedy. The absence of a regulatory mechanism makes it difficult to trace the origin, punish offenders, or even remove content swiftly from the internet.

Why India Needs a Specific Framework

1. Definition and Categorization

  • Deepfakes should be clearly defined and categorized—whether used for satire, pornography, impersonation, or fraud.

2. Consent and Authenticity

  • Mandating digital watermarking or disclaimers for AI-generated content would help distinguish real from fake (a minimal labelling sketch follows this list).
  • Enforcing explicit consent before creating or sharing AI-generated content that involves identifiable individuals.
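
On the technical side, such a labelling mandate can start very simply. The sketch below, written in Python with the Pillow imaging library, shows one minimal way a generator or platform could embed and later read an "AI-generated" disclosure in a PNG file's metadata. The file names, the "ai_generated" key, and the helper functions are illustrative assumptions rather than any prescribed standard; plain metadata is easily stripped, so a real regime would need tamper-resistant provenance formats and statutory obligations on platforms to preserve and surface the label.

    # Minimal sketch: metadata-based AI-content labelling (illustrative only).
    # Assumes Pillow is installed; file names and the "ai_generated" key are hypothetical.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
        """Save a copy of a PNG with text chunks declaring it AI-generated."""
        img = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("ai_generated", "true")   # machine-readable disclosure
        meta.add_text("generator", generator)   # which tool produced the image
        img.save(dst_path, pnginfo=meta)

    def is_labelled_synthetic(path: str) -> bool:
        """Report whether a PNG carries the disclosure label."""
        img = Image.open(path)
        text_chunks = getattr(img, "text", {})  # PNG text chunks exposed by Pillow
        return text_chunks.get("ai_generated") == "true"

    if __name__ == "__main__":
        label_as_synthetic("portrait.png", "portrait_labelled.png", "example-model-v1")
        print(is_labelled_synthetic("portrait_labelled.png"))  # expected: True

Because such labels are trivial to remove, the value of this approach depends less on the code than on the legal duty it would create for platforms and creators to apply and retain the disclosure.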

3. Platform Responsibility

  • Clear obligations for social media platforms under a revised Intermediary Guidelines framework to detect, report, and remove deepfake content swiftly.

4. Penal Provisions

  • A graded punishment system depending on the intent and consequence of deepfake misuse—ranging from fines to imprisonment.

5. Right to Be Forgotten

  • A legal right allowing victims to demand the removal of manipulated content from digital platforms, in line with Article 21.

Government Efforts So Far

  • The Digital India Act (still in draft) is expected to address some aspects of AI misuse and intermediary responsibilities.
  • The Digital Personal Data Protection Act, 2023 offers limited privacy safeguards but does not explicitly cover deepfakes.
  • MeitY and the PIB have issued advisories, but enforcement remains weak without statutory backing.

Global Benchmarks

  • China requires creators of deep synthesis (deepfake) content to label it as synthetic and to obtain consent from the people depicted.
  • California (USA) bans the use of deepfakes to influence elections or to create non-consensual sexual content.
  • The European Union is drafting a comprehensive AI Act that includes transparency and safety obligations for AI-generated content.

Conclusion

As India surges ahead in digital innovation, its legal system must catch up with the ethical and societal risks posed by AI-generated content. A strong, specific legal framework that balances innovation with accountability is critical—not only to protect individuals from harm, but to uphold democratic integrity and constitutional rights in the digital age.
