Case Law on AI-Assisted Online Harassment, Cyberstalking, and Defamation Prosecutions
1. United States v. Lori Drew (2009)
Facts:
Lori Drew helped create a fake MySpace account posing as a fictitious teenage boy in order to deceive a teenage girl online. The victim, believing the communications were real, suffered severe emotional distress and ultimately died by suicide.
Legal Issues:
Drew was charged under the Computer Fraud and Abuse Act (CFAA) for unauthorized access to a protected computer.
Also charged with conspiracy to commit fraud related to her online actions.
Outcome & Takeaways:
The jury acquitted Drew of the felony CFAA charges and convicted her only of misdemeanors; the trial judge later set aside those misdemeanor convictions, holding the CFAA unconstitutionally vague as applied.
Highlighted the limits of traditional criminal statutes when applied to online harassment, and the difficulty of stretching "unauthorized access" to cover deception or social engineering, such as violating a website's terms of service.
Relevance to AI-assisted harassment:
If an AI tool were used to generate fake profiles or send automated harassing messages, prosecutors would need to demonstrate intent, control, and direction behind the AI, much like the human-driven intent in Drew’s case.
2. Godfrey v. Demon Internet Service (UK, 2001)
Facts:
Dr. Laurence Godfrey discovered defamatory content about him posted on Usenet by a third party. He sought legal action against the Internet Service Provider (ISP) for not removing the content.
Legal Issues:
The case dealt with online defamation and ISP liability.
The question was whether the ISP was responsible for content posted by third parties once notified.
Outcome & Takeaways:
The court ruled that an ISP must act upon notice of defamatory material or risk liability for continuing to host it.
Established the principle that online intermediaries may bear responsibility once aware of illegal content.
Relevance to AI-assisted harassment:
If AI tools are used to generate defamatory posts or harass individuals, ISPs and platforms may be required to remove such content once notified, or they could share liability. Human actors directing AI remain the primary focus of prosecution.
3. Kowalski v. Berkeley County Schools (USA, 2011)
Facts:
A student, Kara Kowalski, created a MySpace page targeting another student with derogatory and false statements. The school disciplined her for cyberbullying.
Legal Issues:
The case questioned whether off-campus online harassment could be regulated by schools without violating free speech rights.
Outcome & Takeaways:
The Court upheld the school’s disciplinary actions, emphasizing the disruptive impact of the online content.
Reinforced that online harassment can be actionable if it causes foreseeable harm to victims or the environment (e.g., schools).
Relevance to AI-assisted harassment:
Automated AI tools could scale harassment, generating larger volumes of harmful content. Courts and regulators may treat AI-generated harassment like human-driven harassment if harm is caused.
4. Kirti Vashisht v. State of Delhi (India)
Facts:
The petitioner was a victim of non-consensual sharing of intimate images (“revenge porn”) online by a former partner.
Legal Issues:
Applied the Information Technology Act, 2000 (Sections 67/67A) for transmitting sexually explicit material.
Focused on online harassment, defamation, and privacy invasion.
Outcome & Takeaways:
The court held the perpetrator criminally liable and imposed punishment under the IT Act provisions.
Demonstrated that online harassment using technology (even without physical intrusion) can attract criminal liability.
Relevance to AI-assisted harassment:
AI could generate deepfake content or automatically distribute such intimate material. Liability would attach to human operators directing AI, similar to the case’s approach to traditional technology-facilitated harassment.
5. United States v. James Florence (USA, 2025, AI-Chatbot-Assisted Cyberstalking)
Facts:
The defendant used AI chatbots to impersonate a university professor and harass multiple individuals online. Some of the AI-generated content included explicit images and messages. Victims experienced fear and distress, and a minor was among those targeted.
Legal Issues:
Cyberstalking, harassment, and creation/distribution of explicit AI-generated content.
The case examined whether AI automation affects criminal liability.
Outcome & Takeaways:
The defendant pled guilty to multiple counts of cyberstalking and harassment.
Marked one of the first prosecutions where AI-generated content played a central role in harassment.
Relevance to AI-assisted harassment:
Shows that using AI to automate harassment does not shield perpetrators from liability.
Prosecutors focus on control, intent, direction, and resultant harm, even when AI performs the operational tasks.
Key Insights Across Cases
Intent and Human Control: AI tools do not eliminate liability; intent and direction by humans remain central.
Harm and Fear: Criminal liability hinges on demonstrable harm, fear, or distress.
Platform/ISP Responsibility: Platforms may share liability once notified of harmful content.
Digital Evidence: Logs, communications, AI-generated outputs, and metadata are crucial for proving the human connection.
Statutory Adaptation: Traditional laws (CFAA, IT Act, defamation statutes) are adapted to address AI-assisted online harassment and defamation.