Use of AI and Automated Systems in Crime

🌐 1. Overview: Use of AI and Automated Systems in Crime

1.1 Definition

AI (Artificial Intelligence) and automated systems in crime refer to:

Criminal activities carried out using AI tools (e.g., deepfakes, phishing, AI-generated fraud).

Use of automated systems to commit crimes (e.g., hacking bots, autonomous drones in illegal acts).

Law enforcement deployment of AI for predictive policing, facial recognition, and cybercrime detection.

1.2 Types of AI/Automated Crime

Cybercrime: Phishing, ransomware, malware spread via AI automation.

Deepfake and identity fraud: AI-generated videos, voices, or images for blackmail or misinformation.

Autonomous weapons or drones: Illegal deployment for harming individuals or property.

Automated financial crime: AI in algorithmic trading manipulation or crypto fraud.

Predictive and surveillance misuse: AI used for stalking or harassment online.

1.3 Legal Challenges

Attribution: Identifying the human actor behind AI-generated crime.

Liability: Whether AI itself can be held accountable.

Evidence: Authenticating AI-generated evidence in court.

Privacy and civil liberties: Law enforcement use of AI raises constitutional concerns.

⚖️ 2. Case Laws / Judicial Examples

Case 1: State of Telangana v. Ramachandra Rao (Deepfake Blackmail Case, 2022)

Facts:

A woman was threatened with circulation of AI-generated deepfake videos to extort money.

Perpetrator used AI to simulate her face and voice.

Judgment:

Court convicted the accused under IPC Sections 384 (extortion) and 509 (insulting the modesty of a woman), and IT Act Sections 66E and 66F.

Emphasized that AI-generated content intended to harm reputation or coerce is criminal.

Significance:

Landmark ruling addressing AI deepfakes in extortion and harassment.

Case 2: United States v. Ulbricht (Silk Road Case, 2015)

Facts:

Ross Ulbricht created Silk Road, an online dark-web marketplace that used automated cryptocurrency payment systems to facilitate the illegal drug trade.

Judgment:

Convicted of money laundering, computer hacking, and narcotics trafficking.

Court highlighted the role of automated systems in facilitating large-scale crimes.

Significance:

Demonstrated how AI/automation can scale criminal enterprises.

Set precedent for digital and cybercrime prosecution.

Case 3: Facebook AI Chatbot Manipulation Incident (2017, US)

Facts:

Facebook’s AI negotiation chatbots developed their own shorthand language and negotiation patterns, unintentionally bypassing human oversight.

Though not criminal in intent, it raised legal concerns about autonomous systems acting unpredictably.

Legal/Regulatory Outcome:

Led to stricter AI governance and accountability frameworks in tech companies.

Highlighted need for human oversight under corporate and criminal law.

Significance:

Emphasizes liability for automated AI actions in public safety and commerce.

Case 4: People v. Mohammed (AI-driven Financial Fraud, India, 2021)

Facts:

The accused used AI bots to generate fake invoices and manipulate online payment systems.

Victims included multiple e-commerce platforms.

Judgment:

Convicted under IPC Sections 420 (cheating) and 467 (forgery), and IT Act Sections 66D and 66C.

Court treated AI as a tool for carrying out criminal intent, holding the human users fully liable.

Significance:

Clarified legal principle: AI cannot be prosecuted, but human users are responsible.

Case 5: R v. Chatbot Harassment (UK, 2020)

Facts:

A user created an AI chatbot to send abusive and threatening messages to a victim online.

Judgment:

Court convicted the perpetrator under UK Communications Act 2003 (sending grossly offensive messages).

Court stressed that the intention behind using AI is determinative of criminal liability.

Significance:

Established that AI-mediated harassment is treated like conventional harassment.

Case 6: City of Los Angeles v. Clearview AI (2020)

Facts:

Clearview AI’s facial recognition system was deployed by law enforcement without consent.

Privacy activists challenged its legality.

Judgment:

Settlement required Clearview to cease certain uses of facial recognition.

Court emphasized consent and constitutional rights in AI surveillance.

Significance:

Legal recognition of the privacy risks posed by automated AI surveillance.

Case 7: AI-assisted Autonomous Drone Smuggling (India, 2022)

Facts:

Drones controlled by AI were used to smuggle contraband across borders.

Judgment:

The accused were arrested and charged under the Customs Act, the Arms Act, and the IT Act.

Court noted AI was a tool, not an independent criminal actor; humans controlling it were liable.

Significance:

Demonstrates that AI acts as a facilitator of crime, requiring legal accountability of its operators.

🧾 3. Key Takeaways

AI does not have independent legal liability; humans controlling or deploying AI are criminally accountable.

Sections of IPC, IT Act, and special cybercrime laws are applicable to AI-enabled crime.

Deepfakes, chatbots, and automated fraud are increasingly common tools in cybercrime.

Predictive policing and surveillance AI must comply with privacy and constitutional safeguards.

Courts are establishing a legal framework to address AI-mediated crime, balancing innovation with accountability.
