Research on Emerging AI-Related Criminal Legislation and Case Law

Case 1: ELVIS Act (Tennessee, USA, 2024)

Facts:
The ELVIS Act (Ensuring Likeness, Voice, and Image Security Act) was passed by the Tennessee legislature in 2024, specifically responding to the use of AI technologies to clone a person's voice, image or likeness without consent. The Act makes it an offence under Tennessee law to use someone's voice or likeness via AI without consent in certain contexts.

Legal basis:
It is state legislation targeting AI identity impersonation, voice cloning and unauthorized use of AI-generated voice or likeness. The statute addresses harms arising from AI impersonation and protects the "voice and image security" of persons.

Outcome & Significance:
As a piece of emerging legislation, the ELVIS Act signals a shift: rather than relying solely on traditional impersonation and identity-theft laws, it explicitly targets AI-enabled misuses of voice, image and likeness. It is one of the first statutes in the US to single out AI voice/likeness cloning as an offence, and it establishes that lawmakers now recognise AI-driven identity impersonation as distinct from earlier digital frauds.

Case 2: Take It Down Act (USA, 2025)

Facts:
In 2025, the US Congress passed the Take It Down Act, which criminalises the non-consensual publication of intimate images, including AI-generated deepfakes. The Act also requires platforms to remove such content within 48 hours of a valid request and to make reasonable efforts to prevent re-uploads of duplicate content.

Legal basis:
Federal legislation tackling non-consensual intimate imagery and AI-generated "deepfakes". It bridges traditional "revenge porn" laws and AI-specific threats such as deepfake imagery.

Outcome & Significance:
By enacting this law, the US recognises that AI‑generated impersonation or intimate imagery is a serious criminal matter and adds procedural obligations on platforms. Significantly, it shows how criminal legislation is starting to include AI‑enabled harms rather than only human‐actor crimes. The law will influence future prosecutions involving AI‐generated content.

Case 3: Emerging EU Legislation – Artificial Intelligence Act / EU Guidelines (European Union)

Facts:
The EU's Artificial Intelligence Act, adopted in 2024, regulates high-risk AI systems and provides sanctions for misuse of AI (e.g., biometric systems, predictive policing, manipulative AI). While the Act itself is not criminal law, it specifies prohibited AI practices and enforcement penalties (substantial fines, with possible criminal sanctions at member-state level).

Legal basis:
EU-level legislation (with national implementation) focused on systemic AI risks, including misuse by employers, online platforms and police. It bans certain AI practices (social scoring, biometric predictive policing) and sets sanction mechanisms for violations.

Outcome & Significance:
Even though the AI Act primarily regulates rather than criminalises, it is emerging criminal legislation in that certain prohibited AI practices will lead to penalties (including criminal liability depending on member states). It represents a major shift: AI misuse is no longer purely a regulatory/compliance matter but can form the basis of criminal liability. The Act’s forthcoming full applicability (e.g., August 2026) shows the near‑term legislative landscape for AI‑related crime.

Case 4: Italy’s Comprehensive AI Law (2025)

Facts:
Italy became the first EU country to pass a comprehensive national AI law aligned with the EU AI Act, including criminal provisions. The law introduces prison sentences of one to five years for harmful misuse of AI, such as generating harmful deepfakes, AI-enabled fraud and identity theft, and it also regulates the use of AI in workplaces and healthcare.

Legal basis:
A national legislative framework explicitly penalising harmful AI misuse (deepfakes, identity theft, AI-based fraud). It establishes oversight agencies, sets criminal penalties and integrates AI regulation into criminal law.

Outcome & Significance:
This is one of the first examples of legislation with explicit criminal sanctions for AI misuse. It marks a milestone: lawmakers recognising AI harms (identity theft, deepfakes) as crimes in their own right and embedding penalties in legislation. For future case law, this sets the statutory basis for prosecution of AI‑driven crimes.

Case 5: UK Case – Use of AI Tools Banned in Sentencing Order (UK, 2024)

Facts:
In the UK, a sex offender (Anthony Dover) convicted of creating over 1,000 indecent images of children was given, as part of his sentencing, an order banning him from using AI generation tools (e.g., "nudifying" websites, text-to-image generators) for five years. Although this is not legislation per se, it serves as a judicial precedent addressing AI misuse through criminal sentencing.

Legal basis:
The sentencing order uses existing criminal law (child sexual abuse images) but adds a novel restriction on AI tool access. The court explicitly recognised AI generation tools as a risk vector for new types of sexual abuse imagery.

Outcome & Significance:
This case demonstrates how judicial decisions are adapting to AI risks even before broader legislation. While not a legislative statute, it shows how case law and sentencing orders now address AI‑enabled misuse (deep‑fake creation, AI imagery) as part of criminal responsibility. It sets a precedent for courts imposing AI‑tool‑specific restrictions on offenders.

Case 6: Texas Senate Bill 20 – AI‑Generated Child Pornography (Texas, USA, 2025)

Facts:
Texas passed Senate Bill 20, known as the “Stopping AI‑Generated Child Pornography Act,” which creates criminal offences for possessing, promoting, or producing visual material that appears to depict a child under age 18, whether using AI, animation, or real images. The law explicitly includes AI‑generated or AI‑assisted content.

Legal basis:
State criminal legislation targeting the knowing creation or distribution of visual material depicting minors, including material produced with AI. The statute is one of the earliest to treat AI-generated child sexual content as a distinct criminal offence.

Outcome & Significance:
The law expands traditional child pornography offences to include AI‑generated material, closing a legal gap. It provides prosecutorial basis for crimes that involve AI‑generated imagery of children, which previously may have been hard to prosecute under older statutes. It marks the growing trend of AI‑specific criminal statutes at state level.

Analysis & Synthesis

From these six examples, several key observations emerge:

AI‑Specific Criminalization is Emerging

Many jurisdictions are enacting laws that explicitly reference AI (“voice cloning via AI”, “deepfakes”, “AI‑generated child imagery”).

This is a shift from traditional laws relying on analogies (fraud, impersonation) toward statutes tailored to AI‑enabled harms.

Statutes Address Diverse Harms

Identity theft / voice‑cloning (ELVIS Act)

Deepfake intimate imagery (Take It Down Act)

AI-generated child sexual abuse material (Texas SB 20)

Fraud/identity theft/AI misuse (Italy’s law)

Surveillance/manipulation (EU AI Act)

Sentencing restrictions on AI tool use (UK case)

Legislative vs Judicial Response

Many of these laws are primarily preventative or regulatory, but criminal penalties are increasingly embedded within them.

Courts are also adapting by imposing AI‑specific restrictions even without full statutes yet.

Challenges in Implementation

Defining key terms: “AI‑generated”, “deepfake”, “voice/likeness cloning”.

Proving AI use and attributing harm to an AI system versus a human actor.

Intersection with constitutional issues: freedom of expression, due process, rights to privacy.

Tools for enforcement: identification of AI‑generated content, forensic tracing.

Prosecutorial and Legal Implications

Prosecutors now have statutory bases to charge AI‑enabled offences rather than rely solely on analogues.

AI evidence (voice‑cloning logs, deepfake metadata) will become more central.

For defense, issues will arise around attribution (who controlled the AI?), intent, and automation.

Legislators likely to expand criminal liability further (autonomous bots, automated AI fraud, predictive policing misuse).

Global Trend and Comparative Law

While the U.S. federal government and individual states lead in AI-specific statutes (ELVIS Act, Take It Down Act, Texas SB 20), the EU and individual member states (Italy) show broader regulatory frameworks with criminal-sanction components.

Globally, many jurisdictions still rely on adapting existing laws to AI-enabled conduct, but that is changing.

Conclusion

We are in the early phase of AI‑related criminal legislation, but the momentum is strong. The examples above illustrate how legal systems are rapidly developing statutes and case law to address the unique harms posed by AI technologies—voice/likeness cloning, deepfakes, AI‐generated child imagery, manipulative AI systems, and more. These laws provide new statutory tools for prosecution and regulation of AI‐enabled crime.
