Research on AI-Assisted Deepfake Pornography, Harassment, and Exploitation Prosecutions

Case 1: Hugh Nelson – AI-generated child sexual abuse imagery (UK)

Facts:

Hugh Nelson, a man from the UK, used AI software to transform real photographs of children into sexualized images.

He distributed these images online, including in chatrooms where people paid him for access.

Legal Charges:

Creation and distribution of indecent images of children.

Encouraging the rape of a child.

Court Outcome:

Nelson pleaded guilty.

Sentenced to 18 years in prison, with extended licence conditions.

Significance:

One of the first UK cases where AI-generated child sexual abuse material was prosecuted.

Shows that courts treat synthetic abuse images as serious offences, even where no new physical abuse took place in producing them.

Highlights how AI lowers the barrier to creating illegal content, presenting new enforcement challenges.

Case 2: Jonathan Bates – Deepfake adult pornography (UK)

Facts:

Jonathan Bates created deepfake pornographic images of his ex-wife and three other women.

He superimposed the victims’ faces onto naked bodies and posted the images on pornographic websites alongside identifying information.

As a result, some victims faced harassment, unwanted visitors to their homes, and consequences at work.

Legal Charges:

Stalking.

Non-consensual sharing of intimate images (“revenge porn”).

Court Outcome:

Bates pleaded guilty.

Sentenced to 5 years in prison, with a restraining order in respect of each victim.

Significance:

Demonstrates how deepfakes are weaponized for revenge, harassment, and sexual exploitation.

Shows that existing stalking and intimate image abuse laws can be applied to deepfake content.

Raises the question of whether the law should regulate the act of creating deepfakes, not just their distribution.

Case 3: Anthony Dover – AI tool restriction as a preventive measure (UK)

Facts:

Anthony Dover, a convicted sex offender, was prohibited from accessing AI tools capable of generating sexualized images.

The prohibition was part of a sexual harm prevention order after he was caught possessing indecent images of children.

Legal Charges:

Prior charges: possession of indecent images of children.

Court order: prohibition on using AI image-generation tools.

Court Outcome:

The court explicitly banned his use of AI image-generation tools for 5 years.

Significance:

Highlights preventive judicial measures targeting AI technology.

Recognizes AI tools as vectors of potential harm.

Suggests that courts are expanding enforcement beyond traditional offences to include access to enabling technology.

Case 4: Callum Brooks – Deepfake nude image of a schoolfriend (Scotland)

Facts:

Callum Brooks used AI software to alter two Instagram photos of a former schoolfriend, making it appear as though she was nude.

He sent the altered images to two friends without her consent.

Legal Charges:

Disclosing a photograph of a person in an intimate situation without their consent, under the Abusive Behaviour and Sexual Harm (Scotland) Act 2016.

Court Outcome:

Brooks was fined £335.

Significance:

Reported as the first Scottish case addressing AI-generated deepfake pornography.

Shows the use of existing intimate image abuse laws to prosecute AI-generated content.

The relatively low penalty illustrates a gap between the severity of the harm and the sentence imposed, suggesting a need for deepfake-specific legislation.

Key Takeaways from These Cases

AI as an enabler: AI reduces the barriers to creating sexualized content, whether the subjects are children or adults.

Existing laws apply but gaps remain: Courts use laws on image abuse, stalking, and child exploitation to prosecute deepfake offences, but some jurisdictions lack deepfake-specific statutes.

Preventive measures: Courts are beginning to restrict offenders' access to AI tools to reduce the risk of future offences.

Harm recognition: Victim impact includes emotional distress, reputational damage, and persistent online harm.

Sentencing discrepancies: Some cases (e.g., Callum Brooks) resulted in light penalties despite significant harm, highlighting the need for legislative updates.