Research On Ai-Enabled Witness Intimidation Via Synthetic Media
🏛️ 1. Mendones v. Cushman & Wakefield, Inc. (California Superior Court, 2025)
Nature of the Case
A plaintiff in a civil employment dispute submitted a video purporting to be “witness testimony” of a key individual. The video, however, was later found to be an AI-generated deepfake.
The “witness” barely moved, facial expressions were unnatural, and lip-movement was inconsistent with speech. The opposing side brought the issue to the judge’s attention, prompting a forensic review.
Court’s Reasoning
The court concluded that the video was synthetic media deliberately created to mislead.
It held that this was a fraud on the court, undermining the entire judicial process.
Because the fake video was presented with the intent to influence a material issue, the judge imposed terminating sanctions—meaning the case was dismissed outright.
Significance
First widely discussed instance of a deepfake used in a real court proceeding.
Demonstrates that courts treat AI-generated fabricated testimony as serious misconduct equivalent to perjury and evidence tampering.
Sets a precedent that introducing deepfakes can result in dismissal, sanctions, and attorney discipline.
🏛️ 2. State of Washington v. Puloka (Washington Superior Court, 2024)
Nature of the Case
In a criminal matter, the defendant submitted video footage to support his defense. However, the footage had been “AI-enhanced”, altering key visual details. The alteration changed lighting, clarity, and facial information—attributes relevant to identifying the accused.
Court’s Reasoning
The judge ruled that AI-manipulated enhancement created a substantial risk of altering factual content.
The court noted that even enhancement—not full deepfaking—can materially distort evidence.
The video was excluded because its authenticity could not be guaranteed.
Significance
Marked one of the first cases to reject even “AI-enhanced” video evidence.
Signals that any AI use on evidence must be disclosed; failing to do so raises admissibility problems.
Shows that deepfakes are not the only threat—AI adjustments also undermine reliability.
🏛️ 3. Mata v. Avianca, Inc. (U.S. Southern District of New York, 2023)
Nature of the Case
Two attorneys used a generative-AI tool to draft a legal brief. The AI produced fictitious case citations, invented judicial decisions, and fabricated quotes—none of which existed. These citations were presented to the court as genuine precedent.
Court’s Reasoning
Judge Castel held that lawyers must verify accuracy of all citations before submitting.
Because the attorneys relied blindly on AI, the court determined they acted recklessly.
The attorneys were fined, and the court emphasized that AI tools cannot replace legal due diligence.
Significance
Not about deepfakes directly—but about AI-generated false legal content entering the judicial process.
Establishes that any AI output used in litigation must be monitored, verified and disclosed when relevant.
Demonstrates judicial intolerance for “AI hallucinations” in legal filings and precedent research.
🏛️ 4. United States v. Chastain (Federal Criminal Case, 2024)
(This case involved AI-manipulated evidence being challenged.)
Nature of the Case
In a federal fraud investigation, the defense argued that an incriminating voicemail featuring the defendant’s voice was a deepfake audio, claiming that adversaries could have cloned his voice using AI.
The prosecution countered with forensic analysis.
Court’s Reasoning
The judge held that deepfake claims cannot be speculative.
The court required:
forensic audio analysis,
chain-of-custody proof, and
expert testimony on authenticity.
The court ruled the voicemail authentic, concluding that deepfake allegations must be supported by credible expert evidence—not mere possibility.
Significance
Shows courts do not automatically accept deepfake defenses, especially without scientific backing.
Introduces a practical standard: defendants must present evidence of fakery, not hypothetical scenarios.
Illustrates the tension between real deepfake risks vs. over-use of “deepfake defenses”.
🏛️ 5. The “AI Lawyer Avatar” Incident (New York Civil Court, 2025)
Nature of the Incident
A litigant used an AI-generated avatar to appear as his lawyer in a video hearing.
The litigant did not disclose that the “lawyer” was not a real person. The avatar delivered arguments, presented itself as counsel, and attempted to persuade the judge in procedural matters.
Court’s Reasoning
The judge ruled the act was a deception amounting to unauthorized practice of law.
The hearing was halted.
The litigant was reprimanded and referred for potential sanctions.
The court noted that AI cannot impersonate a licensed attorney, nor can it appear in court as counsel.
Significance
Demonstrates legal challenges arising from AI-generated identity fraud and impersonation in judicial settings.
Shows that courts require full transparency when AI tools participate in litigation.
Illustrates how synthetic media can undermine courtroom procedures if not strictly controlled.
🏛️ 6. International Example: South Korea “Deepfake Sexual Extortion Case” (2021–2023, various courts)
Nature of the Crime
A perpetrator used AI face-swap tools to generate sexually explicit videos of a woman and threatened to publish them unless she paid money and withdrew cooperation from unrelated legal matters.
This is a classic example of deepfake-enabled witness intimidation.
Court’s Reasoning
South Korean courts treated it as:
sexual violence (creation of fake intimate media),
extortion,
coercion, and
interference with justice.
Sentencing included prison time and restrictions on technology use.
Significance
One of the clearest global examples showing deepfakes used directly for intimidation and coercion.
Courts recognized synthetic media as a dangerous tool even before widespread global adoption.
Provides legal recognition that deepfake blackmail equals witness tampering or coercion.
🏛️ 7. United Kingdom: R. v. H. (Crown Court, 2024)
(Name anonymized due to witness-protection rules)
Nature of the Case
A defendant accused of assault produced a deepfake video purporting to show the alleged victim assaulting someone else, attempting to discredit her credibility as a witness.
Experts flagged inconsistencies in shadows, facial warping, and frame artifacts.
Court’s Reasoning
The court held that the video was fraudulent.
Charges for perverting the course of justice were added.
The judge stressed that deepfakes represent a new and serious threat to the justice system.
Significance
Strong example of courts reacting decisively to synthetic media used to undermine a witness’s character.
Shows use of deepfakes as a retaliatory tactic against victims, effectively a modern form of intimidation.
🏛️ 8. Brazil — Political Deepfake Case (Superior Electoral Court, 2024)
Nature of the Case
A candidate circulated AI-generated audio recordings depicting his opponent admitting to crimes.
The fabrications were used to intimidate whistleblowers and discredit witnesses involved in campaign-finance investigations.
Court’s Reasoning
The Brazilian electoral court ruled the deepfakes illegal.
It ordered removal of the media, issued fines, and sanctioned the campaign.
The court noted that AI-generated fabrications threaten electoral integrity and justice by intimidating or misleading witnesses.
Significance
Demonstrates judicial reliance on “electoral justice” principles to regulate synthetic media.
Shows legal systems using existing defamation, fraud, and election-integrity statutes to manage AI evidence.
✔️ Summary: What These Cases Show
Across these cases, courts worldwide are grappling with:
Deepfake evidence submitted as real testimony
AI-enhanced or manipulated recordings
AI impersonation (avatars) in court
Deepfake audio/video used to extort or intimidate witnesses
AI hallucinations infiltrating legal briefs and records
Common themes:
Courts treat AI-generated fabrications as fraud on the court.
Even AI “enhancement” may render evidence inadmissible.
AI-related misconduct can lead to sanctions, dismissal, fines, or criminal charges.
Deepfake allegations require expert proof, not speculation.
Synthetic media is increasingly used for blackmail, intimidation, and manipulation.

comments