AI Voice Fraud Prosecutions

What Is “AI Voice Fraud”?

“AI voice fraud” broadly refers to the use of artificial intelligence, especially voice cloning, deepfake voice technology, or generated voice impersonations, to commit fraud, mislead, impersonate, or cause other harm. Commonly, this includes:

Impersonating someone else (a family member, public figure, or political candidate) to trick victims into giving money or information.

Creating fake robocalls or political disinformation using AI-generated voices.

Using voice cloning to create defamatory or harmful content attributed to a person.

Voice identity theft—using someone’s voice likeness without consent to mislead or defraud.

Legal issues that come up include:

Fraud (state and/or federal)

Wire fraud, mail fraud

Impersonation statutes

False Statements

Identity theft / misappropriation of identity or likeness

Telecommunications law (robocalling, unwanted calls)

State “deepfake” laws, voice rights / publicity rights / privacy rights

In many cases, statutes that predate AI voice fraud are being applied in novel ways; regulators (e.g., the FCC) are also stepping in with rules covering robocalls, election interference, and spoofing.

U.S. Cases / Regulatory / Legal Actions / Lawsuits Relating to AI Voice/Deepfake Voice Fraud

Here are more than five instances showing how U.S. prosecutors, regulators, or civil litigants have addressed AI voice fraud (or closely related conduct), with as much detail as is currently publicly available:

Case / Regulatory Event 1: FCC / “Biden deepfake robocalls” (Steve Kramer, New Hampshire, 2024)

Facts: Calls were made to thousands of New Hampshire voters just before the Democratic primary using an AI-generated voice meant to sound like President Joe Biden. The calls urged Democrats not to vote in the primary, telling them to save their votes for the general election.

Legal / Regulatory Action:
  • The Federal Communications Commission (FCC) proposed a $6 million fine against Steve Kramer, the political consultant who commissioned the calls. (Reuters; NPR)
  • Lingo Telecom, the company that transmitted the robocalls, was fined $2 million. (Reuters)
  • Kramer has also been indicted on criminal charges in four New Hampshire counties, including multiple counts of impersonating a candidate and of voter suppression. (NPR)

Legal Basis:
  • Violation of FCC rules (caller ID authentication, spoofing, robocall laws). (NPR; WIRED)
  • Using AI-generated voice in election interference / impersonation statutes.

Significance:
  • One of the first major cases where AI voice cloning was used in a political context and where both regulatory (FCC) and criminal (state) actions were pursued.
  • Sets a precedent for which kinds of AI-generated speech and calls are illegal under current law (robocall, misinformation, and impersonation statutes).

Case / Regulatory Event 2: Maryland Deepfake Case — Dazhon Darien

Facts: A former high school athletics director in Maryland, Dazhon Darien, used AI technology to create a deepfake audio recording of his high school principal making racist and antisemitic comments. The recording was disseminated widely on social media, causing public outcry and disruption. (AP News)

Legal Action:
  • Darien entered an Alford plea to a misdemeanor count of disturbing school operations (meaning he did not admit guilt but acknowledged there was sufficient evidence to convict). (AP News)

Sentence: Four months in jail. (AP News)

Significance:
  • Shows how “deepfake” voice content (not necessarily directly financial fraud) is being prosecuted under more traditional criminal statutes (disruption, defamation potential, etc.).
  • Demonstrates that voice cloning for non-financial but socially harmful ends (defamation, disruption) can carry criminal penalties.

Case / Event 3: Lehrman & Sage v. Lovo Inc. (Voice Actors’ Lawsuit)

Facts: Two voice actors (Paul Skye Lehrman and Linnea Sage) filed a class-action lawsuit in Manhattan federal court, accusing Lovo Inc., a startup, of copying their voices without permission and using those voice clones commercially. They alleged that Lovo misled them when collecting voice samples and then used their voices in voiceovers for promotional material and sold voice licenses. (The Hindu)

Legal Claims:
  • False advertising, fraud, and violation of their publicity rights (the right to control commercial use of one’s voice, image, or likeness). (The Hindu)

Current Status: The case is a civil class action; no final judgment has been widely reported, and the case appears to remain pending.

Significance:
  • Highlights civil remedies and litigation for voice misuse.
  • Puts a spotlight on consent and commercial use of voice clone services, and ownership or licensing of one’s voice.

Case / Incident 4: “Fake Kidnapping / Daughter voice cloning” Scams

Facts: Several incidents have been reported in the U.S. in which AI voice cloning was used to create emotional urgency (a daughter in trouble, an accident, etc.) to pressure people, often parents or elders, into sending money for a supposed ransom, bail, or fees. One example: in Arizona, a mother received a call leading her to believe her 15-year-old daughter had been kidnapped (the daughter’s voice apparently cloned from online videos), and she was told to pay a ransom. (humanity.org)

Legal Action / Outcome: Many of these incidents remain under investigation or are reported only as scams. Confirmed, published criminal judgments with detailed sentencing are not available in most of these reports; some incidents may lead to prosecutions depending on local law.

Significance:
  • Illustrates how AI voice fraud is being used for traditional “emergency / family impersonation” scams, which historically have been prosecuted under fraud or false pretenses statutes.
  • Shows the emotional and financial harm, and the increasing risk as voice cloning becomes easier.

Case / Regulatory / Statutory Development: FCC’s Ruling Making AI‑Generated Robocalls Illegal Under TCPA

Facts: In February 2024, the FCC issued a declaratory ruling interpreting the Telephone Consumer Protection Act (TCPA) to make clear that calls using AI-generated voices count as “artificial” voices under the statute, rendering such robocalls illegal without the recipient’s prior consent. This covers unsolicited calls and scam calls that use voice clones or other synthetic voices. (WIRED)

Legal Effect: Gives the FCC authority to fine violators, and opens civil/private enforcement possibilities (e.g., recipients seeking statutory damages for calls that violate the TCPA). (WIRED)

Significance:
  • Sets a prospective regulatory standard: even where criminal prosecution lags, companies and individuals can face enforcement actions and financial penalties under telecommunications and consumer protection regulation.
  • Helps pave the way for future criminal or civil prosecutions.

Gaps: Why There Is Little Fully Established Case Law Yet

The technology is very new, and law often lags technology; many incidents are recent, investigations ongoing.

Proving voice cloning / deepfake fraud in court often requires forensic voice experts, chain of custody, proving voice similarity, proving damage or financial loss, etc. These are complex evidentiary burdens.

Some cases are resolved via pleas or non‑public settlements or via regulatory action rather than full trials with published opinions.

In many jurisdictions, laws specific to deepfakes or AI voice cloning are only just being drafted or enacted, or courts have not had the chance to decide full case law yet.

Synthesis: Likely Legal Paths / Statutes Used

From current cases, the legal tools used include:

State fraud / false pretenses statutes: when someone uses AI voice impersonation to get money or benefits.

Impersonation statutes: Some states criminalize impersonating another person, whether a public figure or a private individual, using that person’s identity.

Wire fraud or mail fraud: when the scheme uses interstate wires (telephone, internet) or the mails.

Telephone Consumer Protection Act (TCPA) for unwanted robocalls, misleading automated calls.

Regulatory enforcement (e.g., FCC, state Attorney General) against spoofing, political manipulation, telecommunications violations.

Civil claims: fraud, misrepresentation, publicity/voice rights, misappropriation of likeness, and sometimes defamation if false statements are broadcast.

Examples Table

| Case / Event | Type (Criminal / Civil / Regulatory) | Key Legal Claims / Statutes | Outcome / Penalty / Status |
| --- | --- | --- | --- |
| Steve Kramer deepfake Biden robocalls | Regulatory + criminal | FCC rules, election law, impersonation, spoofing, TCPA | Proposed $6M fine; criminal charges in four NH counties; $2M fine for the telecom carrier (Reuters; NPR) |
| Maryland athletics director (Darien) deepfake principal case | Criminal (state) | Misdemeanor disturbing school operations; disruption via voice clone | Four months in jail under an Alford plea (AP News) |
| Lehrman & Sage v. Lovo Inc. | Civil lawsuit | Publicity/voice rights, false advertising, fraud | Lawsuit filed; at least $5M in damages sought; outcome pending (The Hindu) |
| “Kidnapped daughter” / ransom voice-cloning scams | Criminal / investigative | Fraud, extortion, impersonation; racketeering often possible | Some investigations; few published prosecutions to date (humanity.org) |
| FCC TCPA ruling on AI-generated robocalls | Regulatory change | TCPA; FCC authority over unwanted/illicit calls, spoofing | Declaratory ruling issued; enforcement capacity expanded (WIRED) |

What Future Case Law Might Look Like

Given the trends, future published case law will likely:

Feature full criminal prosecutions where the financial loss is large and the voice impersonation is clearly traceable.

Include judgments that establish standards for expert voice clone forensics, admissible evidence of voice similarity, AI model provenance.

Examine constitutional or First Amendment issues (if impersonation involves speech) balanced against fraud / impersonation laws.

Apply newly enacted state statutes specifically addressing deepfakes and AI voice cloning.
