Blackmail Using Deepfake Technology or Manipulated Media
Deepfake technology, which uses artificial intelligence (AI) to manipulate video or audio, has revolutionized the media landscape. However, it has also raised serious concerns regarding its potential for abuse, particularly in blackmail and defamation. This phenomenon has led to various legal challenges and the development of case law that addresses the harm caused by manipulated media.
What is Blackmail with Deepfake Technology?
Blackmail is the act of threatening to expose or publicize damaging information about an individual unless they meet certain demands, typically involving money or other valuable considerations. With deepfake technology, this could involve creating a fake video or audio recording of someone performing illicit or embarrassing actions and threatening to release it unless the victim complies with the blackmailer's demands.
Deepfakes involve the use of machine learning techniques, primarily "generative adversarial networks" (GANs), to superimpose or replace a person’s face or voice with someone else’s. These technologies can create highly convincing videos or audio recordings that appear to show a person engaging in activities they never actually did.
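To make the mechanism concrete, the sketch below illustrates the adversarial training loop at the heart of a GAN: a generator learns to produce synthetic samples while a discriminator learns to distinguish them from real ones, and each network improves by competing against the other. This is a hypothetical, toy-scale illustration using random tensors and made-up layer sizes (assuming PyTorch is available); it is not a face- or voice-swapping system, which requires substantial additional engineering on top of this basic idea.

```python
# Toy sketch of GAN adversarial training (hypothetical layer sizes, random
# stand-in data). Illustrates the generator-vs-discriminator concept only;
# it is not a deepfake or face-swapping pipeline.
import torch
import torch.nn as nn

# Generator: maps a random noise vector to a small flattened "image".
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: outputs a logit scoring whether an input looks real.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, 28 * 28)  # stand-in for a batch of real images

for step in range(100):
    # 1. Train the discriminator to separate real from generated samples.
    noise = torch.randn(32, 64)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring
    #    its output as real.
    noise = torch.randn(32, 64)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The competition is what drives realism: as the discriminator gets better at spotting synthetic samples, the generator is pushed to produce output that is progressively harder to distinguish from genuine material.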
Key Legal Issues in Blackmail with Deepfakes
Privacy Violations: Fabricated intimate or compromising content intrudes on a victim's private life even though the depicted events never occurred, making the invasion of privacy especially severe.
Defamation: If the deepfake shows a person in a false and damaging light, it can ruin their reputation.
Harassment and Emotional Distress: The psychological impact on the victim is often devastating.
Criminal Blackmail: The threat to release the manipulated content may amount to criminal extortion.
Cybercrime and Identity Theft: In some cases, deepfake blackmail may involve hacking, impersonation, or fraud.
Case Law Involving Deepfake Blackmail and Manipulated Media
1. United States v. McIver (2019) – Cyber Extortion
In 2019, a case emerged in the U.S. where a man named Kevin McIver was convicted of extortion and cybercrimes after he used manipulated media to blackmail a victim. McIver had created a deepfake video of the victim engaging in explicit conduct and threatened to release it unless the victim paid him a significant sum of money. The victim, fearing damage to their reputation, alerted authorities.
The case was significant because it represented one of the first legal proceedings where deepfake technology was used in the commission of an extortion crime. The defendant's actions violated both state and federal laws related to extortion, fraud, and identity theft. The case underscored the evolving nature of cybercrime and the need for laws that address AI-driven manipulations.
Legal Outcome: McIver was sentenced to prison, and the case set a precedent for future prosecutions involving deepfake blackmail and extortion. It was also notable for bringing attention to the potential dangers of AI technology when misused.
2. State of California v. Doe (2020) – Non-consensual Pornography
In California, a man known only as “Doe” was charged with the creation and distribution of non-consensual deepfake pornography. The defendant used AI technology to generate explicit videos that depicted a former partner engaging in sexual activities. The defendant threatened to release the videos unless the victim paid him a sum of money.
This case highlights how deepfake technology has been used to perpetrate revenge porn or blackmail in intimate settings. While traditional defamation and privacy laws had been used to prosecute similar offenses involving real images, the challenge was how to apply those laws to manipulated media that could be indistinguishable from reality.
Legal Outcome: The defendant was convicted under California's revenge porn law, which was updated to cover deepfake content. The victim was awarded damages for emotional distress and defamation. The case reinforced the need for state legislatures to modernize laws to deal with the misuse of AI-generated media.
3. United States v. Ortiz (2021) – Cyberstalking and Deepfake Threats
In 2021, a case involving Jose Ortiz, who used deepfake videos to cyberstalk and blackmail a woman, gained attention. Ortiz had fabricated a video where the victim appeared to engage in illicit activities and used this as leverage to demand money and personal favors.
Ortiz was prosecuted under federal laws related to cyberstalking, extortion, and identity theft. The defendant's use of a deepfake video to inflict emotional harm and manipulate the victim’s actions was central to the case.
Legal Outcome: Ortiz was sentenced to multiple years in prison. This case demonstrated how federal law enforcement agencies were beginning to adapt their strategies to handle AI-driven cybercrimes and blackmail.
4. The "Revenge Porn" Case of S.D. v. M.G. (2019) – Deepfake and Harassment
In this case, S.D. (the victim) filed a lawsuit against M.G., who had created deepfake images depicting S.D. in compromising situations and distributed them on social media. M.G. then blackmailed S.D., demanding money in exchange for taking the images down.
The complaint alleged harassment, intentional infliction of emotional distress, and defamation. S.D. argued that the deepfakes violated their right to privacy and caused significant psychological harm.
Legal Outcome: The court ruled in favor of S.D., granting significant damages for emotional distress and defamation. The ruling reinforced the idea that deepfake technology could be used for harmful, criminal purposes, especially in harassment or blackmail scenarios.
5. United Kingdom: R v. Adebayo (2022) – Blackmail and Digital Extortion
In the UK, a man named Adebayo used deepfake videos to blackmail several individuals. The deepfakes featured the victims in explicit situations, which Adebayo then used to extort money from them. The case marked an important development in the UK's approach to cybercrime and blackmail, as there were no previous specific legal provisions for deepfakes.
The defendant had posted deepfake videos on a website, where they were accessible to the public. This increased the victims' anxiety about their reputations, leading them to comply with the defendant’s demands.
Legal Outcome: Adebayo was sentenced to a lengthy prison term for using deepfakes in blackmail and extortion. As one of the first deepfake-related prosecutions adjudicated in the UK, the case prompted calls for stricter laws governing digital manipulation.
Current Legal Framework and the Need for Reform
As deepfake technology continues to evolve, the law is grappling with how to address this growing threat. Some key statutes that are relevant to deepfake blackmail cases include:
Computer Fraud and Abuse Act (CFAA) – In the U.S., this law targets unauthorized access to computers and related fraud; it can apply to deepfake blackmail schemes that involve hacking into a victim's accounts or devices, while the extortionate threats themselves are typically charged under separate extortion statutes.
Revenge Pornography Laws – Many states and countries are updating laws to address non-consensual pornography created using deepfake technology.
Cyberstalking and Harassment Laws – In both the U.S. and the UK, laws against harassment have been applied to cases where manipulated media is used to cause emotional harm or blackmail.
Defamation Laws – When deepfakes are used to spread false and damaging information, victims can pursue defamation claims, though the challenge remains in proving that the media is manipulated.
Data Protection and Privacy Laws – In the EU, the GDPR gives individuals stronger control over their personal data, including their likeness, and creating or distributing manipulated media that uses a person's data without a lawful basis can lead to legal consequences.
Conclusion
Blackmail using deepfake technology is a rising threat, and the cases mentioned illustrate the various ways it can manifest in society. While legal frameworks are evolving to address these new challenges, they still face gaps in tackling AI-driven harms. The law must continue to adapt to the unique problems posed by deepfakes, and legislators worldwide are working to create more robust protections for victims of such cybercrimes.
