Deepfake Pornography: New Forms of Image-Based Harm and Sexual Abuse

Makayla Mason
Doctoral Student

Karen Holt, Ph.D.
Assistant Professor

School of Criminal Justice
Michigan State University


Introduction

Recently, MIT Technology Review published an article entitled "A horrifying new AI app swaps women into porn videos with a click" (Hao, 2021). The article detailed a new application, which the outlet refused to name so as not to drive traffic to the site, advertising that anyone could use deepfake technology to "turn" someone into a porn star by transposing pictures of faces onto bodies or by stripping clothes off women's bodies. The technology initially became popular when it was used to make pornographic videos using images of celebrities, but as more content has been created, produced, and uploaded, the targets of deepfakes now include people who are not in the spotlight or under the public eye, and who are traditionally most often victimized by image-based sexual abuse: women in the general public. This form of technology-facilitated sexual abuse is unique in its method and consequences, and the law continues to struggle to keep pace with the emerging issues and harms.

The Rise of Deepfake Tech

Doctoring images and videos is not a new phenomenon; what has changed is the accessibility and efficiency of the technology. Deepfakes are defined as "the product of artificial intelligence (AI) applications that merge, combine, replace, and superimpose images and video clips to create fake videos that appear authentic" (Westerlund, 2019, p. 39). The word "deepfake" is a combination of "deep learning" and "fake." Deepfake pornography originated on Reddit in 2017, when a user with the screen name "deepfakes" used AI to create pornographic videos in which the faces of celebrities were superimposed onto the bodies of adult actors (Worrall, 2021). The user created a group on Reddit titled r/deepfakes so that others could share their deepfake porn creations. Reddit banned the group in 2018; by then, however, word had spread about the uses of deepfake technology and how it could be abused. Deepfake technology became publicly known and increasingly user-friendly. The creation of deepfake porn can be completed in a few editing steps: all that is needed is an image of a person and a pornographic video. Deepfakes are difficult to detect, meaning viewers usually believe them to be real and continue to share the content (Westerlund, 2019).

Sensity is a research company focused on AI-powered identity and fraud solutions. It has tracked online deepfakes since December 2018 and has published several reports detailing the salient issues. Sensity reports that the number of deepfakes online has grown exponentially since 2018, roughly doubling every six months. One consistent finding is that between 90% and 95% of deepfakes are nonconsensual porn and that approximately 90% of that porn depicts women (Hao, 2021). Similarly, the cybersecurity company Deeptrace has reported that 96% of all deepfakes online are pornographic and that the top five deepfake pornography websites exclusively target women (Sherman, 2021). The magnitude of the problem continues to grow: an estimated 1,000 nonconsensual deepfake videos are uploaded to porn sites monthly, and this figure is likely an undercount, as even the best detection technology identifies deepfakes only about 65% of the time (Fergus, 2020; Maxwell, 2020).

Types of Deepfake Creators

Extant research has examined the individuals who create deepfake images. For example, Westerlund (2019) identified four types of producers, differentiated by their motivations for creation and how the images are used. The first two types are communities of deepfake hobbyists and legitimate actors, such as television companies. These producers use the technology non-maliciously; for example, when an actor dies unexpectedly, filmmakers can superimpose the late actor's face onto a live stand-in. The last two types are political players, such as foreign governments and activists, and other malevolent actors. These two types of deepfake producers use the technology to create disturbing and harmful content, including pornographic materials.

Deepfake Porn as Image-Based Sexual Abuse

Deepfake technology has become a weapon used by perpetrators of sexual abuse and intimate partner violence. In intimate partner violence, individuals may create and use deepfake porn to extort and manipulate their partners and control their behavior. In addition to women, children are increasingly targets of deepfake porn, in the form of "morphed" child sexual abuse material (CSAM), in which the faces of children are superimposed onto the bodies of adult actors engaged in sex acts. Deepfake porn allows for the creation of sexually explicit materials that can be tailored to the specific fantasies of individuals and creates opportunities for offenders to design their own CSAM.

Victims of deepfake porn face several challenges. First, they may not know who created the images or videos, making apprehension of the creator difficult. While they can request to have the images and videos removed, once the content is online it is never truly "gone"; it is often reposted and reemerges on other sites. For these victims, the pain and trauma associated with deepfakes is unique: they are misrepresented in a degrading and violating manner. Unlike revenge porn, where content is often made with the knowledge and consent of the victim and then distributed nonconsensually, deepfake content is completely fabricated and manufactured, creating additional damaging consequences for victims. Deepfake porn is unique in that it not only depicts constructed material but also has the power to shape what consumers believe (Lee, 2018). This quality makes deepfake porn victimization especially devastating.

Legal Challenges and Policy Implications

Much like other forms of image-based sexual abuse, existing legislation is best designed to protect child victims. In cases involving morphed CSAM, existing CSAM laws can be used to prosecute defendants, although there have been legal challenges on First Amendment grounds. Federal appellate courts have addressed this issue in cases such as United States v. Hotaling (2011) and United States v. Mecham (2021), ruling that the mere possession of pornographic images of children, even when the images are morphed and no children are engaged in the sexually explicit conduct depicted, is a criminal act not protected by the First Amendment (U.S. v. Hotaling, 2011). The U.S. Supreme Court has yet to address the subject directly (Comerford, 2021).

For adult victims, developing and enforcing legislation has been more challenging, particularly when it comes to enforcement. Much legislative attention has focused on the political uses of deepfakes. California recently passed a law banning deepfakes, with specific language criminalizing the creation or distribution of doctored videos, images, or audio of politicians within 60 days of an election; the law also granted victims of pornographic deepfakes the right to sue if their images are used (Paul, 2019). With no federal laws that specifically address deepfake porn, the issue is left to the states. Most states have laws against nonconsensual pornography, but only two, California and Virginia, address deepfakes in those laws (Hao, 2021; Sherman, 2021). While revenge porn legislation exists, deepfake porn does not meet the criteria for this type of crime because the content created and distributed is not "real." Yet despite the fact that the images are doctored and the content is "fake," the consequences for victims of deepfake porn are very real.

Because deepfake technology itself has legitimate uses, it is difficult to regulate. Yet there are several ways in which the problem of deepfakes can be approached. These include legislation and regulation, corporate policies and voluntary action, education and training, and the development of better anti-deepfake technology (Westerlund, 2019). It is critical to develop a multifaceted approach to prevent the technology from being weaponized against children and women.

References

Burgess, M. (2020). Deepfake porn is now mainstream. And major sites are cashing in. Retrieved from https://www.wired.co.uk/article/deepfake-porn-websites-videos-law

Comerford, T. (2021). No child was harmed in the making of this video: Morphed child pornography and the First Amendment. Boston College Law Review, 62(9), 323-343.

Fergus, J. (2020). Deepfake porn is getting easier to make and it's coming for regular people soon. Retrieved from https://www.inputmag.com/tech/deepfake-porn-is-getting-easier-to-make-thats-bad-news-for-everyone

Hao, K. (2021). Deepfake porn is ruining women’s lives. Now the law may finally ban it. Retrieved from https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/

Hao, K. (2021). A horrifying new AI app swaps women into porn videos with a click. Retrieved from https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/

Lee, D. (2018). Deepfake porn has serious consequences. Retrieved from https://www.bbc.com/news/technology-42912529

Maxwell, T. (2020). Even the best deepfake detector only works 65% of the time. Retrieved from https://www.inputmag.com/tech/even-the-best-deepfake-detector-only-works-65-of-the-time

Paul, K. (2019). California makes ‘deepfake videos’ illegal but law may be hard to enforce. Retrieved from https://www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce

Sensity Team. (2021). How to detect a deepfake online. Retrieved from https://sensity.ai/blog/deepfake-detection/how-to-detect-a-deepfake/

Sherman, J. (2021). “Completely horrifying, dehumanizing, degrading”: One woman’s fight against deepfake porn. Retrieved from https://www.cbsnews.com/news/deepfake-porn-woman-fights-online-abuse-cbsn-originals/

United States v. Hotaling, 634 F.3d 725 (2d Cir. 2011).

Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39-52.

Worrall, W. (2021). What are deepfakes and why are they dangerous? Retrieved from https://hacked.com/what-are-deepfakes-and-why-are-they-dangerous/
