
Stable Signatures: An Effective Route to Regulating AI-Generated Child Sex Abuse Material 

Content warning: This piece covers child sex abuse and child sex abuse material.  

Background  

In fall 2023, more than 20 young girls in a small town in southern Spain were the targets of deepfake nude images. These images were created using the artificial intelligence-powered app “Clothoff,” and the incident sparked uproar across Spain.1

This, however, was not a one-off incident. The Internet Watch Foundation warns that, without legislative intervention, there may soon be a flood of such images. These AI-generated images and videos, which experts now call child sex abuse material (CSAM) rather than the older term “child pornography,” could spread quickly across social media. Not only will this hinder investigators working to identify victims of child abuse, but it may also increase demand for such images and give perpetrators more tools to coerce or groom victims.2

Creation of Deepfakes 

As of 2019, 96% of deepfakes online were pornographic and targeted women.3 The long-term effects of AI-generated CSAM likely parallel the effects of revenge pornography, which a 2016 study found “can result in lifelong mental health consequences for victims, damaged relationships, and social isolation.”4 This is not merely a crisis for law enforcement but also a public health emergency for children.

In June 2023, the BBC found that AI-generated CSAM is being produced at an “industrial scale,” much of which is sold on the US-based site Patreon. Although the images of the girls in Spain were created using an app, most AI-generated CSAM is created using the AI image generator Stable Diffusion. 

Stable Diffusion includes code that restricts users from creating CSAM, but because the software is open source, users can simply delete the lines of code that impose the restriction. With those safeguards stripped out, the resulting AI-generated CSAM is untraceable.5

The decentralized structure of the internet, along with the legal safeguards for websites hosting user-generated content and the speed with which taken-down sites are replaced by similar ones, renders the permanent removal of images and videos from the internet virtually impossible. This challenge persists even in cases involving illegal, traditional CSAM.6

Legislative Responses and Their Shortcomings 

In September 2023, attorneys general from all 50 states urged Congress to investigate AI-generated CSAM and design legislation to stop its proliferation.7 No legislation has been introduced to combat the proliferation of AI-generated CSAM specifically, but a few notable bills currently in Congress tangentially address the issue. None of them, however, sufficiently handles the threat of AI-generated CSAM, and some pose other significant risks to the internet landscape.

The DEEPFAKES Accountability Act of 2023 would require all AI-generated imagery to contain a watermark. Representative Yvette Clarke (D-NY) originally proposed the legislation in 2019, but it has so far been unsuccessful.8 The key drawback of this approach is that users can delete the code that applies a traditional watermark from the open-source software, create the CSAM, and continue to publish or circulate it anonymously. This legislation is therefore unlikely to hinder creators of AI-generated CSAM.

The Preventing Deepfakes of Intimate Images Act would criminalize the nonconsensual sharing of deepfakes. The bill extends the rights of victims of nonconsensual intimate material from the Violence Against Women Act (VAWA) Reauthorization Act to victims of AI-generated pornography and CSAM.9 After a similar bill failed in 2022, Representative Joseph Morelle (D-NY) reintroduced the bill this past spring.10 While this legislation would provide important protections for victims, it does not address the root cause of AI-generated CSAM or create measures to address the potential flood of AI-generated CSAM on the internet.  

The Senate Judiciary Committee recently advanced Senator Dick Durbin’s (D-IL) Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment Act (STOP CSAM Act). This legislation would create an exception to Section 230 of the Communications Decency Act of 1996 so that online platforms could be held liable for hosting CSAM.

Section 230 has long provided immunity to online platforms concerning user-generated content. If the STOP CSAM Act were passed, it would effectively cause all platforms to remove any CSAM, given that the courts would be able to find a platform liable for even unknowingly hosting CSAM. However, the degradation of Section 230 would drastically change the internet landscape. The Electronic Frontier Foundation predicts that the STOP CSAM Act would weaken privacy protections, particularly end-to-end encryption.11

Furthermore, the ACLU believes that the STOP CSAM Act “will lead to censorship of First Amendment-protected speech, including speech about reproductive health, sexual orientation and gender identity, and personal experiences related to gender, sex, and sexuality.”12 Although the STOP CSAM Act would eliminate CSAM, the legislation would endanger a pillar of American democracy and imperil marginalized groups.

Finding an Effective Approach 

New research suggests a better alternative to the legislation currently on the table: stable signatures. Stable signatures are similar to watermarks in that they identify the source of the AI-generated material. However, the stable signature method embeds the watermark within the generative process itself, leaving no separate watermarking code for users to delete.
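To make the contrast with traditional watermarks concrete, the sketch below illustrates the core idea behind the Stable Signature paper: rather than stamping finished images in a separate post-processing step, the image decoder of a latent diffusion model is fine-tuned so that everything it outputs already carries a fixed, user-specific bit string. The model classes, signature length, and training loop here are simplified stand-ins for illustration, not the actual Stable Diffusion or Stable Signature code.

```python
# Minimal sketch of the stable signature idea, assuming toy stand-in models.
import torch
import torch.nn as nn

SIGNATURE_BITS = 48  # length of the per-user binary signature (assumed for illustration)

class ToyLatentDecoder(nn.Module):
    """Stand-in for the VAE decoder that maps diffusion latents to RGB images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, z):
        return self.net(z)

class ToyWatermarkExtractor(nn.Module):
    """Stand-in for a pre-trained network that reads hidden bits out of an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, SIGNATURE_BITS))
    def forward(self, img):
        return self.net(img)  # one logit per signature bit

def finetune_decoder_with_signature(decoder, extractor, signature, steps=100):
    """Fine-tune only the decoder so that its outputs decode to `signature`."""
    for p in extractor.parameters():      # the extractor stays frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        z = torch.randn(4, 4, 32, 32)     # latents that would come from the diffusion process
        imgs = decoder(z)                 # generated images
        bit_logits = extractor(imgs)      # what the extractor reads back from them
        loss = bce(bit_logits, signature.expand(4, -1))
        # The real method also adds an image-quality loss so outputs stay visually
        # unchanged; omitted here for brevity.
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder

user_signature = torch.randint(0, 2, (1, SIGNATURE_BITS)).float()
decoder = finetune_decoder_with_signature(ToyLatentDecoder(), ToyWatermarkExtractor(), user_signature)
# Every image this decoder now produces carries the signature in its pixels,
# with no separate watermarking step left for a user to delete.
```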

The researchers behind the method report that stable signatures “are robust, invisible to the human eye and can be employed to detect generated images and identify the user that generated it, with very high performance.”13 This method could help law enforcement detect CSAM and easily determine whether the images and videos depict real-world victims of child sex abuse.
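On the enforcement side, the idea is that an extractor network reads the hidden bits back out of a suspect image and matches them against a registry of issued signatures, identifying which deployment generated the image. The sketch below shows only that matching logic; the extraction step is stubbed out, and the registry, threshold, and function names are illustrative assumptions rather than anything specified in the paper.

```python
# Minimal sketch of signature matching, assuming a registry of issued signatures.
import random

SIGNATURE_BITS = 48
MATCH_THRESHOLD = 0.90   # assumed: fraction of bits that must agree to declare a match

def extract_bits(image):
    """Placeholder for running the trained watermark extractor on one image."""
    raise NotImplementedError("stub: run the trained extractor on the image")

def identify_source(extracted, registry):
    """Return the registered user whose signature best matches the extracted bits,
    provided the agreement clears the threshold; otherwise None (no watermark found)."""
    best_user, best_score = None, 0.0
    for user, signature in registry.items():
        agreement = sum(a == b for a, b in zip(extracted, signature)) / SIGNATURE_BITS
        if agreement > best_score:
            best_user, best_score = user, agreement
    return best_user if best_score >= MATCH_THRESHOLD else None

# Example with synthetic bits: two issued signatures, and an "extracted" bit
# string that matches user_a except for two flipped bits (simulating noise).
random.seed(0)
registry = {
    "user_a": [random.randint(0, 1) for _ in range(SIGNATURE_BITS)],
    "user_b": [random.randint(0, 1) for _ in range(SIGNATURE_BITS)],
}
extracted = list(registry["user_a"])
extracted[0] ^= 1
extracted[1] ^= 1
print(identify_source(extracted, registry))  # expected: "user_a"
```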

To curb the proliferation of AI-generated CSAM, Congress should pass or amend proposed legislation to require all open-source generative AI software to include stable signatures. Congress should also criminalize the creation, possession, and distribution of AI-generated CSAM. This will deter predators from creating AI-generated CSAM, since doing so will become both traceable and illegal.

The alarming rise of AI-generated CSAM demands immediate attention from Congress, and no current legislation adequately addresses this crisis. The DEEPFAKES Accountability Act and the Preventing Deepfakes of Intimate Images Act would not stop the flood of AI-generated CSAM online. The STOP CSAM Act would eliminate CSAM, but it would likely obstruct First Amendment rights.

Stable signatures are the most effective and comprehensive solution to this problem. Mandating stable signatures in open-source generative AI software is crucial to protecting potential victims while upholding the principles of privacy and free speech, ensuring a secure digital landscape.

1. Jack Guy, “Outcry in Spain as Artificial Intelligence Used to Create Fake Naked Images of Underage Girls,” CNN, September 20, 2023, https://www.cnn.com/2023/09/20/europe/spain-deepfake-images-investigation-scli-intl/index.html.
2. Matt O’Brien and Haleluya Hadero, “AI-Generated Child Sexual Abuse Images Could Flood the Internet. Now There Are Calls for Action,” AP News, October 24, 2023, https://apnews.com/article/ai-artificial-intelligence-child-sexual-abuse-c8f17de56d41f05f55286eb6177138d2.
3. Aja Romano, “New Deepfakes Research Finds They’re Mainly Used to Degrade Women,” Vox, October 7, 2019, https://www.vox.com/2019/10/7/20902215/deepfakes-usage-youtube-2019-deeptrace-research-report.
4. Mudasir Kamal and William J. Newman, “Revenge Pornography: Mental Health Implications and Related Legislation,” Journal of the American Academy of Psychiatry and the Law Online 44, no. 3 (September 1, 2016): 359–67.
5. Angus Crawford and Tony Smith, “Illegal Trade in AI Child Sex Abuse Images Exposed,” BBC News, June 27, 2023, https://www.bbc.com/news/uk-65932372.
6. Richard Wortley and Stephen Smallbone, “Child Pornography on the Internet,” ASU Center for Problem-Oriented Policing, January 1, 2006, https://popcenter.asu.edu/content/child-pornography-internet-0.
7. Meg Kinnard, “Prosecutors in All 50 States Urge Congress to Strengthen Tools to Fight AI Child Sexual Abuse Images,” AP News, September 5, 2023, https://apnews.com/article/ai-child-pornography-attorneys-general-bc7f9384d469b061d603d6ba9748f38a.
8. Emmanuelle Saliba, “Bill Would Criminalize ‘Extremely Harmful’ Online ‘Deepfakes,’” ABC News, September 25, 2023, https://abcnews.go.com/Politics/bill-criminalize-extremely-harmful-online-deepfakes/story?id=103286802.
9. Nihal Krishan, “AI Deepfake Detection Requires NSF and DARPA Funding and New Legislation, Congressman Says,” FedScoop (blog), November 9, 2023, https://fedscoop.com/ai-deepfake-detection-requires-nsf-and-darpa-funding-and-new-legislation-congressman-says/.
10. “Bill Tracking in US – HR 3106 (118 Legislative Session),” FastDemocracy, accessed November 16, 2023, https://fastdemocracy.com/bill-search/us/118/bills/USB00075341/.
11. Andrew Crocker and Sophia Cope, “The STOP CSAM Act: Improved But Still Problematic,” Electronic Frontier Foundation, May 10, 2023, https://www.eff.org/deeplinks/2023/05/stop-csam-act-improved-still-problematic.
12. American Civil Liberties Union, “ACLU Urges Congress to Strike Down Dangerous Legislation Threatening to Destroy Digital Privacy and Free Speech Online,” September 25, 2023, https://www.aclu.org/press-releases/aclu-urges-congress-to-strike-down-dangerous-legislation-threatening-to-destroy-digital-privacy-and-free-speech-online.
13. Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon, “The Stable Signature: Rooting Watermarks in Latent Diffusion Models,” arXiv, July 26, 2023, https://doi.org/10.48550/arXiv.2303.15435.