How Pedophiles Are Using Artificial Intelligence to Exploit Kids

Artificial intelligence (more commonly known as “AI”) has gained attention and popularity in recent months, particularly since the launch of the ChatGPT chatbot from OpenAI, which captivated the internet with both its impressive abilities and surprising limitations. The millions of AI users in the U.S. are mostly eager to cheat on their homework or escape writing work emails; however, some bad actors have also discovered how to employ AI technology to attain far more nefarious ends.

Britain’s National Crime Agency is conducting a review of how AI technology can contribute to sexual exploitation after the recent arrest of a pedophile computer programmer in Spain shocked the continent. The man was found to have been using an AI image generator to create new child sexual abuse material (CSAM) based on abusive images of children he already possessed. The Daily Mail noted that the “depravity of the pictures he created appalled even the most experienced detectives …”

Many AI programs are trained on input data or content that teaches the program to recognize patterns and sequences and to recreate them in new ways. When pedophiles or sexual abusers get their hands on AI, they can further exploit the victims featured in real images to create new — and even more graphic — content. Though the AI-generated images are not “real” in the sense of being photographs of events that actually transpired, they are nevertheless inherently exploitative of the victims used to train the AI, remnants of whose images may still appear in the new CSAM.

Another application of artificial intelligence that has gained recent notoriety is the “deepfake.” In these unsettling images, audio clips, or videos, AI creates shockingly realistic manipulations of an individual’s likeness or voice in any scenario the creator desires. While deepfakes can be used in a variety of harmful contexts, like depicting a political candidate in a situation that would damage his reputation, sexual predators who weaponize the technology have proven to be particularly vicious.

Last week, discussion of deepfake technology reached a fever pitch as a community of female online content creators discovered that their likenesses had been used to create pornographic deepfakes uploaded online. The women who were victimized reported feeling extremely violated and deeply unsettled by the knowledge that this pornographic content had been created and distributed without their consent — and that people who knew them personally had been watching the deepfakes to satisfy their own perversions. Deepfake technology knows few bounds; pedophiles with access to images of children could similarly employ this form of AI to create CSAM.

The normalization of AI-created pornography or child sexual abuse material serves no beneficial purpose in society — and, in fact, can influence cultural mores in profoundly harmful ways. Already, the technological capability to manufacture AI-generated CSAM has emboldened pedophile sympathizers to advocate for pedophilia’s inclusion under the liberal umbrella of sexual orientations.

The Young Democrats, the youth division of the Democratic Party in the Netherlands, recently issued a statement claiming not only that pedophilia is “a sexual orientation that one is born with,” but also that the “stigma” surrounding pedophilia is causing pedophiles to suffer from higher rates of depression and suicidal thoughts. The Dutch Young Democrats group advocates against criminalizing hand-drawn or AI-generated child sexual abuse material because it “does not increase the risk of child abuse” and could potentially “help pedophiles get to know their feelings without harming other people.”

Pornography of any kind is inherently exploitative — the pornography industry thrives off dubious consent and, often, knowing exploitation of trafficking victims and minors. Using AI technology to create images or videos that constitute pornography or child sexual abuse material perpetuates a chain of abuse even if the new content created is different from abuse that physically occurred.

AI-generated pornography or CSAM cannot circumvent the extreme violations of human dignity caused by creating exploitative sexual content. Modern nations require laws that appropriately address modern concerns; while the progression of AI technology can, in some ways, certainly benefit society, its capability to produce exploitative material and allow the rot of pedophilia to continue festering must be addressed.

AUTHOR

Joy Stockbauer

Joy Stockbauer is a correspondent for The Washington Stand.

RELATED ARTICLES:

‘Godfather of AI’ Quits Google, Shares Warning About AI’s Potential For Destruction

The World Economic Forum’s ‘AI Enslavement’ is Coming for YOU!

‘Common Sense’: UK Bars Most Men from Women’s Prisons

HICE: Why the Dramatic Increase in Teen Girl Suicides?

Corruptions of Sexuality Mean Opportunities for the Truth

The Equal Rights Amendment Seeks to Erase, Not Help, Women

EDITOR’S NOTE: This Washington Stand column is republished with permission. ©All rights reserved.


The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.

1 reply
  1. Anonymous says:

    Thank you Dr. Swier, great article. I’ve been using AI image generation tools and I’ve seen some of the things you discuss. I concur they’re revolting, and I’d like to offer some thoughts:
    1. In my opinion the matter is more complex ethically than you depict it. The AI does not necessarily need to be trained on CSAM material in order to reproduce it. To use an analogy, we can train it on three different concepts separately – e.g. beach, dog, and bowler hat. Once done, it will be able with some success to produce “an image of a dog sitting on a beach wearing a bowler hat,” even though it never “saw” this particular combination as part of its training. As long as the AI knows the concept of a child (from e.g. stock photos) and erotic content from adult-only sites, it will be able to combine them into something looking like CSAM. I hope you now understand where the complexity and dilemma open up – there is no victim (the person doesn’t exist, if it’s not a manipulation of an existing photo) and no abuse (the act never happened). I frankly do not know what to think of this, but I do support the opinion that this kind of content poses a risk of encouraging real crime.
    2. There is more than enough evidence to support claims of abuse and exploitation in the industry, but not all adult content is exploitative – the phenomenon of platforms like OnlyFans proves that and is on the rise. After seeing the rapid rise of AI pornography, I believe we will see this industry polarize. On one side, we will be literally flooded with an endless stream of generative content, tailored and customized to our desires but in the end always “soul-less.” On the other side will be real-life people and couples monetizing their intimate lives and offering a touch of authenticity. Existing big platforms will be, if not already, in decline, hopefully together with the associated exploitation you wrote about.
