Recently, explicit AI-generated photos of Taylor Swift and her boyfriend, Travis Kelce, surfaced on the social media platform X (formerly known as Twitter).
The photos were eventually taken down, but not before they were viewed roughly 45 million times, and many copies remain online.
Images such as these are not difficult to create. First, pictures of the person are collected. Then, using AI that is easily accessible through so-called “nude generators,” those photos are analyzed to recreate the person's features, such as their face and body, in a fabricated nude state. The software infers what the person might look like unclothed from the source photos and renders the result so realistically that the images can pass for genuine photographs. None of this requires technical knowledge; the images can be made with any of a number of online apps. And to be clear, they are not being made with mainstream AI tools such as ChatGPT or Claude, but with smaller, purpose-specific applications.
While this has caught the public’s attention, it is not a new phenomenon. Other high-profile cases include at least 30 high school girls in Spain, one of them only 11 years old, who were targeted this way in the spring, and a number of girls in Westfield, NJ, who were also victims. In Canada, police launched an investigation in December after fake nude photos of female students at a Grade 7-12 French immersion school in Winnipeg were shared online. All the perpetrators needed were photos of the girls that had already been published online.
Previous estimates by Wired show that in the first nine months of 2023, at least 244,635 deep-fake videos were uploaded to the top 35 websites that host deep-fake pornography. It is not known how much of that content was made with the subjects' consent and how much was not.
“It’s not just celebrities [targeted],” said Danielle Citron, a professor at the University of Virginia School of Law. “It’s everyday people. It’s nurses, art and law students, teachers, and journalists. We’ve seen stories about how this impacts high school students and people in the military. It affects everybody.”
Experts estimate that there are hundreds of thousands of similar photos of young children circulating online.
This problem is likely only to grow worse. There are many such image generators, and building one does not require significant technical skill. The images can be created anywhere in the world and posted on any platform.
As it becomes easier to produce convincing fake videos and immersive environments, it is not difficult to imagine what is coming.
Despite the obviously unethical nature of the activity, it is difficult to limit, at least for now.
For a number of reasons, the problem is difficult to solve through legal channels.
First, at least in the United States, only 10 states have laws against this, and it is not a federal crime, though in the case of minors, laws against the sexual exploitation of minors might apply (depending on how they are interpreted). In May 2023, Rep. Joe Morelle introduced the Preventing Deepfakes of Intimate Images Act to criminalize the non-consensual sharing of sexual deep-fake images online. The bill was referred to the House Committee on the Judiciary but has not seen any progress since.
Enforcing a limit on production would be difficult, but legislation under consideration would prohibit the sharing of such photos online. If passed, it would at least deter widespread sharing.
Second, the sheer volume of such images will make enforcement very difficult for police. Enforcement also requires technical skills, and law enforcement resources are limited, especially relative to the size of the problem.
Individuals can put more pressure on social media companies to take the images down, and the companies certainly could do better in this regard, but it is difficult, even with AI, to detect and remove images quickly. Facebook reports that its AI proactively detects roughly 95% of the hate speech it removes, but even if social media companies could match that rate for these images, or push it to 99%, a lot of damage would still be done. And, of course, there is a great deal of private sharing, which is difficult to discover and track.
Beyond legal action, there are some things private actors could do to limit this.
Terminating the accounts of individuals who share these images, or any nude image, could, again, deter sharing.
Pressure can be put on companies such as Visa, Mastercard, and PayPal that process payments for these apps. In some cases, that has proven effective.
More pressure could be put on search engine companies such as Google and Bing to remove these images from their search results. Although both companies prohibit non-consensual intimate imagery, it is difficult to remove such images continually because they are often indistinguishable from real photos.
What does this mean for educators?
First, schools need to be prepared for the fact that more and more cases like these are going to arise among their students and that this will create in-school complications, as those sharing the photos are often classmates.
Second, schools should address the harms of these actions in any AI literacy program. While ethical admonishments may only go so far, they can help reduce the problem, and in the absence of much current legal recourse, education is the best available solution.
Third, schools should work to educate the broader community about such issues.
Fourth, schools should decide in advance how they will respond to students who engage in such actions while using school networks, during the school day, or at school-sanctioned events.
This is expected to remain a widespread problem even if laws are eventually passed, and it will be made worse by video and immersive environments. It is important for school administrators to understand this issue and to be prepared with an appropriate response.
As society integrates more and more of an AI-driven world into daily life, there will be many challenges. This is one of them.