This week, millions of people encountered fake sexually explicit AI-generated images of Taylor Swift on various social media platforms, prompting concerns about the unregulated use of AI technology.
The White House expressed alarm over the incident, with Press Secretary Karine Jean-Pierre stating, “We are alarmed by the reports of the…circulation of images that you just laid out – of false images to be more exact, and it is alarming.” She emphasized that social media companies should enforce their own rules to prevent the spread of misinformation and the non-consensual sharing of intimate imagery.
While the administration has taken steps to address online harassment and abuse, there is currently no federal law in the U.S. preventing the creation and sharing of non-consensual deepfake images. Outraged fans and lawmakers are pushing for legislative action.
Rep. Joe Morelle recently renewed efforts to pass the “Preventing Deepfakes of Intimate Images Act,” which seeks to criminalize the non-consensual sharing of digitally altered explicit images and would impose both criminal and civil penalties.
Deepfake pornography, made possible by advances in AI technology, has become a pervasive problem. What once required technical expertise is now accessible through consumer apps, fueling a commercial industry built on creating and distributing digitally fabricated sexual-abuse imagery.
The fabricated images of Taylor Swift were likely created with an AI text-to-image tool, and the social media platform X (formerly Twitter) drew criticism for hosting some of them. The account responsible was suspended, but only after the images had been viewed millions of times.
Experts emphasize that comprehensive regulation is urgently needed to address the harm caused by the widespread creation and dissemination of non-consensual deepfake images and to protect individuals from such abuse.