

On social media, explicit AI-generated images of Taylor Swift spread quickly.

Taylor Swift arrives at the world premiere of the concert film "Taylor Swift: The Eras Tour" on Wednesday, Oct. 11, 2023, at AMC The Grove 14 in Los Angeles. (AP Photo/Chris Pizzello)

This week, disturbingly explicit fake images of Taylor Swift, likely generated by AI, spread like wildfire on social media, upsetting fans and prompting lawmakers to call for stronger safeguards against such content.

One particularly troubling image, initially shared on X (formerly Twitter), drew 47 million views before the account responsible was suspended on Thursday. Even as X suspended accounts sharing the fakes, the images continued circulating on other social media platforms.

In protest, Swift’s fans flooded X with related keywords and the plea “Protect Taylor Swift,” seeking to drown out the explicit posts and make them harder to find.

Cybersecurity company Reality Defender said the explicit images were likely created using a diffusion model, an AI image-generation technology available through a wide range of apps and publicly released models.

The booming AI industry has produced tools that let users create manipulated content with ease, making it simpler and cheaper than ever to produce deepfakes: fabricated content that depicts people doing or saying things they never did.

Researchers are increasingly worried that deepfakes are becoming a powerful tool for disinformation and abuse, enabling users to create nonconsensual nude images or embarrassing portrayals of public figures. The explicit images of Swift underscore how accessible that threat has become.

X says it maintains a zero-tolerance policy toward such content and works to remove identified images and take action against the accounts responsible. However, concerns have grown about a surge in problematic content on X since Elon Musk acquired the platform in 2022, as looser content rules and cuts to moderation staff have weakened its ability to manage harmful material.

Although some companies prohibit users from creating explicit imagery with their generative AI tools, people continue to find ways around those rules, posing an ongoing challenge for platforms. The images originated in a Telegram channel dedicated to producing such content and gained widespread attention only after being shared on X and other social media services.

While some states have imposed restrictions on pornographic and political deepfakes, the impact has been limited, and there are currently no federal regulations addressing these issues. Lawmakers, including Representative Joe Morelle and Senator Mark Warner, have renewed calls for action and legislation to combat the disturbing trend of AI-generated explicit content. The incident involving Taylor Swift highlights the urgency of addressing the broader challenges posed by deepfake technology.

