Sexually explicit, AI-generated deepfake images of pop superstar Taylor Swift flooded several social media sites, from X to Reddit and Facebook. The episode has led to renewed calls for stronger laws and regulations on AI, especially when it is misused for sexual harassment.
Here's what you need to know about the Swift episode and the legality of deepfakes.
What happened to Taylor Swift?
On Wednesday, AI-generated sexually explicit images began circulating on social media sites, particularly on X. One image of the megastar was viewed 47 million times in the roughly 17 hours it was on X before it was removed on Thursday.
Deepfake detection group Reality Defender told The Associated Press it tracked dozens of unique images that were spread to millions of people on the internet before being removed.
X blocked searches for Swift's name and photos, displaying an error message instead.
Instagram and Threads still allow searches for Swift, but they show a warning message, particularly on image searches.
What do platforms like X and AI companies say?
On Friday, X's safety account issued a statement reiterating the platform's “zero-tolerance policy” against posting non-consensual nude images. The platform said it removed the images and took action against accounts that violate its policies.
Meta also issued a statement condemning the content, adding that it would “take appropriate action if necessary.”
The company said it is “closely monitoring the situation to ensure that any further violations are immediately addressed and the content removed.”
OpenAI said it has safeguards in place to limit the generation of harmful content in tools such as ChatGPT, and that it rejects requests that ask for public figures, including Taylor Swift, by name.
Microsoft, which offers an image-generation tool based in part on OpenAI's DALL-E, said Friday that it is investigating whether the tool has been misused.
Kate Vredenberg, assistant professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics, said that social media business models are often built on sharing content, and that the usual approach is to clean up after an incident like this rather than to prevent it.
Swift has not released a statement regarding the image. The pop star was seen cheering on her boyfriend Travis Kelce at an NFL game in the US on Sunday as the Kansas City Chiefs advanced to the Super Bowl.
Posting non-consensual nude (NCN) images is strictly prohibited on X and we have a zero-tolerance policy against such content. Our team actively removes all images identified and takes appropriate action against the accounts that posted them. we are close…
— Safety (@Safety) January 26, 2024
What are deepfakes? How can they be exploited?
Swift is not the first celebrity to be targeted using deepfakes, a type of “synthetic media” that is created, manipulated or altered by artificial intelligence.
Users can generate images and videos from scratch by giving an AI tool prompts describing what they want to see, or they can swap someone's face in a video or image with another person's, such as a celebrity's.
Deepfake videos created this way are commonly used to target politicians or to show people engaging in sexual acts they did not actually take part in. Recently, a sexually explicit deepfake of actress Rashmika Mandanna went viral on social media, causing an uproar in India. A gaming app used a deepfake of cricket icon Sachin Tendulkar to promote its product. And in the United States, the campaign of Ron DeSantis, who was then challenging former President Donald Trump for the Republican presidential nomination, shared AI-generated images showing Trump kissing public health expert Anthony Fauci, who is disliked by many conservatives over his advice on masks and vaccines during the COVID-19 pandemic.
Some deepfakes are easy to identify due to their low quality, while others are much more difficult to distinguish from real videos. Several generative AI tools, such as Midjourney, Deepfakes Web, and DALL-E, are available to users for free or at low cost.
More than 96 percent of deepfake images online today are pornographic in nature, and almost all of them target women, according to a report by Sensity AI, an intelligence company focused on deepfake detection.
Are there laws that can protect online users?
Laws specifically addressing deepfakes vary by country and typically range from requiring the disclosure of deepfakes to prohibiting harmful or malicious content.
Ten US states, including Texas, California, and Illinois, have criminal laws against deepfakes. Lawmakers are pushing for similar federal legislation and further restrictions on the tools. Democratic Representative Yvette Clarke of New York has introduced a bill that would require creators to digitally watermark deepfake content.
Non-consensual sexually explicit deepfakes can violate a wide range of laws, depending on the country. The United States has no federal law criminalizing such deepfakes, but it does have state and federal laws targeting privacy, fraud, and harassment.
In 2019, China enacted a law requiring the disclosure of deepfake usage in videos and media. In 2023, the UK made it illegal to share deepfake porn as part of its Online Safety Act.
In 2020, South Korea enacted a law that criminalizes the distribution of deepfakes that harm the public interest and imposes penalties of up to five years in prison or fines of about 50 million won (about $43,000) to deter abuse.
In India, the federal government issued an advisory in December directing social media and internet platforms to guard against deepfakes that violate India's information technology rules. Deepfakes themselves are not illegal in India, but depending on their content they may breach some of those rules.
Hesitancy to increase regulation often stems from concerns that regulation will impede technological progress.
“This is just assuming that we couldn't make these regulatory or design changes to at least significantly reduce it,” Vredenberg said. In some cases, she added, there is also a societal attitude that such incidents are simply the price of such tools, an attitude that marginalizes the victim's perspective.
“It paints them as a minority in society that can be affected for the benefit of all of us,” she said. “And that's a very uncomfortable position for all of us, socially.”
How has the world reacted?
The White House said it was “alarmed” by the images as Swift's fan base, known as Swifties, rallied to take action against them.
“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people,” White House press secretary Karine Jean-Pierre said at a news briefing.
US lawmakers also expressed the need to introduce safeguards.
Since Wednesday, the singer's fans have been quick to report offending accounts and have fired back on X with the hashtag #ProtectTaylorSwift, flooding the platform with more positive images of Swift.
“To do the hard work of putting pressure on companies, we often rely on the users, the affected people, or those in solidarity with them,” Vredenberg said, adding that not every victim can count on this kind of outrage, because not everyone is able to mobilize public pressure in the same way. Whether the backlash will lead to lasting change also remains an open question.