GOP-Led Bill Seeks To Hold Facebook, Instagram, X, And Other Platforms Responsible For Policing And Removing Deepfake Porn Images


The surge in deepfake AI-generated pornographic content has spurred U.S. lawmakers to introduce a bill aiming to mandate social media companies like Meta Platforms Inc.’s Facebook, Instagram, and Elon Musk’s X, among others, to remove such images from their platforms.

What Happened: The bill, named the Take It Down Act, would make it illegal to publish or threaten to publish deepfake pornographic images. Social media platforms would be required to remove these images within 48 hours of a valid request from the victim, reported CNBC on Tuesday.

The bill, sponsored by Sen. Ted Cruz (R-TX), would also mandate social media companies to make reasonable efforts to remove any other copies of the images, including those shared in private groups. The Federal Trade Commission would be responsible for enforcing these new rules.

"By creating a level playing field at the federal level and putting the responsibility on websites to have in place procedures to remove these images, our bill will protect and empower all victims of this heinous crime," Cruz said in a statement, according to the report.

The legislation will be introduced on Tuesday by a bipartisan group of senators, joined by victims of deepfake porn.

See Also: Apple’s AI Privacy Measures, Elon Musk’s Robot Prediction, And More: This Week In Artificial Intelligence

Deepfake AI-generated images have affected a wide range of individuals, from celebrities like Taylor Swift to high school students.

While there is a consensus in Congress about the need to address deepfake AI pornography, there are two competing bills in the Senate. Sen. Dick Durbin (D-IL) introduced a bipartisan bill earlier this year that would allow victims of non-consensual deepfakes to sue individuals who had created, possessed, or distributed the images.

Why It Matters: The rise of deepfake scams has become a significant threat, with criminals using generative AI to manipulate videos, audio, and images of real individuals, leading to substantial financial losses. Cybersecurity experts have warned that the situation could worsen as AI technology continues to evolve.

Amid these concerns, TikTok, the popular video-sharing app, announced that it would automatically label AI-generated content. The UK government is also preparing to criminalize the creation of sexually explicit deepfake images, attributing the rise of deepfake images and videos to rapid advancements in artificial intelligence.

Meanwhile, Microsoft and OpenAI have joined forces to fight AI deepfakes, starting a $2 million fund to support research and initiatives that combat the spread of disinformation and the misuse of AI technologies.

Read Next: Apple’s AI Endeavors, iOS 18, And A Spat With Elon Musk: This Week In Appleverse

Image Via Shutterstock

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
