Google Takes Action Against Deepfake Porn In Search Results As Others Like Mark Zuckerberg's Meta And Elon Musk's X Also Tackle The Issue

Alphabet Inc.’s (GOOG, GOOGL) Google has introduced new measures to combat the spread of explicit deepfake content in its search results.

What Happened: In a blog post on Thursday, Google product manager Emma Higham said the tech giant is introducing new online safety features designed to make it easier to remove explicit deepfakes from Search and to prevent them from ranking highly in results.

“These protections have already proven to be successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for fake explicit images as well,” she stated.

See Also: Mark Zuckerberg’s Meta Blames Hallucination For AI Assistant Incorrectly Denying Trump Assassination Attempt — But What About Google?

Google is also modifying its Search rankings to better manage queries that carry a higher risk of surfacing explicit fake content. Sites that receive a significant number of removals for fake explicit imagery will be demoted in Google Search rankings.

The company disclosed that previous updates have reduced exposure to explicit image results for queries specifically seeking deepfake content by over 70% this year.

Google is also working on distinguishing between real and fake explicit content so that legitimate images can still be surfaced while demoting deepfakes.

“While differentiating between this content is a technical challenge for search engines, we’re making ongoing improvements to better surface legitimate content and downrank explicit fake content,” Higham noted.


Why It Matters: In January this year, AI-generated pornographic images of Taylor Swift circulated widely on Elon Musk’s X, formerly Twitter, triggering a major uproar.

While the platform eventually removed those images, the incident provoked wider concern among users, tech giants, and lawmakers alike.

Last month, U.S. lawmakers introduced a bill, the Take It Down Act, that would require social media companies such as Meta Platforms Inc. (META) and X to remove such images from their platforms.

Earlier this month, Meta’s internal Oversight Board called for clearer regulations against AI-generated pornographic content. This followed the identification of two pornographic deepfakes of prominent women on Meta’s platforms.

Musk’s social media network X also witnessed a disturbing surge in pornographic content, causing discomfort among its users. The platform subsequently revised its content policy to include an opt-in mechanism for adult content.

Photo Courtesy: Shutterstock.com


Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
