Elon Musk Responds To Child Abuse Material On Mastodon, Says Twitter 'Kicked Them Off So They Went Elsewhere'

Zinger Key Points
  • Elon Musk's Twitter, now called 'X,' claims to have purged child abuse content, but recent findings expose similar material on Mastodon.
  • The reported discovery of child sexual abuse material on Mastodon highlights the need for comprehensive safety measures on decentralized networks.

While Elon Musk’s Twitter, now called ‘X,’ has repeatedly claimed to have purged child abuse content from the platform, shocking findings reveal that Mastodon, the decentralized network championed as an alternative, harbors a dark secret of its own.

What Happened: Twitter, now rebranded as ‘X’ under Musk’s leadership, has previously made bold claims about successfully purging child abuse content from the platform. 

However, a study from Stanford’s Internet Observatory has shed light on a disturbing truth — Mastodon, considered a viable alternative to Twitter, is rife with child sexual abuse material, or CSAM.

See Also: Jack Dorsey Wishes Musk’s Twitter Beats Zuckerberg’s Threads When It Comes To This: ‘Hope They Consider… Nostr’

In response to the report, Musk, who acquired Twitter for a staggering $44 billion in October 2022, made a noteworthy comment. He stated, “We kicked them off this (Twitter) platform, so they went elsewhere.” 

This seemingly points to a migration of users who left the microblogging site to join Mastodon. Between October and November last year, Mastodon’s monthly active users grew from 300,000 to 2.5 million, with much of that growth credited to Twitter’s new leadership failing to convince users to stay.

What Did The Study Find: The Internet Observatory researchers conducted a two-day investigation, scanning the 25 most popular Mastodon instances for CSAM. Their findings were deeply troubling: 112 matches of known CSAM across more than 325,000 posts analyzed. Shockingly, the first match appeared after a mere five minutes of searching.

The researchers employed Google’s SafeSearch API to identify explicit images, alongside Microsoft’s PhotoDNA, a hash-matching tool used to flag known CSAM. The team also found 554 pieces of content matching hashtags or keywords typically used by child sexual abuse groups online, all identified as explicit with “highest confidence” by Google SafeSearch.
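For context, Google’s SafeSearch signal is exposed through the Cloud Vision API’s safe-search detection feature. Below is a minimal sketch of how an image might be classified in a scan like the study’s, assuming the google-cloud-vision Python client library and authenticated credentials; the file name is hypothetical, and the access-restricted PhotoDNA hash-matching step is not shown.

```python
# Minimal sketch: flag likely-explicit images with Google Cloud Vision's
# SafeSearch detection. Assumes the google-cloud-vision package is installed
# and application-default credentials are configured.
from google.cloud import vision

# Likelihood enum values returned by the API, in ascending order.
LIKELIHOODS = (
    "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
    "POSSIBLE", "LIKELY", "VERY_LIKELY",
)

def safe_search_verdict(image_path: str) -> dict:
    """Return SafeSearch likelihood labels for one local image file."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return {
        "adult": LIKELIHOODS[annotation.adult],
        "racy": LIKELIHOODS[annotation.racy],
        "violence": LIKELIHOODS[annotation.violence],
    }

if __name__ == "__main__":
    verdict = safe_search_verdict("sample.jpg")  # hypothetical file name
    # "VERY_LIKELY" corresponds to the study's "highest confidence" wording.
    if verdict["adult"] == "VERY_LIKELY":
        print("Flagged as explicit with highest confidence:", verdict)
```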

Why It’s Important: While Twitter may boast about its efforts to combat child abuse content, the unsettling findings about Mastodon highlight the need for a comprehensive approach to ensuring the safety of online spaces for all users, particularly vulnerable minors.

Last month, it was reported that Instagram’s recommendation system actively facilitates connections between pedophiles and content sellers, promoting numerous accounts dedicated to underage-sex content through explicit hashtag searches. 

Instagram reportedly hosted more than three times as many accounts selling child sex abuse material as Musk’s Twitter, which, despite its smaller user base, was faster to remove such accounts.

Image Courtesy: Wikimedia Commons

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read Next: Musk Laughs Off As User Calls Zuckerberg’s Threads And Dorsey-Backed Bluesky ‘Elon Haters’
