New research has revealed concerning findings about Elon Musk’s Twitter algorithm changes, which reportedly amplify anger and animosity among users, particularly toward those with opposing views.
What Happened: Musk’s Twitter algorithm changes have come under scrutiny after researchers from Cornell University and UC Berkeley found that the platform now prioritizes emotionally charged tweets that fuel anger and animosity, ultimately increasing polarization among users with opposing views.
See Also: Elon Musk Retorts Sharply To Twitter Censorship Accusations: ‘You’re Such A Numbskull’
According to the researchers, the study was a controlled experiment that did not involve internal access to Twitter; instead, it relied on participants who willingly provided access to their tweets through the online crowd-working platform CloudResearch Connect.
The researchers examined tweets displayed to 806 users in February, gathering two sets of data simultaneously: the top 10 tweets presented by Twitter’s personalized “For You” timelines and the top 10 tweets appearing in the chronological newsfeed.
The study stated that the only distinction between the two sets of tweets was whether they were algorithm-selected or not.
The study said, “We found that the algorithm amplifies tweets expressing stronger emotions (on four different dimensions: anger, happiness, sadness, and anxiety), especially those with anger.”
It further stated, “Exposure to these algorithm-selected tweets results in users perceiving their political in-group more positively and the political out-group more negatively, potentially contributing to greater affective polarization.”
Why It’s Important: Twitter’s “For You” recommendation feature has repeatedly sparked concerns among users. In February, numerous users voiced their complaints about an increase in Musk’s posts appearing on their “For You” timelines, regardless of whether they actually follow him on the platform.
While the “For You” section is intended to display tweets from accounts and topics that users follow, along with recommended content, the algorithm behind these recommendations appears to fall short of many users’ expectations.
Last month, during an interview, a BBC reporter said he had personally seen a rise in hate speech on the microblogging site, prompting Musk to ask for specific examples. Musk eventually pointed out that the reporter had been unable to cite even a single instance of hateful content.
Read Next: Jack Dorsey Isn’t A Big Fan Of Elon Musk’s Twitter — But Thinks This Feature Is Just ‘Great’
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.