Alphabet Inc. (NASDAQ: GOOGL) (NASDAQ: GOOG) is fine-tuning content policing on its YouTube platform. Earlier this week, Google senior vice president Kent Walker wrote an op-ed for the Financial Times outlining the proactive approach Google is taking to limit offensive and extremist content on YouTube.
Google's Approach To YouTube Issues
Google is using a four-pronged approach to tackle the problem:
1. Google intends to increase its reliance on automated content monitoring by using machine learning to train its “content classifiers” to identify and remove offensive content more quickly (a purely illustrative sketch of such a pipeline follows this list).
2. Google also plans to rely more heavily on its team of expert human Trusted Flaggers, who review YouTube content that users have flagged as inappropriate.
3. Google plans to expand its counter-radicalization efforts, such as its Creators for Change program and efforts to redirect users who have been targeted by radical groups such as ISIS toward content with a counter-extremism message.
4. Perhaps most importantly, Google will be paying more attention to content that doesn’t quite violate its current community standards. This content includes “videos that contain inflammatory religious or supremacist content,” according to Walker. While these videos will not be removed from YouTube, users will be presented with a warning prior to viewing and the videos will be restricted from earning advertising revenue.
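Walker's op-ed does not describe how Google's classifiers work internally; they are proprietary and far more sophisticated than anything shown here. Still, the general shape of a tiered moderation pipeline like the one items 1 and 4 describe can be sketched in a few lines. The following Python sketch uses scikit-learn, and every name, training example, and threshold in it is hypothetical:

```python
# Illustrative sketch only. Google's production classifiers are proprietary;
# the training data and thresholds below are hypothetical and exist purely to
# show the general shape of a tiered content-moderation pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = policy-violating, 0 = benign).
train_texts = [
    "join our movement and commit violence against them",
    "this group deserves to be attacked",
    "cute cat compilation part 12",
    "how to bake sourdough bread at home",
]
train_labels = [1, 1, 0, 0]

# TF-IDF text features feeding a logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# Hypothetical thresholds mirroring the tiered response described above:
# clear violations are removed, borderline content is restricted
# (interstitial warning, no ads), and everything else is left alone.
REMOVE_THRESHOLD = 0.9
RESTRICT_THRESHOLD = 0.5

def moderate(text: str) -> str:
    """Map a predicted violation probability to an enforcement tier."""
    score = classifier.predict_proba([text])[0][1]  # P(violating)
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= RESTRICT_THRESHOLD:
        return "restrict: show warning, disable ads"
    return "allow"

if __name__ == "__main__":
    for text in ["commit violence against this group", "sourdough baking tips"]:
        print(f"{text!r} -> {moderate(text)}")
```

The two-threshold design is the key idea: rather than a binary keep-or-delete decision, borderline scores map to the intermediate state Walker describes, where a video stays up but sits behind a warning and earns no advertising revenue.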
Why Not Just Delete?
For users wondering why these videos aren't simply deleted outright: Google has always had to walk a thin line between protecting users from harmful content and allowing content providers to express their beliefs, however controversial those beliefs may be.
YouTube’s current community guidelines ban several types of content:
- Videos containing “nudity or sexual content.”
- Videos that “encourage others to do things that might cause them to get badly hurt.”
- Videos containing “violent or gory content that's primarily intended to be shocking, sensational, or disrespectful.”
- Videos that promote “violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics.”
- Videos promoting “predatory behavior, stalking, threats, harassment, intimidation, invading privacy, revealing other people's personal information, and inciting others to commit violent acts.”
Still, many of these determinations are subjective, and abusers often push the boundaries to get away with as much as possible without explicitly violating the terms of service.
That Thin Grey Line
While terrorist organizations promoting violence may seem like a black-and-white issue, “fake news” and widely discredited conspiracy theories pushed by ideological extremists occupy a grey area that will likely remain a thorn in the side of Google and other social media platforms.
There's a clear financial motivation for Google to get this right: in April, YouTube lost 5 percent of its top advertisers to a boycott that followed reports of ads appearing alongside offensive or inappropriate content.