YouTube will start asking commenters to reconsider posting something before it goes up if Google’s artificial intelligence identifies that comment as potentially offensive, the company said Thursday. The new YouTube prompt suggests reviewing the company’s community guidelines in case the commenter is “not sure whether the post is respectful,” and then gives the option to either edit or post anyway.
“To encourage respectful conversations on YouTube, we’re launching a new feature that will warn users when their comment may be offensive to others, giving them the option to reflect before posting,” YouTube said in a blog post announcing the feature and other measures meant to improve inclusivity on the platform.
The feature is now available on Android. Comments that don't trigger this reminder can still be removed later if they're found to violate YouTube's community guidelines, which are essentially the service's rule book of what's allowed and what crosses the line. But comments that trigger this warning won't necessarily be removed if posted.
YouTube's system identifies potentially offensive posts by learning from comments that users have repeatedly reported.
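YouTube hasn't published details of its model, but the basic idea described above, scoring a new comment against patterns learned from previously reported ones, can be illustrated with a toy sketch. Everything here (the function names, the word-counting approach, the threshold) is an assumption for illustration, not YouTube's actual system:

```python
# Toy illustration only: learn word frequencies from comments users have
# reported, then warn before posting if a new comment looks similar.
# This is NOT YouTube's real classifier, which is far more sophisticated.
from collections import Counter

def train(reported_comments, ordinary_comments):
    """Count how often each word appears in reported vs. ordinary comments."""
    reported = Counter(w for c in reported_comments for w in c.lower().split())
    ordinary = Counter(w for c in ordinary_comments for w in c.lower().split())
    return reported, ordinary

def offensiveness_score(comment, model):
    """Fraction of the comment's words seen more often in reported comments."""
    reported, ordinary = model
    words = comment.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if reported[w] > ordinary[w])
    return flagged / len(words)

def should_warn(comment, model, threshold=0.5):
    """Mirror the new prompt: ask the user to reconsider if the score is high.
    The commenter can still edit the comment or post it anyway."""
    return offensiveness_score(comment, model) >= threshold
```

In this sketch, a high score only triggers a warning, matching the article's point that flagged comments aren't blocked; the user still chooses whether to edit or post.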
YouTube has had no shortage of problems to reckon with over the years, including misinformation, conspiracy theories, discrimination, harassment, and videos of mass murder and child abuse and exploitation, but its comments remain notorious for their potential to turn toxic.
YouTube's massive scale — serving 2 billion monthly users and ingesting more than 500 hours of video uploads every minute — means the company must rely on machine learning not only to recommend what else to watch but also to police its platform. The company, for example, announced in September that artificial intelligence would start automatically determining which videos need to be age-restricted so that underage viewers can't watch them.
YouTube said that since early 2019, the number of comments it removes daily for hate speech has increased 46-fold. Between July and September, it terminated more than 54,000 channels for hate speech, out of 1.8 million total terminated channels — it said that was the most hate speech terminations in a single quarter, three times more than the previous high in mid-2019, when the company updated its hate speech policy.
The feature was announced alongside other measures meant to improve inclusivity on YouTube.
The company said it would be testing a new filter in the comments management system for channel owners, which will siphon out potentially inappropriate and hurtful comments that have been automatically held for review, so creators don’t need to read them if they don’t want to.
Starting next year, initially in the US, YouTube will ask creators to take optional surveys that identify their gender, sexual orientation, race and ethnicity. The company said that data will help it "look closely at how content from different communities is treated in our search and discovery and monetization systems" and "for possible patterns of hate, harassment and discrimination that may affect some communities more than others."
“Our creators’ privacy and ability to provide consent for how their information is used is critical. In the survey, we will explain how information will be used and how the creator controls their information. For example, the information gathered will not be used for advertising purposes, and creators will have the ability to opt-out and delete their information entirely at any time,” the company said.