Facebook is working on a new feature to automatically flag offensive material in live video streams. The social network giant plans to use artificial intelligence to examine the content. Facebook has been involved in several content-moderation controversies in recent times, and it faced an international outcry after removing an iconic Vietnam War photo over nudity.
Historically, Facebook mostly relied on users to report offensive posts. Those posts were then reviewed by Facebook employees and deleted if they violated the company’s “community standards.” Decisions on removing thorny content are made according to policies set by top executives at the company.
The company is already working on using automation to flag extremist video content. This automation has also been tested on Facebook Live, the video streaming service that lets users broadcast live video. However, using artificial intelligence to flag live video remains at the research stage, with a couple of challenges still to overcome.
Facebook is increasingly using artificial intelligence to find offensive material. “It is an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” says Joaquin Candela, Facebook’s director of applied machine learning. “A human looks at it, an expert who understands our policies, and takes it down,” he added.
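The pipeline Candela describes, a model that scores content and a human expert who makes the final call, can be sketched roughly as follows. This is an illustrative toy, not Facebook's actual system; the threshold, category names, and scoring function are all hypothetical stand-ins.

```python
# Toy sketch of a flag-then-human-review pipeline (not Facebook's real system).
from dataclasses import dataclass, field
import heapq

REVIEW_THRESHOLD = 0.8  # hypothetical confidence cutoff


@dataclass(order=True)
class FlaggedItem:
    priority: float                         # negated score: higher scores pop first
    content_id: str = field(compare=False)
    category: str = field(compare=False)


def score_content(content_id: str) -> dict:
    # Stand-in for a real vision model; returns per-category policy scores.
    return {"nudity": 0.1, "violence": 0.92}


def flag_for_review(content_id: str, queue: list) -> None:
    # Anything above the threshold is queued for a human reviewer,
    # not removed automatically.
    for category, score in score_content(content_id).items():
        if score >= REVIEW_THRESHOLD:
            heapq.heappush(queue, FlaggedItem(-score, content_id, category))


queue: list = []
flag_for_review("live_stream_42", queue)
item = heapq.heappop(queue)
print(item.content_id, item.category)  # live_stream_42 violence
```

The key design point is that the model only prioritizes a review queue; the removal decision stays with a human who understands the policies.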
Such a computer vision algorithm needs to be fast and to prioritize things in the right way. Facebook also uses automation to process the millions of reports it receives every week, but recognizing duplicate reports and routing the flagged content still requires human expertise.
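The deduplication step above can be illustrated with a short sketch: collapse many user reports about the same item into one review task, with a count so heavily reported items surface first. The function and data names here are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: collapse duplicate user reports so each piece of
# content is routed to reviewers once, most-reported items first.
from collections import Counter


def deduplicate_reports(reports):
    """reports: iterable of (content_id, reason) tuples from users."""
    counts = Counter(content_id for content_id, _ in reports)
    # One review task per unique content item, ordered by report volume.
    return [cid for cid, _ in counts.most_common()]


reports = [
    ("video_1", "violence"),
    ("video_2", "nudity"),
    ("video_1", "violence"),
    ("video_1", "hate"),
]
print(deduplicate_reports(reports))  # ['video_1', 'video_2']
```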
Back in November, CEO Mark Zuckerberg said that Facebook would turn to automation to identify fake news. Facebook users saw several fake news reports ahead of the U.S. election.
“These are questions that go way beyond whether we can develop AI. Tradeoffs that I’m not well placed to determine,” said Yann LeCun, Facebook’s director of AI research.