Facebook has started deploying its artificial intelligence capabilities to help combat terrorists’ use of its service.
Company officials said in a blog post Thursday that Facebook will use AI in conjunction with human reviewers to find and remove “terrorist content” immediately, before other users see it. Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant to apply it to other, potentially less clear-cut uses.
In most cases, Facebook removes objectionable material only if users first report it.
Facebook and other internet companies face growing government pressure to identify and prevent the spread of terrorist propaganda and recruiting messages on their services. Earlier this month, British Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online. Some proposed measures would hold companies legally accountable for the material posted on their sites.
The Facebook post — by Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager — did not specifically mention May’s calls. But it acknowledged that “in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online.”
“We want to answer those questions head on. We agree with those who say that social media should not be a place where terrorists have a voice,” they wrote.