By New Scientist Staff and Press Association
British MPs have warned Twitter, Facebook and Google that they have a “terrible reputation” over their efforts to tackle abusive content.
The Commons Home Affairs committee grilled senior representatives from the three internet giants about the issue in a hearing on Tuesday. Committee chair and Labour MP Yvette Cooper cited a string of examples of material on social media sites and told the three witnesses she found none of their responses “particularly convincing”.
“You all have a terrible reputation among users for dealing swiftly with problems in content even against your own community standards,” she said.
Prior to the hearing the committee referred a number of pieces of content to the firms. In one instance, a video relating to National Action, an extreme right-wing group banned as a terrorist organisation, was removed from YouTube after it was flagged.
But Cooper questioned how the video was allowed to appear in the first place. “There aren’t that many proscribed organisations. Don’t you feel any sense of responsibility as a multi-billion pound organisation to at least check that you are not distributing material from proscribed organisations?” she said.
Peter Barron, of Google, said 400 hours of video was uploaded to YouTube every minute. “Clearly we don’t want illegal content on our platforms and when flagged to us we remove that as quickly as we possibly can,” he said.
Four pages that were flagged to Facebook all remained on the site, including one titled “Ban Islam”.
Facebook’s Simon Milner said the pages did not violate the social media giant’s terms because users are able to criticise religions, though they are not allowed to express hate against people because of their religion. “So Ban Islam is a page which is designed to criticise Islam as a religion. It is not expressly, in and of itself, designed to attack Muslims,” he said.
Twitter suspended three of the accounts that were flagged, but one, which included a tweet with the hashtag “deport all Muslims”, remained active.
Twitter spokesperson Nick Pickles said that while the tweet was “highly offensive”, it did not breach the site’s rules on hateful conduct. He told the committee that Twitter had recently rolled out technology to help identify accounts that break its rules, in addition to its existing system of user reporting.
“That’s a step change in how we deal with abuse,” he said. “We are looking for it and will take action on content even when it hasn’t been reported by users.”