The British government has unveiled an artificial intelligence system designed to remove terrorist material from video-sharing websites, part of attempts to clamp down on extremist propaganda online, as reported by middle-east-online.com.
Questions remain, however, as to whether machine learning is sophisticated enough to automatically detect and remove extremist content in an ever-evolving online landscape.
The British government said it was working with London-based ASI Data Science to create an algorithm to detect content distributed by the Islamic State (ISIS).
“I hope this new technology the Home Office has helped develop can support others to go further and faster,” UK Home Secretary Amber Rudd said. “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”
ASI said the algorithm detects 94% of ISIS propaganda with 99.99% accuracy. The online tool uses “advanced machine learning to analyse the audio and visuals of a video to determine whether it could be ISIS propaganda,” the company said. The tool would flag suspect videos for human moderators to review.
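Taken at face value, those figures imply that roughly one classification in 10,000 would still be wrong, so a platform screening a million uploads could expect on the order of a hundred misclassified videos, which is why human review remains the final step. The hypothetical Python sketch below illustrates that kind of flag-for-review pipeline; the function names, scores and threshold are invented for illustration and are not ASI’s code.

```python
# Hypothetical flag-for-review pipeline. All names, scores and the
# threshold are invented for illustration; nothing here is ASI's code.

from typing import Callable, Dict, List

def screen_upload(video_id: str,
                  score_fn: Callable[[str], float],
                  review_queue: List[str],
                  threshold: float = 0.9) -> bool:
    """Score one upload and flag it for human review if needed."""
    score = score_fn(video_id)         # model's estimated propaganda probability
    if score >= threshold:
        review_queue.append(video_id)  # a moderator decides; nothing is auto-deleted
        return True
    return False

# Stand-in scorer; a real system would run audio/visual models here.
fake_scores: Dict[str, float] = {"vid_001": 0.97, "vid_002": 0.12}

queue: List[str] = []
for vid in fake_scores:
    screen_upload(vid, lambda v: fake_scores[v], queue)

print("flagged for review:", queue)    # -> ['vid_001']
```

The essential design choice, as the article describes it, is that the model only triages: a flagged video goes to a human moderator for the final call rather than being removed automatically.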
The algorithm will be offered to smaller video platforms and cloud storage sites such as Vimeo and pCloud to vet their content. It will not be used by many of the biggest companies, such as YouTube and Facebook, which have developed their own algorithms to detect extremist content.
Machine learning, in which a programme improves through experience without being explicitly reprogrammed, is undergoing a revolution.
AlphaZero, a game-playing artificial intelligence (AI) built on a machine-learning approach, last year beat what had been the world’s best chess-playing programme in a 100-game match without losing a single game. It did so with no human guidance beyond the basic rules of chess, which it had been given just four hours earlier.
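That definition is easy to see in miniature. In the toy Python example below, a perceptron learns the logical OR function from labelled examples alone; its code is never edited, yet its error count falls as the weights adjust with experience.

```python
# Toy illustration of learning from experience: a perceptron picks up the
# logical OR function from labelled examples. The code is never edited;
# only the numeric weights change as it sees more data.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1      # weights, bias, learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(10):
    errors = sum(label != predict(x) for x, label in examples)
    print(f"epoch {epoch}: {errors} wrong")   # error count falls over time
    for x, label in examples:
        err = label - predict(x)              # perceptron update rule
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
```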
The ASI algorithm was trained on more than 1,000 videos, learning to recognise the ISIS logo and the musical and visual cues common in ISIS propaganda. However, it is unclear whether the algorithm would remain effective if ISIS simply switched its propaganda techniques.
“We’ve been very thoughtful about trying to identify characteristics of the propaganda that are very difficult for ISIS to change,” ASI’s head of data science consulting John Gibson told Wired magazine. “It’s something we’ve thought about a great deal and clearly for this thing to work well it needs to be adaptive and it needs to be able to keep up to date as the threat evolves.”
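As a rough illustration of what keeping up with an evolving threat involves, the sketch below combines several detectors of the kind the article mentions, for a logo, for audio cues and for visual style, into a single propaganda score. Every detector, weight and number here is an invented placeholder, not ASI’s method; adapting to new propaganda would mean retraining the detectors and reweighting the signals rather than rewriting the pipeline.

```python
# Invented multi-signal scorer, loosely patterned on the cues the article
# names (logo, music, visual style). Detector outputs and weights are
# placeholders, not ASI's actual features or numbers.

from typing import Dict

def extract_signals(video_path: str) -> Dict[str, float]:
    # Real detectors would run image recognition over sampled frames and
    # audio matching over the soundtrack; fixed stubs stand in here.
    return {
        "logo_match": 0.9,   # frame-level logo recognition
        "audio_cue": 0.7,    # similarity to known propaganda soundtracks
        "visual_cue": 0.6,   # scene/style classifier output
    }

WEIGHTS = {"logo_match": 0.5, "audio_cue": 0.3, "visual_cue": 0.2}

def propaganda_score(video_path: str) -> float:
    signals = extract_signals(video_path)
    return sum(WEIGHTS[name] * value for name, value in signals.items())

print(f"score = {propaganda_score('upload.mp4'):.2f}")  # 0.78 with these stubs
```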
If the adaptive algorithm succeeds, it could mark a turning point in the fight against extremist and offensive content online.
Some of the world’s biggest social media and video-sharing platforms have faced increasing pressure to clamp down on extremist material, amid questions about how effective their in-house monitoring systems are.
YouTube claims to have an algorithm that can detect extremist content with a 98% success rate. In December, YouTube, which sees 300 hours of video uploaded every minute, said it had deleted more than 150,000 videos promoting violent extremism over the previous six months.
Facebook, the world’s largest social media network with more than 2.1 billion users, claimed last year to be removing 99% of content related to militant groups such as the Islamic State and al-Qaeda, with 83% of “terror content” removed within one hour of uploading.
Despite the claims, questions remain about Facebook’s and YouTube’s ability to screen extremist and terrorist content. Unilever, an Anglo-Dutch consumer goods producer of brands including Lipton tea, Persil laundry detergent and Dove soap, threatened to boycott social media advertising unless more was done to tackle offensive content.
Speaking at the recent Interactive Advertising Bureau conference in California, Unilever Chief Marketing and Communications Officer Keith Weed said digital media had become a “swamp” and called on corporations to do more.
“It is in the digital media industry’s interest to listen and act on this before viewers stop viewing, advertisers stop advertising and publishers stop publishing,” he warned.
Many advertisers have complained about the presence of extremist content online, particularly given the way online advertising works, with ads and videos paired automatically by computer algorithm.
The British government was embarrassed in 2014 when ads for the BBC, the National Citizens Service and other UK agencies appeared in front of ISIS propaganda videos on YouTube.
Time will tell whether the ASI machine-learning algorithm proves successful, but what is certain is that tech companies will need to keep innovating and adapting to screen out offensive and extremist content.
Speaking at the Digital Forum on Terrorist Prevention February 13 in California, Rudd called on tech companies to clamp down on extremist propaganda online.
“We know that in the UK, three-quarters of those convicted for terrorist offences consumed, possessed or disseminated terrorist material,” Rudd said. “Increasingly we are finding a recurring theme [in terrorist attacks]. That theme is the internet. All of the five attacks on UK soil last year had an online component.”