LONDON TERROR ATTACK

Isis unleashes sadistic internet propaganda to spur on others

Ahmad Musa Jibril speaks in English on his YouTube videos and is considered to bridge the gap with westerners who may not speak Arabic. The preacher has more than 280,000 followers on Facebook

Islamic State has flooded the internet with violent jihadist propaganda in the wake of Saturday’s attack in London, an analysis shows.

Hundreds of videos which urge supporters to attack the West have been posted on Facebook, YouTube, Twitter and Telegram, the encrypted messaging application.

One video uploaded to Google Drive and seen by The Times shows a bloodstained map of London Bridge and urges followers to “do Jihad” in Europe.

Another promises that Islamic State has a “waiting list of people wanting to become suicide bombers”, while a third warns that the “nations of the Cross” will see their “blood spill like an ocean” and the “black flag [of Isis] in their hearts”.

Many of the videos were uploaded in the past 48 hours, but in other cases Isis supporters used Saturday’s attack to republish older propaganda.


Using hashtags such as #LondonBridge and #BoroughMarket, Islamists used Telegram to share an image glorifying the 2015 terror attacks in Paris. “Crusading #brits now living in a state of terror and fear!” one wrote. “Where is your great kingdom now?”

Another Telegram account, identified by Site, the terrorist monitoring service, urged supporters to “be active” on more mainstream platforms such as Twitter and Facebook. “With every account you close, we will open 1,000 to take its place, insha’Allah,” the post said.

Rita Katz, the co-founder of Site, accused Twitter of allowing Isis propaganda to remain online for weeks. “Twitter has the capability to fight Isis media on its platform on much larger scales than it has thus far,” she said.

Technology companies face growing pressure after a spate of terror attacks in Britain. One of the terrorists who killed seven people in London on Saturday night is thought to have idolised an extremist preacher on YouTube. On Sunday Theresa May accused the companies of providing a “safe space” for extremist ideology to breed.

Karen Bradley, the culture secretary, told the BBC that the companies needed to tackle extremist content in a similar way to how they had removed indecent images of children. “We know it can be done and we know the internet companies want to do it,” she said.


Facebook said: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it.”

However, a Facebook page belonging to an Islamic State supporter which posted images glorifying last month’s terror attack in Manchester was not immediately taken down by Facebook’s moderators despite being reported by The Times. The page, which was first identified by Gipec, a New York-based cyberintelligence company, was later removed by Facebook.

Google said it had invested heavily to fight abuse on its platforms and was already working on an “international forum to accelerate and strengthen our existing work in this area”. The firm added that it shared “the government’s commitment to ensuring terrorists do not have a voice online” and removed content that was in breach of its rules when it was made aware of it.

Twitter said “terrorist content has no place on” its platform.

Last month The Times revealed that online bombmaking guides were freely available on Facebook and YouTube. Despite flagging the material to Facebook, its moderators failed to remove one page recommending that bombs should be soaked in rat poison and vinegar to increase their effectiveness.


In April, the social media company’s moderators failed to remove a pro-Isis video containing graphic and uncensored footage of multiple hostage beheadings, on the grounds that the footage did not breach its “community standards”. Facebook’s algorithms also promoted jihadist content to like-minded users. The software suggested that one user join “Generation Awlaki”, a group with 2,000 members dedicated to the “works and message” of Anwar al-Awlaki, the late Islamist and senior al-Qaeda recruiter.

Analysis
A confusing legal mishmash governs what can and cannot be said online (Alexi Mostrous writes). Technically, what is illegal in the real world is also illegal if posted on Facebook. But most of the legislation was passed before social media companies became the behemoths they are today, and they will typically face no penalties for failing to remove illegal content.

Despite reams of legislation, the bar on prosecuting social media messages is high and prosecutors may face a jurisdictional challenge as many services run on servers located outside the country.

The general rule is that an offence must have a “substantial connection” with England for our courts to try it.