
How Online Radicalisation Is Fuelling Anti-Muslim Hatred

Referrals for far-right terror offences have risen markedly among young people during Covid. The phenomenon is fuelled by growing online subcultures – many of which thrive on anti-Muslim vitriol.

A new report from the Commission for Countering Extremism (CCE), released recently, argues for a more robust legal framework to tackle ‘hateful extremism’. In setting out the need for stricter controls on online content, the report notes the internet’s huge capacity to both widen and deepen hate: thousands of pieces of content relaying what the report terms ‘extremist’ views are uploaded to sites every day.

The report’s authors are not the first to note that this issue has become more pertinent during the Covid-19 pandemic. Dramatic public events like the storming of the Capitol in the US create a misconception about what is more often a subtle, insidious, and private process: for example, while terror arrests overall fell in 2020, the Guardian reported in December that referrals for right-wing terror-related activity among under-18s in the UK had increased by 43 percent in the preceding year. In this, Covid is accelerating a longer-term shift in the nature and methods of modern radicalisation.

After the 2013 Woolwich terror attack, I conducted research on online Islamophobia which identified eight types of Twitter user who could be classed as online trolls. Most were not members of a far-right group, but rather normal people: teachers, plumbers, local councillors. They were people like Rhodenne Chand, who was not involved in any formal far-right organising, but who was jailed for posting a series of Islamophobic tweets after the 2017 Manchester Arena attack—including the claim he wanted to ‘slit a Muslim throat’—or like rugby-playing student Liam Stacey, who was jailed for racist tweets about footballer Fabrice Muamba.

Both these cases show that while full-blown neo-Nazism seems to be making a comeback, problems often lie with individuals who most likely consider themselves—and would be considered by many others—to be normal. You also don’t need to be this obviously racist to enact or encourage prejudiced behaviour. Some people simply join in with group conversations targeting vulnerable figures; others post messages that don’t say anything specifically racist, but that they know will inflame tensions. Many people take advantage of anonymity.

For example, I encountered a post asking: ‘What is your typical British breakfast?’ Out of context, it seems harmless – but it led to a spiral of hateful comments about Muslims:

“For every sausage eaten or rasher of bacon we should chop of a Muslims head [sic].”
“Muslims are not human.”
“One day we will get you scum out.”
“Muslim men are pigs … I am all for annihilation of all Muslims.”

My study examined the use of three separate hashtags—#Muslim, #Islam, and #Woolwich—which allowed me to look at how Muslims were being viewed before and after Woolwich. The most common recurring phrases were ‘Muslim Paedos’ (in 30 percent of posts), ‘Muslim terrorists’ (22 percent), ‘Muslim scum’ (15 percent), ‘Muzrats’ (14 percent), ‘Pisslam’ (10 percent), and ‘Muslim pigs’ (9 percent). Individual expressions of hate are taken up by virtual communities who create webpages, blogs, and forums, amplifying and intensifying antagonisms.

Social media has the power to reflect a curated image of the world. It’s a form of social creativity, through which people shape their online behaviour to try to place the social group with which they identify in a position of dominance. This is a perennial feature of human interaction: with over half the world’s population now online, the far-right simply use the tools at their disposal.

But we shouldn’t be fooled into thinking that without social media, these beliefs would otherwise be isolated or fringe. While Twitter and Facebook—let alone other platforms associated specifically with the far-right—are utilised as a megaphone for racism, they reflect attitudes endemic in the offline world.

31 percent of British people believe that Islam poses a threat to the ‘British’ way of life. 18 percent believe that some races or ethnic groups are less intelligent than others; 44 percent believe that some are lazier. As a society, we need to grapple with how these ideas have become normalised in order to meaningfully challenge them, and not simply drive them underground.

The reality is that easy answers for tackling this are few and far between – most solutions do not go far enough in acknowledging the protean nature of the problem. Steps taken by companies including Facebook, Twitter, and now TikTok to block and remove people overtly linked with the far-right are limited in scope.

More needs to be done to identify the social processes at play in the ways hatred is spread, and the ways in which some of its forms are simultaneously legitimised by public figures and politicians who entertain it offline as well as online. Mark Rowley, one of the CCE report’s two authors, is right when he says the current situation is ‘untenable’; only a broad approach can target broadly shared attitudes, and reduce social media’s huge capacity for harm.