FEATURE

Examining online hate


The ESRC-funded HateLab is analysing data to see how social media spreads and amplifies hate speech in the aftermath of terror attacks, and what factors help or hinder its spread.

IN 2016 the Brexit referendum was linked by the Home Office to the largest increase in police-recorded hate crime since records began. This was overtaken by a spike in hate crimes around the 2017 terror attacks in Manchester and London. Political votes and terror attacks have become ‘trigger events’ for hate crimes on the streets, but also for hate speech on social media.

Exclusive analysis by the ESRC-funded HateLab, part of the Social Data Science Lab, has shown that following trigger events it is often social media users who are first to publish a reaction. Discernible spikes in online hate speech were evident in 2017, coinciding with the UK terror attacks in Westminster, Manchester, London Bridge and Finsbury Park. HateLab analysis has shown that social media acts as an amplifier of hate in the aftermath of terror attacks. Hate speech is most likely to be produced within the first 24-48 hours following an incident, and then dies out rapidly – much like physical hate crime following terror attacks.

Where hate speech is retweeted, the evidence shows this activity emanates from a core group of like-minded individuals who seek out each other’s messages via hashtags. These Twitter users act like an echo chamber, in which grossly offensive hateful messages reverberate among members but rarely spread widely beyond them. In the minutes to hours following an attack, those associating themselves with far-right ideologies on Twitter capitalise on the event to spread messages of hate and division. These tweeters are also known to have spread messages posted by Russian-linked fake accounts attempting to ignite and ride the wave of anti-Muslim sentiment and public fear.

For instance, in the wake of the Westminster attack, fake social media accounts retweeted fake news about a woman in a headscarf apparently walking past and ignoring a victim. This was retweeted thousands of times by far-right Twitter accounts with the hashtag ‘#BanIslam’. The additional challenge created by these fake accounts is that they are unlikely to be susceptible to counter-speech (eg, challenging stereotypes or requesting evidence for false claims) or to traditional policing responses. It therefore falls to social media companies to detect and remove such accounts as early as possible, to stem the production and spread of divisive and hateful content.

The first few hours following a terror attack represent a critical period within which police and government have an opportunity to prevent hate speech by dispelling rumour and speculation, appealing for witnesses and providing factual case updates. HateLab analysis shows that tweets from media and police accounts are widely shared in the aftermath of terrorist incidents. As authorities are more likely to gain traction in the so-called ‘golden hour’ after an attack, they have an opportunity to engage in counter-speech messaging to stem the spread of hate online. In particular, the dominance of traditional media outlets on Twitter, such as broadsheet and TV news, shows that these channels still represent a valuable pipeline for calls to reason and calm following criminal events of national interest. However, where newspaper headlines include divisive content, HateLab analysis suggests these can increase online hate speech.

HateLab continues to examine the factors that enable and inhibit the spread of online hate around events like terror attacks and key moments in the Brexit process. It has officially partnered with the National Police Chiefs’ Council’s (NPCC) National Online Hate Crime Hub to develop an Online Hate Speech Dashboard to monitor aggregate trends in real time using cutting-edge artificial intelligence. The Dashboard will be evaluated in operations throughout 2019.

SOCIETY NOW, WINTER 2018


Figure: Race and religious hate speech – 2017 terror attacks. [Chart: counts from 0 to 55,000 over the period 01/03/2017–01/07/2017, with the Westminster, Manchester, London Bridge and Finsbury Park attacks marked.]

By Professor Matthew Williams, Professor Pete Burnap and Sefa Ozalp

HateLab is a global hub for data and insight into hate crime and speech. Web: HateLab.net  Twitter: @hate_lab, @SocDataLab
