
The Ripple Effect of Misinformation from Social Media to Street Violence: The Southport Stabbings and the Rise of Social Media-Driven Riots

By Mustapha Lawal

“In an age where a single tweet or post can spark nationwide chaos, the tragic Southport stabbings reveal how misinformation can ignite riots and deepen societal divides.”

In the era before social media, misinformation took time to reach and influence a wide audience. Today, however, with their algorithms and user biases, social media platforms can transform isolated incidents into national crises within minutes.

Algorithms, designed to maximize engagement, often prioritize sensational and emotionally charged content, rapidly amplifying false narratives. While mainstream media traditionally applies editorial safeguards to ensure accuracy, social media thrives on viral content with little oversight, turning disinformation into a potent force for societal discord.

On Monday, 29 July 2024, a tragic stabbing in Southport claimed the lives of three children, setting off a chain reaction of misinformation and violence.

Misinformation’s Destructive Power

The case of the Southport stabbings in the UK illustrates how quickly misinformation can escalate into violence. After the incident, false information about the suspect’s identity and background spread rapidly on social media. Claims that the suspect was an immigrant and a Muslim fueled public outrage and led to riots across the country.

The misinformation, which was initially spread by a relatively obscure social media account, was quickly amplified by others, resulting in a wave of far-right violence and social unrest. A report by the Institute for Strategic Dialogue (ISD) described how events unfolded:

In a now-deleted post, one X user shared a screenshot of a LinkedIn post from a man who claimed to be the parent of two children present at the attack, in which he alleged that the attacker was a “migrant” and advocated for “clos[ing] the borders completely”. This X user appears the first to falsely assert that: 1) the attacker’s name was “Ali al-Shakati”; 2) he was on the “MI6 watch list” [this cannot be correct, as MI5 is the security agency responsible for domestic terrorism]; 3) he was “known to Liverpool mental health services”; and 4) he was “an asylum seeker who came to the UK [sic] by boat last year”.

These false claims were then uncritically amplified by other X accounts claiming to be “news outlets”. A small account called ‘Channel3 Now’, whose website primarily contains material related to violent incidents, wrote the name “Ali al-Shakati” into an article which has since been deleted. ISD OSINT investigation suggests that a previous iteration of Channel3 Now’s website was run out of an address in Pakistan. Other reports have suggested those who run the website may be based in Pakistan and/or the United States. Channel3 Now’s ‘reporting’ was then cited by a range of accounts including ‘End Wokeness’, which has 2.8 million followers.

The police did not confirm that the name was false until midday the following day. By 3 pm the day after the attack, the false name had received over 30,000 mentions on X alone from over 18,000 unique accounts. Because the alleged perpetrator was 17 and therefore a minor under UK law, his name could not legally be published until legal proceedings had concluded. He was later named as Axel Rudakubana, 17, after a judge ruled against concealing his identity.

Misinformation doesn’t just harm the individuals falsely accused; it can also have far-reaching societal impacts. For example, the Southport incident led to anti-immigrant sentiment, fueling xenophobia and deepening societal divides. 

The spread of this misinformation directly contributed to violent unrest in Southport and other UK cities. On 30 July, riots broke out after a vigil for the victims, with protesters echoing anti-immigrant sentiments. Over the next two days, violent demonstrations erupted in London, Hartlepool, and Aldershot, resulting in numerous arrests. This shows how misinformation can be weaponized to manipulate public opinion and incite violence, often with the intent of promoting a particular agenda.

This problem is, however, not unique to the UK; it has caused harm in various parts of the world. In Nigeria, for instance, false claims about the #EndBadGovernanceInNigeria protests, such as the death of police officers, led to heightened tensions and confusion. Similarly, during the Ebola outbreak in Nigeria, misinformation about fake cures spread rapidly on social media, causing public panic and potentially harmful behaviours. These examples demonstrate that misinformation, fueled by social media, can have a wide range of negative consequences—from inciting violence to endangering public health. The speed and reach of social media make it easier for false information to spread, often outpacing the efforts of fact-checkers and responsible journalists to correct it.

The Complexity of Misinformation and Its Amplification

Professor Andrew Chadwick, an expert in online misinformation, highlighted the complex interplay of factors that contributed to the riots. Misinformation is not just about the spread of falsehoods but also about how these are amplified by influential figures and disseminated through personal messaging platforms like WhatsApp. This can lead to immediate, on-the-ground impacts in communities, as seen in the UK and in various African nations where similar dynamics have led to violence.

The false identification of the Southport attacker as “Ali al-Shakati”, a Muslim and a recent migrant to the UK, was weaponized by anti-Muslim and anti-migrant users, and it spread not only organically but also through amplification by social media algorithms. These users echoed Islamophobic tropes, falsely claiming the attacker had arrived in 2023 and had been involved in violent criminal activity.

On platforms like X and TikTok, this misinformation was recommended to users through trending topics and suggested searches. For instance, X promoted the false name as a trending topic, and TikTok suggested queries related to the fabricated identity. These recommendations significantly amplified the reach of the misinformation, spreading it to users who might not have encountered it otherwise, even after the police confirmed the name was false.

The Role of Algorithms and Bias: Platform Responses

Social media provided a crucial channel for the disinformation to circulate, both through algorithmic amplification and because large accounts shared it. According to ISD’s Rose, accounts with hundreds of thousands of followers, along with paid-for blue-tick accounts on X, shared the false information, which the platform’s algorithms then pushed to other users.

Social media algorithms are a key factor in the rapid spread of misinformation. These algorithms are designed to maximize user engagement by promoting content that is likely to generate likes, shares, and comments. Unfortunately, this often means that sensational and emotionally charged content—whether true or not—gets more visibility. 
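The dynamic can be sketched with a toy ranking function. This is a simplified illustration only — the post data, weights, and function names are invented for this example, and real platform recommendation systems are vastly more complex — but it shows why a ranker that optimizes purely for engagement will surface sensational content regardless of accuracy:

```python
# Toy sketch of an engagement-maximizing feed ranker (hypothetical
# weights and data; not any real platform's system).

def engagement_score(post):
    # Shares and comments are weighted more heavily than likes because
    # they push the post to new audiences. Note that accuracy is not
    # an input to the score at all.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

def rank_feed(posts):
    # Order the feed by predicted engagement, highest first.
    # Truth plays no role in the ordering.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "police-statement", "likes": 120, "shares": 10,  "comments": 15},
    {"id": "false-rumour",     "likes": 400, "shares": 900, "comments": 350},
]

feed = rank_feed(posts)
# The emotionally charged false rumour outranks the accurate statement.
```

In this sketch the accurate police statement, which attracts modest engagement, is ranked below the sensational rumour — the same mechanism, at platform scale, that gave the false Southport name its reach.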

Biases, both in the algorithms and among users, further exacerbate the problem. People are more likely to engage with content that confirms their preexisting beliefs, which means that misinformation that aligns with those beliefs can spread even more quickly. This phenomenon was evident in the Southport case, where biases against immigrants and Muslims made the false narrative particularly potent. The rapid spread of this misinformation not only led to violence but also widespread distrust in the media and authorities, further destabilizing the community.

Platforms have protocols to respond to crises like terrorist attacks but struggle with the broader challenge of viral misinformation that can lead to hate or violence. For instance, X’s Hateful Conduct policy and TikTok’s Integrity and Authenticity Policies are supposed to prevent the incitement of violence and the spread of misinformation.

However, during the Southport incident, enforcement was lacking. X allowed posts that violated its policy by spreading stereotypes about Muslims, while TikTok failed to effectively manage misinformation despite its guidelines, showcasing a significant gap in the platforms’ ability to mitigate real-world harms.

The Need for Vigilance & Guideline Implementation

Given the destructive power of misinformation, it is crucial for both social media platforms and their users to be vigilant. Platforms need to improve their algorithms to better detect and limit the spread of false information, while users should be more critical of the content they encounter and share. Educating the public about the dangers of misinformation and how to spot it is also essential in mitigating its impact.

Governments and social media companies must take proactive steps to prevent the spread of harmful misinformation. This includes enhancing content moderation, promoting digital literacy, and supporting fact-checking organizations. As the UK government announced plans to tackle violent disorder and clamp down on online misinformation, similar measures could be beneficial in African contexts where the stakes are equally high.

At FactCheckAfrica, we are dedicated to combating misinformation and promoting accurate information across the continent. Our mission is to empower communities with the tools and knowledge needed to critically evaluate online content and prevent the escalation of violence fueled by false information. We encourage all readers to remain vigilant and to rely on verified sources before sharing information.

Edited by Habeeb Adisa
