
UNCOVERED: Bot Accounts, the Shadowy Purveyors of Misinformation on Social Media
 

By Oluwaseye Ogunsanya 

In recent years, information pollution has spread rapidly on social media as digital technology thrives as a means of communication. By creating a breeding ground for the circulation of mis- and disinformation, social media platforms continue to dilute facts with fiction, thereby undermining the work of fact-checking organisations.

One such mechanism, which amplifies information and assists in carrying out tasks on social media platforms, is the bot. Despite its evident merits, research has shown that it does not come without challenges.

What Are Bot Accounts? 

Bots, or social bots, are automated software programmes designed to interact with users and content on social media platforms. Among other things, they can post content; like, share and comment on posts; follow users; and send direct messages.

They are crafted to mimic real human behaviour. They can be found on social media, chat platforms and conversational AI systems, and they are integral to modern life. It is noteworthy that, like every other digital tool, there are both good and bad bots on the internet.
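To make the idea concrete, here is a minimal sketch of what such automation can look like, written in Python against the X (formerly Twitter) API via the third-party tweepy library. The credentials and IDs are placeholders, not a working bot; as noted below, X's own policy asks operators to label such accounts clearly.

```python
# A minimal, illustrative bot sketch using the tweepy library for the X API.
# All credentials and IDs below are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# A handful of API calls cover most of the behaviours described above.
client.create_tweet(text="Automated daily update ...")  # posting content
client.like(tweet_id=1234567890)                        # liking a post
client.retweet(tweet_id=1234567890)                     # sharing a post
client.follow_user(target_user_id=9876543210)           # following a user
```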

According to a report by the BBC, the computer programmes themselves are not inherently evil or sinister, and some ‘good’ bots are used positively. The outlet cited the example of the Canadian Broadcasting Corporation, which created an instant chat service that used bots to automatically answer people’s questions about fake news, helping them understand the issue ahead of the country’s October 2019 federal election.

Checks by FactCheckAfrica confirm that Elon Musk’s X (formerly Twitter) agrees with this position: its developer policy states that “not all bots are bad. In fact, high-quality bots can enhance everyone’s experience on Twitter. Our new policy asks that developers clearly indicate (in their account bio or profile) if they are operating a bot account, what the account is, and who the person behind it is, so it’s easier for everyone on Twitter to know what’s a bot – and what’s not.”

On the flip side, much of the unwanted content on these platforms is created automatically, using bots or automated social media accounts to spread deliberately inaccurate information, making it difficult for authorities, policy advocates and media outlets to make informed decisions.

Diving Deeper, Spinning Fake Narratives

As the social media ecosystem shifts from one completely dominated by humans to one of “humans + social bots” interaction and symbiosis, it has become clear over the years that some bots are intentionally created for malicious activities.

These activities include spreading misinformation, manipulating public opinion, inflating follower counts, and engaging in spamming and scamming tactics. Companies sell fake followers to artificially boost the popularity of accounts; these followers are available at remarkably low prices, and many celebrities are among the purchasers.

Bots consist of code: lines of computer instructions, or algorithms (sets of logical steps to complete a specific task), that run on online social networks to execute tasks autonomously and repetitively.
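In practice, that autonomy and repetition amount to little more than a loop. The skeleton below is a hedged illustration using only Python's standard library, with the actual platform call stubbed out:

```python
# Skeleton of the autonomous, repetitive loop at the heart of most social
# bots: wake up, perform a scripted task, wait, repeat.
import random
import time

SCRIPTED_POSTS = [
    "Message A from the bot's script ...",
    "Message B from the bot's script ...",
]

def post(message: str) -> None:
    # Stub: a real bot would call a platform API here,
    # e.g. tweepy's client.create_tweet(text=message).
    print(f"posting: {message}")

while True:
    post(random.choice(SCRIPTED_POSTS))
    # Randomised delays make the activity look less machine-like.
    time.sleep(random.uniform(60, 600))
```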

While bots work generally on all social media platforms, X (formerly Twitter) stands out as a prominent platform where they thrive the most. According to The Conversation, X is “a place where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.” 

With the help of artificial intelligence, they simulate internet users’ behaviour (such as posting patterns), which aids the dissemination of misinformation.

Bots can emulate social interactions in ways that make them appear to be regular people. They look for influential Twitter users (those with lots of followers) and contact them by sending questions, in order to be noticed and to generate trust from them and from other users who see the exchanges take place.

They respond to postings or questions from others based on scripts they were programmed to use. They also generate debate by posting messages about trending topics. 
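A hedged sketch of such a script-based responder follows; the same pattern powers both benign question-answering bots like the CBC’s and malicious ones that inject talking points into trending conversations. The hashtag, reply text and credentials are invented for illustration.

```python
# Illustrative script-based responder: watch for posts matching a query
# and reply to each with a canned message. Placeholder credentials.
import tweepy

client = tweepy.Client(
    bearer_token="YOUR_BEARER_TOKEN",
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

CANNED_REPLY = "Scripted answer drawn from the bot's playbook ..."

# Find recent posts on a trending topic and reply to each from the script.
results = client.search_recent_tweets(query="#SomeTrendingTopic", max_results=10)
for tweet in results.data or []:
    client.create_tweet(text=CANNED_REPLY, in_reply_to_tweet_id=tweet.id)
```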

Expert Weighs In, Offers Insights 

In an interview with the Heinrich Böll Foundation, Dr Tobi Oluwatola, Executive Director of the Centre for Journalism Innovation and Development (CJID), revealed the role played by social media bots in Nigeria’s 2023 general elections.

According to him, an analysis of popular hashtags related to the major political candidates in the 2023 general elections, carried out by the nonprofit, revealed a plethora of bot operations. Hashtags such as #BATified, #Obidient, #Atikulated and #kwankwasiya were found to have been driven by bots, he said.

“We also carried out a bot analysis on the major candidates, using Botometer and Botsight tools, and found there are more than 1 million bots following them on Twitter. Bola Ahmed Tinubu, the All Progressives Congress candidate, had one million followers as of February 2022. Currently, he has 1.4 million followers: an addition of 400,000 followers within eight months. Our bot analysis revealed 17.1% (248,000) of the followers as bots, fake accounts created within that period.” 

He added: “Peter Obi had 705,600 followers as of February 2022. Currently, the Labour Party candidate has 2 million followers, a tremendous increase of 1.3 million followers within eight months. Analysis of these followers revealed that 26.55% (531,000) are bots. These are fake accounts created to follow Mr. Obi on Twitter. The same picture emerges across all the candidates.

“These bots are able to trend any topic quickly and create an illusion of a movement out of what may be a fringe view. This is known as “astroturfing”. Although most of these bot accounts are currently inactive – i.e., zero tweets, limited followers and low retweets – we believe this to be a deliberate scheme by political actors to influence and direct online political conversations in their favour as they prepare for the forthcoming general election in 2023.”
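Readers who want to attempt this kind of check can approximate it with the botometer Python package, which wraps the Botometer service Dr Oluwatola mentions. The sketch below is illustrative only: the credentials are placeholders, and the response field names may differ between versions of the service.

```python
# Hedged sketch of a per-account bot check via the botometer package.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_API_KEY",
    "consumer_secret": "YOUR_API_SECRET",
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

# Score one account; repeating this over a candidate's follower list is how
# aggregate percentages like those quoted above are produced.
result = bom.check_account("@some_handle")
print(result["cap"]["universal"])  # "complete automation probability"
```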

What Should Be Done? 

The Bureau of Investigative Journalism suggests that the best defence against threats of information disorder such as bots is making sure the public knows about them, through media literacy programmes and through collaborative efforts by newsrooms to uncover false narratives intended to sway the public.

Consequently, social media platforms should redouble their efforts to craft clear rules that differentiate between beneficial and harmful bots. Good bots should be identified and monitored, while malicious bots should be banned. Platforms should also implement strict verification processes for bot accounts, ensuring transparency about their automated nature.

Governments and other international bodies should also take measures to identify and sanction those behind bots that push harmful narratives on social media. Legal frameworks should likewise hold individuals and entities accountable for the deliberate spread of harmful misinformation.
