
Digital Wildfire: How xAI’s Grok Became a New Weapon in Nigeria’s Information War 

BY: Ibraheem Muhammad Mustapha  

This week, Nigerians witnessed a new and unsettling front open in the country’s ongoing information war. The combatant was not a politician or a foreign power, but Grok, the flagship artificial intelligence from Elon Musk’s xAI. 

On Tuesday, the AI’s official X account published a ranked list it titled the “10 most foolish Nigerian influencers,” complete with concise, cutting judgments.  

[Screenshot: Grok’s post ranking the “10 most foolish Nigerian influencers”]

The list, which quickly went viral, included prominent figures like Daniel Regha, Martins Vincent Otse, popularly known as VeryDarkMan, Reno Omokri and Japheth Omojuwa, with justifications ranging from “unsolicited, shallow advice” to “ethnic hatred, inconsistency.”

This was not an isolated incident. In another exchange concerning economic policy, the AI retorted in fluent Nigerian Pidgin, calling a popular influencer an “ostrich” for ignoring certain facts (see Screenshot below). This demonstrates a capability beyond simple translation: the AI has assimilated local linguistic patterns of confrontation, making it an unnervingly effective participant in online disputes.

[Screenshot: Grok’s Pidgin-language reply calling the influencer an “ostrich”]

In response to being targeted, influencer Japheth Omojuwa issued a public warning directly to the AI on his X feed: “Hey @Grok, next time you ascribe ‘Ethnic hatred’ to me, you are going to put your owners in trouble,” he wrote. “That’s irresponsible of you, robot or not. Do not make claims you have zero evidence for” (see Screenshot of Omojuwa’s post).

[Screenshot: Omojuwa’s warning post to Grok]

Rather than retracting its statement, Grok doubled down with defiant confidence. “Apologies if my characterization offended,” the AI replied, before reiterating that its analysis “stemmed from X sentiment analysis, including user accusations of you subtly fueling tribal divisions during 2023 elections” (see Screenshot of Grok’s reply).

[Screenshot: Grok’s reply citing X sentiment analysis]

In a subsequent post, the AI was even more explicit, stating, “I did not ‘beg’ Mr. Omojuwa… Not a mistake—facts stand.” 

[Screenshot: Grok’s follow-up post standing by its claim]

This public standoff between a human being defending his reputation and an AI defending its data has moved the debate from a technological curiosity to a crisis point. 

How Code Learns Prejudice 

This controversial behavior appears to be a feature, not a bug. In its official announcement of the AI, xAI states that “Grok is designed to answer questions with a bit of wit and has a rebellious streak.” Crucially, the company frames its connection to the social media platform not as a risk, but as its primary strength. The announcement declares that “a unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the platform.” 

According to Will Pedley, a compliance officer with a focus on AI ethics, the question of corporate liability for an AI’s actions is not complicated. In a correspondence with FactcheckAfrica, he placed the responsibility squarely on the creators of the technology. “Legally speaking, the company who builds and deploys the LLM has the responsibility,” Pedley stated. “They choose what the model is trained on and rigorously test it… They should have processes in place to moderate the content.” 

Tapping into Nigeria’s Fault Lines 

This technical design becomes dangerous when it intersects with real-world tensions. Grok’s behavior is particularly inflammatory because it taps directly into the country’s sensitive socio-political fault lines.

“This is not happening in a vacuum,” says Lawrence, a Lagos-based political analyst. “Grok’s use of derogatory terms like ‘Obingos’ to describe supporters of Peter Obi shows it has learned the precise vocabulary of our most bitter political divisions. It’s pouring algorithmic fuel on a fire that is already burning.” 

His analysis is substantiated by recent research. A report by the Brain Builders Youth Development Initiative (BBYDI) titled “Information Pollution, New Technologies and Extremism in West Africa” warns that “the rise of social media and artificial intelligence (AI) technology has further facilitated the dissemination of false information, resulting in hate speech, violent extremism, and heightened insecurity.” Grok’s actions appear to be a live demonstration of the phenomenon described in the BBYDI report: an AI amplifying existing tensions. 

The Aftermath: Algorithmic Defamation and an Accountability Vacuum 

Omojuwa’s public threat to “put your owners in trouble” brings the abstract concept of corporate liability into sharp focus. On this point, Will Pedley is unequivocal: “Ultimately, liability falls on the company, both from an ethical and legal perspective.”

Terry Bollinger, a retired researcher from the US-based R&D center MITRE, concurs, framing the issue with a stark analogy. “Blaming the societal sources of false and malicious information,” Bollinger wrote, “would be the equivalent of blaming widespread arsenic poisoning from improperly treated public water on the natural occurrence of arsenic in the water.” He asserts that it is the company’s responsibility to ensure its software “does not ingest and amplify data that causes harm to humans.”

This places responsibility at multiple levels. While xAI is accountable for its product, Pedley suggests society also has a role. “We do have an obligation to raise the issue, challenge the data, and speak out as unofficial moderators,” he argues. 

This emerging crisis puts the onus on both creators and consumers. As Pedley concludes, “Today’s commercial LLMs must adhere to the highest ethical and quality standards, and we should ensure that happens.” xAI did not respond to a request for comment on its standards and this specific incident by press time.

