Explainer: How AI Is Fueling Misinformation and Election Interference in Nigeria

BY: Mustapha Lawal

Artificial intelligence did not invent misinformation. Long before generative AI tools became widely accessible, false claims, propaganda, and coordinated disinformation campaigns were already embedded in Nigeria's political culture. What AI has done is scale these practices, making them faster, cheaper, harder to detect, and more difficult to counter, particularly during elections.

As Nigeria approaches another major electoral cycle, AI-powered misinformation has become one of the most serious threats to the integrity of its democratic process. From deepfake videos and synthetic images to AI-generated text masquerading as credible political analysis, the country’s information ecosystem is being reshaped in ways that existing institutions are struggling to manage.

From Rumours to Synthetic Reality

One of the clearest shifts introduced by AI is the movement from crude rumours to highly convincing synthetic content. During and after Nigeria's 2023 general elections, fact-checkers documented a rise in manipulated visuals falsely attributed to polling units, electoral officials, and political figures. While earlier election cycles relied heavily on recycled images or miscaptioned videos, newer incidents increasingly involved AI-assisted editing: face swaps, altered backgrounds, and fabricated documents designed to appear official.

A notable example was the circulation of manipulated result sheets purportedly showing pre-filled outcomes before voting had concluded. Although many of these were eventually debunked, the speed at which they spread, often within closed WhatsApp and Telegram groups, meant the corrections reached far fewer people than the falsehoods themselves.

More recently, AI-generated images falsely showing Nigerian politicians endorsing rival candidates have surfaced online, exploiting public familiarity with campaign visuals. These images often evade basic detection tools because they are not fully synthetic but lightly altered, a known blind spot in most free AI-detection systems.

AI and the Automation of Political Narratives

Beyond visuals, AI has enabled the mass production of persuasive political text. Large language models can now generate thousands of coherent posts framing election outcomes as “rigged,” “stolen,” or “illegitimate,” tailored to different ethnic, religious, or regional audiences.

In Nigeria, this has amplified long-standing fault lines. During election periods, coordinated networks deploy AI-generated commentary that mimics the tone of grassroots voices (activists, clerics, lawyers, or civil society actors), giving the illusion of widespread consensus. In reality, many of these narratives are centrally produced and algorithmically distributed.

Fact-checkers observed this pattern during debates over the credibility of the Independent National Electoral Commission (INEC). While legitimate criticisms existed, AI-generated content exaggerated claims, invented court rulings, and falsely attributed statements to judges and senior lawyers. These posts were often shared alongside real news articles, blurring the boundary between verified reporting and fabricated analysis.

Deepfakes and the Crisis of Visual Trust

Perhaps the most dangerous development is the erosion of trust in audiovisual evidence itself. As AI-generated audio and video become more realistic, Nigerians are increasingly unsure what to believe, even when content is authentic.

This “liar’s dividend” effect has already surfaced. When genuine recordings of politicians making controversial remarks circulate, supporters frequently dismiss them as “AI-generated” without evidence. Conversely, fabricated clips are sometimes accepted as real because they align with existing political beliefs.

The attempted use of deepfake-style videos during subnational elections and party primaries signals what may come in future national polls. Without rapid forensic capacity and clear public guidance, electoral disputes risk being fought not just in courts and streets, but in competing realities online.

Platform Failures and Detection Gaps

Technology companies often present AI detection tools as safeguards, but their effectiveness in Nigeria remains limited. Most publicly available detectors are trained to identify fully AI-generated content, not partially edited media, which is precisely the type most commonly used in Nigerian political misinformation.

In several recent cases investigated by FactCheckAfrica, AI detection tools returned “no manipulation detected” verdicts for images later confirmed by forensic experts to be altered. This false reassurance is dangerous. It encourages blind trust in tools that are not designed for the complexity of real-world political disinformation.

Compounding this problem is the lack of platform transparency. Nigerian users rarely know why certain political content trends or who paid to promote it. AI-driven amplification systems reward emotionally charged falsehoods, while fact-checks and corrections struggle to achieve comparable reach.

Elections Without a Shared Reality

Elections depend not only on ballots but on a shared understanding of facts: who won, how votes were counted, and what the law says. AI-fuelled misinformation undermines this shared reality.

When voters are exposed to parallel streams of synthetic evidence, such as fake results, fabricated endorsements, and invented court judgments, trust collapses. In such an environment, losing candidates find it easier to reject outcomes outright, and winning candidates struggle to govern with legitimacy.

Nigeria’s experience mirrors trends across Africa, but its size, influence, and digital penetration make the stakes particularly high. What happens in Nigeria often sets a precedent for the region.

Beyond Tools: A Human-Centred Response

AI alone cannot fix the problems it has helped create. While detection technologies will improve, so will the sophistication of manipulation. The more durable defence lies in strengthening human systems: investigative journalism, professional fact-checking, electoral transparency, and widespread media and information literacy.

Initiatives such as MyAIFactChecker demonstrate how AI can also be used defensively, helping citizens interrogate claims rather than accept them at face value. But tools must be paired with public education and institutional accountability.

If Nigeria is to safeguard its elections in the age of artificial intelligence, it must recognise a hard truth: misinformation is no longer just a communication problem. It is a structural risk to democracy itself. And AI has made ignoring that risk no longer an option.

Editor's Note: This article is part of a FactCheckAfrica series examining the growing harms of artificial intelligence (AI) across Africa, with a particular focus on Nigeria. The series explores how AI is amplifying existing inequalities, distorting information ecosystems, and reshaping power in fragile democracies.
