By Ibraheem Muhammad Mustapha
In an age of doubt, the global ‘liar’s dividend’ has found a fierce new battleground in politics, where the line between the real and the artificial is being deliberately blurred.
For years, the great fear surrounding artificial intelligence was deception: that we would be fooled by deepfakes so convincing they would be indistinguishable from reality. But as a recent New York Times experiment demonstrated, people are already struggling to tell the difference. This widespread confusion has given birth to a more ironic and more dangerous threat: not that we will believe the fake, but that we will be conditioned to dismiss the real.
The ‘Liar’s Dividend’: A New Global Playbook for Deception
This phenomenon has a name: the “liar’s dividend.” Coined by legal scholars Robert Chesney and Danielle Keats Citron, the term describes how, as the public grows aware that audio and video can be faked, bad actors can escape accountability by simply denouncing authentic evidence as a “deepfake.”
It’s a tactic that serves a broader, more corrosive political strategy famously articulated by Steve Bannon to “flood the zone with shit.” The goal, as journalist Sean Illing noted in a 2020 Vox article, is to create “widespread cynicism about the truth.” This cynical apathy, where nothing can be trusted, erodes the very foundation of democratic accountability by making citizens feel that objective truth is unattainable.
A Fierce Battleground: How the AI Alibi Is Playing Out in Nigeria
This global playbook for deception has found fertile ground in Nigeria’s high-stakes political arena, where it is being deployed with alarming speed.
The tactic is being used at the highest levels of government. In March 2025, a civic group raised an alarm over an alleged conspiracy to use AI-cloned voices of Senate President Godswill Akpabio to fabricate incriminating audio clips and sow political discord. Senator Natasha Akpoti-Uduaghan of Kogi State has also allegedly been the target of what a group called Action Collective described as a “fraudulent AI smear campaign.” “This is a digitally manipulated campaign of calumny, and we call on Nigerians to verify every source of information relating to Senator Natasha,” the group warned. In both cases, the primary weapon was not just the potential fake content, but the resulting public confusion that makes all information suspect.
This trend has bled from elite politics into mainstream pop culture, normalizing the AI alibi for millions. In 2024, internet personality Bobrisky claimed a viral audio recording was “AI-generated” to deflect criticism, demonstrating just how quickly this defense has become a mainstream tool of denial.
A Global Contagion: International Case Studies
Nigeria’s struggle is a reflection of a worsening global crisis. The liar’s dividend is a proven political tool worldwide. During Turkey’s 2023 election, President Erdoğan showed a manipulated video at a rally linking his opponent to a militant group. According to DW’s fact-checking team, the footage was a composite of two separate videos. When questioned, Erdoğan was dismissive, stating the video’s authenticity “doesn’t concern” him, a clear admission that the goal was to generate doubt, not present truth.
Similarly, in India, a politician released audio clips allegedly exposing an opponent’s corruption. The opponent claimed they were AI fakes. Independent analysis by Rest of World found one clip was authentic and the other likely tampered with, leaving the public trapped in a state of confusion where the actual truth becomes irrelevant.
The Psychology of Doubt and an Inadequate Response
This confusion is the strategic goal. Research by Kaylyn Jackson Schiff and her colleagues finds that the liar’s dividend grows more powerful as audiences become more familiar with deepfakes. This familiarity primes people to dismiss legitimate information, thereby creating an environment ripe for manipulation.
The responses from those with the power to mitigate this harm have been scattershot and insufficient. Technology companies like Meta, YouTube, and TikTok are moving to label AI-generated content, but this places an unfair burden on users to act as digital detectives. X (formerly Twitter) will only label “inauthentic content,” leaving a significant loophole. These policies are further hobbled by already overstretched moderation teams who cannot possibly detect every piece of manipulated media.
Similarly, lawmakers across more than a dozen US states have introduced bills requiring disclosure of AI in political ads, but this patchwork of regulation does little to address the core problem in a borderless, global information ecosystem.
The Road to 2027: A Warning for Nigeria’s Democracy
For Nigeria, the stakes are exceptionally high. A recent report by The Punch warns that AI-generated disinformation looms as a major threat to the integrity of the 2027 elections. The “AI alibi” provides the perfect cover for political actors to deny inconvenient truths and spread malicious falsehoods with impunity. As recent events have shown, the most immediate danger of artificial intelligence is not what the machines will say. It is how they provide humans with a powerful, plausible excuse to no longer believe in anything at all.




