By Mustapha Lawal
A recent controversy over Nigeria’s Federal Inland Revenue Service (FIRS) and claims about the Tax Identification Number (TIN) reminds us of a troubling pattern: misinformation does not always come from fake stories deliberately crafted to deceive. Sometimes it is born of misinterpretation, careless reporting, or headlines stripped of vital context. These mistakes, though sometimes unintentional, can be just as damaging as deliberate disinformation campaigns.
When Panic Spreads Faster Than the Truth
In August 2025, social media erupted with the claim that Nigerians would no longer be able to open or operate bank accounts without first obtaining a separate TIN. The story was widely shared, amplified by alarming headlines, and quickly created anxiety among citizens already weary of bureaucratic hurdles.
Yet, when FIRS responded, it clarified that this was misleading. Nigerians are not required to obtain a new or separate TIN; the system has been integrated with existing national registries like the National Identification Number (NIN) and Corporate Affairs Commission records. For most people, no additional step was required. What sounded like a new barrier to banking access was, in reality, nothing more than a digital integration. But by the time this clarification came, the fear and distrust had already spread.
When Numbers Are Taken Out of Context
This incident echoes other episodes where facts were not so much fabricated as they were misrepresented. Earlier this year, Nigerian media and blogs reported widely that “25% of Nigerian men are not the biological fathers of their children.”
The claim, as sensational as it sounded, did not withstand scrutiny. It was based on findings from a single Lagos-based laboratory, SmartDNA, which had noted that in the tests it conducted, mostly for men seeking immigration documents or already suspicious of paternity, about one in four samples resulted in a paternity exclusion.
What the reports failed to mention was that this was a very specific, high-risk group, not a representative sample of the entire country. To generalise their results to “all Nigerian men” was misleading at best, dangerous at worst. As a result, families were left unsettled, and public discourse was skewed towards mistrust, all because headlines simplified and sensationalised what should have been a cautious, limited finding.
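The gap between a laboratory's figure and the true population rate is, at bottom, base-rate arithmetic: men who pay for a paternity test are far more likely to already have reason for doubt, so the rate among testers can dwarf the rate among all fathers. A minimal sketch of that reasoning, using entirely hypothetical numbers chosen only for illustration (none of these values come from SmartDNA or any real study):

```python
# Illustrative base-rate arithmetic: why a rate measured in a self-selected
# group of testers overstates the population-wide rate.
# All numbers below are hypothetical assumptions, not real statistics.

p_nonpaternity = 0.03    # assumed population-wide non-paternity rate
p_test_if_nonpat = 0.30  # assumed: suspicious cases are far likelier to seek a test
p_test_if_pat = 0.03     # assumed: confident fathers rarely test

# Bayes' rule: P(non-paternity | man took a test)
p_test = (p_nonpaternity * p_test_if_nonpat
          + (1 - p_nonpaternity) * p_test_if_pat)
rate_among_testers = p_nonpaternity * p_test_if_nonpat / p_test

print(f"Assumed population rate: {p_nonpaternity:.0%}")
print(f"Rate among men who took a test: {rate_among_testers:.0%}")
```

Under these made-up inputs, a 3% population rate produces roughly a one-in-four exclusion rate among testers, which is exactly why a clinic's caseload cannot be read as a national statistic.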
Deepfakes and the AI Challenge
Then there was the case of the doctored image that appeared to show Nigerian officials cutting a ribbon at a booth marked “AL28 NIGERIA” during the Tokyo International Conference on African Development (TICAD 9). At first glance, the picture looked authentic, and it spread quickly on social media. But closer inspection, and expert verification from deepfake analysts, revealed it was AI-generated.
The most troubling part? Several publicly available AI-detection tools could not identify it as manipulated, often giving a clean bill of authenticity. Here, the danger was twofold: not only did the image attempt to pass as truth, but the very tools designed to help us detect such fakes were easily fooled. In an era where images and videos carry immense persuasive power, such missteps expose just how vulnerable we are.
Old Myths That Refuse to Die
Of course, this is not entirely new. Misinformation thrives when small truths are twisted into sweeping claims. Back in the early 2000s, a study conducted among a group of Lagos traders suggested that 77% of respondents used skin-lightening products.
Somehow, that narrow finding ballooned into the widely repeated claim, still alive today, that “77% of Nigerian women bleach their skin.” Outlets from local newspapers to the BBC and The Guardian in the UK, CNN in the US, eNCA and News24 in South Africa, Vanguard and Pulse in Nigeria, and the pan-African platform This is Africa, amongst many others, cited the figure as though it reflected the entire female population of Nigeria and West Africa.
The author of the original study later clarified that her research was limited to traders in one city, not Nigerian women as a whole. But by then, the number had travelled too far, embedding itself into global perceptions of Nigeria.
The thread running through these cases is clear. Misreporting, whether born of haste, oversimplification, or the hunt for a catchy headline, has consequences. It can create unnecessary fear, like the panic over bank accounts and TINs. It can destabilise families and erode trust, as seen in the misrepresentation of paternity test data. It can fuel international ridicule or suspicion, as with the bleaching statistic. And in the age of AI, it can even generate completely fabricated realities, leaving both citizens and detection systems unable to tell the truth from falsehood.
What the Research Tells Us
Research backs up these concerns. A study by the Centre for Democracy and Development (CDD) found that nearly 40% of Nigerians have shared misinformation online without realising it, highlighting just how easily falsehoods can blend into everyday digital life.
The Reuters Institute’s 2024 Digital News Report similarly noted that misinformation erodes trust in mainstream media: in Nigeria, trust in news fell to 34%, one of the lowest globally. This erosion of trust spills into politics and governance. During Nigeria’s 2023 elections, fact-checkers documented waves of misleading content that not only misinformed voters but deepened polarisation.
Globally, the World Health Organization has warned that misinformation during the COVID-19 pandemic, what it called an “infodemic”, directly undermined public health measures and vaccine uptake.
What makes this more alarming is how difficult it is to correct the record once misinformation takes hold. A false claim gains traction, gets shared, is cited in further reports, and eventually becomes accepted as fact, even when the truth is available. The damage outpaces the correction, and the correction rarely travels as far as the original claim.
Beyond Technology: The Case for Media Literacy
So what is our best defence? Technology is part of the answer, but it is far from foolproof. As the AI-generated TICAD image shows, detection tools can be tricked. Large language models like Grok, Gemini, or even ChatGPT occasionally misinterpret sarcasm, irony, or context, and in doing so may confidently present falsehoods as truth. The deeper solution lies in human oversight and public literacy. We must demand better not only of our journalists, who should ground their reporting in evidence, nuance, and accuracy, but also of ourselves as news consumers.
Media literacy is not optional in this landscape; it is survival.
The lesson is sobering: misinformation is not only about malicious fabrications but also about misreporting, misrepresentation, and misunderstanding. And unless we address this, the line between fact and fiction will only grow thinner.
Our strongest shield remains a well-informed public, able to question, to contextualise, and to resist the seduction of headlines that sound good but tell us little of the truth.