BY: Habeeb Adisa
As Nigeria moves toward the 2027 general elections, the intersection of artificial intelligence, elections, and democracy is increasingly being discussed. Most of that discussion, however, has centered on deepfake videos. Deepfakes are indeed a serious problem, but the focus on them obscures other, less visible threats. In fact, the more serious AI risks in Nigerian elections may not be the ones that arrive as viral videos, but the quieter ones that are already in place.
One of the underreported risks is the proliferation of AI-generated propaganda in local languages. Most fact-checking and media-monitoring efforts in Nigeria still concentrate on English-language content.
Yet political communication in Nigeria increasingly happens through WhatsApp voice notes, and much of the population communicates in local languages, including Hausa, Yoruba, Igbo, and Pidgin. Today's AI tools can automatically translate and synthesize speech in several of these languages, enabling the creation of convincing political audio that sounds like respected community figures, religious leaders, traditional rulers, or local politicians.
Imagine an AI-generated voice note spreading across the country days before the election, purporting to come from a respected cleric and urging followers to vote for a preferred candidate. By the time it is debunked, the damage will have been done, having influenced the voting behavior of thousands of unsuspecting voters. The misinformation in Nigeria's next election cycle will not be only in English.
The other overlooked threat is synthetic "grassroots support." Artificial intelligence now makes it possible to run thousands of synthetic social media accounts that converse with one another and amplify particular hashtags or candidates. To the untrained eye, they look and sound like genuine supporters of a candidate or cause. This matters because social media trends increasingly guide traditional media coverage. When a journalist reports that "Nigerians are reacting online" to a certain candidate, that judgment rests on what trends online. If those trends are driven by synthetic accounts, then the perception of what Nigerians think is being manufactured.
AI also enables a new form of psychological manipulation in politics. Rather than broadcasting one message to everyone, AI makes it possible to tailor messages to particular groups based on their fears, frustrations, and identities. Young voters struggling with unemployment might be targeted with emotionally charged messages about their economic future. Religious groups might receive messages warning of threats to their values, while ethnic groups might be reminded of historical grievances in ways calculated to provoke anger.
The complex nature of Nigerian society, in which ethnicity, religion, and regional identity all play strong roles, makes this form of manipulation particularly dangerous. AI does not simply spread misinformation; it can exacerbate societal divisions in a deeply personalized way.

Another factor that needs to be explored is how AI might affect the legitimacy of election outcomes. Although deepfakes are usually discussed in the context of pre-election campaigning, they could be deployed after the vote as well. Fabricated videos could purport to show ballot-box stuffing, bribes changing hands, or election-related violence. Even if such videos are ultimately exposed as false, they could spark protests, undermine confidence in the results, or inflame election-related disputes.
Legitimate evidence of wrongdoing could likewise be dismissed as a deepfake. This is the "liar's dividend": nefarious actors exploit confusion about what is real to their own advantage. When everything is a potential deepfake, trust is the first casualty.

Perhaps the greatest challenge, however, lies in the platforms through which misinformation spreads most effectively. Much of Nigerian political discourse now happens on WhatsApp, Telegram, and other personal messaging apps. AI tools can generate text, images, and voice notes at massive scale, and this content travels quickly through trusted networks of friends, family, and community influencers. Because these platforms are encrypted, the spread of misinformation is hard to monitor, and it is often detected only after it has already circulated widely.
Addressing these challenges is not just a matter of technology. Fact-checking organizations must build multilingual monitoring capabilities that cover Nigeria's major languages. The media must be cautious about using social media trends to gauge public opinion without first verifying that those trends are genuine. And the electoral body, civil society, and technology companies must coordinate so they can respond swiftly to misinformation at critical moments.
Public awareness is equally important. People must understand that viral audio, video, or online content does not necessarily represent reality, and media-literacy efforts must continue to evolve to meet the challenges posed by artificial intelligence. Trust in the integrity of Nigeria's democratic process has never rested on the voting system alone; it also rests on trust in the information circulating in society and in the fairness of the democratic contest. Artificial intelligence can erode that trust in ways that are difficult to detect or trace.
In the run-up to the 2027 elections, the question is no longer whether artificial intelligence will enter Nigeria's political communication landscape. It certainly will. The question is whether the country can adapt fast enough, or whether AI will simply creep into the democratic process undetected.