FACTCHECKAFRICA
The Industrialisation of Misinformation and the Zero-Trust Economy
Looking back from early 2026, the misinformation landscape of 2025 stands out in stark contrast to previous years. In 2023, deepfakes and AI-generated content were the new toys of the digital playground – novel, intriguing, and mostly harmless.
By 2024, as these tools became more accessible, anxiety kicked in: What was real? What was fake? Could we trust our own eyes? Fast forward to 2025, and the landscape shifted again. Misinformation became a mainstream phenomenon, permeating the internet’s background noise. What was once a rare tech marvel (the deepfake) is now as common as a click on a viral tweet.
The numbers tell a stark story. AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025. Deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone. The number of deepfake files is projected to have reached 8 million in 2025, up from 500,000 in 2023. Perhaps most telling, 60% of consumers encountered a deepfake video in the past year, and only 15% claim they’ve never seen a deepfake video.
This report breaks down how this toxicity played out across four key areas in 2025 – Financial, Civic, Scientific, and Legal – and what to expect in 2026. As the World Economic Forum noted in its Global Cybersecurity Report 2025, the deepfake threat is a critical test of maintaining trust in an AI-powered world.
I. The Financial Vector: The “Arup Effect” and the Liquidity Traps
Financial Sector: Deepfakes and the Cost of Trust
The financial sector has historically relied on a bedrock of Know Your Customer (KYC) protocols to establish trust. In the last few years, that bedrock fractured. Synthetic media migrated from a tool of social media and political disinformation to a standardised weapon of corporate theft. According to Cyble, AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025, with financial institutions, publicly traded companies, and high-net-worth individuals the top targets. This marks a definitive shift from sporadic harassment to systemic risk. The global deepfake AI market is projected to reach $1,052.2 million in 2025, growing at a CAGR of 44.3%. Beyond corporate boardrooms, the threat has permeated the consumer layer: globally, 1 in 10 adults now report encountering an AI voice scam, and, devastatingly, 77% of those victims reported actual financial losses. This high success rate is driven in large part by the increasing sophistication of voice-cloning tools.
While 2025 was the year of saturation, the architectural blueprint for these crimes was visible years prior. The earliest warning signal came in 2021 with the collapse of the digital media startup Ozy Media. In a case that now reads as a proto-deepfake event, Ozy executive Samir Rao pleaded guilty to fraud and identity theft after he used voice-faking software to impersonate a YouTube executive during a conference call. His goal was to trick Goldman Sachs into investing $40 million by simulating a reference check. At the time, this was dismissed as a bizarre outlier; in retrospect, it was the first signal that voice-only verification was obsolete.
This trajectory escalated in early 2024 with the Arup heist, in which a finance worker was tricked into transferring $25 million during a video conference populated entirely by deepfake colleagues. The case proved that even video was no longer a guarantor of truth, birthing the “Arup Effect” that defined the fraud landscape of 2025. Throughout the third quarter of 2025, we observed a wave of “Executive Cloning” attacks targeting company personnel. Unlike the crude phishing of the past, these attacks used real-time video and voice synthesis capable of passing biometric security checks. In one instance, fraudsters attempted to impersonate Ferrari CEO Benedetto Vigna through AI-cloned voice calls that convincingly replicated his accent; the call was terminated only after an executive asked the caller a question that only Vigna could answer. An earlier attempt targeted WPP, the world’s biggest advertising group, whose CEO, Mark Read, was the subject of an elaborate deepfake scam that combined a high-fidelity AI voice clone with manipulated YouTube footage to impersonate him on a Microsoft Teams call. The Financial Services Information Sharing and Analysis Center warns that these attacks represent “a fundamental shift from disrupting democratic processes to directly attacking business operations”.
The financial sector is responding with increased investment in AI-powered detection tools and biometric authentication. However, experts warn that a cat-and-mouse game has begun, with AI-generated forgeries growing more sophisticated by the quarter. Deloitte predicts generative AI could push fraud losses to $40 billion in the U.S. by 2027. We end the year with a financial system that effectively operates on Zero Trust, where a phone call is no longer proof of presence.
II. Civic Engagement: Trust Erosion in the Deepfake Era
In the civic arena, 2025 did not just bring us more fake news; it broke the mechanism of accountability itself. These fabrications were ruthlessly targeted: in the first quarter of 2025 alone, politicians were successfully impersonated 56 times, creating a chaotic environment in which official statements competed with synthetic hallucinations for airtime. Nor was this a localized phenomenon; it was a global contagion. Retrospective analysis shows that between mid-2023 and mid-2024 there were 82 documented cases of political deepfakes across 38 countries, laying the foundation for the ubiquity we face today.
The defining phenomenon of the year remains the “Liar’s Dividend”, in which the mere existence of AI is used to dismiss real evidence. The pattern was set during the 2024 U.S. election cycle, which served as the beta test for the tactics normalized in 2025. We witnessed a barrage of hyper-specific, emotionally charged fabrications that defied debunking through sheer volume. The now-infamous baseless claims about immigrants eating cats and dogs in Springfield, Ohio were amplified by AI-generated imagery despite being debunked by city officials. Disaster disinformation campaigns alleged that hurricane relief funding was being diverted to undocumented immigrants, actively hampering rescue operations on the ground. We also saw targeted character assassination: deepfakes depicting Kamala Harris in compromising situations with Jeffrey Epstein, a robocall mimicking the voice of Joe Biden that encouraged Democrats not to vote, and baseless, AI-amplified narratives accusing Tim Walz of historic abuse. These were not mere rumors; they were fully visualized synthetic realities.
As noted by PBS NewsHour, campaigns in India utilized AI to break the linguistic barrier of a country with 22 official languages. Candidates’ voices were cloned and translated in real time into Hindi, Tamil, Telugu, Bengali, Marathi, and Punjabi, allowing them to converse fluently with voters in regions they had never visited. While initially constructive, this technology was quickly repurposed for manipulation. Reports from Rest of World on recent elections highlighted how these tools were used to generate hyper-localized misinformation. Instead of generic robocalls, AI agents called millions of rural voters in their specific village dialects, delivering tailored promises (or threats) that were indistinguishable from a personal call from the candidate. This created a micro-targeting crisis in which every voter heard a different, potentially fabricated, version of reality.
In the UK, a deepfake video purporting to show Sadiq Khan, the mayor of London, making inflammatory remarks about ethnic tensions spread rapidly on social media, influencing undecided voters. In Mexico, deepfakes were used to incriminate a candidate. We also saw the weaponization of international figures to influence local politics. In a precursor event in March 2024, Duduzile Zuma-Sambudla, daughter of former President Jacob Zuma, shared a deepfake video of Donald Trump explicitly endorsing her father’s uMkhonto weSizwe (MK) party. The video, which gained significant online traction, featured a synthetic clone of Trump’s voice and likeness urging South Africans to vote for the MK party. And in 2025, an AI deepfake of Canadian Prime Minister Mark Carney, released just before the election, had reached more than one million views on social media by June.
The tangible impact of this saturation is now quantifiable. A post-election survey revealed that the damage was done regardless of the eventual truth: 63% of respondents believed they had encountered election-related deepfakes, and nearly half (48%) admitted that this content influenced their voting decisions. Research published by The Alan Turing Institute paints a picture of a British public that feels under siege: according to its nationally representative survey of 1,403 people, close to nine in ten people (87.4%) in the UK are explicitly concerned about deepfakes skewing election results. Platforms like TikTok, X, and Meta responded by pledging to combat deceptive AI use, developing ways to detect and label AI-generated content. However, experts warn that as detection improves, attackers will adapt, potentially shifting to more insidious forms of influence such as “cheap fakes” or synthetic grassroots campaigns.
Cybersecurity frameworks are being updated to address these threats, but challenges persist. As the World Economic Forum noted, misinformation and disinformation remain major risks in 2025, potentially undermining election legitimacy and widening societal divides.