Israel-Iran Crisis Fuels Viral Misinformation: AI-Generated and Miscaptioned Content Floods Social Media

BY: Mustapha Lawal
As tensions escalate between Israel and Iran following the June 2025 exchange of strikes, the internet has become a battlefield of its own. Misinformation and disinformation, often AI-generated or miscaptioned, have flooded social media platforms such as TikTok, X (formerly Twitter), Facebook, and WhatsApp. The consequences of these false narratives extend beyond digital confusion: they threaten public understanding, exacerbate geopolitical tensions, and sow panic across national borders.
Several striking examples have emerged in the past week. One widely circulated video titled “Doomsday in Tel Aviv” appeared to show apocalyptic destruction allegedly caused by Iranian missile strikes. However, visual analysis by Full Fact and DW showed that the footage had been uploaded as early as May 28, weeks before the current conflict, and bore clear signs of AI generation: vehicles appeared to glitch into each other, blur inconsistently, or morph into unrecognizable shapes, all anomalies typical of generative AI video tools.
Another example involves an image purporting to show Tel Aviv’s Ben Gurion Airport in ruins, with destroyed passenger planes strewn across the tarmac. A reverse image search traced the visual back to a now-deleted TikTok video posted on June 15. Closer inspection revealed bizarre rendering glitches: window portholes floating in empty space, shadows inconsistent with real-world physics, and incomplete aircraft structures, all hallmarks of AI-generated imagery.
Not all of the misinformation is synthetic. Much of the current wave involves real footage from past or unrelated events, repackaged to fit the present narrative. One such clip, a sequence of missile launches, was shared with captions claiming it showed Iranian rockets targeting Israeli cities. In fact, the footage had been uploaded in October 2024, long before the current hostilities; although it likely shows Iranian military activity, it has no connection to the June 2025 escalation.
Other videos have been geographically misattributed. For instance, a dramatic clip of a drone strike in an urban area was widely reposted as evidence of Iranian drone attacks on Tel Aviv. In fact, the original footage came from Russian drone strikes on Kyiv in October 2022; the clip had been flipped horizontally and stripped of its original context, disguising its Ukrainian origin.
Similarly, a video of a raging fire in a densely populated area was claimed to show an Iranian assault on Tel Aviv. But distinctive buildings in the footage were matched to the site of a motorcycle parking lot fire in China on June 11, 2025; the landmarks lined up with satellite imagery and Chinese news coverage of the incident.
The misinformation storm hasn’t stopped at videos. Claims have circulated alongside AI-generated news presenters and fake voiceovers, including one alleging that Elon Musk plans to move his companies to Iran in praise of the country’s supposedly “tax-free” status. The claim was paired with synthetic video narrations presenting fabricated policy announcements attributed to Iranian leadership. A closer look traced the source to accounts that regularly post exaggerated or false pro-Iran content, and no credible outlet or official statement has corroborated the supposed tax-free declaration.
The consequences of such narratives are wide-reaching. Not only do they mislead viewers and compromise public understanding, but they also risk escalating tensions and fueling extremist sentiments. When fabricated images of bombed cities or collapsed airports circulate unchecked, they incite fear and can trigger real-world reactions, including panic buying, increased hostility, or retaliatory rhetoric.
The viral spread of this content is amplified by the emotional intensity surrounding conflicts and by AI tools that produce visually persuasive, albeit fake, imagery. As noted by The Canadian Press and DW, several pieces of viral misinformation about the Israel-Iran conflict originated from unknown accounts using AI editing tools and vague captions. Once posted, such content is picked up by other users, some well-meaning and others politically motivated, and spread across platforms without verification.
What makes this particularly dangerous is how it exploits the immediacy and chaos of conflict. In the fog of war, people are desperate for updates, and social media fills the gap left by slower, verified news. Bad actors weaponize this urgency, knowing that a shocking clip shared early can shape perceptions more powerfully than any correction issued later.
In a global context, the Israel-Iran disinformation wave is not isolated. It follows patterns observed during the Ukraine war, the Sudan crisis, and even domestic political unrest in countries like Nigeria and the U.S. Misinformation has become a standard companion to geopolitical conflict, raising urgent questions about how to fortify the information ecosystem.
Fact-checkers continue to track and debunk such narratives. But tackling the problem requires systemic effort: digital literacy campaigns, improved platform moderation, and greater public skepticism towards unverified visuals and emotionally charged posts. As more sophisticated tools for synthetic media emerge, so must stronger systems for media accountability.
The Israel-Iran crisis is yet another reminder that in times of conflict, truth becomes one of the earliest casualties. But by analyzing, verifying, and exposing disinformation, we can at least begin to contain the damage it inflicts, both online and in the real world.