BY: Oluwaseye Ogunsanya
The ongoing conflict between Iran and the US-Israel alliance has once again exposed the growing threat of war-related disinformation, particularly through the spread of fake images and videos across social media platforms.
The current conflict began on June 13, following Israel’s pre-emptive missile strikes on Iranian military facilities, key infrastructure, and leadership sites. Hours after the strikes were announced, President Donald Trump said they had been carried out in collaboration with the United States. Iran has since continued to exchange strikes and counterstrikes with Israel and the US. As the situation intensified, competing narratives quickly emerged online, creating fertile ground for misinformation and manipulation.
According to researchers, the scale of AI-generated visuals linked to the current Middle East conflict has surpassed anything seen in previous wars. This marks a significant shift from earlier disinformation patterns. During Russia’s invasion of Ukraine in 2022, for example, platforms were flooded with relatively crude fakes, including recycled visuals, edited images, mislabeled footage, and clips taken from video games, movies, or unrelated past events.
What is emerging now is far more sophisticated. High-quality videos and images are being deliberately created using accessible artificial intelligence tools, making them harder to detect and more convincing to audiences.
Analysts have identified numerous cases of AI-generated videos and fabricated satellite imagery being used to spread false or misleading claims about the three-week-old conflict. Collectively, these pieces of content have attracted hundreds of millions of views online, significantly amplifying their reach and impact.
Particularly concerning is the role of X, which has increasingly been described as a hub for such disinformation. The platform has faced sustained criticism over the effectiveness of its internal verification systems. In one instance highlighted by disinformation expert Tal Hagin, X’s AI-powered chatbot, Grok, “failed miserably” when asked to verify a post claiming that Iranian missiles had struck Tel Aviv.
According to Hagin, Grok repeatedly misidentified both the location and date of the video, which had originally been shared by an Iranian state-owned media outlet. In an attempt to justify its incorrect assessment, the chatbot reportedly introduced an AI-generated image as supporting evidence, further compounding the misinformation.
What this evolving landscape signals for journalism is both urgent and uncomfortable. The traditional gatekeeping role of journalists is being steadily eroded in an environment where synthetic content can be produced faster than it can be verified. The speed, scale, and sophistication of AI-generated disinformation now demand a shift from reactive fact-checking to more proactive verification systems, stronger newsroom protocols, and deeper investment in digital forensics and open-source intelligence.
At the same time, audiences are more vulnerable than ever. Unlike earlier forms of misinformation that were often easier to question, today’s AI-generated visuals are highly convincing and emotionally manipulative. In the context of war, where fear, bias, and political allegiance already shape perception, such content spreads rapidly and is more readily believed. Platforms further intensify this by amplifying sensational content, often prioritising engagement over accuracy and exposing users to repeated falsehoods.
Addressing this challenge will require more than isolated platform policies. Social media companies must move beyond reactive measures, such as the demonetisation approach exemplified by X, and commit to stronger detection systems, transparent enforcement, and clearer labelling of synthetic media. Regulatory frameworks must also evolve to reflect the realities of AI-driven information ecosystems, particularly in holding platforms accountable for the spread and monetisation of harmful disinformation.
Equally important is the role of media literacy. As the line between real and artificial content continues to blur, users must develop the ability to critically evaluate what they encounter online. Without this, even the most advanced detection systems will struggle to keep pace with the scale of misleading content.
As Hany Farid, a professor at the University of California, Berkeley, notes, staying accurately informed requires avoiding “random accounts” on social media, which are particularly unreliable during moments of global conflict. Instead, audiences must anchor their understanding in credible, established journalistic sources.
Even then, users need to cultivate a sharper eye for the subtle flaws that AI often leaves behind. While synthetic media is becoming increasingly sophisticated, it remains imperfect. Clues such as mismatched audio and video, unnatural lighting, inconsistent facial details, or visible watermarks from generation tools can flag manipulation.
If these habits are not developed, audiences risk being overwhelmed by synthetic content. Media literacy should not just be an academic concept; it should be a practical, everyday defence mechanism. By combining attention to credible sources with an awareness of technical inconsistencies, individuals can better navigate a digital environment where seeing is no longer believing.
Ultimately, the current wave of AI-driven disinformation is not just a technological problem but a structural one. It challenges how information is produced, distributed, and consumed. For journalism, platforms, and audiences alike, adapting to this new normal is no longer optional. It is necessary.