The Rise of AI-Generated Disinformation in Conflict Zones
Every morning, millions of people living in conflict zones turn to their phones for news. What they see is not just real events but a flood of synthetic content that can be as deceptive as it is convincing. This phenomenon is not accidental; it is the result of an industry that has grown rapidly with the help of artificial intelligence (AI) tools.
In recent months, social media platforms have been flooded with videos and images from the Israeli-American military campaign against Iran. Some of these are authentic, raw footage captured on smartphones showing bombarded streets from Tehran to Beirut. However, interwoven with this genuine content is a secondary stream of high-fidelity synthetic material designed to go viral. These videos are short, lack context, and often cite no sources. They are crafted to confirm what the audience already believes or to reshape their perception of reality.
The technology behind this disinformation is now more accessible than ever. Tools like Google’s Veo, OpenAI’s Sora, and xAI’s Grok allow users to generate convincing videos of public figures saying things they never said or of recognizable locations rendered as destroyed ruins. A minute of such video can be purchased for under a dollar, making it a low-cost tool for both individuals and organized actors.
The Business of Disinformation
Social media platforms, including Meta, X, TikTok, and YouTube, operate as advertising businesses. Their algorithms prioritize engagement over accuracy, often favoring content that provokes strong reactions. This creates a cycle where sensational or controversial content generates more interaction, leading to higher ad revenue. Content creators are rewarded based on views and geographic reach, with some earning hundreds or even thousands of dollars per month from platform ad-sharing.
Compounding the issue, many fabricators use networks of linked accounts to spread their content widely. This industrial approach makes creation cheap, distribution free, and the audience global. The result is a system where misinformation can spread rapidly and with minimal effort.
State Actors and the Manufacture of Reality
Beyond individual profit-seekers, there are organized actors, including governments and state-affiliated influence operations. Their goal is not money but the manufacture of reality. For example, an Arabic-language account affiliated with the Israeli Ministry of Foreign Affairs published a video claiming that Hamas had seized control of Al-Shifa Hospital. This fabricated content gained 16 million views before being debunked.
There is also a third category of participants: genuine believers. Many people share synthetic content not because they are paid to, but because it aligns with their hopes or beliefs. When such content goes viral, the harm it causes can be significant, even if it is later debunked or removed.
The Consequences of a Deceptive Media Environment
A media environment saturated with synthetic content can fuel sectarian, ethnic, and religious hostility. In Syria, for instance, a fabricated audio recording triggered widespread violence that left more than 130 people dead, a stark reminder that disinformation carries real-world consequences.
The so-called “liar’s dividend” is another byproduct of this ecosystem: once audiences can no longer reliably distinguish fact from fiction, those in power can dismiss even authentic evidence as fake. The dynamic itself is not new, but the speed and scale of AI-generated disinformation make it particularly dangerous.
Addressing the Crisis
There is no single solution to the problem of AI-enabled deception. Governments, tech platforms, civil society organizations, and individual users all have a role to play. The European Union’s AI Act represents a step forward in promoting transparency and accountability, but more needs to be done.
Platforms must invest in robust detection systems that flag and remove harmful content before it spreads. Cross-platform watermarking and provenance standards for AI-generated content could also help. Additionally, demoting borderline content and demonetizing fabricated content about sensitive events would be important steps.
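To make the provenance idea concrete, here is a minimal sketch of how a generating tool might sign a media file's metadata and a platform might later verify it. The scheme, field names, and shared-key signing are purely illustrative assumptions for this sketch; real standards such as C2PA use far richer manifests and asymmetric cryptography.

```python
import hmac
import hashlib
import json

# Illustrative only: a real deployment would use per-tool asymmetric keys,
# not a shared secret, and a standardized manifest format.
SECRET = b"tool-signing-key"

def sign_provenance(metadata: dict) -> dict:
    """Attach an integrity tag to a metadata record (hypothetical scheme)."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {**metadata, "signature": tag}

def verify_provenance(record: dict) -> bool:
    """Recompute the tag over everything except the signature and compare."""
    record = dict(record)
    tag = record.pop("signature", "")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

record = sign_provenance({"generator": "example-video-model", "ai_generated": True})
assert verify_provenance(record)    # intact record verifies

record["ai_generated"] = False      # tampering invalidates the signature
assert not verify_provenance(record)
```

The point of such a scheme is that a platform can cheaply check whether a file still carries the label its generator attached; the hard problems, key distribution across platforms and survival of labels through re-encoding, are exactly what cross-platform standards would need to solve.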
Digital literacy is equally crucial. The public needs to develop the skills to critically evaluate content before sharing it. Professional journalism remains one of the most reliable sources of information, and consulting credible fact-checkers can help verify the authenticity of content.
Without structural and individual interventions, the information environment will continue to function as a hall of mirrors, reflecting only what viewers already believe. Truth and a shared sense of reality are casualties of this crisis, as is the ability to hold powerful individuals and countries accountable for their actions.