AI-generated videos and recycled war footage are warping public perception of the Iran–Israel conflict. What looks real is increasingly synthetic and dangerously effective.
According to the BBC, game clips and AI-generated content are being “passed off as real events” across social media. This marks what experts call the first major conflict where generative AI is shaping the information battlefield.
The deepfake doctrine
The latest escalation between Israel and Iran has unleashed not only rockets and drones but also digital chaos. As military strikes erupted, a parallel war ignited online, one designed to deceive and destabilize.
As reported by the BBC, verification group Geoconfirmed flagged a surge of misleading media, including recycled strike footage, video game clips, deepfakes, and “AI-generated content being passed off as real events.” These have been widely shared and often mistaken for genuine footage, twisting perceptions of the conflict with unprecedented speed.
“This is the first time we’ve seen generative AI be used at scale during a conflict,” noted Emmanuelle Saliba, chief investigative officer of the analyst group Get Real. Saliba’s assessment underscores how fast synthetic media is becoming a core weapon in modern information warfare.
AI arms every side
Both Iran and Israel are using AI to steer the narrative in their favor. Fake attack scenes are spreading rapidly across TikTok, X, Facebook, and Instagram.
Iranian state media published an AI-generated image of a downed F-35 fighter jet. At the same time, pro-Iran TikTok accounts shared flight simulator clips as real airstrikes, reaching over 21 million views before removal. Fake videos showing missile damage in Tel Aviv and Ben Gurion Airport were shared, with some traced to Iran-linked sources.
Meanwhile, the Israel Defense Forces reposted on X outdated footage of a missile strike, which was later flagged as misleading. POLITICO also reported that pro-Israel accounts shared AI-generated images of US bombers over Tehran, fabricated protest scenes aimed at mocking Iran, and fake missile launch visuals.
Fact-checking in freefall
As war-related videos circulate, many X users turn to Grok for fact-checking, but the AI chatbot often gets it wrong.
In one case, Grok insisted an AI-generated video showing missile trucks emerging from a mountainside was real, despite visual errors like rocks moving on their own. It cited Newsweek and Reuters and repeatedly told users to “check trusted news for clarity.” X declined to comment.
“This highlights a broader crisis in today’s online information landscape: the erosion of trust in digital content,” BitMindAI founder Ken Jon Miyachi told AFP. Miyachi emphasized the urgent need for better detection tools, media literacy, and stricter platform responsibility to protect the integrity of public discourse.