“It appears that creators have not been deterred from sharing misleading AI-generated images and videos,” says analyst.

WASHINGTON: AI-generated videos circulating on X (Twitter), owned by Elon Musk, portray scenes such as American soldiers captured by Iran, an Israeli city reduced to ruins, and US embassies set ablaze — highlighting a wave of highly realistic deepfakes despite policy measures aimed at curbing wartime disinformation.
Researchers say the ongoing tensions in the Middle East have triggered an unprecedented flood of AI-generated visuals, surpassing anything seen in previous conflicts and often leaving social media users struggling to distinguish fabricated content from real events.
In an effort to safeguard “authentic information” during conflicts, X (Twitter) announced last week that creators who post AI-generated war videos without clearly disclosing they were artificially produced will be suspended from the platform’s revenue-sharing programme for 90 days.
Further violations could lead to permanent suspension, the platform’s head of product, Nikita Bier, warned in a post.
The new policy marks a significant shift for X (Twitter), a platform that has faced heavy criticism for becoming a hub for disinformation since Elon Musk completed his $44 billion acquisition of the company in October 2022.
The move was also welcomed by senior United States Department of State official Sarah Rogers, who described it as a “great complement” to X’s Community Notes — a crowd-sourced fact-checking system that reduces the reach, and therefore monetisation, of inaccurate content.
“The feeds I monitor are still flooded with AI-generated content about the war,” Joe Bodnar of the Institute for Strategic Dialogue told AFP.
“It appears that creators have not been deterred from sharing misleading AI-generated images and videos related to the conflict,” he added.
Bodnar highlighted a post from a prominent “blue check” X (Twitter) account — eligible for monetisation — which circulated an AI clip showing an Iranian “nuclear-capable” strike on Israel. The post received more views than Nikita Bier’s message about cracking down on AI content.
X did not respond when AFP asked how many accounts had been demonetised since Bier’s announcement.
AFP’s global network of fact-checkers — spanning countries from Brazil to India — has flagged a steady stream of AI-generated fakes related to the Middle East war, many originating from X (Twitter)’s premium accounts with purchasable blue checkmarks.
These include AI videos showing a tearful American soldier inside a bombed-out embassy, US troops on their knees beside Iranian flags, and a devastated US naval fleet.
The surge of AI-fabricated visuals, often mixed with genuine footage from the region, is growing faster than professional fact-checkers can debunk it.
X’s AI chatbot, Grok, has reportedly worsened the issue, incorrectly telling users that numerous AI-generated war visuals were real.
Researchers have also cautioned that X’s model, which allows premium accounts to earn payouts based on engagement, has amplified the financial incentive to share false or sensational content.
In one instance, a premium account posted an AI video of Dubai’s Burj Khalifa engulfed in flames, ignoring a request from Nikita Bier to label it as AI. The post stayed online and amassed over two million views.
Separately, the Tech Transparency Project reported last month that X appeared to profit from more than two dozen premium accounts belonging to Iranian government officials and state-controlled media pushing propaganda, potentially violating US sanctions.
However, researchers point out that even with strict enforcement of X’s demonetisation policy, a large number of users sharing AI-generated content are not part of the platform’s revenue-sharing programme.
These users remain subject only to fact-checking through Community Notes, a system whose effectiveness has been repeatedly questioned.
A study last year by the Digital Democracy Institute of the Americas found that over 90% of Community Notes on X are never published, underscoring significant limitations.
“X’s policy is a reasonable countermeasure to viral disinformation about the war. In principle, it reduces the incentive for spreading false content,” said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech.
“The challenge lies in implementation: metadata on AI content can be removed, and Community Notes are relatively rare,” he added.
“Mistakes are likely, and it is improbable that X can ensure both high precision and high recall for this policy.”
