Real photos of real events are being “enhanced” with artificial intelligence in ways that experts say distort the public’s perception of what is actually happening on the ground. This new and subtle form of disinformation is adding to the fog of conflict as the war sparked by US-Israeli strikes on Iran continues into its second week.
According to an AFP investigation released on March 9, real photos from the conflict are being processed with AI to sharpen details, heighten colors, and exaggerate scenes. The result is imagery that looks more polished than the original but conveys a significantly different narrative.
When The Real Turns Into The Unreal
A widely circulated image shows a US pilot kneeling in front of a Kuwaiti citizen shortly after ejecting from his downed aircraft. Media sites published the high-resolution photo, which went viral on social media. However, AFP fact-checkers noted that the pilot appeared to have only four fingers on each hand, and detection tools confirmed that the picture carried a SynthID watermark, a sign that Google AI had been used to process it.
But the underlying event was real. On March 2, satellite imagery verified the location, and video of the same scene appeared on Telegram, coinciding with rumors that Kuwait had unintentionally shot down three US jets. AFP discovered an earlier, blurrier version of the image that AI detection systems identified as genuine, indicating that it was the source of the enhanced version.
In another instance, following Iranian strikes on March 1, a striking picture of a huge fire near Iraq’s Erbil airport went viral online. The original photo depicted the same scene, but the fire was much smaller and the colors far less vibrant. Once again, a SynthID watermark pointed to Google AI’s involvement.
Telling A Completely Different Tale
Experts caution that this type of modification is harder to identify than outright fabrication. “AI-enhancement may subtly alter textures, faces, lighting, or background details, creating an image that looks more ‘real’ than the original,” stated University of Amsterdam AI professor Evangelos Kanoulas. This may “strengthen a particular narrative — for example, making a protest appear more violent, making a crowd appear larger, making facial expressions more intense.”
James O’Brien, a computer science professor at the University of California, Berkeley, warned that “even little changes can end up telling a very different story” and “could change the perception of events.”
Deteriorating Trust On A Large Scale
According to Shayan Sardarizadeh of BBC Verify, this conflict may have “already broken the record for the highest number of AI-generated videos and images that have gone viral during a conflict.” The team documented videos of strikes on Tel Aviv, AI-generated satellite imagery, and altered material that had been viewed hundreds of millions of times.
Deliberate propaganda is not the only issue. Kanoulas pointed out that generative AI can “hallucinate” components that were not present in the original image. If not properly labeled, AI-enhanced photos significantly weaken the public’s capacity to distinguish reality from distortion. “People start doubting authentic images as well,” stated Kanoulas.