The Satellite Image That Wasn't: How AI Faked Its Way Into the Iran Conflict
On a Tuesday afternoon, it takes about as long as making coffee to turn a black-and-white satellite image into something that looks real enough to circulate online during a shooting war.
That is the experiment: take a commercial satellite capture, run it through a free AI colorization tool, post the result. See what happens. What happened in February 2026 was that MizarVision — a Chinese AI startup with fewer than 200 employees in Hangzhou and Shanghai, according to News18 — did exactly that with a Vantor News Bureau WorldView-1 capture of Iran's Konarak Naval Base during the opening day of U.S. strikes on Iran. A ship was burning. It wasn't. Stephen Wood, Vantor's senior director, saw the fake version circulate online and recognized it immediately because he had the original; MizarVision had taken a same-morning black-and-white release and colorized it in minutes.
The incident, which University of Washington's Bo Zhao describes as a sign of how rapidly the threat landscape has shifted, is not isolated. On January 29, 2026, news reports of a militant attack on an airport in Niamey, Niger, featured an image with AI-generated smoke and fires. Vantor's GeoEye-1 satellite captured imagery of the same airport on the same day. When Wood's team compared the two, they found the online image was entirely fabricated and not even of the correct airport. "With so much collection available, commercial satellite imagery makes it easy to point out a fake image," Wood said — a reassurance that depends entirely on having the original.
When Seeing Stopped Being Believing
Bo Zhao, a professor of geography at the University of Washington, coined the term "deep fake geography" in a 2021 paper published in Cartography and Geographic Information Science. Zhao and colleagues demonstrated that AI could take real satellite imagery of one city and blend in terrain features from another, creating convincing composite images of places that did not exist. The goal was not to create fakes, but to understand how to detect them. "We hoped to learn how to detect fake images so that geographers can begin to develop data literacy tools, similar to today's fact-checking services," Zhao wrote.
Five years later, the threat Zhao described has arrived in operational form.
"In the recent Iran-related events, we've already seen AI-generated or AI-altered 'satellite' images circulating as public-facing information manipulation," Zhao said by email. "More importantly, this trajectory is almost inevitable and will only become easier over time, especially as systems like ChatGPT can now generate highly realistic 'remote sensing' imagery. This is a qualitative shift from earlier techniques."
Frank Backes, president of the Space Information Sharing and Analysis Center and former CEO of synthetic aperture radar company Capella Space, puts it more bluntly. "I'm not saying it's never occurred, but that hasn't become a common norm by any means," he said of falsified satellite imagery, in what reads as both reassurance and acknowledgment that this is about to change.
The English-language Tehran Times also distributed a fabricated image appearing to show the destruction of a U.S. radar base in Qatar. It was actually an AI-manipulated Google Earth image of a U.S. base in Bahrain, according to the Central European Media Digital Observatory.
The Arms Race
The obvious question is whether the geospatial industry can detect these fakes faster than adversaries can create them. The early evidence is mixed.
A November 2025 paper on arXiv, "Deepfake Geography: Detecting AI-Generated Satellite Images," provides some grounds for cautious optimism. Researchers compared Vision Transformers (ViT-B/16) against Convolutional Neural Networks (ResNet-50) for detecting AI-generated satellite imagery using a curated dataset of over 130,000 labeled images. The Vision Transformer achieved 95.11% accuracy compared to 87.02% for the CNN. The ViT's advantage came from its ability to model long-range dependencies and detect structural inconsistencies across an entire scene, rather than just local texture artifacts.
The paper's authors temper that optimism with a warning, though: "As generative models advance, preserving the authenticity of satellite imagery is vital to maintain trust in remote-sensing data. The rapid progress of these models introduces new risks of large-scale misinformation and forged geospatial content."
Zhao puts it more directly: "We are entering a familiar technological arms race: generation techniques evolve, detection methods follow, and are quickly surpassed."
The detection methods work best when you know what to look for. Current generative models tend to produce subtle structural artifacts: repeated terrain textures, unnatural shadow directions, or terrain transitions that don't match the geography. But these are exactly the artifacts that get harder to spot as models improve. "Once synthetic imagery crosses a certain threshold of realism, methods based on visual artifacts, statistical distributions, or frequency analysis will likely become less reliable," Zhao said.
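One of the artifacts Zhao mentions, repeated terrain textures, can be illustrated with a toy statistic. The sketch below is not any published detector; it simply hashes fixed-size tiles of a grayscale image and measures how often an identical tile recurs, on the assumption that natural terrain almost never repeats pixel-for-pixel while tiled or copy-pasted generator output does:

```python
from collections import Counter

def repeated_tile_score(pixels, tile=8):
    """Fraction of fixed-size tiles that appear more than once.
    pixels: 2D list of grayscale values. Natural terrain rarely
    repeats exactly; tiled generator texture pushes this score up."""
    h, w = len(pixels), len(pixels[0])
    tiles = Counter()
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = tuple(tuple(row[x:x + tile]) for row in pixels[y:y + tile])
            tiles[patch] += 1
    total = sum(tiles.values())
    dupes = sum(c for c in tiles.values() if c > 1)
    return dupes / total if total else 0.0

# A synthetic 16x16 "image" whose left and right halves are exact copies
row = list(range(8)) * 2
fake = [row[:] for _ in range(16)]
print(repeated_tile_score(fake))  # 1.0: every tile has a duplicate
```

As Zhao notes, exactly this kind of artifact-based check degrades as generators improve: a model that adds even slight per-tile noise would drive the score back to zero.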
The Chain of Custody Answer
The geospatial industry's most concrete response is not technical but procedural: the chain of custody model. Reputable commercial satellite providers deliver imagery with an extensive metadata package that identifies the sensor, date and time of capture, precise location, and the full path from satellite tasking to data delivery. When that chain is intact, Wood said, "you know that nobody outside the organization could have touched an image."
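The chain-of-custody idea can be sketched as a hash chain over the delivery pipeline, in the style of a tamper-evident log. This is a toy illustration, not Vantor's actual metadata format, and every field name here is hypothetical:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable digest of one pipeline record (sorted keys for determinism)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def verify_chain(records) -> bool:
    """Each record carries the hash of its predecessor; altering any
    upstream record breaks every link that follows it."""
    return all(cur["prev_hash"] == record_hash(prev)
               for prev, cur in zip(records, records[1:]))

# Hypothetical three-step pipeline: tasking -> capture -> delivery
tasking = {"step": "tasking", "sensor": "WV-1", "time": "2026-02-03T06:12Z",
           "prev_hash": None}
capture = {"step": "capture", "cloud_pct": 4, "prev_hash": record_hash(tasking)}
delivery = {"step": "delivery", "prev_hash": record_hash(capture)}

chain = [tasking, capture, delivery]
print(verify_chain(chain))   # True: chain is intact
capture["cloud_pct"] = 0     # tamper with a mid-chain record
print(verify_chain(chain))   # False: the downstream link no longer matches
```

The point of the structure is the one Wood makes: an intact chain lets a provider assert that nobody outside the organization touched the image between capture and delivery.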
Luke Fischer, co-founder and CEO of geospatial marketplace SkyFi, which aggregates imagery from over 50 providers, frames the new reality plainly. "It's no longer about believing the pixel. It's about verifying the pipeline," he told SpaceNews.
Customers purchasing from established vendors are generally prohibited from reselling imagery, which limits the ability to launder altered versions through secondary markets. Adam Maher, CEO of Ursa Space Systems, said the abundance of commercial imagery actually helps verification. His company compares multiple sources including electro-optical, synthetic aperture radar, automatic identification system, and open-source intelligence data to cross-check what is happening in a given area. "Today, because of the commercial availability of data, you can verify imagery with a second source," he said.
This is the industry's genuine advantage: unlike a viral photograph or a social media post, commercial satellite imagery comes with an audit trail. If a fake image is claimed to come from a specific satellite on a specific date, you can check whether that satellite was actually pointed at that location at that time using orbital metadata known as two-line elements.
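One cheap orbital sanity check can be done from a two-line element set alone: the inclination in TLE line 2 bounds the latitudes a satellite's ground track can reach, so a claimed capture of a site well outside that band is physically impossible. The sketch below parses the inclination field (columns 9 to 16 of line 2) from an illustrative, made-up TLE; a real verification would propagate the full TLE with something like the sgp4 library to check the time and longitude as well:

```python
def max_imaging_latitude(tle_line2: str, off_nadir_margin_deg: float = 3.0) -> float:
    """Highest latitude the ground track reaches, plus a small margin
    for off-nadir pointing. Inclination sits in columns 9-16 of line 2."""
    incl = float(tle_line2[8:16])
    # Retrograde (e.g. sun-synchronous) orbits peak at 180 - i degrees.
    peak = incl if incl <= 90.0 else 180.0 - incl
    return peak + off_nadir_margin_deg

# Illustrative TLE line 2: the element values are placeholders, not real data
line2 = "2 31490  97.3900 123.4567 0001234  90.1234 270.1234 15.24000000123456"

# Konarak sits near 25.4 N, far inside a ~97.4-degree sun-synchronous
# orbit's reach, so this check can only rule impossible claims out,
# never confirm that a given pass produced a given image.
print(max_imaging_latitude(line2))  # roughly 85.6
```

The asymmetry in that last comment is the general shape of TLE-based verification: it is good at falsifying a fabricated attribution, and silent about everything else.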
But this defense only works for images that originate from the legitimate commercial market. It does not protect against AI-generated images that are fabricated wholesale and presented without attribution to any satellite.
The Geopolitical Dimension
The Iran incidents illustrate the operational stakes. State actors now have clear incentive to use AI-altered satellite imagery as part of information operations during active conflicts. A convincing fake image of a burning ship or a destroyed radar installation, if picked up by news organizations and amplified on social media, can shape public perception of a conflict before fact-checkers have time to respond.
This is different from the earlier era of image manipulation, which required significant skill and access to editing software. AI tools can now take a genuine black-and-white commercial satellite image and produce a colorized, altered version in minutes. The democratization of this capability is precisely what Zhao's 2021 warnings anticipated.
The broader implication, as Zhao sees it, is that societies may be heading toward "a post-truth information environment where the issue is not simply whether an image is real, but how societies negotiate uncertainty and visually mediated evidence." Technical detection of AI artifacts will need to be combined with tracking data provenance and, critically, public AI literacy. "Ultimately, this is not just a technical problem, it's a societal one," he said.
For the commercial satellite industry, the opportunity is to position credible providers as a trusted layer in an increasingly untrusted information ecosystem. The irony is that the same AI capabilities threatening the credibility of imagery also enable faster detection of forgeries. "AI is both the problem and the solution," as Zhao put it.
The MizarVision image did not fool anyone who had access to Vantor's original. That is the industry's argument: the solution to fake satellite images is more real ones, with better provenance, from providers who can prove what they delivered and when. Whether that argument holds as generative AI continues to improve is the question the industry has not yet answered.