AI 'enhanced' images spread misinformation, distort reality - NTS News


Experts warn that AI-enhanced images circulating during the West Asia war are subtly altering real visuals and distorting perceptions of events on the ground.

As the war in West Asia intensifies, a surge of AI-driven disinformation has begun circulating online, with manipulated visuals shaping how audiences perceive events unfolding across the region. Beyond entirely fabricated images, another form of content has emerged: authentic photographs that have been digitally “enhanced” using artificial intelligence. Experts say such alterations can subtly change the appearance of real scenes, creating visuals that look more dramatic or detailed than the original photographs.

These modifications, they warn, may influence how viewers interpret what is happening on the ground. One widely shared example shows a kneeling US pilot being confronted by a Kuwaiti local shortly after parachuting from his aircraft. The high-quality image spread widely on social media and was even published by media outlets. However, closer inspection revealed an unusual detail: the pilot appears to have only four fingers on each hand.

When the image was analysed using AI detection tools, investigators found a SynthID watermark, an invisible marker designed to identify visuals produced with Google AI. Despite the AI signature, the scene itself appears to be genuine. A video depicting the same moment began circulating on social media on March 2, while satellite imagery confirmed the location. The footage also matched reports that Kuwait had mistakenly shot down three US warplanes that day.
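SynthID itself is proprietary and its detection tools are Google's own, but the general idea of an invisible watermark can be illustrated with a deliberately simplified sketch. The least-significant-bit scheme below is NOT how SynthID works (SynthID is designed to survive edits and compression, which this toy scheme does not); it only shows how a marker can hide in pixel data without visibly changing an image.

```python
# Toy invisible watermark: hide a bit string in the least significant
# bit (LSB) of grayscale pixel values. Changing the LSB shifts each
# pixel's brightness by at most 1 out of 255, imperceptible to the eye.
# This is an illustrative sketch, not SynthID's actual algorithm.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the mark."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it to `bit`
    return marked

def extract_watermark(pixels, length):
    """Read the hidden bit string back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

# 8-bit grayscale pixel values (0-255) and a 4-bit mark.
image = [200, 201, 199, 180, 150, 149, 148, 147]
mark = [1, 0, 1, 1]

watermarked = embed_watermark(image, mark)
assert all(abs(a - b) <= 1 for a, b in zip(image, watermarked))  # invisible
assert extract_watermark(watermarked, 4) == mark                 # recoverable
```

A production watermark like SynthID embeds its signal far more robustly, across the whole image and in a way that survives cropping, resizing, and recompression; the LSB trick above is defeated by almost any edit.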

AFP located an earlier version of the same image on Telegram. That version matched the widely shared photo but appeared blurry rather than sharply detailed. AI verification tools concluded the earlier image was authentic and had not been digitally enhanced. This suggests it may have served as the original image before being processed through AI tools that produced the more detailed version. Researchers say AI enhancement can change subtle elements of a photograph while maintaining the overall scene.
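Comparing a blurry earlier version with a sharper later one, as AFP did, can also be approached programmatically. The sketch below is not AFP's actual workflow; it illustrates one common technique, a perceptual "difference hash", in which mild blur or compression leaves the hash nearly unchanged while altered content moves it far away. Real tools (e.g. the `imagehash` library) first resize the image to a small grayscale grid; here the images are already tiny grids of brightness values.

```python
# Perceptual difference hash (dHash) sketch: one bit per adjacent-pixel
# comparison in each row. Small pixel noise rarely flips a comparison,
# so near-duplicates hash alike; changed content flips many bits.

def dhash(image):
    """Return a bit list: 1 where a pixel is brighter than its right neighbour."""
    bits = []
    for row in image:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 4x4 grayscale "images" (rows of brightness values, 0-255).
original = [[10, 20, 30, 40]] * 4
blurry   = [[12, 21, 29, 41]] * 4   # small noise, same left-to-right structure
altered  = [[40, 30, 20, 10]] * 4   # gradient reversed: genuinely new content

print(hamming(dhash(original), dhash(blurry)))   # 0  -> same scene
print(hamming(dhash(original), dhash(altered)))  # 12 -> substantially changed
```

A low Hamming distance suggests the two files depict the same scene, which is consistent with the blurry Telegram photo being the source of the sharper, AI-processed version.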

“AI-enhancement may subtly alter textures, faces, lighting, or background details, creating an image that looks more ‘real’ than the original,” said Evangelos Kanoulas, a professor in AI at the University of Amsterdam. This process can also influence how events are interpreted. It can “strengthen a particular narrative about an event – for example, making a protest appear more violent, making a crowd appear larger, making facial expressions more intense.” A comparable case occurred after Iranian strikes targeted the area near Erbil airport in Iraq on March 1.

Social media users widely shared an image showing a massive blaze rising into the sky. AI detection tools again identified the use of Google AI in the image through SynthID. However, the image was not entirely fabricated. An original version of the photograph showed the same location but with a much smaller fire, a thinner smoke column and less intense colours. The enhanced version amplified the scale and visual impact of the scene.

Specialists caution that the difference between simple enhancement and full image generation is often minimal. “Even little changes can end up telling a very different story,” said James O’Brien, a professor of computer science at the University of California, Berkeley, and “could change the perception of events”. Generative artificial intelligence can also introduce elements that were never present in the original image.

Kanoulas said AI systems may sometimes “hallucinate” details that did not exist in the source material. A similar situation occurred after the shooting of Alex Pretti by federal immigration agents in the US city of Minneapolis in January. An AI-enhanced image of the incident circulated widely online. The altered picture was based on a frame taken from a real video showing Pretti falling to his knees while officers stood nearby, one holding a gun to his head.

In the original low-quality frame, the object Pretti held was a phone; in the AI-enhanced version, some social media users misread it as a weapon. Experts say the spread of AI-enhanced visuals during the war triggered by the US-Israeli attacks on Iran is increasingly undermining public trust in images shared online. Without clear labelling, altered visuals make it harder for audiences to distinguish authentic documentation from manipulated imagery.

O’Brien warned that this type of content is already having “a huge impact on people and their ability to trust the truth”. Kanoulas echoed the concern, saying, “People start doubting authentic images as well.”


Original Source: Firstpost | Author: Fp News Desk | Published: March 10, 2026, 3:35 am
