Experts weigh in on growing discourse surrounding A.I. misinformation amidst Israel-Hamas war

Graphic by @the.beta.lab

With the rise of A.I. misinformation campaigns, journalists and social media users might have to update their media literacy skills.

As the Israel-Hamas war persists, social media users are increasingly encountering unverified, A.I.-generated imagery used to fuel misinformation campaigns.

The development of A.I.-trained text-to-image generators has allowed misinformation to spread through images that do not represent reality. With A.I. image generators like DALL-E receiving regular updates, generated content has already begun shaping the way modern audiences consume news.

“We’ve had so many people who are unwilling to enhance their trade and we can’t be in that situation anymore for the good of journalism,” said Ernest Kung, the A.I. product manager for the Associated Press. 

On Nov. 1, Kung spoke to Concordia’s journalism department about his experience working in busy newsrooms. During his talk, he presented multiple ways journalists could use A.I. tools to ease their day-to-day work routines.

Although Kung believes the implementation of A.I. is inevitable, he understands that the unregulated nature of certain generators causes more harm than good. Whether for profit or as part of misinformation campaigns, ill-intentioned actors can now change the narrative of entire conflicts with a few clicks of a mouse.

“It is a cat and mouse game,” Kung said. “Someone’s always going to build a better A.I. tool to create fake imagery and someone’s always going to create a better image detection system.”

Nevertheless, Kung encouraged social media users and journalists alike to familiarize themselves with A.I. to avoid being blindsided by fake content online in the future. 

Experts approach media literacy around detecting A.I.-generated content differently. Tristan Glatard, an associate professor in computer science and software engineering at Concordia, believes the solution lies in the hands of individuals, who should identify inconsistencies and check the sources behind suspected A.I. imagery.

“I don’t think the solution is technical. It should be general education of citizens on how to detect fake imagery,” Glatard said. “Check your sources, double check, you know? It should be education on how to consume the news, not how to detect the images.”

Glatard suggested social media users look for topological mistakes within suspected images, meaning noticeable inconsistencies such as warped body parts or objects. Glatard also recommended A.I. image detectors, which he claimed have improved alongside generators.
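
For readers who want to try the automated route, a minimal sketch is shown below. It assumes the Hugging Face transformers library and uses a placeholder model name standing in for an actual A.I.-image detector; it illustrates the general approach rather than any tool the experts endorsed.

```python
# Minimal sketch: run an off-the-shelf image classifier as an A.I.-image detector.
# Assumes `pip install transformers pillow torch`; the model id below is a
# placeholder and would need to be replaced with a real detector model.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="some-org/ai-image-detector",  # hypothetical model id
)

results = detector("suspect_image.jpg")  # local path or URL of the image in question
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```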

Some social media platforms have already implemented methods to flag misinformation, such as X’s Community Notes or Instagram’s content labeling.

Liam Maloney, a photojournalist and professor in Concordia’s journalism program, suggested a different approach to identifying fake images.

“There are still some telltale signs, but by and large the A.I. have gotten extremely good at faces,” Maloney said. “Even images that I made previously, when I look at them now, they seem hopelessly primitive.”

An early adopter of A.I. generators, Maloney believes newer models are no longer bound to small sets of data, making generated imagery harder to identify. He claimed early generated content was often limited to imagery from the public domain, such as iconic pictures of past conflicts.


Maloney acknowledged the method of identifying topological mistakes in imagery but said newer models would correct them in the future. Instead, he recommended two methods he believes are more effective.

The first, geolocation, requires the verifying party to analyze features of a photograph and correlate them with satellite imagery, for example by comparing the shapes of buildings against corresponding historical imagery. The second, chronolocation, requires users to account for the time of day a picture appears to show, then check that against other details in the frame, such as the shadows cast or the sun’s angle.
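
As a rough illustration of the chronolocation check Maloney describes, the Python sketch below estimates the sun’s elevation and azimuth for a claimed place and time using a standard low-precision solar-position approximation. The coordinates and date are invented examples, and real verification would also account for weather, terrain, and camera distortion.

```python
# Rough chronolocation sketch: estimate where the sun was for a claimed time and
# place, so it can be compared against shadows visible in a suspect photo.
# Uses a low-precision solar-position approximation; no external libraries.
import math
from datetime import datetime, timezone

def solar_position(lat_deg, lon_deg, when_utc):
    """Return approximate solar (elevation, azimuth) in degrees; azimuth measured from north."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (when_utc - j2000).total_seconds() / 86400.0  # days since J2000.0

    g = math.radians((357.529 + 0.98560028 * d) % 360)       # sun's mean anomaly
    q = (280.459 + 0.98564736 * d) % 360                     # sun's mean longitude
    lam = math.radians((q + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g)) % 360)
    eps = math.radians(23.439 - 0.00000036 * d)              # obliquity of the ecliptic

    ra = math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))   # right ascension
    dec = math.asin(math.sin(eps) * math.sin(lam))                  # declination

    gmst = (18.697374558 + 24.06570982441908 * d) % 24       # sidereal time, hours
    ha = math.radians((gmst * 15 + lon_deg - math.degrees(ra)) % 360)  # local hour angle

    lat = math.radians(lat_deg)
    elevation = math.asin(math.sin(lat) * math.sin(dec)
                          + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az_south = math.atan2(math.sin(ha),
                          math.cos(ha) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    azimuth = (math.degrees(az_south) + 180) % 360           # convert to degrees from north
    return math.degrees(elevation), azimuth

# Hypothetical check: a photo claimed to show Gaza City at 12:00 UTC on Oct. 20, 2023.
elev, az = solar_position(31.5, 34.45, datetime(2023, 10, 20, 12, 0, tzinfo=timezone.utc))
print(f"sun elevation {elev:.1f} deg, azimuth {az:.1f} deg")
# Shadows should fall roughly opposite the azimuth, and a higher sun means shorter shadows;
# if the photo's shadows disagree, the claimed time or place deserves scrutiny.
```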

Both Maloney and Glatard said they’ve encountered generated content linked to the Israel-Palestine conflict, which they believe was shared primarily to spread misinformation.

Maloney, who will be introducing a class focused on A.I. and journalism next semester, said the balance between the two fields will grow harder to maintain as time passes and generators become more sophisticated. “By the time I start teaching, the material that I’m using would be outdated,” he said.
