Experts weigh in on growing discourse surrounding A.I. misinformation amidst Israel-Hamas war

With the rise of A.I. misinformation campaigns, journalists and social media users might have to update their media literacy skills.

As the Israel-Hamas war persists, it has become more common for social media users to encounter unverified, A.I.-generated imagery used to fuel misinformation campaigns.

The development of A.I.-trained text-to-image generators has enabled the spread of misinformation through images that do not represent reality. With A.I. image generators like DALL-E receiving regular updates, generated content has already begun shaping the way modern audiences consume news.

“We’ve had so many people who are unwilling to enhance their trade and we can’t be in that situation anymore for the good of journalism,” said Ernest Kung, the A.I. product manager for the Associated Press. 

On Nov. 1, Kung shared his experience working in busy newsrooms with Concordia’s journalism department. During his talk, he presented multiple ways in which journalists could use A.I. tools to ease their day-to-day work routine.

Although Kung believes the implementation of A.I. is inevitable, he understands that the unregulated nature of certain generators causes more harm than good. Whether for profit or as part of misinformation campaigns, ill-intentioned actors can now change the narrative of entire conflicts with the click of a mouse.

“It is a cat and mouse game,” Kung said. “Someone’s always going to build a better A.I. tool to create fake imagery and someone’s always going to create a better image detection system.”

Nevertheless, Kung encouraged social media users and journalists alike to familiarize themselves with A.I. to avoid being blindsided by fake content online in the future. 

Media literacy in detecting A.I.-generated content is approached differently by various experts. Tristan Glatard, an associate professor in computer science and software engineering at Concordia, believes the solution lies with individuals, who must identify inconsistencies and check the sources behind suspected A.I.-generated imagery.

“I don’t think the solution is technical. It should be general education of citizens on how to detect fake imagery,” Glatard said. “Check your sources, double check, you know? It should be education on how to consume the news, not how to detect the images.”

Glatard suggested social media users may attempt to locate topological mistakes within suspected images. These include noticeable inconsistencies such as warped body parts or objects. Glatard also recommended A.I. image detectors, which he claimed have improved alongside generators.

Some social media platforms have already implemented methods to flag misinformation, such as X’s community notes or Instagram’s content labeling. 

Liam Maloney, a photojournalist and professor in Concordia’s journalism program, suggested a different approach to identifying fake images.

“There are still some telltale signs, but by and large the A.I. have gotten extremely good at faces,” Maloney said. “Even images that I made previously, when I look at them now, they seem hopelessly primitive.”

An early adopter of A.I. generators, Maloney believes newer models are no longer bound to small data sets, making generated imagery harder to identify. He claimed early generated content was often limited to imagery from the public domain, such as iconic pictures of past conflicts.

Maloney acknowledged the method of identifying topological mistakes in imagery, but said newer models would correct them in the future. Instead, he recommended two methods he believes to be more effective.

The first, geolocation, requires the verifying party to analyze features of a photograph and correlate them with satellite imagery, such as comparing the shapes of buildings to corresponding historical imagery. The second, chronolocation, requires users to account for the time of day presented in the picture. Once identified, the verifier must correlate it with other elements of the image, such as the shadows cast or the sun’s angle.

Both Maloney and Glatard said they’ve encountered generated content linked to the Israel-Palestine conflict, which they believe was shared primarily to spread misinformation.

Maloney, who will be introducing a class focused on A.I. and journalism next semester, said the balance between the two fields will grow harder to maintain as generators become more sophisticated. “By the time I start teaching, the material that I’m using would be outdated,” he said.

Media literacy is the new alphabet: why everyone needs to know how to read the news

Disinformation circulating on social media can now be the difference between illness and health.

To the untrained eye, a video of Stella Immanuel, an American doctor, appears completely legitimate. Immanuel, while wearing her white coat and standing in front of the U.S. Supreme Court building, says she knows how to prevent further COVID-19 deaths. With a line of other people wearing white lab coats behind her, she assures viewers that the virus has a cure: hydroxychloroquine.

The claim spread quickly across social platforms, garnering millions of views after being shared by Donald Trump and one of his sons. Both Facebook and Twitter quickly removed the video for violating their misinformation policies, and the Centers for Disease Control and Prevention debunked the doctor’s claims. But for millions, the damage had already been done: the seed of misinformation had been sown.

Media literacy, or more specifically a lack thereof, could prove to be one of the biggest threats posed by social media. As displayed by viral claims that attempt to downplay the virus’s severity and unfounded theories for potential cures, the threat extends beyond journalism to society as a whole.

Facebook and other social media platforms have strengthened their misinformation policies in response to the pandemic and the 2020 U.S. presidential election. Twitter has implemented labels beneath tweets that present disputed election claims, warning viewers of the dispute. It has also begun removing some tweets with false information outright, as it did with the Immanuel video. Facebook has likewise started flagging posts as misleading or inaccurate, though its implementation has drawn mixed reactions.

The problem presented by what the World Health Organization has deemed an “infodemic” is obvious; the solution, on the other hand, remains in question. While the steps taken by Twitter and Facebook are a good start, more needs to be done to help individuals struggling to navigate the modern media landscape. I believe media literacy courses should be required for all Canadians at the high school level, to reduce the spread of misinformation and improve social media as a news-sharing platform.

Per a Ryerson University study, 94 per cent of online Canadians use social media, and more than half of those users reported having come across some form of misinformation. A McGill University study found that the more a user relied on social media for news related to the pandemic, the more likely they were to defy public health guidelines; conversely, the more a person relied on traditional news media for pandemic information, the more likely they were to follow them. A similar study at Carleton University found that almost half of Canadians surveyed believe at least one coronavirus conspiracy theory, with more than 25 per cent believing the virus was engineered in China as a weapon.

There are media studies courses that focus on the influence that advertising, propaganda and even cinema can have on consumers. But in today’s digital ecosystem, it has become essential to understand why misinformation exists on social media and who benefits from it. Yet students are never taught how to use these platforms properly.

In April, the Canadian government invested $3 million to help fight virus-related misinformation. The money will be divided among several programs with the aim of “helping Canadians become more resilient and think critically.” As recently as late October, the federal government launched a program in collaboration with MediaSmarts to support Media Literacy Week in 2020, 2021, and 2022.

This plan, while well-intentioned, is reactive rather than proactive. Viewing misinformation related to the pandemic as a blip rather than the new normal is potentially very dangerous.

Last year in the U.S., a federal bill was introduced calling for a $20 million investment in media literacy education. Since then, 15 states have introduced media literacy bills that aim to make media literacy part of the required high school curriculum. Beyond more consistent and clear messaging from all levels of government, experts prescribe some level of required training for students. Right now, social media users are left to navigate these formative platforms without the proper equipment; they are placed in a sea of information without a life raft.

To remedy Canada’s problem with misinformation, it will be essential for students to be instructed in media literacy by the time they graduate from high school. This baseline education, coupled with the continued advocacy of groups such as MediaSmarts, would create a more educated media-consuming population. In the midst of this pandemic, it is media literacy, even more than epidemiology or politics, that could prove to be the greatest life-saver.


Feature graphic by @the.beta.lab
