News

Experts weigh in on growing discourse surrounding A.I. misinformation amidst Israel-Hamas war

With the rise of A.I. misinformation campaigns, journalists and social media users might have to update their media literacy skills.

As the Israel-Hamas war persists, it’s become more common for social media users to encounter unverified, A.I.-generated imagery used to fuel misinformation campaigns.

The development of A.I.-powered text-to-image generators has allowed misinformation to spread through images that do not represent reality. With generators like DALL-E receiving regular updates, generated content has already begun shaping the way modern audiences consume news.

“We’ve had so many people who are unwilling to enhance their trade and we can’t be in that situation anymore for the good of journalism,” said Ernest Kung, the A.I. product manager for the Associated Press. 

On Nov. 1, Kung shared his experience working in busy newsrooms with Concordia’s journalism department. During his talk, he presented multiple ways journalists could use A.I. tools to ease their day-to-day routines.

Although Kung believes the implementation of A.I. is inevitable, he understands that the unregulated nature of certain generators causes more harm than good. Whether for profit or as part of misinformation campaigns, ill-intentioned actors can now reshape the narrative of entire conflicts with a few clicks.

“It is a cat and mouse game,” Kung said. “Someone’s always going to build a better A.I. tool to create fake imagery and someone’s always going to create a better image detection system.”

Nevertheless, Kung encouraged social media users and journalists alike to familiarize themselves with A.I. to avoid being blindsided by fake content online in the future. 

Experts approach media literacy around A.I.-generated content differently. Tristan Glatard, an associate professor in computer science and software engineering at Concordia, believes the solution lies with individuals, who must identify inconsistencies and check the sources behind suspected A.I. imagery.

“I don’t think the solution is technical. It should be general education of citizens on how to detect fake imagery,” Glatard said. “Check your sources, double check, you know? It should be education on how to consume the news, not how to detect the images.”

Glatard suggested social media users can look for topological mistakes within suspected images: noticeable inconsistencies such as warped body parts or objects. He also recommended A.I. image detectors, which he said have improved alongside generators.

Some social media platforms have already implemented methods to flag misinformation, such as X’s community notes or Instagram’s content labeling. 

Liam Maloney, a photojournalist and professor in Concordia’s journalism program, suggested a different approach to identifying fake images.

“There are still some telltale signs, but by and large the A.I. have gotten extremely good at faces,” Maloney said. “Even images that I made previously, when I look at them now, they seem hopelessly primitive.”

An early adopter of A.I. generators, Maloney believes newer models are no longer bound to small datasets, making generated imagery harder to identify. He said early generated content was often limited to imagery from the public domain, such as iconic pictures of past conflicts.


Maloney acknowledged the method of identifying topological mistakes in imagery but said newer models will correct them in the future. Instead, he recommended two methods he believes are more effective.

The first, geolocation, requires the verifying party to analyze features of a photograph and correlate them with satellite imagery, for example by comparing the shapes of buildings to corresponding historical imagery. The second, chronolocation, requires users to determine the time of day depicted in the picture and then correlate it with other visual cues, such as the shadows cast or the sun’s angle.
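The chronolocation check described above can even be approximated numerically: given a photo’s claimed time and place, low-precision solar formulas predict how high the sun should be and how long shadows should run. The sketch below is illustrative only, not a tool the experts quoted here endorse; the function names are hypothetical, and the formulas are coarse approximations (they ignore the equation of time), which is still enough to flag a grossly inconsistent image.

```python
import math
from datetime import datetime, timezone

def solar_position(lat_deg, lon_deg, when_utc):
    """Approximate solar elevation and azimuth (degrees) at a place and UTC time.

    Low-precision formulas (Cooper's declination, no equation-of-time
    correction), so expect errors of a degree or two.
    """
    day = when_utc.timetuple().tm_yday
    frac_hour = when_utc.hour + when_utc.minute / 60.0
    # Solar declination: the sun's "latitude," varying +/-23.45 deg over the year.
    decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day) / 365.0))
    # Crude local solar time from longitude (15 deg of longitude per hour).
    solar_time = frac_hour + lon_deg / 15.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))
    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    # Azimuth measured from south, then shifted so 0 = north, 180 = south.
    az = math.atan2(math.sin(hour_angle),
                    math.cos(hour_angle) * math.sin(lat)
                    - math.tan(decl) * math.cos(lat))
    return math.degrees(elev), (math.degrees(az) + 180.0) % 360.0

def shadow_length_ratio(elev_deg):
    """Length of a vertical object's shadow relative to its height."""
    return 1.0 / math.tan(math.radians(elev_deg))
```

For instance, near the equator around noon on an equinox the sun sits almost directly overhead, so a genuine photo taken then should show buildings casting nearly no shadow; long shadows in an image claiming that time and place would be a red flag worth investigating further.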

Both Maloney and Glatard said they’ve encountered generated content linked to the Israel-Palestine conflict, which they believe was shared primarily to spread misinformation.

Maloney, who’ll be introducing a class focused on A.I. and journalism next semester, said the balance between both fields would grow harder to maintain as time passes and generators become more sophisticated. “By the time I start teaching, the material that I’m using would be outdated,” he said.

Opinions

Fact-averse journalism is not journalism

For all pandemic news, journalists must base themselves on fact, not opinion.

According to the Canadian Association of Journalists’ ethics guidelines, journalists should not make assertions in their pieces. An assertion is a declaration used to express one’s personal beliefs, opinions, and feelings. Even if an assertion bears some truth, it is not a factual statement.

Because assertions may hold some factual integrity, they are sometimes hard to distinguish from facts. For this reason, social commentators who masquerade as journalists pose a threat to public safety, especially during the pandemic. Journalists should therefore separate opinion from fact; if they cannot, they should acknowledge how their views affect their ability to report accurately.

According to Statistics Canada, 90 per cent of Canadians relied on the internet for up-to-date information about COVID-19. This group mostly consulted online news sites, but they also consulted social media posts from news outlets, influencers, and other users. Furthermore, 53 per cent of Canadians have shared information about COVID-19 on social media without verifying its accuracy.

Based on these numbers, many Canadians do not take the time to fact-check the information they read. So, for the benefit of public health, journalists need to commit to the truth.

One media outlet that blurs the line between assertion and fact is Rebel News. This right-leaning outlet pairs factual information with misinformation, or at the very least omits information to bolster the credibility of its claims. For example, this October a Rebel News journalist reported on the effectiveness of natural immunity at preventing COVID-19, arguing that it is a more effective way to fight the disease than Pfizer’s vaccine. To support the argument, they cited an Israeli study that reached the same conclusion. However, that study has not been peer-reviewed.

Once someone gets the virus and recovers, their immune system retains some memory of the virus. This means that their body has a blueprint for how to combat the virus in the future.

The Centers for Disease Control and Prevention (CDC) published a peer-reviewed study in November comparing the effectiveness of natural immunity with vaccination-induced immunity. It found that natural immunity does help stave off future infections, but it is not as reliable as immunity gained from vaccination.

These researchers also explained that the Israeli study analyzed the benefit of Pfizer vaccinations six months after injection. This time gap may have skewed the results, because the immunity conferred by the mRNA vaccines may have waned by then.

The study also found that in some cases, natural immunity can help protect someone from COVID-19.

However, to become naturally immune to COVID-19, one needs to get the disease. So, it becomes a public health concern when journalists encourage people to get the disease or imply that all of our bodies can protect us from it.

According to the Public Health Agency of Canada, unvaccinated people are more likely to contract COVID-19. Since December 2020, there have been 837,239 reported COVID-19 cases. Of this group, 82 per cent were unvaccinated. Further, unvaccinated people accounted for 77 per cent of COVID-related deaths.

Misinformed health journalism becomes dangerous when you consider the death toll of COVID-19. This is especially serious because many people do not have time to fact-check every piece they read.

In my opinion, misinformation also pushes people to fear the COVID-19 vaccine. A Canadian study examined a randomized sample of 3,915 tweets from Canadians expressing anti-vaccination sentiments and found that 48 per cent included worries about vaccine safety. Paired with the consumption of misinformation, this fear may encourage more people to expose themselves to COVID-19.

When it comes to health news, journalists have an imperative to consult and disseminate factual information. Those who assume this role cannot cherry-pick information to reinforce a political stance. They must investigate and accurately explain vaccine safety. Without this commitment, so-called journalists let Canadians down.


Feature graphic by Madeline Schmidt
