Artificial Intelligence as an agent of change

AI and human rights forum generates global discussions

On April 5, the Montreal Institute for Genocide and Human Rights Studies (MIGS) hosted the Human Rights and Artificial Intelligence Forum in Concordia’s 4th Space.

“Because we’ve done some work with Global Affairs Canada, the Dutch Foreign Ministry, and worked directly with different companies, we thought ‘let’s try to get a discussion going,’” said Kyle Matthews, MIGS’s executive director, about the event. Panelists from across the globe, some of whom joined remotely via Skype, shared their expertise on how Artificial Intelligence (AI) technology affects human rights in a range of contexts.

“I’m happy we’ve generated discussions, that we’re connecting students and researchers of Concordia to practitioners in private sectors and in government,” said Matthews. “MIGS works on cutting-edge issues with human rights and global affairs. We see, because Montreal is becoming the AI centre of the world, that there’s a unique opportunity for us to play a part in elevating the human rights discussion on a whole set of issues and conflicts.”

The Human Rights and AI Forum was held on April 5 at Concordia’s 4th Space. Photo by Hannah Ewen.

Troll Patrol: fighting abuse against women on Twitter

From London, Tanya O’Carroll, director of Amnesty Tech at Amnesty International, spoke about how AI and crowdsourcing are being used in human rights research and enforcement.

Amnesty Tech’s Troll Patrol was a language-decoding program that filtered hate speech directed at female journalists and politicians on Twitter. The AI found instances of sexism, racism, homophobia, Islamophobia and more, with the majority aimed at women in minority groups.

The AI worked in tandem with volunteer human decoders, who O’Carroll said are an important part of the loop. O’Carroll explained that the issue isn’t that Twitter lacks an abuse policy; it has one, called “The Twitter Rules.” The issue is that it doesn’t have enough moderators, which O’Carroll called a “business decision.”

The AI correctly identified only 52 per cent of abusive content on Twitter. O’Carroll acknowledged that, while this isn’t perfect, it’s valuable for challenging the data and bringing change to human rights issues on a large scale.
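As a rough illustration of the kind of tooling involved, the sketch below shows a toy text classifier in the same spirit: a handful of human-annotated tweets train a model that scores new tweets for abuse. This is not Amnesty Tech’s actual Troll Patrol pipeline; the example tweets, labels and model choice are invented for illustration.

```python
# A minimal sketch of an abusive-tweet classifier, NOT Amnesty's actual pipeline.
# The tweets and 'labels' below are hypothetical stand-ins for the annotations
# that volunteer human decoders would produce.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "You should not be allowed to vote, go back to the kitchen",
    "Great reporting on the election results tonight",
    "Nobody wants to hear from someone like you, shut up",
    "Thanks for covering this story so thoroughly",
]
labels = [1, 0, 1, 0]  # 1 = abusive, 0 = not abusive (invented labels)

# TF-IDF features plus logistic regression: a common, simple baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# The model outputs a probability that a new tweet is abusive; in practice a
# human reviewer stays "in the loop" to check borderline or disputed cases.
print(model.predict_proba(["go back to where you came from"])[0][1])
```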

Emerging technologies in the public sector with a human-centric approach

During Enzo Maria Le Fevre Cervini’s panel, the major topic was governance. Le Fevre Cervini works with emerging technologies and international relations for the Agency for Digital Italy.

Le Fevre Cervini said the fourth industrial revolution, driven by AI, is based on data gathered from the public sector, which emphasizes the need to focus on both the quality and the quantity of that data. The ethical dimensions should be less about the technology and more about its product: AI needs to be reassessed as a technology that can play a pivotal role in bridging the gap between parts of society.

Prometea, an AI program, quickly processes legal complaints at the District Attorney’s office in Buenos Aires, Argentina. Complaints are compared to similar past cases and, depending on the results, the accused is either assigned a judicial hearing or not. Using the DA’s computer system alone, it could take someone 30 minutes to get through 15 documents; with Prometea, all documents in the system are processed in two minutes.
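A minimal sketch of how that kind of case matching can work is shown below, assuming a Prometea-style workflow in which an incoming complaint is ranked against past cases by text similarity. The case texts, library choice and routing step are hypothetical and do not describe the actual Prometea system.

```python
# A hypothetical sketch of matching a new complaint to similar past cases by
# text similarity; the case texts below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "Complaint regarding unpaid wages at a construction company",
    "Dispute over rental contract termination and deposit",
    "Allegation of fraud in an online sale of electronics",
]
new_complaint = ["Buyer alleges fraud after paying for a phone that never arrived"]

vectorizer = TfidfVectorizer()
case_matrix = vectorizer.fit_transform(past_cases)
query = vectorizer.transform(new_complaint)

# Cosine similarity ranks past cases by how closely they resemble the new one;
# the closest match could then suggest how the complaint should be routed.
scores = cosine_similarity(query, case_matrix)[0]
best = scores.argmax()
print(past_cases[best], scores[best])
```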

“Technology is a major agent of change,” said Le Fevre Cervini, which is why he hopes governance of AI will change to allow the opportunity for technology to be more human-centred and widely available.

The series of panels was organized by the Montreal Institute for Genocide and Human Rights Studies (MIGS). Photo by Hannah Ewen.

Ethics and AI

“There’s an assumption that AI will be smarter than humans, but they’re just good at narrow tasks,” said Mirka Snyder Caron, an associate at the Montreal AI Ethics Institute.

During her panel, Snyder Caron spoke about behaviour nudging, such as the little suggested-reply boxes at the bottom of an email in your Gmail account. While it may be “terribly convenient,” you’re just recycling what you’ve already done: the prompts are based on general replies and your previous emails.

Snyder Caron emphasized that it’s important to remember that AI systems are still just machines that “can be fooled” or “experience confusion.” She gave the example of an AI system that was unable to identify a stop sign covered in graffiti, or one with squares concealing part of the word, and so didn’t stop.

“Machine learning can adopt status quo based on patterns and classifications because of biases,” said Snyder Caron. To avoid problems such as discrimination, there needs to be increased diversity at the beginning of the AI process. For example, having a diversity of people inputting data could remove a layer of biases.

Bias, feminism and the campaign to stop killer robots

Erin Hunt, a humanitarian disarmament expert and program manager at Mines Action Canada, spoke about the darker side of AI—the dangers, in particular, of autonomous weapons.

With regard to autonomous weapons, aka Killer Robots, Hunt asked: “How are we sure they won’t distinguish atypical behavior?” Because such weapons sometimes can’t distinguish between civilians and combatants, they don’t conform to human rights laws.

Hunt spoke about how biases lead to mistakes, citing a study of AI identification in which 34.7 per cent of dark-skinned women were identified as men. Some AI systems could also target people who shouldn’t be targeted, such as people with disabilities. For example, in some regions of the world, people who don’t have access to prosthetic limbs use wood or metal as substitutes; an AI could mistake such a limb for a rifle, thus failing at its job.

Technical difficulties with Skype during the panel further reinforced Hunt’s point: if we can’t get a simple call from Ottawa to go through, we shouldn’t have autonomous weapons.

Zachary Devereaux (pictured) is the director of public sector services at Nexalogy. Photo by Hannah Ewen.

AI and disinformation campaigns

Zachary Devereaux, director of public sector services at Nexalogy, said there are two ways to train AI: supervised, which “requires human annotated data that the machine can extrapolate from to do the same types of judgement itself,” and unsupervised machine learning, where machines autonomously decide what judgement is necessary.
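The sketch below illustrates that distinction with toy data: a supervised model learns from human-provided labels, while an unsupervised one groups the same points without any labels. The data and model choices are invented for illustration and are not drawn from Devereaux’s talk.

```python
# A toy contrast between supervised and unsupervised learning; the data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])

# Supervised: human-annotated labels tell the model what judgement to make,
# and it extrapolates that judgement to new points.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.15]]))  # predicts the class of an unseen point

# Unsupervised: no labels are given; the algorithm decides on its own how to
# group the data (here, into two clusters).
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)
```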

“Once you see a suggestion from AI as to what you should reply on your email, or once you see a suggestion from AI on how you should complete your sentence, you can’t unsee it,” said Devereaux.

“As humans, we’re so intellectually lazy—automated processes: we love them and we accept them,” said Devereaux. But because of this, the behaviour nudging Snyder Caron spoke about becomes cyclical, such as with Spotify and Google Home. “It’s our feedback to these systems that’s training AI to be smarter.”

AI and the rules-based international order

“Artificial intelligence should be grounded in human rights,” said Tara Denham, director of the Democracy Unit at Global Affairs Canada.

Denham acknowledged that AI makes mistakes, which can reinforce discriminatory practices. An important question, she said, is how AI is already affecting biases and how those biases will shape the future, seeing as “the future is evolving at an incredibly fast pace.” One challenge is the use of systems that amplify discriminatory practices, especially in developing countries that might not have the ability to work around them, according to Denham.

“When talking about ethics, they cannot be negotiated on an international level,” said Denham. Each country has its own ethics framework, which may not be accepted or practiced elsewhere. For that reason, it’s important to have a common language and shared concepts to advance negotiations about human rights globally.

Feature photo by Hannah Ewen
