Truth is no algorithmic matter

Technology is no better than the next guy when it comes to solving age-old human dilemmas

Meredith Broussard sits calmly at her desk. Behind her on a bookshelf is a copy of her latest book, Artificial Unintelligence, the topic of her latest Zoom talk.

“The people who decided to use an algorithm to decide grades were guilty of ‘technochauvinism,’” she says, with a cool and collected tone that belies the gravity of her research. She’s referring to the infamous decision to assign artificial scores for a decisive IB exam using an algorithm that looked at students’ pre-pandemic performance as well as their schools’ rankings over previous years.

Technochauvinism is defined by the presumption that technology-based solutions are superior to human or social ones. This is a central concept to keep in mind when thinking about algorithms and their biases, which — although not always self-evident — sometimes have very tangible consequences.

And these consequences may be more serious than not scoring an A on a final test. With Broussard’s words still ringing in my ears, I stumbled upon an article exposing bias in algorithms used in American hospitals to prioritize access to chronic kidney disease care and kidney transplants. A study had found that the algorithm negatively discriminated against Black patients. It notably interpreted a person’s race as a physiological category instead of a social one — a design decision vehemently disputed by numerous medical studies.

The use of decision-making algorithms has become something of a norm — they can be found anywhere, from the military to newsrooms to, most evidently, social media. They have found a purpose in making predictions, determining what is true, or at least likely enough, and prescribing consequent actions. But in doing so, algorithms tacitly tackle some of our greatest dilemmas around truth, and they do so under the cover of a supposedly objective machine. As the kidney care algorithm clearly demonstrates, their interpretations are not an exact science.

Nonetheless, there is a tendency among humans, especially in the tech sector, to assume technology’s capacities are superior to those of human brains. And in many ways, machines do outperform homo sapiens. Decision-making algorithms can be extraordinary tools that help us accomplish tasks faster and at a greater scope. In newsrooms, for instance, they are more efficient and accurate at producing financial and earnings reports. This is one of the promises of GPT-3, the latest language-generating bot, capable of producing human-like, if repetitive, text. This could significantly alleviate journalists’ workloads and spare them boring tasks.

What an algorithm should not do, however, is universally solve complex philosophical and ethical dilemmas, which humans themselves struggle to define, such as the matter of truth.

The case of the kidney care algorithm clearly illustrates how the ‘truth’ — about who is a priority — can carry a clear distortion, embedded in the algorithm’s architecture. It also shows how what we hold to be true is subject to change: open to debate and additional information that can readjust and refine its meaning, from one that is biased and scientifically inaccurate to a ‘truer’ form that more faithfully reflects social realities.

The problem is perhaps not so much that the technology is imperfect, but rather that it is thought of and presented as something finite, which in turn leads us to be less vigilant of its blind spots and shortcomings. The risk is that the algorithmically prepared ‘truth’ is consumed as an absolute and unbiased one.

Scholars Bill Kovach and Tom Rosenstiel help us to think of truth as a “sorting-out process,” which results from the interactions between all stakeholders. The result does not represent an absolute truth — which, although it sounds compelling and elegant, may not ever be possible, for humans or machines. Rather, the sorting out process aims to paint a less incorrect picture.

Truth is the product of an ongoing conversation and this conversation should not take place solely within tech companies’ meeting rooms. It requires questioning and debate which cannot happen if one-sided interpretations are embedded in algorithms, dissimulated, and tucked away from the public space.

One simple way to ensure algorithms work for the benefit of human beings is to ensure more transparency about their design. In 2017, a Pew Research Center report on the matter had already called for increased algorithmic literacy, transparency and oversight. Last December, a British governmental report reiterated that proposition.

In the case of kidney care, as with the IB test scores, the algorithms have been actively contested and their uses revoked or appropriately adjusted. They have sparked a conversation about fairness and social justice that brings us closer to a better, more accurate version of truth.


Graphic by @the.beta.lab


Algorithm editors and what they mean

What would journalism be without editors? Well, in my opinion, it would be pretty chaotic.

Editors are the backbone of journalism — take them out of the equation and you are setting loose a tsunami of fake news and badly written, poorly researched stories. In short, total amateurism.

But, what do editors actually do?

According to Amelia Pisapia, journalist and former editorial director of Novel, editors are talented problem solvers who excel at putting information in context, assessing the accuracy of data and weeding out bias.

“They view issues from multiple angles, connect the dots and uncover human stories in complex systems,” writes Pisapia.

Pisapia adds that editors work within established ethical frameworks. She says that all editors have five values in common: accuracy, independence, impartiality, humanity and accountability.

However, in recent years editors have started to quite literally lose some of their humanity. With developments in technology and artificial intelligence, more and more media and news distributing platforms have started to use algorithms as editors instead of actual humans.

A good example is the algorithm behind the news feed on Facebook. Tobias Rose-Stockwell, a strategist, designer and journalist, wrote in his article for Quartz, “[Facebook’s algorithm] shows you stories, tracks your responses, and filters out the ones that you are least likely to respond to. It is mapping your brain, seeking patterns of engagement.”
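In rough code terms, the engagement-driven ranking Rose-Stockwell describes might look something like the sketch below. To be clear, Facebook’s actual system is proprietary and vastly more complex; every story, score and threshold here is an invented assumption used only to illustrate the logic.

```python
# Hypothetical sketch of engagement-based feed ranking (illustration only;
# all stories, scores and the cutoff are invented assumptions).

stories = [
    {"title": "Local election results",   "predicted_engagement": 0.08},
    {"title": "Friend's vacation photos", "predicted_engagement": 0.61},
    {"title": "Outrage-bait headline",    "predicted_engagement": 0.93},
]

# Rank stories by how likely the user is to respond to them, and drop the
# ones they are least likely to engage with.
ranked = sorted(stories, key=lambda s: s["predicted_engagement"], reverse=True)
feed = [s for s in ranked if s["predicted_engagement"] > 0.10]

for story in feed:
    print(story["title"])
```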

Sounds great, doesn’t it? Having only quality news that you are interested in delivered right to your doorstep without having to move a muscle.

Well, if it sounds too good to be true, that’s because it is. Algorithms are actually very far from being the perfect editors we hope them to be. They have massive flaws and can be genuinely dangerous.

Don’t misunderstand me, algorithm editors have some good sides. They do surpass humans on some points — in their conduct as editors, for example.

In his article, “Can an Algorithm be an Editor?,” José Moreno, former multimedia director at Motorpress Lisboa explains that an algorithm has the silver lining of always acting the same way.

“Human editors always act differently on the basis of a common code,” Moreno says. “In a way, there is more accuracy and reliability in a ‘system’ that always performs a function in the same way than in a ‘system’ that always performs differently.”

So, yes, algorithms have some upsides; Professor Pablo Boczkowski from Northwestern University even called Facebook’s algorithm “the greatest editor in the history of humanity.”

But unfortunately, despite their virtues, any positive aspects that algorithms may present are heavily outweighed by their negative counterparts.

The study The Editor vs. the Algorithm: Targeting, Data and Externalities in Online News, conducted by professors from several universities, compared different aspects of AI and human editors. The researchers discovered an alarming number of problems with algorithmic editors; for example, algorithms tend to serve readers a less diverse mix of news. They create a “bubble” effect, as readers are presented with a narrower set of topics. One example the study presented concerned readers living in German states with a high share of votes for extreme political parties: in the last election, those readers were more likely to increase their consumption of political stories when their stories were selected by algorithms.

Another flaw with algorithms is their lack of social awareness; every calculation they make is based on individual-level data. Algorithms don’t take into account “socially optimal reading behaviour,” according to the study.

“It doesn’t differentiate between factual information and things that merely look like facts,” said Rose-Stockwell, referring to the Facebook example above. “It doesn’t identify content that is profoundly biased, or stories that are designed to propagate fear, mistrust, or outrage.”

The worst part in all of this is that algorithms have even started to change the way some human editors think, as well as the behaviour of some news organizations. We have entered a traffic-at-all-costs mentality: news outlets are now driven by numbers, clicks and views rather than by journalistic values.

Despite all their flaws, regrettably, algorithm editors are still here and due to humans’ lust for technology and artificial intelligence, they are probably going to stay and even multiply.

But why should algorithm editors be set in opposition to human editors? Why should it be human versus machine?

The solution is easy: use a mix of both. The researchers from the study mentioned above concluded that “the optimal strategy for a news outlet seems to be to employ a combination of the algorithm and the human to maximize user engagement.”

In the digital age that we currently live in, machines will continue to take over more and more aspects of life. However, humans are more relevant than ever because these machines aren’t always optimal. So, in the end having a symbiosis between humans and machines is actually a comforting thought. It is the promise of a better tomorrow where machines will help humans and not supplant them.

Graphic by @sundaeghost


Graduate students explore the world of artificial intelligence and the early detection of anorexia

The paper focuses on the system’s efficiency in labelling early signs of anorexia.

Concordia graduate students Elham Mohammadi and Hessan Amini developed a research paper explaining an algorithm that uses artificial intelligence to detect signs of anorexia on social media, presented at the Conference and Labs of the Evaluation Forum (CLEF) 2019 this September.

CLEF is a conference that has been running since 2000; the 2019 edition was held in Lugano, Switzerland. It aims to address a wide range of topics, primarily focusing on “the fields of multilingual and multimodal information access evaluation.” Mohammadi and Amini worked under the supervision of Concordia Professor in Computer Science and Software Engineering Leila Kosseim.

Social media platforms are a rich source of information for research studies because people use these outlets to share large amounts of data about their emotions, thoughts and everyday activities.

The research was based on a simulation scenario using past posts from social media. In an interview with The Concordian, Amini explained that there are a few reasons the study was focused on anorexia specifically.

“It wasn’t covered that much in literature,” he said. “Finding out the patterns requires a more complicated source of analyzing information.”

Their focus was on the early detection of the eating disorder.

“We don’t want to detect the risk after it has happened or after it has caused damage to the person,” Amini explained. “We want to detect that the person is showing signs of anorexia.”

The focus of the study was to test the algorithm. Amini clarified that their role is not to diagnose or analyze the data; the study is about the system’s efficiency in labelling these signs. The flagged data can then be sent to an expert for closer evaluation.

Sifting through over 2,000 social media posts would be tedious and time-consuming, so the researchers used a technique called an “attention mechanism.” It systematically filtered through the abundance of posts, using keywords to detect those that were the most important.
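To give a rough sense of how an attention mechanism can surface the most relevant posts, here is a minimal sketch. It is not the researchers’ actual model: the post embeddings, dimensions and scoring vector below are invented assumptions, and a real system would learn them from data.

```python
# Minimal, illustrative attention mechanism over a user's posts.
# All values are randomly generated stand-ins for learned quantities.
import numpy as np

rng = np.random.default_rng(0)

# Suppose each of a user's posts has already been encoded as a 64-dim vector.
num_posts, dim = 2000, 64
post_embeddings = rng.normal(size=(num_posts, dim))

# A (normally learned) scoring vector assigns each post a relevance score.
scoring_vector = rng.normal(size=dim)
scores = post_embeddings @ scoring_vector          # one score per post

# Softmax turns the scores into attention weights that sum to 1.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The user is summarized by a weighted average of their posts, so a
# downstream classifier can focus on the handful of posts that matter most.
user_representation = weights @ post_embeddings

# The highest-weighted posts are the ones an expert might review first.
top_posts = np.argsort(weights)[-5:][::-1]
print("Most heavily weighted posts:", top_posts)
```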

They had one data set that was already separated into users who showed signs of anorexia and those who did not, as well as another data set that was not categorized at all. Mohammadi and Amini analyzed the data to evaluate how well the system functioned; it must be noted, however, that dealing with personal data can raise ethical complications.

Mohammadi explained that when dealing with users’ data, some people might be hesitant to have their personal information analyzed. “People might not be comfortable with it,” he said.

In being able to detect certain patterns of anorexia on social media, more complex research topics arise. Although this is a good start, Amini explains that this research requires many experts sitting together and discussing solutions.

Amini notes that although people think artificial intelligence (AI) systems like this are set up to replace humans, the opposite is true.

“AI is going to be there to help humans,” he said. Amini explains that it will make the lives of psychologists and mental health practitioners easier.

Although this research is not the final solution, it can help bring awareness to those in need of mental attention and create a healthier society.


Graphic by Victoria Blair


Artificial Intelligence as an agent of change

AI and human rights forum generates global discussions

On April 5, the Montreal Institute for Genocide and Human Rights Studies (MIGS) hosted the Human Rights and Artificial Intelligence Forum in Concordia’s 4th Space.

“Because we’ve done some work with Global Affairs Canada, the Dutch Foreign Ministry, and worked directly with different companies, we thought ‘let’s try to get a discussion going,’” said Kyle Matthews, MIGS’s executive director, about the event. Panelists from across the globe, some of whom Skyped in remotely, convened to give their expertise on the use of Artificial Intelligence (AI) technology with regards to human rights in different scopes.

“I’m happy we’ve generated discussions, that we’re connecting students and researchers of Concordia to practitioners in private sectors and in government,” said Matthews. “MIGS works on cutting edge issues with human rights and global affairs. We see, because Montreal is becoming the AI centre of the world, that there’s a unique opportunity for us to play a part in elevating the human rights discussion on a whole set of issues and conflicts.”

The Human Rights and AI Forum was held on April 5 at Concordia’s 4th Space. Photo by Hannah Ewen.

Troll Patrol: fighting abuse against women on Twitter

From London, Tanya O’Carroll, director of Amnesty Tech at Amnesty International, spoke about the innovation of AI in researching and crowdsourcing to enforce human rights.

Amnesty Tech’s Troll Patrol was a language decoding program that filtered hate speech towards female journalists and politicians on Twitter. The AI found instances ranging from sexism and racism to homophobia and Islamophobia, with the majority aimed at women in minority groups.

The AI worked in tandem with volunteer human decoders, who O’Carroll said are an important part of the loop. O’Carroll explained that the issue isn’t that Twitter doesn’t have a terms of abuse policy—it does, and it’s called “The Twitter Rules.” The issue is that they don’t have enough moderators, which O’Carroll called their “business decision.”

The AI accurately predicted and identified only 52 per cent of abusive content on Twitter. O’Carroll acknowledged that, while this isn’t perfect, it’s valuable in challenging the data and bringing change to human rights issues on a large scale.

Emerging technologies in the public sector with a human-centric approach

During Enzo Maria Le Fevre Cervini’s panel, the major topic was governance. Le Fevre Cervini works with emerging technologies and international relations for the Agency for Digital Italy.

Le Fevre Cervini said the fourth revolution of AI is based on data gathered from the public sector, which emphasizes the need to focus on the quality and the quantity of data. The ethical dimensions should be less about the technology and more about its product—there needs to be a reassessment of AI as technology that can play a pivotal role in bridging the gap between parts of society.

Prometea, an AI software, quickly processes legal complaints at the DA’s office in Buenos Aires, Argentina. The complaints are compared to similar cases and the accused is either appointed a judicial hearing or not, according to the results. With just the DA computer system, it could take someone 30 minutes to get through 15 documents. With Prometea, all documents in the system are processed in two minutes.

“Technology is a major agent of change,” said Le Fevre Cervini, which is why he hopes governance of AI will change to allow the opportunity for technology to be more human-centred and widely available.

The series of panels was organized by the Montreal Institute for Genocide and Human Rights (MIGS). Photo by Hannah Ewen.

Ethics and AI

“There’s an assumption that AI will be smarter than humans, but they’re just good at narrow tasks,” said Mirka Snyder Caron, an associate at the Montreal AI Ethics Institute.

During her panel, Snyder Caron spoke about behaviour nudging, such as those little reply boxes at the bottom of an email on your Gmail account. While it may be easy, it’s “terribly convenient” because you’re just recycling what you’ve already done—the prompts are based on general replies and your previous emails.

Snyder Caron emphasized that it’s important to remember that AI systems are still just machines that “can be fooled” or “experience confusion.” She gave an example of an AI system that was unable to identify a stop sign covered in graffiti or one with squares concealing part of the word so it didn’t stop.

“Machine learning can adopt status quo based on patterns and classifications because of biases,” said Snyder Caron. To avoid problems such as discrimination, there needs to be increased diversity at the beginning of the AI process. For example, having a diversity of people inputting data could remove a layer of biases.

Bias, feminism and the campaign to stop killer robots

Erin Hunt, a humanitarian disarmament expert and program manager at Mines Action Canada, spoke about the darker side of AI—the dangers, in particular, of autonomous weapons.

With regards to autonomous weapons, aka Killer Robots, Hunt asked: “How are we sure they won’t distinguish atypical behavior?” Because they sometimes can’t distinguish between civilians and combatants, they don’t conform to human rights laws.

Hunt spoke about how biases lead to mistakes, and presented an example of a study of AI identification in which 34.7 per cent of dark-skinned women were identified as men. Some AI targets people who shouldn’t be targeted, such as people with disabilities. For example, there are regions of the world where people don’t have access to prosthetic limbs and use wood or metal as substitutes. These could be picked up by the AI as a rifle, meaning it has failed at its job.

Technical difficulties with Skype during the panel further enforced Hunt’s point that if we can’t get a simple call from Ottawa to go through, we shouldn’t have autonomous weapons.

Zachary Devereaux (pictured) is the director of public sector services at Nexalogy. Photo by Hannah Ewen.

AI and disinformation campaigns

Zachary Devereaux, director of public sector services at Nexalogy, said there are two ways to train AI: supervised, which “requires human annotated data that the machine can extrapolate from to do the same types of judgement itself,” and unsupervised machine learning, where machines autonomously decide what judgement is necessary.
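As a rough illustration of the distinction Devereaux draws, the sketch below trains a supervised model on human-labelled points and then lets an unsupervised algorithm group the same points without labels. The data set and the models are arbitrary assumptions chosen for clarity, not anything used by Nexalogy.

```python
# Toy contrast between supervised and unsupervised learning.
# The synthetic data and model choices are illustrative assumptions only.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# A small synthetic data set: 200 points in 2-D drawn from two groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: human-annotated labels y are provided, and the model learns
# to reproduce that judgement on new points.
clf = LogisticRegression().fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels are given; the algorithm groups the points on its
# own, and humans interpret the groups afterwards.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignments:", km.labels_[:5])
```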

“Once you see a suggestion from AI as to what you should reply on your email, or once you see a suggestion from AI on how you should complete your sentence, you can’t unsee it,” said Devereaux.

“As humans, we’re so intellectually lazy—automated processes: we love them and we accept them,” said Devereaux. But because of this, the behaviour nudging Snyder Caron spoke about becomes cyclical, such as with Spotify and Google Home. “It’s our feedback to these systems that’s training AI to be smarter.”

AI and the rules-based international order

“Artificial intelligence should be grounded in human rights,” said Tara Denham, director of the Democracy Unit at Global Affairs Canada.

Denham acknowledged that AI makes mistakes, which can enforce discriminatory practices. It is important to ask how AI is already impacting biases and how those biases will shape the future, seeing as “the future is evolving at an incredibly fast pace,” said Denham. One challenge is using systems that will amplify discriminatory practices, especially in growing countries that might not have the ability to work around them, according to Denham.

“When talking about ethics, they cannot be negotiated on an international level,” said Denham. Each country has its own ethics framework, which may not be accepted or practiced elsewhere. In this scope, it’s important to have a common language and concepts to advance negotiations about human rights globally.

Feature photo by Hannah Ewen


4th SPACE is as flexible and adjustable as a bento box

A multidisciplinary addition to Concordia’s downtown campus

Concordia University’s 4th SPACE will be carrying out programs encompassing a variety of topics, from avant-garde video games to open discussions about the integration of Indigenous cultures in artificial intelligence, during the upcoming months.

The explorative platform begins with a collaborative process between the school faculty and Concordia’s student associations, but it extends to more than a museum for school projects. A month after its official launch in January, 4th SPACE opened its interactive workshops to all passersby. The studio also features space for screenings and prototype installations presented by the university’s faculty members and students. Its schedule also offers roundtable events, usually held in the centre of the facility, that spark conversation between guest panelists and the audience.

“Our collaborators, who will be researchers and students, take up residency in the SPACE, then they will transform the venue using specialized furniture,” said Knowledge Broker Prem Sooriyakumar. Designed to be as flexible and adjustable as a bento box, the venue can shift from a traditional science lab to a stage for visual art performances. “The way we’ve conceptualized the 4th SPACE is meant to be an agile space, meaning it can transform itself to the topic we are exploring for that set period,” Sooriyakumar continued.

Coinciding with the 50th anniversary of the Sir George Williams Affair and Black History Month, the integrative studio has just hosted a commemoration of the Affair, Protests and Pedagogy.

On Jan. 31, the second evening of Protests and Pedagogy, Dorothy Williams’s workshop aimed to teach participants a card game she created. Williams is a historian and the author of The Road to Now: A History of Blacks in Montreal, the only book to study the history of Black Canadians from the New France era to 20th-century Montreal. Her game, The ABCs of Canadian Black History, is a familiar combination of classic bingo and the childhood trading card game Yu-Gi-Oh. Instead of anime monsters, these cards feature prominent Black Canadian figures and organizations such as the successful entrepreneur Wilson Ruffin Abbott and the Victoria Pioneer Rifles.

Following Protests and Pedagogy, the 4th SPACE will be hosting Landscape of Hope on Feb. 19 and 20. Photo by Mackenzie Lad.

Curated by Concordia’s Art Education professor, Vivek Venkatesh, and Communication Studies professor, Owen Chapman, Landscape of Hope is a two-day program in which the first part will be a workshop held at the 4th SPACE on Feb. 19. The workshop gives Concordia undergraduates and CEGEP students a space where they can voice their thoughts on racism and cyberbullying. The program will proceed with a visual and musical art performance led by the undergraduates and graduates of the university’s Communications Studies, Art Education, Music Therapy and Education departments on Feb. 20 starting at 5 p.m.

Affiliated with Concordia’s SOcial Media EducatiON Every day (SOMEONE) project and international touring festival Grimposium, Landscape of Hope aims to teach workshop participants and viewers digital resilience in relation to online hate speech.

Since 2016, Professor Venkatesh and the SOMEONE research team have garnered worldwide attention by sharing elementary to post-secondary students’ narratives on cyber racism through music, theatre and other art mediums. Their project, Landscape of Hope, demonstrated success at its official premiere in Norway last year.

On March 4, 4th SPACE will be housing Arcade 11 in collaboration with Technoculture, Art and Games Research Centre (TAG) and the Montreal Public Libraries Network. The arcade will feature experimental video games and “each game would have some kind of research component whether it was the technology involved, the experience or type of play,” said 4th SPACE coordinator, Douglas Moffat. Visitors will also have the opportunity to discuss these topics with the indie video game developers.

This event welcomes people of all ages; parents can mark this event in their to-do list of fun March break activities with their children. From retro arcade machines to a VR gaming experience, Arcade 11 is also the perfect opportunity for Concordia students to play and unwind after a study session for finals.

From March 18 to April 12, the studio’s planning team will carry out an exhibition centred on artificial intelligence. 4th SPACE will provide a platform for its visitors to reflect on the concept of Indigenous practices within AI. There will also be room for discussion about the hopes and fears surrounding this innovative technology that is frighteningly powerful and limitless.

Since the studio’s opening, many Montreal residents and university students have come to see the new topics 4th SPACE is exploring. Successfully mirroring Concordia’s dynamic and inclusive climate, what was once a dark and forgotten corner of the downtown campus has regained a pulse.

Protests and Pedagogy held its last event on Monday, Feb. 11: a presentation surfacing the traumas and silences of 1969’s Sir George Williams Affair and the reparative work done post-affair. For more information, visit 4th SPACE’s schedule of activities & events.


Four Montreal students take first place at HackHarvard

“HackHarvard was maybe my 10th hackathon,” said Nicolas MacBeth, a first-year software engineering student at Concordia. He and his friend Alex Shevchenko, also a first-year software engineering student, decided to make a name for themselves by frequenting as many hackathon competitions as they can. The pair have already participated in many hackathons over the last year, both together and separately. “I just went to one last weekend [called] BlocHacks, and I was a finalist at that,” said MacBeth.

The most notable of the pair’s achievements, alongside teammates Jay Abi-Saad and Ajay Patal, two students from McGill, is their team’s first-place ranking as ‘overall best’ in the HackHarvard Global 2018 competition on Oct. 19. According to MacBeth, while all hackathons are international competitions, “HackHarvard was probably the one that had the most people from different places than the United States.” The competition is sponsored by some of the largest transnational conglomerates in the tech industry, including Alibaba Cloud, a subsidiary of Alibaba Group, a multinational conglomerate specializing in e-commerce, retail, and Artificial Intelligence (AI) technology, as well as Zhejiang Lab, a Zhejiang provincial government-sponsored institute whose research focuses on big data and cloud computing.

MacBeth said he and Shevchenko sifted through events on the ‘North American Hackathons’ section of the Major League Hacking (MLH) website, the official student hacking league that supports over 200 competitions around the world, according to their website. “We’ve gone to a couple hackathons, me and Alex together,” said MacBeth. “And we told ourselves ‘Why not? Let’s apply. [HackHarvard] is one of the biggest hackathons.’ […] So we applied for all the ones in the US. We both got into HackHarvard, and so we went.”

Essentially, MacBeth, Shevchenko, Abi-Saad, and Patal spent 36 hours conceptualizing, designing, and coding their program called sober.AI. The web application uses AI in tandem with visual data input to “increase accuracy and accessibility, and to reduce bias and cost of a normal field sobriety test,” according to the program’s description on Devpost. “I read a statistic somewhere that only a certain amount of police officers have been trained to be able to detect people [under the influence],” said MacBeth. “Drunk, they can test because they have [breathalyzers], but high, it’s kind of hard for people to test.”

MacBeth explained that the user-friendly web application could be helpful in a range of situations, from trying to convince an inebriated friend not to drive under the influence, to law enforcement officials conducting roadside testing in a way that reduces bias, to employees, who may have to prove sobriety for work, to do so non-invasively.

Sober.AI estimates the overall percentage of sobriety through a series of tests that are relayed via visual data—either a photo of an individual’s face or a video of the individual performing a task—that is inputted into two neural networks designed by the team of students.

“We wanted to recreate a field sobriety test in a way that would be as accurate as how police officers do it,” said MacBeth.

The first stage is an eye exam, where a picture of an individual is fed to the first neural network, which gives an estimation of sobriety based on the droopiness of the eye, any glassy haze, redness, and whether the pupils are dilated. The second stage is a dexterity test where individuals have to touch their finger to their nose, and the third is a balance test where people have to stand on one leg. “At the end, we compile the results and [sober.AI] gives a percentage of how inebriated we think the person is,” said MacBeth.
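As a hedged sketch of how the three stage results might be compiled into one overall estimate, consider the snippet below. The weights, score values and function name are invented for illustration; the article does not describe how sober.AI actually aggregates its networks’ outputs.

```python
# Hypothetical aggregation of per-stage scores into an overall estimate.
# Weights and example scores are invented; sober.AI's real method may differ.

def combine_stages(eye_score, dexterity_score, balance_score,
                   weights=(0.5, 0.25, 0.25)):
    """Each score is a 0.0-1.0 probability of impairment from one stage;
    the return value is an overall impairment percentage."""
    stages = (eye_score, dexterity_score, balance_score)
    overall = sum(w * s for w, s in zip(weights, stages))
    return round(100 * overall, 1)

# Example: the eye network flags likely impairment, the physical tests less so.
print(combine_stages(eye_score=0.8, dexterity_score=0.4, balance_score=0.5))
# Prints 62.5, read as "an estimated 62.5 per cent likelihood of inebriation."
```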

“Basically, what you want to do with AI is recreate how a human would think,” explained MacBeth. AI programs become increasingly more accurate and efficient as more referential data is inputted into the neural networks. “The hardest part was probably finding data,” explained MacBeth. “Because writing on the internet ‘pictures of people high’ or ‘red eyes’ and stuff like that is kind of a pain.” MacBeth said that he took to his social media pages to crowdsource photos of his friends and acquaintances who were high, which provided some more data. However, MacBeth said his team made a name for themselves at the hackathon when they started going from group to group, asking their competitors to stand on one leg, as if they were sober, then again after spinning around in a circle ten times. “That was how we made our data,” said MacBeth. “It was long and hard.”

Participating in such a prestigious competition and having sober.AI win ‘overall best’ left MacBeth and Shevchenko thirsty for more. “HackHarvard had a lot more weight to it. We were on the international level, and just having the chance of being accepted into HackHarvard within the six or seven hundred students in all of North America that were accepted, I felt like we actually needed to give it our all and try to win—to represent Concordia, to represent Montreal.”

MacBeth and Shevchenko have gone their separate ways in terms of competitions for the time being; however, the pair’s collaborations are far from over. Both are planning to compete separately in ConUHacks IV at the end of January 2019, where MacBeth explained that they will team up with other software engineering students who have yet to compete in hackathons. “We’re gonna try to groom other people into becoming very good teammates,” said MacBeth.

The first-year software engineer concluded with some advice for fellow Concordia students. “For those in software engineering and even computer science: just go to hackathons,” advised MacBeth. “Even if you’re skilled, not skilled, want to learn, anything, you’re going to learn in those 24 hours, because you’re either gonna be with someone who knows, or you’re gonna learn on your own. Those are the skills you will use in the real world to bring any project to life.”

Feature photo courtesy of Nicolas MacBeth


Mapping the future of artificial intelligence

Panelists define AI and discuss how this technology will impact society and the workplace

Artificial intelligence (AI) professionals discussed the impact and future of AI in the workplace and its role in society at large during a panel held at Concordia University on March 13.

“The fear of technological anxiety and mass unemployment due to artificial intelligence has been largely proven to be untrue,” said panelist Kai Hsin-Hung, a consultant at the International Training Centre for the International Labour Organization. “Rather than eliminating occupations, AI will most likely replace the tasks and how we are going to be doing them.”

According to Abhishek Gupta, an AI ethics researcher at McGill University, many people don’t fully understand the term AI, and its definition “has been shifting over time.” Gupta defined AI as “the ability of a machine to do a task that was previously thought to be only possible by human intelligence.”

Caroline Bourbonnière, a communications advisor for the research institute Element AI, clarified that, while certain jobs will be replaced with AI, the purpose of converting this work to automatic operations is to allow workers to be more efficient. “All of futurists are wrong about how quickly AI will be affecting the job market,” she said. “We have a lot of reports, and it was found that job creations versus job-loss projections tended to have a very balancing effect.”

Certain dangerous jobs, such as tractor operators and miners, may eventually be replaced by AI technology, but Bourbonnière emphasized that this does not mean AI will replace all jobs. In particular, she discussed how AI technology will be responsible for completing paperwork in the future, which will allow workers to focus on tasks more central to their job.

“In some organizations, people will be spending about two hours a week putting together reports,” Bourbonnière said, offering the example of how “79 per cent of social workers’ work is paperwork. Imagine what they could do with this time. They can be spending it with youth at risk.”

An important subdivision of AI is machine learning, Gupta explained. This refers to a digital system’s ability to “learn” a task that it is not explicitly programmed for. In this process, the digital system is provided with a set of data, which its AI component registers and internalizes. Machine learning is just one of the ways AI can be helpful, rather than harmful, according to Xavier-Henri Hervé, the executive director of Concordia’s District 3 Innovation Centre.
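To make that definition concrete, here is a small, hypothetical example of a system learning a task from data rather than from explicit rules. The messages, labels and choice of model are invented assumptions; no spam rule is written down, yet the model infers one from the labelled examples it is given.

```python
# Hypothetical example of machine learning: no explicit spam rule is coded;
# the model internalizes one from a handful of labelled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "meeting moved to 3pm",
            "free money claim now", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (human-provided examples)

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# The system now applies what it "learned" to a message it has never seen.
new_message = ["claim your free prize"]
print(model.predict(vectorizer.transform(new_message)))  # likely [1] (spam)
```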

“I do not think AI is the foe. AI is just reality,” he said. “The foe right now is time. The speed at which this is happening; things are happening a lot faster than anyone is imagining. [AI] is so convenient.” Hervé reminded the audience that AI is already a component in many everyday devices, such as smartphones. “It is hiding everywhere,” he said.

Bourbonnière added that she believes it’s crucial to democratize AI to prevent large companies from monopolizing the technology, and to allow non-profit organizations to use AI to address issues around the world. “[Democratization] is education—to learn about the technology and not feel intimidated by it,” she said. “It’s important in widening the population’s understanding of the technology.”

Feature photo by Mackenzie Lad
