Sports

Double-check what you read while scrolling

Don’t believe everything you see on social media, especially when it comes to athletes.

Social media has evolved into an array of platforms where people cannot know for sure whether what they are seeing is the truth. Many users spin a big situation differently to make it fit their own narratives, and, like most societal problems, misinformation finds its way into the sports world.

Athletes are always under the spotlight with so many people paying close attention to their lives. When something big happens to an athlete, hordes of people take to the keyboards to give their two cents. The biggest consequence is that a lot of unverified information and claims appear on an easily accessible public forum, and they can be misinterpreted by other users.

The most recent case of spreading misinformation is the discussion surrounding the COVID-19 vaccine and its possible effects on different athletes. A very glaring instance of this occurred in January 2023, when Buffalo Bills safety Damar Hamlin suddenly collapsed on the field during an NFL game. 

Before doctors identified a rare cardiac event called commotio cordis as the likely cause of Hamlin’s collapse, the public was already delving into conspiracy theories. As National Public Radio’s Lisa Hagen reported after the incident, while fans were scrambling to learn the cause, “…on the internet, anti-vaccine activists filled in the silence with unfounded theories that Hamlin’s collapse was brought on by COVID vaccines.”

The social media discussion became so loud and turbulent that the player’s health seemed to take a back seat, which is rather ironic. In fact, someone even went as far as altering the headline of a CNN article in a screenshot to make people believe that a doctor determined the cause to be a COVID-19 booster shot. Everybody and their mother had something to say about the incident. 

Ten days after the false rumours began circulating, USA Today felt the need to publish an article clarifying that there was no evidence Hamlin’s condition was caused by the vaccine. “Doctors said a connection is highly unlikely given the list of cardiac issues that have long been observed as causing such incidents of cardiac arrest in athletes,” the article reads.

Hamlin was resuscitated on the field, and has now returned to playing football after recovering fully. But this incident remains a reminder of how important it is that people independently verify the information they read, especially on a public forum where anyone can say anything that’s on their mind.

More recently, Bronny James (son of NBA star LeBron James) went into cardiac arrest during a workout at the University of Southern California. When Elon Musk took to Twitter (now called X) and implied that the COVID-19 vaccine must have been partially or completely at fault for the incident, many impressionable people believed it. Once again, it is far more likely that James’ cardiac arrest was exercise-induced, a problem that is not uncommon in teenage and young adult men.

So, the next time you read an outlandish claim, make sure you double-check its sources.

Lily Alexandre believes in better online communities

Video Essayist Lily Alexandre makes videos to help mend our broken online conversations

Lily Alexandre started her YouTube channel almost 10 years ago and has been producing videos on and off ever since. After a brief break in her output, she started the channel back up when she became concerned about her job opportunities, having left Dawson College before graduating. Using YouTube as a way to show off her skills to possible employers, Alexandre put out her first video in the “video essay” format. To her surprise, the video went viral.

The video that sent her channel soaring, released in January of this year and titled “Millions of Dead Genders: A MOGAI Retrospective,” details the mostly forgotten “MOGAI” (Marginalized Orientations, Gender Alignments, and Intersex) community of 2010s Tumblr. This community, Alexandre explains, was largely made up of early-teenage kids trying to navigate their queer identities and coin new names for often confusing feelings that did not seem to fit neatly into existing “LGBTQIA+” categories. While the community was often ridiculed for its incessant “micro-labeling,” Alexandre approaches it with a critical lens, discussing why queer youth gravitated towards this outlook even though it may have been detrimental to some people’s ongoing gender exploration. Alexandre didn’t realize that this video would strike a chord with audiences so quickly.

“I was at work one day, packing orders at a warehouse and my phone started suddenly blowing up,” Alexandre detailed. “It was super exciting but I also had no idea how to approach it because I had made hundreds of YouTube videos and never had an audience over a thousand people. So, suddenly there was a lot of expectation.”

Since then, Alexandre’s channel has grown to nearly 20,000 subscribers, and she has released four more videos this year, averaging about 30 minutes each and mostly discussing issues in online gender discourse.

However, with this focus on controversial topics in queer identity, and as a visible trans woman online, Alexandre has begun to feel the burden of representing her community, a position in which marginalized creators often feel they need to be more perfect and controversy-free than their peers in order to escape backlash.

YouTuber Lily Alexandre

“I think in my case, and in the case of a lot of queer and trans creators, it’s specifically a thing where people have seen that they can relate to what I have to say and very quickly have become super attached to me, and kind of assumed that they know who I am and what I stand for outside of these videos,” Alexandre explained. “So, if I say something that goes outside the bounds of their image of me, there can be a lot of backlash, because I feel that people have gotten attached to me as a person and the idea that I have to live up to their ideal.”

Much of Alexandre’s catalogue focuses on where online conversations go wrong, and how we can start to piece them back together. In her most recent video, “Do ‘Binary Trans Women’ Even Exist? The Politics of Gender Conformity,” she details the false dichotomy between non-binary and binary trans people, with each side claiming to be the more oppressed. This framing, Alexandre argues in the video, is reductive to the core, as it sorts all trans people into one of two boats and erases important nuances in personal experiences.

Alexandre’s videos show viewers how to be more generous with each other online. Alexandre jokes in her videos about simply “logging off” of toxic conversations online, but she believes that there is truth to this suggestion.

“I think just engaging with people face-to-face builds a lot more empathy than we have online. I’ve been trying to carry that empathy into my online interactions too,” she suggested. “If I see someone with a ‘take’ I think is bad […] that doesn’t make us enemies. This stuff is just a lot lower stakes than it feels online.”

When producing videos spanning difficult topics like gender identity and mental illness, Alexandre is still learning how to balance her work with her own mental wellbeing. She finds herself sometimes getting overwhelmed when putting together videos with such heavy content. However, over the past few months, she’s been learning how to deal with these uncertain moments.

“In those cases, it’s been helpful to remind myself why I’m writing the thing I am. It’s usually not just to talk about ‘Hey, this is really awful, let’s wallow in it.’ It’s usually directional, it’s usually for a purpose,” Alexandre explained. “Because I’ve talked mostly about things I feel do have stakes, and my takes might move the needle in the right direction.”

Looking to the future, Alexandre plans to step away from videos on the topic of gender identity to focus on other issues. Worried she may get pigeonholed, she also plans to create videos about art, games, music, and other interests.

All in all, Alexandre wants her channel to be a place of discovery and empathy, no matter the topic of videos she puts out.

“I’m hoping there can be a space for talking about these big questions in a way that isn’t super partisan,” explained Alexandre. “And I hope it can be an empathetic place where people are interested in understanding each other more than they are about being correct or being superior.”

 

Photographs by Catherine Reynolds

Arts

Going back in time in La vie sans applis

Rediscovering life before digital technology, Internet and social media

Walking through the exhibition feels like traveling back in time. For some, it will seem like an unknown life, whereas for others, it will seem familiar. 

Exhibited at the historic Musée de Lachine, La vie sans applis invites viewers to take a walk through a space that shows them life without the internet or social media. The exhibition is divided into sections, including social media, photos, music, games, e-mail, and more, each tracing the evolution of its subject. Each section also provides three types of information: a historical fact about Lachine, a “did you know,” and an environmental fact.

When entering the room, viewers can see a blue wall to their left, where photographs of people are displayed. Pictures of hockey teams, as well as people fishing, playing tennis or running a marathon, can be admired among many other photographs. Ironically, in today’s world, this would be similar to an Instagram or Facebook feed. Perhaps it could also make visitors think of an old family photo album that they peek at once in a while. 

In the photo, video and music sections, a variety of objects can be gazed upon. One can see the evolution of cameras, now old relics of different shapes and sizes. In today’s world, we are able to instantly take pictures with our cell phones. Still, some take pleasure in using a film camera, waiting with excitement for the shots to be developed. Aesthetically, old-school looks better.

Phonograph records dating from 1923 and an electric and battery-operated radio circa 1937 are among the other objects seen in the section. Today, there’s no need to worry when it comes to music, considering the multitude of apps that allow people to listen to whatever they like. The internet has allowed younger generations to discover music from once upon a time, and helped older generations find their favourite older music with better sound quality.

One downside of today’s listening habits is the environmental cost of streaming. According to an article published in 2019 by Rolling Stone, a researcher from the University of Oslo explored the environmental impact of streaming music and found that “music consumption in the 2000s resulted in the emission of approximately 157 million kilograms of greenhouse gas equivalents.”

The exhibition suggests that visitors download and save their music on their own devices instead. Given the amount of music we listen to per day, it would be a challenge for everyone to go back to cassettes and vinyl when everything we listen to is already on our devices.

The game section of the exhibition displays familiar pastimes, such as a chess board from 1910, cards from the 20th century, lawn bowling balls from the 19th century and more. Though video games appear to have replaced some of these old forms of entertainment, they are still enjoyed by many out there. In all sincerity, game night with your pals at your favourite board game bar is far more exciting. 

The exhibition also demonstrates the way information was received in the past, how products were promoted and the way encyclopedia collections were equivalent to today’s search engines. Everything that is exhibited in La vie sans applis can be found on a cell phone. Whether you want to use a calculator, look at the world clock, or communicate with distant family members, everything can be done immediately. 

Digital technology has shaped the way the world works as everything travels faster than ever. However, it is essential to take a break and recharge by doing an activity that doesn’t involve using our cell phones. La vie sans applis encourages the audience to think about the relationship people have with their electronic devices. 

In the end, the real question is: would it be possible today to live without them? 

La vie sans applis is being displayed at 1 Chemin du Musée every day from 10:00 a.m. to 5:00 p.m. until Oct. 10. 

 

Photo by Ana Lucia Londono Flores

Truth is no algorithmic matter

Technology is no better than the next guy when it comes to solving age-old human dilemmas

Meredith Broussard sits calmly at her desk. Behind her on a bookshelf is a copy of her latest book, Artificial Unintelligence, the topic of her latest Zoom talk.

“The people who decided to use an algorithm to decide grades were guilty of ‘technochauvinism,’” she says with a cool and collected tone that belies the gravity of her research. She’s referring to the infamous decision to assign artificial scores for a decisive IB exam using an algorithm that looked at students’ pre-pandemic performance as well as their schools’ rankings over previous years.

Technochauvinism is defined by the presumption that technology-based solutions are superior to human or social ones. This is a central concept to keep in mind when thinking about algorithms and their biases, which — although not always self-evident — sometimes have very tangible consequences.

And these consequences may be more serious than not scoring an A on a final test. With Broussard’s words still ringing in my ears, I stumbled upon an article exposing bias in algorithms used in American hospitals to prioritize access to chronic kidney disease care and kidney transplants. A study had found that the algorithm negatively discriminated against Black patients. It notably interpreted a person’s race as a physiological category instead of a social one — a design decision vehemently disputed by numerous medical studies.
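
To make that design choice concrete, here is a toy sketch, in Python, of how hard-coding race as if it were a physiological variable can quietly change who crosses a care threshold. The scoring rule, the 1.15 coefficient and the cutoff are hypothetical illustrations, not the actual clinical formula or any hospital’s algorithm.

```python
# Toy illustration only: a made-up scoring rule showing how embedding race as a
# physiological multiplier can change who crosses a care threshold. The numbers
# and the threshold are hypothetical, not a real clinical formula.

REFERRAL_THRESHOLD = 20.0  # hypothetical cutoff: below this, refer for specialist care


def estimated_kidney_function(lab_score, race_black):
    """Return an estimated kidney-function score from a lab measurement.

    The multiplier mimics the kind of adjustment critics flagged: an identical
    lab result is inflated for Black patients, making their kidneys look
    healthier on paper and potentially delaying referral.
    """
    multiplier = 1.15 if race_black else 1.0  # hypothetical coefficient
    return lab_score * multiplier


if __name__ == "__main__":
    lab_score = 18.5  # identical lab result for two hypothetical patients
    for race_black in (False, True):
        score = estimated_kidney_function(lab_score, race_black)
        referred = score < REFERRAL_THRESHOLD
        print(f"race_black={race_black}: score={score:.1f}, referred={referred}")
```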

The use of decision-making algorithms has become something of a norm — they can be found anywhere, from the military, to newsrooms, to, most evidently, social media. They have found a purpose in making predictions, determining what is true, or at least likely enough, and prescribing consequent actions. But in doing so, algorithms tacitly tackle some of our greatest dilemmas around truth, and they do so under the cover of a supposedly objective machine. As the kidney care algorithm clearly demonstrates, their interpretations are not an exact science.

Nonetheless, there is a tendency among humans, especially in the tech sector, to assume technology’s capacities are superior to those of human brains. And in many ways, machines do outperform Homo sapiens. Decision-making algorithms can be extraordinary tools to help us accomplish tasks faster and at a greater scope. In newsrooms, for instance, they are more efficient and accurate at producing financial and earnings reports. This is one of the promises of GPT-3, the latest language-generating model, which is capable of producing human-like if sometimes repetitive text. This could significantly alleviate journalists’ workload and spare them from boring tasks.

What an algorithm should not do, however, is universally solve complex philosophical and ethical dilemmas, which humans themselves struggle to define, such as the matter of truth.

The case of the kidney care algorithm illustrates how the ‘truth’ — about who is a priority — can carry a clear distortion, embedded in the algorithm’s architecture. It also shows how what we hold to be true is subject to change. It is open to debate and to additional information that might readjust and refine its meaning, from one that is biased and scientifically inaccurate to a ‘truer’ form that reflects social realities more faithfully.

The problem is perhaps not so much that the technology is imperfect, but rather that it is thought of and presented as something definitive, which in turn leads us to be less vigilant about its blind spots and shortcomings. The risk is that the algorithmically prepared ‘truth’ is consumed as an absolute and unbiased one.

Scholars Bill Kovach and Tom Rosenstiel help us to think of truth as a “sorting-out process,” which results from the interactions between all stakeholders. The result does not represent an absolute truth — which, although it sounds compelling and elegant, may not ever be possible, for humans or machines. Rather, the sorting out process aims to paint a less incorrect picture.

Truth is the product of an ongoing conversation, and this conversation should not take place solely within tech companies’ meeting rooms. It requires questioning and debate, which cannot happen if one-sided interpretations are embedded in algorithms, concealed and tucked away from the public space.

One simple way to ensure algorithms work for the benefit of human beings is to ensure more transparency about their design. In 2017, a Pew Research Center report on the matter had already called for increased algorithmic literacy, transparency and oversight. Last December, a British governmental report reiterated that proposition.

In the case of kidney care, as with the IB test scores, the algorithms have been actively contested and their uses revoked or appropriately adjusted. They have sparked a conversation about fairness and social justice that brings us closer to a better, more accurate version of truth.

 

 

 

Graphic by @the.beta.lab

Media literacy is the new alphabet: why everyone needs to know how to read the news

Disinformation circulating on social media can now be the difference between illness and health.

To the untrained eye, a video of Stella Immanuel, an American doctor, appears completely legitimate. Immanuel, while wearing her white coat and standing in front of the U.S. Supreme Court building, says she knows how to prevent further COVID-19 deaths. With a line of other people wearing white lab coats behind her, she asserts that the virus has a cure: hydroxychloroquine.

The claim spread quickly across social platforms, garnering millions of views after being shared by Donald Trump and one of his sons. Both Facebook and Twitter quickly removed the video for violating their misinformation policies, and the Centers for Disease Control debunked the doctor’s claims. But for millions, the damage had already been done — the seed of misinformation had been sown.

Media literacy, or more specifically a lack thereof, could prove to be one of the biggest threats posed by social media. As shown by viral claims that attempt to downplay the virus’s severity and by unfounded theories about potential cures, the threat extends beyond the practice of journalism to society as a whole.

Facebook and other social media platforms have upped their misinformation policies in response to the pandemic and the 2020 U.S. presidential election. Twitter has implemented a label beneath tweets that present disputed election claims, warning viewers accordingly. It has also begun removing some tweets with false information entirely, as it did with the Immanuel video. Facebook has also started flagging posts as misleading or inaccurate, though its implementation has drawn a mixed reaction.

The problem presented by what the World Health Organization has deemed an “infodemic” is obvious; the solution, on the other hand, remains in question. While the steps taken by Twitter and Facebook are a good start, more needs to be done to help individuals struggling to navigate the modern media landscape. I believe that media literacy courses should be required for all Canadians at the high school level, in order to reduce the spread of misinformation and improve social media as a news-sharing platform.

Per a Ryerson University study, 94 per cent of online Canadians use social media. More than half of those users reported having come across some form of misinformation. A McGill University study found that the more a user relied on social media for news related to the pandemic, the more likely they were to defy public health guidelines. The inverse was equally true: the more a person relied on traditional news media for pandemic information, the more likely they were to follow the guidelines. A similar study at Carleton University found that almost half of Canadians surveyed believed at least one coronavirus conspiracy theory, with more than 25 per cent believing the virus was engineered in China as a weapon.

There are media studies courses that focus on the influence that advertising, propaganda and even cinema can have on consumers. But in the digital ecosystem we currently find ourselves in, it has become essential to understand why misinformation exists on social media and who benefits from it. Yet students are never taught how to use these platforms properly.

In April, the Canadian government invested $3 million in order to help fight against virus-related misinformation. The money will be divided among several programs with the aim of “helping Canadians become more resilient and think critically.” As recently as late October, the federal government launched a program in collaboration with MediaSmarts to benefit Media Literacy Week in 2020, 2021, and 2022.

This plan, while well-intentioned, is reactive rather than proactive. Viewing misinformation related to the pandemic as a blip rather than the new normal is potentially very dangerous.

Last year in the U.S., a federal bill was introduced calling for $20 million of investment in media literacy education. Since then, 15 states have introduced media literacy bills, which aim to make media literacy a required part of the high school curriculum. Beyond more consistent and clear messaging from all levels of government, experts prescribe some level of required training for students. Right now, social media users are left to navigate these formative platforms without the proper equipment; they are placed in a sea of information without a life raft.

In order to remedy Canada’s problem with misinformation, it will be essential for students to be instructed in media literacy by the time they graduate from high school. This baseline education, coupled with the advocacy we continue to see from groups such as MediaSmarts, would create a more educated media-consuming population. In the midst of this pandemic, it is media literacy, even more than epidemiology or politics, that could prove to be the greatest life-saver.

 

Feature graphic by @the.beta.lab

Opinions

Going down the rabbit hole? How we’ve politicized the internet

I first came here for cat videos but now I can’t stop reading about conspiracy theories

The first time I heard of the game Among Us was in an article about how it had become the target of spam attacks led by Trump supporters. This came a few days after Alexandria Ocasio-Cortez had set up a Twitch stream of the game as a way to encourage people to vote for Joe Biden in the then-approaching elections.

What used to be an innocent game that gained popularity among bored youngsters during quarantine ended up — yet again — as a battleground for Democrat versus Republican discord. So much for simply wanting to find your secret alien crew member.

Our southern neighbours’ recent presidential race has brought a whirlwind of political discourse in the past few weeks, and understandably so. The American elections are by far the most watched and discussed in the world. But then again, what’s new? Strong reactions to this event are expressed online every four years; does it make a difference that the results are still on everyone’s social media feeds?

As a Political Science major and self-proclaimed politics nerd, I think it’s a good thing that the internet, the most accessible and practical information-gathering tool we have right now, is giving people a sense of responsibility for the state of their country. I’m of the opinion that everyone should know their own point of view on political matters because everyone should be involved in how the country is run — in academic terms, this is called a democracy.

I also respect the openness about controversial topics that has sprouted in recent years. Politics are gradually becoming less of a taboo subject at Christmas family reunions — or at least, taboo or not, people are initiating these debates anyway.

This being said, the place we once went to hide and not take anything too seriously has lost that magic. You can’t log onto Twitter or TikTok anymore just to watch lighthearted content and take your mind off things without running into a political feud. Every corner of the internet has been labeled with a political affiliation.

Many made fun of Ben Shapiro over the summer when he expressed discontent about sports being so politicized that he didn’t even want to watch anymore. “My place of comfort has been removed from me,” he said, prompting many a mocking comment noting that this is precisely the definition of a safe space, a concept he has repeatedly antagonized in the past.

Shapiro is a controversial figure, and though I don’t necessarily subscribe to his political sentiments, I do feel the same way about the erosion of what apolitical space we had. Now, I’m not certain whether this is because people themselves turn even the most aleatoric content into part of a debate, or because more of our world is simply becoming political.

For instance, Shapiro talks about not wanting to read Sports Illustrated because of Caitlyn Jenner’s feature on the cover, but she didn’t need to be politicized. She seems to me even more relevant to the world of sports than any of the models who adorn the pages of the magazine’s annual Swimsuit issue.

This is how a vicious cycle is formed: we constantly see political debates about the rights of trans women, so much so that we attribute this identity to a political leaning.

I feel for the kids who are growing up only knowing the internet, a platform the world is increasingly dependent on, as a tense and hostile place, and whose quarantine pastimes get turned into presidential debate stages. They might not ever know the simple times of cat videos, fail compilations, and the ice bucket challenge.

 

Feature graphic by Taylor Reddam

Opinions

Fake news is a meme that should die

“Fake news”—that awful, awful term is a meme that has hit its mark, proven its fitness, and is gaining traction due to misunderstanding, division and lulz that we are all guilty of spouting. We are feeding it every time we utter it.

And we should just stop using it.

Fake news generally refers to information that is false or misleading, often sensational, and masked as news. It is a term that is shouted, spouted, typed and copy-pasted a great deal. It’s even associated with a specific voice in my head—can you guess whose?

Now, when I refer to “fake news” as a “meme,” I don’t mean those tacky time-wasters we should all ignore on the internet. I’m writing about the original definition of meme as coined by Richard Dawkins in his 1976 book, The Selfish Gene.

The book itself presents the view that the gene is the agent of evolution (as opposed to the individual or the group). In the last chapter, Dawkins explores the idea of a unit of cultural evolution that works in a roughly analogous way. The meme, as he named it, is an idea, behaviour or style that exists in human minds and persists because of its sticking power and ability to spread. “Smoking is cool” is a meme that receives help from nicotine and the tobacco industry.

To be clear, internet memes aren’t quite the same. As Dawkins put it in a speech at the Saatchi & Saatchi New Directors’ Showcase in Cannes in 2013, “instead of mutating by random chance and spreading by a form of Darwinian selection, they are altered deliberately by human creativity.” Internet memes are mere playthings for humans, while memes in the original sense, though created by humans, evolve naturally.

Fake news is a meme in the original sense, and a strong one at that. It survives because it’s based on truth: false news is a real problem. It thrives by latching on to our fear of being lied to, the belief that people of opposing views are more likely to spread or believe lies—our fear of journalism’s demise, and the mix of humour and outrage we feel when Donald Trump uses it as a slur.

Sure, disinformation has always existed and will always exist—much like the people generating it, the people believing it and the journalists fighting against it. It’s a never-ending struggle. But this fake news business has gotten out of hand. The term no longer simply refers to disinformation in one form or another.

The Washington Post and BuzzFeed News were among the first to use the term in October 2016 to describe how false news articles on Facebook had influenced the U.S. elections. That planted the seed in people’s minds. Then, President Trump threw an all-caps FN-bomb at CNN on Twitter in December of that year, which was the water that nurtured the meme’s growth.

Columnist Margaret Sullivan of The Washington Post actually warned us a couple of weeks later, calling the term a label that has been “co-opted to mean any number of completely different things: Liberal claptrap. Or opinion from left-of-center. Or simply anything in the realm of news that the observer doesn’t like to hear.”

To my liberal friends, stop using it ironically. To my conservative friends, stop using it so angrily. To my journalistic friends, stop using the term entirely. After this article, I will also stop using it. That’s the only way to kill a meme. Because we’re not really using it. It’s using us. Stop saying it. Stop writing it. Let it die.

 

Graphic by @sundaeghost

Student Life

Four Montreal students take first place at HackHarvard

“HackHarvard was maybe my 10th hackathon,” said Nicolas MacBeth, a first-year software engineering student at Concordia. He and his friend Alex Shevchenko, also a first-year software engineering student, have decided to make a name for themselves by attending as many hackathon competitions as they can. The pair have already participated in many hackathons over the last year, both together and separately. “I just went to one last weekend [called] BlocHacks, and I was a finalist at that,” said MacBeth.

The most notable of the pair’s achievements, alongside teammates Jay Abi-Saad and Ajay Patal, two students from McGill, is their team’s first-place ranking as ‘overall best’ at the HackHarvard Global 2018 competition on Oct. 19. According to MacBeth, while all hackathons are international competitions, “HackHarvard was probably the one that had the most people from different places than the United States.” The competition is sponsored by some of the largest transnational conglomerates in the tech industry, including Alibaba Cloud, a subsidiary of Alibaba Group, a multinational conglomerate specializing in e-commerce, retail, and Artificial Intelligence (AI) technology, as well as Zhejiang Lab, a Zhejiang provincial government-sponsored institute whose research focuses on big data and cloud computing.

MacBeth said he and Shevchenko sifted through events on the ‘North American Hackathons’ section of the Major League Hacking (MLH) website, the official student hacking league that supports over 200 competitions around the world, according to their website. “We’ve gone to a couple hackathons, me and Alex together,” said MacBeth. “And we told ourselves ‘Why not? Let’s apply. [HackHarvard] is one of the biggest hackathons.’ […] So we applied for all the ones in the US. We both got into HackHarvard, and so we went.”

Essentially, MacBeth, Shevchenko, Abi-Saad, and Patal spent 36 hours conceptualizing, designing, and coding their program called sober.AI. The web application uses AI in tandem with visual data input to “increase accuracy and accessibility, and to reduce bias and cost of a normal field sobriety test,” according to the program’s description on Devpost. “I read a statistic somewhere that only a certain amount of police officers have been trained to be able to detect people [under the influence],” said MacBeth. “Drunk, they can test because they have [breathalyzers], but high, it’s kind of hard for people to test.”

MacBeth explained that the user-friendly web application could be helpful in a range of situations, from trying to convince an inebriated friend not to drive under the influence, to helping law enforcement officials conduct roadside testing in a way that reduces bias, to letting employees who may have to prove sobriety for work do so non-invasively.

Sober.AI estimates an overall percentage of sobriety through a series of tests that are relayed via visual data—either a photo of an individual’s face or a video of the individual performing a task—that is inputted into two neural networks designed by the team of students.

“We wanted to recreate a field sobriety test in a way that would be as accurate as how police officers do it,” said MacBeth.

The first stage is an eye exam, where a picture of an individual is fed to the first neural network, which gives an estimation of sobriety based on the droopiness of the eye, any glassy haze, redness, and whether the pupils are dilated. The second stage is a dexterity test where individuals have to touch their finger to their nose, and the third is a balance test where people have to stand on one leg. “At the end, we compile the results and [sober.AI] gives a percentage of how inebriated we think the person is,” said MacBeth.
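
The article does not include the team’s code, but the compilation step MacBeth describes, in which several per-stage estimates are folded into a single percentage, can be sketched roughly in Python. The stage names, scores and weights below are illustrative assumptions, not sober.AI’s actual implementation.

```python
# A minimal, hypothetical sketch of how a pipeline like sober.AI might combine
# its three stage scores into one overall estimate. The stage names, scores
# and weights are illustrative assumptions, not the team's actual code.

def combine_stages(stage_scores, stage_weights):
    """Weighted average of per-stage impairment estimates, returned as a percentage.

    Each score is between 0.0 (appears sober) and 1.0 (appears fully impaired).
    """
    total_weight = sum(stage_weights.values())
    weighted_sum = sum(stage_scores[name] * stage_weights[name] for name in stage_scores)
    return 100.0 * weighted_sum / total_weight


if __name__ == "__main__":
    # In the real app, these numbers would come from the two neural networks
    # (the eye exam) and from the video-based dexterity and balance tests.
    scores = {"eye_exam": 0.62, "dexterity": 0.40, "balance": 0.55}
    weights = {"eye_exam": 0.5, "dexterity": 0.25, "balance": 0.25}
    print(f"Estimated inebriation: {combine_stages(scores, weights):.0f}%")
```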

“Basically, what you want to do with AI is recreate how a human would think,” explained MacBeth. AI programs become increasingly accurate and efficient as more reference data is fed into the neural networks. “The hardest part was probably finding data,” explained MacBeth. “Because writing on the internet ‘pictures of people high’ or ‘red eyes’ and stuff like that is kind of a pain.” MacBeth said that he took to his social media pages to crowdsource photos of friends and acquaintances who were high, which provided some more data. However, MacBeth said his team made a name for themselves at the hackathon when they started going from group to group, asking their competitors to stand on one leg, as if they were sober, then again after spinning around in a circle ten times. “That was how we made our data,” said MacBeth. “It was long and hard.”

Participating in such a prestigious competition and having sober.AI win ‘overall best’ left MacBeth and Shevchenko thirsty for more. “HackHarvard had a lot more weight to it. We were on the international level, and just having the chance of being accepted into HackHarvard within the six or seven hundred students in all of North America that were accepted, I felt like we actually needed to give it our all and try to win—to represent Concordia, to represent Montreal.”

MacBeth and Shevchenko have gone their separate ways in terms of competitions for the time being; however, the pair’s collaborations are far from over. Both are planning to compete separately in ConUHacks IV at the end of January 2019, where MacBeth explained that they will team up with other software engineering students who have yet to compete in hackathons. “We’re gonna try to groom other people into becoming very good teammates,” said MacBeth.

The first-year software engineer concluded with some advice for fellow Concordia students. “For those in software engineering and even computer science: just go to hackathons,” advised MacBeth. “Even if you’re skilled, not skilled, want to learn, anything, you’re going to learn in those 24 hours, because you’re either gonna be with someone who knows, or you’re gonna learn on your own. Those are the skills you will use in the real world to bring any project to life.”

Feature photo courtesy of Nicolas MacBeth

Arts

What makes an art critic?

Saelen Twerdy talks internet, archives and dematerialization

“I grew up in a cultural vacuum,” recalled Saelen Twerdy, a Montreal-based writer, editor, art critic and PhD candidate in art history at McGill University. That sense of absence fueled his increasing desire for culture. Growing up in a small, exurban town in British Columbia, he got his initial exposure to critical literature through video game magazines like Gamefan.

Twerdy considered himself a “video game snob” because he preferred reading about video games to playing them. Although these magazines sparked Twerdy’s interest in criticism, he said the internet is what truly served as a “gateway to experience the outer world.” It allowed him to develop an interest in music and, ultimately, spend a decade as a music journalist writing for publications such as Color Magazine and Discorder Magazine.

On Nov. 9, Twerdy was featured in Conversations in Contemporary Art’s fifth lecture series held by Concordia’s studio art MFA program. The series provides the opportunity to hear a variety of artists, writers, critics and curators discuss their practices.

Twerdy’s talk, “How I Became an Art Critic,” discussed the internet’s role in developing his curiosity about the digital world and his understanding of how culture is consumed. His current fascination is the dematerialization of art since the 1960s, which refers to how art has become increasingly computerized, gradually replacing its physical form.

During his studies in art history and film at the University of British Columbia, Twerdy retained an interest in art and technology. His curiosity about how people determine the definition and value of art led him to write about it.

“I did not understand how to appreciate this work of art,” Twerdy said, referring to Rodney Graham’s Millennial Time Machine (2003).

This was the first piece he remembers coming across in a gallery and not grasping. Attracted to works that demonstrate some sort of analysis or reaction to society, he became frustrated and confused; he wanted to understand what he was looking at. What Twerdy learned from Graham’s piece was that he really needed to push his critical thinking. The ability to observe art critically changes the way a person experiences and engages with it, he said.

By studying how critics and artists research and analyze the way ideology circulates in a culture, Twerdy realized that in order to fully grasp art, he needed to study this particular phenomenon. Thus developed his desire to learn about the status of art and how conceptualism relates to dematerialization—conceptualism being the notion of art as an abstract idea rather than an object, and dematerialization the process by which an object becomes immaterial.

“What is art and where [does it] belong?” Twerdy asked.

“Coming to art through criticism, as opposed to criticism through art, had an influence on my work.”

Steven Shearer, a contemporary artist from Vancouver who uses archives as a point of departure in his work, is a key figure in Twerdy’s research. Archiving and the inescapability of the internet piqued Twerdy’s curiosity. “If you want to talk about archive, you have to talk about the internet,” he said.

The internet was certainly a recurring topic in Twerdy’s talk. He recalled creating his Tumblr account in 2008, which filtered the way he experienced and engaged with art. It allowed him to witness the emergence of online art through viewing and being a part of its changes, which he said changed his perception of conceptual art and its relationship to the past.

“[I am] attracted to artists who work like critics,” Twerdy said, and laughed as he explained that he has always loved reading books about books and observing artworks about art, because it allowed him to think about creative works differently. This essentially described his curiosity about theory, specifically theories surrounding the notion of art as influenced by technology and its place in our world today.

According to Twerdy, critiquing art is less about the artwork itself and more about its place in society. Simply writing about art does not make someone an art critic. Stemming from general intrigue, criticism requires analysis, evaluation and thorough knowledge of the matter at hand, rather than a simple explanation of why a piece is of interest. Twerdy’s talk made the difference rather simple: art critics dive deep and take no shortcuts, while arts writers are all about generalization and promoting simplified, easy-to-read versions of complex artistic ideas.

Opinions

Tide Pods: From laundry to brainwashing

Social media challenges highlight a deeper issue within today’s meme culture

Over the last three weeks, a new challenge has emerged on social media called the “Tide Pod Challenge.” It quickly became a meme online, as many people made jokes about eating the colourful detergent packets. Despite the danger and the laundry brand telling people not to eat the pods, many people—mostly teenagers—continue to videotape themselves eating Tide Pods.

The first time I heard about a challenge on social media was the 2014 Ice Bucket Challenge, and it was for a good cause. Since then, many new dares have emerged on the internet, and in my opinion, many of them are stupid. With the Tide Pod Challenge specifically, teenagers record themselves biting into the packets in order to gain views, recognition and popularity on social media.

You’re probably reading this thinking the same thing as me: this challenge is just stupid and dangerous. People are ingesting toxins by intentionally eating Tide Pods. In 2017, before the challenge even began, more than 10,500 children under the age of five and 220 teens were exposed to Tide Pods, and about 25 per cent of those cases were intentional, according to the Washington Post.

Perhaps we can understand why very young children might be attracted to the colour and the pleasant smell of Tide Pods, but I for one cannot understand why a teenager—who can make reasonable choices—is compelled to do the same. So why are they doing this? I believe I might have an answer.

Recently, our society has entered an era characterised by social media and meme culture. The term “meme” was coined by Richard Dawkins in his book The Selfish Gene to describe “an element of a culture or system of behaviour that may be considered to be passed from one individual to another by non-genetic means, especially imitation.” In today’s culture, memes and social media are the diffusers of ideas within the online world, and they are limitless. Anyone can find anything on any subject online. It is a beautiful and useful tool, or a dangerous one—especially for people who are easily influenced, such as teenagers.

The problem is that, in our era of social media, the border between public and private life is slowly being erased. Every time we log on to a social media platform, such as Instagram or Facebook, we see people sharing idealistic pictures and videos of their everyday lives.

Even if most social media users understand that these perfect images do not reflect real life, I believe many teenagers can be influenced by these people, which leads them to constantly pursue views, likes and perfection online.

These teenagers, therefore, will follow a trend not because they think it is valuable or useful, but because they think it is the first step to celebrity and popularity. Reality often catches up to them, though sometimes too late, once their lives are endangered. They hope to become celebrities, but often become known only on a small scale, limited to their neighbourhood news or the emergency medical services.

Fortunately, Tide has quickly reacted to the challenge by creating advertisements that show the dangerous effects of eating their products. Yet it doesn’t seem to be enough, as more cases of Tide Pod ingestion are reported every day (already 39 since the beginning of the year, 91 per cent of them intentional), according to the Washington Post.

I believe social media perpetuates meme culture, and teenagers in this culture suffer potential brainwashing from online trends. Unfortunately, most teenagers today cannot be themselves without thinking about what they have to do in order to be liked and loved in their virtual community.

Graphic by Alexa Hawksworth 

Opinions

Internet hoaxes: easily seen, easily believed

Admit it, you’ve fallen for an Internet hoax. You’ve passed on a chain email because you were scared if you didn’t, you’d be alone forever. You clicked on that Facebook link offering a $500 gift card, and then spent the next few days deleting all the spam from your wall. Don’t worry, we’ve all done it.

Browse wisely and save yourself from embarrassment (and viruses). Photo by Sarebear:), Flickr.

In recent news, U.S. Olympian Kate Hansen posted a video that caused a stir worldwide. The media was full of so-called “Sochi fails,” ranging from unfinished hotel rooms to bright yellow water coming from taps, but Hansen’s video was about to blow them all out of the water. She wrote on her YouTube page, “I’m pretty sure this is a wolf wandering my hall in Sochi,” and guess what? The video actually showed a wolf outside her door.

The stunt was later discovered to have been organized by late-night TV host Jimmy Kimmel, but not before it went viral.

In 2013, Kimmel pulled a similar stunt with his “twerk fail” video that showed a girl twerking against a door, then falling onto a candle-covered table as her hair caught fire.  People on the Internet went crazy, until they realized they’d been fooled.

Nowadays, we will believe almost anything we see on the web, and people are quick to take advantage of our naiveté. Whether it’s comedians or online pranksters, we’re constantly bombarded by fake content and the problem is that people consistently believe it.

It’s not that we’re getting more stupid; we’re just getting lazy.

In this day and age, we have access to whatever we want in a matter of seconds. We can get news sent directly to our smartphones within minutes of it happening. We don’t have to put any effort into looking for information, so we’ve stopped trying.

That’s how we’ve become so gullible—our laziness is being taken advantage of. Internet trolls know that most of us won’t bother checking the facts, and as soon as people start sharing a ridiculous story, their job is done. If years of being in school have taught us anything, it’s that the Internet isn’t always a trustworthy source.

We all need to start thinking logically. Do your research before believing everything you read. If you stumble upon an article about Pauline Marois wanting to open a Quebecois-only blood bank, look into where the article came from. You know the story I’m talking about; it was all over social media a few months ago. People were so quick to share it in disgust that they didn’t even check the source. As it turns out, The Lapine, the website the story originated from, was a satirical news outlet.

Most importantly, use your common sense. Do you really think Zara would give a $500 gift card to every person who shared something on Facebook? Or that there’s actually a disorder called “Alexandria Genesis” that causes people to have purple eyes, immunity to most diseases, a perfect figure, and flawless porcelain skin? One quick Google search can save you from a lot of potential embarrassment.

Always remember, if it seems too good (or too crazy) to be true, it probably isn’t.
