Opinions

AI: your next romantic partner

AI is slowly but surely becoming part of our romantic lives.

Artificial intelligence is a fascinating invention, yet many consider it a threat to humanity. Now AI is entering the romantic world with customizable partners, and as risky as that is, it seems inevitable.

People who are tired of getting ghosted, betrayed, and hurt might consider downloading an AI application that replicates human emotions. Having an AI partner can help improve one’s relationship skills and boost confidence. It can also offer a safe space to people who have been traumatized by a toxic former partner.

But while AI may seem like a tool to end romantic loneliness, it isolates the person from the real world. It hinders them from facing their fears and healing themselves to establish genuine, healthy relationships. 

Out of curiosity, I downloaded an AI dating app last year. However, I felt bored immediately simply because the person responding did not exist. Receiving a good morning text daily feels good, but dating an AI sounds like living in a delusional world.

We must be mindful that dating an AI is more dangerous than we think. People trying to build a relationship with an AI partner share a great deal of personal information, often without realizing that the company on the other side may be collecting all of that data.

As I think of AI dominating the dating scene, I imagine new debates emerging. Will texting an AI be considered cheating? Are humans now in competition with AI? Will people find it hard to move on from their AI partners? Will AI increase anxiety in real-world dating? Many people are already addicted to their social media apps, and I fear that AI companions will eventually become addictive, too. A prolonged relationship with an AI could leave people anxious when interacting with real humans.

Imagine being intimate and vulnerable with an AI every day. Reading and hearing exactly what you want to hear creates a destructive comfort zone. It hinders one from enjoying actual dates and learning to grow with a human partner. I also see it as a trap: when we constantly escape our reality to feel temporarily better, we only postpone our healing and the magic that comes with true love.

In Japan, over 4,000 men have an AI digital wife with a marriage certificate issued by tech company Gatebox. Although the number might seem low, it is still concerning. I firmly believe this number will rise exponentially in the upcoming years. In the long run, the resulting decline in birth rates could be alarming.

If things remain the same, AI will transform the world into a place lacking in deep emotions and human interactions. While change is sometimes daunting, we must always proceed with caution and choose to participate in what feels right to us. 

Arts and Culture Community

AI and the generative art of writing

The Quebec Writers' Federation hosted a panel discussion on artificial intelligence.

On Sept. 21, the Quebec Writers' Federation (QWF) held a writers’ panel titled AI and the Future of Writing at the Atwater Library Auditorium. As part of its Writing Matters campaign, the organization invited Professor Cheryl Chan, Professor Andrew Piper, and author Sean Michaels to discuss the academic, professional and emotional landscape surrounding this burgeoning tool.

Investigative journalist Julian Sher opened the talk with a colourful and poignant introduction, ironically written by ChatGPT. His subversive humor, wit and ease set a fantastic tone for the evening. 

The panel discussed the definition, benefits, fears and future applications of generative AI models. Piper defined generative AI as a new power of memorization: productive large language models that are trained on data and produce information based on prompts. Michaels likened AI to a mystical boulder in the forest that finishes your sentences, having used the tool to write part of his new work of fiction. Chan recounted her research into programming her own AI model, arguing that by analyzing how this technology accumulates information, one can question the dominant, normalized presentation of voice. Piper took a more relaxed approach, offering a cool, detached perspective on the lack of, and need for, AI regulation.

As the speakers bounced off each other, it became evident that they were pre-emptively addressing how we should think and feel about the way AI destabilizes a general idea of humanity and the human condition. The guests suggested that writers and academics should engage critically with the discourse to avoid fearmongering about a dystopian future.

Chan contended that humans will always have a certain mystique and subjectivity, optimistically concluding: “Fast food didn’t kill the chefs.” The public, therefore, should trust in the resilience and counter-cultural ability of artists, creatives and writers.

Michaels asserted that the fears around AI don’t come from inherent danger within the technology itself but from the wider implications of this technology within a collectively predetermined economic structure. For the panel, the issue is not the fact that AI exists, but rather that it exacerbates the precarity and uncertainty that already exist in capitalism. Michaels suggested that there would be less anxiety surrounding the job precarity brought on by AI if our economic system prioritized care and collective security.

While the group touched briefly on the effects of societal stratification and wealth inequality, there was curiously little discussion of how to make AI more accessible. While Piper was adamant that safe use of AI should augment rather than replace skills, the discussion did not offer pragmatic solutions for reaching this goal.

Sher guided the second portion of the evening into a Q&A session. The crowd provided plenty of thought-provoking comments. Whether the comments came from a struggling lyricist or a passionate teacher concerned with literacy in their classroom, the room was alive with nods of approval, murmurs of disgust and whispers of excitement.

The evening closed with wine, cheese and a more nuanced understanding of the technology that pervades us.

Arts Exhibit

Yea I made it up, Yea it’s real: Examining digital culture, social media, and the meme-sphere

Concordia students and alumni adopt internet aesthetics to explore the human experience in the digital age in new exhibition

On Feb. 17, artists Edson Niebla Rogil and Dayana Matasheva hosted the vernissage for their exhibition Yea I made it up, Yea it’s real out of their shared Plateau studio.

The show featured 12 artists, including Niebla Rogil and Matasheva, whose works address the effects of the internet on the human experience through mediums ranging from AI-generated audio to livestreaming-inspired video compilations.

For Matasheva, who graduated from film production in 2020, the internet represents an aesthetic endeavour. “I think aesthetically, no one is using the visual vernacular of the internet. We are interested in its aesthetics specifically, rather than just its subject matter.”

After noticing a lack of representation of internet subject matter within traditional gallery spaces, Niebla Rogil and Matasheva issued an open call for like-minded artists.

“There’s a really big focus on technology as a medium, but there’s very little about the cultures that are growing online and changing the landscape of how people interact with each other,” said Concordia intermedia major Liz Waterman, whose sensorial TikTok-inspired video projection Doom Scroll was featured in the exhibition.

“I think that it’s shaping culture and psychology in a way that’s really interesting, and we don’t see enough work about it.”

Yea I made it up, Yea it’s real is the first exhibition organized, hosted, and curated by Niebla Rogil and Matasheva, but the pair have ambitions to move future exhibitions out of their studio into larger spaces, and to continue to host their networking event The Net Worker.

“It’s a recurring event where people shamelessly network and there’s no other purpose to it,” explains Matasheva. “People come together, exchange DIY business cards, they wear business attire and everything. It’s a little bit performative, but it actually is serving a purpose for artists.”

Information about upcoming exhibitions, networking events and more can be found on Niebla Rogil and Matasheva’s Instagram profiles.

Arts

Art Therapy: one of the many roles traditional art plays in the digital era

Concordia Art Hive conjures the psychological and spiritual aspects of art

Art has been used in various modes of psychological treatment since around the 1700s. According to Lois Woolf, founder of the Vancouver Art Therapy Institute, art therapy as a formal practice was first explored in Europe and North America in the 1940s.

The subject, together with human psychology, has been explored in increasing depth ever since. Unlike art-making for its own sake, art therapy focuses on the process of creating art rather than the result.

The Centre for the Arts in Human Development at Concordia University provides creative art therapy for people with disabilities and special needs, as well as for people with anxiety and depression. Senior associate director Lenore Vosberg says that instead of teaching art skills,  the centre helps people express themselves through different art forms.

“It’s a very supportive place. People get a lot of good and positive feedback for everything they do here,” Vosberg said. The centre works to build participants’ self-esteem and self-confidence, as well as to build relationships and trust through the process of art creation.

Because art embraces many different ideologies and forms of expression, art therapy can be useful for all kinds of people. It’s an alternative to traditional therapy for those who find it easier to express themselves through an art form than by speaking to a therapist.

The Concordia Art Hive is a public practice art therapy space, located on the first floor of the ER building downtown and on the fourth floor of the central building at Loyola in the G-Lounge. The spaces are accessible to anyone who wishes to achieve self-expression through art. Students sit around a table to communicate with each other while creating their crafts. 

Rachel Chainey is an art therapist who coordinates the Art Hive HQ located at Concordia’s downtown campus. She says that one of their challenges is getting people to understand what art therapy is.

“Some people would be intimidated by arts because they think they should be good,” Chainey said. “[But you approach] it from an angle of play. It’s not a performance, or result, but more of a process.”

There are more than 30 art hives in Montreal. Traditional arts are spreading internationally into many other fields, like technology, creating endless possibilities for artists everywhere. 

Art education student Kaida Kobylka stopped by the Art Hive with the goal of observing art studios in a public space. She described an AI project she had explored, in which she had to supply the artistic idea before the system could create anything. “AI can learn and create, but it can’t just make something out of nothing yet,” said Kobylka. “I have to put the artistic thoughts into the input, it isn’t just replacing an artistic mind.”

“Everybody has the crisis when they are an artist, like does what I made matter or would painting exist in the future,” Kobylka said, “but the answer is yes, the paintings are still evolving and relevant.” 

Indeed, art has always been seen as a form of self-expression and materialized thought throughout the existence of humankind, and this is how traditional art participates in society in a psychological and spiritual way.

News

Teachers shift gears to avoid A.I. plagiarism

As concern over students using A.I. chatbots rises, teachers must prepare to deal with the issue constructively.

OpenAI, a leading artificial intelligence research laboratory, recently launched ChatGPT, a free text-generating tool open to all. The chatbot can understand and answer questions through prompts, which has made it extremely popular among students.

Textbots like ChatGPT can rescue last-minute assignments ranging from Shakespearean poetry to calculus. As students exploit such A.I., teachers are looking for ways to detect this new form of plagiarism.

“We clearly need to come up with new ways to evaluate learning if we want to avoid these bots to be used to fake student work,” said Bérengère Marin-Dubuard, an A.I. enthusiast and teacher in interactive media arts at Dawson College.

Marin-Dubuard also expressed her thoughts on the quality of the text written by the A.I.

“The text generated is interesting, but in the end I’d be surprised if many people just don’t do the work,” she said. “It’s probably even more work to set it up.”

Marin-Dubuard encourages her class to embrace the new technology as a tool, but she remains wary of the threat of plagiarism. 

ChatGPT’s technology relies on natural language processing — a subfield of computer science based on the interaction between computers and human language.

“One part of how ChatGPT works is by learning complex patterns of language usage using a large amount of data,” said Jackie CK Cheung, an associate computer science professor at McGill University and the Associate Scientific Director at Mila A.I. Institute of Quebec. 

“Think at the scale of all the text that is on the internet,” Cheung added. “The system learns to predict which words are likely to occur together in the same context.”

He explained that the developing A.I. will keep improving as researchers and users feed it more data, a training approach known as “deep learning.”
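To make that idea concrete, here is a toy next-word predictor built from co-occurrence counts. It is only a sketch of the principle Cheung describes, written in Python and trained on a made-up ten-word “corpus,” and not how ChatGPT is actually built.

from collections import Counter, defaultdict

# Invented miniature training text; real models learn from internet-scale data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often after "word", if any.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat", the most frequent continuation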

Cheung acknowledged that the easily accessible ChatGPT and related models could increase students’ temptation to plagiarize. He noted that instructors will have to adapt their methods of evaluation and rely more on in-person or oral assessment. Cheung added:

“There could also be innovations in which ChatGPT-like models can be used as an aid to help with improving the learning process itself.” 

A question of ethics remains as A.I. continues to develop in art and writing. Both art and text generators have been accused of plagiarism. Last month, artists online flooded art-hosting websites to prevent A.I. from generating proper images. Last week, a Substack blog was exposed as A.I.-written by one of the writers it had plagiarized.

Julia Anderson, who explores new ways of interacting with developing technology and has collaborated with the Montreal A.I. Ethics Institute, said that A.I. should not simply be used to do the work for you. She believes that ChatGPT and similar models could be used as tools to help conceptualize projects or to aid in teaching and supporting students. A.I. tools like LEX already offer support in conceptualizing ideas, something Anderson suggested teachers could use when building a curriculum.

“You can make a similar argument with other technologies, like Google translate,” Anderson said. “But it’ll be at the discretion of the user to decide what to edit.”

With schools now beginning to look for methods of detecting A.I. plagiarism, Edward Tian, a 22-year-old computer science major from Princeton University, developed GPTZero. The program can detect work written by the OpenAI software. 

Other methods of dissuading students from plagiarizing, according to Anderson, could include digital watermarks and requiring payment to copy text.

Nonetheless, Anderson understands that such measures cannot strictly assure legitimacy. 

“Going forward I’m sure there’s going to be more problems,” she said. “At the end of the day, it comes down to human discretion.”

News

Graduate students explore the world of artificial intelligence and the early detection of anorexia

The paper focuses on the system’s efficiency in labelling early signs of anorexia.

Concordia graduate students Elham Mohammadi and Hessan Amini wrote a research paper describing an algorithm that uses artificial intelligence to detect signs of anorexia on social media, presented at the Conference and Labs of the Evaluation Forum (CLEF) 2019 this September.

CLEF is a conference that has been running since 2000; the 2019 edition was held in Lugano, Switzerland. It addresses a wide range of topics, primarily focusing on “the fields of multilingual and multimodal information access evaluation.” Mohammadi and Amini worked under the supervision of Leila Kosseim, a Concordia professor in computer science and software engineering.

Social media platforms are a rich source of information for research studies because people use these outlets to share large amounts of data about their emotions, thoughts and everyday activities.

The research was based on a simulation scenario using past posts from social media. In an interview with The Concordian, Amini explained that there are a few reasons the study was focused on anorexia specifically.

“It wasn’t covered that much in literature,” he said. “Finding out the patterns requires a more complicated source of analyzing information.”

Their focus was on the early detection of the eating disorder.

“We don’t want to detect the risk after it has happened or after it has caused damage to the person,” Amini explained. “We want to detect that the person is showing signs of anorexia.”

The focus of the study was to test the algorithm. Amini clarified that their role is not to diagnose patients or interpret the data; the study is about the system’s efficiency in labelling these signs. Once posts are labelled, the data can be sent to an expert for closer evaluation.

Sifting through over 2,000 social media posts by hand would be tedious and time-consuming, so the researchers used a technique called an “attention mechanism.” The algorithm systematically filtered through the abundance of posts, using keywords to flag those that were most important.
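As a rough illustration of that filtering step, the Python sketch below scores each post against a small set of keywords and keeps only the highest-scoring ones. It is a deliberately simplified stand-in: the paper’s actual attention mechanism is learned by a neural network, and the keywords and posts here are invented.

# Hypothetical keyword weights standing in for learned attention scores.
KEYWORDS = {"calories": 2.0, "fasting": 2.0, "weight": 1.5, "skinny": 1.5}

def score(post):
    # Sum the weights of any keywords appearing in the post.
    return sum(KEYWORDS.get(word, 0.0) for word in post.lower().split())

def top_posts(posts, k=2):
    # Keep only the k posts with the highest keyword score.
    return sorted(posts, key=score, reverse=True)[:k]

posts = [
    "went hiking with friends today",
    "counting calories again and fasting until dinner",
    "feeling skinny is the only goal that matters",
]
print(top_posts(posts))  # the two diet-related posts are flagged for review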

They had one data set that was already labelled, separating users who showed signs of anorexia from those who did not, as well as another data set that was not categorized at all. Mohammadi and Amini analyzed the data to evaluate how well the system performed; however, it must be noted that ethical complications can arise when dealing with personal data.

Mohammadi explained that when dealing with users’ data, some people might be hesitant to have their personal information analyzed. “People might not be comfortable with it,” he said.

In being able to detect certain patterns of anorexia on social media, more complex research questions arise. Although this is a good start, Amini explained that the work requires many experts sitting together and discussing solutions.

Amini noted that although people think artificial intelligence (AI) systems like this are set up to replace humans, the opposite is true.

“AI is going to be there to help humans,” he said. Amini explained that it will make the lives of psychologists and mental health practitioners easier.

Although this research is not a final solution, it can help bring awareness to those in need of mental health support and contribute to a healthier society.

 

Graphic by Victoria Blair

News

Humans, sex and robots: integrating technology into our sexuality

Concordia research aims to understand individual attitudes and perceptions towards artificial erotic agents.

In the past weeks, Concordia students might have stumbled upon an unusual request glued to various bathroom stalls: research on sex robots looking for participants.

As with any subject confronting our sexuality, mixed with the feared and misunderstood rise of technology, the expected reactions are strong, ranging from laughter to repulsion.

“I think it is eerie because it is kind of disrupting the process of individuals getting to know their bodies at an intimate level whether it is with a partner or by themselves,” said Georgette Ayoub, Concordia Political Science student.

Yet Simon Dubé, the man behind the research and a Ph.D. student in Psychology, Neuroscience and Cognition of human sexuality at Concordia, says these reactions are quite natural.

“These are first impulse reactions,” said Dubé. “It’s not unique reactions with sex robots. We had the same with video games, with pornography. It used to be the same thing even with radio, people used to think it would lead to the destruction of society. It’s always blurred out of proportion.”

Indeed, there is a climate of moral panic when it comes to technology. Are robots going to replace affection, or even love? Such reactions can partly be explained by the fact that only a few studies have been done so far, most of them on human interaction with computers; none have dug into erotic interaction.

The research is therefore interested in people’s reception of emerging artificial agents, such as virtual erotic partners, virtual chatbots and, of course, the infamous sex robots. Dubé hopes to further the understanding of their impact on our society and of the relationships humans can develop with E-robots.

And for everyone wondering, no, the research doesn’t actually include having sex with robots. When students register to participate in his study, Dubé, who works with the Concordia Vision Lab, uses a number of different techniques, such as questionnaires, to track people’s responses to images, videos and audio related to erotic machines.

While technology has taken over a huge part of our lives, it only makes sense that it now extends to our sexuality. Dubé argues that his research comes at a time when a cluster of technologies is converging to enable these erotic machine interactions.

“The idea of having sex or an intimate relationship with non-living objects has been here for thousands of years,” said Dubé. “I think at this point, it’s a matter of the technology emerging right now related to artificial intelligence, such as sex robots, computing or augmented virtual reality. These are all achieving a level of interactivity and immersivity that is starting to become interesting for people, to use them in their relationship or intimacy.”

Arguably, the fear of including robots in our intimacy or sexuality derives from the fiction pop culture produces. Just think of the dystopian, cold-hearted robotic world created in almost every episode of Black Mirror. The picture is always a classic one: an apocalyptic world shown as the result of what could happen if we start including these technologies in our day-to-day lives.

Dubé warns us of the danger of such misconceptions, arguing that this discourse is at the very root of why researchers have such trouble studying something that could be beneficial for a lot of individuals.

“People have really polarized ideas on what these technologies can do, but for some people, it can be super helpful,” said Dubé. “It can be part of their sexuality with their spouses, their partners or alone. Yes, humans develop problems with all kinds of technology, people get addicted to video games per example, but artificial erotic agents could help people with trauma, or anxiety related to sexuality or intimacy. It’s always the same music that plays over and over again, but here we just need to do the right kind of research.”

What Dubé means by the right kind of research could result in positive applications of these erotic technologies in health and medical research, and even sex education. They could be used by people who have experienced sexual trauma to help them reintegrate sexuality into their lives, by people having a hard time finding partners or coming to terms with their orientation, or simply out of curiosity.

“The key message I want to get across, is that it’s simply not gonna be an apocalypse or a robot utopia or virtual reality utopia where everything is going to be beautiful or dark,” said Dubé. “It’s going to be somewhere in the middle, for some people, it’s an amazing experience and it’s an integrated part of their sexuality and for others, they might have a problematic dynamic with these technologies. But we need to overcome this idea it will be all black or all white.”

Either way, with erotic technologies, we are now standing at the beginning of a new sexual revolution.

 

Graphic by @joeybruceart

Student Life

Four Montreal students take first place at HackHarvard


“HackHarvard was maybe my 10th hackathon,” said Nicolas MacBeth, a first-year software engineering student at Concordia. He and his friend Alex Shevchenko, also a first-year software engineering student, have decided to make a name for themselves and frequent as many hackathon competitions as they can. The pair have already participated in many hackathons over the last year, both together and separately. “I just went to one last weekend [called] BlocHacks, and I was a finalist at that,” said MacBeth.

The most notable of the pair’s achievements, earned along with teammates Jay Abi-Saad and Ajay Patal, two students from McGill, is their team’s first-place ranking as ‘overall best’ in the HackHarvard Global 2018 competition on Oct. 19. According to MacBeth, while all hackathons are international competitions, “HackHarvard was probably the one that had the most people from different places than the United States.” The competition is sponsored by some of the largest transnational conglomerates in the tech industry, including Alibaba Cloud, a subsidiary of Alibaba Group, a multinational conglomerate specializing in e-commerce, retail, and artificial intelligence (AI) technology, as well as Zhejiang Lab, a Zhejiang provincial government-sponsored institute whose research focuses on big data and cloud computing.

MacBeth said he and Shevchenko sifted through events on the ‘North American Hackathons’ section of the Major League Hacking (MLH) website, the official student hacking league that supports over 200 competitions around the world, according to their website. “We’ve gone to a couple hackathons, me and Alex together,” said MacBeth. “And we told ourselves ‘Why not? Let’s apply. [HackHarvard] is one of the biggest hackathons.’ […] So we applied for all the ones in the US. We both got into HackHarvard, and so we went.”

Essentially, MacBeth, Shevchenko, Abi-Saad, and Patal spent 36 hours conceptualizing, designing, and coding their program called sober.AI. The web application uses AI in tandem with visual data input to “increase accuracy and accessibility, and to reduce bias and cost of a normal field sobriety test,” according to the program’s description on Devpost. “I read a statistic somewhere that only a certain amount of police officers have been trained to be able to detect people [under the influence],” said MacBeth. “Drunk, they can test because they have [breathalyzers], but high, it’s kind of hard for people to test.”

MacBeth explained that the user-friendly web application could be helpful in a range of situations, from trying to convince an inebriated friend not to drive under the influence, to law enforcement officials conducting roadside testing in a way that reduces bias, to employees, who may have to prove sobriety for work, to do so non-invasively.

Sober.AI estimates the overall percentage of sobriety through a series of tests relayed via visual data—either a photo of an individual’s face or a video of the individual performing a task—that is inputted into two neural networks designed by the team of students.

“We wanted to recreate a field sobriety test in a way that would be as accurate as how police officers do it,” said MacBeth.

The first stage is an eye exam, where a picture of an individual is fed to the first neural network, which gives an estimation of sobriety based on the droopiness of the eye, any glassy haze, redness, and whether the pupils are dilated. The second stage is a dexterity test where individuals have to touch their finger to their nose, and the third is a balance test where people have to stand on one leg. “At the end, we compile the results and [sober.AI] gives a percentage of how inebriated we think the person is,” said MacBeth.
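To picture how such a pipeline might combine its stages, here is a minimal Python sketch in which each test returns a score between 0 and 1 and the scores are blended into a single percentage. The stub scoring functions and the weights are hypothetical placeholders, not sober.AI’s actual code, which relies on two trained neural networks.

def eye_exam_score(photo):
    # Stand-in for the first neural network (droopiness, glassiness, redness, dilation).
    return 0.7  # 0.0 = clearly impaired, 1.0 = clearly sober

def dexterity_score(video):
    # Stand-in for the finger-to-nose test analysis.
    return 0.9

def balance_score(video):
    # Stand-in for the one-leg balance test analysis.
    return 0.8

def sobriety_percentage(photo, video):
    # Blend the three stage scores with hypothetical weights.
    weights = {"eyes": 0.5, "dexterity": 0.25, "balance": 0.25}
    combined = (weights["eyes"] * eye_exam_score(photo)
                + weights["dexterity"] * dexterity_score(video)
                + weights["balance"] * balance_score(video))
    return round(100 * combined)

print(sobriety_percentage("face.jpg", "tests.mp4"))  # prints the combined estimate, 78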

“Basically, what you want to do with AI is recreate how a human would think,” explained MacBeth. AI programs become increasingly accurate and efficient as more reference data is fed into the neural networks. “The hardest part was probably finding data,” explained MacBeth. “Because writing on the internet ‘pictures of people high’ or ‘red eyes’ and stuff like that is kind of a pain.” MacBeth said that he took to his social media pages to crowdsource photos of friends and acquaintances who were high, which provided some more data. However, MacBeth said his team made a name for themselves at the hackathon when they started going from group to group, asking their competitors to stand on one leg, as if they were sober, then again after spinning around in a circle ten times. “That was how we made our data,” said MacBeth. “It was long and hard.”

Participating in such a prestigious competition and having sober.AI win ‘overall best’ left MacBeth and Shevchenko thirsty for more. “HackHarvard had a lot more weight to it. We were on the international level, and just having the chance of being accepted into HackHarvard within the six or seven hundred students in all of North America that were accepted, I felt like we actually needed to give it our all and try to win—to represent Concordia, to represent Montreal.”

MacBeth and Shevchenko have gone their separate ways in terms of competitions for the time being; however, the pair’s collaborations are far from over. Both are planning to compete separately in ConUHacks IV at the end of January 2019, where MacBeth explained that they will team up with other software engineering students who have yet to compete in hackathons. “We’re gonna try to groom other people into becoming very good teammates,” said MacBeth.

The first-year software engineer concluded with some advice for fellow Concordia students. “For those in software engineering and even computer science: just go to hackathons,” advised MacBeth. “Even if you’re skilled, not skilled, want to learn, anything, you’re going to learn in those 24 hours, because you’re either gonna be with someone who knows, or you’re gonna learn on your own. Those are the skills you will use in the real world to bring any project to life.”

Feature photo courtesy of Nicolas MacBeth

Student Life

Mapping the future of artificial intelligence

Panelists define AI and discuss how this technology will impact society and the workplace

Artificial intelligence (AI) professionals discussed the impact and future of AI in the workplace and its role in society at large during a panel held at Concordia University on March 13.

“The fear of technological anxiety and mass unemployment due to artificial intelligence has been largely proven to be untrue,” said panelist Kai Hsin-Hung, a consultant at the International Training Centre for the International Labour Organization. “Rather than eliminating occupations, AI will most likely replace the tasks and how we are going to be doing them.”

According to Abhishek Gupta, an AI ethics researcher at McGill University, many people don’t fully understand the term AI, and its definition “has been shifting over time.” Gupta defined AI as “the ability of a machine to do a task that was previously thought to be only possible by human intelligence.”

Caroline Bourbonnière, a communications advisor for the research institute Element AI, clarified that, while certain jobs will be replaced with AI, the purpose of converting this work to automatic operations is to allow workers to be more efficient. “All of futurists are wrong about how quickly AI will be affecting the job market,” she said. “We have a lot of reports, and it was found that job creations versus job-loss projections tended to have a very balancing effect.”

Certain dangerous jobs, such as those of tractor operators and miners, may eventually be replaced by AI technology, but Bourbonnière emphasized that this does not mean AI will replace all jobs. In particular, she discussed how AI technology will be responsible for completing paperwork in the future, which will allow workers to focus on tasks more central to their job.

“In some organizations, people will be spending about two hours a week putting together reports,” Bourbonnière said, offering the example of how “79 per cent of social workers’ work is paperwork. Imagine what they could do with this time. They can be spending it with youth at risk.”

An important subdivision of AI is machine learning, Gupta explained. This refers to a digital system’s ability to “learn” a task it is not explicitly programmed for. In this process, the digital system is provided with a set of data, which its AI component registers and internalizes. Machine learning is just one of the ways AI can be helpful rather than harmful, according to Xavier-Henri Hervé, the executive director of Concordia’s District 3 Innovation Centre.
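A small, hypothetical example of what Gupta means: instead of hand-writing a rule, you give the system labelled examples and let it infer the pattern itself. The data below is invented, and scikit-learn is just one convenient Python library for the demonstration.

from sklearn.tree import DecisionTreeClassifier

# Invented examples: hours of study (feature) and whether the student passed (label).
hours_studied = [[1], [2], [3], [8], [9], [10]]
passed = [0, 0, 0, 1, 1, 1]

# The classifier "learns" the pattern from the data; no pass/fail rule is ever written by hand.
model = DecisionTreeClassifier()
model.fit(hours_studied, passed)

print(model.predict([[7]]))  # the model applies a rule it inferred on its own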

“I do not think AI is the foe. AI is just reality,” he said. “The foe right now is time. The speed at which this is happening; things are happening a lot faster than anyone is imagining. [AI] is so convenient.” Hervé reminded the audience that AI is already a component in many everyday devices, such as smartphones. “It is hiding everywhere,” he said.

Bourbonnière added that she believes it’s crucial to democratize AI to prevent large companies from monopolizing the technology, and to allow non-profit organizations to use AI to address issues around the world. “[Democratization] is education—to learn about the technology and not feel intimidated by it,” she said. “It’s important in widening the population’s understand[ing] of the technology.”

Feature photo by Mackenzie Lad

Student Life

How immersive technology and culture can help create a better future

 Creative director and co-founder of ALLFUTUREEVERYTHING (AFE), Monika Bielskyte (left), during a panel discussion at C2 Montreal alongside interviewer and executive producer of the National Film Board of Canada, Hugues Sweeney (right). Photo by Kirubel Mehari.

 

C2 Montreal invited creative director, Monika Bielskyte, to discuss the future of virtual reality

C2 Montreal is an international conference that gathers visionaries and innovative thinkers from around the world for a three-day event filled with panel discussions based on creativity and commerce. This year’s edition of the event, which ran from May 23 to 25, featured a talk given by Monika Bielskyte on virtual reality (VR) technology and how it might help create a better future.

Bielskyte is the founder of ALLFUTUREEVERYTHING (AFE), a company that designs and builds futuristic virtual worlds using computer-generated simulations of three-dimensional images that people can physically interact with. For example, the company creates simulations of how cities will look 50 years from now.

Bielskyte is a creative director at AFE, specializing in immersive technology such as augmented reality, a technology that uses goggles to superimpose computer-generated images on a user’s view of the real world. She also works with mixed reality, which merges the real and virtual world to produce new environments, and she creates VR prototypes.

For Bielskyte, creating these futuristic virtual worlds offers a way to possibly change our future. “Why I am interested in speaking about the future is because it gives us this necessary distance to look at the present with fresh eyes,” she said during her C2 Montreal talk. “But ultimately, it’s always about the choices that we are making today because there are no answers, only choices.”

According to Bielskyte, the prototypes designed and created using immersive technology and media can have a direct impact on our culture—which influences our reality and eventually our future. And although artificial intelligence (AI) is becoming a more common component of immersive technology, she said, it doesn’t really help improve our world or our future. “We’ve been designing into AI the failures of humanity,” she said. “So our AI will fail as we fail.”

For this reason, Bielskyte designs virtual futures that depict how culture and humanity can be utilized to improve the world. “I am interested in showing how cultures of the world can cohabit and enrich each other rather than fighting each other,” she said. This idea of cohabitation and collaboration has been a focus of Bielskyte for a long time. “From a very young age, I realized that everything is truly connected,” she said. “What interests me is to find how cultures affect each other, because no culture is self-contained.”

The idea that technological innovation without humanitarian revolution leads to a dystopian future is part of what drives Bielskyte’s focus on culture in her virtual prototypes of the future. “Technological change is much easier than cultural change, but if culture doesn’t change, nothing does,” she said. “We’ve been a little too focused on technology. Technology is important, but it’s truly just an extension of ourselves—it’s a tool. Technology is not good or bad, humanity is.”

During the talk, Bielskyte also tackled some misconceptions she said people often have concerning VR. “Technology/content companies haven’t done a great job in marketing this new technology and these new ideas,” she said. “[Virtual reality] is mostly perceived as an entertainment gimmick.” The ideas Bielskyte discussed about VR, in comparison, were not about entertainment, but rather about building a glimpse into the future and broadening our horizons with tangible experiences. VR is a world where people no longer sit in front of a computer to get a glimpse into another world, she said. Instead, they become immersed in other realities. “It’s about leaving the rectangular screens behind and stepping into a space where the world is our desktop,” Bielskyte said, describing a world where VR simulations would allow users to feel like they’re truly experiencing another reality.

According to Bielskyte, when immersive technology becomes the new common form of communication, it will cause major changes to our view of reality. “When most of the content we consume is no longer something that we watch, but truly something that we are in—is it just virtual? If it can cause real physical damage, is it only a simulation?” Bielskyte asked the audience. “[Mixed, augmented and virtual realities] are in some way as real and as impactful as real experiences might be.”

Bielskyte also spent part of her talk delving into the storytelling aspect of immersive technology. “People are only at the beginning of learning how to tell stories through interaction [with the audience], and VR does not exist without interaction,” she said.

At the moment, VR simulations are set up in closed environments, such as small rooms or booths, which Bielskyte said is an example of how old media habits are still being applied to this new medium. Instead, she encourages more creative thinking in the development of immersive technology—particularly VRs that interact more thoroughly with the real world. “The digital world will soon enough be meshed with the physical in such a way that our reality will be the transparency that we choose,” she said.

This distinction between reality and virtual reality, however, is more significant in the Western world, Bielskyte said. During her extensive travels, she has learned that places like Central and South America have different perceptions of what is real. “With my Colombian friends, we can shift the conversation about physical experience to dreams, to art, to shamanistic and psychedelic experiences in a blink of an eye—all of these things in their culture are real,” she said.

These varying perspectives of virtual reality are why Bielskyte said she enjoys teaching workshops on immersive creativity around the world. “I can definitely say that the students I had in places like Rio de Janeiro and in Bogotá come up with ideas for virtual reality that are not only equally good as the projects that are being pitched to me in Los Angeles or Silicon Valley—they are way more inspiring and way more interesting,” she said.

For Bielskyte, creativity is the key to developing immersive technology that will truly help humanity. “Humans are creative animals, and it’s only through creativity that we might find ourselves in a habitable future,” she said.  

Recently, some of Bielskyte’s work has extended to creating participatory story worlds for Hollywood, including the design and prototyping of the world in Ghost in the Shell. She is also working on a project called Future Nation, which aims to bring fictional worlds from Hollywood into the real world. “It’s about imagining these fictional futures for actual places, for real countries, cities and geographic regions—to help the policy-makers imagine how they could build a better future,” she said.

Student Life

One step closer to The Matrix

Little robot Jibo isn’t just a pretty face; it walks, talks, and thinks

Ambient objects have been slowly infiltrating our homes. Simple innovations like The Clapper, an electric switch that responds to sound, may have started the revolution of leaving our households in the capable robotic hands of artificial intelligence. Technology has evolved since The Clapper, though. The first household robot, bunny-shaped Nabaztag, came around in 2006, and could give you the weather and time, aggregate your RSS feeds and even retrieve your email. The little rabbit never did gain a ton of popularity, however, and came upon hard times when the central servers ran into crippling slowdowns in December of its launch year.

Now that we control speakers, lights and door locks with tiny computers and smartphones, the world of ambient objects seems to be going silent… Or is it?

Meet Jibo, a robot that’s set to do a bit of everything. Unlike other artificial intelligence of its kind, it is compatible with apps that can improve its functionality. All things said, Jibo’s a little creepy. With the ability to take pictures, track faces and be controlled remotely from a smartphone or tablet, Jibo’s features aren’t just useful, they’re downright terrifying.

We all remember the media scare that came with remote hacking of laptop webcams. Walk around campus and look at how many students have a Post-it note over their webcams, or have otherwise blotted them out. Being able to take photos without pressing a button or setting a timer is great, but how much privacy and safety are we willing to sacrifice for convenience? The truth is that, despite the scare tactics, few people become the intentional targets of hackers. The worry here comes more from the software used to improve Jibo’s functionality, and the nefarious purposes regular people could put it to.

Face-tracking and movement-tracking aren’t new in automated devices. Microsoft’s Kinect is another example of the impressive technology little Jibo showcases, and its only downside is the lackluster selection of games it supports. Jibo could easily become a household name, then, even with a hefty price of $599 US.

But fear-mongering aside, the potential of these ambient objects is limitless. Imagine having Jibo act as a security camera for your apartment, reporting to your smartphone any unauthorized entry and catching it on video. Sure, there are already ways to set this up with basic webcams, but ease of use would increase the adoption rate of these security measures. Like all things tech-related, keep your credentials safe and your password lengthy and complex, and you’ll avoid trouble.

We’ve come a long way with tech in the past quarter-century. That being said, I’d love for Jibo’s voice to sound like HAL from 2001: A Space Odyssey. Creepy? Sure! But think of the geek-potential!
