Categories
Opinions

Are we all just living the same life in different fonts?

Don’t be fooled—social media capitalizes on relatability and social isolation.

Do you ever see a meme or video on social media that is oddly specific and relatable to you? I always find it so unsettling how well my algorithm knows me, from my taste in music and books, right down to personal experiences that I thought were unique.

I’ve stumbled upon videos lately that really made me sit back, set my phone aside and stare at the wall. The occasional meme about our secret little quirks, or even our very specific and seemingly unique past trauma, could be written off as coincidence; lately, however, every single piece of content I come across is eerily accurate.

Yes, I watched The Social Dilemma on Netflix when it came out—it still haunts me. I also read Digital Minimalism by Cal Newport. That’s how I learned that the algorithm compiles data from likes, comments, shares, and other interactions, and that some platforms even record the amount of time you spend on a specific post and where your gaze lingers on the screen. I’m always painfully aware that social media preys on my attention and time, which the algorithm then uses to throw me into a vicious cycle of doom scrolling.
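
The mechanism described above can be pictured as a simple scoring function. This is a toy sketch only: the signal names, weights and posts below are invented for illustration and are not any platform’s real formula.

```python
# Toy sketch of engagement-based ranking. All signals, weights and
# posts are hypothetical illustrations, not a real platform's formula.

def engagement_score(post):
    """Combine interaction signals into a single ranking score."""
    weights = {
        "likes": 1.0,
        "comments": 3.0,       # active interactions count for more
        "shares": 5.0,
        "dwell_seconds": 0.5,  # time spent lingering on the post
    }
    return sum(w * post.get(signal, 0) for signal, w in weights.items())

posts = [
    {"id": "relatable_meme", "likes": 4, "comments": 1, "dwell_seconds": 12},
    {"id": "news_story", "likes": 2, "shares": 1, "dwell_seconds": 3},
]

# The feed surfaces the highest-scoring post first -- the more you
# linger, the more of the same you are shown.
ranked = sorted(posts, key=engagement_score, reverse=True)
```

Even in this crude sketch, a few extra seconds of dwell time is enough to push the relatable meme above the news story.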

However, I also realize that users capitalize on relatability. We all (subconsciously or not) know the golden rule to success on social media: if people don’t relate to you, they simply won’t care. My own experience in business communications taught me how hard content creators work to get on the “For You” page. They have to work with the system, but they also feed it more tools to reel us all in.

So what if my algorithm notices that I am a Swiftie eagerly anticipating the announcement of Reputation (Taylor’s Version), that I have a “golden retriever” boyfriend, that I secretly dream of owning a book and plant shop joined to a cat café (apparently it’s a “feminine urge”), that my Roman Empire is being a woman in a man’s world, and that I am afraid of the dark? Is it really so bad that my social media feed is so meticulously tailored?

The answer to that question will depend on how you answer this one: Is social media a means for entertainment, or for gaining information? Part of me wants to say it’s just entertainment and it doesn’t really matter. But another part of me is screaming that my algorithm is putting me into niche boxes and shutting me off from the bigger picture of the world. I find myself consuming mindless content instead of learning about the war and humanitarian crisis in the Gaza Strip, for example. I have to go out of my way to learn about that.

It always gets me when I see someone comment: “Are we all just living the same life in different fonts?” Your social media feed is giving you that impression, showing you oddly specific videos you’d send your best friend in a heartbeat. Your algorithm knows that if you are entertained and feel seen in a world where human connection is blurred by screens, it can hold your attention just a bit longer.

Am I really a die-hard Swiftie, or am I just being overexposed to that content? Is that video really “so me!” or does it just touch on something I can somewhat relate to? Are these memes truly relatable, or am I just yearning for a vague sense of community and belonging in a socially isolated generation?

Did you find this article relatable? If so, I’ve succeeded in the golden rule. Welcome to the Social Media Existential Crisis Club, where we question this warped sense of belonging and combat the negative effects of the algorithm on important information sharing.

Categories
Music

Looking down the rabbit hole of streaming services

Now more than ever, we have unlimited access to the art of music

Not too long ago, finding new music meant a walk to the record store to ask the employees what they recommended. These audio aficionados were real human beings with ears for music and the knowledge to point out what constitutes art worth listening to. In that same spirit, new music had long presented itself to consumers in the shape of the live show, something we’re generally bereft of in a pandemic world. Opening acts allowed patrons to discover a performer often unknown to them, giving listeners the chance to come to their own conclusions.

With the introduction of countless streaming services, infinite artists and genres are accessible at any given time. These have opened the doors to vast historical catalogues of music from Cab Calloway to Brian Eno, or from swing to shoegaze. There is no doubt that this is a fortunate time to be a lover of music, but at the heart of all these streaming services is something to remember: they are businesses, and businesses love to collect data.

Take Spotify’s privacy policy, for example, which outlines the company’s use of user data, including search queries, streaming history, user-created playlists, browsing history, account settings, and much more. Most of this is used “to provide the personalized Spotify Service,” and “to evaluate and develop new features, technologies, and improvements to the Spotify Service.”

In this sense, the algorithm is always ahead of its listeners, basing recommendations on their digital footprints. Although streaming services offer discovery playlists, these are still generated by the service itself. As a result, it becomes easy to fall into a loop of listening to similar artists from similar periods over and over again. You don’t need to know who Cocteau Twins or Car Seat Headrest are to have good music taste, but you can do better than the cheap recommendations produced by your own habits.

All of this raises the question: what’s the answer to big tech mirroring our tastes back to us? Not everyone has parents with a basement full of vinyl records and a turntable waiting to be discovered. In this regard, it would suit us to find and define music for ourselves. With sites like Rate Your Music or Chosic, discovering new music without handing over any personal data is achievable at a time when live shows are sparse. With music so easily accessible these days, it becomes easier and easier to forget that music is an art form — and the act of discovering it should be an art form as well.


Graphic by Madeline Schmidt

Truth is no algorithmic matter

Technology is no better than the next guy when it comes to solving age-old human dilemmas

Meredith Broussard sits calmly at her desk. Behind her on a bookshelf is a copy of her latest book, Artificial Unintelligence, the topic of her latest Zoom talk.

“The people who decided to use an algorithm to decide grades were guilty of ‘technochauvinism,’” she says with a cool and collected tone that belies the gravity of her research. She’s referring to the infamous decision to assign artificial scores for a decisive IB exam using an algorithm that looked at students’ pre-pandemic performance as well as their schools’ rankings over previous years.

Technochauvinism is defined by the presumption that technology-based solutions are superior to human or social ones. This is a central concept to keep in mind when thinking about algorithms and their biases, which — although not always self-evident — sometimes have very tangible consequences.

And these consequences may be more serious than not scoring an A on a final test. With Broussard’s words still ringing in my ears, I stumbled upon an article exposing bias in algorithms used in American hospitals to prioritize access to chronic kidney disease care and kidney transplants. A study had found that the algorithm discriminated against Black patients. It notably interpreted a person’s race as a physiological category instead of a social one — a design decision vehemently disputed by numerous medical studies.

The use of decision-making algorithms has become somewhat of a norm — they can be found anywhere, from the military to newsrooms to, most evidently, social media. They have found a purpose in making predictions, determining what is true, or at least likely enough, and prescribing consequent actions. But in doing so, algorithms tacitly tackle some of our greatest dilemmas around truth, and they do so under the cover of a supposedly objective machine. As the kidney care algorithm clearly demonstrates, their interpretations are not an exact science.

Nonetheless, there is a tendency among humans, especially in the tech sector, to assume technology’s capacities are superior to those of human brains. And in many ways, they do outperform Homo sapiens. Decision-making algorithms can be extraordinary tools to help us accomplish tasks faster and at a greater scope. In newsrooms, for instance, they are more efficient and accurate in producing financial and earnings reports. This is one of the promises of GPT-3, the latest language-generating bot, capable of producing human-like but repetitive text. This could significantly alleviate journalists’ workload and spare them from boring tasks.

What an algorithm should not do, however, is universally solve complex philosophical and ethical dilemmas, which humans themselves struggle to define, such as the matter of truth.

The case of the kidney care algorithm clearly illustrates how the ‘truth’ — about who is a priority — presents a clear distortion, embedded in the algorithm’s architecture. It also shows how what we hold to be true is open to change. It is subject to debate and additional information that might readjust and refine its meaning, from one that is biased and scientifically inaccurate to a ‘truer’ form that reflects social realities more faithfully.

The problem is perhaps not so much that the technology is imperfect, but rather that it is thought of and presented as something definitive, which in turn leads us to be less vigilant of its blind spots and shortcomings. The risk is that the algorithmically prepared ‘truth’ is consumed as an absolute and unbiased one.

Scholars Bill Kovach and Tom Rosenstiel help us to think of truth as a “sorting-out process,” which results from the interactions between all stakeholders. The result does not represent an absolute truth — which, although it sounds compelling and elegant, may not ever be possible, for humans or machines. Rather, the sorting-out process aims to paint a less incorrect picture.

Truth is the product of an ongoing conversation, and this conversation should not take place solely within tech companies’ meeting rooms. It requires questioning and debate, which cannot happen if one-sided interpretations are embedded in algorithms, concealed and tucked away from the public space.

One simple way to ensure algorithms work for the benefit of human beings is to ensure more transparency about their design. In 2017, a Pew Research Center report on the matter had already called for increased algorithmic literacy, transparency and oversight. Last December, a British governmental report reiterated that proposition.

In the case of kidney care, as with the IB test scores, the algorithms have been actively contested and their uses revoked or appropriately adjusted. They have sparked a conversation about fairness and social justice that brings us closer to a better, more accurate version of truth.


Graphic by @the.beta.lab

Categories
Opinions

Algorithm editors and what they mean

What would journalism be without editors? Well, in my opinion, it would be pretty chaotic.

Editors are the backbone of journalism — take them out of the equation and you set loose a tsunami of fake news and badly written, poorly researched stories. To sum up: total amateurism.

But what do editors actually do?

According to Amelia Pisapia, journalist and former editorial director of Novel, editors are talented problem solvers who excel at putting information in context, assessing the accuracy of data and weeding out bias.

“They view issues from multiple angles, connect the dots and uncover human stories in complex systems,” writes Pisapia.

Pisapia adds that editors work within established ethical frameworks. She says that all editors have five values in common: accuracy, independence, impartiality, humanity and accountability.

However, in recent years editors have started to quite literally lose some of their humanity. With developments in technology and artificial intelligence, more and more media and news-distributing platforms have started to use algorithms as editors instead of actual humans.

A good example is the algorithm behind Facebook’s news feed. Tobias Rose-Stockwell, a strategist, designer and journalist, wrote in his article for Quartz, “[Facebook’s algorithm] shows you stories, tracks your responses, and filters out the ones that you are least likely to respond to. It is mapping your brain, seeking patterns of engagement.”

Sounds great, doesn’t it? Having only quality news that you are interested in delivered right to your doorstep, without having to move a muscle.

Well, if it sounds too good to be true, that’s because it is. Algorithms are actually very far from the perfect editors we hope them to be. They have massive flaws and can be downright dangerous.

Don’t misunderstand me, algorithm editors have their good sides. They do surpass humans on some points — in the consistency of their conduct as editors, for example.

In his article “Can an Algorithm be an Editor?,” José Moreno, former multimedia director at Motorpress Lisboa, explains that an algorithm has the silver lining of always acting the same way.

“Human editors always act differently on the basis of a common code,” Moreno says. “In a way, there is more accuracy and reliability in a “system” that always performs a function in the same way than in a “system” that always performs differently.”

So, yes, algorithms have some upsides; Professor Pablo Boczkowski of Northwestern University even called Facebook’s algorithm “the greatest editor in the history of humanity.”

But unfortunately, despite their virtues, any positive aspects algorithms may present are heavily outweighed by their negative counterparts.

The study “The Editor vs. the Algorithm: Targeting, Data and Externalities in Online News,” conducted by professors from several universities, compared AI and human editors. The researchers discovered an alarming number of problems with algorithm editors. For example, algorithms tend to serve a less diverse mix of news to readers, creating a “bubble” effect as readers are presented with a narrower set of topics. One example from the study involved readers in German states with a high share of votes for extreme political parties: in the last election, those readers were more likely to increase their consumption of political stories when the stories were selected by algorithms.
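
The narrowing effect the study describes can be pictured with a toy selector. Everything below is a hypothetical illustration — the articles, topics and click counts are invented — but it shows how optimizing for predicted clicks alone collapses topic diversity.

```python
# Toy sketch of the "bubble" effect: a selector that maximizes
# predicted clicks alone serves a narrower topic mix. All articles
# and click numbers are invented for illustration.
from collections import Counter

pool = [
    {"topic": "politics", "predicted_clicks": 9},
    {"topic": "politics", "predicted_clicks": 8},
    {"topic": "politics", "predicted_clicks": 7},
    {"topic": "science", "predicted_clicks": 3},
    {"topic": "culture", "predicted_clicks": 2},
]

def algorithmic_pick(articles, n=3):
    """Fill the feed with the n highest predicted-click articles."""
    return sorted(articles, key=lambda a: a["predicted_clicks"],
                  reverse=True)[:n]

def topic_diversity(feed):
    """Count how many distinct topics made it into the feed."""
    return len(Counter(a["topic"] for a in feed))

feed = algorithmic_pick(pool)
# Every slot goes to the same topic: the reader's "bubble."
```

With this invented pool, all three feed slots go to politics and the diversity count drops to one — the bubble in miniature.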

Another flaw of algorithms is their lack of social awareness; every calculation they make is based on individual-level data. Algorithms don’t take into account “socially optimal reading behaviour,” according to the study.

“It doesn’t differentiate between factual information and things that merely look like facts,” said Rose-Stockwell, referring to the Facebook example above. “It doesn’t identify content that is profoundly biased, or stories that are designed to propagate fear, mistrust, or outrage.”

The worst part of all this is that algorithms have even started to change the way some human editors think, as well as the behavior of some news organizations. We have entered a traffic-at-all-costs mentality. News outlets are now driven by numbers, clicks and views, no longer by journalistic values.

Despite all their flaws, regrettably, algorithm editors are still here and due to humans’ lust for technology and artificial intelligence, they are probably going to stay and even multiply.

But why should algorithm editors be pitted against human editors? Why should it be human vs. machine?

The solution is easy: use a mix of both. The researchers from the study mentioned above concluded that “the optimal strategy for a news outlet seems to be to employ a combination of the algorithm and the human to maximize user engagement.”
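
One way to picture that combination is below. This is a sketch under invented assumptions — the field names, the override rule and the articles are all hypothetical, not the study’s actual method — but it shows the idea: the algorithm ranks by predicted engagement, while a human editor can flag must-run stories that jump the queue.

```python
# Sketch of a hybrid human+algorithm front page. Field names,
# articles and the override rule are hypothetical illustrations.

def hybrid_frontpage(articles, n=2):
    """Editor-flagged stories first, then algorithmic fillers."""
    must_run = [a for a in articles if a.get("editor_pick")]
    by_engagement = sorted(
        (a for a in articles if not a.get("editor_pick")),
        key=lambda a: a["predicted_engagement"],
        reverse=True,
    )
    return (must_run + by_engagement)[:n]

articles = [
    {"title": "Celebrity gossip", "predicted_engagement": 0.9},
    {"title": "City budget vote", "predicted_engagement": 0.2,
     "editor_pick": True},
    {"title": "Viral meme", "predicted_engagement": 0.8},
]

# The low-engagement but newsworthy story still makes the page.
front = hybrid_frontpage(articles)
```

The design choice is the point: the machine supplies reach and speed, while the human supplies the judgment about news value that engagement numbers alone cannot capture.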

In the digital age, machines will continue to take over more and more aspects of life. However, humans are more relevant than ever, because these machines aren’t always optimal. So, in the end, a symbiosis between humans and machines is actually a comforting thought. It is the promise of a better tomorrow, where machines will help humans and not supplant them.

Graphic by @sundaeghost
