Truth is no algorithmic matter

Technology is no better than the next guy when it comes to solving age-old human dilemmas

Meredith Broussard sits calmly at her desk. Behind her on a bookshelf is a copy of her latest book, Artificial Unintelligence, the topic of her Zoom talk.

“The people who decided to use an algorithm to decide grades were guilty of ‘technochauvinism,’” she says in a cool, collected tone that belies the gravity of her research. She is referring to the infamous decision to assign artificial scores for a decisive IB exam using an algorithm that looked at students’ pre-pandemic performance as well as their school’s ranking over previous years.

Technochauvinism is the presumption that technology-based solutions are superior to human or social ones. It is a central concept to keep in mind when thinking about algorithms and their biases, which, although not always self-evident, sometimes have very tangible consequences.

And these consequences may be more serious than missing out on an A on a final exam. With Broussard’s words still ringing in my ears, I stumbled upon an article exposing bias in algorithms used in American hospitals to prioritize access to chronic kidney disease care and kidney transplants. A study had found that the algorithm discriminated against Black patients, notably by interpreting a person’s race as a physiological category instead of a social one, a design decision vehemently disputed by numerous medical studies.

The use of decision-making algorithms has become something of a norm; they can be found everywhere, from the military to newsrooms to, most visibly, social media. They have found a purpose in making predictions, determining what is true, or at least likely enough, and prescribing actions accordingly. But in doing so, algorithms tacitly tackle some of our greatest dilemmas around truth, and they do so under the cover of a supposedly objective machine. As the kidney care algorithm clearly demonstrates, their interpretations are not an exact science.

Nonetheless, there is a tendency among humans, especially in the tech sector, to assume technology’s capacities are superior to those of the human brain. And in many ways, machines do outperform Homo sapiens. Decision-making algorithms can be extraordinary tools that help us accomplish tasks faster and at a greater scale. In newsrooms, for instance, they produce financial and earnings reports more efficiently and accurately than people do. This is also one of the promises of GPT-3, the latest language-generating model, capable of producing human-like if repetitive text, which could significantly lighten journalists’ workload and spare them tedious tasks.

What an algorithm should not be expected to do, however, is resolve complex philosophical and ethical dilemmas that humans themselves struggle to define, such as the matter of truth.

The case of the kidney care algorithm clearly illustrates how the ‘truth’ about who is a priority can carry a distortion embedded in the algorithm’s architecture. It also shows that what we hold to be true is open to change: it is subject to debate and to new information that can readjust and refine its meaning, from a version that is biased and scientifically inaccurate to a ‘truer’ one that more faithfully reflects social realities.

The problem is perhaps not so much that the technology is imperfect, but rather that it is conceived of and presented as something finished, which in turn leads us to be less vigilant about its blind spots and shortcomings. The risk is that the algorithmically prepared ‘truth’ is consumed as an absolute and unbiased one.

Scholars Bill Kovach and Tom Rosenstiel help us think of truth as a “sorting-out process” that results from the interactions between all stakeholders. The outcome does not represent an absolute truth, which, however compelling and elegant it may sound, may never be attainable for humans or machines. Rather, the sorting-out process aims to paint a less incorrect picture.

Truth is the product of an ongoing conversation, and this conversation should not take place solely within tech companies’ meeting rooms. It requires questioning and debate, which cannot happen if one-sided interpretations are embedded in algorithms, concealed and tucked away from the public sphere.

One simple way to make algorithms work for the benefit of human beings is to ensure more transparency about their design. In 2017, a Pew Research Center report on the matter had already called for increased algorithmic literacy, transparency, and oversight. Last December, a British government report reiterated that proposition.

In the case of kidney care, as with the IB test scores, the algorithms have been actively contested, and their use has been revoked or appropriately adjusted. They have sparked a conversation about fairness and social justice that brings us closer to a better, more accurate version of the truth.

Graphic by @the.beta.lab
