
Yemen’s uncertain path to peace

In the short term, Biden’s diplomatic approach in Yemen may not be enough to broker peace

Earlier this month, the Biden administration took considerable steps to reverse a U.S. policy on the war in Yemen that was instigated under Obama and continued throughout Trump’s presidency.

It notably suspended its support for the Saudi-led coalition, revoked the terrorist designation of the Houthi movement, and appointed veteran diplomat Timothy Lenderking as special envoy to the conflict.

What began in 2014, when the Iran-backed Houthi movement overthrew President Hadi’s unpopular government, has since turned into the world’s worst humanitarian crisis. Since 2015, neighbouring Saudi Arabia has spearheaded a coalition, mobilizing a substantial share of its GDP, to back the Hadi government and wage war against the Houthis and their allies – so far, unsuccessfully.

According to the United Nations, 233,000 people have been killed in the war and more than 20 million are left in dire need of humanitarian aid. In a briefing to the Security Council last week, UN humanitarian chief Mark Lowcock warned the country is “speeding towards the worst famine the world has seen in decades,” adding that “something like 400,000 children under the age of five are severely malnourished across the country.”

“This war has to end,” Biden said earlier this month of the conflict, which has been at a stalemate since the latest attempts at peace talks failed in 2018.

For the population, peace is long overdue. As reported by Newlines Magazine, many have welcomed efforts to reignite the peace process, but remain pessimistic about the prospect of a political solution in the near future.

A U.S. shift towards a diplomatic approach, or even a hypothetical withdrawal of regional actors like Saudi Arabia, would not necessarily end the civil war, warns Elana DeLozier of the Washington Institute. In an interview on the Conversation Six podcast, she stressed that the conflict was, and remains, one driven primarily by local actors – the Houthis and the Yemeni government.

“If we had an arrangement for peace talks tomorrow, neither of them have the political will right now to go to the table,” she said. “The question for the United States is how can it get the Hadi government, the Houthis, or how can it help the U.N. get those two parties to come to peace talks.”

In recent weeks, the Houthi movement has made advances on the government’s last stronghold of Marib – the fall of which, experts say, would cause further displacement and deepen the humanitarian crisis.

Last September, a UN group of experts designated Canada as one of the countries responsible for “perpetuating the conflict” by selling arms, including sniper rifles and light armoured vehicles, to Saudi Arabia. The ongoing arms deal currently amounts to $14 billion.

The New Democratic Party reiterated this criticism earlier this month in the House of Commons. Foreign Affairs Minister Marc Garneau assured the House that “human rights considerations are now at the centre of our export regime,” adding that he “will deny any permit application where there is a risk of human rights violations.”

In addition to dwindling U.S. support, the declassification last Friday of a report that found Saudi Crown Prince Mohammed bin Salman responsible for approving the murder of Washington Post journalist Jamal Khashoggi puts Riyadh in an increasingly defensive position.

But while it may reduce its military spending in Yemen, Saudi Arabia is expected to extend its presence through local undercover fighters, according to Ahmed Nagi, a fellow at the Carnegie Middle East Center.

Meanwhile, for the Houthis, the “priority today is to make more gains, not to engage in power-sharing deals,” said Nagi, indicating that under such conditions, a viable path to peace remains precarious at best.


Graphic by James Fay

Truth is no algorithmic matter

Technology is no better than the next guy when it comes to solving age-old human dilemmas

Meredith Broussard sits calmly at her desk. Behind her on a bookshelf is a copy of her latest book, Artificial Unintelligence, the topic of her recent Zoom talk.

“The people who decided to use an algorithm to decide grades were guilty of ‘technochauvinism,’” she says, in a cool and collected tone that belies the gravity of her research. She is referring to the infamous decision to assign artificial scores for decisive IB exams using an algorithm that looked at students’ pre-pandemic performance as well as their schools’ rankings over previous years.
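
To make the mechanism concrete, here is a deliberately simplified sketch of how such a grade predictor could work. The weights, function names, and inputs are illustrative assumptions, not the IB’s actual model.

```python
# Hypothetical sketch of a grade-prediction model in the spirit of the one used
# for the 2020 IB exams. The 60/40 weighting below is an invented assumption.

def predict_grade(student_coursework: float, school_history: list[float],
                  coursework_weight: float = 0.6) -> int:
    """Blend a student's own pre-pandemic coursework score (1-7 scale) with
    the average grade their school earned in previous years."""
    school_average = sum(school_history) / len(school_history)
    blended = (coursework_weight * student_coursework
               + (1 - coursework_weight) * school_average)
    return round(blended)

# The same strong student lands a different grade depending on the school's record:
print(predict_grade(6.4, school_history=[4.0, 4.2, 3.8]))  # -> 5
print(predict_grade(6.4, school_history=[6.8, 7.0, 6.9]))  # -> 7
```

However the real weights were set, the structural point stands: once a school’s past results enter the formula, a student’s grade is no longer entirely their own.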

Technochauvinism is the presumption that technology-based solutions are superior to human or social ones. It is a central concept to keep in mind when thinking about algorithms and their biases, which, although not always self-evident, sometimes have very tangible consequences.

And these consequences may be more serious than missing an A on a final exam. With Broussard’s words still ringing in my ears, I stumbled upon an article exposing bias in algorithms used in American hospitals to prioritize access to chronic kidney disease care and kidney transplants. A study found that one such algorithm discriminated against Black patients: it interpreted a person’s race as a physiological category rather than a social one, a design decision vehemently disputed by numerous medical studies.
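
For a sense of how that design decision plays out, consider the race coefficient in the 2009 CKD-EPI equation for estimated kidney function, which has since been revised to drop the race term. The constants below follow the published equation; the patient values are invented for illustration.

```python
# Simplified sketch of the race adjustment in older kidney-function estimates
# (2009 CKD-EPI equation, since revised). Patient values are illustrative.

def egfr_ckd_epi_2009(creatinine: float, age: int, female: bool, black: bool) -> float:
    """Estimated glomerular filtration rate in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient: inflates the estimate, so kidneys look healthier
    return egfr

# Identical labs, different race flag. Because care is gated by thresholds
# (e.g. eGFR below 60 for chronic kidney disease, below 20 for transplant
# waitlisting), the inflated estimate can delay referral for Black patients:
print(egfr_ckd_epi_2009(creatinine=1.4, age=55, female=False, black=False))  # ~56
print(egfr_ckd_epi_2009(creatinine=1.4, age=55, female=False, black=True))   # ~65
```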

The use of decision-making algorithms has become somewhat of a norm; they can be found everywhere, from the military to newsrooms to, most evidently, social media. They serve to make predictions, to determine what is true, or at least likely enough, and to prescribe actions accordingly. But in doing so, algorithms tacitly tackle some of our greatest dilemmas around truth, and they do so under the cover of a supposedly objective machine. As the kidney care algorithm clearly demonstrates, their interpretations are not an exact science.

Nonetheless, there is a tendency among humans, especially in the tech sector, to assume technology’s capacities are superior to those of human brains. And in many ways, machines do outperform Homo sapiens. Decision-making algorithms can be extraordinary tools that help us accomplish tasks faster and at a greater scale. In newsrooms, for instance, they are more efficient and accurate at producing financial and earnings reports. This is one of the promises of GPT-3, the latest language-generating model, capable of producing human-like if repetitive text. Such tools could significantly lighten journalists’ workloads and spare them tedious tasks.
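
Automated earnings coverage, it is worth noting, is typically template-driven rather than free-form generation. The sketch below shows the basic idea; the company, figures, and field names are invented for illustration.

```python
# Minimal sketch of template-driven "robot journalism" for earnings coverage:
# structured data in, formulaic copy out. All values here are made up.

def earnings_brief(company: str, quarter: str, revenue: float,
                   prior_revenue: float, eps: float) -> str:
    """Turn structured earnings data into a one-sentence news brief."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} reported {quarter} revenue of ${revenue / 1e9:.2f} billion, "
            f"which {direction} {abs(change):.1f} per cent from a year earlier, "
            f"for earnings of ${eps:.2f} per share.")

print(earnings_brief("Acme Corp", "Q4 2020", revenue=3.21e9,
                     prior_revenue=2.95e9, eps=1.07))
```

The accuracy comes precisely from the narrowness of the task: the machine never decides what is newsworthy, only how to phrase numbers it is handed.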

What an algorithm should not be asked to do, however, is resolve complex philosophical and ethical dilemmas that humans themselves struggle to define, such as the matter of truth.

The case of the kidney care algorithm illustrates how the ‘truth’ about who should be prioritized can carry a distortion embedded in the algorithm’s architecture. It also shows that what we hold to be true is subject to change: debate and new information can readjust and refine its meaning, from a biased and scientifically inaccurate form to a ‘truer’ one that reflects social realities more faithfully.

The problem is perhaps not so much that the technology is imperfect, but that it is conceived of and presented as something definitive, which in turn leads us to be less vigilant about its blind spots and shortcomings. The risk is that an algorithmically prepared ‘truth’ is consumed as an absolute and unbiased one.

Scholars Bill Kovach and Tom Rosenstiel suggest thinking of truth as a “sorting-out process” that results from the interaction between all stakeholders. The result does not represent an absolute truth, which, however compelling and elegant it sounds, may never be attainable, for humans or machines. Rather, the sorting-out process aims to paint a less incorrect picture.

Truth is the product of an ongoing conversation, and that conversation should not take place solely within tech companies’ meeting rooms. It requires questioning and debate, which cannot happen if one-sided interpretations are embedded in algorithms, concealed and tucked away from public view.

One simple way to ensure algorithms work for the benefit of human beings is to demand more transparency about their design. In 2017, a Pew Research Center report on the matter already called for increased algorithmic literacy, transparency and oversight. Last December, a British government report reiterated that proposition.

In the case of kidney care, as with the IB test scores, the algorithms were actively contested and their use revoked or appropriately adjusted. They sparked a conversation about fairness and social justice that brings us closer to a better, more accurate version of the truth.


Graphic by @the.beta.lab
