The Problem with AI Ethics

Two friends are walking down a busy street in Toronto and come across some young people singing and dancing on the sidewalk. Their experience is shared, but how they choose to aesthetically constitute the event differs. One is thrilled to see this public expression and doesn’t begrudge the detour that the performance demands of pedestrians. The other is angry at having to step through muddy grass and is intimidated by the aggressive dancing and the content of the music.

This schism, to some degree, is a question of ethics, a question of good and bad behaviour, of our moral obligations to others. Ethical evaluations, however, require an abstracted distance from the actual event. Whose determination holds? In this case, the problem is not one of ethical definitions. Both agree that scaring strangers and being a nuisance are “bad”. Somehow, however, they have taken different paths in assessing what this shared experience “means” and how that definition ought to be applied.

Given that we inhabit different bodies, were nurtured by different landscapes, and hold different stories, it shouldn’t be surprising that we aesthetically constitute the events around us in different ways. Ethical determinations are downwind from aesthetic ones. We must first assemble innumerable stimuli into a coherent whole before we might decide if an act is “ethical”. The challenge with ethics, and AI ethics in particular, is that the process of assembling meaning, the very thing that sets us apart from artificial intelligence, is ignored. And by ignoring the process of constituting the world, determinations about what things ‘mean’ become extensions of existing patterns of power, both social and cultural.

If someone decides to call the police on the dancing young people, determinations of the appropriateness of their actions are shifted to institutional actors, and our own capacity to assemble the world is forfeited.

I’ve been involved in conversations around artificial intelligence and culture since around 2018. I was pulled into a project where we offered workshops in every province and territory, and gradually I became fascinated by the implications of the changes underway.

I find, however, that I have little patience or interest in the exploding field of AI ethics.

I’ve nibbled at the edges of the topic, with Please Don’t Understand This being a recent indirect attempt to draw attention to the monology of Western liberalism that shapes AI ethics.

Western ethical frameworks often assume a sort of universalism, wherein the principles developed within these frameworks (such as fairness, accountability, and transparency) are deemed applicable and relevant globally, ignoring cultural and contextual nuances. Moreover, this universalism extends to how events are understood. Only one of the two friends encountering the street performance can be right, and this determination will too often be made by those outside the encounter altogether.

There is also a tacit assumption that Western institutions are appropriate and necessary vehicles for the determinations being made. The AI ethics space is dominated by lawyers, professors, and policy advisors. Expertise in navigating and bending legal and policy frameworks presupposes at least a passive acceptance that the law and government are legitimate and necessary means to the ends of a better world.

I tend to withdraw from debates around AI ethics with claims that it’s “not really my area”, or that I prefer to “leave it to smarter folks to sort through”. However, a recent high-profile article at Wired has me reconsidering this position.

“The World Isn’t Ready for the Next Decade of AI” is the transcript of an episode of the podcast “Have a Nice Future”, in which the hosts speak with Mustafa Suleyman, cofounder of DeepMind and Inflection AI.

One of the main thrusts of the conversation is the concern that the democratization of artificial intelligence will create conditions capable of undermining the ability of nation states to govern. Suleyman offers that “the story of the next decade is one of proliferation of power, which is what I think is gonna cause a genuine threat to the nation-state, unlike any we've seen since the nation-state was born.”

My first response to this prediction was excitement. The balance of the interview, however, revealed that my desire for the collapse of nation states was not shared by either the interviewers or the interviewee. The possibility that such a collapse might be a positive thing was never considered. What it “meant” was predetermined and not worthy of discussion.

Although I do not describe myself as a Christian, I am sympathetic to Ivan Illich, who, in his final years, described the hegemony of Western institutions as a profound “corruption of Christianity”.

The argument is a fairly straightforward one. “Grace”, to Illich, contained the idea of spontaneous, voluntary, unmediated gestures of love toward others, particularly those to whom we hold no familial or communal obligations. The institutionalization of care through the nation state, therefore, abbreviates our capacity for grace. We turn over the work of caring for others to human machines that discourage compassion and human connection. We turn over the work of determining what things mean, and what action should be taken, to others.

Western institutions are a highly successful technology, developed in part by the Roman church to ensure that the outer and inner lives of worshippers were shaped to its needs. The pre-institutional church understood confession as a communal act. Worshippers would ‘breathe’ forgiveness into the mouths of fellow congregants. “Conspiratio”, or “shared breath”, became a source of discomfort for those in power, and so confession, in private and on a regular schedule, was made mandatory following the Fourth Lateran Council in 1215. Local priests and wandering orders became agents of control and, over time, worshippers took on the role of self-accusation with some vigour.

As other ideas ascended in Europe and elsewhere (the nation state, capitalism, consumerism, public safety), institutions were turned over to other purposes. The means worked wonders. The ends evolved.

The Wired conversation assumes that the collapse of nation states is necessarily bad. To be fair, history offers ample examples where the collapse of nation states is disastrous for those involved. However, national governments have always proven jealous of their own power, actively eradicating the autonomous spaces where the seeds of renewal might grow and be refined. By destroying these spaces, the modern state undermines its own resilience, so when states fail there is little left but a vacuum and, occasionally, marginalized communities that have developed their own protocols and approaches over time for dealing with volatility and scarcity.

Institutions are vehicles for extending control. Sometimes the “ends” of institutions are those to which I am sympathetic. Sometimes they are not. My issue is less with the goals of these bodies than with the degree of power they insist upon and the role they play in shaping what things mean, and therefore how right and wrong are determined.

AI has the potential to increase the centralization of power. However, as all parties to the Wired conversation attest, it might also have the ability to profoundly destabilize central authorities.

We need epochal thinking to deal with epochal technological changes. Taking sides among elites looking to further entrench their interests leads to short-term wins and long-term pain. I am not interested in being a part of any movement that insists on making determinations about what the events of my life mean in the service of affixing a label of “good” or “bad” to them.

“Ethics” that depend upon Western institutions and relentless self-oppression are certainly necessary, but the insistence that the whole of the debate is contained within the ‘technologists’ versus ‘ethicists’ framing is where problems emerge. Coke and Pepsi did a marvelous job of narrowing a generation’s choices down to two. I feel similarly trapped in a conversation where I must side with the interests of one elite group or another.

Rather than figuring out who we want to drive the bus, we might consider the possibility that we won’t have a bus to get around in for much longer. We can then find other ways to get from point A to point B that are appropriate for the world that’s coming. Our work, therefore, serves to imagine uses for these tools that might help us live in this unknown culture, and live in ways that connect and celebrate.

Focusing on ethical failures is attractive as it makes the lines between ‘sides’ clear and offers the rush of being on the side of the righteous. In doing so, however, we forfeit our responsibility to actively constitute the world and resign ourselves to what happens next. AI ethicists seem committed to skipping the most important step, the quest for meaning. Even more disappointingly, artists seem only too willing to help them in this work, offering didactic works that are rousing examples of rhetoric but unsatisfying as works of art.

I am generally optimistic about human nature, and perhaps this is why I feel comfortable with democratizing technologies. I am, however, pessimistic about the ability of institutions to move us forward. They are capable of impressive things, but only by abstracting away the aspects of experience that make scaling harder, and these are often the very qualities of life we ought to be preserving. The responsibility to make meaning of my own life is something I would prefer to hold as my own.
