Translating identity politics doublethink into software is a programming impossibility.
by Bill Frezza
This month’s issue of MIT Technology Review, my alma mater’s flagship magazine of technology fashion, is entirely devoted to Artificial Intelligence (AI), making the rounds for at least the third time in my career as both panacea and bogeyman. Sprinkled among the long-form articles are colorful little one-page warnings with titles like “The Dangers of Tech-Bro AI” and “How to Root Out Hidden Biases in AI.” In addition to the timeless fear of losing our jobs to machines, these pieces argue that right-thinking people must be on the lookout for algorithms that generate unfairness, demanding instead that our AI behave ethically.
Grab the popcorn, this should be fun to watch.
Ethics Are Not, and Never Have Been, Absolute
History shows that people can be made to believe that all sorts of things are ethical, even as they recoil in horror at practices other people consider ethical. Our tribal nature renders us vulnerable to the will of the leader, or the mob, doing things in groups that we would never consider doing individually. We also have a proven track record of embracing logical contradictions, using post hoc rationalization to justify decisions as it suits us.

Nowhere is this more evident than in the contemporary identity politics movement. Here concepts like privilege and intersectionality collide with murky definitions of race and gender to create a moral morass so thick that only the brave or foolhardy dare wade in. Not that there is anything new about this. Philosophers, clerics, ethicists, legislators, jurors, and everyday people have spent eons arguing about right and wrong. A rich body of literature documents society’s ever-changing ethical consensus, or lack thereof.
So the next time you hear an expert demand that we develop ethical AI, ask who will be the arbiter of correct and incorrect ethics. And once they solve the ancient problem of who watches the watchmen (Quis custodiet ipsos custodes?), exactly how do they plan to translate their demands for “fairness” into code? Sure, software is capable of dealing with uncertainty, incomplete knowledge, and complex conditional circumstances. It can even use fuzzy logic to solve certain classes of problems. But be careful what you ask for when you feed murky definitions into a computer while expecting it to embrace blatant contradictions.
Let me give an example of a murky definition. Define race, ethnicity, and, these days, gender in a manner that a computer can use as the basis for making ethical decisions. How many races are there? How do we classify mixed-race people? What are the unambiguous determinants of ethnicity? Which are the privileged ones and which are the underprivileged ones? And while I used to believe there were only two genders and that these were biologically determined, I am now assured that I am wrong.
Most people skate by with Justice Potter Stewart’s “I know it when I see it” answers to vexing questions like these. That may be fine for humans, with our wetware brains, imprecise use of language, and practiced ability to duck hard problems. It is not so fine for software running on digital machines that can do only what they are told. In this particular example, solving the murky definition problem by declaring that computers accept whatever boxes people check on forms is not only a total cop-out but an open invitation for unethical people to game the system in search of unfair advantage, as some infamous cases have revealed.

Then there is the problem of embracing contradictions; that is, simultaneously believing that something can be A and not-A at the same time and in the same respect. Admit it: we do it all the time. It makes us human. Even doctrinaire Aristotelians like Ayn Rand fall into this trap. The dynamic tension generated by the contradictions swirling in our heads provides rich fodder for religion, humor, art, drama, and macroeconomics.
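To see why the check-the-box solution is a cop-out, consider a toy sketch in Python. Everything here is invented for illustration: the category names, the preference rule, and the function itself are hypothetical, standing in for any program told to treat self-reported form data as ground truth.

```python
# Toy sketch: a decision rule that trusts self-reported checkboxes.
# The categories and the policy below are invented for illustration only.

def qualifies_for_preference(checked_boxes: set) -> bool:
    """Hypothetical policy: checking any listed box confers preferred status."""
    preferred_categories = {"category_a", "category_b"}  # invented labels
    return bool(checked_boxes & preferred_categories)

# Two applicants, identical in every respect except what they self-report:
honest = qualifies_for_preference(set())            # checked nothing -> False
gamer = qualifies_for_preference({"category_a"})    # checked a box   -> True

# The program has no verification step: whatever the form says becomes truth.
print(honest, gamer)
```

The point of the sketch is what is missing: there is no step at which the machine can test whether a checked box corresponds to reality, so the definition problem is not solved, merely delegated to whoever fills in the form.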
Imagining an “ethical” AI trying to please its human masters operating under these conditions brings up images of Captain Kirk outsmarting evil computers by forcing them to perseverate on some glaring contradiction at the root of their programming. Those computers ended up smoking until they blew themselves up. The man who tried to outsmart his fellow citizens by rubbing their noses in their contradictions fared worse: they made him drink hemlock.
Do I have an answer to how we can make AI unbiased? Of course not. And neither do the self-appointed experts demanding that we do. Long-haul truck drivers may well be at risk of losing their jobs to AI, but tendentious pundits and class-action lawyers will never be short of work.
Bill Frezza is a fellow at the Competitive Enterprise Institute.