Will Artificial Intelligence Make Humanity Irrelevant?

Nope. All computers only execute algorithms.


Technology leaders from Bill Gates to Elon Musk have warned us in recent years that one of the biggest threats to humanity is uncontrolled domination by artificial intelligence (AI). In 2017, Musk said at a conference, “I have exposure to the most cutting edge AI, and I think people should be really concerned about it.”

And in 2019, Bill Gates stated that while we will see mainly advantages from AI initially, “. . . a few decades after that, though, the intelligence is strong enough to be a concern.” And the transhumanist camp, led by such zealots as Ray Kurzweil, seems to think that the future takeover of the universe by AI is not only inevitable, but a good thing, because it will leave our old-fashioned mortal meat computers (otherwise known as brains) in the junkpile where they belong.

So in a way, it’s refreshing to see a book come out whose author stands up and, in effect, says “Baloney” to all that. The book is Non-Computable You: What You Do that Artificial Intelligence Never Will, and the author is Robert J. Marks II.

Marks is a practicing electrical engineer who has made fundamental contributions in the areas of signal processing and computational intelligence. After spending most of his career at the University of Washington, he moved to Baylor University in 2003, where he now directs the Walter Bradley Center for Natural and Artificial Intelligence. His book was published by the Discovery Institute, an organization that has historically promoted intelligent design.

That is neither here nor there, at least to judge by the book’s contents. Those looking for a philosophically nuanced, extended argument for the uniqueness of the human mind as compared with present or future computational realizations of what might be called intelligence had best look elsewhere. In Marks’s view, the question of whether AI will ever match or supersede the general-intelligence abilities of the human mind has a simple answer: it won’t.

He bases his claim on the fact that all computers do nothing more than execute algorithms. Simply put, algorithms are step-by-step instructions that tell a machine what to do. Any activity that can be expressed as an algorithm can in principle be performed by a computer. Just as important, any activity or function that cannot be put into the form of an algorithm cannot be done by a computer, whether that computer is built from vacuum tubes, transistors on chips, quantum “qubits,” or any conceivable future form of computing machinery.
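To make “algorithm” concrete, here is a minimal illustration of my own (it is not from the book): Euclid’s method for finding the greatest common divisor of two numbers, written in Python. It is exactly the kind of explicit, step-by-step recipe a machine can follow without understanding anything about what it is doing.

```python
# Euclid's algorithm: a finite recipe of explicit steps.
# The machine follows it mechanically; it never needs to
# "understand" what a greatest common divisor is.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a             # when b reaches 0, a is the answer

print(gcd(48, 18))  # prints 6
```

Everything a computer does, from Alexa to a chess engine, ultimately reduces to recipes of this kind, however long and intricate. Marks’s point is that feeling pain or grasping a poem does not.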

Some examples Marks gives of things that can’t be done algorithmically are feeling pain, writing a poem that you and other people truly understand, and inventing a new technology. These are things that human beings do but that, according to Marks, AI never will.

What about the software we have right now behind conveniences such as Alexa, which gives the fairly strong impression of being intelligent? Alexa certainly seems to “know” a lot more facts than any particular human being does.

Marks dismisses this claim to intelligence by saying that extensive memory and recall don’t make something intelligent, any more than a well-organized library is intelligent. Sure, there are lots of facts that Alexa has access to. But it’s what you do with the facts that counts, and AI doesn’t understand anything. It just imitates what it’s been told to imitate without knowing what it’s doing.

The heart of Marks’s book is really the first chapter, entitled “The Non-Computable Human.” Once he makes clear the difference between algorithmic and non-algorithmic tasks, the rest is just a matter of sorting: yes, computers can do this better than humans, but no, computers will never do that.

There are lots of other interesting things in the book: a short history of AI, an extensive critique of the different kinds of AI hype and how not to be fooled by them, and numerous war stories from Marks’s work in fields as different as medical care and the stabilization of power grids. But these other matters are mostly icing on a rather small cake, because Marks is not inclined to delve into the deeper philosophical waters of what intelligence is and whether we understand it quite as well as he thinks we do.

As a Christian, Marks is well aware of the dangers posed to both Christians and non-Christians by idolatry. Worshipping idols—things made by one’s own hands and substituted for the true God—was what got the Hebrews into trouble time and again in the Old Testament, and it continues to be a problem today. The problem with an idol is not so much what the idol itself can do—carved wooden images tend not to do much of anything on their own—but what it does to the idol-worshipper. And here is where Marks could have done a greater service by showing how human beings can turn AI into an idol and effectively worship it.

While an idol-worshipping pagan might burn incense to a wooden image and figure he’d done everything needed to ensure a good crop, a bureaucracy of the future might take a task formerly done at considerable trouble and expense by humans—deciding how long a prison sentence should be, for example—and turn it over to an AI program. Actually, that example is not futuristic at all. Numerous court systems have resorted to AI algorithms (there’s that word again) to predict the risk of recidivism for individual defendants, basing the length of their sentences and their parole status on the result.
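To give a flavor of what is being delegated, here is a deliberately toy sketch of such a risk scorer. The feature names and weights are invented purely for illustration; real risk-assessment tools are proprietary, trained on data, and far harder to inspect.

```python
import math

# Hypothetical features and hand-picked weights, invented for this
# illustration. Deployed tools use many more inputs and
# machine-learned parameters.
WEIGHTS = {
    "prior_convictions": 0.8,
    "age_at_first_offense": -0.05,
    "failed_to_appear": 1.2,
}

def risk_score(defendant: dict) -> float:
    """Weighted sum of features, squashed into a 0-to-1 'risk' value."""
    z = sum(w * defendant.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # logistic squashing

score = risk_score({"prior_convictions": 3,
                    "age_at_first_offense": 19,
                    "failed_to_appear": 1})
print(f"{score:.2f}")  # about 0.93
```

Even this transparent toy raises the obvious question of who chose the weights and why. The models actually deployed are far more complicated, and far harder to interrogate.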

Needless to say, this particular application has come in for criticism, and not only from the defendants and their lawyers. Many AI systems are famously opaque: even their designers can’t give a good reason why the results come out the way they do. So I’d say that, in at least that regard, we have already gone pretty far down the road toward turning AI into an idol.

No, Marks is right in the sense that machines are, after all, only machines. But if we make any machine our god, we are simply asking for trouble. And that’s the real risk we face in the future from AI: making it our god, putting it in charge, and abandoning our regard for the real God.

This article has been republished from the author’s blog, Engineering Ethics, with permission.

AUTHOR

Karl D. Stephan received the B.S. in Engineering from the California Institute of Technology in 1976. Following a year of graduate study at Cornell, he received the Master of Engineering degree in 1977…

EDITOR’S NOTE: This MercatorNet column is republished with permission. All rights reserved.
