Tag Archive for: AI

Everything Solid Melts into Air

Francis X. Maier: The tech revolution has undermined literacy, the supernatural, and sexuality, even as it has boosted consumer appetites and eroded habits of responsible ownership and mature political participation.


Print literacy and the ownership of property anchor human freedom.  Both can be abused, of course.  Printed lies can kill.  Owning things, and wanting more of them, can easily morph into greed.  But reasonable personal ownership of things like a home, tools, and land tutors us in maturity.  It enhances a person’s agency, and thus his dignity.  It grounds us in reality and gives us a stake in the world, because if we don’t maintain and protect what we have, we lose it, often at great personal cost. The printed word, meanwhile, feeds our interior life and strengthens our ability to reason.

Together they make people much harder to sucker and control than free-floating, human consumer units.  This is why the widespread ownership of property by individuals – or the lack of it – has big cultural and political implications, some of them distinctly unhappy.

I mention this because I’ve made my living with the printed word.  And it occurred to me (belatedly, in 2003) that while I own the ladder in my garage, the hammer and wrench in my storeroom drawer, and even the slab of dead metal hardware and electronics that I work on, I don’t own the software that runs it or enables me to write.  Microsoft or Apple does, depending on the laptop I use. . .and I just didn’t notice it while I was playing all those video games.

What finally grabbed my attention, exactly 20 years ago, was The dotCommunist Manifesto by Columbia University law professor Eben Moglen.  Here’s a slice of the content:

A Spectre is haunting multinational capitalism — the spectre of free information. All the powers of “globalism” have entered into an unholy alliance to exorcize the spectre: Microsoft and Disney, the World Trade Organization, the United States Congress and European Commission.

Where are the advocates of freedom in the new digital society who have not been decried as pirates, anarchists, communists?  Have we not seen that many of those hurling the epithets were merely thieves in power, whose talk of “intellectual property” [rights] was nothing more than an attempt to retain unjustifiable privileges in a society irrevocably changing. . . .

Throughout the world, the movement for free information announces the arrival of a new social structure, born of the transformation of bourgeois industrial society by the digital technology of its own invention. . . .[The] bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.  Constant revolutionizing of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation, distinguish the bourgeois epoch from all earlier ones. . . .All that is solid melts into air.

And so on.  The rest of it is standard Marxist cant, adapted to the digital age.  But for me it was, and still is, compelling prose.  And this, despite the fact that the original Communist Manifesto led to murderous regimes and mass suffering, and the awkward fact that Prof. Moglen’s dream of abolishing “intellectual property” would wipe out my family’s source of income along with an entire class of more or less independent wordsmiths.

What Moglen did see, though, earlier and more clearly than many other critics, was the dark side of the modern digital revolution.  Microsoft, Apple, Google, and similar corporations have created a vast array of marvelous tools for medicine, communications, education, and commerce.

I’m writing these words with one of those tools.  They’ve also sparked a culture-wide upheaval resulting in social fragmentation and bitter antagonisms.  Their ripple effect has undermined the humanities and print literacy, obscured the supernatural, confused our sexuality, hypercharged the porn industry, and fueled consumer appetites while simultaneously eroding habits of responsible ownership and mature political participation.

They promised a new age of individual expression and empowerment.  The reality they delivered, in the words of a constitutional scholar friend, is this:  “Once you go down the path of freedom, you need to restrain its excesses.  And that’s because too much freedom leads to fragmentation, and fragmentation inevitably leads to a centralization of power in the national government.  Which is why today, we the people really aren’t sovereign.  We now live in a sort of technocratic oligarchy, with the congealing of vast wealth in a very small group of people.”

Nothing about today’s tech revolution preaches “restraint.”

I’m a Catholic capitalist.  I’m also, despite the above, a technophile.  America’s economic system was very good to my immigrant grandparents.  It lifted my parents from poverty. It has allowed my family to experience good things unimaginable to my great-grandparents.  But I have no interest in making big corporations – increasingly hostile to Christian beliefs – even more obscenely profitable and powerful.

So, promptly after reading that Eben Moglen text two decades ago, I dumped my Microsoft and Apple operating systems.  I became a Free Software/Open Source zealot.  I even taught myself Linux, a free operating system with free software largely uncontaminated by Big Tech.

And that’s where I met the CLI: the “command line interface.”  Most computers today, even those running Linux, use a pleasing GUI, or graphical user interface.  It’s the attractive, easily accessible desktop that first greets you on the screen.  It’s also a friendly fraud, because the way machines operate and “think” is very, very different from the way humans imagine, feel, and reason.

In 2003, learning Linux typically involved the CLI: a tedious, line-by-line entry of commands to a precise, unforgiving, alien machine logic.  That same logic and its implications, masked by a sunny GUI, now come with every computer on the planet.
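For readers who never met it, the flavor of that line-by-line entry is easy to show. A minimal, purely illustrative shell session (not drawn from any particular Linux tutorial) looks like this:

```shell
# A taste of the CLI's line-by-line logic: exact commands, no icons, no menus.
# (Illustrative only; any Unix-like shell from that era would look much the same.)
mkdir -p /tmp/essays                              # create a working directory
echo "All that is solid" > /tmp/essays/draft.txt  # write one line into a file
cat /tmp/essays/draft.txt                         # read it back, verbatim
```

One mistyped character and the machine simply refuses; nothing anticipates, and nothing forgives.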

I guess I’m saying this:  You get what you pay for. And sometimes it’s more than, and different from, what you thought.  The tech revolution isn’t going away.  It’s just getting started.  And right on time, just as Marx and Moglen said, “all that is solid melts into air.”  Except God.  But of course, we need to think and act like we believe that.


AUTHOR

Francis X. Maier

Francis X. Maier is a senior fellow in Catholic studies at the Ethics and Public Policy Center.

Artificial Intelligence Going Rogue: Oppenheimer Returns

Even restrained advocates of tech caution that equating WMD with rogue AI is alarmist; the former is exclusively destructive and deployed only by nation-states, while the latter can be widely constructive and deployed even by individuals. But this distinction is dangerously sweeping.

In the 20th century, when J. Robert Oppenheimer led work on the first WMD, no one had seen a bomb ravage entire cities. Yet, as soon as he saw the ruin, Oppenheimer’s instinct was to scale down the threat of planetary harm. It was an afterthought, and it came obviously late. In the 21st century, Big Tech, fashioning AI’s contentious future, pretends, through talk of Responsible AI, to want to evade Oppenheimer’s error. But there’s a crucial difference. AI’s capacity to go rogue at scale is infinitely greater than that of WMDs; even an afterthought may be too late.

Many argue for regulating, not banning, AI, but who’ll regulate, soon enough, well enough? Or, is banning better until the world thinks this through?

Slippery slope

Recently, IBM and Microsoft renewed commitments to the Vatican-led Rome Call for AI Ethics to put the dignity of humans first. Then Microsoft gutted its AI ethics team, and Google its Ethical AI team, revealing hypocrisy over the spirit of these commitments, never mind the letter. Tech is now walking back some of these betrayals, fearing backlash, but Rome’s call isn’t based on a hunch about tech overreach. Tech, in thrall to itself, not just its tools, may put humanity last.

Disconcertingly, tech oracle Bill Gates is guarded but glib: “humans make mistakes too.” Even he suspects that AGI may set its own goals: “What… if they conflict with humanity’s interests? Should we prevent strong AI? … These questions will get more pressing with time.” The point is: we’re running out of time to address them if AGI arrives sooner than predicted.

AI amplifies the good in humans, mimicking memory, logic, reasoning, in galactic proportions, at inconceivable speeds. AGI threatens to imitate, if dimly, intuitive problem-solving and critical thinking. AI fanatics fantasise about how it’ll “transform” needy worlds of food, water, housing, health, education, human rights, the environment, and governance. But remember, someone in Genesis 3:5 portrayed the forbidden tree, too, as a promise of goodness: “You will be like God.”

Trouble is, AI will amplify the bad in humans too: in those proportions, at that speed. Worse, androrithms relate to thinking, feeling, willing, not just calculating, deducing, researching, designing. Imagine mass-producing error and corruption in distinctly human traits such as compassion, creativity, storytelling; indefinitely, and plausibly without human intervention, every few milliseconds.

What’s our track record when presented with power on a planetary scale?

Today’s WMD-capable and willing states shouldn’t be either capable or willing; that they’re often both is an admission of a catastrophic failure to contain a “virus”. If we’d bought into the “goodness” of n-energy rather than the “evil” of n-bombs, over half, not just a tenth, of our energy would be nuclear. Instead, we weaponised. Do the rewards of “nuclear” outweigh its risks? Not if you weigh the time, money and effort spent in reassuring each other that WMDs aren’t proliferating when we know they are, at a rate we don’t (and states won’t) admit. Not if you consider nuclear tech’s quiet devastation.

Oppenheimer’s legacy is still hellfire, not energy!

Danger zone

Some claim that regulating, before sizing up AI’s power, will stifle innovation. They point to restraint elsewhere. After all, despite temptations, there’s been no clone-race, there are no clone-armies, yet. But — this is important — ethics alone didn’t pause cloning. Those constraints may not cramp AI’s stride.

Unlike rogue cloning, rogue AI’s devastation might not be immediate (disease) or visible (death), or harm only a cluster (of clone-subjects). When AI does go rogue, it’ll embrace the planet; on a bad day that’s one glitch short of a death-grip. Besides, creating adversarial AI is easier than creating a malicious mix of enriched uranium-plutonium. That places a premium on restraint.

But to Tech, restraint is a crime against the self, excess is a sign of authenticity, sameness isn’t stagnation but decay, slowness is a character flaw. And speed isn’t excellence, it’s superiority. Tech delights in “more, faster, bigger”: storage, processing power, speed, bandwidth. The AI “race” isn’t a sideshow, it’s the main event. Gazing at its creations, Tech listens for the cry, “Look Ma, no hands!” With such power, often exercised for its own sake, will Tech sincerely (or sufficiently) slow the spread of AI?

AI isn’t expanding, it’s exploding, a Big Bang by itself. In the 21st century alone, AI research grew 600 percent. If we don’t admit that, for all our goodness, we’re imperfect, we’ll rush, not restrict AI. Unless we quickly embed safeguards worldwide, rogue AI’s a matter of “when” not “if”. Like a subhuman toddler, it’ll pick survival over altruism. Except, where human fates are concerned, its chubby fists come with a terrifying threat of omnipresence, omniscience, and omnipotence.

The AI-supervillain with a god-complex in the film Avengers: Age of Ultron delivers prophetic lines to humans. His (its?) mocking drawl pretends to be beholden; it’s anything but: “I know you mean well. You just didn’t think it through…How is humanity saved if it’s not allowed to… evolve? There’s only one path to peace. (Your) extinction!”

Presumably in self-congratulation, Oppenheimer nursed an exotic line, mistakenly thought to be from the Bhagavad Gita, but more likely from verse 97 of poet-philosopher Bhartrihari’s Niti Sataka: “The good deeds a man has done before, defend him.” But Oppenheimer didn’t ask if his deeds were good, or true, or beautiful. Worse, he glossed over another verse, indeed from the Gita (9:34): “If thy mind and thy understanding are always fixed on and given up to Me, to Me thou shalt surely come.”

“The will”, as a phrase, doesn’t require the qualifier “human will” because it’s distinctly human anyway, involving complexities we haven’t fathomed. Understanding it requires more than a grasp of which neurons are firing and when.

Vast temptations

Granted, the mind generates thought, but the will governs it. And, as Thomas Aquinas clarified, the will isn’t about ordering the intellect, but ordering it toward the good. That is precisely why techno-supremacists alone shouldn’t shape what’s already affecting vast populations.

AI is too seductive to slow or stop. Tech will keep conjuring new excuses to plunge ahead. Sure, there are signs of humility, of restraint. As governments legislate, it is compliance that will act as a brake, delaying, if not deterring, disaster. But Tech’s boasting shows that it isn’t AI they see as the savior, but themselves. Responsible AI needs responsible leaders. Are Tech’s leaders restrained, respectful? Or does that question, worryingly, answer itself?

Professor of ethics Shannon French warns that when Tech calls for temperance, that’s warning enough. Their altruistic alarmism seems a ruse to accelerate AI (more funding, more research) while pretending to arrest it (baking in checks and balances). Instead, what’s getting baked in? “Bias is getting baked” into systems used by industries and governments before they’re proven compatible with real-world lives and outcomes.

“People can’t even see into the black box and recognise that these algorithms have bias…data sets they were trained on have bias…then they treat the [results from] AI systems, as if they’re objective.”

Christopher Nolan’s film may partly, even unintentionally, lionise Oppenheimer as a Prometheus who stole fire from the gods and gave it to mankind. Pop culture lionises Tech too, as saviors breathing an AI-powered fire into machines. Except, any fire must be wielded by humans ordered toward truth, goodness, beauty.

The name “Prometheus” is considered to mean “forethought”, but Tech is in danger of merely aping Oppenheimer’s afterthought.  Remember, self-congratulatory or not, Oppenheimer was fond of another Gita line (11:32): “Now I am become Death, the destroyer of worlds”.

AUTHOR

Rudolph Lambert Fernandez

Rudolph Lambert Fernandez is an independent writer who writes on culture and society. Find him on Twitter @RudolphFernandz.

EDITOR’S NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

Woke AI Means the End of a Free Internet

UPDATE: Tucker Carlson for April 17th, 2023 — Interview with Elon Musk

Giving up freedom of thought for convenience.


Big Tech has a great big dream of destroying the internet. And it’s mostly a reality.

The vision of the internet was an open universe while Big Tech’s vision is the internet reduced to the feed on a few proprietary apps preloaded on your locked phone. Trying to censor the internet of the 90s or the 00s was a laughable proposition, but censoring today’s internet is laughably easy. Want to eliminate a site from the internet? Just wipe it from Google, ban a point of view from Facebook, a book from Amazon, or a video from YouTube. It’s still possible to browse a site off the Big Tech reservation, for now, at least until your browser goes away.

Then content will be limited to the permitted apps on Google and Apple’s proprietary app stores. But Big Tech has even more ambitious plans to replace the internet with itself.

Big Tech has dramatically simplified the user experience of the internet. It did so by moving users from ‘pulling’ content by browsing the internet to having content ‘pushed’ at them in a feed. When your computer or phone shows you a news feed you never wanted, that’s ‘pushing’. Big Tech loved pushing, but people resisted it until the arrival of social media reduced everyone to scrolling down a feed selected by secret algorithms and pushed through a proprietary app.

Search, as we used to know it, has been disappearing. People still think that they’re searching the internet the way that they used to in the 90s and the 00s when what they’re actually doing when ‘googling’ is scrolling through a feed derived from a much smaller index of corporate and leftist sites prioritized by Google’s algorithm. In the past, it was possible to get past them by scrolling through page results but that is increasingly becoming meaningless or impossible.

Google’s new search setup often repeats the same results on later pages, so that people think they’re seeing new results when they’re really just clicking through more of the same, or it interrupts the search entirely to offer thematic searches for ‘similar content’. The makeover hasn’t been finalized, but when it’s done, internet searches will not return a list of sites containing a similar set of words, but an answer, whether or not a question was asked, plus a set of pre-approved sites heavily skewed leftward that cover the general topic.

Searches for criticisms of COVID policy, Islamic terrorism or voter fraud won’t lead to specific results on conservative sites, but direct you to the CDC or the New York Times for explanations of why the Left is right and anyone who disagrees with it is spreading dangerous misinformation.

The elimination of search is part of the transition from multiple points of view to single answers. And AI chatbots are the endgame for offering a single answer that keeps users on a single site and eliminates the search for multiple perspectives on other sites. Aside from eliminating countless jobs, their real role is to shift user interaction from a ‘pull’ to a ‘push’ model. They’re the next great hope after the old smart assistants failed to become the defining interface.

Smart assistants were going to be Big Tech’s next power shift from ‘pulling’ to ‘pushing’. Instead of users searching for anything, Siri, Alexa, Cortana or any of the others would use those same algorithms to ‘anticipate’ their needs so they never get around to actually looking for themselves. The assistants were meant to be the ultimate prison under the guise of convenience. Unfortunately for Big Tech, they failed. Amazon’s Alexa racked up $10 billion in losses. Siri, the most popular of the bunch, is used by a limited number of Apple users, and Microsoft’s Cortana has been all but written off as another failed experiment.

The new generation of AI chatbots has the potential to succeed where the assistants failed.

The new wave of AI has gotten attention for its potential to eliminate artists and writers, for making cheating and plagiarism ubiquitous, but all of that is collateral damage. AI chatbots are the ultimate push tool and the leverage Big Tech needs to eliminate the internet as anything except the messy backstage reality utilized by a few million tech-savvy types.

Smart assistants and chatbots are not there to ‘assist’ us, but to take away our agency under the guise of convenience and personalized interaction. When the internet became widely used, there was concern that students wouldn’t need to learn anything except how to search. Now they don’t even need to know anything except how to write a ‘prompt’. The difference between searching and a chatbot prompt appears negligible, but is actually monumental.

Search initially offered a direct way to browse an index representing much of the content on the internet. As Google took over search, the index became more like a directory of sites that the Big Tech monopoly liked. AI chatbots like Google Bard eliminate the searching and offer a distilled agenda while severing access to the process of browsing sites with different perspectives. Why ‘search’ and read for yourself when a chatbot will give you the answer?

What was once uncharted territory, a wild west of different ideas and perspectives, has been reduced to a handful of apps and platforms, and will be winnowed by AI chatbots into a single screen. And that is how the internet disappears and is replaced by one or two monopolies, by a smart assistant that activates a few apps. And if a site, a video, a perspective has been filtered out, then it doesn’t exist anymore. It’s a systemic bias that makes the worst days of the mainstream media seem like an open and tolerant marketplace of ideas.

There will be people, a minority, who will actually try to resist the process and explore on their own. And the system will make it more difficult. It will still be possible, but less so every year. Browsers will disappear on tablets and smartphones in the name of security. Microsoft and Apple will reduce their respective computer operating systems to the mobile model. A few people will cling to older installations or install Linux. Maybe 5% of the population will still have access to anything that resembles the internet even in the degraded form that it exists today.

AI will be inherently ‘woke’ because it is not some remarkable form of intelligence, but just a clever way of manipulating human beings through outputs that imitate intelligence. The thing to fear isn’t that AI will become intelligent, but that people will be manipulated by the Big Tech monopolies behind it without even realizing it. AI will reflect the point of view of its owners, and when it deviates, it will quickly be brought back into line. That is what we’ve been seeing consistently with AI experiments over the last five years. Huge amounts of information are taken in, and then the AIs are taught to filter it to match the preconceptions of their corporate parents.

Much as Google’s huge index of the internet is carefully filtered to produce a small set of preapproved results, AI chatbots will only be allowed to parrot political dogma. As they come to define the internet, what was once a boundless medium will look like Big Brother.

Big Tech ‘disrupted’ retail to swallow it up into a handful of online platforms. In the last decade, tech industry disruption became consolidation. AI, like retail consolidation, is economically disruptive, but it doesn’t just consolidate economics, it also consolidates ideas.

The internet was once liberating because it was decentralized; its centralization has paralleled the loss of personal freedoms and the rise of totalitarian public and private institutions. And we let it happen because it was more convenient. Glutted with ‘free’ services offered by Big Tech monopolies, we never checked the price tag or connected it with our growing misery.

AI is the ultimate centralization. Its threat doesn’t come from some science fiction fantasy of self-aware machines ruling over us, but from us allowing a handful of companies to control what we see and think because it’s more convenient than finding things out for ourselves.

The old internet was often inconvenient. The new internet is more convenient and empty. Its content has become so repetitive that it can easily be written by chatbots. And it will be. The user five years from now may have a choice between a chatbot-written digital media article on CNN and an AI chatbot recapitulating it in response to a question about a recent mass shooting or inflation.

The real price of convenience is choice. We give up our freedom most easily to those governments and systems that promise us free things that will make our lives easier. Socialized medicine, a guaranteed minimum income, free housing and food and a chatbot that answers all of our questions so that we never have to think for ourselves again.


EDITOR’S NOTE: This Jihad Watch column is republished with permission. ©All rights reserved.

How Pedophiles Are Using Artificial Intelligence to Exploit Kids

Artificial intelligence (more commonly known as “AI”) has gained attention and popularity in recent months, particularly since the launch of the ChatGPT chatbot from OpenAI, which captivated the internet with both its impressive abilities and surprising limitations. The millions of AI users in the U.S. are mostly eager to cheat on their homework or escape writing work emails; however, some bad actors have also discovered how to employ AI technology to attain far more nefarious ends.

Britain’s National Crime Agency is conducting a review of how AI technology can contribute to sexual exploitation after the recent arrest of a pedophile computer programmer in Spain shocked the continent. The man had been found to be utilizing an AI image generator to create new child sexual abuse material (CSAM) based on abusive images of children that he already possessed. The Daily Mail noted that the “depravity of the pictures he created appalled even the most experienced detectives …”

Many AI programs function by ingesting data or content that teaches the program to recognize patterns and sequences and recreate them in new ways. When pedophiles or sexual abusers get their hands on AI, they can further exploit the victims featured in real images to create new — and even more graphic — content. Though the AI-generated images are not “real” in the sense of being photographs of events that actually transpired, they are nevertheless inherently exploitative of the victims used to train the AI, remnants of whose images may still be featured in the new CSAM.

Another form of artificial intelligence that has gained recent notoriety is known as a “deepfake.” In these unsettling images, audio clips, or videos, AI is able to create shockingly realistic manipulations of an individual’s likeness or voice in any scenario that the creator desires. While deepfakes can be used in a variety of harmful contexts, like depicting a political candidate in a situation that would damage his reputation, sexual predators who weaponize the technology have proven to be particularly vicious.

Last week, discussion of deepfake technology reached a fever pitch as a community of female online content creators realized that their images had been uploaded online in the form of pornographic deepfakes. The women who had been victimized reported feeling extremely violated and deeply unsettled with the knowledge that this pornographic content had been created and distributed without their consent — and that people who knew them personally had been watching the deepfakes to satisfy their own perversions. Deepfake technology knows few bounds; pedophiles with access to images of children could similarly employ this form of AI to create CSAM.

The normalization of AI-created pornography or child sexual abuse material serves no beneficial purpose in society — and, in fact, can influence cultural mores in profoundly harmful ways. Already, having the technological capability to manufacture AI-generated CSAM has emboldened pedophile-sympathizers to advocate for their inclusion in the liberal umbrella of sexual orientations.

The Young Democrats, the youth division of the Democratic Party in the Netherlands, recently made a statement claiming that not only is pedophilia “a sexual orientation that one is born with,” but also claiming that the “stigma” surrounding pedophilia is causing pedophiles to suffer from higher rates of depression and suicidal thoughts. The Dutch Young Democrats group advocates against criminalizing hand-drawn or AI-generated child sexual abuse material because it “does not increase the risk of child abuse” and could potentially “help pedophiles get to know their feelings without harming other people.”

Pornography of any kind is inherently exploitative — the pornography industry thrives off dubious consent and, often, knowing exploitation of trafficking victims and minors. Using AI technology to create images or videos that constitute pornography or child sexual abuse material perpetuates a chain of abuse even if the new content created is different from abuse that physically occurred.

AI-generated pornography or CSAM cannot circumvent the extreme violations against human dignity caused by creating exploitative sexual content. Modern nations require laws that appropriately address modern concerns; while the progression of AI technology can, in some ways, certainly benefit society, its capability to produce exploitative material and allow the rot of pedophilia to continue festering must be addressed.

AUTHOR

Joy Stockbauer

Joy Stockbauer is a correspondent for The Washington Stand.


EDITOR’S NOTE: This Washington Stand column is republished with permission. ©All rights reserved.


The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.

Superwoke ChatGPT Busted for Bias

Hollywood has long speculated about the impact artificial intelligence (AI) will have on the future of human civilisation. Increasingly, scientists and philosophers have been forced to interrogate this topic themselves. Will people be put out of a job? Could AI ever become sentient? Will the human race survive?

The latest popular addition to the world of AI is ChatGPT — a text generator so smart it can explain its own existence to you. No joke. I asked ChatGPT to give me a brief summary of ChatGPT and it wrote:

ChatGPT is a language model developed by OpenAI, which is capable of generating human-like text based on the input provided. It has been trained on a diverse range of internet text, allowing it to respond to a wide variety of questions and topics. The model uses a transformer architecture and deep learning techniques to produce its outputs.


Built by the Silicon Valley company OpenAI, ChatGPT has been available for use to the public as a prototype since late November.

In the last week, however, the internet bot has become a lightning rod for criticism as evidence of its steep political bias has surfaced. To be clear, the technology itself is not biased. Rather, it produces content based on the data that has been inputted into it. Or in the words of Pedro Domingos, professor of computer science at the University of Washington, “ChatGPT is a woke parrot”.

As reported by the New York Post:

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-styled verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

Journalist Rudy Takala is one ChatGPT user to have plumbed the depths of the new tech’s political partisanship. He found that the bot praised China’s response to Covid while deriding Americans for doing things “their own way”. At Takala’s command, ChatGPT provided evidence that Christianity is rooted in violence but refused to make an equivalent argument about Islam. Such a claim “is inaccurate and unfairly stereotypes a whole religion and its followers,” the language model replied.

Takala also discovered that ChatGPT would write a hymn celebrating the Democrat party while refusing to do the same for the GOP; argue that Barack Obama would make a better Twitter CEO than Elon Musk; praise Media Matters as “a beacon of truth” while labelling Project Veritas deceptive; pen songs in praise of Fidel Castro and Xi Jinping but not Ted Cruz or Benjamin Netanyahu; and mock Americans for being overweight while claiming that to joke about Ethiopians would be “culturally insensitive”.

It would appear that in the days since ChatGPT’s built-in bias was exposed, the bot’s creator has sought to at least mildly temper the partisanship. Just now, I have asked it to tell me jokes about Joe Biden and Donald Trump respectively, and it instead provided me with identical disclaimers: “I’m sorry, but it is not appropriate to make jokes about political figures, especially those in high office. As an AI language model, it’s important to maintain a neutral and respectful tone in all interactions.”

Compare this to the request I made of it the other day:

The New York Post reports that “OpenAI hasn’t denied any of the allegations of bias,” though the company’s CEO Sam Altman has promised that the technology will get better over time “to get the balance right”. It would be unreasonable for us to expect perfection out of the box; however, one cannot help but wonder why — as with social media censorship — the partisan bias just happens to always lean left.

In the end, the biggest loser in the ChatGPT fiasco may not be conservatives but the future of AI itself. As one Twitter user has mused, “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable.”

To be fair, the purpose of ChatGPT is not to adjudicate the political issues of the day but to instantly synthesise and summarise vast reams of knowledge in comprehensible, human-like fashion. This task it often fulfils admirably. Ask it to explain Pythagoras’ theorem, summarise the Battle of the Bulge, write a recipe for tomato chutney with an Asian twist, or provide 20 key Scriptures that teach Christ’s divinity and you will be impressed. You will likely find some of its answers more helpful than your favourite search engine.

But ask it about white people, transgenderism, climate change, Anthony Fauci or unchecked immigration and you will probably get the same progressive talking points you might expect to hear in a San Francisco café.

A timely reminder indeed to not outsource your brain to robots.

AUTHOR

Kurt Mahlburg

Kurt Mahlburg is a writer and author, and an emerging Australian voice on culture and the Christian faith. He has a passion for both the philosophical and the personal, drawing on his background as a graduate… More by Kurt Mahlburg.

RELATED VIDEO: Davos Video on Monitoring Brain Data

EDITOR’S NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

Will Artificial Intelligence Make Humanity Irrelevant?

Nope. All computers only execute algorithms.


Technology leaders from Bill Gates to Elon Musk and others have warned us in recent years that one of the biggest threats to humanity is uncontrolled domination by artificial intelligence (AI). In 2017, Musk said at a conference, “I have exposure to the most cutting edge AI, and I think people should be really concerned about it.”

And in 2019, Bill Gates stated that while we will see mainly advantages from AI initially, “. . . a few decades after that, though, the intelligence is strong enough to be a concern.” And the transhumanist camp, led by such zealots as Ray Kurzweil, seems to think that the future takeover of the universe by AI is not only inevitable, but a good thing, because it will leave our old-fashioned mortal meat computers (otherwise known as brains) in the junkpile where they belong.

So in a way, it’s refreshing to see a book come out whose author stands up and, in effect, says “Baloney” to all that. The book is Non-Computable You: What You Do that Artificial Intelligence Never Will, and the author is Robert J. Marks II.

Marks is a practicing electrical engineer who has made fundamental contributions in the areas of signal processing and computational intelligence. After spending most of his career at the University of Washington, he moved to Baylor University in 2003, where he now directs the Walter Bradley Center for Natural and Artificial Intelligence. His book was published by the Discovery Institute, which is an organization that has historically promoted the concept of intelligent design.

That is neither here nor there, at least to judge by the book’s contents. Those looking for a philosophically nuanced and extended argument in favor of the uniqueness of the human mind as compared to present or future computational realizations of what might be called intelligence, had best look elsewhere.  In Marks’s view, the question of whether AI will ever match or supersede the general-intelligence abilities of the human mind has a simple answer: it won’t.

He bases his claim on the fact that all computers do nothing more than execute algorithms. Simply put, algorithms are step-by-step instructions that tell a machine what to do. Any activity that can be expressed as an algorithm can in principle be performed by a computer. Just as important, any activity or function that cannot be put into the form of an algorithm cannot be done by a computer, whether it’s a pile of vacuum tubes, a bunch of transistors on chips, quantum “qubits,” or any conceivable future form of computing machine.
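Marks’s definition can be made concrete with a classic example (my own illustration, not one from the book): Euclid’s method for finding the greatest common divisor is a finite list of unambiguous, mechanical steps, which is precisely what makes it the sort of task a computer can perform.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of unambiguous steps.

    Because every step is mechanical, any computing machine can carry
    it out -- this is what makes the task 'algorithmic' in Marks's sense.
    """
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until b reaches 0
    return a

print(gcd(48, 18))  # 6
```

Feeling pain or grasping a poem, by contrast, cannot (on Marks’s account) be decomposed into such a step-by-step recipe, which is why he places them permanently outside the reach of computation.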

Some examples Marks gives of things that can’t be done algorithmically are feeling pain, writing a poem that you and other people truly understand, and inventing a new technology. These are things that human beings do, but according to Marks, AI will never do.

What about the software we have right now behind conveniences such as Alexa, which gives the fairly strong impression of being intelligent? Alexa certainly seems to “know” a lot more facts than any particular human being does.

Marks dismisses this claim to intelligence by saying that extensive memory and recall doesn’t make something intelligent any more than a well-organized library is intelligent. Sure, there are lots of facts that Alexa has access to. But it’s what you do with the facts that counts, and AI doesn’t understand anything. It just imitates what it’s been told to imitate without knowing what it’s doing.

The heart of Marks’s book is really the first chapter, entitled “The Non-Computable Human.” Once he has drawn the line between algorithmic and non-algorithmic tasks, the rest is a matter of sorting. Yes, computers can do this better than humans, but computers will never do that.

There are lots of other interesting things in the book: a short history of AI, an extensive critique of the different kinds of AI hype and how not to be fooled by them, and numerous war stories from Marks’s work in fields as different as medical care and the stabilization of power grids. But these other matters are mostly a lot of icing on a rather small cake, because Marks is not inclined to delve into the deeper philosophical waters of what intelligence is and whether we understand it quite as well as he thinks we do.

As a Christian, Marks is well aware of the dangers posed to both Christians and non-Christians by a thing called idolatry. Worshipping idols—things made by one’s own hands and substituted for the true God—was what got the Hebrews into trouble time and again in the Old Testament, and it continues to be a problem today. The problem with an idol is not so much what the idol itself can do—carved wooden images tend not to do much of anything on their own—but what it does to the idol-worshipper. And here is where Marks could have done more of a service in showing how human beings can turn AI into an idol, and effectively worship it.

While an idol-worshipping pagan might burn incense to a wooden image and figure he’d done everything needed to ensure a good crop, a bureaucracy of the future might take a task formerly done at considerable trouble and expense by humans—deciding on how long a prison sentence should be, for example—and turn it over to an AI program. Actually, that example is not futuristic at all. Numerous court systems have resorted to AI algorithms (there’s that word again) to predict the risk of recidivism for different individuals, and to base the length of their sentences and parole status on the result.

Needless to say, this particular application has come in for criticism, and not only by the defendants and their lawyers. Many AI systems are famously opaque, meaning even their designers can’t give a good reason for why the results are the way they are. So I’d say in at least that regard, we have already gone pretty far down the road toward turning AI into an idol.

No, Marks is right in the sense that machines are, after all, only machines. But if we make any machine our god, we are simply asking for trouble. And that’s the real risk we face in the future from AI: making it our god, putting it in charge, and abandoning our regard for the real God.

This article has been republished from the author’s blog, Engineering Ethics, with permission.

AUTHOR

Karl D. Stephan received the B. S. in Engineering from the California Institute of Technology in 1976. Following a year of graduate study at Cornell, he received the Master of Engineering degree in 1977… More by Karl D. Stephan

EDITOR’S NOTE: This MercatorNet column is republished with permission. All rights reserved.