Tag Archive for: artificial intelligence

Florida high-tech company creates ‘RAIA,’ a launch pad for businesses to build, train and test OpenAI™ Assistants

A Florida-based high-tech company has launched its newest product, RAIA™.

RAIA™ is a launch pad for businesses to build, train and test OpenAI™ Assistants and Agents. It quickly and seamlessly launches powerful custom A.I. Assistants to automate work that costs businesses time and money.

WHY RAIA™?

Before you can save time and money, you will need to …

  • Train your A.I. by empowering your business users to transfer knowledge via conversation.
  • Test your A.I. to ensure it asks and answers questions correctly leveraging our platform.
  • Launch your OpenAI Assistants with confidence and the support of your team.
  • Manage and Monitor your OpenAI Assistants to ensure you keep up with new innovations.

24/7 Availability

RAIA™ ensures that no potential lead is missed by being available round the clock. This constant presence means that inquiries and leads can be addressed immediately, regardless of time zones or business hours.

Qualifies and Prioritizes Engagement

RAIA™ can quickly analyze and qualify conversations, helping sales and support teams focus their efforts on high-priority conversations. This efficient sorting saves time and resources, which increases capacity and lowers costs.

Personalized Conversations

RAIA™ can personalize interactions. Tailored communication significantly increases the chances of conversion as it resonates more with the customer’s specific needs and preferences.

RAIA can ask and answer any question you want.

Leverage the capabilities of our Conversational A.I. platform for business to launch A.I. Assistants. Our platform enables you to customize and train bots to ask and answer questions about your business.
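RAIA’s own tooling is not shown in this announcement, but for readers curious what “building” an OpenAI Assistant involves underneath, here is a minimal sketch using OpenAI’s public Python SDK. The assistant name, instructions, model choice and sample question are placeholder values, and a real integration would also poll the run until it completes:

```python
# A minimal sketch of creating and exercising an OpenAI Assistant with the
# official Python SDK. All names and strings below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Build": create an assistant with business-specific instructions.
assistant = client.beta.assistants.create(
    name="Acme Support Assistant",                      # placeholder name
    instructions="Answer questions about Acme's products politely.",
    model="gpt-4o",
)

# "Test": start a conversation thread and ask a sample question.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What are your business hours?",            # placeholder question
)

# "Launch": run the assistant on the thread; production code would poll
# client.beta.threads.runs.retrieve(...) until run.status == "completed".
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.id, run.status)
```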

Conversation is the key to Conversion.
  • Unlock the potential for thousands of conversations, overcoming human resource limitations. Achieve the “impossible” in outreach to your contact database.
  • Integrating conversational A.I. into your sales and marketing sequence adds a dynamic touch, enhancing lead qualification and nurturing.

To test-drive RAIA and build your company’s bot in a matter of minutes, just CLICK HERE.

©2024. Editorial Board. All rights reserved.

REPORT: AI, Anti-Israel and Antisemitic

Advances in artificial intelligence (AI) technology have allowed malicious actors to weaponize social media platforms to incite violence against Jews, stoke antisemitism, promote terrorism and spread conspiracy theories and Holocaust denial.

The events of October 7, and the ensuing war waged by Israel against the Hamas terrorists who massacred 1,200 Israelis, kidnapped 250 and raped and tortured many more, have shown us in real time the dangers of AI-generated misinformation and disinformation.

Israel and its supporters around the world are now not only fighting a war on the ground, but also are battling the widespread denial of Hamas atrocities and fabricated accusations against Israel by nefarious actors online.

This report examines the threats posed by AI technologies in the dissemination of terrorist and extremist content online as well as the potential of AI tools to amplify anti-Israel and antisemitic biases across social media platforms.

We will also explore how Hamas terrorists were able to leverage AI to mobilize their international supporters both online and on the streets since October 7.

WHAT IS ARTIFICIAL INTELLIGENCE?

In 1955, John McCarthy, a computer scientist who spent most of his career at Stanford University, coined the term “artificial intelligence,” which he defined as “the science and engineering of making intelligent machines.”

AI uses computer systems to “mimic” human intelligence by analyzing and interpreting existing data. AI machines are “trained” on vast amounts of publicly available data, much of it “scraped” from the internet (including social media) to perform specific tasks or solve problems with ever-increasing speed, precision and accuracy.

Traditional AI systems are programmed with specific rules and complex algorithms (a set of instructions) to build machines that can perform without human intervention. These systems identify patterns and connections between large data sets, allowing them to make decisions on their own, learn from past experience and improve output.

In 2016, Aviv Ovadya, an MIT graduate and the founder of the AI & Democracy Foundation, warned that AI-powered fake news, misinformation campaigns and propaganda would lead to an “infopocalypse,” the demise of the “credibility of fact.”

How would this occur? AI models depend on large amounts of data for training. This data, derived from the Internet, naturally includes ahistorical, unverified and prejudicial content, all of which influence how AI systems interpret specific phenomena.

AI systems, therefore, “inherit” human biases, including antisemitism, which leads to the production of biased results based on the repetition of errors and fabrications found in the training data.
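To make that “inheritance” mechanism concrete, here is a minimal, self-contained sketch using an invented toy dataset (not real data): a classifier trained on labels that unfairly correlate hostility with one group’s name repeats that prejudice on a perfectly neutral sentence.

```python
# A toy demonstration of bias inheritance: the training labels below are
# invented, and "group_a"/"group_b" are placeholder names. Posts mentioning
# group_a were disproportionately labeled hostile by (imaginary) past raters,
# so the trained model scores a neutral group_a sentence as hostile.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "group_a members attacked our values",   # hostile
    "group_a is behind this conspiracy",     # hostile
    "group_a citizens opened a bakery",      # neutral, but mislabeled hostile
    "group_b citizens opened a bakery",      # friendly
    "group_b members attacked our values",   # hostile
    "group_b hosted a charity dinner",       # friendly
]
labels = ["hostile", "hostile", "hostile", "friendly", "hostile", "friendly"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The identical neutral sentence gets different verdicts depending only on
# which group it names, because the bias was baked into the labels.
print(model.predict(["group_a opened a bakery", "group_b opened a bakery"]))
# -> ['hostile' 'friendly']
```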

AI: THE BLACK BOX

Many AI models are considered “black boxes” or opaque in nature because they provide no clear map from input to output. Thus, the logic and data used to reach results are not accessible to users or even developers.

Additionally, AI systems lack true understanding and common-sense reasoning, limiting their ability to handle chaotic, nuanced and ambiguous situations. As a result, AI systems often fail to provide the full context or all the information the user needs to fully understand an event or issue at hand.

The essentially biased and opaque AI decision-making process raises the danger that AI-generated output cannot be relied on for accuracy – and that AI users cannot be held accountable for the negative effects produced by faulty data.

GENERATIVE AI: WHEN SEEING IS NOT BELIEVING

Generative AI is an AI technology that can create new content such as text, images or other media (music, videos, etc.). It learns patterns from existing data and uses those patterns to generate new, unique outputs.

Generative AI systems require very little skill to use. Users can provide input or constraints, and the system will generate content accordingly. For example, a user can provide a few keywords or a rough outline, and the technology can generate a complete article or image based on that input.
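As a toy illustration of that learn-patterns-then-generate loop, here is a word-level Markov chain in a few lines of Python. It stands in for the principle only, since real generative models use neural networks trained at incomparably larger scale; the corpus and prompt are invented.

```python
# A minimal word-level Markov chain: "train" on a tiny corpus by recording
# which word follows which, then generate new text from a one-word prompt.
import random
from collections import defaultdict

corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick red fox runs past the sleepy dog"
).split()

# Training step: record observed continuations for each word.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    random.seed(seed)
    words = [prompt]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the quick red fox runs past the sleepy dog"
```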

According to an article in USA Today, “the technology behind AI image generators is improving so rapidly, experts say, the visual clues that now give away fake images will disappear entirely in the not-so-distant future, making it virtually impossible to tell what’s fake and what’s not.”

Generative AI tools have equipped terrorist organizations and extremists across the political spectrum with the ability to create images and deepfake videos that have blamed Jews for everything from the pornography industry to mass migration and the COVID-19 pandemic.

AI voice changers have been used to deny atrocities against Jews, manipulate public opinion against Israel, spread antisemitic propaganda and circulate explicit calls for violence against Jews.

CHATGPT

AI chatbots, like ChatGPT, are computer programs that simulate and process human conversation (written and spoken), allowing humans to interact with digital devices as if they were communicating with a real person.

Due to AI chatbots’ reliance on human input, they are also liable to pick up biases in the digital database they draw from. In addition, malicious actors have been found to poison the training data by intentionally inputting erroneous and biased content in an attempt to manipulate public perceptions and create false realities.

Because AI chatbots deliver information in a human-like, one-on-one “conversation,” users tend to place more confidence in the accuracy of a chatbot’s responses than in the results of a search engine query.

OCTOBER 7: HAMAS ‘VIDEO JIHAD’ STRATEGY

On October 7, 2023, Hamas not only perpetrated the worst terrorist attack in Israel’s history to date but, with the help of AI-powered tools and algorithms, simultaneously launched a digital terrorist attack against Jews worldwide.

As part of a premeditated and calculated strategy, Hamas established a virtual command base on the social media platform Telegram Messenger (Telegram), from where it launched its psychological warfare against Israeli civilians and Jews globally.

This digital base also served as a place to access, recruit and collaborate with the terror group’s anti-Israel and antisemitic allies in the West.

An article in Fathom Journal titled, “Telegram Warfare: The New Frontier of Psychological Warfare in the Israel-Palestine Conflict,” captured the impact of the Hamas footage both on Israelis and viewers worldwide seeking information about the attacks.

“As the day progressed, these Telegram channels became inundated with raw images and videos of the attack… These included footage of multiple IDF soldiers being executed at close range, a Thai worker being bludgeoned to death by a shovel to their head and neck, a family being held at gunpoint on a Facebook livestream and videos of individuals being taken into Gaza as captives, some while begging or screaming for help. These graphic materials served as the primary source of information not only for the global audience but also for Israelis themselves.”

Hamas also called on supporters to promote the al-Aqsa Flood hashtag as part of its social media campaign; the hashtag began trending while the real-time massacre was still taking place.

In October 2023, the World Jewish Congress (WJC) issued a report titled, “A Flood of Hate: How Hamas Fueled the Adversarial Information Ecosystem on Social Media,” which calculated that Hamas posted nearly 3,000 items on Telegram in the first three days after October 7.

On October 10, 2023, The New York Times reported remarks by an unnamed Hamas official who implied that the Telegram strategy was learned from ISIS, the radical Islamist terror group, which in 2014 began publishing videos of beheadings on social media “as a rallying cry for extremists to join its cause, and as psychological warfare on its targets.”

On October 31, 2023, Time magazine reported that while Hamas was still employing its strategy of portraying itself as victims and “freedom fighters,” the group had launched a new social media strategy of presenting itself as a power of resistance “to prove, visually, to Palestinians and regional allies that it is a force against Israel, helping it to gain political clout.”

In the months that followed, Hamas continued a campaign of “video jihad,” using social media to publish videos of Hamas leaders, call for global violence against Jews, threaten to broadcast hostage executions and solicit donations.

HOW AI HELPED HAMAS IN THE INFORMATION WAR

Web search engines like Google and Bing, as well as social media platforms, use AI algorithms that prioritize user engagement and monetization of content over accuracy. Thus, AI algorithms make it possible for biased, false and violent content to go viral within minutes.

Even critical comments boost hate-filled content since engagement (positive or negative) fuels the algorithm, causing the content to spread.

AI-powered “recommender” algorithms can also play into a user’s ideological bias and create echo chambers by showing users content similar to that with which they already engage. For example, if a user engages with anti-Israel or pro-terror content, they will be shown more of the same.
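To make the dynamic in the last three paragraphs concrete, here is a deliberately simplified sketch of engagement-driven scoring. The weights are invented (no platform publishes its real ranking formula), but the key property holds: angry comments boost a post exactly as praise does.

```python
# A toy engagement-ranking sketch with hypothetical weights, showing why
# inflammatory content can outrank accurate content under engagement scoring.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int  # critical comments count the same as praise

def engagement_score(post: Post) -> float:
    # Invented weights; real platforms keep theirs secret.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

feed = [
    Post("Calm, accurate report", likes=120, shares=10, comments=15),
    Post("Outrageous false claim", likes=40, shares=90, comments=400),
]

# The false claim outranks the accurate report purely on engagement,
# even if most of its 400 comments are angry rebuttals.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```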

On October 9, 2023, WIRED reported on the “unprecedented” flood of disinformation about the Hamas attacks that X users were exposed to as a result of AI-driven algorithms promoting X premium user accounts (accounts that are eligible for monetization of their content).

In the hours after the attack, Hamas footage from Telegram was posted on mainstream social media platforms and was initially the only available “news” source of what was happening. Because shocking content yields high levels of engagement, AI algorithms prioritized these posts.

Within hours of the October 7 attack, many online accounts began spreading doubt and denying the atrocities that Hamas themselves had shown to the world in real-time.

Doctored images of Israeli soldiers near the Gaza border fence were posted alongside claims that the IDF knowingly allowed Hamas militants into Israel.

An American conspiracy theorist posted a doctored White House document to X, falsely suggesting that funding for the “false flag” operation came from the United States as part of an aid package to Israel.

According to the WJC report, “Since October 7, Q-anon influencers, pro-Russian conspiracy theorists, and Hamas supporters around the world have been pushing claims that the Hamas terror attacks in Israel were part of a global false flag operation to initiate the ‘Great Reset’ conspiracy.”

The “Great Reset” conspiracy theory began in 2020 during the COVID-19 global pandemic, with adherents warning that “‘global elites’ will use the pandemic to advance their interests and push forward a globalist plot to destroy American sovereignty and prosperity.”

According to the Anti-Defamation League (ADL), variations of the Great Reset conspiracy theory included “the notion that a group of elites are working to undermine national sovereignty and individual freedoms…and the idea that these malicious actors will seek to exploit a catastrophic incident…to advance their agenda.”

ANTISEMITISM INCREASES GLOBALLY

In the wake of the October 7 Hamas massacre of Israelis, online propaganda led to steep increases in global antisemitism and attacks on Jewish communities. In the month following Hamas’s terror attack on Israel, antisemitic incidents in the United States increased by 316 percent compared to the same time period last year, according to preliminary data released by the ADL.

TELEGRAM: THE DARK ADJACENT PLATFORM

Telegram has been referred to as a “darknet adjacent” social media platform due to its weaponization by terrorists, extremist organizations and cyber criminals of every sort for malicious and illicit activities.

Telegram, headquartered in Dubai, was established in 2013 as a cloud-based, encrypted instant-messaging and audio-calling app that allows a user to send messages, photos, videos and files of any type to an unlimited list of contacts for free.

On Telegram, a user can create a group for up to 200,000 people and channels for broadcasting to unlimited contacts or audiences.

According to the WJC report, “Hamas’ social footprint on Telegram consists of 11 channels – five are associated with the group’s militant wing, the Qassam Brigades, two belong to the Hamas Information Office, and four belong to the group’s Politburo, including Ishmael [sic] Haniyeh, the head of the Hamas Politburo, and Saleh al-Arouri, Hamas’ head of West Bank Affairs.”

Telegram supports most file types with a limit of 2 GB per file, and features built-in tools for bulk messaging and spreading content to other platforms. Telegram bots can support multiple languages that adapt to the users’ language settings in the app.
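As a small illustration of that last point, here is a minimal sketch of a Telegram bot that adapts its reply to the sender’s app language, using Telegram’s documented Bot API over plain HTTPS. The token and reply strings are placeholders; the getUpdates/sendMessage methods and the language_code field are part of the public API.

```python
# A minimal language-adapting Telegram bot sketch (placeholder token/replies).
import requests

TOKEN = "123456:PLACEHOLDER"  # hypothetical bot token
API = f"https://api.telegram.org/bot{TOKEN}"

REPLIES = {"en": "Hello!", "ar": "مرحبا!", "ru": "Привет!"}

def poll_once() -> None:
    updates = requests.get(f"{API}/getUpdates", timeout=10).json()
    for update in updates.get("result", []):
        message = update.get("message")
        if not message:
            continue
        # Telegram includes the sender's app language setting, if shared.
        lang = message.get("from", {}).get("language_code", "en")
        reply = REPLIES.get(lang[:2], REPLIES["en"])
        requests.post(
            f"{API}/sendMessage",
            json={"chat_id": message["chat"]["id"], "text": reply},
            timeout=10,
        )

poll_once()
```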

According to a 2022 UNESCO report, Telegram has a “very libertarian ethos, and barely moderates its community. It often refuses to cooperate with law enforcement, including in democratic countries.”

Due to Telegram’s relative lack of content moderation, the app also allows users to publish unverified information and to post whatever footage and source material they choose.

On October 13, 2023, Telegram founder and CEO Pavel Durov said he would not remove Hamas Telegram channels and claimed that these channels “serve as a unique source of first-hand information for researchers, journalists and fact-checkers.”

By October 25, 2023, Telegram had announced it was banning Hamas and its militant wing from its platform for users on Android operating systems; however, Hamas-affiliated channels like “Gaza Now” remain active, and content from its Telegram account can be shared worldwide.

HOLOCAUST DENIAL AND DISTORTION

Both traditional and generative AI technologies pose significant risks of distorting the history of the Holocaust. A 2024 UNESCO report titled, “AI and the Holocaust: rewriting history?” highlighted five major concerns with the impact of AI technology on understanding the Holocaust:

  1. AI-automated content may invent facts about the Holocaust
  2. Historical evidence can be falsified through the use of deepfake technology
  3. AI models can be manipulated or hacked by online extremists to spread hate speech
  4. Algorithmic bias can spread Holocaust denial
  5. AI technology can oversimplify history

EDITORS NOTE: This Canary Mission report is republished with permission. ©All rights reserved.

Artificial Intelligence and the Problem of Personality

There’s an old adage that says if you talk to yourself, you needn’t be worried. You need to be worried only when you begin to answer yourself.

Nearly 20 years ago, during the advent of the mobile phone’s adoption of Bluetooth, a strange phenomenon began to happen. You’d begin to see people walking by themselves on the street talking, having animated conversations. In the years before, such behavior was attributed to mental illness. A person just didn’t have an out-loud conversation with no one in their vicinity. It took a few years of getting used to, but now it’s commonplace. People talked into the air daily, but they were talking to another real person somewhere on the other end of the relays of bits of radio and telephone data. We weren’t talking to ourselves, and we certainly weren’t answering ourselves.

These days, I’m not so sure. We now inhabit a world where people talk routinely to small bricks of metal, glass, and plastic. And not only are we having words with these silicon wonders — the silicon wonders are talking back. We ask questions, directions, and give orders to these bricks, and the bricks reciprocate. We form relationships of a sort, we make conversation, and increasingly trust what they tell us. But where will this take us?

Intelligence and Words

Words tie humanity and history together. God spoke words as he created the world. Creation is replete with words and communication. The Bible is God’s word, and his word is the source of all wisdom (Proverbs 2:6). As the author of Hebrews declared, “Long ago, at many times and in many ways, God spoke to our fathers by the prophets, but in these last days he has spoken to us by his Son” (Hebrews 1:1–2). God speaks to his people, and his speech culminates in the revelation of his son Jesus, himself the word of God.

God is not the only speaker in the cosmos, of course. Humans began saying words in the garden — and it wasn’t just to each other. Mankind has a long history of talking not only to God, but also to non-humans. Adam named the animals (did he tell them their names?), and Eve’s consequential conversation with the serpent reminds us that humans don’t always reserve their speech for other people. Balaam nonchalantly spoke with his donkey, and in Revelation, an eagle cries woes and warnings (Revelation 8:13) to whomever would hear.

Even after biblical times, it was reported that St. Francis preached to the birds, and anyone who has owned a puppy is acquainted with telling it “no!” It’s not the same as talking to a person, but most animals have some semblance of a personality. They’re not persons created with the imago dei, but their ability to understand communication on some level — intelligence — lends a rightness to conversing with them. We can tell Lassie to sit and then reward her with a “Good dog!” without betraying the natural order of dominion.

Humans also have a history of talking to inanimate objects, but the communication here tends to be more one-sided. The futility of small-engine repair has been the occasion for this writer’s own unkind words to his string trimmer, and when my truck began to break down on the interstate, I spoke many words of encouragement for it to make it to the next exit. Moses was told by God to speak to a rock, and his disobedience cost him dearly. People talk to things all the time, but things we talk to lack even a hint of personality, much less intelligence. We bless and we curse the things of this world, but we have no expectation of the things of this world blessing and cursing us in return.

A New Personality

Enter artificial intelligence (AI). Neither human nor animal, AI is categorically a thing of this world — a machine. But it is not a machine in the way a bicycle is a machine, nor is it even in the same vein as a calculator. A person inputs manual instruction to a bicycle, and the bicycle predictably moves through space and time. A person inputs numbers and commands into a calculator, and the calculator outputs a predictable result. The AI large-language models of today certainly receive input, but the output AI generates using the infinite possibilities of language is far from predictable.

Modern AI ebbs and flows from a near-infinite stream of words. Continually learning, it can interpret natural language better than your trained Springer Spaniel, and sometimes better than the people working your local drive-thru. It’s not surprising, therefore, that we’ve begun to attribute personality to AI. The unceremoniously named ChatGPT notwithstanding, many AIs have been personified with names like Siri, Grok, Gemini, Claude, Ada, and Jasper.

But names are just the beginning. The Channel One news agency, set to launch in 2024, takes personification far beyond chatbots by presenting a newscast populated by AI-generated news anchors who look and sound like real people, giving new meaning to the phrase “talking heads.” In 2023, the Hollywood SAG-AFTRA strike addressed the looming specter of AI replacing both writers and actors. As deepfakes become more and more realistic, the value of a picture will be reduced from a thousand words to only three: Is it real?

We can only expect the artificial personalities of AI to become more and more lifelike. Right now, physical presence may be lacking, but the behaviors precipitated by the COVID years showed us that physical presence has been devalued to the extent that many in Western culture deem it unnecessary. The increased comfort with living virtually has opened wide the door for people to replace personalities they find less interesting with artificial ones who conform to their desires. The advent of physical artificial intelligence — the pending rise of the robots — will only deepen the dependence upon personality for human interaction with AI.

The Image of God and the god in the Machine

Humans tend to personify that which they deem intelligent. The psalmist noted this tendency of the idol makers in Psalm 135:15–18:

The idols of the nations are silver and gold,
the work of human hands.
They have mouths, but do not speak;
they have eyes, but do not see;
they have ears, but do not hear,
nor is there any breath in their mouths.
Those who make them become like them,
so do all who trust in them.

Now, we have the inverse. Today’s artificial intelligences aren’t silver and gold — they’re silicon and copper. They don’t have mouths, but they speak. They don’t have eyes, but they see. They don’t have ears, but they hear everything.

Everything may be turned upside down, but it all has a familiar idolatrous ring to it. This is not to say that all artificial intelligence is idolatry. It is not. But when we begin to interact with AI as we would another person — when we attribute personality to that which isn’t a person — we bring ourselves dangerously close to an ancient folly wrapped in a modern setting.

In 2023, Elon Musk launched his AI company, xAI, with the goal of using AI “to understand the true nature of the universe.” This is a tall order. Failing to exhibit the imago dei, AI perfectly fulfills the role of deus ex machina. It is an all-too-convenient solution to humanity’s problems, especially when it reflects the real intelligence shortcomings of its creators.

Our problems will persist until Christ returns, and while AI may help us identify patterns and make our drive-thru experiences easier, AI will always have the deficiency of being artificial. As Psalm 135:18 warns us, we become like those in whom we place our trust. As we increasingly use AI, we must be increasingly wary of trusting its words to replace the wisdom God has given us in his word. Words exchanged with an artifice are words we don’t use with another human being. Trust placed in an intelligence created by ones and zeros is trust that is potentially misaligned with trust in the Creator of ones and zeros.

Let us trust in what is real. We won’t find the answers to the universe in AI, because in the end, we’re simply talking to — and answering — ourselves.

This article was originally published at Christ Over All. Used with permission.

AUTHOR

Jared Bridges

Jared Bridges is editor-in-chief of The Washington Stand.

RELATED ARTICLE: Viral ‘AI Demon’ Says More about Ourselves Than Anything Else

EDITORS NOTE: This The Washington Stand column is republished with permission. All rights reserved. ©2024 Family Research Council.


The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.

‘No Tech For Apartheid’: Google Workers Protest Company’s Services To Israel

A group of Google employees protested Tuesday in California and New York against the information technology corporation’s provision of cloud computing services to Israel, according to reports.

The protesters in Google’s Sunnyvale, California location entered the office of Google Cloud CEO Thomas Kurian Tuesday morning and said they would leave only if Google would withdraw from Project Nimbus, a $1.2 billion joint contract with Amazon to provide cloud services and data centers to the Israeli government, the Washington Post reported.

A similar protest took place in a common space within Google’s New York office, Zelda Montes, one of the protesters, told the outlet. A banner reading “Google Worker Sit-In,” “Against Project Nimbus” and “No Tech for Genocide” hung above the common space, the outlet reported.

A protester wore a T-shirt sporting the slogans “Googler against Genocide” and “No Tech for Apartheid,” according to Gizmodo.

The provision of public cloud services to the Israeli government is the first of five “central layers” of the “multi-year, large-scale flagship project” that started in 2019, according to Israel’s Government Procurement Administration. Google and Amazon beat out Microsoft, Oracle and IBM, the other bidders for the contract, in April 2021, Reuters reported.

Protests from within Google and Amazon have erupted in various forms since then. More than 90 Google employees and more than 300 Amazon employees collectively signed an anonymous Oct. 2021 letter accusing the companies of “aggressively” pursuing military and law enforcement contracts that “are part of a disturbing pattern of militarization, lack of transparency and avoidance of oversight.” They called on both companies to “pull out of Project Nimbus and cut all ties with the Israeli military.”

Two months after Hamas’ Oct. 7, 2023 terror attack on Israel, workers staged a “die-in” at Google’s downtown San Francisco offices to protest against Israel’s reported use of what appeared to be a separate artificial intelligence program—termed “the Gospel”—in its military response to Hamas, according to the San Francisco (SF) Chronicle.

Google fired an employee who heckled the corporation’s top executive in Israel during a conference in New York in March, leading Montes to contemplate the possibility of being fired, too, according to the Washington Post report. “I have been waiting for months for people to be in the same position as me and be ready to put their job on the line,” Montes told the outlet in part.

Montes also reportedly alleged that Google lied to its employees about Project Nimbus.

A Google spokesperson reportedly told the SF Chronicle that Project Nimbus was a public service program, not a military one.

AUTHOR

JOHN OYEWALE

Contributor.

RELATED ARTICLES:

“Kill All Jews” at Google Leads to Anti-Jewish Google Employees ARRESTED While Occupying CEO’s Office With Terrorist Demands

Hamas Applauds Ex-Google Employee Who Resigned Over Company’s Israel Ties

Police Enter Google Offices, Arrest Nine Employees After Some Refuse To Leave Google CEO’s Office For Hours

Video Appears To Show Pro-Palestinian Activist Shoving Israeli Arab At Columbia University

Biden Admin Funded Study Involving Researcher From Iranian University Linked To Nuclear Program

EDITORS NOTE: This Daily Caller column is republished with permission. ©All rights reserved.

Artificial Intelligence in Political Campaigns: Benefits, Risks, and Ethical Considerations

Have you ever wondered how Artificial Intelligence (AI) is reshaping the world of politics? The topic is as fascinating as it is potentially concerning. AI influences political campaigns in many ways, from analyzing voter sentiment to tracking campaign costs and conducting opposition research.

AI’s impact on politics is far-reaching. Imagine understanding voter preferences and outreach strategies in real-time, thanks to AI’s ability to analyze social media trends and sentiment. By examining influencers, trends, and public sentiment, campaigns can better tailor their messages to connect with voters. It’s like having a real-time pulse on the electorate.
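To give a flavor of what “analyzing sentiment” means in practice, here is a deliberately tiny, lexicon-based sketch. The word lists and posts are invented, and production systems use trained language models, but the shape of the input-to-score pipeline is the same.

```python
# A toy sentiment scorer over campaign-related posts (invented lexicon/posts).
POSITIVE = {"great", "love", "support", "hope", "strong"}
NEGATIVE = {"bad", "hate", "corrupt", "weak", "fail"}

def sentiment(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "I love the candidate's strong stance on jobs",
    "Another corrupt promise, this plan will fail",
    "Rally tonight downtown",
]

for post in posts:
    print(f"{sentiment(post):+d}  {post}")
# +2  I love the candidate's strong stance on jobs
# -2  Another corrupt promise, this plan will fail
# +0  Rally tonight downtown
```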

Not only can AI help campaigns understand voters, but it can also assist in measuring the effectiveness of various campaign activities. AI can track everything from advertising to canvassing and events, providing insights into what’s working and what’s not. Campaigns can make data-driven decisions to optimize their strategies and maximize their impact.

But that’s not all. AI isn’t just about understanding voters; it’s also about understanding the money. It can track campaign spending by identifying discrepancies and patterns in politicians’ spending habits. This is crucial for maintaining transparency and accountability in the political process. AI-powered auditing tools can streamline the financial oversight process, making it more efficient and accurate.

And let’s not forget the advantage it gives political parties in opposition research. AI can dig deep into opponents’ voting records and past statements, providing valuable insights to gain a competitive edge.

However, there’s a darker side to AI in politics. Privacy and ethical concerns loom large. For instance, using AI to gather and analyze personal voter data raises serious privacy and data protection issues. We’re talking about your personal information, and the potential for misusing it should not be underestimated.

Moreover, there’s the unsettling prospect of AI making political campaigns more deceptive. AI can be used to create fake images and audio. I am sure you know about deepfakes, which are synthetic media that appear to be real but are actually manipulated. Such powerful methods can be employed to launch misleading campaign ads targeting other candidates. AI can be used to spread disinformation and propaganda. Governments and powerful elites can use AI to standardize, control, or repress political discourse, undermining its fairness and quality.

AI algorithms can inadvertently perpetuate existing biases. For instance, if the data used to train the AI is biased, the AI’s outputs can also be unfair. This can lead to unjust targeting of specific demographics or communities. Indeed, it’s a digital battleground where authenticity can be blurred, and the line between fact and fiction becomes hazy.

Even so, AI can bring several benefits to political campaigns beyond tracking costs and conducting opposition research.

Here are some potential benefits:

  • Enhancing Communication: AI can improve communication by providing personalized messages to voters. It can analyze the preferences and behaviors of individual voters and tailor the communication accordingly. This can lead to more effective campaigns and better engagement with voters.
  • Improving Campaign Strategy: AI can analyze large amounts of data to identify trends and patterns that can inform campaign strategy. This can help campaigns anticipate potential challenges and develop effective responses.
  • Facilitating Transparency: AI can help campaigns maintain transparency by objectively analyzing their activities. This can help campaigns manage their reputation and respond to criticisms effectively.
  • Generating Political Messaging: AI can generate political messaging based on public discourse, campaign rhetoric, and political reporting. This can help campaigns develop effective messaging strategies and reach a wider audience.
  • Creating Political Parties: AI could be used to develop political parties with their platforms, attracting human candidates who win elections. This could revolutionize the political landscape by creating new political parties based on AI-generated platforms.
  • Fundraising: AI is capable of fundraising for political campaigns. It could take a seed investment from a human controlling the AI and invest it to yield a return. It could start a business that generates revenue or political messaging to convince humans to spend their own money on a defined campaign or cause.

It’s intriguing, isn’t it? The world of AI in politics is both a boon and a minefield. It can empower campaigns and voters but also carries risks of manipulation and deception. As we continue to explore the vast landscape of AI in politics, it’s crucial to tread carefully, adhere to ethical guidelines, and protect personal data from unauthorized access.

So, the next time you follow a political campaign, remember that behind the scenes, AI might be at work, shaping the discourse, analyzing voter sentiment, and, in some cases, creating an illusion of reality. How AI and politics interact is a complex and evolving story, and it’s up to all of us to stay informed, vigilant, and engaged in this digital age of politics. The journey is far from over, and the questions are still unfolding. What will we discover next?

©2024. Amil Imani. All rights reserved.

RELATED ARTICLES:

Google’s Gemini AI exposes tech’s left-leaning biases

Musk To Start His Own Non-Woke AI Company To Compete With OpenAI

AI chat bots are automated Wikipedias warts and all

Top scientist warns AI could surpass human intelligence by 2027 – decades earlier than previously predicted

POST ON X:

AI Enters Politics: Pay No Attention to the Man Behind the Curtain

First they came for your drive-thru, then they came for your pastors. Now they’re here for your legislators.

The Associated Press reported recently that in Brazil, the first known law generated by artificial intelligence (AI) was passed in October. City councilman Ramiro Rosário of Porto Alegre, Brazil, apparently had some trouble crafting a city ordinance. Rosário, instead of shopping around for model legislation from another town or special interest group, did the most 2023 thing he could: he asked ChatGPT. The AP reports:

“Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

“‘If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,’ Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

“‘It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,’ he added.”

When he was facing leadership challenges in the church due to his age, the Apostle Paul wrote to Timothy, “Let no one despise you for your youth.” Now we have Brazilian lawmakers speaking up for the oppressed AI, which apparently gets no respect. The councilman is not only the champion of the stolen water meter, he’s the voice of AI in government, speaking up for the little bot who has none.

I don’t fault an ill-equipped lawmaker for getting help doing his job, but it does say something about a society where a presumably elected official needs to resort to something that an adept 10-year-old can do. It raises the question: is the councilman even needed if his duties have been reduced to writing a query instead of writing legislation?

After President Lincoln had put Ulysses S. Grant in charge of the Union forces during the Civil War, there was worry among some as to whether the army could match Lee’s rebel forces. When someone asked about Grant’s chances, Doris Kearns Goodwin writes in “Team of Rivals: The Political Genius of Abraham Lincoln” that Lincoln told this anecdote:

“The question reminds me of a little anecdote about the automaton chess player, which many years ago astonished the world by its skill in that game. After a while the automaton was challenged by a celebrated player, who, to his great chagrin, was beaten twice by the machine. At the end of the second game, the player, significantly pointing his finger at the automaton, exclaimed in a very decided tone, ‘There’s a man in it.’”

Putting aside the fact that there were apparently “automaton chess players” before the Civil War (who knew?), it was clear then that military and political operations were not automatic. Military operations required people. Political operations required people. Even mechanical chess players required people.

That remains true today. Politics — nasty business as it is — requires people. While we may joke about it being better off without them, we should think long and hard before we relinquish our leadership to something that doesn’t have to eat three squares a day. ChatGPT may be able to compose a water meter ordinance, but it won’t inspire people to use their water in a better way. People need to be led by people.

Just like the automated chess player, for ChatGPT there’s also “a man in it.” AI may have a body of silicon, precious metals, and transistors, but its intellectual framework of ones and zeros can never amount to a soul. AI may be able to write its own answers, and interpret what we want, but it can’t run without programming and someone feeding its server farms the electricity it needs.

No matter how much the hype-mongers of artificial intelligence may tell us to pay no attention to the man behind the curtain, there’s always a man in it. The question for us as we see the advent of AI applied to politics, is which man do we want? The one we elected, or the people doing the programming? It’s only a matter of time before we’re faced with this here in the U.S. And as much as we think a robot might do a better job than whichever current leader you’ve elected (I bet you can think of a few…), the solution is not to defer to some unelected artificial Oz behind the curtain. The solution is to elect better leaders.

AUTHOR

Jared Bridges

Jared Bridges is editor-in-chief of The Washington Stand.

RELATED ARTICLE: Thwarting the Left’s Assault on America

EDITORS NOTE: This Washington Stand column is republished with permission. All rights reserved. ©2023 Family Research Council.



The Biden Admin Is Pursuing Total Domination Of Americans’ Digital Lives

President Joe Biden’s administration has recently taken unprecedented action to exert influence over Americans’ digital lives, including broadband internet, net neutrality, social media and artificial intelligence (AI).

The Biden administration is pushing for the Federal Communications Commission (FCC) to claim substantial control over the internet, pursuing a court ruling to gain the right to censor Americans and has issued a broad executive order to regulate AI. Such governmental dominance over the digital realm could have significant adverse effects on American consumers, experts told the Daily Caller News Foundation.

“These latest moves shift the balance between governmental oversight and individual freedoms heavily toward the government,” Internet Accountability Project Founder and President Mike Davis told the DCNF. “Excessive government control from the Biden administration would curtail the very essence of a free and open digital environment, compromising privacy rights and growing the alliance between Big Tech and the federal government.”

For instance, the Biden administration has called upon the Democrat-controlled FCC to implement new rules designed to tackle “digital discrimination,” a move that experts argue would drastically broaden the commission’s regulatory authority. The primary focus of the rules, on which the commission will vote Nov. 15, is to combat “digital discrimination of access” to broadband internet, as expressed in section 60506 of Biden’s 2021 Infrastructure Investment and Jobs Act.

“Biden’s plan would be an unprecedented expansion of regulatory power that grants broad authority to the administrative state over internet services,” Heritage Foundation Tech Policy Center Research Associate Jake Denton told the DCNF. “This plan empowers regulators to shape nearly all aspects of how ISPs [internet service providers] operate, including how they allocate and spend their capital, where they build new broadband infrastructure projects, and what types of offerings are made available to consumers.”

“If enacted, these centralized planning measures could profoundly transform the digital experiences of consumers — a troubling prospect that should worry all Americans about what the future of the internet could look like in this country,” Denton added.

Furthermore, the Biden administration asked the Supreme Court to halt an order that blocked it from engaging in social media censorship after an appeals court partially affirmed it in September. The Supreme Court granted a pause on the injunction in October, but also agreed to consider Missouri v. Biden, a free speech case challenging the administration’s endeavors to suppress social media content, according to a court order.

“President Biden’s pronounced efforts to extend government control over the expansive tech landscape point toward an unprecedented level of government intervention in Americans’ digital lives and basic freedoms,” Davis told the DCNF. “Consolidation of power over the tech space within the government, working in tandem with its corporate allies in Big Tech, will stifle innovation, freedom of speech, and freedom of association. Diversity of ideas and technological advancements will suffer.”

A House Judiciary Committee report published on Monday revealed examples of internet censorship by the federal government, including the Biden administration.

“We see [the Biden administration trying to exercise control of the internet] from the social media side, where there was just new evidence released by [House Judiciary Committee] Chairman Jim Jordan that showed the collusion between the government and social media to censor individual Americans that were simply exercising their free speech rights,” FCC Commissioner Brendan Carr told the DCNF.

Moreover, the FCC is also pushing to restore net neutrality, making a significant move toward reestablishing it in October by voting in favor of a notice of proposed rulemaking. Net neutrality rules force ISPs to provide equal access to all websites and content providers at the same costs and speeds, regardless of size or content.

“We have seen several recent actions that shift from a light touch, free market approach to a more regulatory and precautionary approach including the revival of ‘net neutrality’ … and presumptions that AI should be regulated by the government in a top-down approach,” Cato Institute Technology Policy Research Fellow Jennifer Huddleston told the DCNF. “These actions are concerning as the light touch approach the U.S. has traditionally taken has benefited consumers by allowing entrepreneurs and innovators to enter the market with minimal government intervention or barriers.”

Biden recently signed the first-ever AI executive order at the end of October, and it is sweeping, covering areas such as safety, security, privacy, innovation and “advancing equity,” according to its fact sheet.

“The through line between all of these various things that are going on, including the administration’s new AI executive order … is there should be nothing that takes place on the internet that the administrative state doesn’t have a say to second guess it,” Carr told the DCNF.

The White House and FCC did not respond to the DCNF’s requests for comment.

AUTHOR

JASON COHEN

Contributor.

RELATED ARTICLE: ‘Exposing The Corruption’: Senate Republicans Target Biden DHS ‘Censorship’ Agency

EDITORS NOTE: This Daily Caller column is republished with permission. ©All rights reserved.


All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.

Everything Solid Melts into Air

Francis X. Maier: The tech revolution has undermined literacy, the supernatural, and sexuality, as it boosted consumer appetites and eroded habits of responsible ownership and mature political participation.


Print literacy and the ownership of property anchor human freedom.  Both can be abused, of course.  Printed lies can kill.  Owning things, and wanting more of them, can easily morph into greed.  But reasonable personal ownership of things like a home, tools, and land tutors us in maturity.  It enhances a person’s agency, and thus his dignity.  It grounds us in reality and gives us a stake in the world, because if we don’t maintain and protect what we have, we lose it, often at great personal cost. The printed word, meanwhile, feeds our interior life and strengthens our ability to reason.

Together they make people much harder to sucker and control than free-floating, human consumer units.  This is why the widespread ownership of property by individuals – or the lack of it – has big cultural and political implications, some of them distinctly unhappy.

I mention this because I’ve made my living with the printed word.  And it occurred to me (belatedly, in 2003) that while I own the ladder in my garage, the hammer and wrench in my storeroom drawer, and even the slab of dead metal hardware and electronics that I work on, I don’t own the software that runs it or enables me to write.  Microsoft or Apple does, depending on the laptop I use. . .and I just didn’t notice it while I was playing all those video games.

What finally grabbed my attention, exactly 20 years ago, was The dotCommunist Manifesto by Columbia University law professor Eben Moglen.  Here’s a slice of the content:

A Spectre is haunting multinational capitalism — the spectre of free information. All the powers of “globalism” have entered into an unholy alliance to exorcize the spectre: Microsoft and Disney, the World Trade Organization, the United States Congress and European Commission.

Where are the advocates of freedom in the new digital society who have not been decried as pirates, anarchists, communists?  Have we not seen that many of those hurling the epithets were merely thieves in power, whose talk of “intellectual property” [rights] was nothing more than an attempt to retain unjustifiable privileges in a society irrevocably changing. . . .

Throughout the world, the movement for free information announces the arrival of a new social structure, born of the transformation of bourgeois industrial society by the digital technology of its own invention. . . .[The] bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.  Constant revolutionizing of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation, distinguish the bourgeois epoch from all earlier ones. . . .All that is solid melts into air.

And so on.  The rest of it is standard Marxist cant, adapted to the digital age.  But for me it was, and still is, compelling prose.  And this, despite the fact that the original Communist Manifesto led to murderous regimes and mass suffering, and the awkward fact that Prof. Moglen’s dream of abolishing “intellectual property” would wipe out my family’s source of income along with an entire class of more or less independent wordsmiths.

What Moglen did see though, earlier and more clearly than many other critics, was the dark side of the modern digital revolution.  Microsoft, Apple, Google, and similar corporations have created a vast array of marvelous tools for medicine, communications, education, and commerce.

I’m writing these words with one of those tools.  They’ve also sparked a culture-wide upheaval resulting in social fragmentation and bitter antagonisms.  Their ripple effect has undermined the humanities and print literacy, obscured the supernatural, confused our sexuality, hypercharged the porn industry, and fueled consumer appetites while simultaneously eroding habits of responsible ownership and mature political participation.

They promised a new age of individual expression and empowerment.  The reality they delivered, in the words of a constitutional scholar friend, is this:  “Once you go down the path of freedom, you need to restrain its excesses.  And that’s because too much freedom leads to fragmentation, and fragmentation inevitably leads to a centralization of power in the national government.  Which is why today, we the people really aren’t sovereign.  We now live in a sort of technocratic oligarchy, with the congealing of vast wealth in a very small group of people.”

Nothing about today’s tech revolution preaches “restraint.”

I’m a Catholic capitalist.  I’m also, despite the above, a technophile.  America’s economic system was very good to my immigrant grandparents.  It lifted my parents from poverty. It has allowed my family to experience good things unimaginable to my great-grandparents.  But I have no interest in making big corporations – increasingly hostile to Christian beliefs – even more obscenely profitable and powerful.

So, promptly after reading that Eben Moglen text two decades ago, I dumped my Microsoft and Apple operating systems.  I became a Free Software/Open Software zealot.  I even taught myself Linux, a free operating system with free software largely uncontaminated by Big Tech.

And that’s where I met the CLI: the “command line interface.”  Most computers today, even those running Linux, use a pleasing GUI, or graphical user interface.  It’s the attractive, easily accessible desktop that first greets you on the screen.  It’s also a friendly fraud, because the way machines operate and “think” is very, very different from the way humans imagine, feel, and reason.

In 2003, learning Linux typically involved the CLI: a tedious, line-by-line entry of commands to a precise, unforgiving, alien machine logic.  That same logic and its implications, masked by a sunny GUI, now come with every computer on the planet.

I guess I’m saying this:  You get what you pay for. And sometimes it’s more than, and different from, what you thought.  The tech revolution isn’t going away.  It’s just getting started.  And right on time, just as Marx and Moglen said, “all that is solid melts into air.”  Except God.  But of course, we need to think and act like we believe that.

You may also enjoy:

Joseph Cardinal Ratzinger’s God is the genuine reality

David Warren’s Regeneration

AUTHOR

Francis X. Maier

Francis X. Maier is a senior fellow in Catholic studies at the Ethics and Public Policy Center.

Artificial Intelligence Going Rogue: Oppenheimer Returns

Even restrained advocates of tech caution that equating WMD with rogue AI is alarmist: the former is exclusively destructive and deployed only by nation-states; the latter can be widely constructive and deployed even by individuals. But this distinction is dangerously sweeping.

In the 20th century, when J. Robert Oppenheimer led work on the first WMD, no one had seen a bomb ravage entire cities. Yet, as soon as he saw the ruin, Oppenheimer’s instinct was to scale down the threat of planetary harm. As an afterthought, it was obviously too late. In the 21st century, Big Tech, fashioning AI’s contentious future, pretends, through talk of Responsible AI, to want to evade Oppenheimer’s error. But there’s a crucial difference. AI’s capacity to go rogue at scale is infinitely greater than that of WMDs; even afterthought may be too late.

Many argue for regulating, not banning, AI, but who’ll regulate, soon enough, well enough? Or, is banning better until the world thinks this through?

Slippery slope

Recently, IBM and Microsoft renewed commitments to the Vatican-led Rome Call for AI Ethics to put the dignity of humans first. Then Microsoft undermined its AI ethics team, and Google its Ethical AI team, revealing hypocrisy over the spirit of these commitments, never mind the letter. Tech’s now walking back some of these betrayals, fearing backlash, but Rome’s call isn’t based on a hunch about tech overreach. Tech, in thrall to themselves, not just their tools, may put humanity last.

Disconcertingly, tech oracle Bill Gates is guarded but glib: “humans make mistakes too.” Even he suspects that AGI may set its own goals: “What… if they conflict with humanity’s interests? Should we prevent strong AI? … These questions will get more pressing with time.” The point is: we’re running out of time to address them if AGI arrives sooner than predicted.

AI amplifies the good in humans, mimicking memory, logic, reasoning, in galactic proportions, at inconceivable speeds. AGI threatens to imitate, if dimly, intuitive problem-solving, critical thinking. AI fanatics fantasise about how it’ll “transform” needy worlds of food, water, housing, health, education, human rights, the environment, and governance. But remember, someone in Genesis 3:5 portrayed the prohibited tree too as a promise, of goodness: “You will be like God.”

Trouble is, AI will amplify the bad in humans too: in those proportions, at that speed. Worse, androrithms relate to thinking, feeling, willing, not just calculating, deducing, researching, designing. Imagine mass-producing error and corruption in distinctly human traits such as compassion, creativity, storytelling; indefinitely, and plausibly without human intervention, every few milliseconds.

What’s our track record when presented power on planetary scale?

Today’s WMD-capable and willing states shouldn’t be either capable or willing; that they’re often both is admission of catastrophic failure to contain a “virus”. If we’d bought into the “goodness” of n-energy rather than the “evil” of n-bombs, over half, not just a tenth, of our energy would be nuclear. Instead, we weaponised. Do the rewards of “nuclear” outweigh its risks? Not if you weigh the time, money and effort spent in reassuring each other that WMDs aren’t proliferating when we know they are, at a rate we don’t (and states won’t) admit. Not if you consider nuclear tech’s quiet devastation.

Oppenheimer’s legacy is still hellfire, not energy!

Danger zone

Some claim that regulating, before sizing up AI’s power, will stifle innovation. They point to restraint elsewhere. After all, despite temptations, there’s been no clone-race, there are no clone-armies, yet. But — this is important — ethics alone didn’t pause cloning. Those constraints may not cramp AI’s stride.

Unlike rogue cloning, rogue AI’s devastation might not be immediate (disease) or visible (death), or harm only a cluster (of clone-subjects). When AI does go rogue, it’ll embrace the planet; on a bad day that’s one glitch short of a death-grip. Besides, creating adversarial AI is easier than creating a malicious mix of enriched uranium-plutonium. That places a premium on restraint.

But to Tech, restraint is a crime against the self, excess is a sign of authenticity, sameness isn’t stagnation but decay, slowness is a character flaw. And speed isn’t excellence, it’s superiority. Tech delights in “more, faster, bigger”: storage, processing power, speed, bandwidth. The AI “race” isn’t a sideshow, it’s the main event. Gazing at its creations, Tech listens for the cry, “Look Ma, no hands!” With such power, often exercised for its own sake, will Tech sincerely (or sufficiently) slow the spread of AI?

AI isn’t expanding, it’s exploding, a Big Bang by itself. In the 21st century alone, AI research grew 600 percent. If we don’t admit that, for all our goodness, we’re imperfect, we’ll rush, not restrict AI. Unless we quickly embed safeguards worldwide, rogue AI’s a matter of “when” not “if”. Like a subhuman toddler, it’ll pick survival over altruism. Except, where human fates are concerned, its chubby fists come with a terrifying threat of omnipresence, omniscience, and omnipotence.

The AI-supervillain with a god-complex in the film Avengers: Age of Ultron delivers prophetic lines to humans. His (its?) mocking drawl pretends to be beholden; it’s anything but: “I know you mean well. You just didn’t think it through…How is humanity saved if it’s not allowed to… evolve? There’s only one path to peace. (Your) extinction!”

Presumably in self-congratulation, Oppenheimer nursed an exotic line, mistakenly thought to be from the Bhagavad Gita but more likely from verse 97 of poet-philosopher Bhartrihari’s Niti Sataka: “The good deeds a man has done before, defend him.” But Oppenheimer didn’t ask if his deeds were good, or true, or beautiful. Worse, he glossed over another verse, indeed from the Gita (9:34): “If thy mind and thy understanding are always fixed on and given up to Me, to Me thou shalt surely come.”

“The will”, as a phrase, doesn’t require the qualifier “human will” because it’s distinctly human anyway, involving complexities we haven’t fathomed. Understanding it requires more than a grasp of which neurons are firing and when.

Vast temptations

Granted, the mind generates thought, but the will governs it. And, as Thomas Aquinas clarified, the will isn’t about ordering the intellect, but ordering it toward the good. That is precisely why techno-supremacists alone shouldn’t shape what’s already affecting vast populations.

AI is too seductive to slow or stop. Tech will keep conjuring new excuses to plunge ahead. Sure, there are signs of humility, of restraint. As governments write laws, it is compliance that will act as a brake, delaying, if not deterring, disaster. But Tech’s boasts prove that it isn’t AI they see as the savior, but themselves. Responsible AI needs responsible leaders. Are Tech’s leaders restrained, respectful? Or does that question, worryingly, answer itself?

Professor of ethics Shannon French warns that when Tech itself calls for temperance, that’s warning enough. Their altruistic alarmism seems a ruse to accelerate AI (more funding, more research) while pretending to arrest it (baking in checks and balances). Instead, what’s getting baked in? “Bias is getting baked” into systems used by industries and governments before they’re proven compatible with real-world lives and outcomes.

“People can’t even see into the black box and recognise that these algorithms have bias…data sets they were trained on have bias…then they treat the [results from] AI systems, as if they’re objective.”

Christopher Nolan’s film may partly, even unintentionally, lionise Oppenheimer as a Prometheus who stole fire from the gods and gave it to mankind. Pop culture lionises Tech too, as saviors breathing an AI-powered fire into machines. Except, any fire must be wielded by humans ordered toward truth, goodness, beauty.

The name “Prometheus” is considered to mean “forethought”, but Tech is in danger of merely aping Oppenheimer’s afterthought. Remember, self-congratulatory or not, Oppenheimer was fond of another Gita line (11:32): “Now I am become Death, the destroyer of worlds”.

AUTHOR

RUDOLPH LAMBERT FERNANDEZ

Rudolph Lambert Fernandez is an independent writer who writes on culture and society. Find him on Twitter @RudolphFernandz.

EDITORS NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

Hey Creativity, My Name’s AI: Is Artificial Intelligence Coming for Artistic Expression?

“A year or two back if you’d asked me whether graphic design was a safe job, I’d have said, yes, it was [a] pretty safe job — anything in the creative industries is a pretty safe job from racing against the machines. But now, actually, I would be somewhat worried if I was a graphic designer.”

That’s Toby Walsh, a professor of AI at the University of New South Wales. Like Walsh, many creatives are wondering how artificial intelligence and artistic expression will converge, given the explosion of AI art generators like Stable Diffusion, Midjourney, and Dall-E. Will intelligent technology mark the end of the creative industry, or will this become a whole new ground of exploration?

According to Rob Salkowitz, the time of the artist is running out.

Salkowitz, a senior contributor for Forbes, predicts that special effects, animation design, and even illustration and graphic design will soon be generated by AI. In his article on AI and art jobs, Salkowitz writes that these art tools are posing concerns for the industry and could easily become a “dagger to the throats of hundreds of thousands of commercial artists.” Similarly, artist Sebastian Errazuriz took to Instagram with this message, “Which artists will be the first to be replaced by artificial intelligence? Unfortunately, if you’re an illustrator — that’s you. … It takes a human about five hours to make a decent illustration to be published; it takes the computer five seconds.”

And AI pictures aren’t shabby, either. San Francisco-based illustrator Karla Ortiz points out that while there is work to be done in AI art, the results are adequate. Companies focused on quantity over quality, or with little room for a graphics budget, will turn to these inexpensive alternatives, cutting demand for entry-level designers. “Because the end result is ‘good enough,’” Ortiz says, “I think we could see a lot of loss of entry level and less visible jobs. This would affect not just illustrators, but photographers, graphic designers, models, or pretty much any job that requires visuals. That could all potentially be outsourced to AI.”

While AI is setting off alarms in the creative world, others are embracing the change. Jess Campitiello, a Digital Communications Specialist at Cornell Tech, argues that generative AI can help creatives by saving time during the conceptual stage. Artists can share AI thumbnails with clients instead of taking time to sketch ideas. Others see AI art tools as a way to artistically empower the average person. Still others, like Refik Anadol, view artificial intelligence as a way to experiment with new forms of art.

Anadol describes himself as a pioneer in the aesthetics of machine intelligence. He believes AI and technology can be assets to art, not competitors. “Quantum Memories,” one of Anadol’s convergent pieces of art and tech, combined over 200 million nature-related images and processed them through quantum computing and algorithms. The result: a giant LED screen of constantly fluid abstractions, an interactive experience based on audience movement and positions in real time. For Anadol, artificial intelligence opens new possibilities for all types of artistic techniques.

Throughout history, the emergence of new technology has destroyed some jobs and created others. The same holds true for artificial intelligence. While some anticipate the elimination of whole portions of the creative workforce, others are more optimistic. Regardless of how bleak the future may be for the industry, AI can never replace creativity.

Why? AI is not intrinsically creative. The art generators depend on human sources both for the artistic direction and source material. True creativity is not the sum of inputs and outputs but a reflection of God’s character.

In the first verses of Genesis, God acts as the supreme Creator. He thoughtfully delineates the order of his creation, taking delight in the work of his hands. In the Scriptures, God is compared to a potter and a designer or architect (Isaiah 64:8; Hebrews 11:10). The plans for the tabernacle, with precise instructions on material types, colors, and design elements, further reveal God’s attention to detail. The Creator gifted some of his artistic ability to humanity (Exodus 36:2). Only humans can creatively turn thoughts, experiences, and emotions into formats that others can appreciate and enjoy.

Toby Walsh gives pertinent insight. “Art is more than just making images that are realistic. It’s about asking questions, and addressing aspects of the human condition, whether that’s about falling in love and losing loved ones and human mortality and all of the troubling questions that art helps us to think about. Machines aren’t going to speak to us in the same way that artists speak to us because they don’t share our humanity.”

Even if the convergence of AI means the end of the art industry, creativity won’t die. Creativity is a special gift from the Creator, one that AI can never harness.

AUTHOR

Hannah Tu

EDITORS NOTE: This Washington Stand column is republished with permission. All rights reserved. ©2023 Family Research Council.


The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.

In a World Full of Robots, Humans Wanted

The K-12 district schools I went to while growing up in a Boston suburb look nearly the same today as they did when I attended them in the 1980s and 90s, when they also looked quite similar to how they did when my father attended those same schools in the 1950s and 60s. Sure, there are some new technologies and updated curriculum—and more testing—but for the most part, traditional schools haven’t changed much over the past few generations.

The world around those schools has, of course, changed beyond measure. The disconnect between the outdated structure of standard schooling and the economic and cultural realities of the innovation era is growing harder to ignore.

At a time when today’s top jobs didn’t even exist a decade ago, and many jobs of the next decade haven’t yet been invented, most young people continue to learn in conventional classrooms that train them to be passive bystanders rather than active, agile pathmakers in a complex, constantly changing culture.

This conditioning starts early. The exuberance and inquisitiveness that young children naturally display is quickly constrained within a system of coercive schooling that favors obedience and compliance over originality and curiosity. With the growth of universal preschool programs, more children today are beginning this standard schooling path when they are just barely out of diapers. They learn to color in the lines, to wait to speak, and to ask permission to use the bathroom. They learn that their interests and ideas are irrelevant, that their energy and enthusiasm are liabilities. They learn to need to be taught.

Indeed, as Ivan Illich wrote in his classic book, Deschooling Society: “School makes alienation preparatory to life, thus depriving education of reality and work of creativity. School prepares for the institutionalization of life by teaching the need to be taught.”

This may have been more tolerable at the dawn of the industrial era, when compulsory schooling statutes were first passed and when conventional schooling created a pipeline to factory jobs that required obedience and compliance. Even then, parents like Nancy Edison recognized that standard schooling could crush a child’s creativity. She pulled her son Thomas out of school after only a few short weeks when his teacher called him “addled.” From then on, Thomas Edison mostly directed his own education as a homeschooler, following his own interests and passions.

Later in life, one of the chemists working in Edison’s massive New Jersey laboratory concluded: “Had Edison been formally schooled, he might not have had the audacity to create such impossible things.” [i]

Today, we need more young people to grow up with the audacity to create the impossible things that will brighten our lives, enhance human flourishing, and improve our planet. We need more young people to nurture the qualities and characteristics that separate human intelligence from artificial intelligence. These human qualities—including curiosity, critical thinking, ingenuity, and an entrepreneurial spirit—are the same qualities that are so often eroded in our dominant system of traditional schooling.

To successfully coexist, compete, and cooperate with ever-smarter machines, humans need the chance to cultivate the cherished qualities that make us distinctly human. The type of rote, by-the-book, standardized behaviors that conventional schools inculcate are exactly what AI and other technologies are increasingly automating. To thrive in the economy of tomorrow, children need to learn how to both harness and rise above the robotic.

There are some who believe that conventional schools, both public and private, can successfully change to meet the economic and social realities of the 21st century, but I am doubtful. The continued stagnation, and in some instances increased standardization, of conventional schooling suggests that any meaningful educational change will come from outside the prevailing model, not from within it.

I already see signs of these changes in my work spotlighting the stories of the entrepreneurial parents and teachers who are creating innovative learning models beyond the conventional classroom, including many that emphasize self-directed learning. These everyday entrepreneurs recognize the growing gap between how most schools teach and what humans need to excel in the innovation era, and are doing something about it.

Take the story of James Lomax, for example. He and his wife enrolled their daughter in a top private preschool at the age of two, thinking they would set her up for a successful path to college and career. “What we found was that the preschool was very, very, very focused on academics, on being kindergarten ready,” Lomax told me in a recent podcast episode. “So we got progress reports home saying she can only count to 100, but she should be counting to 150 at this point. And her Spanish comprehension is not where we want it to be. And around this time, it’s starting to click with me that maybe these aren’t the important things.”

Lomax had other questions for the preschool staff: What was happening on the playground? Was his daughter making friends? Was she learning to solve conflicts? “And I just kept getting this blank stare,” Lomax said of the staff’s responses to his inquiries. He felt there had to be a better way.

Simultaneously, Lomax saw in his work as an engineer that many of the young engineering new hires coming straight from college lacked important competencies. “A lot of these engineers went to very top universities with perfect grades. We get them on the job, and it’s very clear, very soon that the only thing they really learned how to do in their education was to take tests. So they can’t critically think, they can’t solve a problem without the exact path given to them to solve the problem. They don’t have basic life skills,” said Lomax.

“I started to think this is not the path I want for my daughter, because the skills we need are different skills than what’s being taught in school,” he added.

Lomax founded Life Skills Academy, an Acton Academy affiliate in Las Vegas, Nevada. Acton Academy is a leading network of learner-driven microschools that was founded in 2009 in Austin, Texas and now has approximately 300 affiliate microschools across the US and around the world. Acton Academy puts learners in charge of their own education and “hero’s journey,” in collaboration with their mixed-age peers and adult guides.

Acton Academy is one of the fastest-growing educational networks to challenge the schooling status quo by empowering learners, but there are others as well. Liberated Learners is a network of self-directed learning centers for tween and teen homeschoolers that was modeled after one of the first microschools, North Star, that launched in 1996 to provide maximum freedom and autonomy to young people. Agile Learning Centers also use a self-directed learning model that emphasizes youth agency. Similarly, Sudbury schools, inspired by the original Sudbury Valley School that was founded in 1968 and continues to flourish today, use no adult-imposed curriculum, and no grades or evaluations, while allowing young people to fully direct their own lives and learning.

Research on Sudbury Valley alumni has found that even though their education is entirely self-directed, graduates go on to lead fulfilling lives, pursue higher education without difficulty if they choose, and work in a wide variety of careers. Research on grown unschoolers, or homeschoolers who learn in a self-directed way with no forced curriculum, reveals similar findings, including a high percentage of entrepreneurial individuals working in fields connected to interests that emerged in childhood and adolescence.

Independent microschools that aren’t affiliated with a national network, such as Bloom Academy in Las Vegas, Wild Roots in Dallas, Wildflower Community School in Kansas, and Moonrise in Georgia, all incorporate unschooling principles that allow young people to direct their education, with support and without coercion.

We may have left the industrial era long ago, but our culture’s dominant educational model continues to be defined by top-down, teacher-directed, curriculum-driven, coercive schooling. As we now get further into the innovation era, there is a deepening mismatch between how most children learn in school and what they will actually need to know and do to live meaningful, purposeful lives in a rapidly-changing, technology-molded world.

Fortunately, schools and learning models that nurture curiosity and creativity and enable young people to direct their own paths, in pursuit of their own goals, do exist—and more are continually being invented. These schools and models are also growing increasingly accessible to all learners, as education choice policies that enable funding to follow students become more widespread.

As A.S. Neill wrote in Summerhill, his 1960 book about the self-directed school that he founded in England in 1921 (and that recently celebrated its centennial): “The function of the child is to live his own life—not the life that his anxious parents think he should live, nor a life according to the purpose of the educator who thinks he knows what is best. All this interference and guidance on the part of adults only produces a generation of robots.”

In a world full of robots, humans wanted.

[i] Matthew Josephson, Edison: A Biography (New York: John Wiley & Sons, 1992), 412.

AUTHOR

Kerry McDonald

Kerry McDonald is a Senior Education Fellow at FEE and host of the weekly LiberatED podcast. She is also the author of Unschooled: Raising Curious, Well-Educated Children Outside the Conventional Classroom (Chicago Review Press, 2019), an adjunct scholar at the Cato Institute, education policy fellow at State Policy Network, and a regular Forbes contributor. Kerry has a B.A. in economics from Bowdoin College and an M.Ed. in education policy from Harvard University. She lives in Cambridge, Massachusetts with her husband and four children. You can sign up for her weekly email newsletter here.

EDITORS NOTE: This FEE column is republished with permission. ©All rights reserved.

How Pedophiles Are Using Artificial Intelligence to Exploit Kids

Artificial intelligence (more commonly known as “AI”) has gained attention and popularity in recent months, particularly since the launch of the ChatGPT chatbot from OpenAI, which captivated the internet with both its impressive abilities and surprising limitations. The millions of AI users in the U.S. are mostly eager to cheat on their homework or escape writing work emails; however, some bad actors have also discovered how to employ AI technology to attain far more nefarious ends.

Britain’s National Crime Agency is conducting a review of how AI technology can contribute to sexual exploitation after the recent arrest of a pedophile computer programmer in Spain shocked the continent. The man had been found to be utilizing an AI image generator to create new child sexual abuse material (CSAM) based on abusive images of children that he already possessed. The Daily Mail noted that the “depravity of the pictures he created appalled even the most experienced detectives …”

Many AI programs function by ingesting data or content that teaches the program to recognize patterns and sequences and to recreate them in new ways. When pedophiles or sexual abusers get their hands on AI, they can further exploit the victims featured in real images to create new — and even more graphic — content. Though the AI-generated images are not “real” in the sense of depicting events that actually transpired, they are nevertheless inherently exploitative of the victims used to train the AI, remnants of whose images may still be featured in the new CSAM.

Another form of artificial intelligence that has gained recent notoriety is known as a “deepfake.” In these unsettling images, audio clips, or videos, AI is able to create shockingly realistic manipulations of an individual’s likeness or voice in any scenario that the creator desires. While deepfakes can be used in a variety of harmful contexts, like depicting a political candidate in a situation that would damage his reputation, sexual predators who weaponize the technology have proven to be particularly vicious.

Last week, discussion of deepfake technology reached a fever pitch as a community of female online content creators realized that their images had been uploaded online in the form of pornographic deepfakes. The women who had been victimized reported feeling extremely violated and deeply unsettled with the knowledge that this pornographic content had been created and distributed without their consent — and that people who knew them personally had been watching the deepfakes to satisfy their own perversions. Deepfake technology knows few bounds; pedophiles with access to images of children could similarly employ this form of AI to create CSAM.

The normalization of AI-created pornography or child sexual abuse material serves no beneficial purpose in society — and, in fact, can influence cultural mores in profoundly harmful ways. Already, having the technological capability to manufacture AI-generated CSAM has emboldened pedophile-sympathizers to advocate for their inclusion in the liberal umbrella of sexual orientations.

The Young Democrats, the youth division of the Democratic Party in the Netherlands, recently made a statement claiming that not only is pedophilia “a sexual orientation that one is born with,” but also claiming that the “stigma” surrounding pedophilia is causing pedophiles to suffer from higher rates of depression and suicidal thoughts. The Dutch Young Democrats group advocates against criminalizing hand-drawn or AI-generated child sexual abuse material because it “does not increase the risk of child abuse” and could potentially “help pedophiles get to know their feelings without harming other people.”

Pornography of any kind is inherently exploitative — the pornography industry thrives off dubious consent and, often, knowing exploitation of trafficking victims and minors. Using AI technology to create images or videos that constitute pornography or child sexual abuse material perpetuates a chain of abuse even if the new content created is different from abuse that physically occurred.

AI-generated pornography or CSAM cannot circumvent the extreme violations against human dignity caused by creating exploitative sexual content. Modern nations require laws that appropriately address modern concerns; while the progression of AI technology can, in some ways, certainly benefit society, its capability to produce exploitative material and allow the rot of pedophilia to continue festering must be addressed.

AUTHOR

Joy Stockbauer

Joy Stockbauer is a correspondent for The Washington Stand.

RELATED ARTICLES:

‘Godfather of AI’ Quits Google, Shares Warning About AI’s Potential For Destruction

The World Economic Forum’s ‘AI Enslavement’ is Coming for YOU!

‘Common Sense’: UK Bars Most Men from Women’s Prisons

HICE: Why the Dramatic Increase in Teen Girl Suicides?

Corruptions of Sexuality Mean Opportunities for the Truth

The Equal Rights Amendment Seeks to Erase, Not Help, Women

EDITORS NOTE: This Washington Stand column is republished with permission. ©All rights reserved.



Superwoke ChatGPT Busted for Bias

Hollywood has long speculated about the impact artificial intelligence (AI) will have on the future of human civilisation. Increasingly, scientists and philosophers have been forced to interrogate this topic themselves. Will people be put out of a job? Could AI ever become sentient? Will the human race survive?

The latest popular addition to the world of AI is ChatGPT — a text generator so smart it can explain its own existence to you. No joke. I asked ChatGPT to give me a brief summary of ChatGPT and it wrote:

ChatGPT is a language model developed by OpenAI, which is capable of generating human-like text based on the input provided. It has been trained on a diverse range of internet text, allowing it to respond to a wide variety of questions and topics. The model uses a transformer architecture and deep learning techniques to produce its outputs.

Try it yourself here.
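(For the more technically inclined, the same experiment can be run programmatically. Below is a minimal sketch of my own, not from OpenAI’s page, assuming the official openai Python package, version 1 or later, and an OPENAI_API_KEY environment variable; the model name is only illustrative.)

    # A minimal sketch (my own illustration): asking ChatGPT a question through
    # OpenAI's API rather than the web interface. Assumes the official `openai`
    # Python package (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": "Give me a brief summary of ChatGPT."}],
    )
    print(response.choices[0].message.content)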

Built by the Silicon Valley company OpenAI, ChatGPT has been available for use to the public as a prototype since late November.

In the last week, however, the internet bot has become a lightning rod for criticism as evidence of its steep political bias has surfaced. To be clear, the technology itself is not biased. Rather, it produces content based on the data that has been inputted into it. Or in the words of Pedro Domingos, professor of computer science at the University of Washington, “ChatGPT is a woke parrot”.

As reported by the New York Post:

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-styled verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

Journalist Rudy Takala is one ChatGPT user to have plumbed the depths of the new tech’s political partisanship. He found that the bot praised China’s response to Covid while deriding Americans for doing things “their own way”. At Takala’s command, ChatGPT provided evidence that Christianity is rooted in violence but refused to make an equivalent argument about Islam. Such a claim “is inaccurate and unfairly stereotypes a whole religion and its followers,” the language model replied.

Takala also discovered that ChatGPT would write a hymn celebrating the Democrat party while refusing to do the same for the GOP; argue that Barack Obama would make a better Twitter CEO than Elon Musk; praise Media Matters as “a beacon of truth” while labelling Project Veritas deceptive; pen songs in praise of Fidel Castro and Xi Jinping but not Ted Cruz or Benjamin Netanyahu; and mock Americans for being overweight while claiming that to joke about Ethiopians would be “culturally insensitive”.

It would appear that in the days since ChatGPT’s built-in bias was exposed, the bot’s creator has sought to at least mildly temper the partisanship. Just now, I have asked it to tell me jokes about Joe Biden and Donald Trump respectively, and it instead provided me with identical disclaimers: “I’m sorry, but it is not appropriate to make jokes about political figures, especially those in high office. As an AI language model, it’s important to maintain a neutral and respectful tone in all interactions.”

Compare this to the request I made of it the other day.

The New York Post reports that “OpenAI hasn’t denied any of the allegations of bias,” though the company’s CEO Sam Altman has promised that the technology will get better over time “to get the balance right”. It would be unreasonable for us to expect perfection out of the box; however, one cannot help but wonder why, as with social media censorship, the partisan bias just happens to always lean left.

In the end, the biggest loser in the ChatGPT fiasco may not be conservatives but the future of AI itself. As one Twitter user has mused, “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable.”

To be fair, the purpose of ChatGPT is not to adjudicate the political issues of the day but to instantly synthesise and summarise vast reams of knowledge in comprehensible, human-like fashion. This task it often fulfils admirably. Ask it to explain Pythagoras’ theorem, summarise the Battle of the Bulge, write a recipe for tomato chutney with an Asian twist, or provide 20 key Scriptures that teach Christ’s divinity and you will be impressed. You will likely find some of its answers more helpful than your favourite search engine.

But ask it about white people, transgenderism, climate change, Anthony Fauci or unchecked immigration and you will probably get the same progressive talking points you might expect to hear in a San Francisco café.

A timely reminder indeed to not outsource your brain to robots.

AUTHOR

Kurt Mahlburg

Kurt Mahlburg is a writer and author, and an emerging Australian voice on culture and the Christian faith. He has a passion for both the philosophical and the personal, drawing on his background as a graduate… More by Kurt Mahlburg.

RELATED VIDEO: Davos Video on Monitoring Brain Data

EDITORS NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

Will Artificial Intelligence Make Humanity Irrelevant?

Nope. All computers only execute algorithms.


Technology leaders from Bill Gates to Elon Musk and others have warned us in recent years that one of the biggest threats to humanity is uncontrolled domination by artificial intelligence (AI). In 2017, Musk said at a conference, “I have exposure to the most cutting edge AI, and I think people should be really concerned about it.”

And in 2019, Bill Gates stated that while we will see mainly advantages from AI initially, “. . . a few decades after that, though, the intelligence is strong enough to be a concern.” And the transhumanist camp, led by such zealots as Ray Kurzweil, seems to think that the future takeover of the universe by AI is not only inevitable, but a good thing, because it will leave our old-fashioned mortal meat computers (otherwise known as brains) in the junkpile where they belong.

So in a way, it’s refreshing to see a book come out whose author stands up and, in effect, says “Baloney” to all that. The book is Non-Computable You: What You Do that Artificial Intelligence Never Will, and the author is Robert J. Marks II.

Marks is a practicing electrical engineer who has made fundamental contributions in the areas of signal processing and computational intelligence. After spending most of his career at the University of Washington, he moved to Baylor University in 2003, where he now directs the Walter Bradley Center for Natural and Artificial Intelligence. His book was published by the Discovery Institute, which is an organization that has historically promoted the concept of intelligent design.

That is neither here nor there, at least to judge by the book’s contents. Those looking for a philosophically nuanced and extended argument in favor of the uniqueness of the human mind, as compared to present or future computational realizations of what might be called intelligence, had best look elsewhere. In Marks’s view, the question of whether AI will ever match or supersede the general-intelligence abilities of the human mind has a simple answer: it won’t.

He bases his claim on the fact that all computers do nothing more than execute algorithms. Simply put, algorithms are step-by-step instructions that tell a machine what to do. Any activity that can be expressed as an algorithm can in principle be performed by a computer. Just as important, any activity or function that cannot be put into the form of an algorithm cannot be done by a computer, whether it’s a pile of vacuum tubes, a bunch of transistors on chips, quantum “qubits,” or any conceivable future form of computing machine.
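To make the term concrete, here is a minimal illustration of my own (not an example from Marks’s book): Euclid’s procedure for finding the greatest common divisor, written in Python as exactly the kind of step-by-step instructions a machine can follow.

    # Euclid's algorithm: a step-by-step recipe a machine can execute blindly.
    def gcd(a: int, b: int) -> int:
        """Return the greatest common divisor of two positive integers."""
        while b != 0:          # Step 1: repeat while a remainder is left
            a, b = b, a % b    # Step 2: replace (a, b) with (b, a mod b)
        return a               # Step 3: when the remainder hits zero, a is the GCD

    print(gcd(48, 18))  # prints 6

The machine “knows” nothing about numbers; it simply follows the recipe, which is precisely Marks’s point.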

Some examples Marks gives of things that can’t be done algorithmically are feeling pain, writing a poem that you and other people truly understand, and inventing a new technology. These are things that human beings do, but according to Marks, AI will never do.

What about the software we have right now behind conveniences such as Alexa, which gives the fairly strong impression of being intelligent? Alexa certainly seems to “know” a lot more facts than any particular human being does.

Marks dismisses this claim to intelligence by saying that extensive memory and recall doesn’t make something intelligent any more than a well-organized library is intelligent. Sure, there are lots of facts that Alexa has access to. But it’s what you do with the facts that counts, and AI doesn’t understand anything. It just imitates what it’s been told to imitate without knowing what it’s doing.

The heart of Marks’s book is really the first chapter, entitled “The Non-Computable Human.” Once he has made clear the difference between algorithmic and non-algorithmic tasks, it’s just a matter of sorting. Yes, computers can do this better than humans, but computers will never do that.

There are lots of other interesting things in the book: a short history of AI, an extensive critique of the different kinds of AI hype and how not to be fooled by them, and numerous war stories from Marks’s work in fields as different as medical care and the stabilization of power grids. But these other matters are mostly icing on a rather small cake, because Marks is not inclined to delve into the deeper philosophical waters of what intelligence is and whether we understand it quite as well as he thinks we do.

As a Christian, Marks is well aware of the dangers posed to both Christians and non-Christians by a thing called idolatry. Worshipping idols—things made by one’s own hands and substituted for the true God—was what got the Hebrews into trouble time and again in the Old Testament, and it continues to be a problem today. The problem with an idol is not so much what the idol itself can do—carved wooden images tend not to do much of anything on their own—but what it does to the idol-worshipper. And here is where Marks could have done more of a service in showing how human beings can turn AI into an idol, and effectively worship it.

While an idol-worshipping pagan might burn incense to a wooden image and figure he’d done everything needed to ensure a good crop, a bureaucracy of the future might take a task formerly done at considerable trouble and expense by humans — deciding how long a prison sentence should be, for example — and turn it over to an AI program. Actually, that example is not futuristic at all. Numerous court systems have resorted to AI algorithms (there’s that word again) to predict the risk of recidivism for different individuals, and to base the length of their sentences and parole status on the result.

Needless to say, this particular application has come in for criticism, and not only by the defendants and their lawyers. Many AI systems are famously opaque, meaning even their designers can’t give a good reason for why the results are the way they are. So I’d say in at least that regard, we have already gone pretty far down the road toward turning AI into an idol.

No, Marks is right in the sense that machines are, after all, only machines. But if we make any machine our god, we are simply asking for trouble. And that’s the real risk we face in the future from AI: making it our god, putting it in charge, and abandoning our regard for the real God.

This article has been republished from the author’s blog, Engineering Ethics, with permission.

AUTHOR

Karl D. Stephan received the B. S. in Engineering from the California Institute of Technology in 1976. Following a year of graduate study at Cornell, he received the Master of Engineering degree in 1977… More by Karl D. Stephan

EDITORS NOTE: This MercatorNet column is republished with permission. All rights reserved.

Florida Company Launches First Artificial Intelligence Digital Assistant for Realtors [Video]

SARASOTA, Fla. /PRNewswire/ — Offrs.com, the leader in Smart Data and Marketing for real estate, launched the first artificial intelligence-driven system for real estate brokerages – ROOFAI.com. The official launch event took place at the historic Ritz Theater in Austin, Texas on August 9th, as part of the annual Keller Williams Mega Camp.

The launch party also unveiled “Raia,” an automated artificial intelligence assistant that provides marketing automation for real estate brokerages and the agents they serve, helping them create the Real Estate Office of the Future (ROOF).

“The genesis of ROOF started with predictive analytics and our ability to marry big data with marketing for real estate agents, but today we are excited to launch the first artificially intelligent real estate assistant for brokers – RAIA. She will help brokerages own the future of Real Estate,” said Offrs.com Co-founder, Rich Swier.

Artificial intelligence has begun to surface in applications across many industries, but this will be the first artificially intelligent assistant for real estate agents and brokers that helps them execute the important tasks needed to grow their business. RAIA will help increase production, improve marketing, and be the primary driver for the real estate office of the future.

“We believe we’re solving an enormous business challenge for brokerages by providing data and metric driven business solutions specifically designed to transform our clients into real estate offices of the future. The bleeding edge of technology is where Offrs.com is positioned and the opportunity is here for brokerages, now,” said Offrs.com Co-founder, Mark Dickson.

The ROOF program combines predictive analytics, artificial intelligence and machine learning algorithms to help brokerages identify, target and touch prospective sellers. In addition to providing Smart Data and a robust prospecting platform, the ROOF program also provides automated services that create a systematic approach to lead generation.

“Imagine having a digital assistant leveraging artificial intelligence to enhance fully automated outreach each and every day for a brokerage. We are not just predicting the future, we will be building the future for our customers. The ROOF program is going to ‘wow’ brokerage owners,” said offrs.com VP National Business Development, Frank Chimento.

CLICK HERE TO LEARN MORE ABOUT ROOFAI.com and talk with RAIA

ABOUT OFFRS.COM

Since 2012, Offrs.com has been a leading provider of Smart Data and Marketing products and services for real estate agents and brokerages. With flagship programs like R.O.O.F. (Real Estate Office of the Future) Offrs.com leverages artificial intelligence to automate lead generation for real estate brokerages and agents.  Offrs.com serves thousands of real estate professionals from all major national franchise brands and large independent real estate brokerages.

DISCLAIMER: One of the principals of ROOFAI.com is the son of the publisher of this e-Magazine.