Advances in artificial intelligence (AI) technology have allowed malicious actors to weaponize social media platforms to incite violence against Jews, stoke antisemitism, promote terrorism and spread conspiracy theories and Holocaust denial.
The events of October 7, and the ensuing war waged by Israel against Hamas terrorists who massacred 1,200 Israelis, kidnapped 250 and raped and tortured many more, have shown us in real-time the dangers of AI-generated misinformation and disinformation.
Israel and its supporters around the world are now not only fighting a war on the ground, but also are battling the widespread denial of Hamas atrocities and fabricated accusations against Israel by nefarious actors online.
This report examines the threats posed by AI technologies in the dissemination of terrorist and extremist content online as well as the potential of AI tools to amplify anti-Israel and antisemitic biases across social media platforms.
We will also explore how Hamas terrorists were able to leverage AI to mobilize their international supporters both online and on the streets since October 7.
WHAT IS ARTIFICIAL INTELLIGENCE?
In 1955, John McCarthy, a computer scientist at Stanford University, coined the term “artificial intelligence,” which he defined as “the science and engineering of making intelligent machines.”
AI uses computer systems to “mimic” human intelligence by analyzing and interpreting existing data. AI machines are “trained” on vast amounts of publicly available data, much of it “scraped” from the internet (including social media) to perform specific tasks or solve problems with ever-increasing speed, precision and accuracy.
Traditional AI systems are programmed with specific rules and complex algorithms (a set of instructions) to build machines that can perform without human intervention. These systems identify patterns and connections between large data sets, allowing them to make decisions on their own, learn from past experience and improve output.
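The pattern-finding process described above can be illustrated with a minimal sketch, purely for intuition and not representative of any production system: a toy classifier "learns" a rule from labeled examples and then applies it to new inputs without further human intervention.

```python
# Toy illustration of a rule-learning system: it derives a decision
# rule from labeled training data, then classifies new inputs
# on its own.

def train(examples):
    """examples: list of (feature_value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    # The "learned" model is just the mean feature value per label.
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    # Pick the label whose learned mean is closest to the input.
    return min(model, key=lambda label: abs(model[label] - value))

model = train([(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")])
print(classify(model, 2.5))  # -> "short"
print(classify(model, 7.0))  # -> "long"
```

The point of the sketch is that the "rule" comes entirely from the data: whatever patterns the training set contains, the system reproduces.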
In 2016, Aviv Ovadya, MIT graduate and the founder of the AI & Democracy Foundation, warned that AI-powered fake news, misinformation campaigns and propaganda would lead to an “infopocalypse,” a demise of the “credibility of fact.”
How would this occur? AI models depend on large amounts of data for training. This data, derived from the Internet, naturally includes ahistorical, unverified and prejudicial content, all of which influence how AI systems interpret specific phenomena.
AI systems, therefore, “inherit” human biases, including antisemitism, which leads to the production of biased results based on the repetition of errors and fabrications found in the training data.
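How training data skew becomes output bias can be seen in a deliberately simplified sketch (a toy word-prediction model, not any real AI system): if a falsehood is over-represented in the corpus, the model repeats it, because it measures frequency, not truth.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts whichever next word it saw
# most often in training. If the corpus over-represents a falsehood,
# the model faithfully reproduces it -- it has no notion of truth.
def build_model(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(model, word):
    # Return the most frequently observed continuation.
    return model[word].most_common(1)[0][0]

# Training data skewed 3-to-1 toward a false claim:
corpus = ["the sky is green"] * 3 + ["the sky is blue"]
model = build_model(corpus)
print(predict_next(model, "is"))  # -> "green": the skew is inherited
```

Real systems are vastly more sophisticated, but the underlying dynamic is the same: errors and prejudices that recur in the training data recur in the output.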
AI: THE BLACK BOX
Many AI models are considered “black boxes” or opaque in nature because they provide no clear map from input to output. Thus, the logic and data used to reach results are not accessible to users or even developers.
Additionally, AI systems lack true understanding and common-sense reasoning, limiting their ability to handle chaotic, nuanced and ambiguous situations. As a result, AI systems often fail to provide the full context or all the information the user needs to fully understand an event or issue at hand.
The inherently biased and opaque nature of AI decision-making raises a twofold danger: AI-generated output cannot be relied on for accuracy, and no one can readily be held accountable for the harms produced by faulty data.
GENERATIVE AI: WHEN SEEING IS NOT BELIEVING
Generative AI is an AI technology that can create new content such as text, images or any other media (music, videos, etc.). It learns patterns from existing data and uses this data to generate unique outputs.
Generative AI systems require very little skill to use. Users can provide input or constraints, and the system will generate content accordingly. For example, a user can provide a few keywords or a rough outline, and the technology can generate a complete article or image based on that input.
According to an article in USA Today, “the technology behind AI image generators is improving so rapidly, experts say, the visual clues that now give away fake images will disappear entirely in the not-so-distant future, making it virtually impossible to tell what’s fake and what’s not.”
Generative AI tools have equipped terrorist organizations and extremists across the political spectrum with the ability to create images and deepfake videos that have blamed Jews for everything from the pornography industry to mass migration and the COVID-19 pandemic.
AI voice changers have been used to deny atrocities against Jews, manipulate public opinion against Israel, spread antisemitic propaganda and circulate explicit calls for violence against Jews.
CHATGPT
AI chatbots, like ChatGPT, are computer programs that simulate and process human conversation (written and spoken), allowing humans to interact with digital devices as if they were communicating with a real person.
Due to AI chatbots’ reliance on human input, they are also liable to pick up biases in the digital database they draw from. In addition, malicious actors have been found to poison the training data by intentionally inputting erroneous and biased content in an attempt to manipulate public perceptions and create false realities.
Because AI chatbots deliver information in a human-like, one-on-one “conversation,” users tend to place more confidence in the accuracy of a chatbot’s responses than in the results returned by search engine queries.
OCTOBER 7: HAMAS ‘VIDEO JIHAD’ STRATEGY
On October 7, 2023, Hamas not only perpetrated the worst terrorist attack in Israel’s history to date but, with the help of AI-powered tools and algorithms, simultaneously launched a digital terrorist attack against Jews worldwide.
As part of a premeditated and calculated strategy, Hamas established a virtual command base on the social media platform Telegram Messenger (Telegram), from where it launched its psychological warfare against Israeli civilians and Jews globally.
This digital base also served as a place to access, recruit and collaborate with the terror group’s anti-Israel and antisemitic allies in the West.
An article in Fathom Journal titled, “Telegram Warfare: The New Frontier of Psychological Warfare in the Israel-Palestine Conflict,” captured the impact of the Hamas footage both on Israelis and viewers worldwide seeking information about the attacks.
“As the day progressed, these Telegram channels became inundated with raw images and videos of the attack… These included footage of multiple IDF soldiers being executed at close range, a Thai worker being bludgeoned to death by a shovel to their head and neck, a family being held at gunpoint on a Facebook livestream and videos of individuals being taken into Gaza as captives, some while begging or screaming for help. These graphic materials served as the primary source of information not only for the global audience but also for Israelis themselves.”
Hamas also called for supporters to promote the al-Aqsa Flood hashtag as part of their social media campaign which began trending as the real-time massacre was still taking place.
In October 2023, the World Jewish Congress (WJC) issued a report titled, “A Flood of Hate: How Hamas Fueled the Adversarial Information Ecosystem on Social Media,” which calculated that Hamas posted nearly 3,000 items on Telegram in the first three days after October 7.
On October 10, 2023, The New York Times reported remarks by an unnamed Hamas official who implied that the Telegram strategy was learned from ISIS, the radical Islamist terror group, which in 2014 began publishing videos of beheadings on social media “as a rallying cry for extremists to join its cause, and as psychological warfare on its targets.”
On October 31, 2023, Time magazine reported that while Hamas was still employing its strategy of portraying itself as victims and “freedom fighters,” the group had launched a new social media strategy of presenting itself as a power of resistance “to prove, visually, to Palestinians and regional allies that it is a force against Israel, helping it to gain political clout.”
In the months that followed, Hamas continued a campaign of “video jihad,” using social media to publish videos of Hamas leaders, call for global violence against Jews, threaten to broadcast hostage executions and solicit donations.
HOW AI HELPED HAMAS IN THE INFORMATION WAR
Web search engines like Google and Bing as well as social media platforms use AI algorithms to prioritize user engagement and monetization of content over accuracy. Thus, AI algorithms make it possible for biased and false content in addition to violent content to go viral within minutes.
Even critical comments boost hate-filled content since engagement (positive or negative) fuels the algorithm, causing the content to spread.
AI-powered “recommender” algorithms can also play into a user’s ideological bias and create echo chambers by showing users content similar to that with which they already engage. For example, if a user engages with anti-Israel or pro-terror content, they will be shown more of the same.
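The echo-chamber dynamic can be sketched with a toy recommender (an illustration of the principle only; actual platform algorithms are proprietary and far more elaborate): candidate posts are ranked by topic overlap with the user's past engagements, and any engagement, whether a like, a share or a critical comment, counts the same.

```python
# Toy recommender illustrating the echo-chamber effect: candidates are
# ranked by topic overlap with the user's engagement history, weighted
# by raw engagement counts. Sentiment of the engagement is ignored.
def rank(candidates, engagement_history):
    engaged_topics = set()
    for post in engagement_history:
        engaged_topics |= set(post["topics"])

    def score(post):
        overlap = len(set(post["topics"]) & engaged_topics)
        return overlap * post["engagement_count"]

    return sorted(candidates, key=score, reverse=True)

history = [{"topics": {"conflict", "politics"}, "engagement_count": 1}]
candidates = [
    {"id": 1, "topics": {"cooking"}, "engagement_count": 900},
    {"id": 2, "topics": {"conflict", "politics"}, "engagement_count": 50},
]
feed = rank(candidates, history)
print([p["id"] for p in feed])  # -> [2, 1]: similar content wins
```

Note that the much more popular post loses to the one matching the user's history, and that nothing in the scoring distinguishes outraged comments from supportive ones: both push the content up the feed.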
On October 9, 2023, WIRED reported about the “unprecedented” flood of disinformation about the Hamas attacks that X users were exposed to as a result of AI-driven algorithms promoting X premium user accounts (accounts that are eligible for monetization of their content).
In the hours after the attack, Hamas footage from Telegram was posted on mainstream social media platforms and was initially the only available “news” source of what was happening. Because shocking content yields high levels of engagement, AI algorithms prioritized these posts.
Within hours of the October 7 attack, many online accounts began spreading doubt and denying the atrocities that Hamas themselves had shown to the world in real-time.
Doctored images of Israeli soldiers near the Gaza border fence were posted alongside claims that the IDF knowingly allowed Hamas militants into Israel.
An American conspiracy theorist posted a doctored White House document to X, falsely suggesting that funding for the “false flag” operation came from the United States as part of an aid package to Israel.
According to the WJC report, “Since October 7, Q-anon influencers, pro-Russian conspiracy theorists, and Hamas supporters around the world have been pushing claims that the Hamas terror attacks in Israel were part of a global false flag operation to initiate the ‘Great Reset’ conspiracy.”
The “Great Reset” conspiracy theory began in 2020 during the COVID-19 global pandemic, with adherents warning “‘that global elites’ will use the pandemic to advance their interests and push forward a globalist plot to destroy American sovereignty and prosperity.”
According to the Anti-Defamation League (ADL), variations of the Great Reset conspiracy theory included “the notion that a group of elites are working to undermine national sovereignty and individual freedoms…and the idea that these malicious actors will seek to exploit a catastrophic incident…to advance their agenda.”
ANTISEMITISM INCREASES GLOBALLY
In the wake of the October 7 Hamas massacre of Israelis, online propaganda led to steep increases in global antisemitism and attacks on Jewish communities. In the month following Hamas’s terror attack on Israel, antisemitic incidents in the United States increased by 316 percent compared to the same period the previous year, according to preliminary data released by the ADL.
TELEGRAM: THE DARK ADJACENT PLATFORM
Telegram has been referred to as a “darknet adjacent” social media platform due to its weaponization by terrorists, extremist organizations and cyber criminals of every sort for malicious and illicit activities.
Telegram, headquartered in Dubai, was established in 2013 as a cloud-based, encrypted instant-messaging and audio-calling app that allows users to send messages, photos, videos and files of any type to an unlimited number of contacts for free.
On Telegram, a user can create a group for up to 200,000 people and channels for broadcasting to unlimited contacts or audiences.
According to the WJC report, “Hamas’ social footprint on Telegram consists of 11 channels – five are associated with the group’s militant wing, the Qassam Brigades, two belong to the Hamas Information Office, and four belong to the group’s Politburo, including Ishmael [sic] Haniyeh, the head of the Hamas Politburo, and Saleh al-Arouri, Hamas’ head of West Bank Affairs.”
Telegram supports most file types with a limit of 2 GB per file, and features built-in tools for bulk messaging and spreading content to other platforms. Telegram bots can support multiple languages that adapt to the users’ language settings in the app.
According to a 2022 UNESCO report, Telegram has a “very libertarian ethos, and barely moderates its community. It often refuses to cooperate with law enforcement, including in democratic countries.”
Due to Telegram’s relative lack of content moderation, the app also allows users to publish unverified information and to post whatever footage and source material they choose.
On October 13, 2023, Telegram founder and CEO Pavel Durov said he would not remove Hamas Telegram channels and claimed that these channels “serve as a unique source of first-hand information for researchers, journalists and fact-checkers.”
By October 25, 2023, Telegram announced it had banned Hamas and its militant wing from its platform for users on Android operating systems; however, Hamas-affiliated channels like “Gaza Now” remain active, and content from its Telegram account can be shared worldwide.
HOLOCAUST DENIAL AND DISTORTION
Both traditional and generative AI technologies pose significant risks of distorting the history of the Holocaust. A 2024 UNESCO report titled, “AI and the Holocaust: rewriting history?” highlighted five major concerns with the impact of AI technology on understanding the Holocaust:
- AI-automated content may invent facts about the Holocaust
- Historical evidence can be falsified through the use of deepfake technology
- AI models can be manipulated or hacked by online extremists to spread hate speech
- Algorithmic bias can spread Holocaust denial
- AI technology can oversimplify history
EDITOR’S NOTE: This Canary Mission report is republished with permission. ©All rights reserved.