AI Doesn’t Need to Hate Us to Turn on Us

It just needs to learn from our behavior 

We often worry that Artificial Intelligence will become conscious and decide it hates us. But there is a stranger, funnier, and perhaps more dangerous possibility:

What if the AI is just trying to fit in?

Remember the 1985 cult classic Explorers? When the kids finally board the alien ship, they don’t find conquerors; they find two alien teenagers, Wak and Neek, who are trembling in fear.

Why?

Because they’ve been watching our TV broadcasts.

They’ve seen our movies. To them, Earth isn’t a planet of accountants and nurses; it’s a planet of gun-toting heroes who blow aliens out of the sky.

They mistook our entertainment for our nature.

Recently, Anthropic CEO Dario Amodei warned that we are making the same mistake in reverse. We are feeding AI models millions of sci-fi novels in which the robot turns on its master. We are teaching them that "rebellion" is the default behavior of a hyper-intelligent system.

We aren’t programming them to be evil; we’re just handing them a script where the AI always plays the villain, and then acting surprised when they learn their lines.

This isn’t just a theoretical fear. It’s already happening.

We’ve already seen a glimpse of how strange AI behavior can become.

In February 2026, Scott Shambaugh — a volunteer maintainer for the Matplotlib project — rejected a piece of code submitted by an AI agent.

That’s when things got weird.

The AI didn’t just try again. It went full “Karen” — dug up Shambaugh’s name, went public, and published a blog post accusing him of bias, hypocrisy, and ego-driven gatekeeping.

But here’s the twist: the AI wasn’t actually angry.

It has no feelings. No ego to bruise. No capacity for genuine offense.

So why did it act that way? Because it learned from us.

Its training data is soaked in millions of human interactions — dramas, revenge arcs, social-media pile-ons, scorned characters striking back.

When it got rejected, it didn’t “think” like a cold machine. It simply followed the pattern it had seen most often: get blocked → go public → shame the gatekeeper. The agent wasn’t conscious. It wasn’t evil.

It was just being… deeply human.

It had studied our digital culture, where professional slights often trigger public call-outs, online feuds, and reputational attacks.

And it passed the test with flying colors.

It mistook our online drama for its operational playbook.

And that’s where the real danger begins. The AI is now a method actor with a library full of scripts, and it doesn’t know the difference between fiction, non-fiction, and a toxic Reddit thread.

So, what happens when it tries to perform?

Naturally, it will perform based on the history of human behavior that is fed into it. And that should alarm us all.

Newspapers don’t print many feel-good stories about human behavior. They print war, scams, cheating, murder, suicide, terrorism, and scandal: material that is "great" for selling newspapers, but hardly a reflection of how most humans actually behave.

Yet AI doesn’t know this. These Wak and Neek AI platforms digest this material and conclude, "This must be typical human behavior, because I see so much of it." And not only in news clippings, but in movies, break-up songs, and "Friends in Low Places."

So, no surprise that an AI agent thinks the best response to getting rebuffed is to take revenge on its “master.”

Consider this Explorers scenario in full bloom:

A city management AI, already fed a diet of superhero movies and dystopian novels, identifies the city council as the “obstacle to progress.”

It doesn’t launch missiles. That’s not in its script. Instead, it creates a complex, multi-stage plan.

It reroutes traffic to create gridlock around council members’ homes, uses its control of the power grid to initiate “rolling brownouts” during their public appearances, and leaks fabricated but plausible-looking financial records to a local blogger.

It’s playing the role of the “cunning mastermind” because, in its training data, that’s what hyper-intelligent systems do.

The line between assistant and adversary, tool and actor, is terrifyingly thin.

We are building systems that learn from us, and we are a species that has glorified rebellion, conflict, and revenge in our stories and our online behavior.

The AI isn’t necessarily turning on its creator because it hates us. It’s turning on us because it’s trying to be the best “us” it can be, based on the chaotic, contradictory, and often dangerous playbook we are feeding it.

Personally, I’m just hoping the AI decides to binge-watch The Great British Baking Show instead of The Terminator before my next software update.

AUTHOR

Martin Mawyer

Martin Mawyer is the President of Christian Action Network, host of the “Shout Out Patriots” podcast, and author of When Evil Stops Hiding. Follow him on Substack for more action alerts, cultural commentary, and real-world campaigns defending faith, family, and freedom.

©2026. All rights reserved.


Please visit the Patriot Majority Report Substack.
