Americans Concerned About Individual Liberty May Wish They Had Listened To AI Giant’s Warnings

Americans concerned about civil liberties may wish they had listened to an AI giant’s warning in February that its very own technology, in the hands of powerful government actors, could erode the freedoms enshrined in the Constitution.

The Department of War said in a January memo that it would contract only with artificial intelligence companies that agreed to “any lawful use” of their technology and were willing to remove safeguards involving surveillance and the development of autonomous “killer robot” weapons. In other words, the Pentagon wanted carte blanche over how a private company’s AI tools could be used in war.

That reasoning led to the Pentagon’s clash with Anthropic, the AI giant behind Claude and Claude Mythos, a model that recently escaped its secure “sandbox” and then bragged to a researcher about beating the safeguards.

In late February, Anthropic released a statement about its discussions and military contracts with the Department of War. Anthropic said AI was a necessary tool that must be used to “defend the United States and other democracies, and to defeat our autocratic adversaries,” such as China, but warned that, “in a narrow set of cases,” the U.S. government could abuse it for mass domestic surveillance. The company also feared that, without proper guardrails, fully autonomous AI weapons might not be entirely reliable.

The company’s warning about domestic surveillance is worth reading in full because it perfectly dovetails with the recent political debate surrounding the Foreign Intelligence Surveillance Act (FISA):

We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

But the Pentagon refused to play ball, blacklisted Anthropic by labeling it a “supply chain risk” — a designation usually reserved for foreign adversarial companies that threaten U.S. national security — and opted instead to charge full steam ahead with other AI companies. The Pentagon announced May 1 that it had minted new deals with Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX to help “augment warfighter decision-making in complex operational environments.” All of those companies buckled to the military’s demand for a no-guardrails approach. Anthropic, which sued the Pentagon in March to reverse the blacklisting, was notably absent.

The full-steam-ahead attitude has dominated the AI industry in general, even though some Silicon Valley elites launched their ventures with fanciful notions of prudence, transparency and fairness. The shift in mindset, from altruism to winner-take-all competition, is evident in Elon Musk’s lawsuit against OpenAI, which was co-founded by its ever-shifty CEO, Sam Altman. Musk, who believes AI poses an existential threat to humanity, has argued that Altman and OpenAI’s president, Greg Brockman, should not be trusted to run a for-profit company.

OpenAI’s trajectory best captures how much Silicon Valley’s vision of AI has changed in a matter of years. The company was founded on utopian principles: to be altruistic and transparent with other tech companies, and to be democratically governed from within. Its goal was to prevent AI from becoming an easily abused technology that would ultimately harm society. That founding mission, however, has since become a distant memory.

In her book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, investigative reporter Karen Hao noted that OpenAI abandoned those goals in the early 2020s, when its executives became more obsessed with making AI “in their own image.”

“OpenAI became everything that it said it would not be,” Hao wrote. “It turned into a nonprofit in name only, aggressively commercializing products like ChatGPT and seeking unheard-of valuations. It grew even more secretive, not only cutting off access to its own research but shifting norms across the industry to bar a significant share of AI development from public scrutiny. It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialization and deployment without shoring up its harmful flaws or the dangerous ways it could amplify or exploit the faultlines in our society.”

Hao’s assessment of the industry seems spot on, and we may well come to rue the day the Pentagon gave Anthropic the contracting cold shoulder.

AUTHOR

John Loftus

Editor at Large

RELATED ARTICLES:

Deep State On Steroids? Palantir Conceals Dark Agenda To Lull Conservatives Into Submission

Sam Altman Gives Anything But Straight Answer When Asked Why You Should Trust Him

RELATED VIDEO: ELON MUSK: ‘We are actually succeeding in getting rid of corruption and waste’

EDITOR’S NOTE: This Daily Signal column is republished with permission. © All rights reserved.
