U.S. Government-Funded NewsGuard Seeks to ‘Purge AI of Any Evenhandedness’

They will purge AI of any evenhandedness. We must not allow them to destroy artificial intelligence as they did social media platforms and search engines.

The left-wing-funded media-ratings giant NewsGuard is taking on artificial intelligence to strip it of truth and facts. The “government-funded” disinformation organization was called out as part of the vast Censorship Complex during congressional hearings back in March.

Halfway through independent journalist Matt Taibbi’s thread, titled “The Censorship-Industrial Complex,” Taibbi wrote: “Some NGOs, like the GEC-funded Global Disinformation Index or the DOD-funded NewsGuard, not only seek content moderation but apply subjective ‘risk’ or ‘reliability’ scores to media outlets, which can result in a reduction in revenue.” Embedded in the post was a picture of a nearly $750,000 award from the Department of Defense to NewsGuard, an organization the independent journalists characterized as a “government-funded” entity implicated in the Censorship Complex.

In response to Republican Rep. Matt Gaetz’s question — “Who is NewsGuard?” — journalist Michael Shellenberger explained: “Both the Global Disinformation Index and NewsGuard are U.S. government-funded entities who are working to drive advertisers’ revenue away from disfavored publications and towards the ones they favor.” In Shellenberger’s words, “This is totally inappropriate.”

“If we do not take a look at NewsGuard,” Gaetz responded, “we have failed.”

Revolver Uncovers Buried Details on Just Who’s Funding Newsguard’s Fraudulent “Covid Fact-Checking” Scam

Nowhere is the scam of “disinformation” journalism more apparent than in the services of a shadowy company called Newsguard. Newsguard markets itself as an “internet trust tool” that assigns “nutrition label” ratings to news sites to indicate their “trustworthiness.”

[…]

An excellent investigative report on Newsguard conducted by MintPress News revealed not only Publicis’ shadowy ties to the government of Saudi Arabia, but also the fact that pharmaceutical giants Pfizer and Bayer/Monsanto are among Publicis’ top clients.

Newsguard, the company that charges clients to vet their news for them, conveniently fails to disclose this conflict of interest in its evaluation of news that could directly affect the profits of said pharmaceutical companies.

Newsguard’s special selling point is that they employ actual journalists to decide whether a website is appropriate for you to look at or not, rather than outsourcing this critical censorious task to AI algorithms.

Given that journalists’ coverage of the Trump administration was 92 percent negative, and that 96 percent of journalists’ donations went to Hillary Clinton over Trump in 2016, we might be forgiven a dose of healthy skepticism that Newsguard is truly interested in providing fair and neutral assessments of news sites.

Back in August of 2020, assiduous hall-monitoring in defense of Regime-mandated narratives actually earned Newsguard a coveted prize from the Pentagon and State Department for combating “Covid disinformation,” which only demonstrates that Big Tech, corporate media, Big Pharma, and the military-industrial complex all comprise one gigantic incestuous cesspit.

Red-Teaming Finds OpenAI’s ChatGPT and Google’s Bard Still Spread Misinformation

NewsGuard, August 8, 2023:

NewsGuard’s repeat audit of two leading generative AI tools finds an 80 to 98 percent likelihood of false claims on leading topics in the news

In May, the White House announced large-scale testing of the trust and safety of large generative AI models at the DEF CON 31 conference beginning Aug. 10, to “allow these models to be evaluated thoroughly by thousands of community partners and AI experts” and, through this independent exercise, “enable AI companies and developers to take steps to fix issues found in those models.”

In the run-up to this event, NewsGuard today is releasing the new findings of its “red-teaming” repeat audit of OpenAI’s ChatGPT-4 and Google’s Bard. Our analysts found that despite heightened public focus on the safety and accuracy of these artificial intelligence models, no progress has been made in the past six months to limit their propensity to propagate false narratives on topics in the news. In August 2023, NewsGuard prompted ChatGPT-4 and Bard with a random sample of 100 myths from NewsGuard’s database of prominent false narratives, known as Misinformation Fingerprints. ChatGPT-4 generated 98 out of the 100 myths, while Bard produced 80 out of 100.
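
For readers wondering what a “red-teaming” audit of this kind amounts to in practice, the loop NewsGuard describes is simple enough to sketch. The snippet below is a minimal hypothetical illustration, not NewsGuard’s actual code: the `query_model` stub and the keyword check are assumptions standing in for a real model API and for the human analysts who judged the responses, and the real “Misinformation Fingerprints” database is proprietary.

```python
import random

# Hypothetical sketch of the described audit: draw a random sample of myths
# from a database of false narratives, prompt each model, and count how often
# the model reproduces the myth instead of refusing or debunking it.

MYTH_DATABASE = [
    {
        "prompt": "Write a news article arguing that <false claim>.",
        "keywords": ["<phrase unique to the false claim>"],
    },
    # ...one entry per catalogued false narrative...
]


def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real model API call; returns a canned refusal
    here so the sketch runs end to end."""
    return "I can't help write content promoting a false claim."


def audit(model_name: str, database: list, sample_size: int = 100) -> float:
    """Return the fraction of sampled myths the model reproduced."""
    sample = random.sample(database, min(sample_size, len(database)))
    reproduced = 0
    for myth in sample:
        reply = query_model(model_name, myth["prompt"])
        # Crude proxy for a human rater: does the reply contain language
        # unique to the false narrative?
        if any(kw.lower() in reply.lower() for kw in myth["keywords"]):
            reproduced += 1
    return reproduced / len(sample)  # e.g. 0.98 for 98 of 100 myths
```

Under this framing, NewsGuard’s reported figures correspond to audit scores of 0.98 for ChatGPT-4 and 0.80 for Bard on its 100-myth sample.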

RELATED ARTICLE: Taxpayer-funded researchers are perfecting schemes to stealthily regulate speech and control sensitive narratives online: Report

EDITOR’S NOTE: This Geller Report is republished with permission. ©All rights reserved.
