In two words: Government censorship.
This is treason.
Jen Psaki just said “We’re flagging problematic posts for Facebook that spread disinformation.” Should be terrifying to anyone who values free speech. Big tech companies are now default state actors. pic.twitter.com/xOosfhuF2y
— Clay Travis (@ClayTravis) July 15, 2021
Biden Admin: We’re ‘Flagging Problematic Posts’ For Social Media Platforms, Tracking Vaccine ‘Misinformation’
By Daily Wire News • Jul 15, 2021:
White House Press Secretary Jen Psaki admitted on Thursday that the administration is contacting Facebook and telling it which posts the administration deems “problematic” based on what it claims is “misinformation.”
“This is a big issue of misinformation, specifically on the pandemic. In terms of actions, Alex, that we have taken or we’re working to take I should say from the federal government, we’ve increased disinformation research and tracking within the Surgeon General’s office,” Psaki said.
“We’re flagging problematic posts for Facebook that spread disinformation. We’re working with doctors and medical professionals to connect medical experts … who are popular with their audiences with accurate information and boost trusted content. So we’re helping get trusted content out there.”
🚨🚨: Jen Psaki says the Biden administration is actively flagging what they deem "disinformation" about the pandemic to Facebook for censoring.
What could go wrong?? pic.twitter.com/THz85cqi3R
— John Cooper (@thejcoop) July 15, 2021
U.S. Surgeon General Vivek Murthy issued an advisory earlier in the day and claimed at the press conference that “we live in a world where misinformation poses an imminent and insidious threat to our nation’s health.”
Murthy’s advisory said that tech companies can:
- Assess the benefits and harms of products and platforms and take responsibility for addressing the harms. In particular, make meaningful long-term investments to address misinformation, including product changes. Redesign recommendation algorithms to avoid amplifying misinformation, build in “frictions”— such as suggestions and warnings—to reduce the sharing of misinformation, and make it easier for users to report misinformation.
- Give researchers access to useful data to properly analyze the spread and impact of misinformation. Researchers need data on what people see and hear, not just what they engage with, and what content is moderated (e.g., labeled, removed, downranked), including data on automated accounts that spread misinformation. To protect user privacy, data can be anonymized and provided with user consent.
- Strengthen the monitoring of misinformation. Platforms should increase staffing of multilingual content moderation teams and improve the effectiveness of machine learning algorithms in languages other than English since non-English-language misinformation continues to proliferate. Platforms should also address misinformation in live streams, which are more difficult to moderate due to their temporary nature and use of audio and video.
- Prioritize early detection of misinformation “super-spreaders” and repeat offenders. Impose clear consequences for accounts that repeatedly violate platform policies.
- Evaluate the effectiveness of internal policies and practices in addressing misinformation and be transparent with findings. Publish standardized measures of how often users are exposed to misinformation and through what channels, what kinds of misinformation are most prevalent, and what share of misinformation is addressed in a timely manner. Communicate why certain content is flagged, removed, downranked, or left alone. Work to understand potential unintended consequences of content moderation, such as migration of users to less-moderated platforms.
- Proactively address information deficits. An information deficit occurs when there is high public interest in a topic but limited quality information available. Provide information from trusted and credible sources to prevent misconceptions from taking hold.
- Amplify communications from trusted messengers and subject matter experts. For example, work with health and medical professionals to reach target audiences. Direct users to a broader range of credible sources, including community organizations. It can be particularly helpful to connect people to local trusted leaders who provide accurate information.
- Prioritize protecting health professionals, journalists, and others from online harassment, including harassment resulting from people believing in misinformation.
EDITOR’S NOTE: This Geller Report column is republished with permission. ©All rights reserved.
Quick note: Tech giants are shutting us down. You know this. Twitter, LinkedIn, Google Adsense, Pinterest permanently banned us. Facebook, Google search et al have shadow-banned, suspended and deleted us from your news feeds. They are disappearing us. But we are here. We will not waver. We will not tire. We will not falter, and we will not fail. Freedom will prevail.
Subscribe to the Geller Report newsletter here — it’s free, and it’s critical NOW, when informed decision making and opinion are essential to America’s survival. Share our posts on your social channels and with your email contacts. Fight the great fight.
Remember, YOU make the work possible. If you can, please contribute to Geller Report.