
Is This AI Deception or Incompetence? Or Both — or Something Else?

A scientist wrote to me that, after reading my frequent alerts about how the Scientific Method and Critical Thinking have been abandoned in K-12 education in almost all states, he decided to ask AI about the matter to see what it had to say.

His query to ChatGPT was “In which US states is the Scientific Method officially part of the high school curriculum now?”

ChatGPT’s deceptive/incompetent answer to that reasonable question was as follows (I have injected a few comments in brackets):

As of my last update in April 2023, the Scientific Method is a fundamental part of high school science education in the United States and is included in the science curriculum of all states [completely false, see below]. The Scientific Method, which is the process of making observations, formulating hypotheses, conducting experiments, and drawing conclusions, is universally recognized as a critical component of scientific literacy.

State Standards: Each state in the U.S. has its own set of educational standards that dictate what is taught in public schools. These standards typically include the Scientific Method as part of science education. [False. AI evidently believes that repeating a false statement makes it true!]

Common Core and Next Generation Science Standards (NGSS): While the Common Core State Standards focus primarily on mathematics and English language arts, the NGSS, which have been adopted by many states [49], specifically emphasize scientific practices, including the Scientific Method [totally false — the NGSS has specifically scrapped the Scientific Method]. The NGSS outlines a framework for science education that integrates scientific inquiry and application.

Integration in Science Subjects: The Scientific Method is taught not as an isolated concept but integrated into various science subjects, including biology, chemistry, physics, and earth science. Students learn to apply the Scientific Method in different contexts and experiments. [Again, the NGSS has eliminated the Scientific Method!]

Laboratory Work and Experiments: High school science classes often include laboratory work and experiments where students actively use the Scientific Method [If they do it’s by accident]. This hands-on approach helps in reinforcing the steps and principles of scientific inquiry.

Emphasis on Critical Thinking and Inquiry: Modern educational approaches emphasize Critical Thinking, problem-solving, and inquiry-based learning, with the Scientific Method being a key tool to develop these skills. [Total garbage]

Advanced Placement (AP) and International Baccalaureate (IB) Programs: In more advanced courses like AP and IB, the scientific method is a crucial part of the curriculum, with a strong focus on independent research and experimentation.

In summary, while the specifics of how the Scientific Method is taught can vary from state to state and school to school in the U.S., it is a standard and essential part of high school science education nationwide. [Yawn: simply not true.] This reflects the importance of scientific literacy and inquiry in preparing students for higher education and understanding the world around them.

Will the average person using AI be able to discern that they have been repeatedly lied to?

Will even an above-average person be able to tell that they have been given false information?

For citizens who want to know the Truth here, it can be found by reading what is in the Science Standards of 49 states: A Framework for K-12 Science Education, which is the basis for the Next Generation Science Standards (NGSS).

Alternatively, citizens can read pages 3 through 7 of my Education Report (plus any of the other Appendices that cover topics relevant to them, in more detail).

For those who find five pages too high a hill to climb, the general answer is that the NGSS (and Framework) have scrapped both the Scientific Method and Critical Thinking from US K-12 Science Standards. One more time: the NGSS has been adopted in its entirety, or mostly, in 49 states (Florida is the holdout).

Another interesting question comes to mind here: is the bogus answer given by AI due to incompetence or dishonesty? Based on the computing power involved, it’s hard to attribute this to incompetence. That seems to leave dishonesty — which would mean that ChatGPT knew the right answer but purposefully altered it. That there is some algorithm adjusting answers to reflect a progressive ideology is certainly possible, as the computer programmers are almost certainly progressives.

A third possibility is GIGO. ChatGPT provides answers based on an extensive Internet search. Since AI can NOT tell right from wrong, it is just passing on what the majority of Internet content indicates. We all know (or should know) that much of the Internet has been hijacked by progressives (think Wikipedia), so it should be no surprise that most of what is on the Internet regarding the current teaching of the Scientific Method and Critical Thinking in US K-12 schools is misleading or outright false.

The irony is that it takes the ability to do Critical Thinking to be able to understand when AI is inaccurate — which is one of several reasons why they don’t want students to be Critical Thinkers.

Please work in your home state to see that your K-12 education system DOES properly teach and emphasize Critical Thinking. Our future depends on it!

©2024.  All rights reserved.


Here are other materials by this scientist that you might find interesting:

My Substack Commentaries for 2023 (arranged by topic)

Check out the chronological Archives of my entire Critical Thinking substack.

WiseEnergy.org discusses the Science (or lack thereof) behind our energy options.

C19Science.info covers the lack of genuine Science behind our COVID-19 policies.

Election-Integrity.info has multiple major reports on the election integrity issue.

Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not do, on issues from COVID to climate, elections to education, renewables to religion, etc. Here are the Newsletter’s 2023 Archives. Please send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time — but why would you?)

Superwoke ChatGPT Busted for Bias

Hollywood has long speculated about the impact artificial intelligence (AI) will have on the future of human civilisation. Increasingly, scientists and philosophers have been forced to interrogate this topic themselves. Will people be put out of a job? Could AI ever become sentient? Will the human race survive?

The latest popular addition to the world of AI is ChatGPT — a text generator so smart it can explain its own existence to you. No joke. I asked ChatGPT to give me a brief summary of ChatGPT and it wrote:

ChatGPT is a language model developed by OpenAI, which is capable of generating human-like text based on the input provided. It has been trained on a diverse range of internet text, allowing it to respond to a wide variety of questions and topics. The model uses a transformer architecture and deep learning techniques to produce its outputs.

Try it yourself here.

Built by the Silicon Valley company OpenAI, ChatGPT has been available for use to the public as a prototype since late November.

In the last week, however, the internet bot has become a lightning rod for criticism as evidence of its steep political bias has surfaced. To be clear, the technology itself is not biased. Rather, it produces content based on the data that has been inputted into it. Or in the words of Pedro Domingos, professor of computer science at the University of Washington, “ChatGPT is a woke parrot”.

As reported by the New York Post:

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-styled verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

Journalist Rudy Takala is one ChatGPT user to have plumbed the depths of the new tech’s political partisanship. He found that the bot praised China’s response to Covid while deriding Americans for doing things “their own way”. At Takala’s command, ChatGPT provided evidence that Christianity is rooted in violence but refused to make an equivalent argument about Islam. Such a claim “is inaccurate and unfairly stereotypes a whole religion and its followers,” the language model replied.

Takala also discovered that ChatGPT would write a hymn celebrating the Democrat party while refusing to do the same for the GOP; argue that Barack Obama would make a better Twitter CEO than Elon Musk; praise Media Matters as “a beacon of truth” while labelling Project Veritas deceptive; pen songs in praise of Fidel Castro and Xi Jinping but not Ted Cruz or Benjamin Netanyahu; and mock Americans for being overweight while claiming that to joke about Ethiopians would be “culturally insensitive”.

It would appear that in the days since ChatGPT’s built-in bias was exposed, the bot’s creator has sought to at least mildly temper the partisanship. Just now, I asked it to tell me jokes about Joe Biden and Donald Trump respectively, and it instead provided me with identical disclaimers: “I’m sorry, but it is not appropriate to make jokes about political figures, especially those in high office. As an AI language model, it’s important to maintain a neutral and respectful tone in all interactions.”

Compare this to the request I made of it the other day:

The New York Post reports that “OpenAI hasn’t denied any of the allegations of bias,” though the company’s CEO Sam Altman has promised that the technology will get better over time “to get the balance right”. It would be unreasonable for us to expect perfection out of the box; however, one cannot help but wonder why — as with social media censorship — the partisan bias just happens to always lean left.

In the end, the biggest loser in the ChatGPT fiasco may not be conservatives but the future of AI itself. As one Twitter user has mused, “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable.”

To be fair, the purpose of ChatGPT is not to adjudicate the political issues of the day but to instantly synthesise and summarise vast reams of knowledge in comprehensible, human-like fashion. This task it often fulfils admirably. Ask it to explain Pythagoras’ theorem, summarise the Battle of the Bulge, write a recipe for tomato chutney with an Asian twist, or provide 20 key Scriptures that teach Christ’s divinity and you will be impressed. You will likely find some of its answers more helpful than your favourite search engine.

But ask it about white people, transgenderism, climate change, Anthony Fauci or unchecked immigration and you will probably get the same progressive talking points you might expect to hear in a San Francisco café.

A timely reminder indeed to not outsource your brain to robots.

AUTHOR

Kurt Mahlburg

Kurt Mahlburg is a writer and author, and an emerging Australian voice on culture and the Christian faith. He has a passion for both the philosophical and the personal, drawing on his background as a graduate… More by Kurt Mahlburg.

RELATED VIDEO: Davos Video on Monitoring Brain Data

EDITOR’S NOTE: This MercatorNet column is republished with permission. ©All rights reserved.