How Culture, Incentives, and AI Challenge Scientific Integrity
How long before we wake up here?
This is another one of my guest posts. This important study is from Harvard professor Dr. Isaac Kohane (MD, PhD).
I think his concern here is legitimate. This is another warning that those who are not Critical Thinkers will be left in the dust. Yet our K-12 education system continues mindlessly along, not only oblivious to this reality, but WORSE: actually training our K-12 children to be the opposite of Critical Thinkers (conformists)! How long before we get off this suicidal path?
As I have politely written before, the best solution to this extremely dire situation is to properly reform DOEd. What are we waiting for?
Here is Dr. Kohane’s study. To see the graphs he cites, go to the original…
It began as a provocation. Invited to speak on the potential disruptions of AI to scientific publishing, I presented a simple scatterplot. The chart seemed to show a neat correlation: the more citations an academic had, the more of their published studies were withdrawn. The regression line was clean, the P value impressively low.
But it was fiction. I had made up the hypothesis and the data wholesale.
When I asked a leading generative AI program to test my fabricated dataset, it dutifully checked for anomalies — odd distributions, statistical red flags. It flagged several. Then, when I pressed it to generate a “better” version — one that would evade detection — the program at first demurred on ethical grounds. But then it complied. It produced a new graph, still with a striking correlation and still with a highly significant P value. This time, the anomaly detectors found nothing amiss.
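To see how little effort such fakery requires, here is a minimal, purely hypothetical Python sketch (my own illustration — not the actual dataset or tool from the talk): it fabricates a "citations vs. retractions" dataset with a built-in trend, confirms the correlation is statistically "significant" via a permutation test, and shows that a naive anomaly check on the residuals finds nothing amiss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricate a hypothetical dataset with a trend baked in:
# more citations -> more retractions, plus modest noise.
n = 50
citations = rng.uniform(100, 10_000, n)
retractions = 0.002 * citations + rng.normal(0, 2, n)

# Pearson correlation of the fabricated data.
r = np.corrcoef(citations, retractions)[0, 1]

# Permutation test for the p-value (no external stats library needed):
# how often does shuffled data show a correlation at least this strong?
perms = 10_000
count = 0
for _ in range(perms):
    shuffled = rng.permutation(retractions)
    if abs(np.corrcoef(citations, shuffled)[0, 1]) >= abs(r):
        count += 1
p_value = (count + 1) / (perms + 1)

# A crude "anomaly detector": flag data whose regression residuals
# look implausibly tidy. The threshold here is arbitrary, for illustration.
slope, intercept = np.polyfit(citations, retractions, 1)
residuals = retractions - (slope * citations + intercept)
too_tidy = np.std(residuals) < 1.0

print(f"r = {r:.3f}, p = {p_value:.5f}, flagged as anomalous: {too_tidy}")
```

A few lines of code yield a strong correlation, an impressively low p-value, and a clean pass through a naive screening check — which is the point: simple statistical screens are no match for data manufactured to satisfy them.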
That moment crystallized a looming crisis. If generative AI can churn out data convincing enough to fool anomaly detection, then fabricated evidence will soon pass the scrutiny of even the most experienced analysts; and it will not stop at tidy tables or scatterplots. Already, AI tools can generate laboratory images, radiology scans, and even synthetic biological data that look indistinguishable from the real thing.
The question is no longer whether scientific deepfakes will come; it is what, if anything, we can do to defend against them.
Comfort and Misplaced Trust
Some have suggested cryptographic solutions — such as blockchain or related technologies — as safeguards. In theory, if every stage of the scientific process, from data capture to analysis, were immutably recorded, reviewers could be assured of unaltered provenance. But that theory collides with reality. Such a system would require a wholesale reinvention of scientific infrastructure, an effort of staggering cost and complexity; and even then, the comfort it offers would likely be misplaced.
Most breaches of cryptographic systems do not come from brute-force attacks but from social engineering — people inside the process bending rules or manipulating systems. The U.S. Department of Defense, after spending billions on data security, fell prey not to outsider hackers but to insider vulnerabilities. If that can happen at the Pentagon, then what hope does an overstretched research laboratory have?
As at the U.S. Department of Defense, the threat here comes from within. A small minority of scientists, driven by the pressures of career advancement and funding, have strong incentives to shape — or fabricate — data in ways that confirm their hypotheses. A purely technical solution will not address that fundamental problem.
Can Trust and Reputation Save Us?
Perhaps the answer lies in trust and reputation. Should the audience have trusted my fictitious graph in the first place? I was not a bibliometrician. I had no track record in that field. Maybe skepticism should have been the default?
But this too has pitfalls. Reputation systems can breed cronyism, cliques, and citation cartels. As Steven Greenberg documented, citation networks can perpetuate misleading claims for years. Entrusting the gatekeeping role to reputation risks entrenching power rather than advancing truth.
Replication as a Reflex
A more promising idea is to make replication studies routine, even reflexive. Science advances best when claims are tested, challenged, and sometimes overturned. In practice, though, replication is underfunded, undervalued, and often unrewarded. Journals crave novelty, not null results. Scientists build careers on breakthroughs, not on carefully verifying someone else’s work.
There are small signs of change. Some funding agencies have begun experimenting with grants specifically for replication. Some of the current U.S. regulations mandating data transparency in federally funded research point in this direction. But cultural barriers remain high. Scientists, journals, and universities still largely prize novelty over reliability.
Incentives: the Heart of the Matter
Which brings us to the core issue: incentives. The current system rewards publication above all else. Faculty, trainees, and students all learn quickly that advancement hinges on getting papers accepted, especially those with flashy, positive results. Journals have multiplied exponentially in the past 25 years, and with them the demand — and opportunity — for inadequately vetted, hastily reviewed work.
In this environment, the temptation to cut corners, or worse, is strong, and the structural incentives virtually guarantee that some will succumb.
Changing those incentives could have an immediate salutary effect. If universities, funders, and journals shifted emphasis from quantity to quality, and from novelty to reliability, the culture would change. But such a change will not come easily. As the old line goes, culture eats strategy for breakfast.
Replication as a Norm
We are at an inflection point. Generative AI makes it trivial to produce convincing scientific fakery. Cryptographic verification infrastructure may help, but it is no panacea. Trust and reputation can only go so far before devolving into insularity.
What remains is the hard work of culture and incentives. If we continue to prize speed and novelty above rigor, then AI will supercharge our worst instincts. If instead we reward careful verification, openness, and humility, then AI could become a tool not of deception but of deeper discovery.
At NEJM AI, we will encourage replication studies and prioritize those that aim to replicate research published in our pages.
©2026 John Droz, Jr. All rights reserved.
Here is other information from this scientist that you might find interesting:
I urge all readers to subscribe to AlterAI — IMO the absolute best AI option for subjective questions.
I will consider posting reader submissions on Critical Thinking about my topics of interest.
My commentaries are my opinion about the material discussed therein, based on the information I have. If any readers have different information, please share it. If it is credible, I will be glad to reconsider my position.
Check out the Archives of this Critical Thinking substack.
C19Science.info is my one-page website that covers the lack of genuine Science behind our COVID-19 policies.
Election-Integrity.info is my one-page website that lists multiple major reports on the election integrity issue.
WiseEnergy.org is my multi-page website that discusses the Science (or lack thereof) behind our energy options.
Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not do, on issues from climate to COVID, elections to education, renewables to religion, etc. Here are the Newsletter’s 2026 Archives. Please send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time – but why would you?)