
Weaponizing Peer Review

The Honest Broker

October 3, 2024

In their book, Merchants of Doubt, Naomi Oreskes and Eric Conway argue that scientists “know bad science when they see it”:

“It’s science that is obviously fraudulent — when data have been invented, fudged, or manipulated. Bad science is where data have been cherry-picked—when some data have been deliberately left out—or it’s impossible for the reader to understand the steps that were taken to produce or analyze the data. It is a set of claims that can’t be tested, claims that are based on samples that are too small, and claims that don’t follow from the evidence provided. And science is bad—or at least weak—when proponents of a position jump to conclusions on insufficient or inconsistent data.”

Few would disagree with Oreskes and Conway’s criteria for “bad science,” but how do we use those criteria to distinguish bad science from good science? Oreskes and Conway have an answer (emphasis in original):

“But while these scientific criteria may be clear in principle, knowing when they apply in practice is a judgment call. For this scientists rely on peer review. Peer review is a topic that is impossible to make sexy, but it’s crucial to understand, because it is what makes science science—and not just a form of opinion.”

Oreskes and Conway characterize “Potemkin village science” as the efforts of “merchants of doubt” to make their bad-science arguments look science-like using data and graphs — to fool the uninformed and contest the good science in the peer-reviewed literature. The good guys publish in peer reviewed publications, while the bad guys do not.

The idealization of peer review as the arbiter of good science is problematic for many reasons, but one is that it downplays the possibility that bad science can appear in the peer reviewed literature and good science can appear outside of those outlets.

Today, I focus on the use and abuse of the peer reviewed literature to produce tactical science, which I define as:

Publications — often targeted for the peer reviewed literature — designed and constructed to serve extra-scientific ends, typically efforts to shape public opinion, influence politics, or serve legal action.

I first became aware of tactical science in 2008, when Princeton professor Stephen Pacala explained candidly that his famous 2004 “stabilization wedges” paper (with Robert Socolow) was actually written to serve political ends and to marginalize other researchers:

“The purpose of the stabilization wedges paper was narrow and simple – we wanted to stop the Bush administration from what we saw as a strategy to stall action on global warming by claiming that we lacked the technology to tackle it. The Secretary of Energy at the time used to give a speech saying that we needed a discovery as fundamental as the discovery of electricity by Faraday in the 19th century.

We also wanted to stop the group of scientists that were writing what I thought were grant proposals masquerading as energy assessments. There was one famous paper published in Science that went down the list [of available technologies] fighting them one by one but never asked “what if we put them all together?” It was an analysis whose purpose was to show we lacked the technology, with a call at the end for blue sky research.

I saw it as an unhealthy collusion between the scientific community who believed that there was a serious problem and a political movement that didn’t. I wanted that to stop and the paper for me was surprisingly effective at doing that. I’m really happy with how it came out – I wouldn’t change a thing.”


Since then, I’ve seen tactical science become increasingly prevalent in the climate science literature. Here are just a few examples of tactical science that is also bad science:

None of the three papers above disclosed the interests of funders in the published results of the analyses. If you click through the links above you will find detailed critiques of each paper.1 At the same time, each paper is heavily cited in other research and in political settings because each is tactically useful. 

Each paper is an outlier in the context of the relevant research and is unlikely to change how scientific assessments review the overall literature. Yet each offers a seemingly plausible counter to the broader literature, allowing politically expedient claims to be justified in terms of “science.”2

The three climate papers above are bad science not simply because they are tactical science; they are bad science on the merits, and demonstrating that requires employing good science.

Tactical science occurs in many contexts beyond climate change. For instance, a group of scientists focused on opening up debate on COVID-19 origins has systematically alleged that a suite of papers arguing for a market-based origin are “based on invalid premises and conclusions, or are potentially products of scientific misconduct — including fraud.” Of course, here at THB and for the U.S. Congress I have extensively documented how the so-called Proximal Origin paper provides perhaps the most well-known example of tactical science.

What are we to do about tactical science? I have a few suggestions:

  • First, we should all recognize that peer review provides a minimal standard of review. It certainly does not provide a demarcation between good and bad science. There is plenty of very good science not in peer reviewed journals and plenty of dreck in peer reviewed journals — and this has always been the case, well beyond issues of tactical science.
  • We experts should pay close attention to peer-reviewed papers after they are published — the claims they make, the evidence they employ, the methods they use, and the potentially overlapping interests of funders and authors. In short, a peer reviewed publication represents just one, arguably early, step in evaluating scientific claims. This lesson is one that has motivated interest in recent years in replication and reproduction of scientific analyses.
  • The scientific community needs to do a much better job ensuring that the institutions of science — including but not limited to journals — are simply doing their jobs. For instance, I recently documented how PNAS refused to retract a paper that used a “dataset” that does not exist. I also recently published a paper on the many problems with the politically opportune “billion dollar disaster” tabulation.3 Journal editors, too, are people with tactical interests of their own. An entire Substack could be devoted to tactical science.
  • For journalists and the broader public, it is important to understand that peer review does not organically produce truth and that the same forces that shape public debates over scientific claims also show up in research and publishing. It’s complicated; that is just a fact.

Knowing what is good science and what is bad science is more difficult than ever in 2024. Some offer simple shortcuts to the truth: look at the author’s funding, perhaps their politics or the virtuousness of the causes they support, or appeal to authority or consensus, or ask a fact checker in the media. The broader context within which research takes place is useful to know, but it does not offer us a shortcut to the truth.

As I often say, there is no shortcut — science is the shortcut.

Peer reviewed journals have increasingly become another arena of political conflict on issues that are politically contentious. Science and politics are of course impossible to keep totally separate. However, the good news is that we are well prepared to make judgments between good and bad science, should we wish to. 

This piece was originally published on Roger’s Substack, The Honest Broker. If you enjoyed this piece, please consider subscribing here.


1 I considered adding the publications of the World Weather Attribution group to this list. By their own admission, WWA publications are tactical science — seeking to influence media coverage and support lawsuits. WWA deserves its own post; stay tuned.

2 Recall how one paper claiming the attribution of increasing normalized disaster losses was used by the U.S. National Climate Assessment to counter the results of more than sixty other papers which came to the conclusion that attribution had not yet been achieved. 

3 My paper, which has been downloaded >6,000 times and is in the 95th percentile of Altmetric attention scores, awaits its first citation.

About the Author

Roger Pielke Jr.