Blind spots in blind review

One of my areas of interest is meta-research — sometimes called the science of how we do science. This branch of study seeks to understand how well our systems of knowledge production actually work. Instead of testing a psychological theory or an economic model, meta-researchers study the processes that shape which studies get published, whose careers advance, and what ideas reach the public. Peer review is at the heart of that system, yet it often feels like a black box: opaque, inconsistent, and hotly debated.

A new study by Pleskac, Kyung, Chapman, and Urminsky (2025) cracks open that box in a rare way: through a randomized controlled trial at a major academic conference. While many debates about blind versus unblinded review rely on anecdotes or simulations, this project tested the issue directly, with real reviewers making real decisions about which papers would be accepted.

The Experiment

At the 39th Annual Conference of the Society for Judgment and Decision Making, 530 submissions were each randomly assigned reviewers under both single-blind review (where reviewers saw author identities) and double-blind review (where author names were hidden). Each paper received at least three reviews in each condition, making it possible to compare the two systems directly on reliability, fairness, and validity.

This setup directly tests a major concern often associated with peer review: although it’s meant to be unbiased and rigorous, it still relies on humans making judgments. That can lead to all sorts of biases, such as reviewers giving higher ratings to well-known scholars regardless of the actual quality of the work being reviewed.

What They Found

  • Reliability: The most striking finding was just how inconsistent peer review is. Fewer than half of the top-rated submissions overlapped across the single-blind and double-blind conditions, and reviews were more consistent for low-rated submissions than for high-rated ones (see the illustrative sketch after this list). Whether a paper was accepted for a talk depended heavily on noise, not just merit.

  • Fairness: Single-blind reviews advantaged papers with senior coauthors and those led by PhD students or research scientists, but disadvantaged Asian first authors relative to White peers. Male first authors were rated higher in both systems, though this bias was stronger under double-blind review. Surprisingly, submissions with more male coauthors were penalized more under double-blind review. Blinding didn’t erase bias; it just shifted its form.

  • Validity: Neither system predicted which talks would be judged highest in quality, draw the largest audiences, or spark the most questions when presented at the actual conference. Both systems did modestly predict poster ratings and eventual journal publication, but overall the signal-to-noise ratio was weak.
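
To make the reliability point concrete, here is a minimal, purely illustrative sketch (not the authors’ analysis code) of the kind of comparison involved: it simulates noisy ratings for the same set of submissions under two independent review conditions and asks what share of the top-rated papers would be selected under both. The noise level and acceptance cutoff are hypothetical assumptions; only the count of 530 submissions comes from the study.

```python
# Illustrative sketch only, not the authors' analysis code.
# Assumption: each submission has an underlying "quality" plus independent
# reviewer noise in each condition; the noise scale and talk cutoff are made up.
import numpy as np

rng = np.random.default_rng(seed=0)

n_submissions = 530   # number of submissions reported in the study
k = 100               # hypothetical number of talk slots

# Simulated mean ratings under each review condition
quality = rng.normal(size=n_submissions)
single_blind = quality + rng.normal(scale=1.5, size=n_submissions)
double_blind = quality + rng.normal(scale=1.5, size=n_submissions)

# Which submissions land in the top k under each condition?
top_single = set(np.argsort(single_blind)[-k:])
top_double = set(np.argsort(double_blind)[-k:])

overlap = len(top_single & top_double) / k
print(f"Share of top-{k} picks shared by both systems: {overlap:.0%}")
```

Even when the underlying quality is identical, independent reviewer noise alone can push the overlap between the two “accepted” sets well below 100 percent, which is the flavor of inconsistency the study documents with real reviews.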

Why It Matters

This study shows that peer review is not only biased but unreliable and misaligned with the outcomes we often care about most. Switching from single- to double-blind review does not guarantee greater equity. Instead, it changes who benefits and how biases operate, without solving the deeper issue: peer evaluation is noisy, inconsistent, and only weakly predictive of long-term impact.

The Bigger Picture

Peer review is the bottleneck through which most scientific knowledge must pass. If that bottleneck is noisy or inequitable, it shapes not just individual careers but the flow of knowledge itself. At the same time, I think we all agree that some sort of gatekeeping role is necessary to ensure that publications and presentations meet a standard of quality.

That makes studies like this central to open science, equity, and ethics. Improving peer review isn’t simply an academic housekeeping issue — it’s about building systems that earn the public’s trust and ensure that what gets disseminated reflects the best of science, not the quirks of its evaluators.

Interested in learning more? You can read the full paper here. My lab is also working on related research in how faculty co-author with one another, and how the public reacts to predatory journals. Find out how to get involved here!
