The gatekeepers of academia: What our new study reveals about bias in publishing

Academic publishing is often described as the “currency” of higher education—fueling careers, tenure decisions, and reputations. But what happens if that currency isn’t distributed fairly? Are journals more likely to publish some perspectives than others, especially on controversial topics?

That’s the question we set out to explore in our new paper, The Gatekeepers of Academia: Investigating Bias in Journal Publication Across Topics, Author Backgrounds, and Institutions, now out in Learned Publishing (Zhou, Lebrecht, Pithayarungsarit, & Monke, 2025). A big thanks goes to FIRE’s Free Inquiry Grant for funding this project!

What We Did

We assembled a dataset of more than 20,000 published articles across 12 hot-button topics, from DEI training to gun control to standardized testing. Human coders and AI (using few-shot learning) rated each article abstract on a five-point scale from liberal-leaning (1) to conservative-leaning (5). We then linked those ratings to journal impact factors, author demographics, institutional characteristics, and article metrics such as citations and Altmetric scores.
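
For readers curious about the mechanics, here is a minimal sketch of what few-shot rating with a chat-style LLM can look like, written in Python against an OpenAI-style chat API. The model name, prompt wording, and labeled examples are hypothetical placeholders, not the exact setup from the paper.

    # Minimal sketch: few-shot political-lean rating of abstracts with an LLM.
    # Model name, prompt wording, and labeled examples are hypothetical
    # placeholders, not the setup used in the paper.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Rate the political lean of a research abstract on a 1-5 scale: "
        "1 = strongly liberal-leaning, 3 = neutral, 5 = strongly "
        "conservative-leaning. Reply with the number only."
    )

    # Hand-labeled examples supply the few-shot context (contents invented here).
    FEW_SHOT = [
        {"role": "user", "content": "Abstract: DEI training improved feelings of belonging among minority employees..."},
        {"role": "assistant", "content": "2"},
        {"role": "user", "content": "Abstract: Stricter gun laws showed no measurable effect on violent crime..."},
        {"role": "assistant", "content": "4"},
    ]

    def rate_abstract(abstract: str) -> int:
        """Return the model's 1-5 lean rating for a single abstract."""
        messages = (
            [{"role": "system", "content": SYSTEM_PROMPT}]
            + FEW_SHOT
            + [{"role": "user", "content": f"Abstract: {abstract}"}]
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
            temperature=0,  # keep ratings as deterministic as possible
        )
        return int(response.choices[0].message.content.strip())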

What We Found

The overall story is surprisingly balanced:

  • Across all topics, articles leaned slightly liberal, but the average was close to neutral (2.7 on a 1-to-5 scale, where 3 is the midpoint).

  • Topic mattered most. DEI training articles were strongly liberal; social media and mental health articles leaned slightly conservative.

  • High-impact journals were not more biased. If anything, they published work that trended more centrist.

  • Author demographics and institutional prestige had very weak relationships with political lean. The one small effect: authors presenting as women published slightly more liberal-leaning articles.

  • Articles that gained more readers or social media traction were also a bit more centrist than average.

Why It Matters

There’s a longstanding worry that journals act as ideological gatekeepers, publishing only certain kinds of results and leaving the rest in the “file drawer.” Our study challenges that claim. On controversial issues, political lean was not the main driver of publication outcomes. Instead, research quality and topic seemed to matter most. That’s good news for the robustness of the scientific discovery and publication process!

That said, our results don’t let the system off the hook. The fact that some topics (like DEI training) show consistent one-sided publication patterns could still reflect self-censorship by authors, editorial gatekeeping, or simply gaps in the research base. Without data on unpublished work, we can’t fully untangle which forces are at play.

Looking Ahead

We see two key implications:

  1. For researchers: Don’t assume your work will be dismissed simply because of its political valence — our data suggest journals are more even-handed than critics fear.

  2. For the field: We need more transparency in how editors and reviewers handle controversial submissions, and more attention to unpublished studies that never see daylight.

Finally, on the methods side, our project demonstrates how fine-tuned GPT models can complement human coding in large-scale content analysis, thus opening new possibilities for studying bias in science itself.
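
To make “complement” concrete: a standard validation step is to check agreement between human and model labels on a shared subset of abstracts. The sketch below uses quadratically weighted Cohen’s kappa from scikit-learn; the ratings are invented, and this is a generic pattern rather than our exact procedure.

    # Minimal sketch: agreement between human coders and model ratings.
    # The ratings below are invented for illustration.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    human = np.array([2, 3, 3, 1, 4, 3, 2, 5])  # human 1-5 ratings (hypothetical)
    model = np.array([2, 3, 2, 1, 4, 3, 3, 5])  # LLM ratings, same abstracts

    # Quadratic weighting credits near-misses on an ordinal scale:
    # a 2-vs-3 disagreement is penalized far less than a 1-vs-5 one.
    kappa = cohen_kappa_score(human, model, weights="quadratic")
    print(f"Quadratically weighted kappa: {kappa:.2f}")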

Want to read the full paper? It’s open access here: https://doi.org/10.1002/leap.2022
