What does it mean to be a “successful researcher”?

I have a confession to make: I’m not sure if I’m going to succeed as a research scholar.

For some of you, that may sound like a ridiculous statement coming from me. Admittedly, I’ve had some pretty decent “successes” in academic research.

But here's the thing: I'm not convinced we, as the community of scholars, even know how to best measure what “success” as a researcher looks like. Increasingly, I'm convinced that we've built an entire professional ecosystem around metrics we know are fundamentally broken, and our proposed solutions mostly involve... different broken metrics.

And as an early career scholar, that leaves me feeling lost and uncertain about where to spend my time and energy and how I’m supposed to pursue a “successful” career.

Why the current incentive structure doesn’t work

You’ve probably heard “publish or perish.” It’s shorthand for how researchers get evaluated: the quantity of published articles, the quality of the journals where those articles appear (usually measured by journal impact factor), and the impact of the research output (usually measured by citation-based metrics such as the h-index). Don’t meet your institution’s targets? Find a different job.

We treat this phrase as hyperbole, but it’s more accurate than we’d like to admit. Higher education leaders have to evaluate large numbers of faculty across diverse fields. There isn’t time to deeply consider every faculty member’s work. We have to rely on countable metrics because we need some way to make decisions tractable.

This means that, as someone going up for tenure in the next six years or so, I have to optimize my research to meet the metrics expected at my institution. I’m lucky that Claremont is small enough that we get more individualized evaluation, and I have incredibly supportive colleagues — but even then, I find myself asking questions like: Would this project end up in a top-tier journal? How many projects should I be juggling, in hopes that a few make it to the finish line? Will this project take too long to complete, such that it won’t count towards my record by the time I go up for review? Am I collaborating with the right people, who will also publish in the journals I’m targeting?

To be clear, I’m not talking about research fraud — though that is certainly a concern that’s on the rise. These are rational strategies, ones that I’ve been taught and advised to pursue, for meeting the metrics I’m evaluated against.

(As an aside, I’ve heard horror stories of all the ways the publish or perish culture could lead to questionable research practices or outright fraud. Some have said that these problems are threatening the whole research enterprise.)

Meanwhile, for early career scholars like me, the innovations we claim to value are actively discouraged.

Interdisciplinary research is needed to solve complex modern problems — but if we’re rewarded for sticking to the top journals in our field, there’s no incentive to do the hard work of publishing in other disciplines. Journal paywalls are a major concern, but up-and-coming open access journals (the ones that don’t charge an outrageous fee) won’t attract scholars until tenure committees recognize them, which requires scholars to publish there first. We need more replication research, but until replications get as much recognition as novel findings, we won’t be incentivized to spend our limited time on a replication study.

The system is broken. I feel the pressure to “play the game,” to find the right strategies to meet the required metrics, and any time I spend pursuing projects or initiatives that I think will have more societal impact is time taken away from the things that are measured for tenure.

Why new initiatives still don’t fix the core problem

I’ve seen some exciting initiatives to try to change the system — different metrics, or different approaches to research that might alleviate some of this pressure.

But I’m skeptical. Every alternative I’ve seen is just as gameable, given enough time and pressure.

For example, Altmetrics and other proposed indices attempt to quantify the public impact of a scholar’s work (rather than just citations). Great — now we’re incentivized to chase flashy headlines over theoretical depth. Several other alternatives to impact factors have been proposed, and some schools avoid metrics altogether, instead identifying a list of top journals that is updated regularly based on faculty input. Great — now we’re incentivized to target only those journals, and we may even be punished for publishing elsewhere. Others propose limiting CVs to 3-5 key contributions to force evaluation of quality. Great — now we’re incentivized to avoid the exploratory work and incremental studies that enable breakthroughs, focusing only on the big contributions.

Preprints are on the rise. They’re spreading knowledge faster and enabling open feedback, which is genuinely useful. But they’re also being gamed to artificially drive up citation counts, creating confusion when cited in policy documents without peer review markers, and wreaking havoc on h-index calculations since Google Scholar counts them.

Then there’s the Open Science pitch — preregistration, open data, open methods. It’s made real progress in some fields in changing the culture of research and putting up safeguards against questionable research practices, and I’ve been grateful to be able to participate in some of these initiatives.

But consider what we're actually asking of early career scholars. Preregister your hypotheses (adding weeks to your timeline). Share your data and materials (unpaid labor that doesn't count toward tenure). Publish null results (use a publication slot on something most hiring committees currently undervalue).

The same logic applies to “slow science” — the idea that we should publish less and focus on quality. It sounds wonderful until you’re an assistant professor and spending three years on one important paper means professional suicide while other scholars (against whom you are likely evaluated) produce three papers annually.

To be clear, these aren’t bad ideas. Some could genuinely improve things. But they’re not yet widespread, and when they do become standard practice, someone will figure out how to optimize for them. The incentive structure remains the same, just with different metrics to game.

And for those who refuse to play the game… well, maybe that’s why there’s a rise in scholars leaving academia for applied research careers.

So what do we do now?

I’m not criticizing anyone for playing the game. If early career scholars are expected to produce X publications in journals with Y impact factors, then that’s what we’ll produce and where we’ll focus our time.

I would love to embrace new initiatives to improve science, and to some extent, I’m trying to in my own work. But I believe that we can't fix research practices without first fixing the incentive structures that punish those practices.

How do we do that? I have no idea. That’s why I’m writing this: I’m hoping someone reading it has better solutions. To be honest, I’m not even sure there is a solution that doesn’t create new problems.

I hope that what I’m feeling resonates with at least some of the people reading this. I’m not writing this just to complain. I’m sticking with academia, trying to find ways to “play the game” without losing what I believe is truly important and valuable about research, while leaving time to invest in areas that don’t have immediate career rewards but that I hope will have long-term societal impact.

If there are others like me out there, then perhaps, over time, we can collectively find a way to improve the system.

At the very least, we shouldn’t turn a blind eye to the problems in the system — and this is my attempt to call them out publicly for discussion.
