Who actually wins when AI comes to work?
There's a popular story about AI and the workplace: the tools are becoming so powerful that they'll level the playing field. A junior analyst with Claude or ChatGPT can now produce work that rivals a senior analyst's. Everyone rises, and the performance gap closes.
A new paper in Human Resource Management argues that story is probably wrong. The authors introduce what they call the "AI-specific Matthew Effect" — borrowing Merton's classic idea that advantage compounds for those who already have it.
The argument, in plain terms: star employees are uniquely positioned to extract disproportionate value from AI, and everyone else may fall further behind.
Three reasons stars pull ahead
To use AI well, you need to know when it's wrong. Star employees already have the domain expertise to direct, evaluate, and refine AI outputs — crafting targeted prompts, catching hallucinations, and knowing which parts of their work benefit from AI support versus which require distinctly human judgment. An average employee using the same tool gets a generic output and often can't tell the difference. Same tool, very different result.
Stars also experiment early. They have the autonomy and risk tolerance to build customized AI workflows before the rest of the organization catches up — before those workflows get standardized and handed down to everyone else. The paper frames this as a first-mover advantage; by the time other employees are using the firm's official AI template, stars have already moved on to something more sophisticated.
And when stars do adopt AI, the gains aren't merely additive. Stars already bring organized workflows and strong metacognitive skills to the table, and AI rewards exactly that kind of structure with multiplicative productivity gains.
The attribution problem
Here's where the argument gets more provocative. Even when AI isn't involved, evaluators may now default to assuming that strong work from average performers was AI-generated, while attributing the same quality output from stars to genuine talent. The result is a double disadvantage for all other employees: they're less equipped to use AI strategically, and more likely to have their contributions discounted regardless.
The authors argue this creates a self-reinforcing cycle. Stars gain more resources, visibility, and opportunities — which widens their ability to use AI effectively — which generates more recognition — and so on. Through that compounding process, they're also better positioned to translate AI-enabled performance gains into actual compensation and career advancement. The performance gap feeds a rewards gap.
To be clear, this is a theoretical paper, so empirical tests are still needed, especially because the equalization hypothesis has real support too (e.g., Noy and Zhang's 2023 experimental study). The authors counter that controlled experiments capture short-term task gains rather than the complex dynamics of knowledge work over time. That's plausible, but it's hard to test.
Why this matters for how we teach
Here's what I keep coming back to: the paper implies that AI literacy is not evenly valuable.
If AI primarily amplifies existing expertise, then the students who benefit most in their careers won't be the ones who are best at prompting. They'll be the ones who understand their domain deeply enough to know when the output is wrong — and skilled enough to push past it. Surface-level AI fluency, without the underlying knowledge to back it up, may not be worth much at all.
That changes how I think about teaching. It's not enough to say "AI is a tool and here's how to use it." We need to teach students how to evaluate AI outputs critically, how to bring real domain knowledge to bear, and how to develop analytical judgment that doesn't outsource easily. If we don't, we risk producing graduates who are efficient but fragile.
This is exactly what I've been thinking about in organizing the 7C AI in Teaching & Learning Symposium on October 23, 2026, alongside colleagues from Claremont Graduate University. Our framing isn't "AI is a threat to academic integrity." That conversation has been had. The question we're asking is: what do students actually need to know to thrive in a world where AI is already embedded in knowledge work?
We have two fantastic keynote speakers — Dr. Fred Oswald (UC Irvine) and Dr. Scott Emett (Arizona State University) — plus talks, panels, and time to connect. Free for all 7C faculty, staff, and students. Registration opens August 2026. Save the date!
The bottom line
Call, Jiang, and Idso (2026) make a compelling case that AI won't be the great equalizer we've been promised. Whether the AI-specific Matthew Effect holds up empirically is an open question. But the framework is useful, and the stakes are real.
For researchers, it opens a rich empirical agenda. For educators and practitioners, it's a prompt to think harder about who benefits from AI adoption — and what we can do about it.
Full citation: Call, M. L., Jiang, K., & Idso, C. (2026). Star advantage: Employee value creation and capture in the age of artificial intelligence. Human Resource Management, 65, 151–167. https://doi.org/10.1002/hrm.70023