When the machines join the meeting room: How I-O psychologists are using AI
Artificial intelligence isn’t just creeping into the workplace; it’s showing up in our own discipline. In our newest publication in The Industrial-Organizational Psychologist, we offer one of the first systematic snapshots of how industrial-organizational (I-O) psychologists are actually using generative AI (Nag et al., 2025). Our team surveyed nearly 500 SIOP members across academia and practice, asking how often they use tools like ChatGPT, what they use them for, and how they feel about them.
If you want to explore the data yourself, you can view the interactive dashboard here: I-O and GenAI Survey Dashboard (Tableau)
What’s happening right now
Our survey found that over 90% of I-O psychologists (both academics and practitioners) use generative AI at least monthly, and more than half use it weekly. Academics are using it for research brainstorming, summarizing papers, and writing code. Practitioners are drafting reports, surveys, and training materials. Some of my favorite visualizations of these data, published in our article, are copied below.
The main benefits people reported were efficiency, creativity, and output quality. The main concerns were accuracy, privacy, bias, and copyright — the same themes dominating broader AI debates.
And perhaps most tellingly, almost two-thirds of respondents think AI will have a positive impact on our field. Only about 5% expect it to make things worse. But optimism varies: consultants and students are the most enthusiastic, while academics are more cautious.
What this means for I-O psychology
Several takeaways stand out.
AI is an amplifier, not a replacement. Most people are using it as a support tool — a faster first draft, a brainstorming partner, a code generator — not as a substitute for theory, judgment, or validation. That’s encouraging. But it also means our professional value may increasingly depend on what we add to AI output: context, ethical reasoning, and methodological rigor.
The adoption gap between practice and academia is real. Practitioners are already integrating AI into daily workflows, while academics lag behind. The reasons are predictable: data privacy, publication ethics, and methodological caution. But if I-O psychology is serious about staying relevant, faculty need to model responsible AI use rather than quietly ignoring it in the classroom.
Our ethical frameworks aren’t keeping pace. The biggest worries — accuracy, bias, privacy — all have direct implications for assessment validity, fairness, and compliance. These are old I-O topics with new twists. The fact that GenAI “hallucinates” means it can produce plausible but false summaries or advice, which is a problem if practitioners take it at face value. The tools are evolving faster than our ethical codes.
What this means for teaching
The divide in attitudes toward AI is especially visible among students. Many are already using these tools regularly and intuitively, sometimes more fluently than their professors. That’s an opportunity: courses in methods, statistics, or organizational psychology can treat GenAI as a case study in human judgment, creativity, and bias.
For example, when my students use ChatGPT to write R code, we discuss why it sometimes fails — because the model doesn’t “understand” data, it predicts text. When they ask it to summarize research, we check whether it fabricates sources. AI becomes the starting point for teaching critical evaluation and replication, not an end-run around them.
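To make that concrete, here is a minimal, hypothetical sketch of the kind of failure we dissect in class (the data frame and variable names are invented for illustration). Asked to correlate two survey scales, ChatGPT will often produce a bare cor() call that looks perfectly reasonable and then returns NA the moment the data contain missing responses, because the model predicted plausible code rather than reasoning about the data:

```r
# Hypothetical toy data: real survey responses almost always contain NAs
df <- data.frame(
  engagement   = c(4, 5, NA, 3, 4),
  satisfaction = c(5, 4, 4, NA, 5)
)

# The plausible-looking line ChatGPT tends to write: returns NA,
# because the default use = "everything" propagates missing values
cor(df$engagement, df$satisfaction)

# The fix students have to supply themselves: drop incomplete cases
cor(df$engagement, df$satisfaction, use = "complete.obs")
```

The error is trivial to fix, which is exactly the point: the classroom conversation shifts from “did the code run?” to “do you understand what the data require?”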
Where research needs to go next
This first wave of data raises more questions than it answers.
What measurable effects does AI have on the validity of assessments or on the quality of consulting recommendations?
How do biases in large language models affect fairness in personnel decisions?
What kinds of organizational cultures encourage (or inhibit) responsible AI use?
How can we integrate AI tools into multivariate and psychometric workflows without eroding scientific standards?
In other words, we don’t just need more AI research — we need better methodological integration of it.
A note of caution
As we emphasized in our paper, AI should be used with I-O psychology expertise, not instead of it. If we let generative models define what counts as “analysis” or “insight,” we risk automating our own obsolescence. But if we bring the same empirical discipline we use for validity and fairness into our AI work, we might actually strengthen the field.
For I-O psychologists, the question isn’t whether we’ll use AI — it’s whether we’ll use it responsibly, transparently, and in a way that still honors our scientific foundations.
Case in point: yes, I used GenAI to help me write this blog post. But the result was heavily edited, drawing on my (expert) experience as an author of the original paper, which helped me refine the post and catch where GenAI got it wrong or missed the emphasis I wanted to make.
We hope this paper sets off a chain of future research or, at the very least, a reckoning across our entire discipline: GenAI is here to stay, and we need to learn how to live with it.
You can read the full article here: Nag, M., Leung, D., Zhou, S., & Belwalkar, B. (2025). Use of artificial intelligence in industrial-organizational psychology: Current trends and future outlook. The Industrial-Organizational Psychologist, 63(2), 63-68.
And again, you can explore the accompanying data visualization: I-O and GenAI Survey Dashboard.