There’s a science to good surveys

In addition to blogging about recent publications coming out of our lab, we’ll also share others’ publications and resources we create for people interested in these topics. In today’s post, we discuss Dr. Zickar’s 2020 publication on Measurement Development and Evaluation, which I frequently recommend as the go-to resource for an overview of survey science.

Surveys are everywhere. From employee engagement platforms to leadership diagnostics to dashboards, these data-driven tools promise insight, clarity, and action. You might think it’s easy to create a survey — just open a free account on SurveyMonkey or Google Forms, type out a few questions, and you’re good to go!

Nope — there’s a whole lot more to it. There’s a science to creating and evaluating surveys (it goes by the fancy name “psychometrics”). Zickar’s 2020 publication, available open access (free!), summarizes some of the key points. Whether you’re designing a new scale or selecting an off-the-shelf assessment from a vendor, a basic understanding of psychometrics is vital to ensuring that you’re using good science when conducting surveys. That’s what today’s blog post is designed to help you with — and to cap it off, we provide a handy checklist at the end!

📐 What Good Measurement Looks Like

Dr. Zickar structures the article around two main processes: measurement development and measurement evaluation.

1. Start with a Clear Construct. Good surveys don’t start with items; they start with ideas. What are you actually trying to measure? Psychological safety? Transformational leadership? Work engagement? If you can’t define the construct clearly and link it to theory, no amount of fancy statistics will save you later.

2. Write Items That Reflect the Construct. Item writing is often rushed, but Zickar reminds us it’s an integral part of the process that requires careful planning, review, and iteration. Items should represent different dimensions of the construct; be clear, concise, and appropriate for the audience; and be pretested and refined before deployment.

🔬 How Do You Know If a Survey Works?

Once items are drafted, you don’t just “see what happens.” Zickar walks through the major tools that researchers use to evaluate a measure. These are statistical methods that practitioners should also be aware of when vetting external assessments:

  • ✴️ EFA (Exploratory Factor Analysis): Used in early development to uncover how items naturally group. It helps reveal patterns you didn’t expect and helps break a longer survey down into smaller groups of items, or “dimensions” (for example, we often think of personality as having five dimensions).

  • 🔁 Reliability: Estimates the consistency of scores — the same survey should give the same person similar scores over time, and questions within the same survey should be consistent with one another. The most popular metric is “Cronbach’s alpha” (it typically ranges from 0 to 1, with higher values indicating more consistency), though there are better alternatives that some researchers prefer.

  • 📏 Validity: Does this tool measure what it says it does? There are multiple ways to assess this, but the central idea is that a good survey should show evidence of being related to similar concepts, unrelated to dissimilar concepts, and predictive of relevant outcomes (e.g., if you wish to use a personality survey to decide who joins your team, the survey should predict each individual’s future performance on the team).

  • ✅ CFA (Confirmatory Factor Analysis): Tests how well your theoretical model fits the data. It’s the gold standard when you think you know what structure your survey should have. Look for details like “factor loadings” that tell you how well each item reflects the underlying construct it’s supposed to measure.

  • 🎯 IRT (Item Response Theory): Evaluates how well each item discriminates between people at different levels of the trait. Helpful for test refinement and adaptive testing. These are more advanced analyses that are useful in standardized testing and large-scale assessments.
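For the curious, the reliability idea above can be made concrete. Cronbach’s alpha compares the variance of individual items to the variance of the total score: alpha = k/(k−1) × (1 − sum of item variances / variance of total scores). Here is a minimal sketch in Python with NumPy — the toy data is invented for illustration, not drawn from Zickar’s article:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 Likert-style items (1-5).
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

When items move together across respondents (as they do here), alpha is high; if answers to the items were unrelated, it would drop toward zero.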

💡 Think Theory First, Not Just Data

Zickar’s core argument is that measurement is not just a statistical procedure — it’s a theory-driven, discovery-oriented process. Statistical tools should serve conceptual clarity, not substitute for it. If we’re not careful, we risk measuring what’s easy instead of what actually matters.

👀 Why This Matters for Everyday Society

Our society is being flooded with surveys created by everyday individuals and by large software vendors. Sometimes it’s harmless — who hasn’t taken a BuzzFeed quiz about their Hogwarts house? But other times, especially when untested, unscientific surveys are used to make decisions about hiring or promotions, a lack of awareness of survey science can significantly impact someone’s livelihood.

Whether you’re building your own survey or being pitched one by a vendor, Zickar’s review gives you a roadmap for what good survey science looks like. It gives you the language to ask tough, important questions about what a survey is really measuring, whether the tool is grounded in evidence or buzzwords, and how to interpret results responsibly.

You don’t need to run EFA or IRT yourself. But you do need to know when to ask for it — and when to walk away if the answers don’t hold up.

To help you with evaluating surveys, we’d love to share an easy-to-use checklist with you!

All you have to do is subscribe here to our free monthly newsletter (no spam, we promise!). You’ll get the checklist as a printable PDF in your email. Click here for a quick preview of the checklist!
