You finally got your Chapter 3 (Methodology) approved. It is time to collect data. You build a 50-question Google Form, email it to 500 professionals, and wait for the data to roll in.

A week later, you have 12 responses. Worse, the people who did reply abandoned the survey halfway through. And when you finally scrape enough data together to show your committee, they reject it because your instrument lacks “Validity and Reliability.”

At McKinley Research, we see hundreds of scholars hit this exact wall. Writing a questionnaire seems easy—until you realize that asking the wrong question ruins your entire study.

Here are the 4 most common mistakes researchers make when designing surveys, and how to fix them.

1. The “Double-Barreled” Question

This is the fastest way to confuse your participants and destroy your data. A double-barreled question asks two different things but only allows for one answer.

  • The Mistake: “Please rate your agreement: The software is fast and easy to use.”
  • The Problem: What if the software is fast, but incredibly difficult to use? The participant doesn’t know how to answer, so they just click “Neutral” or close the window.
  • The Fix: Split it up. Question 1: “The software is fast.” Question 2: “The software is easy to use.”

2. Ignoring Reliability (The Cronbach’s Alpha Trap)

Your committee will ask: “Is your survey reliable?” They are asking whether your questionnaire produces consistent results: do participants give consistent answers across the items that are supposed to measure the same construct? (Whether it measures the *right* thing is validity, a separate test.) If you just made up 20 questions off the top of your head, your survey is likely unreliable. In quantitative research, internal consistency is measured by a statistic called Cronbach’s Alpha. A score below 0.70 is generally considered unacceptable, and most committees will reject data collected with that instrument.

  • The Fix: Do not reinvent the wheel. Use “Adopted” or “Adapted” questionnaires—surveys that have already been validated by previous researchers in published journals. If you must build your own, McKinley Research can help you run the statistical tests required to prove its reliability.
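If you are curious what that statistical test actually computes, Cronbach’s Alpha is simple enough to sketch by hand. The formula is α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores), where k is the number of items on the scale. Below is a minimal sketch in Python; the `pilot` matrix is hypothetical example data, not from a real study.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a (participants x items) matrix of Likert scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                          # number of items in the scale
    item_vars = responses.var(axis=0, ddof=1)       # sample variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 participants x 4 Likert items (1-5 scale)
pilot = [[4, 5, 4, 4],
         [2, 2, 3, 2],
         [5, 4, 5, 5],
         [3, 3, 3, 4],
         [1, 2, 1, 2]]
print(round(cronbach_alpha(pilot), 2))  # → 0.96, well above the 0.70 threshold
```

In practice you would run this on your pilot-study responses (or let SPSS’s Reliability Analysis do it), but seeing the formula makes the 0.70 threshold less mysterious: alpha rises when items vary together, i.e., when they measure the same thing consistently.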

3. Survey Fatigue (The 10-Minute Limit)

You care deeply about your research. Your participants do not. If your survey takes 25 minutes to complete and requires them to write three paragraphs, they will abandon it.

  • The Fix: Be ruthless with your questions. Look at your Research Questions (RQs). If a survey item does not directly answer one of your RQs, delete it. Keep the format simple: use a standard 5-point Likert Scale (Strongly Disagree to Strongly Agree) so participants can establish a rhythm. A good survey should take no more than 7 to 10 minutes to complete.

4. Skipping the Pilot Study

You would never launch a software product without beta testing it. Why would you launch your academic data collection without a pilot test?

  • The Mistake: Sending the survey to all 500 people on day one.
  • The Fix: Send it to 15-20 people first. Ask them: Were any questions confusing? Did the link work on your phone? How long did it take you? Fixing a typo or a confusing phrase during the pilot phase saves your entire dataset from being compromised.

Conclusion

Your data analysis (Chapter 4) is entirely dependent on the quality of the data you collect. If you feed bad questions into SPSS, you will get bad results out of it. Garbage in, garbage out.

Not sure if your survey will pass committee review? Send your draft questionnaire to McKinley Research. Our methodological experts will review your items for bias, ensure alignment with your research questions, and help you establish the validity and reliability you need to defend your work.