Surveys are a popular way to study social, political and demographic trends. When articles in the media cite conflicting surveys, it's often difficult for readers to decide which one to trust. A good way to assess surveys is in terms of their validity and reliability. Reliability refers to the consistency of the survey results--in other words, if the test were repeated, would it give the same results? Validity, by contrast, asks whether the survey measured what it was supposed to measure. These parameters can be described quantitatively, but the following steps describe a simple way to think about them qualitatively.
Repeat the survey using a different random sample from the same population. If the survey has high reliability, the results of the two administrations should be consistent. If the results are highly inconsistent, the survey has low reliability. This is the test-retest method of estimating reliability.
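The test-retest idea can be sketched with a small simulation. The population, question, and sample size below are all hypothetical; the point is only that two independent random samples from the same population should produce estimates that differ by no more than sampling error.

```python
import random

def survey_proportion(population, sample_size, seed):
    """Draw a random sample and estimate the proportion answering 'yes'."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    return sum(sample) / sample_size

# Hypothetical population in which 60% hold the surveyed opinion
# (1 = yes, 0 = no).
population = [1] * 6000 + [0] * 4000

first = survey_proportion(population, 500, seed=1)
second = survey_proportion(population, 500, seed=2)

# For a reliable instrument, the two runs should differ only by
# sampling error (a few percentage points at this sample size).
print(f"first run:  {first:.3f}")
print(f"second run: {second:.3f}")
print(f"difference: {abs(first - second):.3f}")
```

If the two estimates diverged wildly, you would suspect either an unreliable instrument or a flawed sampling procedure.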
Include different sets of questions in the survey which measure the same attribute. For example, if your survey is measuring the intensity of religious belief in a given population, you might use questions that consider different opinions or behaviours correlated with strong religious belief--how often the respondent attends church, whether they consider themselves deeply religious, what role religion plays in their lives, etc. If the results from different groups of questions are consistent, the survey has greater reliability than if the results are inconsistent. This is the internal consistency approach to testing reliability.
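Internal consistency can also be computed. One standard statistic (not named in the article, but widely used for this purpose) is Cronbach's alpha; the sketch below uses made-up 1–5 agreement scores for three religiosity items like those described above.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of
    respondents' scores. Values near 1 indicate that the items
    measure the same underlying attribute consistently."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(item_scores)  # number of questions
    item_vars = sum(variance(item) for item in item_scores)
    # Each respondent's total score across all items:
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 1-5 scores from five respondents on three items
# (church attendance, self-identification, role of religion).
items = [
    [5, 4, 2, 1, 3],
    [5, 5, 2, 1, 2],
    [4, 5, 1, 2, 3],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable consistency; the toy data here, where the items track each other closely, score well above that.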
Look for factors that might damage the internal validity of the survey. Internal validity just means the degree to which the survey actually measures what you wanted to measure and not some other unrelated variable. The way in which the questions are worded can make a tremendous difference. A question like "do you support abortion?" may get different answers from a question like "do you support a woman's right to choose?" Often when surveys have poor internal validity it's because the questions were poorly worded or unclear.
Ask how well the survey sample represents the general population. There are two ways to make a survey representative. Researchers could choose a sample that is in every way a microcosm of the larger population--in other words, has exactly the same percentages of people from different income groups, ethnic backgrounds, etc. as the general population. This approach, however, is extremely difficult. Generally researchers will use random sampling--and as long as the sample is truly random and sufficiently large, it will usually be a good measure.
Not all samples that appear to be random are truly random, however. Imagine, for example, that a research group did a telephone survey of adults in an area by calling randomly chosen individuals at 11 AM on weekdays. This sample might appear to be random but is in fact skewed towards stay-at-home parents and retirees, both of whom are more likely to be at home at those times. It's important, then, to ask how the survey was conducted and make sure that the sample was a) randomly drawn from a population that is b) representative of the general population. If the survey doesn't meet these criteria, its results aren't necessarily applicable to groups other than those that participated in the survey.
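One common way to check a sample against the general population is a chi-square goodness-of-fit comparison of the sample's demographic counts with known census shares. The age groups, shares, and counts below are invented to mimic the skewed weekday-daytime phone sample described above.

```python
def chi_square_statistic(observed, expected_shares):
    """Chi-square goodness-of-fit: how far the sample's counts
    deviate from the counts a representative sample would show."""
    n = sum(observed)
    return sum((obs - n * share) ** 2 / (n * share)
               for obs, share in zip(observed, expected_shares))

# Hypothetical census age shares vs. a weekday-daytime phone sample.
census_shares = [0.30, 0.40, 0.30]   # ages 18-34, 35-59, 60+
sample_counts = [120, 280, 400]      # n = 800, skewed towards 60+

stat = chi_square_statistic(sample_counts, census_shares)
# With 2 degrees of freedom, values far above ~6.0 indicate the
# sample does not match the census age distribution.
print(f"chi-square = {stat:.1f}")
```

A very large statistic, as here, is a red flag that the survey's results shouldn't be generalized beyond the kind of people who were actually reachable.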
Look at the sample size. Choosing a larger sample will improve the precision of the results and decrease the margin of error. The margin of error is inversely proportional to the square root of the sample size, however, so increasing the size of the sample improves precision only up to a point.
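The diminishing returns are easy to see numerically. The sketch below uses the standard approximation for the margin of error of an estimated proportion, with p = 0.5 (the worst case) and the 95% confidence multiplier z = 1.96.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size only halves the margin of error:
for n in (100, 400, 1600, 6400):
    print(f"n = {n:5d}: +/- {margin_of_error(n) * 100:.1f} points")
```

Going from 100 to 400 respondents cuts the margin of error in half, but going from 1,600 to 6,400 buys only about one more percentage point of precision, which is why most national polls stop at a sample size of a few thousand.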