The topics we select for VeraSpectives, our blog, typically focus on news stories and events and how they resonate with the American public. Our frequent field schedule and omnibus format give us a platform for reporting on serious issues, such as mandatory vaccination requirements, as well as lighter topics, such as whether it’s ever okay to recline your seat on an airplane. We enjoy jumping into the fray on social topics armed with actual data from actual Americans.
That said, as much as we have fun identifying social topics that we think will interest the American public, our expertise stems from our understanding of the protocols and practices that make for accurate and insightful research. With that as a framework, we’d like to “borrow” VeraSpectives for a few posts to share some of this expertise.
For our first installment, we’d like to briefly address the important relationship between weighting and sampling, and how to use both to maximize your study’s data quality…
Virtually all marketing research companies that survey the general population weight their data in some capacity. Weighting allows respondents who are under- or over-represented in the sample to assume their correct level of importance, so that the data accurately project to the universe. For example, we know from census data that 18-to-29-year-old males comprise 11.2% of the U.S. general population. But sampling is never perfect. If these 18-to-29-year-old males come in at 8% of your survey sample, respondents in this group need to be “weighted up”; conversely, if they come in at 15%, they need to be “weighted down”. This ensures that their opinions “count” to the extent they should, relative to other groups in the population. The same adjustment is made across a wide variety of population segments, beyond just age and gender.
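To make the arithmetic concrete, here is a minimal sketch (in Python) of how a weight for a single segment can be derived; the function name is ours, and the figures simply restate the 18-to-29 male example above:

```python
# Minimal sketch: deriving a post-stratification weight for one segment.
# The target and sample shares below restate the 18-29 male example.

def segment_weight(target_share: float, sample_share: float) -> float:
    """Weight applied to every respondent in a segment so that the
    segment's weighted share matches its known population share."""
    return target_share / sample_share

# 18-29 males are 11.2% of the population (census) but only 8% of the sample:
print(segment_weight(0.112, 0.08))   # ~1.40 -> each respondent "weighted up"

# ...or they came in at 15% of the sample:
print(segment_weight(0.112, 0.15))   # ~0.75 -> each respondent "weighted down"
```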
Now, the more the profile of the sample coming into your survey differs from the known profile of the general population (based on U.S. census data), the heavier the weighting required.
The trick is that, at some point, the act of weighting can actually diminish, rather than enhance, data quality. Weighting increases variability in the data: the more weighting that is done, the greater the error range. So, for instance, a sample of 500 respondents may actually “act” like a sample of 250 respondents after weighting, which makes it harder to find significant differences between subgroups. You may spend extra money on a project in order to gain the reliability of a larger sample size, but that extra money is wasted when, because of extensive weighting, the “effective base” (the “true” sample size after weighting effects are taken into account) is much smaller.
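This loss can be quantified with Kish’s effective sample size formula: the square of the sum of the weights divided by the sum of the squared weights. Here is a short, self-contained sketch; the weight distribution is invented purely to illustrate a 500-respondent sample that behaves like 250:

```python
# Sketch of Kish's effective sample size: heavily weighted data behaves
# like a smaller sample. The weights below are illustrative only.

def effective_n(weights):
    """Kish effective sample size: (sum of weights)^2 / (sum of squared weights)."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# 500 respondents: 100 scarce respondents weighted up to 3.0,
# 400 plentiful respondents weighted down to 0.5.
weights = [3.0] * 100 + [0.5] * 400

print(effective_n(weights))  # 250.0 -- the 500-person sample "acts" like 250
```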
To minimize weighting effects, you need a rigorous sampling plan that increases weighting efficiency, i.e., one that brings your incoming sample profile as close as possible to the actual proportions in the population, keeping weighting to a minimum. While four-dimension quota sampling plans are the norm, it’s our experience that a more robust six-dimension quota sampling plan gets you closer to the Census targets and therefore requires less weighting.
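As a hypothetical illustration of how quota cells keep an incoming sample near its targets, here is a two-dimension sketch; the shares, cell labels, and total are made up for illustration, and a real six-dimension plan would simply cross more variables (e.g., region, income, education) the same way:

```python
from collections import Counter

# Hypothetical quota plan on two dimensions (gender x age band).
# Shares are illustrative, not actual census figures, and sum to 1.0.
quota_share = {
    ("male", "18-29"): 0.112, ("female", "18-29"): 0.108,
    ("male", "30-49"): 0.17,  ("female", "30-49"): 0.17,
    ("male", "50+"):   0.21,  ("female", "50+"):   0.23,
}
TOTAL_COMPLETES = 500
targets = {cell: round(share * TOTAL_COMPLETES)
           for cell, share in quota_share.items()}
completes = Counter()

def accept(gender: str, age_band: str) -> bool:
    """Admit a respondent only while their quota cell is still open."""
    cell = (gender, age_band)
    if completes[cell] < targets.get(cell, 0):
        completes[cell] += 1
        return True
    return False  # cell full -> screen out, keeping the sample near targets
```

Because each cell stops accepting respondents once it hits its target, the completed sample’s profile lands close to the population proportions before any weighting is applied.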
Again, an incoming survey sample will never perfectly represent the population without weighting, but the closer you can get (prior to weighting), the better your data quality…and the better the bang for your buck.
Check out our next installment of VeraQuest on Research, where we’ll cover how to make sure that respondents see your survey graphics as you want them to be seen.