Creating Pillars of Quality for Respondent Sampling

“There are existing standards for sampling practices,” says Frank Findley, “but there is growing sentiment that they are not substantial enough for modern-day research. We’re looking to create pillars of quality that potentially extend the existing standards.”

Findley, Executive Director of the Marketing Accountability Standards Board, led the Respondent Sample Quality Panel at MASB Winter Summit 2018, last week in New Orleans.

Research providers were represented by Art Klein, Managing Partner at MSW•ARS Research; research practitioners by Paul Donato, Chief Research Officer at the Advertising Research Foundation (ARF); and marketplace technology by Courtney Williams, Executive Director of Quality at Lucid.

How is fragmentation in device usage impacting sampling practices?

Donato: “Digital advertising is a $75-billion-a-year business; 50 percent is mobile, and 50 percent goes to two companies, Facebook and Google. I don’t hear anything about quality of sampling in digital advertising. We’re living in a world where every day there is more and more machine data available, and the role of sample quality, the role of panels, is going to be to clean, fix, address, and conform that machine-level data so that it actually reflects the whole population. A panel can be used to model behavior that isn’t available in machine data.”
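Donato’s point about using panels to make machine-level data reflect the whole population is, in practice, often handled with weighting techniques such as post-stratification. The sketch below is illustrative only – the demographic cells, proportions, and function name are hypothetical assumptions, not anything described by the panel:

```python
# A minimal sketch of the kind of correction Donato describes: post-stratification,
# where known population demographics (e.g., from a representative panel or census)
# are used to reweight machine-level records. All cells and shares are illustrative.

from collections import Counter

# Hypothetical age cells observed in a machine-level data set
machine_records = ["18-34"] * 700 + ["35-54"] * 200 + ["55+"] * 100

# Assumed population proportions from a trusted reference source
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

def post_stratification_weights(records, population_share):
    """Weight each cell by population share / sample share."""
    n = len(records)
    sample_share = {cell: count / n for cell, count in Counter(records).items()}
    return {cell: population_share[cell] / sample_share[cell]
            for cell in sample_share}

weights = post_stratification_weights(machine_records, population_share)
print(weights)  # roughly {'18-34': 0.43, '35-54': 1.75, '55+': 3.5}
```

Under-represented cells (here, older respondents) get weights above 1 and over-represented cells get weights below 1, so weighted tallies approximate the population.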

Sample Quality Panel: Donato, Williams, Klein and Findley

Williams: “Our industry has traditionally done a poor job of keeping up to speed with technology, I would suggest, but technology makes it easier to meet people where they want to take surveys, where they want to give their opinion. We need to do everything we can to embrace technology, and to improve and streamline it, to make it easier for people to tell us what they think.
“Security and multiple device access points are both major challenges facing the industry. A couple of things that come to mind immediately are how we can use metadata during the survey experience to identify potentially fraudulent behavior, and how to tackle identifying respondents across devices. Having a respondent who might take a survey on a phone for one panel company and on a desktop for another is a challenge. Cross-device recognition is facilitated by exchanges, but the challenge is real and super-complex.”

Klein: “The problem is that the device can affect the survey itself. Survey results can vary from device to device and from time to time; a survey is not equal across devices. You have to consider whether a survey is truly optimized for mobile or just doable on mobile, whether you want mobile respondents in the sample or not, and whether the survey is strictly mobile. My clients look to me for those answers. A lot of companies are ultimately not delivering quality samples.”

One of the largest concerns is that “convenience sampling” is leading to erroneous conclusions, especially for Artificial Intelligence and Big Data applications where respondents are primarily existing brand users. How can this be addressed?

Williams: “Your screeners are very important; how we write a screener to bring people in to take a survey instrument, and qualify them as exactly who you want to interview in that environment in a way that’s not ‘leading,’ matters just as much. As we advance in our understanding of leveraging technology, and as Artificial Intelligence gets smarter, more solutions will become available. The responsibility for what a sample frame looks like falls squarely on the researcher. If not, there must be agreement on a project basis as to who’s responsible, so there’s no confusion.

“The industry needs to recognize that there are varying degrees of convenience sample – some are really sketchy, but others incorporate a degree of randomness that mitigates the biases and makes results more than adequate for the business decision at hand. A true random probability sample is still out there – but do people want to pay for it? There is a phrase about having cake and eating it that probably applies here…”

Klein: “It really comes down to the research and insights person at the company who is making decisions on what sample frame to use. As a researcher, it’s my responsibility to deliver a quality sample. Most of my clients don’t even ask about it.”

Donato: “It’s the research company’s responsibility to make sure the data is properly curated. One of the top five concerns in the industry right now is the tension between targeting and brand building. There is a huge concern among marketers that we’re doing so much targeting that we’re not doing anything to build the brand in the future.”

MSW•ARS Research’s Art Klein

Is “random sampling” dead?

Klein: “It was never alive! It never really existed. A true random sample was always cost-prohibitive and could never be done in the normal course of everyday research – and it’s unnecessary and inappropriate for 99 percent of the work that we do.”

Do we need to establish new methods and standards for respondent recruitment?

Donato: “I think it’s a ‘fit for use’ issue. If you’re taking a sample to model out the biases of a larger, machine-level data set, then your true sector validation sample, which is used to model out demographics, has to be representative. On the other hand, there are times when a sizable panel performs better than a high-probability online sample.”
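One common way to “model out” demographic bias, in the sense Donato describes, is raking (iterative proportional fitting), which adjusts respondent weights until the sample matches known population margins. The following is a minimal sketch under assumed, illustrative margins and cell counts; none of the figures come from the panel:

```python
# A hedged sketch of raking (iterative proportional fitting). Weights are adjusted
# alternately against each demographic margin until the weighted sample matches
# both. The margins and counts below are toy values, purely for illustration.

# Each respondent cell is (age_group, region) -> count of respondents
sample = {("18-34", "urban"): 50, ("18-34", "rural"): 10,
          ("35+",   "urban"): 20, ("35+",   "rural"): 20}

age_target    = {"18-34": 0.45, "35+": 0.55}   # assumed population margins
region_target = {"urban": 0.60, "rural": 0.40}

weights = {cell: 1.0 for cell in sample}
total = sum(sample.values())

for _ in range(50):  # iterate until the margins converge
    # Scale weights so the weighted age distribution hits its target
    for age, target in age_target.items():
        cur = sum(weights[c] * sample[c] for c in sample if c[0] == age) / total
        for c in sample:
            if c[0] == age:
                weights[c] *= target / cur
    # Then scale so the weighted region distribution hits its target
    for region, target in region_target.items():
        cur = sum(weights[c] * sample[c] for c in sample if c[1] == region) / total
        for c in sample:
            if c[1] == region:
                weights[c] *= target / cur

print({cell: round(w, 3) for cell, w in weights.items()})
```

Each pass nudges the weights toward one margin at a time; because the adjustments are multiplicative, the procedure settles on weights that satisfy both margins at once.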

Klein: “In the daily research that we do, we’re going after target populations, and we’re looking for the best, most representative way to bring people into that survey. You need a trusted research partner who knows what to ask and what to do – and who notices the differences between particular patterns.”

Leave your ideas on Respondent Sample Quality in the Comment area below.