One of the benefits of having spent over thirty years in the market research industry is seeing the same arguments come round every so often. The first time one sees a particular argument it seems novel and convincing, but by the second or third time round one becomes accustomed to it and better able to put it in context.
When telephone interviewing (often referred to as CATI, computer-assisted telephone interviewing) came on the scene in the 1980s, people were concerned about whether it would introduce biases due to sampling issues and/or modality effects. These were valid concerns, but they were usually addressed by running comparisons against face-to-face studies, as if face-to-face were a reliable indicator of 'truth'.
Instead of asking 'Is CATI the best available system?', 'Is CATI good enough?', or perhaps 'What is CATI good for?', the question typically asked was one almost guaranteed to make CATI look like a second-rate compromise, adopted because it was cheaper and faster. That badly formed question was 'Is CATI biased?'. Yes, of course it was; everything that involves people measuring people is biased (as I explore in a recent VCU paper titled 'Are Community Panels Biased?'). If research is to move ahead, new techniques should be held to the same standard of enquiry and evaluation as existing techniques. They should not be held to some higher standard, predicated on the implicit assumptions that what is already happening is perfect and that new techniques must produce the same results as a minimum standard of acceptability.
When online research appeared, when online access panels appeared, and when research communities came on the scene, the same poorly constructed questions were asked: 'Is it fully representative?' and 'Is it biased?', as if the other research alternatives were somehow beyond question.
In the last few years researchers have become increasingly aware that our status quo assumptions of rational respondents, reached representatively and answering in an informed way, are often wide of the mark. Neuroscience, behavioral economics, theories of social behavior, and prediction markets have all shown that market researchers need to re-examine the status quo as well as properly evaluate emerging ideas.
Research is on the edge of a revolution in the way it conducts its business. Key changes include: research communities, mobile devices, social media, and Big Data. Research needs to work out what these new strands can add to the mix, rather than trying to fit them to some hypothetical and outdated model of what research is and how it should be evaluated.
Key questions for research to address are:
1. What approaches work, for what situations, and for which contexts?
2. For any specific approach, what are its strengths and its limitations?
3. For a given business decision, what is the right trade-off between accuracy, speed, money, and depth? (We used to talk about quality, speed, and money - but with many research techniques the quality issue needs to be subdivided into accuracy and depth.)
4. How do we ensure that research buyers are in a position to make informed choices between the various options?