Choice Based Conjoint (CBC) modeling is a popular method for determining why people choose the particular products they buy. In a typical CBC study, respondents are asked to choose among different prototypes of a product, and this task is repeated many times. The choices they make reveal the impact of the various product configurations, and an optimal product can then be designed from the revealed preferences and tradeoffs.
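To make the idea concrete, here is a minimal sketch of how a single CBC choice task is commonly modeled, assuming a standard multinomial logit: each product profile gets a total utility (the sum of part-worths for its attribute levels), and the probability of choosing a profile is the softmax over the profiles shown together. The attributes, levels, and part-worth values below are purely hypothetical illustrations, not data from any study.

```python
import math

# Hypothetical part-worth utilities for illustration only.
PART_WORTHS = {
    "brand": {"A": 0.8, "B": 0.2},
    "price": {"$10": 0.9, "$15": 0.3},
    "size":  {"small": 0.1, "large": 0.5},
}

def utility(profile):
    """Total utility of a profile = sum of its attribute-level part-worths."""
    return sum(PART_WORTHS[attr][level] for attr, level in profile.items())

def choice_probabilities(profiles):
    """Multinomial-logit probability of choosing each profile in one task."""
    exps = [math.exp(utility(p)) for p in profiles]
    total = sum(exps)
    return [e / total for e in exps]

# One choice task: the respondent compares two product configurations.
task = [
    {"brand": "A", "price": "$15", "size": "small"},  # utility 1.2
    {"brand": "B", "price": "$10", "size": "large"},  # utility 1.6
]
probs = choice_probabilities(task)
```

Fitting such a model in practice means estimating the part-worths from many respondents' observed choices; the question the rest of this article addresses is how many tasks per respondent that estimation really needs.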
The downside of CBC modeling is that respondents find it boring and repetitive. They are often asked to make a purchase 'decision' 15 to 20 times, and for each decision they have to compare a number of product configurations. Even our clients often question the large number of tasks, having become fatigued when trying the survey for themselves. Why do we need such a large number of tasks? Would asking each respondent to perform six tasks work as well as asking them to do fifteen?
When Choice Based Conjoint (CBC) research was coming of age in the 1990s, many researchers asked exactly that question. The answer, based on some 20+ studies, was that there should be at least 20 choice tasks for each participant (e.g., Johnson & Orme, 1996). Getting this many responses from each respondent allowed for smaller sample sizes, which helped keep costs down. In addition, Markowitz and Cohen (2001) had shown that many choice tasks were needed from each respondent to fully optimize the performance of CBC models in predicting consumer behavior. The consensus was to keep the sample sizes small and push the respondents hard with long sequences of choice tasks.
Things have changed quite a bit since the Nineties. Almost all CBC studies are now conducted with online panelists instead of through personal interviews, which largely negates the cost argument. Panelists are also exposed to far more market research studies in general, so fatigue and boredom creep in much more quickly during a long CBC exercise. Bored panelists are unlikely to be making carefully considered decisions during the later stages of the exercise.
My colleague Andrew Grenville and I examined the problem of respondent fatigue in the paper "How Many Questions Should You Ask in CBC Studies? - Revisited Again" (Tang & Grenville, 2010). Our findings clearly demonstrate that this problem is real; respondents become less engaged in the later tasks. We also found that asking respondents to perform fifteen conjoint tasks (instead of six) brings limited improvement in the model's ability to predict respondents' behavior. The extra tasks also come at a cost - the sensitivity and consistency of the results go down. By asking more questions of each respondent, we end up being less able to tell which configuration is truly preferred.
Our results also indicated that when more questions are asked, the preferences become less consistent with known logical orderings (for example, a lower price should be preferred to a higher one, all else being equal) - i.e., some of the answers just don't make sense. This suggests that respondents are in fact paying less attention to their decisions as the questions keep coming. The timing data collected from panelists supports this hypothesis. By the time respondents got to the fifteenth task, they spent only one third as long considering their answer as they did for the first question. Part of this is undoubtedly due to their increased familiarity with the task. However, a lot of the speedup appears to be due to simplifying rules. Instead of considering each product option as a whole and focusing on all key decision criteria, respondents increasingly rely on only one or two salient features.
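The contrast between the two decision styles can be sketched as follows. A fully compensatory rule sums utilities over every attribute, while the simplifying heuristic described above keys on a single salient attribute and ignores the rest (here, price, as an assumed example). The part-worths and profiles are hypothetical, chosen only to show that the two rules can pick different winners on the same task.

```python
# Hypothetical part-worth utilities for illustration only.
PART_WORTHS = {
    "brand": {"A": 0.8, "B": 0.2},
    "price": {"$10": 0.9, "$15": 0.3},
    "size":  {"small": 0.1, "large": 0.6},
}

def full_evaluation(profiles):
    """Fully compensatory rule: pick the highest total utility."""
    return max(profiles,
               key=lambda p: sum(PART_WORTHS[a][l] for a, l in p.items()))

def price_only_heuristic(profiles):
    """Simplifying rule: pick using the price attribute alone."""
    return max(profiles, key=lambda p: PART_WORTHS["price"][p["price"]])

task = [
    {"brand": "A", "price": "$15", "size": "large"},  # strong brand, pricier
    {"brand": "B", "price": "$10", "size": "small"},  # weak brand, cheaper
]

considered = full_evaluation(task)      # A wins: 0.8+0.3+0.6 = 1.7 vs 1.2
shortcut = price_only_heuristic(task)   # B wins: $10 beats $15 on price alone
```

If later tasks are answered with the shortcut rule while earlier ones are answered with the full evaluation, the estimated part-worths get pulled in inconsistent directions, which is one way the sensitivity and consistency losses described above can arise.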
We recommend that CBC practitioners who use online panel samples seek to keep their panelists and respondents happy and engaged. This can be accomplished by minimizing the number and complexity of choice tasks. The lower modeling precision that results should be compensated for, when possible, by increasing the number of respondents.
Johnson, R. and Orme, B. (1996), "How Many Questions Should You Ask In Choice-Based Conjoint Studies?" ART Forum Proceedings.
Markowitz, P. and Cohen, S. (2001), "Practical Considerations When Using Hierarchical Bayes Techniques," ART Forum Proceedings.
Tang, J. and Grenville, A. (2010), "How Many Questions Should You Ask in CBC Studies? - Revisited Again," Sawtooth Software Conference Proceedings.