River sample. The name evokes visions of pristine waters flowing softly, with babbling rapids and a meandering path through a verdant forest. But would you drink the water straight from the river? Probably not. You’d need to know if it was clean and safe to drink.
So too with river sample. You need to know what you’re getting and whether or not it will affect your data. But too often, researchers blindly embrace river sample as a quick and cheap source of respondents, without giving a thought to the implications for quality.
In this blog I’ll briefly review some research-on-research we’ve recently conducted that sheds a little light on river sample. But first, a definition: river sampling is an online sampling method that drives potential respondents to an online portal, where they are screened for studies in real time. Qualified respondents are then randomly assigned to a survey. People are sourced, most often, from ads or pop-ups on social media and other websites. Because of the screening, you can ensure that respondents match your demographic requirements.
So river sampling is a convenience sample of people attracted by an ad who are willing to complete your survey for some type of reward. The question then becomes: are they different?
We conducted a study in which we compared Vision Critical’s Springboard America panel to two other panels and two sources of river sample. One notable difference was a high drop-out rate with the river sources: respondents from the river samples dropped out of the survey two and a half to four times more often than respondents from the Springboard America panel.
We also found that the incidence of flatlining (giving the same answer over and over) was considerably higher: a worrying sixteen times higher with the river sample than with the Springboard America panel sample.
These differences led us to wonder about some river sample respondents’ commitment to actually participating in the survey. Otherwise, though, most of the river sample respondents’ answers were similar to what we observed on the panels.
So, overall, we have some cause for concern about respondent engagement and data quality with these river samples, but no red flags in terms of radically different responses. We’re working on further research on this topic, because we really want to understand what different sample sources might do to our data. Like your health, data quality is not something you want to take a chance on.