Mixed Mode with Mobile: Why it Matters
Consumers—our clients’ customers—now live a good part of their virtual lives on their mobile devices. That doesn’t mean they’ve abandoned their desktops and laptops entirely, but rather than staying constantly tethered to their desks, they now use mobile devices for many of the tasks that used to keep them tied to their PCs.
That, of course, includes taking surveys. In 2013, mobile users made up about 10 percent of a typical respondent sample; by the end of this year we expect that figure to rise to 25 percent or more. That makes it critical that MR professionals understand how mobile survey-taking affects the way we plan, construct and analyze surveys, whether they are taken solely on mobile devices or on any combination of mobile devices, desktops and laptops.
A Breed Apart
For one thing, it’s important to understand that the survey experience for mobile survey-takers differs from that for PC and laptop users, making them a breed apart. One place that difference shows up is in completion rates, which tend to be lower for mobile users. (See the graph below, which maps mobile users’ completion rates in the U.S., the UK, Germany and France. While completion rates are best in the U.S., they still fall below those for PC and laptop users.)
But Not All Modes Produce the Same Experience
To really understand mobile users, we first need to recognize that their survey-taking isn’t one-dimensional. A significant percentage of mobile survey-takers (almost 40 percent in the U.S., and more in other countries) also take surveys on their laptops or PCs. Clearly, these are engaged survey-takers who turn to mobile when it best fits into their busy lives.
While there are ‘mobile-only’ surveys, most surveys remain mixed-mode, with mobile users making up an ever-increasing percentage of those studies—meaning that their impact on those studies is also constantly deepening. To really analyze the impact of mobile, it’s necessary to separate mobile users out and examine them as the breed apart they really are.
Toward that end, a sort of experiment within an experiment took place as part of a recent study, the ‘Foundations of Quality’ survey conducted on behalf of the Advertising Research Foundation (ARF). Seventeen providers contributed sample to this multi-mode, multi-component study. And while detailed instructions were given regarding the samples, none addressed mobile users. Nevertheless, a substantial number of mobile users took the survey. Analyzed as a group, those who took the survey on a mobile phone showed some distinctly different characteristics from those who took it on another device. The first statistic that jumped out was the suspend rate: 21.4 percent for mobile-phone users, compared with 7.8 percent for laptops/PCs and 8.8 percent for tablets. At first glance that seems alarmingly high, but a closer look at the data made it apparent why.
The median length of interview (LOI) for smartphone users was an incredible 43.1 minutes. Forty-three minutes! That LOI was almost 20 minutes longer than the median interview length for those who took the survey on a PC or laptop, which makes the 21 percent suspend rate understandable.
Response Characteristics ARE NOT The Same By Mode
Here’s an even deeper look at why that’s true. Generally speaking, speeders in a multi-mode survey are identified by taking one-half the median LOI for the whole survey and determining what percentage of the sample fell below that threshold. By that definition, there were almost no speeders among those using their mobile phones. But if we separate out the smartphone users in this study and use half of the 43-minute figure as our guide, we see more speeders in the smartphone and other mobile-phone categories than in the PC/Laptop and Tablet groupings.
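To make the difference between the two rules concrete, here is a minimal sketch. The LOI values, record structure and helper names (`completes`, `speeders`, `device_cutoff`) are invented for illustration and are not drawn from the ARF study:

```python
import statistics

# Hypothetical completed interviews; LOI is in minutes.
completes = [
    {"device": "pc",     "loi": 24.0},
    {"device": "pc",     "loi": 22.5},
    {"device": "pc",     "loi": 9.0},   # genuinely fast PC respondent
    {"device": "tablet", "loi": 26.0},
    {"device": "phone",  "loi": 44.0},
    {"device": "phone",  "loi": 43.0},
    {"device": "phone",  "loi": 18.0},  # fast for a phone, yet above the aggregate cutoff
]

def speeders(records, threshold):
    """Respondents whose LOI falls below the given cutoff."""
    return [r for r in records if r["loi"] < threshold]

# Aggregate rule: half the median LOI of the whole sample.
aggregate_cutoff = statistics.median(r["loi"] for r in completes) / 2

# Device-specific rule: half the median LOI within one device group.
def device_cutoff(records, device):
    group = [r["loi"] for r in records if r["device"] == device]
    return statistics.median(group) / 2
```

With these numbers, the aggregate rule flags only the 9-minute PC respondent, while the device-specific rule also flags the 18-minute phone respondent, mirroring the pattern described above: speeders in the phone group only become visible once the phone group’s own median is used.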
Using Aggregate Measures Suggests Almost No Speeders in the Phone Groups
By another widely accepted measure of survey quality, response variance metrics, smartphone users did show a fairly significant decrease late in the interview, perhaps signaling waning interest. However, when the sample is split between those identified as speeders under the device-specific rule and those who were not, the non-speeders show quality levels similar to those of respondents taking the survey on a PC or laptop.
Speeding Is Not the Only Quality Metric – Response Variance Metrics
Note: In this chart, higher is generally better, and the values are indexed to the PC/Laptop values, so a value of 100 means the same value as the PC/Laptop group.
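For readers who want the arithmetic behind that indexing, here is a short sketch. The metric values and the `index_to_baseline` helper are hypothetical, not figures from the study:

```python
# Index each device group's quality metric to the PC/Laptop baseline,
# so that 100 means "same as the PC/Laptop group". Values are invented.
raw_metric = {"pc_laptop": 0.82, "tablet": 0.80, "smartphone": 0.74}

def index_to_baseline(values, baseline_key):
    """Express each group's metric as a percentage of the baseline value."""
    base = values[baseline_key]
    return {k: round(100 * v / base, 1) for k, v in values.items()}

indexed = index_to_baseline(raw_metric, "pc_laptop")
# The baseline group always indexes to exactly 100.0.
```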
What It All Means
Most of the talk in the MR industry is about mobile device research as a standalone activity. The reality is that most research done on a mobile device comes via mixed-mode surveys, and thinking carefully about how to plan and analyze those surveys is vital to generating the best data. In the process, we’ve learned that there are a lot of people willing to take very long surveys on their mobile devices, and that this has an impact on the quality of the data obtained. It’s hard to say just how much long-term damage might be done by pummeling users with extra-long surveys. (Some researchers recommend limiting mobile surveys to just 10 minutes.) What is clear is that more research is necessary to effectively bring this new breed of mobile users into the MR fold. In the meantime, we need to consider mobile users separately from PC and laptop users when assessing data quality.
Read this article in the March issue of Survey Magazine.