The purpose of this study was to examine the feasibility of conducting experience sampling method surveys with a nationally representative survey panel, as well as the costs and benefits of doing so via app-based versus web-based data collection. The first research question was whether panelists would respond to an experience sampling method study at all, given its intensive nature. An overwhelming majority of panelists did respond, making this a viable data collection protocol for the future.

The next question was whether response rates would differ by experimental treatment. Panelists assigned to the web treatment were much more likely to respond than panelists assigned to the app treatment. This has implications for both statistical power and costs in future studies.

The next question was how the demographic characteristics of the app respondents would differ from those of the web respondents, and how both groups would differ from all smartphone users, as a measure of non-response bias. The app and web respondents were quite similar demographically. However, the app respondents were younger, more likely to be from the Midwest, and less likely to be registered to vote. These characteristics made them more representative of all smartphone users than the web respondents were. This result suggests that either group could represent all smartphone users in a future study and that non-response bias did not make the app group less representative of all smartphone users.

As a caveat to the demographic findings, it is important to reiterate the technical difficulties experienced by some panelists assigned to the app treatment. Over 100 panelists called or emailed because they had trouble downloading the app or even understanding what an app was. This is most likely partly to blame for the age difference between the app and web groups. For future app surveys, this may warrant limiting app use to more tech-savvy individuals or providing clearer instructions on what an app is and how to download it.

The final question was how the weighted substantive survey responses would differ between the app and web respondents. There was essentially no difference between the two groups save for one of the 34 items measured. As such, either group could be used to represent all smartphone users in terms of weighted estimates from the data.

Apps do have advantages beyond the findings of this study. They allow for features such as barcode scanning, taking and uploading pictures with the smartphone's camera, and capturing the GPS location of the smartphone. If granted permission by the phone's owner, apps also allow for passive capture of data from the phone, such as which other apps are running and for what purpose. It is important to note, however, that many of these features are now available in web surveys using HTML5. This study did not use either the additional features or the passive data collection capabilities of the app.
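To make the HTML5 point concrete, the sketch below shows how a web survey page could attach a respondent's GPS coordinates to a response using the standard browser Geolocation API, with the respondent's permission. This is a minimal illustration only; the function and field names are hypothetical and are not drawn from the survey platform used in this study.

```typescript
// Minimal sketch: capturing GPS coordinates in a web survey via the
// HTML5 Geolocation API. The browser prompts the respondent for
// permission, mirroring the consent step a native app would require.
// SurveyResponse and attachLocationToResponse are hypothetical names.

interface SurveyResponse {
  questionId: string;
  answer: string;
  latitude?: number;
  longitude?: number;
}

function attachLocationToResponse(response: SurveyResponse): Promise<SurveyResponse> {
  return new Promise((resolve) => {
    if (!("geolocation" in navigator)) {
      // Older browser: submit the answer without coordinates.
      resolve(response);
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (position) =>
        resolve({
          ...response,
          latitude: position.coords.latitude,
          longitude: position.coords.longitude,
        }),
      // Respondent declined or the lookup failed: fall back gracefully.
      () => resolve(response),
      { timeout: 10_000 }
    );
  });
}
```

Photo capture is similarly available to web surveys through a standard HTML file input (`<input type="file" accept="image/*" capture>`), again without requiring a native app.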

Apps also allow for offline data collection: push notifications can be delivered and survey responses captured even when the phone is not connected to the internet. This is helpful in a study like this one, where each survey was time-sensitive and open for only two hours. However, as the results show, this feature did not increase the app response rate over that of the web group.
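For readers unfamiliar with how offline capture typically works, the sketch below illustrates one common pattern: responses are queued in local storage while the device is offline and flushed to the server once connectivity returns. All names here, including the submitToServer() endpoint, are hypothetical, and the browser APIs are used only for brevity; a native app would use its own storage and notification facilities. This is an illustration of the general technique, not the vendor's actual implementation.

```typescript
// Minimal sketch of offline response capture. Responses are queued
// locally while the device is offline and synced when the connection
// returns. submitToServer() and the endpoint URL are hypothetical.

type SurveyResponse = { questionId: string; answer: string };

const QUEUE_KEY = "pendingResponses";

function saveResponse(response: SurveyResponse): void {
  if (navigator.onLine) {
    void submitToServer(response);
  } else {
    // Offline: append the response to a locally persisted queue.
    const queue: SurveyResponse[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
    queue.push(response);
    localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  }
}

// Flush the queue whenever the browser regains connectivity.
window.addEventListener("online", () => {
  const queue: SurveyResponse[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  localStorage.removeItem(QUEUE_KEY);
  queue.forEach((r) => void submitToServer(r));
});

async function submitToServer(response: SurveyResponse): Promise<void> {
  // Hypothetical endpoint; error handling elided for brevity.
  await fetch("/api/responses", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(response),
  });
}
```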

Apps have clear disadvantages as well. Data collection via the app cost substantially more than via the web for this study. This was primarily due to a per-survey charge imposed on app responses, meaning a separate charge for each of the 14 surveys. For an experience sampling method study that employs intensive data collection, this pricing is not ideal. Additionally, as mentioned above, the lower response rate on the app would lead to higher costs if a minimum number of completes were required, because it necessitates additional sample and incentive payments: a large portion of the app group agreed to respond and were mailed the $5 incentive but did not actually respond.
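As a rough way to see how these two cost mechanisms interact, the expressions below sketch the cost structure under hypothetical fees: $c_s$ is a per-survey app charge (incurred 14 times per respondent), $c_w$ a flat web fee, and $r$ the response rate that determines how many invitations, and thus mailed incentives, are needed per complete. None of these values are reported in this study; the symbols are illustrative only.

```latex
% Illustrative cost comparison; c_s, c_w, and r are hypothetical symbols,
% not figures reported in this study.
\[
  \text{Cost}_{\text{app}} \approx n \,(14\, c_s), \qquad
  \text{Cost}_{\text{web}} \approx n \, c_w, \qquad
  n_{\text{invited}} = \frac{n_{\text{completes}}}{r}
\]
```

Because $r$ was lower in the app group, $n_{\text{invited}}$ grows for a fixed target number of completes, compounding the per-survey charge.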

The app also had unexpected design constraints. For instance, the "next" button could not be moved from the top of the screen. This is problematic on a smartphone, since the small screen makes scrolling necessary to see all questions or response options; with the next button at the top, respondents may not realize there is more content below the visible area. Another example is that the check boxes had to appear to the right of the response option labels, which is neither best practice nor how they were displayed in the web version.

Finally, the app was also mobile-centric. Panelists assigned to the app group could download it only to a smartphone or tablet and thus had to complete their surveys on a mobile device. Web respondents, on the other hand, had the option of responding on a smartphone, tablet, laptop, or desktop. The greater range of devices available to the web group may partly explain its higher response rate.

This study did have limitations. Some of the findings may not generalize to a cross-sectional study, because the sample used here was a well-established panel rather than a fresh cross-sectional sample. This study was part of the seventh wave of data collection, and a considerable degree of trust had been established among the panelists. Asking respondents in a one-off, cross-sectional sample to download an app or to take part in a series of surveys would presumably result in a lower rate of cooperation than we experienced.

While the app performed about the same as the web for data collection in this experiment, apps may or may not work as well for one-off surveys or as part of an ongoing panel. Respondents weigh the time and effort required to download an app and learn how to use it, as well as any privacy concerns they may have, against the expected benefit the app provides. If they are accessing surveys often, as was the case in this study with 14 surveys over seven days, they may see the value in downloading an app that makes the surveys easier to access and complete. For instance, even when respondents are outside the range of cellular service, the app still alerts them and allows them to complete surveys, which is not the case with a mobile browser. For a one-off survey, however, or even a monthly survey, respondents may judge that trade-off not worth the effort. Beyond the burden and privacy concerns associated with the app, a web browser also gives respondents the flexibility to use a mobile device (smartphone or tablet) or a traditional computer (laptop or desktop).