Conducting research over the Internet allows researchers to capture data as never before. But it also presents ethical challenges.
Several months ago, the instructional technology consultant in my department informed us that Qualtrics, one of the leaders in online data collection, had developed a mobile version of its survey software that could be used on smartphones and tablets. Not surprisingly, he was excited by the news, and especially by the fact that the software allows data to be collected even when researchers and participants are offline, with the data uploaded later when connectivity is restored. My own reaction to this news was mixed. As a researcher who has chaired an institutional review board (IRB), I am left conflicted by every announcement of a new online or mobile tool for research participant recruitment or data collection.
Online studies are undeniably convenient for researchers and participants alike, since people are able to participate at a time and location of their choice, without the need for a research assistant to be present. Researchers benefit, too, as they can obtain larger and more diverse samples of people with characteristics or conditions not readily available within the researcher's local community (e.g., Mason & Suri, 2012; Shapiro, Chandler, & Mueller, 2013). In addition, recruitment and data collection can happen more quickly and cost effectively than in the traditional lab setting (e.g., Mason & Suri, 2012).
My IRB side, though, worries. To what extent do researchers and IRBs thoroughly understand the details of the technologies that are being adopted and the implications of using them for human research participant protection? In particular, can researchers and IRBs ensure the security of the data and make good on any assurances offered in the consent process regarding privacy, anonymity or confidentiality? Although the type of information needed to answer these questions is typically made available by online research service providers, researchers and IRBs often take some details for granted or gloss over them, particularly the more technical ones that fall far outside their expertise.
Lessons learned from Amazon's Mechanical Turk (MT) provide a good example. Although MT has never explicitly stated that participants remain anonymous, until recently researchers made that assumption (Mason & Suri, 2012; Paolacci, Chandler, & Ipeirotis, 2010), reducing IRBs' concerns about collecting sensitive data on this platform and leading them to categorize studies in MT as exempt — that is, the studies are not subject to the requirements of the federal regulations for the protection of human participants in research (Paolacci, Chandler, & Ipeirotis, 2010). However, last spring, a group of researchers found that MT participants are not anonymous, as their unique ID number can be used to identify them across any Amazon property, including revealing their user profiles, wish lists and product reviews (Lease et al., 2013). Revelation of this fact left researchers and IRBs scrambling with respect to protocols that had been previously approved as exempt and consent forms that assured anonymity.
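One safeguard researchers have adopted in response to this finding (a suggestion of mine, not something the article prescribes) is to replace raw worker IDs with salted one-way hashes before a dataset is stored or shared, so responses can still be linked within a study but can no longer be traced back to an Amazon profile. A minimal sketch, with an illustrative salt and made-up IDs:

```python
import hashlib

# Hypothetical safeguard: pseudonymize Mechanical Turk worker IDs with a
# salted one-way hash before the dataset leaves the lab. The salt and the
# worker IDs below are illustrative, not real values.
SALT = "project-specific-secret"  # keep this out of the shared dataset

def pseudonymize(worker_id: str) -> str:
    """Return a stable pseudonym for a worker ID."""
    digest = hashlib.sha256((SALT + worker_id).encode("utf-8")).hexdigest()
    return digest[:12]  # short, stable label for within-study linking

records = [
    {"worker_id": "A1B2C3D4E5", "response": 4},
    {"worker_id": "F6G7H8I9J0", "response": 2},
]
for record in records:
    record["worker_id"] = pseudonymize(record["worker_id"])
```

Because the hash is deterministic, repeat participation by the same worker can still be detected; because it is one-way and salted, the published data no longer contain the Amazon-linkable ID.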
Concerns about participant privacy and anonymity can arise in other ways, notably through the identifying information that online survey tools collect by default, such as Internet protocol addresses, geographical coordinates and email addresses. When researchers and IRBs are unaware that such information is collected automatically, erroneous decisions about participant privacy may lead to false guarantees of anonymity and confidentiality to participants. Knowledge of data security (or lack thereof) is equally critical in this regard.
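The practical remedy is to strip such default-collected fields before data are analyzed or shared. A minimal sketch of that clean-up step, with field names that are illustrative rather than any particular vendor's actual export schema:

```python
# Hypothetical clean-up step: drop the identifying metadata that many survey
# tools record by default. Field names here are illustrative only.
IDENTIFYING_FIELDS = {"ip_address", "latitude", "longitude", "email"}

def strip_identifiers(response: dict) -> dict:
    """Return a copy of a survey response without identifying metadata."""
    return {k: v for k, v in response.items() if k not in IDENTIFYING_FIELDS}

raw = {
    "ip_address": "203.0.113.7",
    "latitude": 36.1,
    "longitude": -80.3,
    "email": "participant@example.com",
    "q1": "agree",
    "q2": 5,
}
clean = strip_identifiers(raw)  # only q1 and q2 survive
```

Many survey platforms also offer a setting to suppress this collection at the source, which is preferable to scrubbing after the fact; the point of the sketch is simply that the fields exist and must be handled deliberately.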
To fully evaluate the level of privacy for a study, researchers and IRBs need to know exactly how an online research service works with respect to safeguards for secure data storage, who has access to the data at any given time (researchers, employees of the service, et al.) and the means of data transmission both when participants are performing an experiment and when researchers are retrieving their data (Buchanan & Hvizdak, 2009). Of course, even once all those factors are taken into account, the specter of hacking and data theft remains, though such breaches admittedly sit at the extreme end of concerns.
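On the transmission point, one concrete pre-launch check (my illustration, with made-up URLs) is to confirm that every address participants and researchers will use is served over HTTPS, so data are encrypted in transit in both directions:

```python
from urllib.parse import urlparse

# Hypothetical pre-launch check: flag any study URL that is not served over
# HTTPS. The URLs below are illustrative, not real endpoints.
study_urls = [
    "https://survey.example.edu/consent",
    "https://survey.example.edu/experiment",
    "http://survey.example.edu/export",  # plain HTTP: should be flagged
]

def insecure_urls(urls):
    """Return the URLs that do not use HTTPS."""
    return [u for u in urls if urlparse(u).scheme != "https"]
```

A transport check like this covers only one of the questions above; storage safeguards and access controls still have to be verified with the service provider directly.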
Equally worrisome is the possibility of human error. Unfortunately, that happened to a colleague when an Internet survey provider inadvertently allowed research participants' names, phone numbers and email addresses to become public for a short time. While, thankfully, no survey data were displayed, some participants did report that they received more spam and unwanted contact. Although this type of situation falls into the unanticipated problem category, it nonetheless raises the issue of what researchers are responsible for when they take their studies online.
Efforts for change
Identifying and managing that responsibility are tricky in a research landscape that is relatively young and ever-changing. Questions about what researchers should know and convey to subjects do not have clear answers when one tries to apply the existing ethical guidelines and regulations to circumstances they were not designed to address. However, efforts to change that are underway. For example, the Secretary's Advisory Committee on Human Research Protections of the U.S. Department of Health and Human Services has drafted an advisory document titled "Considerations and Recommendations concerning Internet Research and Human Subjects Research Regulations", which serves as a nice primer for those engaging in online research.
Researchers and IRBs are also advised to collaborate with their local information technology and security experts to keep abreast of technological and security issues relevant to research. In addition, consent forms should be written to inform participants of the vagaries of online research, particularly those that are uncontrollable (Schadt, 2012). Statements such as, "Your confidentiality will be kept to the degree permitted by the technology used. Specifically, no guarantees can be made regarding the interception of data sent via the Internet by any third parties" (Pennsylvania State University, 2007) and "There is always a risk of intrusion by outside agents, i.e., hacking, and therefore the possibility of being identified" (Buchanan & Hvizdak, 2009) serve as useful reminders to participants about the limits of online safety, particularly since many of us have become complacent through continual exposure to email, Facebook, online banking and the like.
Use of the Internet for research is one of the greatest paradigm shifts within our discipline. As with any new approach, we need to invest time and resources to understand the ethical challenges it presents at both the local level (i.e., within our own labs) and more globally within our field. Only then can both my researcher and IRB sides be satisfied that the benefits outweigh the risks.
This installment of "Ethically Speaking" was written by Janine M. Jennings, PhD, of Wake Forest University, 2014 chair of the APA Committee on Human Research. Jennings would like to acknowledge the valuable contributions and assistance of Jenna McGwin in preparation of this article. Send your comments and questions to Research Ethics.
- Buchanan, E. A., & Hvizdak, E. E. (2009). Online survey tools: Ethical and methodological concerns of human research ethics committees. Journal of Empirical Research on Human Research Ethics, 4(2), 37–48.
- Lease, M., Hullman, J., Bigham, J. P., Bernstein, M. S., Kim, J., Lasecki, W. S., … Miller, R. C. (2013). Mechanical Turk is not anonymous. Social Science Research Network (SSRN).
- Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1), 1–23.
- Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
- Pennsylvania State University (2007). Guidelines for computer- and internet-based research involving human participants. Retrieved from www.research.psu.edu/policies/research-protections/irb/irb-guideline-10.
- Schadt, E. E. (2012). The changing privacy landscape in the era of big data. Molecular Systems Biology, 8(612), 1–3.
- Shapiro, D. N., Chandler, J., & Mueller, P. A. (2013). Using Mechanical Turk to study clinical populations. Clinical Psychological Science, 1, 213–220.