Stop agonising over informed consent when researchers use crowdsourcing platforms to conduct survey research

Clinical Ethics 18 (4):343-346 (2023)

Abstract

Research ethics committees and institutional review boards spend considerable time developing, scrutinising, and revising specific consent processes and materials for survey-based studies conducted on crowdsourcing and online recruitment platforms such as MTurk and Prolific. However, there is evidence to suggest that many users of ICT services do not read the information provided as part of the consent process and that they habitually provide or refuse their consent without adequate reflection. In principle, these practices call into question the validity of their consent. In this paper we argue that although the ‘no read problem’ and the routinisation of consent may apply to research participants’ consent practices for studies on crowdsourcing platforms, this is not a serious problem. Furthermore, given that the informational requirements for informed consent in these contexts are minimal, we argue that these participants are nevertheless sufficiently informed to give valid consent. We conclude that research ethics committees and institutional review boards should only agonise over the precise details of the informed consent process and materials in those rare cases where appreciable risks to research participants need to be managed.

Author Profiles

Jonathan Lewis
University of Manchester
Vilius Dranseika
Jagiellonian University
