Excerpt from: Freyd, J.J. (2012). A Plea to University Researchers. [Editorial] Journal of Trauma & Dissociation, 13, 497-508.
Note: The following excerpt is copyrighted. It is available for attributed public use under a Creative Commons CC-BY-ND 3.0 license. If you wish to copy, distribute, or otherwise re-use these materials or to modify them, please first contact Jennifer J. Freyd for reprint permission. This version is in press and has not been copyedited.
Early in the morning a few months ago, I was working on a manuscript in my home study. A Skype window was open on my computer; my daughter, then completing her freshman year at Harvard, was on the other end of the call. For some time she worked on homework as I wrote my paper, a companionable silence as we each plugged away at our projects. After a while, my daughter spoke up, telling me about the psychology studies she might select to fulfill the research credit requirement for a psychology course she was taking. She read aloud the descriptions of the studies available to her as a member of a Human Subjects Pool (HSP) and explained what was appealing and off-putting about each one. She eventually selected one of the studies, while I pondered the potentially profound implications of this sort of recruitment process for science.
Here is a list of descriptions from some studies recently available to members of the Harvard HSP.
Over the course of her spring semester my daughter completed 5 hours of research participation through the Harvard HSP. In doing so she was able to select studies that suited her interests and personality and to avoid other studies. For example, my daughter initially found the last option intriguing for its intellectual content (“different ways people may know things”) but was put off by what sounded to her like vaguely intrusive pop psychology (“a chance to learn a little more about yourself”). For each study my daughter had a choice to sign up or not, and her choice depended upon her individual personality, history, and current interests. That might sound nice, all that choice, but in fact it was teaching my daughter and the many other undergraduates in the Harvard HSP bad scientific method. What justification can a research requirement like this have other than to teach good science?
One cannot be too sorry for Harvard undergraduates, but what about the rest of us who learn about findings based on this very subject population? We have all learned to question whether one can generalize from privileged 19- and 20-year-old college students to the rest of humankind. But what about the possibility that we cannot generalize beyond the particular sample recruited for the study itself? To the extent that our participants are sampled randomly from some larger population, we can at least generalize to that larger population, whatever it is. In the case of participants in human subject pools, if selection is reasonably random we can generalize at least to that larger population (e.g., Harvard undergraduates taking psychology classes) and perhaps, depending on the study content and how much it might be expected to interact with undergraduate institution or demographic variables, to even larger populations (undergraduates in the United States, young adults in the northern hemisphere, etc.).
Participant recruitment can limit external validity (i.e., generalizability beyond the unique procedures and participants involved) in various ways, including through explicit exclusion criteria. However, the most pernicious threat to external validity in participant selection occurs through self-selection, which can operate outside the researcher’s awareness. Recruitment procedures can profoundly affect the probability of self-selection. In practice, the extent to which self-selection limits generalizability depends on the type of research. For some topics (such as some lower-level cognitive investigations), results may not depend greatly on the participant sample. However, for other topics (such as personality psychology) self-selection could make all the difference. The fields of trauma and dissociation fall into the latter category, for our research is very much about the impact of differential experience and individual differences in response to that experience. Furthermore, our field investigates the impact of experience that may be stigmatizing and reluctantly acknowledged. This means, for instance, that individuals who respond to a research advertisement asking for people who have experienced a traumatic event (or abuse, etc.) may constitute an atypical sample compared with the general population of trauma survivors. Failures to replicate findings from one study to the next might be expected in trauma psychology if the recruitment procedures lead to substantially different participant populations (for a specific example, see Freyd, DePrince, & Gleaves, 2007).
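To see how quickly self-selection can distort a sample, consider the following minimal simulation sketch. It is my own illustration, not drawn from any study discussed here: the trait, the assumed linear link between the trait and the probability of volunteering, and all numbers are invented for the purpose of the example.

    import random

    random.seed(1)

    # Population trait scores (say, a standardized personality trait).
    population = [random.gauss(0.0, 1.0) for _ in range(100_000)]

    def signs_up(trait, link=0.3):
        # Assumed link: the higher the trait, the likelier the volunteer.
        p = min(max(0.5 + link * trait, 0.0), 1.0)
        return random.random() < p

    volunteers = [t for t in population if signs_up(t)]

    print("population mean trait: %+.3f" % (sum(population) / len(population)))
    print("volunteer mean trait:  %+.3f" % (sum(volunteers) / len(volunteers)))
    # The volunteers over-represent high-trait individuals, so any
    # finding that interacts with the trait may fail to generalize.

Under these invented numbers the volunteer mean lands roughly half a standard deviation above the population mean, even though no one was explicitly excluded.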
There is a small literature investigating the impact of recruitment procedures on self-selection in HSPs. For instance, researchers have found differences between participants who sign up for studies run online versus in person, and between participants who sign up at the beginning versus the end of the term (e.g., Witt, Donnellan, & Orlando, 2011). Of particular relevance to participant self-selection due to recruitment materials, Saunders, Fisher, Hewitt, and Clayton (1985) compared students recruited for a study described as involving personality questionnaires with students recruited for one described as involving erotica. Once in the lab, all participants were asked to complete both sets of questionnaires, to watch a film with erotic content, and to answer questions about the film. Recruitment condition affected results for prior sexual experience, sexual opinions, and emotional responses to the erotic film. This suggests that conclusions about sexuality may not generalize from populations who volunteer for sex research to other populations.
Jackson, Procidano, and Cohen (1989) investigated the impact of study descriptions on self-selection among introductory psychology students. In their first experiment they compared the personalities of students who signed up for a study on performance evaluation versus one on task evaluation. In the first case the description indicated that “participants will be evaluated by experts on their performance on a number of tasks. These include personal disclosures, friendship making, and manual tasks” (p. 32). In the second case the description indicated that “various tasks will be evaluated for future experimental use. After performing these tasks, participants will evaluate them” (p. 32). When participants arrived for the study they were asked to complete questionnaires assessing various personality traits. The researchers found important differences between the two samples: the students who signed up for the performance evaluation study scored higher on a measure of complexity capturing a tendency to be thoughtful and analytical and to enjoy intricacy. In a second experiment, the researchers compared a study described as giving personality feedback with one described as involving proofreading. The students who selected the personality study were more broadminded, receptive, extraverted, and good-natured than those who selected the proofreading study. Such differences between samples could have profound impacts on study results and thus on external validity.
Perhaps of greater relevance to the field of trauma psychology, Carnahan and McFarland (2007) asked whether the results of the famous Stanford Prison Experiment (Haney, Banks, & Zimbardo, 1973) could have been biased by subject self-selection. They noted that the advertisement for participant recruitment included the phrase “a psychological study of prison life.” Could this phrase be more attractive to individuals with a more aggressive or authoritarian personality, and, if so, could that have biased the results? Carnahan and McFarland (2007) ran an experiment in which they recruited volunteers using two similar but slightly different advertisements. In one condition the original wording from the Stanford Prison Experiment was used: “Male college students needed for a psychological study of prison life.” In the other condition the phrase “of prison life” was omitted. Participants who responded to either advertisement were then asked to complete a battery of personality measures. Carnahan and McFarland (2007) reported that volunteers who responded to the prison study advertisement, compared with those who responded to the neutral advertisement, scored higher on measures of aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance and lower on measures of empathy and altruism. In other words, there was apparently self-selection based on the recruitment wording. This means that the results of the original Stanford Prison Experiment may not generalize to other populations. The Stanford Prison Experiment suggested that individuals were willing to behave in abusive ways when assigned to the role of prison guard, and this finding has been used to argue that good people can easily be induced into doing evil (e.g., Zimbardo, Maslach, & Haney, 2000). Perhaps that conclusion is true, but the evidence from the Stanford Prison Experiment is compromised, given that the individuals who elected to enter the study may have been much more ready to behave in abusive ways than individuals from a more representative sample would have been.
Given these findings confirming that information provided to participants about study content can affect self-selection and thus threaten external validity, my plea to university researchers is twofold: First, be precise in methods sections about actual recruitment materials. Second, advocate for recruitment procedures that minimize self-selection when one can ethically do so. I elaborate on each plea below.
Plea 1: Be Precise in Methods Sections about Recruitment Materials
Methods sections often omit details regarding recruitment that might powerfully affect the possibility of self-selection. Studies using subject pools often simply indicate that participants “were recruited from a human subject pool” or “received course credit” for their participation. But how were those participants recruited? What information did potential participants have before choosing to sign up for the study? At some universities, students are given quite a bit of information about the studies they can select from for course credit, including the name(s) of the investigators and the topic of the investigation. At other universities, such as my own, students select studies based only on schedule availability rather than on knowledge of the content of the research. The implications of these two procedures for potential self-selection, and thus generalizability, are profound. In the first case a study may get an unrepresentative sample of students who have a particular interest in the topic. In the second case the sample is more likely to be representative of the larger population of students in the human subject pool. It is vital that scientists be exact in their descriptions of methods so that all potential artifacts are exposed. Authors must provide sufficient information about recruitment materials in the methods sections of empirical reports so that readers can evaluate the potential for self-selection.
To this end, the JTD “Author Assurance Form and Submission Checklist” now includes the following items:
For instance, in one study from my own laboratory we included this information in the methods section:
Undergraduate students (91 men, 227 women) in introductory psychology classes at the University of Oregon participated to partly fulfill a course requirement. Participants did not self-select into the study based on knowledge of the content; rather, participants were selected for the study based on schedule availability from a large human subject pool. (Cromer & Freyd, 2007, p. 16)
If specific information about the study is included in the recruitment materials, this should be stated clearly in the methods section. For instance, one might write:
Undergraduate students (91 men, 227 women) in introductory psychology classes at the University of ---------- participated to partly fulfill a course requirement. Potential participants were invited to sign up for a study described this way: “We are interested in different ways people feel about stressful events. You will be asked to tell us about your experiences with trauma in your life, and how you feel about those experiences now.”
Plea 2: Advocate for Recruitment Procedures That Minimize Self-Selection When One Can Ethically Do So
In the spring of 2009 our Human Subject Pool (HSP) at the University of Oregon (UO) was under standard continuing review by our university Institutional Review Board (IRB). One of the concerns expressed by the IRB in 2009 was the study sign-up procedure. The IRB requested that we provide information to students about the study content and principal investigators at the sign-up stage. We had been through this process in years past with our IRBs, but this time the committee seemed particularly insistent.
Our UO psychology department HSP was created in the early 1980s, spearheaded by then-faculty member Morton Gernsbacher. From the beginning, Gernsbacher and her colleagues realized that a good HSP would be structured to avoid self-selection into studies. Thus studies in our HSP are given short, memorable, but non-descriptive and non-referential names, typically all drawn from a single category at a given time (for instance, studies might be named after breeds of dogs, names of rivers, or names of composers). I joined the UO faculty in 1987, a few years after Gernsbacher and her colleagues had established the HSP. In the 25 years I have been a member of the department I have made heavy use of the HSP. I have also taught our large introductory psychology class, which serves as the primary source of participants in the HSP. During these many years I have been involved in surveys of our HSP participants regarding their experiences in the Pool. In all this time, as far as I know, no complaints have been raised by students regarding the de-identified study sign-up process. From time to time, however, the university IRB has expressed concern about the system, and in 2009 it went so far as to tell us we must change our procedure and give students more information about the studies before sign-up.
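As a sketch of how such a scheme can be automated (hypothetical code; the category words, study identifiers, and function names are my own inventions, not the HSP’s actual software), one might assign each study a random codename from the current category and shuffle the listing anew for each student:

    import random

    # One themed category per term; names carry no hint of study content.
    CATEGORY = ["Beagle", "Collie", "Samoyed", "Whippet",
                "Basenji", "Akita", "Vizsla", "Saluki"]

    def assign_codenames(study_ids):
        # Pair each study with a distinct, non-descriptive name.
        if len(study_ids) > len(CATEGORY):
            raise ValueError("add more names to the category")
        names = random.sample(CATEGORY, len(study_ids))
        return dict(zip(names, study_ids))

    def listing_for_student(codenames):
        # Present the sign-up list in a fresh random order for each viewer.
        names = list(codenames)
        random.shuffle(names)
        return names

    codenames = assign_codenames(["memory-study", "emotion-study", "language-study"])
    print(listing_for_student(codenames))  # e.g., ['Vizsla', 'Beagle', 'Saluki']

The point of both devices is the same: neither the name nor the position of a study in the list gives a student any content-based reason to prefer it.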
The IRB explained that its concern was about participant informed consent. This is a legitimate concern. In response, I proposed that we create a consent process for the HSP itself, such that before subjects are even eligible to sign up for studies they must go through an informed consent about the HSP process, including the nature of study sign-up. I suggested that this consent process could explain why we do the blind sign-up, the risks it might create for participants (discovering they signed up for a study they decide they do not want to complete), and their rights (including the right not to complete a particular study they signed up for). Such a process could potentially inform, empower, and educate the student participants while communicating “respect for persons,” one of the three fundamental ethical tenets of the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979).
This idea was accepted by our IRB and is implemented today through our informed consent process and as part of our educational materials for students in introductory psychology courses. In other words, participants have a two-tiered consent process. First, they have the option to consent to being in the HSP itself; then, if they decide to be in the HSP, they have the option to consent to proceed with specific studies after they have signed up for them. Even before the consent material is provided to our participants, the process is explained to potential participants in in-class presentations and to researchers in an online training that must be completed prior to using the HSP. In the HSP consent process itself we explain our online study sign-up procedure.
Our current HSP consent document includes this language:
This website is constructed to help you select and choose studies that are available for you to participate in. It automatically allows researchers to post available studies and automatically tracks the credit you have earned from these studies. When you log in, you will be able to click "Study Sign-Up" and see a list of studies available. These studies are presented in random order—different every time! You will also note that they are not named after anything meaningful—some are named after states, trees, elements of the periodic table, or breeds of dog. This is to prevent selection bias. This bias occurs when people know what a study is about before they sign up for it. For example, if you are very emotional, you might prefer to take a study on emotions. However, to gain meaningful knowledge about emotions, that study would need to include people broadly representative of the general population in terms of emotional experience. The studies will cover a broad range of topics in psychology and linguistics.
Signing up for a study does not require you to participate in the study. When you arrive for an experiment, the researchers will explain to you what will occur in the study. That is, they will tell you what the study involves. Not only will you get credit for reviewing this information (this period is called "informed consent"), but you also have the right to opt out immediately for any reason: Simply tell the researcher that you do not consent and that you want to leave the study. This principle of "opting out" applies to the entire study. In such a situation, you will get credit for every 15-minute block (or fraction thereof) that you spend participating. The same rules will apply if the study is likely to run longer than expected.
Your participation in the Human Subjects Pool is voluntary. If you do not wish to participate in research to fulfill your class's "research requirement," refer to your syllabus for alternatives or speak to your instructor. You may choose to complete an alternative assignment at any point during the class.
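The credit rule quoted above (“every 15-minute block or fraction thereof”) amounts to rounding participation time up to the next block. A small illustrative calculation (my own sketch, not the Pool’s actual software) makes the arithmetic explicit:

    import math

    BLOCK_MINUTES = 15

    def credit_blocks(minutes_participated):
        # A fraction of a block still earns a full block of credit.
        return math.ceil(minutes_participated / BLOCK_MINUTES)

    for minutes in (10, 15, 16, 50):
        print(minutes, "min ->", credit_blocks(minutes), "block(s)")
    # 10 min -> 1, 15 min -> 1, 16 min -> 2, 50 min -> 4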
We have been using this two-tiered consent system for three years without problems. Students learn about good methodology in psychological science, which is the fundamental justification for having the research participation requirement, and researchers get more generalizable data from the fulfillment of that requirement, thus making the contribution to knowledge more useful.
References
Acknowledgements: I am grateful to my current and former colleagues at the University of Oregon, particularly Lisa Cromer, Morton Gernsbacher, and Deborah Olson, for their wisdom and effort in creating and maintaining a superb Human Subject Pool. For providing valuable information and feedback on this editorial I thank Morton Gernsbacher, Sasha Johnson-Freyd, Melissa Platt, and Sanjay Srivastava.