
Franklin G. Miller*, John P. Gluck, Jr., and David Wendler*

Debriefing and Accountability in Deceptive Research

ABSTRACT. Debriefing is a standard ethical requirement for human research involving the use of deception. Little systematic attention, however, has been devoted to explaining the ethical significance of debriefing and the specific ethical functions that it serves. In this article, we develop an account of debriefing as a tool of moral accountability for the prima facie wrong of deception. Specifically, we contend that debriefing should include a responsibility to promote transparency by explaining the deception and its rationale, to provide an apology to subjects for infringing the principle of respect for persons, and to offer subjects an opportunity to withdraw their data. We also present recommendations concerning the discussion of deception in scientific articles reporting the results of research using deception.

* The opinions expressed are those of the authors and do not necessarily reflect the position or policy of the National Institutes of Health, the Public Health Service, or the Department of Health and Human Services.

Deception is a common method of study design in research on attitudes and behavior, especially in experimental psychology and neuroscience. Many of the most important findings of psychological research have derived from deceptive experimentation with human subjects (Korn 1997). Although used less frequently in clinical research, deception is not uncommon there either (Wendler and Miller 2004; Mann 2007). In general, deception is used when it is thought to be necessary to obtain valid experimental data: that is, when truthful disclosure to research subjects about the purpose of the research or the nature of research procedures is deemed likely to produce biased responses, or when deception is necessary to create controlled experimental conditions that credibly mimic natural situations of human interaction. Accordingly, deception, when methodologically necessary, serves to promote scientific validity, which is one of the ethical requirements of research on human subjects. Unless human experimentation is devoted to answering valuable research questions, using scientifically rigorous methods, there is no justification for the risks and burdens to which subjects are exposed. However, deception clearly violates the basic ethical principle of respect for persons and the requirement of informed consent. Given that moral considerations support the promotion of socially valuable human research, and that deception is sometimes methodologically necessary to answer important research questions, the use of deception creates an ethical conundrum.

The technique of debriefing is a standard ethical safeguard employed in deceptive research: at the end of research participation, subjects are informed about the use of deception and its rationale. In this article, we examine the practice of debriefing from an ethical perspective. There has been little systematic attention to the ethics of debriefing. Most of the articles on this topic date from the mid-1960s to the mid-1980s and focus almost exclusively on deception in psychology research (Sieber 1983; Levine 1986). This literature has primarily addressed the role of debriefing in minimizing and mitigating potential harm caused by deceptive experimentation. The role of debriefing in ameliorating the moral wrong of deception has received less attention.

TYPOLOGY OF DECEPTION

In general, deception involves deliberately misleading communication to prospective subjects about the purpose of the research and/or the nature of experimental procedures. As background to an ethical assessment of debriefing, it is useful to consider the various types or features of deceptive research. Depending on the characteristics of deceptive experiments, debriefing may or may not be feasible or desirable; and when it is appropriate, the task of debriefing may vary accordingly. We describe five types of deception in research; a given experiment may employ one or more of these types.

First, there may be deceptive disclosure to research subjects about the purpose of research. This may take the form of inaccurate description in the written consent document and in conversation with prospective subjects prior to research participation. The purpose of the research may be described in a deliberately vague manner, so as to avoid alerting subjects to the exact topic of interest to investigators. For example, a study of alcohol use may be presented to prospective subjects as research on health in general. In contrast, other disclosures about the research purpose may deliberately misinform subjects, as when a study of the placebo effect, in which the outcome of interest is how subjects respond to a disguised placebo intervention, is described as research on the effects of an active treatment, which may or may not be administered in the research (Miller, Wendler, and Swartzman 2005).

Second, deceptive experiments commonly involve fake or rigged instruments or procedures, which in fact are different from the way they are described or presented to the subjects. Third, the experimental design may include misleading play-acting by the investigator and/or the use of "confederates," employed by the research team to play a deceptive role. Fourth, research may adopt covert procedures that are not disclosed to subjects, such as observation of subjects behind a one-way mirror or by means of a hidden camera. Finally, the research may be entirely covert: for example, undercover observation or staged experimentation in a public place designed to appear as a natural occurrence. In covert research, debriefing may not be feasible or even possible.

Several of these deceptive features were employed in Stanley Milgram's famous experiments in the 1960s on obedience to authority (Milgram 1974). The research was deceptively presented to subjects as a study of learning and memory. The major experimental procedure was a fake shock generator. To secure belief in the reality of administered shocks, each subject received a real sample shock from the shock generator; however, real shocks were not administered during the experiment. The experimenter instructed and encouraged the subjects to continue administering "shocks" to the "learner" despite resistance. A confederate of the researchers, presented to subjects as a research volunteer, played the role of the "learner," receiving "shocks" for mistakes and falsely signaling reactions to them. For some of the experiments, subjects were observed by researchers behind a one-way mirror.

Despite the pervasive deceit in his experiments, Milgram eschewed "deception" as accurately characterizing his research. He is quoted as remarking, "It is true that technical illusions were used in the experiment. I would not call them deceptions because that already implies some base motivation" (Korn 1997, p. 104). It is not the motivation, however, that marks an experiment as deceptive; rather, it is the deliberate creation of false beliefs about the research on the part of the subjects. Milgram (1974) described a process of debriefing for each subject at the conclusion of research participation, which included being informed that real shocks were not administered and that obedience to the commands of the experimenter was "entirely normal." In addition, "[e]ach subject had a friendly reconciliation with the unharmed victim." A written report explaining the details of the experimental procedure and study results was sent to each subject after the conclusion of the experimental series.

RECENT EXAMPLES OF DECEPTIVE RESEARCH

One area in which deceptive research is common is brain imaging studies aimed at elucidating the placebo effect (Miller, Wendler, and Swartzman 2005). We describe two deceptive experiments in this body of neuroscience research.

Tor Wager and colleagues (2004) conducted a study aimed at examining whether placebo analgesia can cause reduced activity in pain-responsive areas of the brain. Twenty-four healthy volunteers were administered electric shocks while undergoing fMRI. Half of the subjects were administered a placebo cream. According to the research report, "Participants were told that they were taking part in a study of brain responses to a new analgesic cream." In addition, half of the subjects were told that they would receive "an analgesic cream that would reduce but not eliminate the pain of the shocks." Accordingly, subjects were deceived both about the nature of the study and the use of a placebo. Additional information about the research reported online stated that "Informed consent was obtained from all participants after the nature and possible consequences of the study were explained." No mention was made of debriefing; however, the researchers described their debriefing procedure in response to a letter to the editor that raised ethical issues concerning the research (Miller 2004; Wager 2004).

A second fMRI experiment, designed to extend understanding about brain activation related to placebo analgesia, involved nine patient-subjects with irritable bowel syndrome who were administered placebo analgesia in connection with a painful stimulus of rectal distension (Price et al. 2007). A saline jelly was applied to the rectal balloon used for pain stimulation. The physician responsible for most of the patients in the university clinic administered the pain stimulus and placebo. In the placebo condition of the experiment, the subjects were informed that "The agent that you have just received is known to powerfully reduce pain in some patients." We regard this disclosure as deceptive even though experimental evidence has demonstrated the efficacy of placebo analgesia, as the intent appears to be to make patient-subjects falsely believe that they are being administered an active medication. No information was provided concerning the disclosure to subjects about the nature of the research, although it is probable that this was not described truthfully in the informed consent process. Once again, the research report stated that "All patients signed informed consent prior to the start of the study," and there was no mention of debriefing.

IS THERE ANYTHING MORALLY WRONG ABOUT DECEPTIVE RESEARCH?

An account of the ethics of deception in research is a necessary prerequisite to ethical assessment of debriefing. Although the common morality includes a duty not to lie, it is important not to beg the question concerning the wrongfulness of deceptive research. Deception pervades ordinary life. In order to be polite, people often do not tell the truth about what they think. They tell self-interested "white lies" when it appears expedient, which typically amount, at most, to peccadillos. Elaborate deception in the form of a surprise party is thought to be morally innocuous, if not good. In addition, many interactions in the marketplace, involving even large sums of money, are characterized by acceptable forms of deception, as in advertising and negotiation, and the buyer is expected to beware (caveat emptor). Why, then, should one be concerned about deception in research? Indeed, it might be argued that deception in scientific investigation is no more problematic than the pervasive and accepted use of deception in ordinary social contexts.

Much of the deception in daily life, however, may be justified on the grounds that it is for the benefit of the individual who is being deceived. Such ordinary deception is considered to be acceptable or good because it is better to deceive someone slightly than to criticize them or hurt their feelings. But this interpersonally beneficent deception is not relevant to human experimentation, where subjects are deceived for the benefit of science and society in general, through the development of generalizable knowledge, and not for their own benefit. On the other hand, deception in science may appear more easily justifiable than ordinary deception because it is aimed at promoting the social good of useful knowledge.

For several reasons, deception of research subjects clearly conflicts with widely accepted ethical norms governing human experimentation. First, it violates the principle of respect for persons, articulated in the Belmont Report (National Commission 1979), by infringing the right of prospective research subjects to choose whether to participate in research based on truthful disclosure of relevant information. Deception may manipulate individuals to volunteer when they would not have chosen to do so had they been informed accurately about the nature of the research, including its use of deception. The U.S. federal regulations governing protection of subjects (DHHS 1991) state that "An investigator shall seek consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate." Although this provision would seem to preclude deception, as discussed below, the regulations permit research that deviates from providing the standard "elements" of informed consent.

Second, deception in research counts as exploitation. Investigators using deception unfairly take advantage of subjects unless, as rarely occurs, subjects are alerted to the use of deception in the process of soliciting consent (Wertheimer 1996). (We describe later the "authorized deception" approach, in which subjects are informed prospectively about the use of deception as part of the consent process.) By virtue of being deceived, subjects lack a fair opportunity to decide whether to consent to research that employs deception.

Third, when deception is revealed to subjects in the debriefing process, or otherwise discovered, it may cause distress and foster lack of trust in science. Extant evidence on deception in psychological research involving psychology students as subjects suggests that, in the aggregate, these subjects do not report a negative response to being informed that they have been deceived (Korn 1997, p. 172). But this may reflect the fact that members of this group of subjects, familiar with psychological research, anticipate being deceived. Additionally, subjects may be reluctant to report distress at being deceived, especially when they are interviewed about their responses by investigators responsible for the deception. Indeed, Janet Brody and colleagues (2000) found that psychology students who were interviewed by neutral investigators were frequently willing to report distress about participation in deceptive research. Although scant systematic data have been collected on the effects of deception on clinical research subjects, some available evidence indicates that when the deception is revealed it causes distress to at least some subjects (Fleming et al. 1989). Subjects in clinical research have a legitimate expectation of trust in, and truthful communication by, clinicians and clinical investigators. This trust is violated by the use of deception. Consequently, deception of patient-subjects may have deleterious effects on their willingness to volunteer for future clinical research. Moreover, by undermining patients' faith in the truthfulness of physicians, deception might interfere with the future medical care of those who have experienced deceptive research.

Finally, deception in research raises ethical concern because it can be corrupting for the professionals who practice it and for those who witness it. According to an ancient perspective in moral philosophy, moral character depends on habits of conduct (Aristotle 1955). The use of deception in research may interfere with the disposition not to lie or deceive persons. This problem is compounded when the study design requires repeated deception of subjects during the conduct of research. Those who witness deception, especially if performed or sanctioned by professionals in positions of authority, may develop skewed perceptions of the ethics of deception, which may have negative consequences for the development of moral character (Oliansky 1991).

In sum, deception in research is prima facie wrongful, and it may be harmful to those who are deceived as well as to those who practice or witness it.

REGULATORY AND ETHICAL STANDARDS

The U.S. federal regulations (DHHS 1991, 46.116c) do not mention the use of deception but permit Institutional Review Board (IRB) approval of research protocols that deviate from the prescribed elements of informed consent:

    An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section . . . provided that the IRB finds and documents that: (1) the research involves no more than minimal risk to the subjects; (2) the waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) the research could not practically be carried out without the waiver or alteration; and (4) whenever appropriate, the subjects will be provided with additional pertinent information after participation.

The second condition is curious. Competent adult persons have a right not to be experimented on without their informed consent; accordingly, deceptive research deviating from informed consent would seem, on its face, to adversely affect subjects' rights. It appears that this provision has been understood as meaning that the deviation from informed consent by virtue of deception will not otherwise infringe the rights of subjects. The fourth condition alludes to the use of debriefing.

The regulatory provision for a waiver of or deviation from informed consent indicates that it is the responsibility of the IRB to assess studies involving deception to determine whether they comply with these four conditions. Accordingly, investigators should include in study protocols reviewed by IRBs a justification for the use of deception that permits the IRB to decide whether deviation from informed consent is appropriate. It is important to note that the U.S. federal regulations apply only to research that is funded by the federal government or conducted by academic institutions committed to abiding by these regulations. Nevertheless, from an ethical perspective all deceptive research, at a minimum, should satisfy these four conditions.

The Belmont Report (National Commission 1979) addresses deception only in the mild form of "research involving incomplete disclosure" about "some pertinent aspect of the research." For such deceptive research, it is prescribed that "there is an adequate plan for debriefing subjects, when appropriate." The American Psychological Association's Ethical Principles of Psychologists and Code of Conduct stipulates a more specific provision relating to debriefing:

    Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but not later than at the conclusion of data collection, and permit participants to withdraw their data. (APA 2002)

We have argued elsewhere that, whenever it is practicable, the consent process for deceptive research should include an "authorized deception" disclosure to prospective subjects, alerting them to the use of deception in the study (Wendler and Miller 2004; Miller, Wendler, and Swartzman 2005). Variants of the authorized deception approach have been advocated or adopted since the 1970s (Milgram 1977; Holmes and Bennett 1974; Bok 1978; Wiener and Erker 1986); however, it has not become a routine feature of research using deception (Sieber, Iannuzzo, and Rodriguez 1995). The use of authorized deception contributes to making the process of deceptive research transparent. Subjects are informed that they will be misled or deceived, although obviously the exact nature of the deception cannot be disclosed prospectively. They should be assured that the research has been reviewed and approved by an independent ethics oversight committee with no vested interest in the research in question; that no important risks, other than the risks of the deception itself, have been concealed; and that no significant benefits have been falsely promised. Finally, they should be informed that debriefing will occur.

ETHICAL ASSESSMENT OF DEBRIEFING

Important to an ethical assessment of debriefing is dispelling misconceptions about its purpose and function. Frederick Tesch (1977, p. 218) notes that "the hidden assumption" of the practice of debriefing among psychologists is "one of magical undoing, of using debriefing as an eraser for emotional and behavioral residues." No less magical is the tendency to see debriefing as erasing the wrong of deception. In a recent and fascinating book examining the illusions involved in the pursuit of happiness, Daniel Gilbert (2005) describes the import of numerous psychological experiments, many of which were deceptive. Possibly testifying to some residual moral discomfort, he includes a footnote that describes "the strict ethical guidelines" that psychological researchers follow, including IRB review and voluntary participation as well as debriefing: "If people are given any false information in the course of an experiment, they are told the truth when the experiment is over. In short, we're really very nice people." Adhering to ethical standards, including debriefing, thus is seen as neutralizing deception, wiping the moral slate clean. This conception of debriefing apparently serves to reduce the cognitive dissonance involved in the use of deception. The self-perception of investigators as virtuous seekers of socially valuable truth—and as nice people—remains intact despite the use of elaborate experimental deception.

However, just as restitution for criminal behavior does not cancel the wrong committed, so debriefing, although ethically desirable, does not cancel the prima facie moral wrong of deception. Indeed, restitution involves compensation for wrong, whereas debriefing is not compensatory. The point of this analogy is not to suggest that deceptive research is "criminal," but that debriefing cannot magically erase the moral taint of deception for the sake of science. Ironically, valid consent itself has been described as "moral magic," because it functions to make morally permissible interpersonal behavior that otherwise would be immoral (Hurd 1996). The "magic" of consent is "real" when consent genuinely operates as a morally transformative act. Debriefing, however, is not morally transformative—the prima facie wrong of deception may be ameliorated but not erased by debriefing—although it is understandable why it may be seen by investigators, who employ or endorse deception, in this self-deceptively magical way.

Also mistaken is the tendency to see debriefing as retrospectively providing the informed consent that deception prospectively compromises. A recent article reporting the results of a deceptive experiment concerning the placebo effect in asthma stated in its methods section that "All subjects provided written informed consent before screening that did not reveal that the central purpose of the study was to explore the placebo response; this deception was revealed at a debriefing at the end of the protocol, when the subjects were reconsented and given the opportunity to withdraw from the study" (Kemeny et al. 2007). Given that the study involved deception, informed consent was not provided by the subjects, unless they were alerted to the use of deception in the disclosure about the nature of the research. Presumably, consistent with most deceptive research, this technique of "authorized deception" was not employed. It is not clear exactly what the authors mean by "reconsenting," as no details were provided. Giving subjects the opportunity to withdraw their data from the research, which we discuss later, helps to restore an aspect of autonomy that was infringed by the prior use of deception—that is, they are allowed to decide whether to contribute their data once they understand the true nature of the research. Nevertheless, deciding not to take up the offer to withdraw data does not amount to (retrospective) informed consent for research participation. Valid consent is always current or prospective. It gives permission for interactions in the present or future that would be impermissible in the absence of consent. Subjects' affirmatively permitting the use of their data during debriefing does count as (prospective) consent to future use of their data. But this, strictly speaking, is not valid consent to participation in the deceptive study, because the study participation already has occurred and the initial consent to enroll in the research was not valid, owing to deception. At the least, it is misleading to describe debriefing that occurs after study participation has been completed as "reconsenting."

In sum, it is important to recognize that debriefing does not serve as retrospective consent and does not eliminate the prima facie wrong of deception. To think that it does mischaracterizes the ethical assessment of deceptive research, suggesting that there would be no reason to prefer nondeceptive research over deceptive research plus debriefing. In contrast, recognizing that a wrong remains, despite debriefing, highlights the fact that deceptive studies need to be justified prospectively and should be approved only when they offer the prospect of producing socially valuable knowledge that could not be obtained without deception.

Instead of seeing debriefing as a mechanism for erasing the moral problem of deception, it should be understood as a tool of moral accountability. Research subjects are owed a timely explanation of how and why they were deceived, in contravention of the requirement and expectation of informed consent. Debriefing makes amends by retrospectively providing the disclosure about the research that standardly should have been offered prospectively. A requirement to explain to subjects retrospectively the rationale for deception in promoting the scientific validity of the research suggests that participants were not merely used for the sake of science. Although this debriefing does not cancel the unfairness of deception, it does provide an opportunity and responsibility to make the deception appear reasonable to the subjects.

More controversially, we contend that debriefing, as a tool of moral accountability, should include the requirement of a sincere apology for the wrongfulness of deception, provided that the research did not employ the authorized deception approach of prospectively alerting subjects to the use of deception. It might be objected that there is no need for an apology when the use of deception is rightly regarded as justified. Whether deceptive research, absent the use of authorized deception, can be justified, all things considered, will not be assessed here. However, assuming that it can does not entail that there is no need for an apology. Apologies should involve an expression of regret, even when remorse for wrongful action is not appropriate. For example, consider a garden-variety example of a conflict of moral duties. Having promised to meet a friend for dinner, on the way I encounter a person who falls to the ground and needs help in getting medical attention. My duty to help the person in need outweighs the duty to keep my promise. Yet I still owe my friend an apology for not being on time, or perhaps for not being able to show up at all. An act that is prima facie wrong, although justified all things considered, calls for an apology to those whose rights accordingly have been overridden. I need not express remorse, because I did the right thing; but I should express regret for not being able to keep my promise and thus causing inconvenience or distress to my friend. One might think that in this case it would be sufficient from a moral perspective to offer an explanation for not keeping the promise, since helping the person in need obviously trumps keeping the promise. Yet an expression of regret is appropriate because the promisee has suffered a violation of legitimate expectations created by the promise, as well as experiencing emotional frustration and disappointment due to my unexplained absence. To be sure, the promisor does not regret helping the person in need, but should regret inconveniencing his friend. W. D. Ross (1988, p. 28) explains how to think about this situation of moral conflict as follows:

    When we think ourselves justified in breaking, and indeed morally obliged to break, a promise in order to relieve some one's distress, we do not for a moment cease to recognize a prima facie duty to keep our promise, and this leads us to feel, not indeed shame or repentance, but certainly compunction for behaving as we do; we recognize, further, that it is our duty to make up somehow to the promisee for the breaking of the promise.

"Compunction" nicely captures the proper moral phenomenology, as etymologically it means a sting or prick of conscience. The use of deception in research differs from this example of moral conflict in that, in the former, the breach of an obligation was planned in advance. Yet the principle remains that when an obligation to another is justifiably overridden, some redress is owed. Thus, although the use of deception may be justified to promote the scientific validity of potentially valuable research, a regret-expressing apology is still owed to research subjects who have been deceived. An explanation of deception is not sufficient from a moral perspective. Expressing regret in the process of debriefing is a way of making up morally to the deceived subject.

In addition, the requirement of debriefing, including a sincere apology to deceived research subjects, should give investigators pause about employing deception. If taken seriously, it invites researchers to engage their moral imagination and see the deception from the perspective of a naïve participant who has no vested interest in the research. Accordingly, debriefing may serve as a check on the cavalier use of deception in research. If investigators are never required to confront and apologize to the subjects they have deceived, the morally problematic use of deception becomes easier to practice than it should be. And this implies that debriefing ought to be conducted by the investigators responsible for deceiving, and not assigned to other members of the research team who played no significant role in deceiving the subjects. Certainly, debriefing should not be limited to distributing to the research subjects a printed statement about the true nature of the research with a phone number to be used if more information is requested.

Should an apology be offered when deceptive research uses the authorized deception approach? Authorized deception definitely eases, if not eliminates, the burden of moral accountability in the form of an apology. In being alerted prospectively to the use of deception, research subjects are given a fair opportunity to decide whether or not they wish to participate in a deceptive study. Obviously, however, the authorized deception approach cannot disclose the exact nature of the deception employed in the research. It remains possible that some subjects would not have agreed to participate in the research had they known the true purpose of the research or the nature of the deceptively presented experimental procedures. We suggest that when authorized deception is used, an apology is not owed uniformly, but only to those subjects who object to the deception in the debriefing process. Once again, when needed, this apology calls for some expression of regret, but not remorse.

SOLICITING CONSENT FOR THE USE OF DATA

The offer to withdraw data, sometimes adopted as an element of debriefing, presents competing ethical considerations, thus revisiting the conundrum of promoting scientific validity versus respecting the autonomy of subjects (Sieber 1983; Freedman 1983). On the one hand, offering subjects the option to withdraw their data potentially compromises the scientific validity of the research. To the extent that subjects take up the offer, the generalizability of the research results is impaired, by reducing the number of subjects studied and potentially introducing an element of selection bias. Those disposed to withdraw their data may have different personality characteristics from those who do not, such that excluding their data biases the research. This practice also might seem unfair to other subjects who are prepared to permit the use of their data despite being deceived, as it potentially diminishes the value of their contribution to the research. Accordingly, when deceptive research is ethically justified, offering subjects the option to withdraw seems ethically dubious.

On the other hand, absent the offer to withdraw their data, subjects are forced to contribute to research without their informed consent. Indeed, the defective consent in deceptive research, unlike valid consent, arguably does not authorize investigators to use subjects' data. The offer to withdraw, as mentioned above, provides a retrospective measure of respect for subjects' autonomy, and it prospectively offers them the opportunity to decide whether their data will be used. Therefore, it should be treated as affirmatively soliciting consent for the use of subjects' data, rather than simply giving subjects the option to withdraw.

This process of seeking consent for the use of subjects' data, furthermore, is unlikely to have a strong impact on the validity of the research. Although no systematic evidence is available on the extent to which subjects are offered the option, and decide, to withdraw their data in the context of debriefing, we suggest that few subjects will do so. For example, in the study of the placebo effect in asthma described above, which was conducted in a clinical setting, none of the 55 patient-subjects accepted the offer to withdraw their data. If, however, after debriefing a substantial proportion of subjects choose to withdraw their data from a particular study, this sends a valuable message about the perceived ethics of this particular use of deception.


We view the balance of moral considerations as favoring solicitation of consent for the use of subjects' data as an element of debriefing. However, the balance arguably shifts when the authorized deception approach is employed. Because this method of disclosure makes deceptive research compatible with the spirit of informed consent, there is no reason to seek consent for use of data or to offer withdrawal of data. Nonetheless, it would seem fair to permit withdrawal of data by those subjects who express objection to being deceived and demand that their data not be used.

Solicitation of consent for the use of data (or the offer to withdraw data) and the apology for deception dovetail as elements in the process of debriefing. Both can be seen as morally appropriate responses to defective consent for study participation. If the initial consent were valid, there would be no need either for an apology or for an offer to withdraw data. Both practices, therefore, can, and should, function as meaningful gestures of moral accountability.

The process of debriefing also can serve moral accountability for the practice of deception if it is used as an opportunity to gather potentially valuable ethics-related data regarding the attitudes of research subjects. We recommend routine data collection concerning such issues as distress caused by the use of deception, approval or disapproval of deceptive methods, trust in science, willingness to participate in future research, appraisal of the debriefing process, and the number of participants electing to withdraw their data. This follow-up research should be useful to IRBs in overseeing deceptive research and potentially for publication. Such data are important because IRBs are charged with making risk-benefit assessments of research that employs deception. Without data on the responses of subjects to being deceived, these assessments are merely impressionistic. In this respect, the use of deception is no different from experimental procedures such as brain imaging and lumbar punctures, which carry the potential for harms to subjects that should be evaluated in the light of systematic data.

To ensure that debriefing functions as a tool of moral accountability, IRBs should review written plans for debriefing in all research that employs deception. This should include a sample script of the debriefing disclosure to subjects, detailing the explanation of and rationale for the deception, the apology, and the offer to withdraw data.


DEBRIEFING AS PUBLIC ACCOUNTABILITY

Analogous to debriefing as a phase in the conduct of deceptive research, research reports that describe deceptive research may be seen as a form of public debriefing. Just as the prima facie wrongfulness of deception ethically demands debriefing as moral accountability to research participants, so published articles reporting deceptive research should satisfy public accountability for research that deviates from informed consent (Miller and Rosenstein 2002).

The current practice of reporting deceptive research, however, falls short of this standard; indeed, it appears to perpetuate deception. The use of deception usually is not highlighted or explicitly mentioned in research reports. Rather, it often must be inferred from the description of study methods and any included statements about disclosure to subjects concerning the purpose and nature of the research. Articles that report patently deceptive research typically state, nonetheless, that "written informed consent was obtained." This boilerplate statement is accurate only if informed consent is understood merely as signing a consent document. However, given that the purpose of such statements should be to signify that good faith efforts were made to obtain valid consent, the use of such a statement to describe research that employs deception, especially the standard approach without the use of authorized deception, is itself deceptive. Finally, debriefing is not consistently mentioned and is rarely described with any specificity.

We recommend that all reports of deceptive research comply with the following standard of ethics disclosure:

• highlight the fact that deception was used;
• explain the rationale for deception, including reasons why nondeceptive methods would not be suitable;
• describe specifically the way in which the consent disclosure to subjects deviated from informed consent;
• describe the use and nature of debriefing, including the offer to withdraw data.

To accommodate the space limits of print journals, the print version of an article can briefly cover these points, with a more detailed description placed on the journal's website. Describing such key ethical issues in scientific articles reporting deceptive research promotes public moral accountability, just as describing research methods promotes scientific accountability.


CONCLUSION

We have argued that debriefing retrospectively demonstrates respect for subjects as persons in the face of defective consent compromised by deception. When debriefing explains to subjects the nature of and rationale for the deception and includes, as appropriate, a sincere apology and an offer to permit withdrawal of data, it provides moral accountability for the prima facie wrongfulness of deception.

REFERENCES

APA. American Psychological Association. 2002. Ethical Principles of Psychologists and Code of Conduct. American Psychologist 57: 1060–73.
Aristotle. 1955. Ethics. Trans. J. A. K. Thomson. London: Penguin Books.
Bok, Sissela. 1978. Lying: Moral Choice in Public and Private Life. New York: Random House.
Brody, Janet L.; Gluck, John P.; and Aragon, Alfredo S. 2000. Participants' Understanding of Psychological Research: Debriefing. Ethics and Behavior 10 (1): 13–25.
DHHS. Department of Health and Human Services. 1991. Rules and Regulations for the Protection of Human Research Subjects. 45 Code of Federal Regulations, Part 46.
Fleming, Michael F.; Bruno, Michael; Barry, Kristen; and Fost, Norman. 1989. Informed Consent, Deception, and the Use of Disguised Alcohol Questionnaires. American Journal of Drug and Alcohol Abuse 15: 309–19.
Freedman, Benjamin. 1983. Withdrawing Data as a Substitute for Consent. IRB 5 (6): 10.
Gilbert, Daniel. 2005. Stumbling on Happiness. New York: Vintage Books.
Holmes, David S., and Bennett, David H. 1974. Experiments to Answer Questions Raised by the Use of Deception in Psychological Research: I. Role Playing as an Alternative to Deception; II. Effectiveness of Debriefing after a Deception; III. Effect of Informed Consent on Deception. Journal of Personality and Social Psychology 29: 358–67.
Hurd, Heidi M. 1996. The Moral Magic of Consent. Legal Theory 2: 121.
Kemeny, Margaret E.; Rosenwasser, Lanny J.; Panettieri, Reynold A.; et al. 2007. Placebo Response in Asthma: A Robust and Objective Phenomenon. Journal of Allergy and Clinical Immunology 119: 1375–81.
Korn, James H. 1997. Illusions of Reality: A History of Deception in Social Psychology. Albany: State University of New York Press.
Levine, Robert J. 1986. Ethics and Regulation of Clinical Research. 2d ed. New Haven, CT: Yale University Press.
Mann, Howard. 2007. Deception in the Single-Blind Run-In Phase of Clinical Trials. IRB 29 (2): 14–17.
Milgram, Stanley. 1974. Obedience to Authority. New York: HarperPerennial.
———. 1977. Subject Reaction: The Neglected Factor in the Ethics of Experimentation. Hastings Center Report 7 (5): 19–23.
Miller, Franklin G. 2004. Painful Deception. Science 304: 1109–10.
——— and Rosenstein, Donald L. 2002. Reporting of Ethical Issues in Publications of Medical Research. Lancet 360: 1326–28.
Miller, Franklin G.; Wendler, David; and Swartzman, Leora C. 2005. Deception in Research on the Placebo Effect. PLoS Medicine 2: e262.
National Commission. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report. Washington, DC: U.S. Government Printing Office.
Oliansky, Adam. 1991. A Confederate's Perspective on Deception. Ethics and Behavior 1: 253–58.
Price, Donald D.; Craggs, Jason; Verne, G. Nicholas; et al. 2007. Placebo Analgesia Is Accompanied by Large Reductions in Pain-related Brain Activity in Irritable Bowel Syndrome Patients. Pain 127: 63–72.
Ross, W. D. 1988. The Right and the Good. Indianapolis, IN: Hackett.
Sieber, Joan E. 1983. Deception in Social Research III: The Nature and Limits of Debriefing. IRB 5 (3): 1–6.
———; Iannuzzo, Rebecca; and Rodriguez, Beverly. 1995. Deception Methods in Psychology: Have They Changed in 23 Years? Ethics and Behavior 5: 67–85.
Tesch, Frederick E. 1977. Debriefing Research Participants: Though This Be Method There Is Madness to It. Journal of Personality and Social Psychology 35: 217–24.
Wager, Tor. 2004. Response. Science 304: 1110–11.
———; Rilling, James K.; Smith, Edward E.; et al. 2004. Placebo-Induced Changes in fMRI in the Anticipation and Experience of Pain. Science 303: 1162–70.
Wendler, David, and Miller, Franklin G. 2004. Deception in the Pursuit of Science. Archives of Internal Medicine 164: 597–600.
Wertheimer, Alan. 1996. Exploitation. Princeton, NJ: Princeton University Press.
Wiener, R. L., and Erker, P. V. 1986. The Effects of Prebriefing Misinformed Research Participants on Their Attributions of Responsibility. Journal of Psychology 120: 397–410.
