This article was downloaded by: [University of Maryland, Baltimore], [Mr Adil E. Shamoo]
On: 18 September 2013, At: 11:04
Publisher: Taylor & Francis

Accountability in Research: Policies and Quality Assurance Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gacr20

Data Audit as a Way to Prevent/Contain Misconduct
Adil E. Shamoo, Ph.D.
University of Maryland School of Medicine, Baltimore, Maryland, USA
Published online: 12 Sep 2013.

To cite this article: Adil E. Shamoo Ph.D. (2013) Data Audit as a Way to Prevent/Contain Misconduct, Accountability in Research: Policies and Quality Assurance, 20:5-6, 369-379, DOI: 10.1080/08989621.2013.822259 To link to this article: http://dx.doi.org/10.1080/08989621.2013.822259



Accountability in Research, 20:369–379, 2013 Copyright © Taylor & Francis Group, LLC ISSN: 0898-9621 print / 1545-5815 online DOI: 10.1080/08989621.2013.822259

Data Audit as a Way to Prevent/Contain Misconduct

Adil E. Shamoo, Ph.D.
University of Maryland School of Medicine, Baltimore, Maryland, USA

Research misconduct is frequently in the media headlines. There is consensus among leading experts on research integrity that the prevalence of misconduct in research is at least 1%, and the prevalence of shoddy work may exceed 5%. Unfortunately, misconduct in research affects all walks of life, from drugs to human subject protections, innovation, the economy, policy, and even our national security. The main method of detecting research misconduct relies primarily on whistleblowers. The current regulations are insufficient, since dependence on whistleblowers makes discovery an accidental hit or miss. No other endeavor in our society depends on such a poor system for discovering misconduct in order to remedy it. Nearly a quarter of a century ago, I proposed data audit as a means to prevent/contain research misconduct. The audit has to protect the creative process and be non-obtrusive. A data audit evaluates the degree of correspondence of published data with the source data. The proposed data audit does not require any changes in the way researchers carry out their work.

Keywords: data audit, fraud, misconduct, NIH, NSF, ORI, RCR, research, researcher, research integrity

Currently, public and private expenditure on research and development in the United States exceeds $400 billion and employs over 4 million people. In 2009, researchers in the United States published over 200,000 journal articles in science and engineering (NSB, 2012). An enterprise of such magnitude reflects the values of our society. Understanding that individuals from all walks of life commit misconduct—priests, FBI agents, police officers, bankers, and accountants—we must recognize that scientists are vulnerable to these human failings as well.

This article is based, in part, on a presentation at "ORI at 20: Reassessing Research Integrity," a Leadership Conference sponsored by the Office of Research Integrity (ORI) and Johns Hopkins University, Royal Sonesta Harbor Court, Inner Harbor, Baltimore, Maryland, April 3–5, 2013. Address correspondence to Adil E. Shamoo, University of Maryland School of Medicine, 108 N. Greene Street, Baltimore, MD 21201, USA. E-mail: ashamoo@som.umaryland.edu


Public scrutiny of the conduct of scientific researchers remains high. Media reports of scandals of misconduct in research, as well as reports of biased research, violations of human research ethics rules, and moral controversies in research, appear on a weekly basis. Further, misconduct in research, defined by the federal government as fabrication, falsification, or plagiarism, has not subsided. Computer programs have detected thousands of cases of plagiarism in published research (iThenticate, 2012). Moreover, in the past five years there has been an alarming increase in retractions in the life sciences literature (Van Noorden, 2011; Retraction Watch, 2013). Examples are numerous. In 2011, headlines in major newspapers and in Science read "Psychologist Accused of Fraud on 'Astonishing Scale.'" The article recounts the fraud committed by the Dutch psychologist Diederik Stapel (Vogel, 2011). In 2012, similar headlines in major newspapers and in Science declared that a Harvard professor had been accused of misconduct (Carpenter, 2012). Misconduct has become a part of our culture. In 2012, I participated in a panel on National Public Radio (Conan, 2012) with two other guests regarding Jonah Lehrer, a well-known journalist at the esteemed magazine The New Yorker, who had fabricated quotes in his article about Bob Dylan. The introduction to the show's transcript stated: "He joins a long list of writers, journalists, bloggers and scientists who've crossed the line from fact into fiction."

ASSESSING PREVALENCE OF MISCONDUCT

The following is a historical summary of articles dealing with assessment of the prevalence of misconduct; I will refer to reports, reviews, and meta-analyses. Early assessments of the prevalence of misconduct occurred in the 1980s, mostly culled from testimonies in congressional hearings reviewed by Glick (1989). In brief, complaints to the National Institutes of Health (NIH) concerned about .03–.1% of those with extramural funding. Glick, using data from the U.S. Bureau of the Census for persons 18 years of age and older, noted that the percentage arrested for all types of fraud is .09%. According to the U.S. Bureau of the Census, the ratio of property crimes committed to arrests for property crimes equals 7.0. Since no similar figure exists for fraud, and since scientific fraud is typically harder to detect than property crimes, one might estimate that .2–.7% of scientists commit scientific fraud. As I will discuss, this estimate is consistent with estimates made years later. In the first serious survey, of 245 scientists, Tangney (1987) estimated that 32% of all scientists had suspected, witnessed, or encountered misconduct in research. Steneck (2002), in an Office of Research Integrity (ORI) report assessing the integrity of publicly funded research, reviewed the data on assessment of the prevalence of misconduct. The report indicates that, counting only actual cases of misconduct confirmed by the federal government, only 1 in 10,000
scientists are found guilty. However, Steneck, analyzing the data available to that date, concluded that the prevalence of scientific misconduct is 1% or higher, citing a 1994 paper by Glick that is consistent with Glick's (1989) earlier estimate. Martinson et al. (2005) published a seminal work in Nature: a survey of 3,247 NIH-funded researchers regarding many unethical practices. The survey found that the frequency of researchers who admitted to engaging in unethical behavior in the preceding three years was less than 2% for the six serious items, which included falsification and plagiarism. The other behaviors examined occurred at rates between 5% and 15%. The most disturbing finding is that 33% of researchers had engaged in at least one of the top ten misbehaviors. Fanelli (2009) published a meta-analysis of the surveys to date; his conclusion is that the frequency of misconduct is higher than the 2% of scientists who admit to falsification of research at least once, and up to 34% for other misconduct. In 2011, the number of misconduct allegations reported to ORI jumped from an average of 198 for the years 1992–2007 to 240. In the same report, ORI found that allegations of misconduct jumped from the historical average of 33% to 44% in 2011 (ORI, 2011). It is not known whether this indicates that the incidence of misconduct is rising or whether it reflects an increased willingness to report misconduct.
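Glick's extrapolation above is simple enough to make explicit. The sketch below reproduces the arithmetic; the lower-bound multiplier is my own illustrative assumption, chosen only to land inside the .2–.7% range quoted above, and is not a figure from Glick (1989).

```python
# Illustrative reconstruction of Glick's (1989) prevalence arithmetic.
# The lower-bound multiplier is an assumption made here for illustration.

reported_fraud_rate = 0.0009   # .09% of the adult population arrested for fraud (Census)
crimes_per_arrest = 7.0        # property crimes committed per arrest (Census)

# Upper bound: assume scientific fraud is under-detected to the same
# degree that property crime is under-arrested.
upper = reported_fraud_rate * crimes_per_arrest   # ~.6%

# Lower bound: a more conservative, purely illustrative multiplier.
lower = reported_fraud_rate * 2.5                 # ~.2%

print(f"Estimated prevalence of scientific fraud: {lower:.2%} to {upper:.2%}")
```

The resulting range brackets the .2–.7% estimate cited in the text.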

CURRENT METHODS TO PREVENT/CONTAIN MISCONDUCT

1. Voluntary Compliance with Ethical Norms

It appears that voluntary compliance with ethical norms has not decreased misconduct in research. To the contrary, fragmentary evidence indicates misconduct may be on the rise. Society's norms for voluntary ethical behavior have come under scrutiny many times throughout history. Society's response to repeated violation of voluntary ethical norms has been to enact laws to punish violators. Two recent examples:

1. Many clinical laboratories were owned by physicians, and especially by pathologists. This relationship created a conflict of interest that led to fraud and corruption. In addition, there were no quality standards for these laboratories, and billions of dollars were lost. The U.S. Congress enacted the Clinical Laboratory Improvement Act in 1988, which prohibited self-referral and instituted quality standards (AAPS, 2013).

2. The Enron accounting debacle led to enactment of the Sarbanes–Oxley Law of 2002 (SOX, 2013), whereby all company executives are
criminally liable for their signatures attesting to the integrity of financial records if those records prove fraudulent. In the United States, there are many other recent examples of laws enacted after members of society violated voluntary compliance with ethical norms.

2. Regulations and Standards

Current regulations and standards do not appear to have decreased misconduct in research.

3. Education and Training

Early work in the 1990s (Kalichman and Friedman, 1992; Eastwood et al., 1996; Brown and Kalichman, 1998) indicated that teaching responsible conduct of research (RCR) was not effective in improving skills in research ethics. Anderson et al. (2007) reported that formal training in RCR was ineffective in reducing unethical behavior. Macrina and his colleagues (Funk et al., 2007; Ripley et al., 2012) also found that RCR education was not effective and that mentors might be more important. Wright et al. (2008), in reviewing ORI misconduct files, found that three quarters of mentors had not reviewed source data and two thirds had not set standards for their trainees. Resnik and Dinse (2012), surveying 144 top NIH/National Science Foundation (NSF)-funded research institutions with or without medical schools, found that 37.7% of institutions with medical schools satisfied the individual training requirement, as compared to 62.7% of those without medical schools. The survey also found that only 9.7% of all faculty/staff participating in externally funded research satisfied the training requirement. More importantly, only 8.2% of all faculty/staff participating in human subject research in medical schools took the training, whereas no one took the training where the institution had no medical school. These data clearly indicate that the overall percentage of faculty/staff taking training is less than 10%. Thus, the overwhelming majority of faculty (i.e., mentors) receive no training at all.

4. Whistleblowers

Whistleblowing is the primary basis of the current system, in which investigations and audits are subsequently conducted by research institutions and ORI. No evidence exists that this method has decreased misconduct. Kornfeld (2012) studied 146 ORI reports of misconduct cases and recommends that increased whistleblowing and protection of whistleblowers would serve as a deterrent and promote early exposure of misconduct. However, state and federal laws already protect whistleblowers, and providing
additional protections may be difficult, because some of the adverse impacts suffered by whistleblowers, such as stigma, are not illegal.

5. Data Audit

A brief history of the development of the concept of data audit for research helps put the subject in context. In 1987, my colleague Dr. Zoltan Annau of Johns Hopkins University and I wrote an article in the journal Nature proposing "data audit" as one of the means to prevent misconduct in research (Shamoo and Annau, 1987). I followed this with another article (Shamoo, 1988) titled "We Need Data Audit." A decade earlier, and unbeknownst to us, Dr. J. Leslie Glick, the founder of Genex Corporation, had made similar suggestions at scientific conferences in the mid-1970s. In our original publication, we emphasized that the procedure for data audit ought to be "non-obtrusive" and conducted by an independent body to protect the free process of research.

In 1988, I organized the "First International Conference on Scientific Data Audit, Policies, and Quality Assurance." Speakers came from government, industry, and academia, and the conference was attended by about 100 people (Cassidy and Shamoo, 1989). The papers from the conference signaled the birth of the first issue of the journal Accountability in Research. The discussion on data audit emphasized the protection of the creative process of research. A year later, I edited a book titled "Principles of Research Data Audit" (Shamoo, 1989).

So, here we are in 2013, exhuming the carcass of a supposedly dead idea. In fact, data audit is alive, well, and widely used by many, under different names and forms, and for specific fields. However, data audit as currently used is not as effective as a regular financial audit, with the main exception of ORI audits. Academic institutions have resisted data audits; I will come to that point later.

Accountability is one of the primary goals in many endeavors of a society. Accountability in scientific research is no different from accountability in other fields.
Society holds researchers accountable for results that affect nearly all aspects of our lives, from national security to food, drugs, machines, and space, just to name a few. The basic element of accountability in research is integrity. Research data that lack reproducibility are shoddy, or outright fraudulent, and fail to meet the standard of accountability. This is why, a quarter of a century later, we are still searching for the means to improve the integrity of research data. Resistance to the concept of data audit from academic institutions has been fierce, marred by a lack of scientific discussion, and largely disappointing. Researchers value their freedom and do not want interference or oversight. They may also worry that a data audit could delay research. These are legitimate concerns, but the type of data audit we suggested is very sensitive to them and would have little negative effect on research, if any. A few years later, the focus of my attention shifted to other topics, despite the fact that I organized
eleven additional conferences on research ethics in which RCR and audits were part of the program.

FAILURE OF THE CURRENT SYSTEM TO PREVENT/CONTAIN MISCONDUCT

As mentioned earlier, there is a rise in misconduct, as evidenced by the increased allegations reported to ORI and the increased findings of plagiarism, while scandals continue unabated. The current methods to prevent/contain misconduct have failed for several reasons:

1. Self-reporting is not reliable, and motives vary;

2. Whistleblowing is not reliable, and the issue of motives is sometimes troubling;

3. Survey reports can exaggerate the true number of misconduct cases and are not reliable.

The methods outlined above require confirmation by indisputable evidence and solid conclusions, such as those produced by ORI audits. Moreover, these methods capture a very small portion of the incidence of misconduct. The lack of a proven method to prevent/contain misconduct undermines public trust in scientific research. Audits such as financial audits of banks and those conducted by the Internal Revenue Service (IRS) convey the following messages to the public: you may be audited; you can be selected at random, with a known probability that violators will be caught; and if irregularities are found, you will be punished. In any endeavor, members of society like to know the boundaries of violations and their punishments. Data audit would increase public trust in scientific research and deter scientists from violating ethical standards.

THE NATURE OF THE DATA AUDIT

We defined the data audit as follows:

The systematic process by which objective evidence is obtained and evaluated as to assertions about research data and their value to determine the degree of correspondence between those assertions and established or predetermined criteria which can then be communicated to interested parties. (Loeb and Shamoo, 1989, p. 28)

In brief, a data audit consists of the following elements:


a. A data audit should be a "systematic process" using "methodical examination and review." The methodology should be known to all participants;

b. A data audit should involve objective evidence to evaluate research data and their derivatives;

c. A data audit should determine the degree of correspondence between assertions based on the data (e.g., results, conclusions, tables, figures) and the original data, using prior criteria;

d. The data audit should be applied to a randomly selected sample of qualifying research;

e. "For cause" audits may be conducted if there are suspected problems with the research;

f. Auditors should communicate the audit outcome to affected parties.

In short, research data have to be "quantifiable and verifiable." Utilizing the definition of data audit above, I converted the theoretical definition into a step-by-step practical method for conducting an audit of laboratory data (Shamoo, 1989). Dr. Glick applied the method we developed and audited a published paper authored by me (Glick and Shamoo, 1991), with full authority to publish anything he found. The audit consisted of the following steps: (i) selection of published data; (ii) identification of the corresponding laboratory notebook data; (iii) verification of the extent to which the published data agree with the corresponding laboratory notebook data, with respect to results and experimental conditions; and (iv) verification of whether the data thus examined appear sufficient to justify the published conclusions. The audit included all of the calculations necessary to verify the data in the published paper, such as those in figures and tables. Moreover, the published audit report presented side-by-side comparisons of the originally published data with the newly calculated data.

In the Institute of Medicine (IOM, 1989) report from a panel titled "The Responsible Conduct of Research in the Health Sciences," most members of the panel opposed regulatory actions and audits modeled on the Food and Drug Administration (FDA)'s drug approval process, Good Laboratory Practices (GLP), and the like. As a matter of fact, I agree with them. Our suggestion for data audit predated the panel's conclusion, and in none of the articles or conferences that I organized was there any suggestion of emulating the aforementioned agencies' audits. The audit I proposed in 1989 focused on the degree of correspondence of the published data with the original source data. This type of audit can be conducted regardless of the method employed in obtaining or recording the data.
Moreover, I pointed out that the FDA's audit focused more on the process than on the data. It would have been helpful if the panel had cast a wider net to capture all of the discussion on the subject.
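The four audit steps above amount to a correspondence check between published values and their source (notebook) values. The sketch below is a minimal illustration of that idea; the data structures, item labels, and the 1% tolerance are my own assumptions, not part of the published audit protocol.

```python
# Illustrative sketch of a published-vs-source correspondence check
# (steps i-iv above). Labels, values, and tolerance are hypothetical.

def audit_correspondence(published, notebook, rel_tol=0.01):
    """Compare published values against source (notebook) values.

    `published` and `notebook` map item labels (e.g., "Table 1, row 3")
    to numeric values. Returns a report of matches and discrepancies.
    """
    report = {"matched": [], "discrepant": [], "missing_source": []}
    for label, pub_value in published.items():
        if label not in notebook:
            # Step (ii) fails: no corresponding notebook entry exists.
            report["missing_source"].append(label)
            continue
        src_value = notebook[label]
        # Step (iii): degree of correspondence within a stated tolerance.
        if abs(pub_value - src_value) <= rel_tol * abs(src_value):
            report["matched"].append(label)
        else:
            report["discrepant"].append((label, pub_value, src_value))
    return report

# Hypothetical example: two published items agree with the notebook, one does not.
published = {"Fig 2, Km": 4.1, "Fig 2, Vmax": 118.0, "Table 1, n": 6}
notebook  = {"Fig 2, Km": 4.08, "Fig 2, Vmax": 97.0, "Table 1, n": 6}
report = audit_correspondence(published, notebook)
```

A real audit would also, per step (iv), record whether the matched data suffice to support the published conclusions; that judgment cannot be reduced to a numeric comparison.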


Cost/Benefit of a Data Audit

Glick (1989) estimated that 7% of researchers are engaged in research of questionable value due to carelessness or fraud. Furthermore, he assumed that a data audit could cut the incidence of questionable research in half. Using a standard business model, he concluded that each dollar spent on audits would, 20 years later, return a benefit to society equivalent to $25–60.
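Treating Glick's figure as a 20-year return on each audit dollar, the implied annual rate of return can be computed directly. This calculation is my own illustration of the claim, not part of Glick's (1989) model.

```python
# Implied compound annual return if $1 of audit cost yields $25-60
# of societal benefit after 20 years (Glick, 1989).

benefit_low, benefit_high, years = 25.0, 60.0, 20

rate_low = benefit_low ** (1 / years) - 1    # about 17.5% per year
rate_high = benefit_high ** (1 / years) - 1  # about 22.7% per year

print(f"Implied annual return: {rate_low:.1%} to {rate_high:.1%}")
```

Even the low end of that range would be an unusually good return on investment, which is the force of Glick's argument.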

The Current Use of the Data Audit

ORI conducts an extensive data audit once a case reaches it; its method depends primarily on whistleblowers or self-reporting. The Food and Drug Administration (FDA) has oversight authority over research conducted with human subjects for the purpose of marketing a licensed drug, device, or biologic to the public. It is estimated that the number of human subjects in research under the FDA's oversight is over five million per year (Shamoo, 2001). But the number of human subjects in research in general, which includes all federally funded research under the auspices of the Office for Human Research Protections (OHRP) as well as privately funded research, could exceed 25 million. Only about half of the 25 million are enrolled in studies of drugs and chemical interventions; the rest take part in surveys. The FDA (2013) conducts two types of audits. One type is a "for cause" audit, triggered when the FDA receives information regarding misconduct that jeopardizes the safety of subjects and/or information specifically indicating violation of a federal regulation governing the conduct of a clinical trial (Shamoo, 2001). The FDA also conducts random audits of small samples, as well as audits of trials of high-risk drugs. However, the sample of clinical trials audited is not selected to provide a statistically valid degree of confidence in the overall integrity of the data. It is important to note that most FDA audits are concerned with auditing the procedures followed by the investigator; the assumption is that if all procedures are followed, then integrity is verified. The FDA's audits are also concerned with compliance with regulations, such as requirements for informed consent and many other facets of the regulations. In some cases, the FDA resorts to an actual data audit. Many major research institutions conduct internal data audits of research with human subjects in a fashion similar to those conducted by OHRP and the FDA.
In other words, these audits are conducted primarily to ensure regulatory compliance and the safety of human subjects, and in part to preempt agency audits. In rare circumstances, research institutions conduct for-cause audits when there is a complaint from a credible person. The University of Miami conducts audits of "three kinds—random, directed, and focused (focusing on a particular aspect or aspects of a study). We tend to perform more audits on the highest risk studies (IND/IDE), but certainly we audit other studies
as well." Also, their "audits examine source data and compare them with the research records" (Bixby, 2013). A decade earlier, when scandals hit research with human subjects and the oversight agency suspended research at several universities, academic institutions reacted with a dramatic increase in funding for and oversight of human subject research. This was a positive reaction. However, scandals and severe measures should not be the only drivers of improvement in research integrity.

CONCLUSION

The current methods of preventing and containing misconduct in research are at best marginally effective and at worst have no effect. A data audit should be considered anew as the ultimate deterrent to those committing research misconduct or producing questionable research results. The current methods of carrying out research, from inception to publication, do not have to change in order to subject the data to an audit. As a first step, we can ask research institutions to volunteer to subject their research data to an audit.

ACKNOWLEDGMENT

I thank Dr. J. Leslie Glick for his comments on an earlier version of this manuscript.

REFERENCES

AAPS. (2013). Available at http://www.aapsonline.org/msas/clia.php. Last accessed March 20, 2013.
Anderson, M. S., Horn, A. S., Risbey, K. R., Ronning, E. A., De Vries, R. D., and Martinson, B. C. (2007). What Do Mentoring and Training in the Responsible Conduct of Research Have to Do with Scientists' Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Academic Medicine 82: 853–860.
Bixby, J. L. (2013). Personal communication through e-mails.
Brown, S., and Kalichman, M. W. (1998). Effects of training in the responsible conduct of research: A survey of graduate students in experimental sciences. Science and Engineering Ethics 4: 487–498.
Carpenter, S. (2012). Harvard Psychology Researcher Committed Fraud, U.S. Investigation Concludes. Available at http://news.sciencemag.org/scienceinsider/2012/09/harvard-psychology-researcher-co.html. Last accessed March 15, 2013.
Cassidy, M. M., and Shamoo, A. E. (1989). First international conference on scientific data audit, policies and quality assurance. Accountability in Research 1: 1–3.
Conan, N., Blair, J., O'Rourke, M., and Shamoo, A. (2012). Who Makes Stuff Up, And Why They Do It, a Transcript. National Public Radio. Available
at http://www.npr.org/2012/07/31/157663733/who-makes-stuff-up-and-why-they-do-it. Last accessed March 20, 2013.
Eastwood, S., Derish, P., Leash, E., and Ordway, S. (1996). Ethical issues in biomedical research: Perceptions and practices of postdoctoral research fellows responding to a survey. Science and Engineering Ethics 2(1): 89–114.
Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738.
Food and Drug Administration (FDA). (2013). Available at http://www.fda.gov/AboutFDA/Transparency/Basics/ucm194888.htm. Last accessed March 15, 2013.
Funk, C. L., Barrett, K. A., and Macrina, F. L. (2007). Authorship and publications: Evaluation of the effect of responsible conduct of research instruction to postdoctoral trainees. Accountability in Research 14: 269–305.
Glick, J. L. (1989). On the Potential Cost Effectiveness of Scientific Audit. Accountability in Research 1: 77–83.
Glick, J. L., and Shamoo, A. E. (1991). Auditing Biochemical Research Data: A Case Study. Accountability in Research 1: 223–243.
Institute of Medicine (IOM). (1989). Report of the Panel on: The Responsible Conduct of Research in the Health Sciences. Washington, D.C.: National Academy Press.
iThenticate. (2012). Available at http://www.ithenticate.com/Portals/92785/docs/plagiarism-survey-results-120412.pdf. Last accessed March 15, 2013.
Kalichman, M. W., and Friedman, P. J. (1992). A pilot study of biomedical trainees' perceptions concerning research ethics. Academic Medicine 67(11): 769–775.
Kornfeld, D. S. (2012). Perspective: Research Misconduct: The Search for a Remedy. Academic Medicine 87(7): 877–882.
Loeb, S. E., and Shamoo, A. E. (1989). Data Audit: Its Place in Auditing. Accountability in Research 1: 23–33.
Martinson, B. C., Anderson, M. S., and de Vries, R. (2005). Scientists Behaving Badly. Nature 435: 737–738.
National Science Board (NSB). (2012). Science and Engineering Indicators 2012. Arlington, VA: National Science Foundation (NSB 12-01). Available at http://www.nsf.gov/statistics/seind12/pdf/seind12.pdf. Last accessed March 21, 2013.
Office of Research Integrity (ORI). (2011). Annual Report. Available at http://ori.dhhs.gov/images/ddblock/ori_annual_report_2011.pdf. Last accessed March 21, 2013.
Resnik, D. B., and Dinse, G. E. (2012). Do U.S. Research Institutions Meet or Exceed Federal Mandates for Instruction in Responsible Conduct of Research? A National Survey. Academic Medicine 87(9): 1–6.
Retraction Watch. (2013). Available at http://retractionwatch.wordpress.com/. Last accessed March 27, 2013.
Ripley, E., Markowitz, M., Nichols-Casebolt, A., Williams, L., and Macrina, F. (2012). Guiding the Next Generation of NIH Investigators in Responsible Conduct of Research: The Role of the Mentor. Accountability in Research 19: 209–219.
Sarbanes–Oxley Law (SOX). (2013). Available at http://www.sox-online.com/basics.html. Last accessed March 20, 2013.
Shamoo, A. E., and Annau, Z. (1987). Ensuring Scientific Integrity. Nature 327: 550.

Shamoo, A. E. (1988). We Need Data Audit. AAAS Observer, November 4, 4.


Shamoo, A. E. (1989). Auditing laboratory based data. In: Principles of Research Data Audit. New York: Gordon and Breach, pp. 79–93.
Shamoo, A. E. (2001). Adverse Events Reporting—the Tip of an Iceberg. Accountability in Research 8: 197–218.
Steneck, N. (2002). Assessing the Integrity of Publicly Supported Research. In: Proceedings from the ORI Conference on Research Integrity. Washington, DC, pp. 1–16.
Tangney, J. P. (1987). Fraud Will Out–or Will It? New Scientist, August 6, 62–63.
Van Noorden, R. (2011). The Trouble with Retractions: A surge in withdrawn papers highlights weaknesses in the system of handling them. Nature 478: 26–28.
Vogel, G. (2011). Psychologist Accused of Fraud on 'Astonishing Scale.' Science 334: 579.
Wright, D. E., Titus, S. L., and Cornelison, J. B. (2008). Mentoring and Research Misconduct: An Analysis of Research Mentoring in Closed ORI Cases. Science and Engineering Ethics 14: 323–336.
